
Discussion with Jordi Vallverdú #1

Michael Levin and philosopher Jordi Vallverdú discuss how information and cognitive patterns might be transferred from embodied human minds into large language models, drawing analogies from biology and considering implications for science and human limits.



Show Notes

This is a discussion between Jordi Vallverdú (https://scholar.google.com/citations?user=Y_Q8AQkAAAAJ&hl=en) and me on the topic of transferring information from the minds of embodied humans into LLM AIs, and on natural, biological analogs of this process (moving cognitive patterns across embodiments).

The paper Jordi was referring to: https://journals.bilpubgroup.com/index.php/fls/article/view/8060

CHAPTERS:

(00:00) Embodiment and Language Models

(20:17) Caterpillars, Memories, and LLMs

(27:23) AI, Science, Human Limits

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Jordi Vallverdú: The point is that my paper is a little bit tricky, but I just wanted to discuss something: everybody is talking about embodiment, and I am pro-embodied cognition. I am completely committed to working on embodiment and on the physical substrates that justify and explain how cognitive systems work. This is beyond any doubt. But I was thinking: could we conceive of a way of thinking that is completely disembodied? The secondary question was whether generative AI is really disembodied just because it uses a statistical approach. When I started to think about that: yes, at first sight it is disembodied. There is no body that captures information according to physical constraints and physical necessities. In us, there is machinery in the brain or in the nervous system; AI and LLMs don't work that way. They use different strategies: statistics, a lot of statistics. At the same time, they're working on something very human: language. My idea was: is it really disembodied, or could we think that there is an embodied structure inserted into their language? These vast linguistic datasets are at some point based on embodiment. If you follow that thread, even if you apply a statistical approach to predicting the next token, that prediction is encapsulated in some understanding of reality and of how to process information. At the beginning of LLMs, someone said, "these are stochastic parrots, idiotic machines, just statistics." I think it's not so simple. They are not just statistical, nor do they lack any deep understanding of reality simply because they don't have a body. There is a great limitation in how they can acquire new datasets and how they combine information. For many human processes we still don't have a specific answer. For example, regarding human thinking and creativity: if you check all the studies about creativity, the only conclusion is that there is no pattern. We don't have a method for understanding how to be creative. Some people work in the evening; some need to have sex beforehand; others need to abstain from sexual relationships to be more concentrated. If you check a lot of people described as creative, we cannot find a pattern. I've been thinking a lot about Ramanujan, the great Indian mathematician. He was sleeping; he dreamed. At some point he was able to justify mathematically the proofs of his ideas, but something happened in his mind that is beyond our knowledge.

[04:03] Jordi Vallverdú: This is something that is embodied; it must be the result of biochemical processes combined with heuristics. This is obvious. At the same time, we don't have any idea about the real mechanism that explains it. So, from my perspective, an absolute appeal to embodiment does not explain the acquisition of new knowledge. My idea was: what if we consider LLMs as second-hand approaches to human cognition? The difference is that we are not talking about just one author, but millions of authors compiled into the system. At some point, the main reasoning strategies will follow embodied reasoning. The products of LLMs, even if they are produced by statistical analysis, are based on the specific cognitive embodiment of human beings. For that reason, I tried in that paper to discuss these things. This thinking is not really disembodied, because even when these models work at a multimodal level, and even if plenty of statistical tools are applied to obtain results, they are based on how we need to process and how we do process information. Unless you are really exploiting something of a different kind. For example, those who were cheating in recent months by introducing hidden prompts into LLMs: a human who cannot read white letters on a white background cannot process them, but the model does. This is just a formal aspect. But even at that level, the systems are following human cognitive ways of thinking. For that reason, I thought it was relevant to think about how this embodiment is placed there. At a second level, I have a different paper in that direction. I'm thinking about what would happen if we could design neurodiverse AI. AI is not based on real neural networks; it is only an inspiration, from the beginning until now. If we are using huge human datasets and most humans are neurotypical, what if we design these AIs following different patterns of attention? For example, being able to select different parafunctional properties that some people have. Why not change the weighting of attention? Why not change the weighting of multimodal understanding? It's just a different way of obtaining information from data. Why not design new AIs based on different cognitive or paracognitive mechanisms inspired by these things? Not so long ago, I wrote a paper about hormonal computing.

[08:07] Jordi Vallverdú: I think that the starting point for scientists, even a social scientist or humanist like me, is always reality. If you are talking about cognition, about existing cognitive systems, they are the best repositories for extracting strategies or heuristics. At the same time, AI gives you the freedom of mathematics. You can do whatever you want inside the limits of formalization and of the computing systems. Everybody is talking about, and building on, standard classical computing, but we are at the beginning of a new revolution in quantum computing. There are plenty of relevant things that connect quantum computing possibilities with cognitive implications, or with simulations or emulations inside these new computers, that change how these computers can behave or process information. I think that the physical background is fundamental and we must be empirical. But at the same time, we are talking about information, and humans are not the only systems that deal with it. People take for granted that the only thinking process and the only understanding of reality is the human understanding, as if we were not aware of all the scientific revolutions about what reality is. For example, my last five years of research have been 60% devoted to causal analysis in artificial intelligence. I started by devoting two years just to thinking about causality, but not about all these typical debates, because philosophers are really boring all the time: you can discuss endlessly what it means for one thing to have a causal relationship with another. I try to set aside all my philosophical training, not my philosophical spirit, but my philosophical training. I am not interested in debating Hume or the like. No, just: what is a causal relationship? At some point I realized that causal relationships are nothing without an agent that finds a pattern. In order to find a pattern, you need to pre-decide, even unconsciously, plenty of things. For example, time, which is the duration of your observation. At some point I was reading and thinking: I had been talking about this without taking it into account. In order to understand causality, you need to decide the temporal length of your observations. The chemical bonds of water exist on the order of femtoseconds; water is changing all the time. It's not a continuous connection of chemical parts; there is a continuous process of breaking and bonding. From a macro scale, we could consider water stable, but it's not stable. So first of all, it's time. Second, the mechanistic scale of your observation: you can talk about the quantum level, the chemical, biochemical, organismic, or social levels of events.

[12:10] Jordi Vallverdú: There is not just cause and effect. Today there are plenty of people talking about counterfactual thinking. It's fantastic. It's necessary. I'm working on those things too. For AI, using logical patterns and logical rules is necessary, and we can do that. But at the same time, if you try to design from scratch something that can construct counterfactuals, you need something else. You need an agent that is able to think from a systematic perspective. It's not cheap. There is a lot of effort in designing some kind of entity that is able to map reality, to design a strategy, and then to consider alternative possibilities of that reality. This is a heuristic, a very specific heuristic. It's not automatically generated; you need to put effort into that. Like you, I'm very interested in basal cognition, in minimal cognitive systems. We shared a paper about slime mold cognition as co-authors. I'm not losing the connection with these physical constraints. These physical constraints are not optional: we are always grounded in and constrained by these physical conditions. But at the same time, some meta-learning possibilities in relation to AI are obvious. We can change the rules of these things, but we should never lose the general, high-level view that is always necessary. At some point you stop and take that high-level view of your research and your thoughts, and you discover that you need to take plenty of decisions in order to know. All living entities do these things. Of course, the basic levels are mostly constrained by physical and biological constraints. Later you can add more levels of complexity, and at some point you can add linguistic and symbolic layers. And not only symbolic ones; for example, Ramanujan used different cognitive approaches: visual, topological approaches to thinking about mathematics. It was not a conscious approach to thinking about sets of data. It was something that his mind tried to do. For that reason, he was not able to connect some of his final theorems or final mathematical statements with the whole set of rules that would explain them. These are intuitions. If you check most great scientific changes, and you check the sources and the real deep thoughts of the people involved, there is always intuition. So why not try to investigate how intuition is generated?

[16:13] Jordi Vallverdú: In the case of human beings, there is a very complex biochemical soup combined with social aspects that shapes this set of possibilities. For the rest of cognitive systems, the level of these complex interactions perhaps changes. In the case of AI, we are playing with a different set of rules. Again, I think that because they are based on us and on how we think, these systems are embodied. For that reason, I think that until we find, if we ever find, real agents, completely different kinds of intelligent entities that think in a different way, we will not be aware of how differently things can be thought. For example, I'm sure that classic Greek mathematicians would be really outraged, even horrified, if they saw the contemporary practice of mathematics, with its many computational proofs. It's crazy. We don't have a complete formal analysis; we only have a computational, case-by-case approach to the same kind of theoretical situation, the four color theorem, for example. These things change from time to time and from discipline to discipline. At the same time, if you look from a transdisciplinary perspective, at this historical moment we have the real possibility of checking different ways of dealing with data that even go beyond our own ways of thinking. LLMs are just thinking like us at this moment, with some differences, because they are based on us; they are not dealing with huge, uncontrolled data sets drawn directly from the world. There is no from-zero approach to information in any kind of AI today, only very controlled systems like the DeepMind ones: they train their model with the rules of Go or chess, but that's a very reduced universe of possibilities. In humans and in the universe, there are so many possibilities that we have not tried this with. This is a real difference between AI and us today: perhaps we are preparing the path for a new way of doing things that can be called intelligence, but I think that we will not be able to understand these new sets of data. At the same time, it's not tragic. If we could talk with an ant, we could explain everything, but the ant, hypothetically, could not take in all these kinds of things, because it's beyond its possibilities of thinking about reality. There are plenty of things about reality that escape our perspective too. For that reason, there is still room for alternative views in plenty of fields. But it doesn't justify astrology or things like that. I can understand that we are living at the beginning of the 21st century of this Christian era, if you use this calendar, and most people are still believers in religions. Most humans defend their languages because they think they are the best ways of talking about reality; they defend their cultural values as the best ones, and they defend their places as the most beautiful ones just because they were born there and they love these patterns. This is based on how our brains function, and our brain functioning is not the best way of dealing with information. Perhaps the real understanding of reality will not be performed by humans. For that reason, I'm so interested in the computer sciences, because I think they are the future of meaning, a different kind of meaning.

[20:17] Michael Levin: There are so many interesting things here. One thing, right at the beginning: you made this point that the language models are receiving basically an upload of embodied information, information that was generated in an embodied fashion. That's interesting because there are biological cases of this. In fact, it's pretty universal that you end up with memories that were generated by a different embodied scenario. One of the most extreme ones we can start thinking about is this caterpillar-butterfly business. We know now that if you train the caterpillar, the larva, the butterfly or the moth inherits certain memories. From the perspective of the butterfly, these memories come from a very alien embodiment. They come from this thing that crawled around in two dimensions, and it operated as a soft body, which is a very different control problem than a hard body, because you can't push on anything. It's very different to move around. It had preferences that were completely different. You have these memories. The original memory is about "crawl over here and get some leaves." Well, you don't care about leaves. You don't want leaves. You want nectar as a butterfly. During your process of formation in metamorphosis, you've received this download from some other embodiment that was much less capable in some ways than you are. But somehow the biology takes care of the mapping. You've generalized and remapped this memory so you can interpret it in some reasonable way and do something with it. It's a very biological example of what we've done now with these language models. We've taken information from one embodiment and we've squeezed it into a different kind of knowledge system. With the standard language models, not the robotic ones but the standard ones that sit on servers, we think they're not embodied because we don't see them moving in this three-dimensional world. But there are many spaces in which to have embodiments. Biology moves around in physiological state space and anatomical state space and metabolic space. I have no idea what spaces these language models are really traversing. We as humans are pretty bad at noticing these things. When I point out that cells and tissues are actively navigating other problem spaces that we can't visualize, high-dimensional problem spaces, people think that's really weird, and it's not easy for us to visualize this. Your point is interesting: it's not as radically different as people like to think. There are biological examples of moving generalized information across radically different embodiments. I tried to argue in this memories paper from last year that in some sense all of us are in this position, because as cognitive agents we never have access to the past. What we have access to are the memory engrams: the molecules or the structures or the physiological patterns of memory that are formed in our body.

[23:50] Michael Levin: And at any given point, you have to actively reconstruct your memories. You have to ask: what does this mean? What do these implemented memories actually mean? All of us have uploads of memories from another being: past you, past versions of you. Maybe it was very recent, so you don't need to do too much remapping because it's just from 5 minutes ago. Or maybe it's from your childhood, when you really were a different creature to a large extent. Your brain was different, your physiology was different, the things you cared about were different, the degree of agency you had was different. All these things were different. We're haunted by these memories of a different being, to different degrees. And so I think that's what we're looking at here: we've put this whole thing on steroids. The butterfly-caterpillar thing was bad enough, but now we've said we're going to take memories from many individuals and we're going to stick them into this new cognitive system that is very, very different from the architecture they were in before. But apparently that's okay. That's one of the amazing things to me: how re-mappable these things are. There's some invariant, right? There's some symmetry, such that you can take what a caterpillar was doing, stick it into a butterfly body, and it works. And we have this in planaria: when you train them and you cut their heads off and they grow a new brain, eventually the new brain gets imprinted with the learned information. That information, again, has to move. It has to move from the tail into the brain somehow. It has to be imprinted onto the brain. So the fact that all this stuff can move across radically different embodiments, and especially from humans, from multiple humans, to a single language model architecture, and retain not the details but the saliency, some of these relationships, so that it makes sense, so that we can then still have a conversation with it. I think it's kind of amazing, and I think there are some very deep invariances here that persist. I often also try to think about it from the perspective of the memory itself, from the pattern as an agent. I exist in this ecosystem of a caterpillar, and I have some power because I can control the behavior of this robot a little bit. Sometimes I'm accessed and sometimes I'm forgotten, but mostly I can persist. What do I need to do to persist when there's a big change coming? From the perspective of the caterpillar, there's this crazy transhuman, trans-caterpillarist event happening where you're going to turn into this higher-dimensional being. You're going to fly, for God's sake, and do all this stuff. And so, as a memory within the mind of the caterpillar, what do I need to do to persist into the future? How do I make sure I'm not left behind? What kind of niche construction can I do as a memory pattern to persist into the future? And not persist the way I was, because the exact memory of the caterpillar isn't of any use to the butterfly; it must update and change so that instead of leaves, now it's about nectar and whatever. I think it's very interesting.

[27:23] Jordi Vallverdú: I used this example of the caterpillar in one of my talks two years ago, talking about cross-embodiment, about how some systems can go from one embodiment to another. I think it's a very relevant problem for all the roboticists trying to build general models that can be downloaded into different versions of robots and adapt automatically. But it's a natural thing that some entities can do. Even human beings: you have been saying that we are changing. In fact, we know that there is synaptic pruning during the teenage period, in which our brain is cutting and rewiring everything. For that reason, there are so many different behaviors among teenagers. These are structures. The important thing is that these things are happening: cognitive systems are changing, evolving, and doing many interesting things in order to deal with data. What we are doing with LLMs, with current generative AI, and even with new ways of doing AI is very similar. We can talk with these systems because the systems are thinking inside the same set of values that make sense to us: how information is organized; meaning, not only from a practical perspective of interacting, but from an epistemic perspective. We try to find some kind of epistemic answers. When I ask my ChatGPT version, "Please find a theorem," or, "inside this logical discussion, find some missing point; find a counterargument for this specific claim that does not violate these theorems," I am doing these things all the time. I am expecting a very specific symbolic answer. Even if the system is statistical, there is a logical substructure that must be followed in order to satisfy me, because I need to think in these terms. I told you some minutes ago that, for some ancient mathematicians, contemporary mathematics would be crazy; it would not be mathematics at all. The same for physics: for the pre-19th-century expert in physics, there is no place for statistics. We cannot have statistical approaches to reality, because there is a deterministic perspective: perhaps we don't know how to explain something, but it must be completely deterministic. They didn't think that nature behaves statistically. They could not accept that perhaps something might or might not happen; perhaps we don't know the outcome of an event because we don't have enough data, but it must be deterministic. It was a completely different scenario at the end of the 19th century: new statistical advances, new ways of approaching physics, the beginning of Bayesian statistics, very controversial, of course. And then, with the advances of modern physics and the quantum analysis of this new physics, they realized that nature can be non-deterministic. So the problem becomes a different one: how can we explain that at the bottom it's non-deterministic, while at the macro scale it's deterministic? This is a completely different problem. We are now living in a universe that we know is beyond our control and beyond our rational approaches. We know that we can understand, we can make approximations, but our cognitive structure is not designed to deal with this kind of data. We need different approaches to this vast amount of data in order to make new advances. Many of the hottest scientific problems today are hard because of the complexity of the problem, the scale of the problem, the impossibility of solving them just with groups of people working on something. We need the external help of machines.
I'm very happy to be living through this AI revolution, because I thought I would never get to enjoy it. Now, even as a philosopher, I love having a system to which I can address plenty of questions. Of course, I'm not asking it to "explain Nietzsche"; that's not what these systems are for. It's all these sets of data: "please apply this model, extract this, refine this." I ask plenty of very technical things, the kind that some people invested 40 years of their lives trying to obtain: information from cultural data, from natural data.

[31:49] Jordi Vallverdú: Today you can solve very complex things in hours or weeks. It's fantastic. It's another way of thinking, an alternative way of thinking. It's not plagiarism. It's not laziness. It's the possibility of thinking being upgraded. We are using these things in different scenarios. Modern robotic surgery platforms allow surgeons to be much more precise than any unaided human. We are not saying that humans cannot do surgeries; of course they can, and very precisely. But there are physical limitations that cannot be overcome even by the greatest surgeon in the world, because the hand has a specific size. It's like military AI. It's the future of military research, because when you try to be effective and you want to deliver a bomb or do such things, there are embodiment constraints, for example, for flying. Human pilots, even if trained, have a body, and when their bodies are under high G-forces, they lose consciousness, even if they have been trained, have performed specific exercises, and are equipped with specific gear to prevent that. From a practical perspective, if you have the possibility of flying at the maximum physical performance of a plane, whether it is flown by a human or a machine, it's better to use the machine, because then the threshold for effective operation of that plane is set by its physical constraints: if you make a very sharp change of direction, the plane will break. The only limitations are physical ones, not the embodied constraints of the pilot. I think that for cognitive and scientific things, the same happens. Our minds, even if we translate information into logical forms, into visual maps, at some point hit a scale that is beyond our possibilities. So why not train these systems to take a second step toward understanding more things? I think it's something that makes sense. We are trying to optimize everything. Why not optimize philosophy, physics, or chemistry using these kinds of systems? But thinking, not just making general predictions about huge sets of data; that was, I think, the AI of the 90s. We are at the beginning of a revolution in which we are applying new AI ways of doing things. We are even discovering new heuristics. Really, new heuristics. From my perspective, the classic way of doing things, classic cognition, classic symbolic thinking, is over. We need to think in a different way, trying to find new physical combinations. You are making new kinds of minimal entities: organoids, living robots; it doesn't matter which. I think that we can do these kinds of things, and we can carry out this kind of revolution at all kinds of scales and in all kinds of applications. From my perspective, I say these things because there is a huge reluctance about AI today. There is hype and, at the same time, a lot of fear. Some of these fears really fail to honor the greatness of the human mind and what it is trying to do.

[36:16] Jordi Vallverdú: And it's not an attack on human thinking. Human thinking will continue. For example, I love chess. I'm sure that you have also been interested in chess; everybody in our fields has some kind of interest in this game. When computers started, they completely changed the way of playing chess. At this moment, the best AI chess players can beat the best human chess players; this is beyond any kind of doubt. But does this imply the end of chess? No. Is the chess that machines play interesting for humans? No, because it's a different way of playing. So perhaps there are different ways of thinking about reality. Again, we are in the 21st century and there are plenty of tragic wars, and some of these wars are waged by countries with huge scientific skills. Perhaps it's possible to combine these things: human cognition is not so deterministic and not so rational as we think, but is based on plenty of things, or flavors, that explain how we behave. I think that because of this diversity we are creative. It's my intuition that our creativity is related to our messy design: some person says, "Well, I'm not 100% sure, but it makes sense to me, so I will devote the next years of my life to it." For example, my first research field as a professional at the university level was scientific controversies. I was fascinated by what happens during scientific controversies, how things are shaped. When you analyze the biggest scientific controversies across history, not only Western but also in other cultures, you always find the same thing: at some point somebody does not have the real amount of data to defend something, but even in that situation says, "I think it's relevant, so I will invest my future in this thing that is completely crazy." For example, Copernicus was crazy from a rational perspective. Einstein, in the beginning, was crazy. Later, how much effort did it take, by many scientists, to prove that those ideas were true? At the beginning, everything is against your meta-model, and if it's a meta-model, it affects the whole of reality. The normal thing would be that nobody tries to change ideas. But the real thing is that we do try to change ideas, even before we know that they are true. We are guessing, we are checking, we are doing these things. So why not do this here too? For example, at the beginning of generative AI, plenty of people were saying that these systems suffer from plenty of hallucinations. Humans do too. I'm not defending this kind of hallucination, but there are humans who say that the Virgin has appeared to them, and there's no problem at all.

[40:42] Jordi Vallverdú: I'm not defending this, because I think it's a mistake, and it's not clear why, where, and how it could be solved, or whether solving it would break other things. The same goes for humans. Humans are not rational; they are completely opportunistic. I remember a letter of Albert Einstein in which he says that he is an epistemic opportunist, and that people are opportunists. You try to make your best guess about the data according to your knowledge, and then you try to solve the problem using that set of data. So we are not as rational as we thought, and our products are not as rational as we thought. And I think that, just as our bodies are a problem for designing the most efficient planes, our minds are a problem for designing the most efficient, rational, and scientific tools, because we insert our own conditioning into these tools, which perhaps is not the best way of doing science. Why should we not try to change these things? Why not insert other kinds of thinking into our models for dealing with reality? Of course, never losing track of, and never losing the connection with, reality. I'm not talking about magic science, with magic things that we cannot check. In the end, everything is related to physical systems that, for random reasons, are organized and trying to guess the next step. We are doing this at a huge scale, but even at that scale we have models about the end of everything. I was even invited by Indiana Matzke to explore this; it was very philosophical, and some might say it is nonsensical. One of my intuitions is that AI, at some point, if it can achieve a general map of reality and is interested in understanding everything, will try to understand the next step, like we do. And there is something beyond any kind of control: the universe is like a huge trap. All cosmological models end at the same point, the complete destruction of information, either because of collapse or because of zero energy. All models do so at some temporal scale. So everything will disappear. All information will disappear. This is a huge cage; it's like a huge prison. From this perspective, this gives meaning. Death gives meaning to humans; it organizes their activities and changes everything, because according to that physical constraint they change how they live, or at least try to, if they can. Most humans cannot. So I think that a huge cognitive entity would try to understand the next step, like we do. The next step at a cosmological level is that we are here for a time and there is no escape, no possibility of changing that. Even if you tried to collapse the universe, if you could, the result would be the same: explosion, implosion, and starting again. Perhaps everybody is scared about what AI will do, that AI will kill us because we are a menace. And perhaps it will say, "Whoa, this animal is so strange. I should keep that animal close to me in some way, to try to understand how it feels reality."

[45:08] Jordi Vallverdú: This is another thing: how we feel reality. It's as important as how we understand reality. I think there is a connection between feeling and thinking. If you check my curriculum, you will see that I have done a lot of work on emotions, but I'm not interested in the classical debates (Ekman, universal emotions, and so on). I'm interested in how a cognitive system can feel reality. This, for me, is fascinating. It's like the typical qualia problem. I think it's related to information, and it connects with causality. Any informational system feels the world in a specific way, and that is based on the physical design, the physical constraints, that make it possible for something to be information and something else not to be. For example, when I think about quantum paradoxes, I cannot physically feel quantum paradoxes, but I can understand the complexities, and I can feel that information in my understanding of reality. I think there's a connection between feeling and thinking in that way. It's not just logic. We are not Mr. Spock, with just a logical understanding of reality, connecting dots. Even if you are autistic or neurodivergent, you feel reality in a different way than most neurotypicals do; even in that case, you are constrained by your embodiment. So it makes sense. We still have so many things to learn. The good point is that there is a natural connection between classical biological studies of reality, biotechnology, AI, and computational science. Everything is merged by inspiration. It's the most successful way of doing things, but at the same time we are free to check new ways of doing things: what could happen if this, that, and that? The only problem I see is that much of the research and the huge investment in generative AI is made by private companies interested only in revenue, not in AI itself, or knowledge, or improving human lives. So perhaps, at some point, the ones who, at a small scale, try to explore new paths connecting classical biological thinking, innovative computation, and new AI are people like us, from different fields, trying to connect the dots and explore new possibilities, because these companies will never try to understand the meaning of life like we do. This is very relevant. For me, it's the first scientific revolution that is happening inside private companies. Of course, 19th-century chemistry was related to the chemical industries, and there was a huge connection between BASF, German industries, and the universities, but it was carried out in a different way. Today that is not the case. Most models are closed; even when they say they are open, they're not really open. If you want to install a full, very complex generative AI model, you need a very good machine and a lot of resources. It's not as easy as installing the latest open model on your computer. So it's like when people say "Linux."

