
Discussion with Ricard Solé

A 45-minute conversation with synthetic biologist Ricard Solé exploring morphospace, bioelectricity, synthetic collectives, limits of bottom-up engineering, major transitions in machines, and the evolution of language, communication, patterns, and memory.



Show Notes

This is a ~45-minute discussion with Ricard Solé (http://complex.upf.edu/ricard-sol%C3%A9), a synthetic biologist whose work spans evolution, language, and many other fascinating areas.

CHAPTERS:

(00:00) Goodwin, theory and evolution

(07:07) Morphospace, shape and bioelectricity

(15:21) Synthetic morphospaces and collectives

(24:27) Limits of bottom-up engineering

(31:07) Machines and major transitions

(39:05) Language, communication, polycomputing

(44:31) Patterns, agency and memory

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Michael Levin: I was looking over the one that you did with Brian Goodwin. He was always a hero of mine. I thought it was a great opportunity to have you talk about how you knew him, what you guys did together, and what your experiences were.

[00:21] Ricard Solé: It's a peculiar story, because when I was in high school I stumbled onto the Spanish translation of 'Towards a Theoretical Biology.' The original was four volumes; the translation compressed them into one thick volume, so not everything was there. I read about Stuart Kauffman and Waddington and Brian Goodwin. At that time I liked mathematics and biology, and it was like, wow, it's possible to do mathematical models of brains and everything. I started PhD work simultaneously in quantum physics and quantum biology, and my supervisor helped me go to England to work with a guy at the Open University who was not Brian Goodwin, although I knew Brian Goodwin was there, so I was hoping to meet him. When I went there, Rob Ransom, the person I was going to work with, who wrote the book 'Computers and Embryos,' quit. It was the start of Margaret Thatcher's destruction of the universities. He quit and said, "Well, listen, there's Brian Goodwin here. I think you should talk to him." So I went to visit Brian. I must say I was warned about Brian, because at that time hyper-reductionism was the standard in genetics; even my supervisor was saying that the genes should explain everything. I was warned because there was this legend that Brian didn't believe in genes, which was of course not true, and that he was a crazy person. I went to his office and he was incredibly kind. We started to chat, and he began to tell me things about the irreducible nature of biological complexity: evolution by natural selection is great, but it cannot explain every single thing. It was a great conversation, but I was pretty skeptical about some of the things he said. At the end he said, "I see you are a skeptic. No problem. Let's keep talking. But just think about the questions I was asking you." I went out of the office thinking, I don't have an answer for the questions Brian was asking me. The truth is that after more than 30 years I still have these questions in my mind, because they are really important, really relevant. Now I go to conferences where people talk about organoids and synthetic multicellularity and speak about symmetry breaking and constraints; it's a pity Brian cannot see this happening, because he was a strong advocate of these emergent phenomena and broken symmetries. At the time, to most people that sounded like very abstract stuff. He was right.

[03:48] Michael Levin: I'm really interested in the sometimes non-monotonic nature of science. You have these great thinkers, and they come up with this amazing stuff, and then it sleeps, at best, for a long time. A lot of people haven't heard of it. I find these incredible pockets of papers where I'm like, my God, how have we not seen this? Some of this stuff is really foundational but also hard to put into practice; Robert Rosen's work, for example, is clearly important, but I found it hard to make practical. I think you've been very successful in bringing these kinds of ideas into the laboratory. Maybe talk about what you're doing now, what you have done, and how this perspective impacts it?

[04:47] Ricard Solé: It is. The translation into the wet lab, into reality, of things that come from the whiteboard or the blackboard, that's my preferred mode. The 21st century is a time when you can actually engineer living matter in ways that make sense. It makes sense to talk about engineering living matter, with all the emergent phenomena we have to deal with. As for the old papers: one thing that I always do, and my students end up enjoying it, is have them read the classical papers. So we cite papers from the time of cybernetics. It's amazing to see how much insight there was already that we haven't yet translated into final theories, or into the dreams of the people who thought of machines that might one day be living machines. It's been a long process. Sometimes you wonder, and I know this is a shared interest, how far can we go beyond what biology has already explored? Because evolution has been extremely powerful, as we can see, in finding solutions of all kinds. Is there room for more? Is there room for going beyond the potential of evolution? That's one of the things I'm very interested in, and it connects with my oldest interest in this whole story, which is the possible and the actual. So, theory, but also other ways, like synthetic biology for example, can help to interrogate the potential creativity of nature. That's why it's such an exciting time: we might actually get there.

[07:07] Michael Levin: One of my favorite things I came across many decades ago was D'Arcy Thompson's "On Growth and Form." One of my favorite parts of that book was the grids: he's got this geometric grid, and you put a fish or a crustacean or something on the grid, and then you start mathematically deforming the grid. Of course the image changes, but what you get are other species of that animal; he does it with skulls and with all kinds of things. Clearly there's something here, but what is this grid? What does it mean that you're deforming the grid? I was talking about this with an undergraduate student, and she said we should deform the grid and change the fish. Everybody laughed; the other students said, "You can't change the real fish." I thought, no, actually, we should absolutely be able to do that if we understood what this was. It gives this impression of a latent space of possible forms, and of ways you could explore other regions of that latent space, which would look like deformations of that grid. If we understood the mechanism that was pulling down parts of that space, we should absolutely be able to change it.
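For readers who want a concrete handle on those grids: they are just smooth coordinate transformations, so any form drawn on the grid is carried along when the grid is deformed. Here is a minimal Python sketch; the outline coordinates and the shear map are hypothetical illustration values, not taken from Thompson's plates:

```python
import numpy as np

# A toy "fish" outline as points on the original Cartesian grid
# (hypothetical coordinates, for illustration only).
fish = np.array([
    [0.0, 0.5], [0.5, 0.8], [1.5, 0.9], [2.5, 0.6],
    [3.0, 0.5], [2.5, 0.4], [1.5, 0.1], [0.5, 0.2],
])

def thompson_shear(points, k=0.4):
    """One of the simplest Thompson-style deformations: a shear.

    x' = x + k*y, y' = y. Deforming the grid this way deforms
    every form drawn on it; related species often differ by
    exactly such smooth maps.
    """
    x, y = points[:, 0], points[:, 1]
    return np.column_stack([x + k * y, y])

print(thompson_shear(fish).round(2))
```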

[08:29] Ricard Solé: That example, I think, is relevant for many reasons, and it was one of the things D'Arcy Thompson was right about. I remember being at a conference a year ago where someone was talking about the diversity of head shapes in one group of bats. The plasticity you can have there, because of the modularity in the bones, in the skulls, allows bats to range from a flat face to a very elongated mouth. It was amazing. But then the question, and that's something I've been discussing with several people over the years, is whether this also translates into diversity when you go into the logic of life. Because we are now discussing what could be detected on other planets, different life forms, this also has relevance for our work in bioengineering: whatever we understand about the limits tells us something about what engineering will be allowed to obtain. I believe that in terms of the fundamental logic there are huge constraints; life elsewhere might appear different, but in terms of the molecular information that underlies everything, the logic of cells, it's constrained. That is, we will see essentially the same things.

[10:26] Michael Levin: Interesting.

[10:28] Ricard Solé: Wow. It's a big claim.

[10:31] Michael Levin: It's a very interesting claim. I wonder about that. The shapes of the heads and faces: I had another undergrad in the lab, Maya Emmons-Bell, who discovered that you can take a completely genetically wild-type planarian. Planarians come in many different shapes: flat heads, round heads, triangular heads. You cut off the head and it will regenerate, but if you perturb the bioelectric circuit that controls the number of heads, which apparently also controls the shape of the head, you can get heads of other species, species that are 100 to 150 million years apart in evolutionary distance. You get flat heads, round heads, and everything inside matches: the shape of the brain and the distribution of stem cells become like those other species. It's a stochastic effect, because you get a distribution of these species, but the frequency with which you get the different head shapes matches the evolutionary distance between the species on the tree. I see it as a navigation problem: the collective is navigating a morphospace of possible shapes, and you can get the exact same hardware to visit other regions of that space. It doesn't have to be locked in genetically; by manipulating the navigational information, you can get the exact same genetic hardware to visit these other regions.

[12:17] Ricard Solé: I know that work, and I must say the pictures you have in the papers of these phenotypes really go into things that you could say do not exist. I have said this is science fiction. But it opens the door to many new ideas. Bioelectricity has been, as you know, something that has not been much considered. When I was with Brian Goodwin at the Open University, people there were trying to figure out what the role of bioelectricity could be, but it was considered a totally marginal and speculative matter. It's now clear that the potential, even for new kinds of genetic attractors, is quite a thing. One of the things you ask yourself when you read the papers is: now, what's next? Because we are finding a potential that was totally unexpected.

[13:20] Michael Levin: We've been playing for years now with this idea that the reason bioelectricity is interesting is the same reason it's interesting in the brain: it's not just another piece of biophysics, but actually underlies the basal collective intelligence of the cells as they navigate these shape spaces. If you take that seriously, then you end up exploring some other weird tools from neuroscience, for example neurocognitive modulators: hallucinogens, anesthetics, anxiolytics. A lot of this isn't published yet; we're still writing some of it up, but there are some crazy things. Along the lines of what we were just saying, one thing you can do with embryonic morphogenetic systems (we did this in the frog; this was Kelly Sullivan, another student in the lab, who worked on it) is treat them with various very mild cognitive modulators of the serotonergic class. One of the things that happens is that the system starts to hallucinate shapes of other species. You can make a frog tadpole start making zebrafish tails, and it starts making faces that belong to different frogs. I really think that this notion of a morphospace, which you explored in your paper and have done some very interesting things with in synthetic biology, is incredibly powerful, because I see these synthetic things that we make as exploratory vehicles of that space. That's really what we're doing: using them to map out that space of possibilities. What do you think about that? You've written about the morphospace of synthetic biology; how do you see that space? What's the level of reality of it, in terms of what it is that we're studying?

[15:21] Ricard Solé: I'm a strong advocate of trying to build such spaces, because many questions that we're trying to answer require a taxonomy that compares, for example, cognitions of different kinds, or organoid complexity of different kinds. One of the interesting things about morphospaces, and this is what David Raup emphasized in the original paper, is that you try to fill a space of the possible with what we observe in reality, and what tends to happen is that reality only fills part of it. Some of the space might be impossible to achieve because of fundamental physical laws or even mathematical ones. But oftentimes we find voids in this space of the possible where there's no obvious reason why we find nothing. For us, that's been an opportunity for thinking: could we use synthetic biology to try to fill those voids? There are crazy projects that are still ongoing. One of my favorite projects, which has been around for years, is the possibility of implementing, in bacteria, new rules of interaction that make our bacteria behave like ants in ant colonies. Ants use these specific rules, building morphogenetic fields, et cetera, because then collectively they can solve problems. In bacteria you see preconditions of that, in terms of sensing and all the things that can be identified in ant colonies, but not the whole story. You can ask yourself: why is it not there? One thing is asking this from theory; another is trying to make it in the lab. If we can do it, we can actually answer the question of why bacteria don't need that, or why they cannot get there. For us, it's a fantastic opportunity. I think most people working in synthetic biology don't have this objective; for us, synthetic biology is a way of interrogating complexity, interrogating evolution. One of our success stories concerns what happens when you try to build computational circuits and implement them in cells; I think it's a good lesson about design and evolution. People used to follow standard electronic design principles. This has problems, because in electronics you connect things with cables, and the cables are all exactly the same. Within cells, every cable has an identity: if you want to connect one gene with another gene, you need to use a molecule that has to be different and that identifies the promoter site, et cetera. That immediately produces problems, because the more cables you put inside, the more they interfere with something else, and it can even happen that the cells kill themselves. To avoid that, we ended up with a design principle based on multiple cells that do not have to be connected among themselves. The principle we obtained, which we call distributed computation, is not based on standard electronic design, because we can somehow break the circuits into pieces. We don't find that in biological systems, which is a good story, because it tells us that maybe there are things to discover there that can be part of what we need to go into the voids of the morphospace. Maybe this is an accident, but I think it is a good indication.
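To make the wiring problem concrete, here is a minimal Python sketch of the distributed idea: each engineered "cell type" computes one simple function of the external inputs and, if active, secretes a single shared signaling molecule that a reporter cell detects. The cell names and the XOR example are hypothetical illustrations, not the published circuits:

```python
# Each "cell type" is one simple gate on the external inputs;
# no multi-gate wiring ever has to live inside a single cell.

def cell_A(a: bool, b: bool) -> bool:
    # Active only for the input pattern (a AND NOT b)
    return a and not b

def cell_B(a: bool, b: bool) -> bool:
    # Active only for the input pattern (NOT a AND b)
    return (not a) and b

def consortium_xor(a: bool, b: bool) -> bool:
    # The shared molecule accumulates if ANY producer cell is
    # active; the reporter cell just detects it (an implicit OR).
    shared_molecule = cell_A(a, b) or cell_B(a, b)
    return shared_molecule

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", consortium_xor(a, b))
```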

[19:54] Michael Levin: I don't know if I ever told you, but my postdoc, Rick, and I have this crazy project to train an ant colony: not the individual ants, but the colony. We wanted to communicate with it, and the simplest way of communicating with something, you might think, is to train it. So we tried to train the colony. It was right before COVID started, so we never finished it, but we started building the equipment; we actually got ants into the lab. The idea was a machine with two areas, A and B. Area A has a camera that counts the number of ants on a little platform. That's it. Area B receives food in proportion to how many ants are standing on platform A. So no individual ant would ever have the experience of "I do action A and then I receive food at B," because it's too far. But as a collective, I thought, could the colony learn to send a bunch of ants over so that it could pick up the reward, like associative conditioning, and then distribute it?
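A toy simulation makes the proposed feedback loop explicit. Everything here is a hypothetical illustration (the colony-tendency parameter and the reinforcement and forgetting rates are invented), meant only to show how a colony-level contingency could close without any single ant experiencing it:

```python
import random

random.seed(1)

# Colony-level conditioning loop: a camera counts ants on platform
# A; food appears at B in proportion to that count; food intake
# reinforces the colony's tendency to send ants to A. Parameters
# are hypothetical; this is not a model of real ant behavior.

N_ANTS = 100
p = 0.05            # colony-level tendency to occupy platform A
REINFORCE = 0.01    # how strongly food intake reinforces that tendency
FORGET = 0.001      # slow decay of the tendency without reward

for step in range(300):
    on_A = sum(random.random() < p for _ in range(N_ANTS))  # camera count
    food_at_B = on_A / N_ANTS                               # dispensed reward
    p = min(1.0, p + REINFORCE * food_at_B - FORGET * p)
    if step % 50 == 0:
        print(f"step {step:3d}: ants on A = {on_A:3d}, tendency p = {p:.3f}")
```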

[21:01] Ricard Solé: That's a very interesting idea; I think it makes a lot of sense. It reminds me of a different story that points in the direction of what you're saying: work by people in Brussels who used small robots to interact with cockroaches. Cockroaches do not behave like ants, but you can use robots that trick their interactions so that they end up collectively making decision-making transitions. That is different from what you're suggesting, but it tells us that the right ways of interfering with the interactions between individuals can actually trigger the collective dynamics you would expect from collective intelligence.

[22:02] Michael Levin: I haven't seen that work. That's very cool: a chimera at the population level. We do a lot of that at the level of morphology, combining cells from different species to see how the goals of the collective change when you've got parts that used to be different things. I'll have to look that up. I meant to ask you about this: distributed learning. Do you think we could do something like this in bacteria? Could we use a biofilm and try to show distributed associative learning that's spread out in space? What do you think about that?

[22:46] Ricard Solé: We've been thinking about that with one of my postdocs, Nuria Conde. This is not published, but it's what we're trying to do as an extension of the distributed computation work. The work we did on associative learning is theoretical so far, although we are trying to implement it in the lab; one of my students, Jordi Bly, is almost there. The obvious follow-up is: what if we implement that in a multicellular system that has coherence? We were thinking of biofilms as one possibility; we're also thinking maybe spheroids, having a spheroid augmented in its capabilities by associative learning distributed across different cells. The nice thing about the model is that we have shown you can have associative learning with short- and long-term memory plus forgetting; the whole spectrum of possibilities is there. It's the kind of thing that we like. It's in the middle, in the twilight zone between real biological matter, the organs and organoids, and everything that happens in cell collectives in nature. Again, for the voids in the morphospace, I think that's one of the big candidates, and it should be done. It should be doable.
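The short-term/long-term/forgetting combination he describes can be captured in a few lines. The following Python sketch is a generic dual-trace toy (all rates are hypothetical illustration values, not the lab's unpublished model): a fast trace is driven by stimulus pairings and decays quickly, while a slow trace consolidates it and decays slowly:

```python
def run(trials, paired):
    w_short = w_long = 0.0
    ACQUIRE, FORGET_S = 0.5, 0.3        # fast acquisition, fast forgetting
    CONSOLIDATE, FORGET_L = 0.05, 0.01  # slow consolidation, slow forgetting
    history = []
    for t in range(trials):
        us = 1.0 if paired(t) else 0.0   # reinforcement present this trial?
        # Short-term trace: driven by pairing, decays quickly.
        w_short += ACQUIRE * us * (1.0 - w_short)
        w_short *= 1.0 - FORGET_S
        # Long-term trace: slowly consolidates the short-term trace.
        w_long += CONSOLIDATE * w_short * (1.0 - w_long)
        w_long *= 1.0 - FORGET_L
        history.append(w_short + w_long)  # total conditioned response
    return history

# Pair stimuli for 30 trials, then stop: the fast trace vanishes
# within a few trials while the slow trace lingers (long memory).
h = run(60, paired=lambda t: t < 30)
print([round(x, 2) for x in h[::6]])
```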

[24:27] Michael Levin: In your synthetic biology approaches, how much do you see, and do you have any cool examples of, the material trying to resist what you're trying to do to it? There are difficulties, as you've pointed out, that are just passive difficulties. But do you see interesting examples of the system resisting the hacking, trying to get around whatever you're trying to force?

[25:03] Ricard Solé: There's what I was mentioning before: the whole problem comes from trying to do stuff at the single-cell level, where modifying the wiring is a risky business. I've always been a bit skeptical about some of the hype around synthetic biology, the idea that we can follow electronic design principles and scale up like transistor technology. I'm skeptical for one reason, and it brings to mind this very nice paper called "Can a Biologist Fix a Radio?", which asks: if I have a radio and I want to understand its working principles, can I use the methods of biology, cutting one cable or taking out one element and seeing how these "mutations" modify the system? This gives the impression that we can do something similar in cells to what we do in technology and hardware. I don't think we are there at all. If we are honest, we essentially use cells as a chassis: the cells work as living machines or living systems and do the things we want because we introduce something that doesn't interfere with the rest. Take your radio and add a little piece, making sure it's not going to interfere with the rest. It's important to ask ourselves, and we don't know, what are the limits of what is really doable? Circuits have been evolving over evolutionary time scales through tinkering and constant reuse, and the same happens in software. There are all these ideas from François Jacob saying that evolution tinkers, whereas engineers design, trying to optimize, which of course is true. But when you go into large software projects, for which you can actually build the underlying networks, a software designer or a group of designers has limited control of the complexity of the system. Once the system has reached a level of complexity that makes it very difficult for any single person to follow what's going on, the rational design principle is to take modules that are already there, that we know interact well with the rest, copy them, and then modify, which is not very different from how the genome works: I duplicate genes and then I rewire. When you look at these large software projects and the statistical distribution of connections, it can be explained extremely well, because there are a large number of regularities; very different software projects show the same power laws with the same exponents. It's not because the engineers have a book saying "try to make power laws"; it's inevitable, because the system is evolving by tinkering. The same happened with cellular networks. If you analyze how proteins interact, and the origin of those interactions, you see clearly that the process of copying and rewiring is extremely extensive. That means the circuits you have bear pretty much no resemblance to electronic designs or anything we are familiar with from engineering. So the big question is: is it possible to somehow interfere with this kind of tinkering process, or with this kind of topology? I'm very skeptical about that.
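The duplicate-and-rewire process he describes is easy to simulate, and models of this family are known to produce heavy-tailed degree distributions like those reported for protein networks and large software graphs. A minimal Python sketch (the seed graph and probabilities are hypothetical illustration values):

```python
import random
from collections import Counter

random.seed(0)

# Growth by tinkering: copy a random node together with its links,
# keep each inherited link with some probability, and occasionally
# add a novel link. No power law is ever specified explicitly.

adj = {0: {1}, 1: {0, 2}, 2: {1}}           # small undirected seed graph

def duplicate_and_rewire(adj, keep=0.6, novel=0.3):
    new = max(adj) + 1
    parent = random.choice(list(adj))
    adj[new] = set()
    for nb in adj[parent]:                  # inherit the parent's links...
        if random.random() < keep:          # ...each kept with prob `keep`
            adj[new].add(nb)
            adj[nb].add(new)
    if random.random() < novel:             # occasional novel rewiring
        other = random.choice([n for n in adj if n != new])
        adj[new].add(other)
        adj[other].add(new)

for _ in range(3000):
    duplicate_and_rewire(adj)

hist = Counter(len(neighbors) for neighbors in adj.values())
for degree in sorted(hist):
    print(degree, hist[degree])             # frequencies fall off heavy-tailed
```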

[29:45] Michael Levin: My thinking is that we're going to have to do what we've done for millennia in neuroscience, which is to use top-down control, incentives, and training. I agree with you that this kind of bottom-up tinkering is going to be very, very difficult in the long term. I think we have some opportunities now to really understand what interfaces these systems give us for top-down control, to reset the set points and make them motivated, in effect, to do the kinds of things we want them to do, rather than trying to micromanage. But yes, I think it's going to be very hard.

[30:33] Ricard Solé: I don't want to dismiss all the work that's been done in a direction that is absolutely relevant. You might design a given construct that you incorporate into a bacterium so that you have a sensor-actuator system that detects a cancer cell. That's fantastic; there's an amazing number of opportunities in biomedical research and elsewhere. It's just that if you go into more complexity, and especially if you want to explain complexity, that might not be the guiding principle.

[31:07] Michael Levin: Related to this, what's your take on the fact that, even though molecular biology has been going great and there's some amazing protein engineering and all this stuff is going very well, there's been an uptick of papers saying that living things are not machines and trying to draw a stark distinction between these two things? What's a useful definition of "machine" for you, and what's your take on this question?

[31:45] Ricard Solé: Okay, that's not an easy question.

[31:46] Michael Levin: No, it's not.

[31:48] Ricard Solé: I have mixed feelings. On the one hand, you can take a more coarse-grained view of the behavior of, for example, cells, and just think of them as replicating systems that need to find resources, which means they have to deal with the environment in a more or less complex way. In that sense, you could define the fundamental machinery that allows that to happen, which of course happens in an embodied way, and that leads to non-trivial things. To some extent, talking about machines there is not, I think, inappropriate. But as soon as you go up the hierarchy of complexity and have multicellular systems, with all these conflicts between unicellular and multicellular constraints, and in particular when you go into systems that have jumped beyond the sensor-actuator description, where you have interneurons, or ways of having information processed in the middle that go beyond simple cybernetics, how good is the machine metaphor? I'm struggling with that. If you ask me for a definition, I'm still thinking about it. What I can say is that I don't think this can be settled without developing the theories. Right now we have a literature that is growing very fast, also in the context of agency and even consciousness in living systems. My personal impression is that there are discrete classes here, that this is not a continuum; but to substantiate that we need to get very serious and go into making models of things. Otherwise we will stay trapped in this permanent discussion about whether or not these organisms have this or that property. We still lack good definitions. That's part of the job we have to do.

[34:32] Michael Levin: What's your favorite? I'm interested in the notion of distinct classes and sharp transitions and things like that. What's your favorite example of an emergent discontinuity, a sharp distinction that you just can't zoom in on and make a continuum out of? I ask everybody this: what's your favorite one?

[35:01] Ricard Solé: But this is a mean question.

[35:07] Michael Levin: I'm just interested, because there are so many different examples of these across the areas we're interested in. From talking to people, I think most people believe in sharp transitions. But I have a really hard time finding an actual good example. At the quantum level, maybe you do, but otherwise I find it hard to find a good, sharp transition. The more I look into it, the more everything seems continuous. So I'm constantly on the lookout for one where I'll say: that's a sharp transition. What are some of your favorites?

[35:51] Ricard Solé: I think that what are typically identified as major evolutionary transitions are sharp. We can discuss the intermediate steps that might drive them. For example, as soon as polymers became useful in any way, maybe not yet as information carriers, the fact that you have linear polymers represents a revolution. Because if you think not of the chemistry, but of the potential space that can be searched for things that can carry information, it's really a revolution: all of a sudden, even with a small number of monomers, you have access to a space of possibilities that has little to do with what chemistry dictates. It's more about what evolution and biology can exploit. And if you go all the way up to language: human language is such a big revolution in many ways. That's why it's so difficult to explain how it emerged, because the complexity it carries is amazing. Sometimes people ask me whether the brain is the most complex object in the universe, which is a sentence we like to say; of course we don't know, because we don't know anything about other living things in the universe. But it is in one sense: language allows recursion, and that gives access to the infinite. Whether or not there are other objects in the universe that are as complex in combinatorial terms doesn't really matter, because what can be more powerful than being capable not only of understanding yourself, but also of capturing the idea of the infinite? For me, each major transition represents a revolution in some sense, because all of a sudden you get the possibility of managing information in completely different ways. The same is true of the transitions that have, let's say, an ecological meaning: for example, when the first predators emerged, that immediately created a runaway effect that led to complex sensors and probably to brains. Something as simple as that starts a major revolution and opens up a completely new space of the possible. I don't know if that answers your question.

[39:00] Michael Levin: It's pretty good. I like it.

[39:05] Ricard Solé: Yeah.

[39:07] Michael Levin: Language. It's interesting you brought up language. Do you have any thoughts on how we would recognize, either in a full implementation or even in an early proto-version, as we look at the communication between cells, between tissues, between bacteria, the signs that would let you say, "this looks like a simple language"? What's the key? Communication is everywhere, but what distinguishes language from communication? What do you think?

[39:44] Ricard Solé: There is a potential for having clear-cut discrete classes. When you look at bacteria, there's been the idea that they have some kind of language, proposed by Eshel Ben-Jacob and others who argued for bacterial intelligence. It fits well into Shannon's communication channel, where you have communication, which means I send my signals and the signals are interpreted as correctly as possible by the receiver. That stays at that level. From there to something you could call having a grammar, something that implies that language is organized in a complex way that allows hierarchy and recursivity, the jump is enormous. That's why we can identify precursors of language at different levels. The vast majority of life on the planet uses communication that is more or less efficient, sometimes exploiting noise, because noise is helpful. Humans clearly have a singular character there; it's nothing like that. Now we have new tools for understanding, or even playing with, the communication systems of whales and of our primate cousins. We'll see how far this goes, because it's going to be possible to know whether they have a grammatical structure. In some species of birds, there seem to be proto-grammars. And there is a beautiful possibility that synthetic alternatives bring to understanding language. I always give the example of Luc Steels' talking robots, programmed to identify objects or actions, attach a word that they invent, and then agree with each other: let's use the same word for this action or this color or this object. At the end of the experiment, the amazing thing is that you get what is expected; the robots have a lexicon to refer to the external world. But when you analyze the structure of it, what Luc found was a proto-grammar, an emerging organization of words. The reason was that if I have a blue ball rolling from left to right, there are two important things here. One is that you cannot capture that without having a meta-structure that connects words in a non-trivial way. The other, unexpectedly, is that you need a body, a body that is physically in the environment you are referring to. It was a surprising early example that embodiment can be relevant for cognition. Now we have this new generation of language systems where maybe we can answer the big question: if a complex language emerges in an artificial system, is it going to be like ours? Is it going to have the same fundamental formal structures? Or can it be something totally different? That's going to happen in the next decade.
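The lexicon-convergence part of what he describes (though not the embodiment or the emergent grammar) can be reproduced with the classic minimal naming game. This Python sketch is a generic textbook version, not Luc Steels' actual Talking Heads setup; the agent count, word shapes, and number of games are arbitrary:

```python
import random

random.seed(0)

# Minimal naming game: agents invent words for one shared object
# and align their lexicons through repeated pairwise games.

N_AGENTS = 20
vocab = [[] for _ in range(N_AGENTS)]      # each agent's word inventory

def new_word():
    return "".join(random.choice("aeioubdklmnprst") for _ in range(5))

for game in range(3000):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    if not vocab[speaker]:
        vocab[speaker].append(new_word())  # invent if inventory is empty
    word = random.choice(vocab[speaker])
    if word in vocab[hearer]:
        vocab[speaker] = [word]            # success: both agents collapse
        vocab[hearer] = [word]             # their inventories to this word
    else:
        vocab[hearer].append(word)         # failure: hearer adopts the word

print(set(w for v in vocab for w in v))    # typically one shared word left
```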

[43:36] Michael Levin: It's interesting. Josh Bongard has some amazing work on this. We've been looking into this notion of polycomputing, the idea that the exact same set of physical events can be interpreted by different observers as different computations. I wonder if that extends to the observation of a grammar: could different observers with different cognitive structures look at the same physical stream, and one sees a grammar and one doesn't, or one sees a grammar and the other sees a completely different grammar? I'm not sure if it can be overloaded that way. I wonder if some of this is very much in the eye of the observer, not just a question of whether we are smart enough to detect it; that part is clear. But really, is it possible that different observers see completely different things?

[44:31] Ricard Solé: I'm very curious about that, because I just recently stumbled onto this concept you guys suggested, polycomputation. I've been thinking about all the unexpected surprises that come from the material nature of living systems, and how the agency this brings can unexpectedly fill the gaps that are apparently there. It opens the door for language too. Can complex communication systems emerge by exploiting all kinds of even agential materials? We are assuming all the time that the neural network metaphor is going to be the only thing that carries the complexity you need for that. Who knows? That's an open field to be developed.

[45:34] Michael Levin: We're starting to look at agency in really weird places, for example in the communication channel itself: you've got two agents that are communicating, but what if the channel has an agenda too? Also the patterns themselves. The latest thing I've been working on is this idea that the patterns themselves within these cognitive systems may have tendencies toward self-persistence and may actually do some niche construction. Are there thoughts that modify the cognitive agents so that they have more thoughts of the same type? Do patterns within the system have certain kinds of homeostatic or even allostatic capacities? It's going to be very interesting in the next few decades. We have to start looking at these systems that we're making, whether they're the language models or the more bottom-up things people make, for things that we didn't bake in. That's a major, major area.

[46:38] Ricard Solé: It reminds me of ideas that Hermann Haken and colleagues suggested years ago about connections between pattern formation and pattern recognition. When you look at the papers, what is implicit there is that pattern recognition systems, which can be modeled in terms of some neural network, for example, are spatially extended systems, and that generates patterns. They made the conjecture, though they didn't go far with it, that when you have pattern formation you might have the preconditions that allow pattern recognition. There could be very deep connections between things we consider different, because one happens as a spatial dynamical system and the other carries out information processing. They may be more connected than we thought. So this is worth re-exploring. These ideas were there more than 30 years ago.
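One concrete way to see the conjectured link: in attractor networks, recognizing a pattern is a dynamical process in which the stored pattern re-forms from a corrupted cue. Below is a minimal Hopfield-style sketch in Python; it illustrates that general idea, not Haken's specific synergetic-computer equations:

```python
import numpy as np

# Two stored patterns over 8 binary (+1/-1) units.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian weights store both patterns as attractors.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

# Corrupt the first pattern by flipping one unit, then let the
# dynamics re-form it: recognition happens as pattern formation.
state = patterns[0].copy()
state[3] *= -1

for _ in range(3):                          # a few update sweeps
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered stored pattern:", np.array_equal(state, patterns[0]))
```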

[47:54] Michael Levin: That's really good. It reminds me of Steven Grossberg in 1978; it's unbelievable. In 1978 papers could be extremely long, you could say everything you wanted to say, and this super long paper was called "Communication, Memory, and Development." I've got to look up those Haken papers, because I'm not that familiar with them; that sounds very useful. But Grossberg was talking about processing in the retina, and how pattern formation within the retina has interesting symmetries with developmental pattern formation: pattern recognition versus pattern formation.

[48:35] Ricard Solé: Very interesting. These kinds of ideas have to be taken back and put into the current context. Actually, Steven Rose was also at the Open University when I visited Brian Goodwin. We had opportunities to discuss memory and, in particular, how memory might require reinterpretation in terms of substrates and the assumptions we usually make about what memory evolved to do. I saw that the seeds of different ways of thinking were present, but we needed the technology and tools that we have now. The question remained there.

[49:21] Michael Levin: That would be a conversation I'd like to hear. Goodwin and Grossberg talking about memory. That's amazing.

[49:29] Ricard Solé: Yeah.

[49:31] Michael Levin: Cool. Thanks so much. It's great to get your ideas. Let's catch up whenever you're ready. I think there are some experimental things that we want to do together, right?

[49:41] Ricard Solé: That would be fantastic. Looking forward to that.

[49:45] Michael Levin: Yeah, plenty to do.

