"Mapping Diverse Intelligence Spaces" by Janet Wiles - discussion

Janet Wiles presents a talk on mapping diverse intelligence spaces through computational models of memory, language and xenobots, followed by a discussion with Michael Levin on development, plasticity, bioelectric signaling, and synthetic bioengineering.


Show Notes

This is a ~36 minute talk by Janet Wiles (https://about.uq.edu.au/experts/13) "Mapping Diverse Intelligence Spaces: looking for xenobots in computational models of memory and language", followed by a ~24 minute discussion of her work on the intersection of cognitive science, language, robotics, and computer science, and some of our work on evolution and synthetic bioengineering.

Slides kindly provided by Janet: https://thoughtforms.life/wp-content/uploads/2025/09/Thoughtforms-Conversation-with-Michael-Levin-Talk-Slides-from-Janet-Wiles.pdf

CHAPTERS:

(00:00) Mapping developmental lineages

(11:18) Memory, partitioning, language

(36:46) Memory transfer dynamics

(49:30) Plasticity, gradients, bioelectricity

(59:30) Talking with organs

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Janet Wiles: From our conversation before: I'm a computer scientist, and I have modeling projects. I've chosen a few of them to talk about this morning; there are three things I thought would be really interesting to discuss. One is this concept of mapping diverse intelligence spaces, and what developmental spaces in particular look like. One of the questions that's come up as I listen to your podcast is: when we try to unify something like the electromagnetic spectrum, we end up with a spectrum or a periodic table. When we try to think about developmental spaces, what is it we're looking for? The approach I take in my work is tripartite. There's what in your work you've been calling the platonic spaces or the platonic function: what is the function, independent of how it's implemented? There's a computational formalism, an algorithm or a computer program. And you might consider the mechanism, the physical trace or the device. Thinking in terms of these three, particularly in areas of robotics and biology, is really useful for computational models. I take the stance in the middle, at the computational formalism. In the second and third parts, I thought I'd briefly touch on memory and memory phenomena. I'm intrigued by this idea of how a basal memory trace can have a sense-making goal, which comes from the TAME approach, and I hope we'll be able to discuss some of that. Starting with the first part: how do we map an ontogenetic space? What does that mean? What is development from a computational perspective? This is an old project with three remarkable PhD students, Nic Geard, James Watson, and Kai Willadsen. We were looking at how you'd go from a DNA sequence, which we were thinking of as a genome, with the expression of that (gene expression and bioelectric expression in the network interactions) as an intermediate representation. James was looking at L-systems to grow plants, and Nic was looking at lineage trees. I thought I'd pull something out of Nic's work. You're familiar with C. elegans ontogeny: a cell, a zygote, divides repeatedly until you end up with an organism. Nic took this and asked, how could you get a network model to reproduce these lineages? Not so much because you want to understand C. elegans in this particular case, but because that process forces discipline on you: it's a very disciplined way of asking, what is a developmental lineage? What do lineages look like?

[03:46] Janet Wiles: What do these gene expression networks do in terms of creating them? He started from the premise that every cell contains the same set of genes and the same set of gene interactions, but the expression is unique to each cell. The cells have inputs: morphogenetic cell-to-cell signals. He evolved these networks (we were doing a lot of work in evolutionary computation at the time) and ended up with lineage trees. It's very hard to see this whole space at a glance, so he created what we called LinMap, a way of mapping it. The green-colored heat map here is 60,000 lineages, each mapped according to a single measure of its complexity. He tried six different measures of complexity, but this is just one of them. You can then go in and use that heat-map space as a way of indexing what the lineage trees that generate it look like. The large black region is cells that divide and divide again, dividing forever. You can think of them as bacteria, or in a multi-celled organism, as we were modeling here, it would be cancer. There's no differentiation. In other parts of the space, you have very small, sparse lineages which differentiate and stop dividing very quickly. Then you've got other structures. Some show beautiful semi-regular or quasi-regular patterns where the subtrees are repeated across many of the branches, but not perfectly. You get ones that are more complex. The interesting thing about mapping the entire space is that you see all the different complexity classes you'd expect from complex systems science. You see regions where there's just a single attractor: nothing much happens, and you can perturb the system and get no difference. These run through to the most complex regions, where you'd expect computationally complex phenomena. This is where, if you're mapping the attractors in this space, the Lyapunov exponents would be close to zero. The system itself has some stability: it's not chaotic, and it's not collapsing into a single attractor or a group of pre-specified attractors. Nic was asking this question: are all phenotypes equally available to natural selection? The heat map shows the range of complexities in ontogenetic space, but it doesn't tell us about the likelihood of a specific lineage. The traditional view in evolutionary computation is that any character trait is reachable by a short random walk or a small mutation in that space; mutation can enable you to reach everything. One of the things we've learned in evolutionary computation is that there are a lot of biases built into the structure of the genome. We were particularly looking at what the developmental process does: in a sense, what does development give you in terms of constraints on how you move through that space? To examine this question, we went for the simplest possible model we could think of, with just two cell types. Instead of evolving C. elegans lineages, this one is just a random network and whatever lineage it generates.
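
To make the setup concrete, here is a minimal Python sketch of this kind of model. It is not Nic Geard's actual system: the rules (gene 0 read as "keep dividing", a daughter-distinguishing signal on gene 1) are assumptions for illustration. Every cell runs the same random Boolean gene network, and only its expression state differs:

```python
import random

random.seed(0)
N_GENES = 8

# Toy genome: every cell carries the same random Boolean network.
# Each gene reads two other genes through one of the 16 two-input rules.
wiring = [(random.randrange(N_GENES), random.randrange(N_GENES))
          for _ in range(N_GENES)]
rules = [random.randrange(16) for _ in range(N_GENES)]

def step(state):
    """One synchronous update of gene expression."""
    new = []
    for g in range(N_GENES):
        a, b = wiring[g]
        idx = (state[a] << 1) | state[b]
        new.append((rules[g] >> idx) & 1)
    return tuple(new)

def grow(state, depth):
    """Grow a lineage tree. Gene 0 ON means keep dividing; OFF means
    the cell differentiates and becomes a leaf of the tree."""
    state = step(state)
    if depth == 0 or state[0] == 0:
        return state
    # Asymmetric division: a cell-to-cell signal sets gene 1 differently
    # in the two daughters, standing in for morphogenetic input.
    left = grow(state[:1] + (0,) + state[2:], depth - 1)
    right = grow(state[:1] + (1,) + state[2:], depth - 1)
    return (left, right)

def leaves(tree):
    """Count terminally differentiated cells."""
    if isinstance(tree[0], tuple):   # internal node: (left, right)
        return leaves(tree[0]) + leaves(tree[1])
    return 1                         # leaf: a final expression state

zygote = tuple(random.randrange(2) for _ in range(N_GENES))
print("terminal cells:", leaves(grow(zygote, depth=6)))
```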

[07:32] Janet Wiles: At some point it will differentiate into A cells (yellow) and B cells (red). This is a computer scientist's view of A cells and B cells, not a biologist's. Then you can create a fate distribution map. If you have four red and four yellow cells in your final organism, what happens if you evolve the lineage for one more time step instead of differentiating? You end up with eight red and eight yellow. If you then allow the yellow cells to divide one more time, you might end up with 16 yellow cells. So an organism with four red and four yellow cells is very close in this developmental space to one with eight red and 16 yellow cells. It's much more difficult in this space to get one that has one more red cell or one less red cell. You can then map this: take 10,000 samples and look at the cell fates, and you can try a traditional Markov process or stochastic control process. If you don't have a developmental system, you can ask what the cell fates are when A and B are randomly assigned to the terminal cells. You get a cluster around the most frequent organisms, in terms of the number of A and B cells. What happens when you then put in the network and try to map it? When you have these dynamic gene expression networks, you get something that doesn't look like the Gaussian structures you would see with a stochastic control process. After mapping 10,000 networks, the most frequent organisms are in the same place: an equal number of A and B cells. The size of the organism is specified by how many steps it takes until, probabilistically, all of the cells have differentiated. But the neighbors are in very different places. We didn't know how to name this distribution, so we called it our diamond distribution: it looks very much like the facets of a diamond. Instead of the neighbors being one more or one less cell, the neighbors are major jumps in what you can think of as a high-dimensional hyperspace, like the corners of Boolean hypercubes. When you project them down, you get something like this. These are not Boolean spaces, but it's interesting to visualize what this means in terms of gene expression. Pausing here: what I think such a map shows is what you can call the adjacent possible. You can think of it as the bioelectric neighborhood of ontogenetic space. The fact that it has this highly structured representation was fascinating. Have you found this in any of your work? My hunch is that the facets of the diamond are actually pullback attractors in developmental spaces. I don't have any evidence for that, but I'd be intrigued to discuss it. Another thing I found fascinating is thinking about looking for xenobots in these spaces. Are xenobots pullback attractors in frog biology? Could you think of this diamond distribution as the most frequent forms you would observe, which are then rarefied by evolutionary processes? Xenobots don't need the rarefaction because they are neighbors in this space. Do you have any comments on this? Or shall I keep going with the rest of it?

[11:18] Michael Levin: Keep going. I've got many things to talk about. I'm just taking notes as we go. Please keep going, and then I'll come back to things.
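
Before the talk moves on, a toy version of the fate-distribution comparison above, with all rules invented for illustration: a stochastic control assigns each terminal cell's fate independently, giving a binomial, Gaussian-like cluster, while a shared developmental rule correlates whole subtrees, so the reachable fate counts land on a few discrete facets rather than one-cell-at-a-time neighbors:

```python
import random
from collections import Counter

random.seed(1)
DEPTH = 4  # 16 terminal cells per organism

def baseline_fates():
    """Stochastic control: each terminal cell flips an independent coin."""
    n = 2 ** DEPTH
    a = sum(random.randrange(2) for _ in range(n))
    return (a, n - a)

def network_fates():
    """Toy developmental rule: each cell inherits one bit; a division
    copies the bit to one daughter and XORs a fixed 'network' bit into
    the other, so whole subtrees share correlated fates."""
    net_bit = random.randrange(2)   # stands in for the shared genome/network
    cells = [random.randrange(2)]   # zygote state
    for _ in range(DEPTH):
        cells = [b for c in cells for b in (c, c ^ net_bit)]
    a = sum(cells)
    return (a, len(cells) - a)

base = Counter(baseline_fates() for _ in range(10_000))
netw = Counter(network_fates() for _ in range(10_000))
print("baseline (binomial, Gaussian-like):", base.most_common(3))
print("network (a few big facets, no one-cell steps):", netw.most_common(3))
```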

[11:26] Janet Wiles: One other thing about trying to think about what a xenobot is, is trying to think about whether large language models are a kind of xenobot for human languages and cultures. That's something that's fascinated me as well. The second part is memory, and this idea of a unification of memory across multiple scales, or a multi-scale competency architecture. My PhD is in computer science, but I did a postdoc in psychology studying episodic memory structures, in particular tensor, that is, rank-three, memory systems. Episodic memory is interesting because it tries to encode everything about an event: who, what, when, why, how. That's context. Within a given context, it can associate cues and targets. A paper by Humphreys, Bain, and Pike in 1989 in Psychological Review set up a lot of this work. The nice thing about these structures is that they're like one input layer and one output layer of a neural network, and as everybody who works in this space would realize, you can learn orthogonal patterns in a single trial. They're very efficient, but you can't learn different associations easily without catastrophic interference. If you add context, you can learn in context. The context is not added to the cue; it's a cross product with the other two, which is what gives you the rank-three tensor. These are very efficient and very effective, but they only really work when the information is already orthogonal, that is, already represented either in a form that could be symbolic, or in what they call a one-hot encoding. The other episodic structures I find really interesting are the ones that model neurogenesis. The dentate gyrus in the mammalian hippocampus is a region where new brain cells develop all the way through adult life, including in adult humans. As I tell my students, they should go out and do a lot of exercise because it generates new cells in the dentate gyrus. It's no good just doing a lot of sport, though; you actually need to do something to cause those cells to differentiate and integrate. What they do in this region is encode novel events and unexpected rewards. Brad Aimone created a beautiful model of this. Lara Rangel, in Andrea Chiba's lab, tested it in rats and showed that over different time periods these cells integrate and collect information together. These are episodic memory systems. The other way of thinking about neural networks is the classic multi-layered encoders. These are much more like semantic memory. When we were doing memory research, we would differentiate between memory and learning. Memory can be done in a single trial (it's not always done in a single trial, but it can be), and learning takes longer. Encoders are really learning systems, in that they require learning either a compressed or an expanded representation in the hidden layers. This is useful when you have non-orthogonal vectors. These are feed-forward systems: the traditional multilayered neural networks. Experimenting with these, people like Jeff Elman and others added recurrent connections. If you take a multilayer system with recurrent connections on the hidden layers, you get a simple recurrent network. That embodies the idea of learning to predict in a recurrent system.
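
A minimal sketch of the rank-three tensor memory described above, assuming one-hot (orthogonal) vectors; this is a toy in the style of Humphreys, Bain, and Pike (1989), not their exact formulation. Each event is stored in one shot as the outer product of cue, target, and context, and the access processes discussed later in the talk (recognition, cued recall, familiarity) are just different contractions of the same tensor:

```python
import numpy as np

D = 8  # dimensionality; one-hot vectors are trivially orthogonal

def one_hot(i):
    v = np.zeros(D)
    v[i] = 1.0
    return v

# Rank-three tensor memory: each event is stored in a single trial
# as the outer product cue x target x context.
M = np.zeros((D, D, D))

def encode(cue, target, context):
    global M
    M = M + np.einsum('i,j,k->ijk', cue, target, context)

def cued_recall(cue, context):
    """Contract with cue and context; returns a vector over targets."""
    return np.einsum('ijk,i,k->j', M, cue, context)

def recognition(cue, target, context):
    """Yes/no strength: full contraction down to a scalar."""
    return float(np.einsum('ijk,i,j,k->', M, cue, target, context))

def familiarity(cue, target):
    """Context-independent strength: average over all contexts
    (contract the context axis with a constant vector)."""
    return float(np.einsum('ijk,i,j->', M, cue, target) / D)

# One-shot learning: the same cue maps to different targets in
# different contexts, without interference.
encode(one_hot(0), one_hot(1), one_hot(5))  # in context 5, cue 0 -> target 1
encode(one_hot(0), one_hot(2), one_hot(6))  # in context 6, cue 0 -> target 2
print(np.argmax(cued_recall(one_hot(0), one_hot(5))))   # -> 1
print(np.argmax(cued_recall(one_hot(0), one_hot(6))))   # -> 2
print(recognition(one_hot(0), one_hot(1), one_hot(5)))  # 1.0: known here
print(recognition(one_hot(0), one_hot(1), one_hot(6)))  # 0.0: wrong context
print(familiarity(one_hot(0), one_hot(1)))  # nonzero: known in some context
```

The context axis is what does the partitioning: the same cue can retrieve different targets in different contexts, which is the property the multilingual Lingodroids described later rely on.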

[15:39] Janet Wiles: Interestingly, the context unit just copies back the previous context, and the network can build up structures and learn semantic and syntactic structure; Jeff Elman used it for grammar learning. The input predicts the next input, not the current input, so it's not like a typical encoder: it's encoding the next step. This is very much like large language models today, which are based on predicting the next input with something that is either recurrent or that unfolds these recurrent layers. It's interesting from a language modeling point of view because you get both syntax and semantics, but unlike a traditional Chomskyan linguistic approach, these are not distinct and separate; rather, they're integrated within the same representations. Exploring this notion of how an item in memory might have some form of goal or be an active element, I think it's really interesting, in studying these simple recurrent networks, to think about who is the actor and who is the object. In a feedback network, which doesn't have to be a simple recurrent network, there's always a duality. The inputs act as operators on the activations already in the network; the activations in the network act as curried functions over the input sequences; and the connections within the network determine what kinds of curried functions are possible in that system. We were playing around with this. This is work by Anthony Bloesch and me. We were looking at how these networks could learn really simple tasks. The three-function task is a task where there's a sequence: you give an AND, an OR, or an XOR as an input pattern, followed by a whole sequence of zeros and ones, and the network has to compute that function as a curried function over all the rest of the inputs, until it gets another function token and is reset. We were using this as a way to think about operators and curried functions in neural networks, and we realized that if we really believed what we were saying, we should be able to map them. So we mapped the states for each function within the hidden unit space. Looking at the hidden unit manifold, mapping the first and the third canonical components, we could actually see the geometric structure of how the hidden layer shapes the data as it passes through. We could map the input patterns onto that: what an input pattern does is cause transitions in this space. There are five different input types, so five different operators, and you can map the transitions within that entire space for all of those operators. Or you can think about each of the six major regions in that space that compute the functions as curried functions: you can start from positions in those regions and ask, what does it do to the inputs? You can look at it in terms of operators and curried functions, and you can actually map these dynamics. This was all done with Boolean inputs and real-valued hidden units. Thinking about connecting to the TAME framework, one of the questions you can ask is whether the activations in the network are really like the bioelectrics in cells, and we can ask what a xenobot would be in this space, and how we would go looking for it. There are a couple of different ways we could think about it. One is to relax the idea of the inputs: instead of being Boolean, they could be real-valued. What would that do within the system? Would the rest of the hidden unit space actually force attractors towards the hidden unit manifold that we see already? Or would there be additional attractors that we could discover in that space?
The other way you can think about it is rather than starting from one of the canonical positions, you could start from anywhere in the hidden space and ask what happens.
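
For concreteness, a small generator for the three-function task described above; the token probabilities and reset behavior here are assumptions, not the original task specification. A function token (AND, OR, XOR) arrives as an input, and each bit that follows is folded into a running value, which is the curried function an SRN would be trained to track:

```python
import random

random.seed(2)
FUNCS = {
    'AND': lambda acc, b: acc & b,
    'OR':  lambda acc, b: acc | b,
    'XOR': lambda acc, b: acc ^ b,
}

def three_function_stream(length):
    """Generate (input, target) pairs: a function token resets the state,
    and each following bit is folded in as a running curried function."""
    seq, acc, f = [], None, None
    for _ in range(length):
        if f is None or random.random() < 0.2:   # occasionally switch operator
            name = random.choice(list(FUNCS))
            f, acc = FUNCS[name], None
            seq.append((name, None))             # reset: no target yet
        else:
            b = random.randrange(2)
            acc = b if acc is None else f(acc, b)
            seq.append((b, acc))                 # target = the running value
    return seq

for x, y in three_function_stream(12):
    print(x, '->', y)
```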

[19:52] Janet Wiles: You can combine them with real-valued inputs exploring parts of the space. At the time, we didn't think about how we'd develop those attractors. I think it's possible now with neural networks to actually look at the attractor dynamics for an entire space: the space that your typical function actually reaches, but also the rest of the space as well. When you think about memory, it's not just thinking about the memory trace and what that active memory pattern would do; it's also thinking about what an access process over that memory trace is. Again, going back to Mike Humphreys's work (Mike was my postdoc advisor): he modeled this with the memory tensor. The memory tensor is constructed as the outer product of a cue, a target, and a context. You can either use that cue or use a unit vector instead of a cue. Then you can ask: are you doing recognition, where you're trying to produce a yes/no answer? Do you know your phone number, yes or no? You don't have to actually recall the phone number to know it. Cued recall is where you have to produce it: someone says, "What's your phone number?" and you have to come up with, in a memory system, a full vector, not just a yes/no answer. A familiarity rating or a lexical decision can actually be independent of context: it's averaged across all possible contexts, or calculated across contexts. Free association is where you're not given a specific cue; you're just asked, when you think about breakfast, what do you think of? You might think of coffee. The conjecture here is that a memory trace could be said to have a goal when it directs energy towards its own encoding, storage, and retrieval. Or it doesn't necessarily need to be doing all of that initially. A trace could be just an encoding that the organism, or the memory trace, is not necessarily putting energy into. If it is useful, you can imagine an evolutionary process that then encodes it to the point where it becomes critical for the organism, so that energy ends up being put into that particular type of encoding. Just before we leave memory, I've got two slides here. One is thinking about memory not just as generalizing in terms of access, but also as partitioning. As I just said, any non-linearity could actually be a memory trace; it doesn't have to be intentional to start with. But the active retrieval system needs to surface what's relevant in the moment and also to suppress what's not useful. This sets up the notion that partitioning is intrinsically useful to memory. A lot of people would think of this as being part of a cognitive system, but if you think of it as part of the memory system, it gives you very powerful data structures as a computer scientist, potentially also useful in psychology, though I was looking at it more as a computational device. As an example of where we did that: we were working with these language-learning robots called Lingodroids. In one study, they were exposed to multiple languages. Prior to European colonization, people in the Northern Territory would typically speak 10 languages. You couldn't marry into your own language group, which meant everyone was multilingual.

[24:05] Janet Wiles: Your spouse would be multilingual, and then your children would have to be multilingual. We were working with a language centre that worked with some of these languages, like Alawa and Rembarranga. We were asking ourselves, how could a robot learn to be multilingual? When you think about a two-year-old child, they're learning language, but they're not told which language they're speaking. They're not told this is Alawa, this is Rembarranga; they just have to learn it. One thing with children is that when you bring them up multilingually, it helps if one person speaks one language. Maybe the grandparents on one side speak Portuguese and the grandparents on the other side speak a different language, or one parent speaks one language and the other parent another. So we gave this contextual information to the robot: who they were speaking to. We included the interlocutor. We were using the tensor memory system, so with one-shot learning, words could be learned in a single trial. The robots did learn successfully to be multilingual, in this particular case with two different languages. When a robot didn't have a word, it would generalize across all the different words it knew, so it would get confusion across languages. But if it already had a word with a given person, it would use that word. This kind of memory structure enables the languages to be separated. Think about the language you speak to your grandparents compared to the language you speak to your mates at school: it's a similar grammar, and it's still English, but the word choices can be very different. You can generalize, but you choose the words for the people you're speaking with. This was a real insight for us into the power of partitioning within the contextual system. The last slide touches on the power of this. In consciousness research, people often mention dissociative identity disorder. It's in the DSM-5, but a different way of thinking about it is that it's not intrinsically a mental illness, but rather an adaptive way to live with a history of trauma. Maggie Walters is someone who grew up with DID. She describes herself as a high-functioning multiple; she calls her alters her superpowers. She's written a brilliant memoir, where she wrote the major part but her alters dictated various chapters. Maggie credits her alters with literally keeping her alive. From a memory research point of view: I met Maggie at a writers festival several years ago, before the memoir came out, and then I read it last year. When I met her, I didn't realize that she was multiple; with high-functioning multiples, you typically will not realize. George Blair-West is a psychiatrist who wrote the foreword to "Split," Maggie's memoir. He says straight out that DID is not intrinsically a mental illness. He calls it perhaps the most powerful adaptive response the human mind is capable of. It allows a child to hand off the traumatic experiences to an exiled part, so that another, executive part can face the outside world, which is what a child facing that kind of trauma needs to be able to do. One reason I included this when talking about memory is that one approach psychiatry has taken to DID is reintegration treatment, and it has actually caused a lot of trauma to people like Maggie and others. She speaks about this and does a lot of public awareness raising.

[28:18] Janet Wiles: What reintegration does is confuse the trauma with the dissociated identities. The trauma is the problem; the dissociated identity can be seen as a way of managing it. This is why memory partitioning can actually be seen as really powerful, and I think there are ways of thinking about that from a basal memory point of view as well. Part three was exploring language and communication systems. Lingodroids was a project that ran for over 10 years. It involved a whole variety of different robots, neuroscientists, language studies, postdocs, and PhD students. We did a lot of work in simulation and also in physically embodied robots of different kinds. The previous study I mentioned, with the multilingual robots, was done in simulation, not with real robots. "In theory, theory and practice are the same. In practice, they're not." That's a quote roboticists always used when we were trying to move from simulation to real robots. It speaks to the idea that when we say machines are not machines, it highlights the difference between the mathematical space of functions (what you might think of as the platonic spaces), the algorithmic space of formalisms, and the real world of meat and metal bodies. In the real world of meat and metal bodies, in practice, there is no perfect implementation of either a mathematical function or an algorithmic formalism. I think this is interesting when we're trying to explore real robots, particularly for language and communication. From the Lingodroids, there are two studies I thought I'd mention here, by Scott Heath. Scott was interested in a whole range of questions about what robots can learn in terms of languages, but he was particularly interested in this one: if humans are going to communicate with robots, can robots communicate with robots? Can they evolve a language when they're heterogeneous? A lot of the assumptions in cognitive psychology were that the reason the Lingodroids were so successful is that they have similar physical body structures, with only slight differences, and a similar cognitive architecture. Scott asked: suppose they don't. There are different kinds of robot mapping systems. A laser scanner will estimate range, and it will create an occupancy grid, a map of its world based on probabilistic filters. It maps boundaries. It has no concept of time or events; it doesn't store episodic memories at all. The little iRat robots we were using have a forward-facing camera used to estimate appearance. They create hippocampally-inspired topological maps using RatSLAM (the version we were using was OpenRatSLAM), which is based around an episodic memory system that creates networks of events in space and time. Two totally different ways of structuring memory. Scott was asking: can they evolve successful languages for space, for distances, and for direction? It turned out that they do. They were remarkably successful: not 100%, but probably around 80%. The lesson from this one is that robots don't need the same body. They don't even need the same cognitive architecture.
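
A toy sketch of that result, with every detail assumed (this is neither the Lingodroids protocol nor OpenRatSLAM): two agents whose spatial representations are structured completely differently, one metric and one topological, play the same "where-are-we" naming game and still converge on a shared lexicon:

```python
import random

random.seed(3)
PLACES = [(x, y) for x in range(3) for y in range(3)]  # the shared world

def percept_grid(p):   # agent A: metric, occupancy-grid-like representation
    return ('grid', p)

def percept_topo(p):   # agent B: topological, node-id representation
    return ('node', PLACES.index(p))

def new_word():
    return ''.join(random.choice('ptkaiu') for _ in range(4))

lexicon_a, lexicon_b = {}, {}

# The "where-are-we" game: both agents stand at the same place; the
# speaker names it (coining a word if needed) and the hearer associates
# that word with its own, differently structured percept of the place.
for _ in range(200):
    p = random.choice(PLACES)
    ka, kb = percept_grid(p), percept_topo(p)
    word = lexicon_a.get(ka) or new_word()
    lexicon_a[ka] = word
    lexicon_b[kb] = word

# Test: do they agree on names despite having different internal maps?
agree = sum(lexicon_a[percept_grid(p)] == lexicon_b[percept_topo(p)]
            for p in PLACES)
print(f"shared toponyms for {agree}/{len(PLACES)} places")
```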

[32:32] Janet Wiles: What they did need was to be playing the same game: the language games work when the robots are both playing the same game. One way of thinking about this is that they needed the same culture. The second study from the Lingodroids was with a robot called Opie, which we were using in a child development lab to look at what children can learn from robots and teach to robots. This was a study where the child was teaching a robot color names, which is another question about how humans and robots can communicate. It was a language learning study. The robot would produce a color patch on its tablet; it might be a blue patch, and the child would name it. The robot would say, "What color is this?" "Blue." It might put up another patch, another blue, but maybe lighter. "What color is this?" "It's also blue." "Are they different or the same?" The child might say, "They're different." "What's the difference?" "One's lighter than the other." From this, the robot could pull out a structured view of both the colors and the relationships between colors. The interesting thing about this is that Opie and the child don't sense color in the same way. The robot is producing RGB patches of color on its tablet. They're sharing the experience, but the child is using vision; it's sensing. The robot is not even sensing the patch; it's producing it. This language is not just for transmitting information. It's connecting the private realities of these two very different agents. They don't even need to be doing the same thing in the same experience. They need to be playing the same game, a language game, but they can have different parts in the game. There's still this way of creating an external language that connects their private realities. As mentioned on the last slide, in current studies I have been working with Bill Bingley and Alex Haslam over the last four years, particularly studying intelligence in groups. When we study cells and how they work together in biology, it's often a very hierarchical structure. When you study humans and how we work together in groups, we're always part of many different groups. This is the notion of multi-group social intelligence: you're not 100% part of a group, and you don't lose your own individual identity; you retain it. In this work, we were looking for a good definition of socially minded intelligence, where an individual can look at their own goals and at the group (choosing a group they're part of, or the environment might make a group salient), and then look at the group's goals and work towards group goals or individual goals. People who are socially minded can also recruit other people toward their own goals. How do you look at what we call aligned abilities for a particular time, task, or event, deconstruct them into socially minded abilities to recruit resources, and then consider the resources available in that context? That's to mention current work. A lot of my current work is in human-centered AI. For these projects, we usually have some theory and also practical work. We've been working on language in the Florence Project, which is trying to develop a personal knowledge bank and an ecosystem of communication devices, both hardware and software, that enables people living with dementia to have quality of life at home. That's the end of the talk. I'll stop here, and I'm open to comments or questions.

[36:46] Michael Levin: Thank you so much. Amazing body of work. A few things I wanted to check into. First of all, you mentioned in the robots different kinds of bodies. Have you looked at what kind of architectures make it easier to move personalities, specific memories, and so on across bodies with different architectures? I'm thinking of the butterfly caterpillar business, where not only do memories survive the total refactoring of the brain, but they actually get remapped. The butterfly doesn't do things nor care about things the way that the caterpillar does. Have you ever played with that in your system?

[37:35] Janet Wiles: We haven't done that explicitly. I've thought a little about the butterfly brain case: when would it be useful to do it, and how would you do it? When you think about different ways of memory encoding, you can encode an entire event, or you can encode just the fewest number of variables. If you have very few variables, those variables might be something like a smell. A question I was going to ask you about the butterfly case is: does the butterfly choose to lay its eggs on the particular leaves it favored, or avoid leaves it disfavored, as a caterpillar? Is there actually an evolutionary advantage to the butterfly in retaining that memory?

[38:32] Michael Levin: There are two sets of things going on. There is definitely an evolutionary component, which is that they don't want to compete with their larval form; they try to stay away from each other in some ways. That is one. But what I'm talking about is a completely different thing: an experimentally induced memory. One of the people who did a lot of this work, Doug Blackiston, was a staff scientist in my group for many years; he just became faculty last month. There is also Russian work going back to the 1970s. You train the caterpillar to see a particular color stimulus, crawl over to that stimulus, and eat the leaves that it's given. The butterfly will remember to do that. What's interesting is that the butterfly doesn't crawl: instead of a soft-body controller, it has a hard body, completely different. It doesn't care about leaves at all; it wants nectar, so the reward it gets is completely different. And yet it keeps that new memory. So that's not an evolutionary memory; that's a behavioral, episodic memory that it acquired. We have all kinds of wild examples of this. This goes back to something else we're going to talk about, who's the actor and who's the object, in the sense that some of these memories have a pretty active component. I suspect, like William James's idea that the thoughts are the thinkers, that some of these memories have an agentic component. They like to occupy new media, new environments they can live in. Maybe they even do some niche construction to modify wherever they end up, to make it easier for them to do their thing. We have weird biological examples. For example, planaria: when you train them on a particular task, you can cut off their heads, and the tail will sit there doing nothing for eight or nine days. Then they regrow a new brain, and wherever that information is (we don't know where), it gets imprinted onto the new brain. So it moves, presumably from the body or from somewhere even weirder, onto this new brain that must be imprinted from scratch. So it can move in and out of body tissues. We can also cut a little piece of a two-headed flatworm (this is non-genetic) and stick it into a one-headed host, and something like 17% of the time that tiny piece convinces the whole host that it's going to be two-headed from then on. We have other examples where we can put in bioelectric memories; sometimes they take over and sometimes they get wiped out. It's a very important biomedical direction to figure out what patterns are convincing, because we would like to get the buy-in of the cells and tissues when we make interventions. We don't want to fight the tissue; we want it to accept the set point. It seems like in these cases you have a really good model of this.

[41:59] Janet Wiles: I think it's interesting to think about what gets buy-in and what doesn't, or, probabilistically, why it sometimes gets buy-in and sometimes doesn't. How do you turn the sweet spot, the 17%, into 50 or 60 or 70%? That's a really interesting question. For some of the heat maps we created, we were modeling spiking neurons, and we had made a very simplistic model of a neuron and a network. Peter Stratton was doing this work and presented it at an early stage of a grant we were part of. One of the electrophysiologists stopped him 30 seconds into his talk and said: you haven't included the most important part, which is that after the spike there's a refractory period during which everything goes fallow for a bit, and if you don't include that, you haven't modeled it. He just wouldn't listen to the rest of the talk. So what do you do after that? You're working across groups; we didn't understand the electrophysiology, but he was the expert. We didn't think it would make any difference at all, but the first thing we did that night was add this whole component to the model. It turned out the sweet spot went from very, very tiny to a massive region of the space. When you are trying to get a computational element, something that can retain its own state for a long period of time, there need to be certain delays that stop the system from jumping immediately into chaos or immediately into one of the attractors. You don't want a system where the attractors are perfectly defined and the ball rolls down into one of them immediately; you would like it to be able to wind its way through several computational steps before it makes a decision. Building these sorts of delays into the system became something we looked for in our networks. In trying to think about medicine: what are the underlying attractor dynamics? What are the Lyapunov exponents? How quickly does the system diverge to an attractor? How large are those attractors? There's some work Kai Willadsen did on how you map the structure of those attractors and actually visualize them in high-dimensional spaces. Are they really fragmented? If you look at something like Saccharomyces cerevisiae, a single cell that has to deal with the world in terms of temperature and all sorts of things, it has quite a fragmented structure, with a lot of complexity and a lot of switches between basins of attraction. Whereas if you look at something like blood cells, or cells in a multi-celled organism where you can be pretty sure the temperature is going to be very stable, the attractors can be much larger, dominated by one main attractor with just a few switches. You can even have more genes in your gene network, and the whole system will still be much more stable. Thinking about this in terms of the underlying landscape: can you map that landscape, and where is the system within it? Is it deep in the middle of a very stable attractor? Is it right on the edge between attractors? Are you trying to get it to jump from a stable place to an unstable one, or from an unstable one to a stable one? What are you trying to do within that attractor space?
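
A toy illustration of the refractory-period effect described here; it is not Peter Stratton's model, and the wiring, threshold, and update rule are all assumed. A random excitatory network of binary threshold units collapses straight into a saturated attractor without a post-spike fallow period, but sustains intermediate activity with one:

```python
import random

random.seed(4)
N, K, THETA = 100, 10, 2

# Random excitatory wiring: each unit listens to K random others.
inputs = [random.sample(range(N), K) for _ in range(N)]

def run(refractory, steps=50):
    fired = set(random.sample(range(N), 10))  # initial spikes
    cooldown = [0] * N
    history = []
    for _ in range(steps):
        nxt = set()
        for u in range(N):
            if cooldown[u] > 0:               # post-spike fallow period
                cooldown[u] -= 1
                continue
            if sum(1 for v in inputs[u] if v in fired) >= THETA:
                nxt.add(u)
                cooldown[u] = refractory
        fired = nxt
        history.append(len(fired))
    return history

print("no refractory period:  ", run(0)[-5:])  # saturates: every unit fires
print("with refractory period:", run(1)[-5:])  # persists at mid-level activity
```

The delay is what keeps the toy from falling immediately into its attractor, which is the qualitative point about the sweet spot.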

[46:04] Michael Levin: Two things come to mind. One is this very strange case where we make planaria that are headless; they have no head. They sit there, and typically the way you do these experiments is you wait until regeneration completes. You can tell when it's done, and then whatever the phenotype is, you call it, and you move on; that's the end of that. For whatever reason, we held on to these animals for months. What we found is that somewhere between four and six months in, they suddenly regrow a head. In other words, it was fine at the beginning: the set point was okay as a no-head; regeneration completed, reached a stop point, stable. But something was still ticking. I can't think of a biological timekeeper that works on that weird a timescale; it's just too long for anything normal. I mean, cicadas do 13 years, but still: what process takes four months? You need some kind of clock in there. Something is happening. One interesting recent idea is that what it's doing is searching a difficult space for a solution, and that's what takes four months. It isn't that there's some clock counting down; it's that it's trying to find its way out of a complicated space. Usually we're pretty amazed at how fast they solve all these things, but maybe in that case, either because the system itself has been dumbed down to some extent, or because it's been chased into some corner of the space from which it's really hard to extricate yourself, it's something like that. The other thing (I don't have any evidence for this at all, but it's something fun I was thinking) is that maybe one of the reasons the two-head pattern can be so compelling to a one-headed worm is specifically that it's different, novel, unique. It's like: we've been a one-headed worm for 400 million years, and here's this new way of doing things. Most of us are not going to do it, but 14% of us are like, "you know what, this looks interesting." There's some explore-exploit thing going on, where some small percentage looks at this and says, "This is a crazy way to be a worm. Let's try that."

[48:34] Janet Wiles: I love it. Great idea.

[48:35] Michael Levin: I don't have any data for that, but it's a thought.

[48:41] Janet Wiles: A lot of these systems, when you add a stochastic element into the network, generate not a random set of cells at the end of a lineage tree, but a structured lineage that can have odd components to it. With this whole quasi-systematic "diamond distribution," I had wondered whether the different head types you get in the planaria, and a few odd forms that turn up occasionally, are points in that diamond distribution, and whether you've actually mapped the space of possible head types.

[49:30] Michael Levin: That's very important. One of the things we're doing now (some of the technology really needs to be improved, but with new advances in bioelectrical imaging) is trying to get a much richer data set on planarian bioelectrics. Specifically because we know the pattern that determines the number of heads, but we don't know the one for the shape of the head. Now we finally have a way to get the resolution needed and try to map out the space of possible heads, and of things that are much stranger. We've made things that don't look anything like a planarian. They're not even flat; how you keep flat is a whole other thing. Another thing to talk about is the lineage question. Let me describe the following idea and see if it makes sense to you. I think there's a spectrum, and the spectrum is represented by C. elegans on one end and planaria on the other. What the spectrum fundamentally is, is how you deal with an unreliable medium. Remember something interesting about planaria: unlike the rest of us, who don't pass on our somatic mutations because the soma is disposable and you regenerate from the egg, the planaria we work with don't do that. They tear themselves in half and regenerate, which means they inherit every mutation that doesn't kill the neoblast carrying it, and that neoblast is going to be part of what produces the next set of tissues. So they are incredibly messy. They can be mixoploid; they can have different numbers of chromosomes. If you know from the start that your material is unreliable (and we've done simulations of this; we have a couple of papers looking at what happens when the material is competent, meaning the genome sets the hardware but the material has certain homeostatic properties and some problem-solving ability of its own, and also unreliable, because you never quite know what it's going to do), what we find is that evolution then exerts most of its effort on cranking on the algorithm. The pressure comes off the hardware. Steve Frank talks about how, when we started having RAID arrays, the quality of the individual disks went down because the pressure was off. Something like this happens: the pressure goes off, and you end up with an animal that is the most regenerative and cancer-resistant, that doesn't age (the thing is immortal), and all of that with a really dirty genome. It's the opposite of everything you've ever learned in biology, where they tell you that you keep your genome clean. What happens in C. elegans is a hard-coded strategy: we're going to do the same thing every single time. We know what our lineages are. We're not going to regenerate. There's no plasticity. We're just going to do the same thing, and that works perfectly well. At the other end of the spectrum, the strategy is: we can't rely on knowing anything about our substrate, so we are going to be maximally improvisational. Even the developmental mode of planaria is often called chaotic, because unlike other creatures, you don't know what the lineage relationships are going to be; they make it up as they go along. They're very plastic. Amphibians and mammals, including humans, are somewhere in between: we have a little bit of plasticity, but certainly not as much as planaria. It's really interesting to think about the interactions between that autonomy and this approach where you improvise. You don't go for fidelity and error correction and separation of layers and all the stuff we love to do in our computer architectures; you assume your memories are unreliable, both your genetic memories and your behavioral memories, and all you're going to do is tell the best story you can at any given moment. Then it seems reasonable that you can get Xenobots and Anthrobots and all this crazy stuff. I suspect that when you make a tadpole with an eye on its tail instead of in the head, it can see out of the box: no adaptation, no selection, it works fine. That's amazing, except that if you accept the premise, it's because the eye never assumed it was going to be in the head. It works fine when it's in the head, but I don't think it ever assumed that at all. It tries to reconstruct some viable story from scratch every time.
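
A toy in the spirit of the "competent material" simulations mentioned above; the repair rule, fitness function, and GA settings are assumptions, not the published model. Selection only ever sees the phenotype after the material's own homeostatic repair, so with enough competency the phenotype stays perfect while mutations accumulate in the genome:

```python
import random

random.seed(5)
L, POP, GENS, MUT = 20, 50, 100, 0.02

def fitness(bits):
    return sum(bits)  # toy target: the all-ones pattern

def develop(genome, competency):
    """Competent material: during development, cells can correct up to
    `competency` mis-set bits on their own (homeostatic repair)."""
    body = list(genome)
    repairs = 0
    for i, b in enumerate(body):
        if b == 0 and repairs < competency:
            body[i] = 1
            repairs += 1
    return body

def evolve(competency):
    pop = [[random.randrange(2) for _ in range(L)] for _ in range(POP)]
    for _ in range(GENS):
        # selection sees only the post-repair phenotype
        pop.sort(key=lambda g: fitness(develop(g, competency)), reverse=True)
        survivors = pop[:POP // 2]
        pop = [[b ^ (random.random() < MUT) for b in random.choice(survivors)]
               for _ in range(POP)]
    best = max(pop, key=lambda g: fitness(develop(g, competency)))
    return fitness(best), fitness(develop(best, competency))

for c in (0, 5):
    g, p = evolve(c)
    print(f"competency={c}: genome fitness {g}/{L}, phenotype fitness {p}/{L}")
```

With competency 0 the genome is driven close to the all-ones target; with competency 5 the phenotype typically still scores perfectly while several defects persist in the genome, the RAID-array effect in miniature.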

[54:15] Janet Wiles: Thinking about some of Nic's simulations of C. elegans and the difference: how would you repeat his simulations with planaria? One question it raises: we had a simple morphogen gradient in the C. elegans model Nic was doing. The system itself (the internal gene regulatory system, the expression network, the bioelectrics of it) can, because it can afford to be stable, be determined by its developmental history. It pays a little bit of attention to the morphogen, but the morphogen gradient doesn't push it around very much: the morphogen gradient is a weak operator in that system, and the curried function is very strong. My question is: in planaria, do they create reliable morphogen gradients, and are the morphogen gradients themselves actively sorting themselves out? Because if you have autonomy in the gradients and the gradients are active elements, then you don't need to pay nearly as much attention to your operators, the operators need a different internal structure, and you can put the reliability into the world, because you can't trust your own internal genes.
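
A minimal sketch of the weak-operator versus strong-curried-function contrast in this question, with all dynamics assumed: a small recurrent "gene network" receives a fixed morphogen input; when the input is weak, the final state depends on the network's own history, and when it is strong, the input pins the state regardless of history:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10
W = rng.normal(0, 1, (n, n)) * 2 / np.sqrt(n)  # internal gene interactions
m = rng.normal(0, 1, n)                        # a fixed morphogen input

def settle(eps, steps=50):
    """Run the recurrent network from a random initial history."""
    s = rng.normal(0, 1, n)
    for _ in range(steps):
        s = np.tanh(W @ s + eps * m)
    return s

# Weak morphogen (small eps): the operator barely pushes the system, so
# two different histories typically end in different states. Strong
# morphogen (large eps): the operator pins the state regardless of history.
for eps in (0.05, 5.0):
    a, b = settle(eps), settle(eps)
    print(f"eps={eps}: distance between two runs = {np.linalg.norm(a - b):.3f}")
```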

[55:59] Michael Levin: They certainly do have morphogen gradients, and people have modeled some self-organizing and stabilizing properties of those gradients. What we've seen is that the gradients are downstream of the bioelectrics, because if you change the bioelectrics, the gradients snap into line. Also, if you manipulate things at the level of the gradients, you can control head versus tail and you can make extra heads, but the extra heads aren't scaled appropriately. In other words, you've acted somewhere down the hierarchy where you get the tissue identity but not the scaling circuits, whereas if you do it bioelectrically, you get both. Another thing is looking at these gene expression cascades: xenobots have a transcriptome that differs by several hundred genes from what those cells would be doing in the embryo. Anthrobots have over 9,000 different transcripts, so half the genome, and a very specific pattern: they turn on a few embryonic things, but no Yamanaka factors, for example, no reprogramming kinds of things. It's amazingly interesting how they choose which things are going to be expressed. People say environment, but the environment is extremely simple: they're in a bowl of cell culture medium, just like they would have been in the body, except that they're not next to other cells. The difference is that they're not being influenced by other cells to be a trachea.

[57:48] Janet Wiles: If you think about the bioelectrics from a computational perspective, they're very fluid: they can be multi-scale, they can integrate across an entire organism, and they don't have to reside within a given cell. You can communicate the voltage to another cell, and that cell can then take over a position in a gradient, for example. They could do something that would be harder to do with genes: they can switch the state of a cell very rapidly. Two cells could swap their states bioelectrically in a way that would be hard to do genetically.

[58:34] Michael Levin: We see patterns move across tissues. We see them move across embryos. In the latest work, we've looked at groups of embryos sitting in the same dish, and they will move across. They don't even obey the boundaries of the actual embryo because the embryos are talking to each other. A group of embryos resists teratogens: a big group better than a small group, better than a singleton.

[59:04] Janet Wiles: That's quite remarkable, a fascinating thing. I wonder, thinking about some of the Lingodroid studies, whether that would also work across species. Could your patterns move? Think about the gut microbiome. Does it work across cooperative species?

[59:30] Michael Levin: We're doing that now. We had a paper showing that various bacteria that live on the planaria have a say in the shape of the planarian. If you muck with the percentages of species, you can get planaria with weird visual systems. The bacteria make a difference. We now have a project where we have Xenobots talking to bacteria, and we're expanding this outward to see what are the weirdest things that we can get to talk to each other.

[1:00:04] Janet Wiles: I'm not sure how much time you have. I did want to ask, how do you communicate with the liver? Just communicating across species.

[1:00:18] Michael Levin: This is something we're spending considerable effort on, this idea that we want to be able to talk to the body's organs. There are two ways you could imagine doing it. One is to try to crack the code: record bioelectrically, record everything you can under different conditions, and see if you can work it out with some sort of machine learning. The other approach, which we're taking, is to communicate with the molecular pathways inside the liver, and that is to use AI. I'll show you at some point some of what we've done: trying to have a system that looks at a real biological molecular network and tries to put human-understandable names on the nodes and the states. The goal is to have this done in human language, so that a normal person can, instead of "Hey Siri," say, "Hey liver, why do I not feel good today?" and it'll tell you. So that's the idea: various kinds of AIs that try to make what it's doing understandable. That sounds very artificial, but there's a model of the brain that's like that, right? One model is that you're just a big network that is good at doing things, plus a language model sitting on top that tries to confabulate explanations for what you just did and why. Whether that's true or not, I don't know, but I think we could construct something like that.

[1:02:27] Janet Wiles: One reason I'm really interested in how you're doing it, and in the processes, is the fungi project I mentioned when we last met, where we're trying to communicate with mycelium. We would love to hook an AI up to the mycelium. We're currently at the stage of doing a lot of recordings and trying to decode what the signals mean in different cases. But I think it's really interesting to try to learn from how you get from "good morning, liver" to "morning, mycelium."

[1:02:56] Michael Levin: I think that'd be great. Have you talked to Olaf Witkowski at all?

[1:03:02] Janet Wiles: No.

[1:03:03] Michael Levin: I'll set up a conversation with you, me, and him, and we can each bring our people and all talk about it. He's also talking to some yeasts or bacteria, and he has a different approach from what we're doing. I'll bring Yanbo, the guy in my lab who's doing it, and we can all compare notes. That'll be great fun.

[1:03:28] Janet Wiles: That sounds great. Yeah.

[1:03:32] Michael Levin: Thank you so much. This was incredibly interesting. A lot of amazing work.

[1:03:39] Janet Wiles: Thank you. It's a great pleasure to talk to you. Your work has really made me rethink a lot of the fundamentals.

