
Novel Bodies, Unconventional Minds: Diverse Intelligence and the Study of Consciousness



Show Notes

This is a ~55-minute talk titled "Novel Bodies, Unconventional Minds: a diverse intelligence perspective on a continuum of cognition (and consciousness)" by Michael Levin, given at the CIFAR Neuroscience of Consciousness Winter School (https://cifar.ca/next-generation/cifar-neuroscience-of-consciousness-winter-school/), plus about 20 minutes of Q&A. My lab doesn't normally work on consciousness per se, but we do a lot of things that are relevant to questions in the philosophy of mind, so in this talk I ...

CHAPTERS:

(00:00) Introduction and Talk Outline
(03:00) Challenging Conventional Categories
(06:00) Unusual Biology and Cell Journey
(12:00) Collective Intelligence and Problem Spaces
(19:00) Morphogenesis Collective Intelligence
(24:00) Bioelectrical Networks and Control
(30:00) Planaria Memory and Regeneration
(34:00) Scale of Self, Cancer
(38:00) Patterns from Platonic Space
(41:00) Novel Life Forms: Xenobots
(48:00) Anthrobots and Beyond

Transcript

Thank you for having us out here. The talk is going to be me first and then Wes Clawson towards the end. You might know that my lab doesn't really work on consciousness per se, but we work on a lot of things that are related to it. Usually I don't mention it in my talks, but for this audience I'm going to say some very speculative things that do bear on consciousness. So, let's see what we can do.

If you're interested in any of the primary data, the data sets, the software, the papers, all of that stuff is here. This is kind of my own personal take on what I think some of these things mean. So right out of the gate, I'm going to say, first of all, what I do not claim. I do not claim to have a new theory of consciousness, nor do I have any data that specifically supports one theory over another.

What I am going to claim today, among a few other even more speculative things, is that if we use cellular mechanisms and problem-solving behavior as evidence of consciousness in addressing the problem of other minds, then for the exact same reasons that we tend to associate consciousness with complex brains, we need to take seriously the possibility of it occurring in many different body structures and some even more unusual types of architectures.

I feel strongly that issues around AI and many ethical problems are not going to be resolved if we keep a focus on humans, on 3D space as a definition of embodiment, and on the brain as privileged. Towards the end, I'm going to kind of get to a very out-there idea, which is that I don't actually think we make minds. I think we make pointers into a sort of platonic space. For the same reason that biochemistry and below that quantum foam and so on doesn't tell the story of the human mind, I think that algorithms and the materials of which machines are made don't tell the story of machines either. We'll get to that at the very end.

So I want to do four things. I want to peek under the hood of some biology that you may or may not have seen before to kind of expand our understanding of the one natural set of minds that we have here on Earth. I want to show you a model system, which is a collective intelligence of cells behaving in anatomical space. I think it's as close as we get to a really unconventional intelligence that we can practice some of our tools on, communicating with it and so on. I will then show you some new model systems, so these are novel beings that are partially outside of the evolutionary stream. And then towards the end, I'm going to say some really strange things about the very minimal end of the spectrum. I think that we need to use these things as fodder for some really hard questions and shake up some conventional assumptions.

Okay, so first let's look at some biology. This is kind of a traditional view. This well-known depiction of Adam naming the animals in the Garden of Eden has something that I think is profoundly wrong and then one other thing that I think is very deep and correct. The thing that I think that it gets wrong is that it gives you the idea that there are these natural kinds. There are very distinct animals here, we know what they are. Adam is also distinct, he is of course different from the other animals. I think this is fundamentally problematic, and I'll explain why.

But I think one interesting thing it gets right is that in this old story, it was on Adam to name the animals. God couldn't do it, the angels couldn't do it. It was Adam whose job was to name the animals. In some of these ancient traditions, naming something means that you've discovered its true inner nature, and I think that part is very deep. I think we're going to have to discover the inner nature of a whole number of novel beings with whom we are going to have to share our world.

So what's wrong with this picture of natural kinds? Well, the first thing is that if we take evolutionary theory and developmental biology seriously, then the continuity thesis becomes the null hypothesis. It is not that somebody has to argue that the human is continuous with other beings and that there's a whole graded set of steps as far as the cognition, intelligence, and so on. Actually, that's just the biological fact, and whatever we think of as properties of a modern adult human, we have to then start asking where and when and how much did these things show up?

So the idea is that if you want the sharp categories people use all the time, such as "is it conscious or isn't it?" and "does it have this or doesn't it?", that needs really strong evidence. We need really strong evidence for sharp emergent phase transitions or anything like them. The baseline, I'm going to argue because of the facts of biology, is a continuity thesis.

Furthermore, there is actually a whole other axis here: because biology is incredibly interoperable, and I'll show you why in terms of its plasticity, you can start to make slow and gradual changes, both in the technological space and in the biological space. Again, the idea is that the human is not some magical distinct category; you can start to ask questions about all these other different kinds of things. And because of this interoperability of life at every level, I think the old categories of life versus machine just don't do us any good anymore. We have to have much more nuanced categories, because none of these things can be sharply delineated.

So my framework is to attempt to recognize, create, and ethically relate to truly diverse kinds of agents. This means, of course, the familiar creatures such as primates, and birds, and maybe an octopus. An interesting question about the octopus, of course, is what it is like to be a creature that has some autonomous parts, such as its tentacles. But we should note that we are all octopuses in an important sense. We are all chock-full of organs that are taking autonomous action all the time, only partially under our control, in some cases not at all under our conscious control, in various spaces that are hard for us to see. So all of us are in this position; I don't think the octopus is actually unique in that sense.

But not only those kinds of creatures, but also colonies, and swarms, and synthetic new life forms, and AIs, whether purely software or robotic, and maybe someday exobiological agents. Of course, I'm not the first person to try for something like this. Back in 1943, Rosenblueth, Wiener, and Bigelow tried to map out some of the great transitions from passive matter all the way up to human-level metacognition and so on. So I'm trying to develop this kind of framework, and the rules are simple: I want it to move experimental work forward. In other words, I don't want something that's just philosophy. It needs to lead not only to new experiments, but to new discoveries and new capabilities. For our lab, that's mostly biomedicine, but also a little bit of robotics and things like that. And I would like it to enable better ethical frameworks, where we can actually try to understand how we're going to relate to some of these unusual creatures.

So the first thing I'm going to take you through is some unusual biology. As we think about the very simple thought experiments of cognitive science and introductory philosophy of mind, I want us to keep some of these unusual things in mind.

First of all, these are planaria. These are flatworms. They have a true brain and central nervous system, similar to those of our direct ancestor. You can cut them into pieces, and every piece regenerates a new worm. That's interesting for many reasons. But the other interesting thing is that they're smart. You can train them. It's pretty much the only system where you can study regeneration and learning in the same animal.

Back in the '60s, a researcher named McConnell did this work, and we later replicated some of it with modern tools; he was absolutely right despite all the flak he got at the time. If you train these worms on this test, which is place conditioning, they learn to collect liver treats in this little bumpy kind of area. You can cut off their head. The tail sits there having no behavior; you need the brain in order to have behavior. But then it grows back a new head, and when it does, you can test these animals and find evidence of recall of their training.

So this is telling you, first of all, that the memory isn't entirely in the brain, and, interestingly, that it can be imprinted onto the new brain as the new brain develops. We are now looking at information moving throughout the body. There are all sorts of interesting philosophical questions here: what's it like? Who's the original worm? The malfunctioning transporter experiment, and all of that. But now you've got this amazing property of behavioral information being able to move through a body and imprint onto a new brain. We don't know how this is going to play out in human patients receiving new stem cell therapies for degenerative brain disease, but I have a feeling that memories are going to move into new real estate. We'll find out.

Now there's an even more interesting example, I think, which is the caterpillar-to-butterfly transition. What happens here is that this creature basically dissolves most of its brain and undergoes a radical reconstruction and refactoring of its body and brain. But again, you can train these caterpillars and then show that, after all that happens, the butterfly or moth remembers the original information.

Now one question, of course, is where the information is and how it survives the refactoring of the brain. But even more interesting is that the information these caterpillars have is of no direct use to the butterfly. The butterfly doesn't move the way the crawling caterpillar does; it flies, with a completely different set of muscles. And the food the caterpillar learned to find in association with certain color stimuli is of no use to the butterfly, because the butterfly doesn't like leaves. It likes nectar. So what actually has to happen here is not mere preservation of the information; that's not sufficient. The information has to be generalized and remapped into a behavioral repertoire that makes sense for a drastically different body architecture. It's not just about holding onto the information.

And then again, if you're interested in philosophy of mind, what might it be like to be a creature that has this incredible reconstruction and wakes up in a new higher dimensional world than it went into?

Okay. So that's some interesting biology, but perhaps the most fundamental piece is this: all of us start life as a single cell. Whatever is true of the adult endpoint has had to scale up through a process of transformation from that single cell. Developmental biology offers no support for the idea that there's some special bright lightning flash along the way. Everything is slow and gradual, and it takes a long time. We all made this journey from a quiescent blob, presumably well handled by the science of biochemistry and physics, to the land of psychiatry and psychoanalysis. And this happens slowly and gradually.

Even that's not the end of the story, because the cells that make up this amazing creature can disconnect from the collective, give up on their large-scale goals, and become cancer. And as I'll show you in a minute, there is a weird kind of life after death possible via the Anthrobot platform. Already this is a little bit disturbing, because it's pretty clear that we were all single cells, and so this journey from physics to mind is something we all had to take. Again, I would argue that this emphasizes models of scaling and transformation, not binary questions of whether something is cognitive or not, because you're not going to find any bright lines here.

But at least some people think: okay, we have a nice centralized brain, so we must at least be a centralized intelligence. Descartes, of course, really liked the pineal gland because there's only one of them in the brain, but if he had had access to good microscopy, he would've seen that actually there isn't one of anything. Inside the pineal gland is all of this stuff, and inside each of these cells is all of this stuff. All of it is parts. All of it is collective intelligence. I claim that all intelligence really is collective intelligence, and this is the sort of thing we're made of.

This is a free-living organism called Lacrymaria, and it shows you what individual cells can do. This is a single cell. There's no brain, there's no nervous system, and it handles all of its local needs. This video is in real time; soft-body roboticists drool when they see this, because we don't have anything with this degree of plasticity. But it's really important that any claims about what machines can do versus what organisms can do, and about what real life has in terms of valence and preferences and so on, hold up against a system like this. This is pretty close to a molecular machine, if there is such a thing, and we have to know what we're going to say about it.

In fact, go even below this level: what is it made of? It's made of molecular networks, and we now know that even these molecular networks, never mind the cell, the nucleus, and all the other stuff that creature had, just a small set of molecules turning each other on and off, a chemical cycle, a gene regulatory network, are already capable of six different kinds of learning, including Pavlovian conditioning. You can take a look at the data. The idea is that this just falls out of the math. There is no biology, no complex extra mechanism that you need. The kinds of materials that individual cells are made of, at the very bottom, are already capable of memory and learning. We're using this in the lab to try to train cells, doing things like drug conditioning, because the molecular networks are already competent at this task.
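The claim that associative learning "falls out of the math" of small chemical networks can be illustrated with a toy simulation. This is only a sketch with made-up species and rate constants (my own illustration, not the published network models the talk refers to): a two-species system in which a bistable "memory" node is flipped by the coincidence of a conditioned and unconditioned stimulus, after which the conditioned stimulus alone drives the response.

```python
# Toy model (hypothetical parameters) of Pavlovian conditioning in a tiny
# chemical network. M is a "memory" species: switched on by the coincidence
# of CS and US, then held on by its own positive feedback (a bistable motif).
# R is the "response" species: driven directly by US, or by CS gated via M.

def step(state, cs, us, dt=0.1):
    m, r = state
    # M: CS*US coincidence detection + self-sustaining positive feedback - decay
    dm = 5.0 * cs * us + 3.0 * (m**2 / (0.25 + m**2)) - 1.0 * m
    # R: driven by US directly, or by CS only when the memory M is switched on
    dr = 4.0 * us + 4.0 * cs * m - 2.0 * r
    return (m + dt * dm, r + dt * dr)

def run(schedule, state=(0.0, 0.0)):
    for cs, us in schedule:
        state = step(state, cs, us)
    return state

# Before pairing: CS alone evokes no response (M never switches on)
_, r_naive = run([(1, 0)] * 200)

# Pairing CS with US flips M into its high state; afterwards CS alone
# drives a strong response -- the hallmark of associative conditioning
trained = run([(1, 1)] * 200)
_, r_conditioned = run([(1, 0)] * 200, trained)

print(f"CS alone, naive:       R = {r_naive:.2f}")
print(f"CS alone, conditioned: R = {r_conditioned:.2f}")
```

Nothing here is specific to biology: any substrate with coincidence detection and a self-sustaining switch, chemical or otherwise, exhibits the same behavior.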

So what I think is really important then is to remember that we have to go beyond traditional embodiment. Humans are okay, not even great but okay, at noticing intelligence of medium-sized objects moving at medium speeds in three-dimensional space. But biology uses these same tricks in all sorts of other spaces. So there's the space of possible gene expressions, there's the space of anatomical states, which is what we'll talk about most, we have physiological state space, lots of other spaces, and life is doing problem solving in these perception action loops and everything else in all of these spaces all the time. It's just hard for us to notice.

I think that if we had direct primary perception of our blood chemistry, let's say we had some kind of taste receptor or something looking inside at our body physiology, we would have no trouble recognizing our liver and our kidneys as some sort of a competent symbiont that navigates physiological space to keep you alive on a daily basis. But our sense organs just aren't mostly built for that, so we need a lot of help to visualize what intelligence looks like in these other spaces.

So in my framework, I tend to use a spectrum like this, an axis of persuadability, because it puts the emphasis on the interaction protocol. It's a very engineering approach: I think cognitive claims are primarily interaction-protocol claims about which bag of tools you're going to use with a given system. So you've got physical hardware rewiring, and you've got control theory, cybernetics, behavioral science, and many other things.

The idea then, and I've provided lots of examples of how you do this in these other papers, the idea is that where something fits along this continuum has to be an empirical question. You can't do this from an armchair and just decide that only certain kinds of creatures do this or that. You have to do experiments knowing that we have to be more creative in what space and what kind of goals we're looking at. So we have to hypothesize some kind of problem space, some sort of goals that the system might be trying to reach, and then some degree of competency, and then we use tools. We use the established tools of perturbational behavioral science to try training it, try communicating with it, and then we look at the empirical outcomes.

What does that let you do? The idea behind using agency talk with systems that are not normally thought to be fodder for it is simply the empirical question: does porting the tools from other disciplines, for example cognitive and behavioral science, give you a better interaction with that system? That's how you know whether you've got it right, or how right you've got it.

So using those strategies, the next thing I want to look at is an example of the collective intelligence of morphogenesis. What does it mean to say that this system has collective intelligence and how does that help us at all beyond the standard molecular biology paradigm which typically assumes that that's just not the right question to ask?

I find it very interesting that Turing, who was of course really interested in intelligence in diverse embodiments and different machine minds, also wrote a paper on the chemical basis of morphogenesis, asking how embryos organize themselves. I think he saw a profound symmetry between the way minds come to be and the way bodies come to be, and the processes through which they self-organize, and I think that insight was very, very deep.

So I want to show you some examples of what I mean by intelligence in the case of cells and morphogenesis. I'm not saying this is the only definition or an all-encompassing one, but it's the one I like, because it helps us do experiments. Intelligence, as William James put it, is the ability to reach the same goal by different means. How much ingenuity does a system have, in a given problem space, to reach its goals despite various things going wrong?

Well, the first thing that we know is that if you cut early embryos into pieces, you don't get half embryos, you get perfectly normal monozygotic twins, triplets and so on. So that's interesting, and the reason I mention it is not because there's an increase in complexity. That's not intelligence. It's not because it's reliable. Even that's not intelligence. It's about problem-solving and the ability to handle novelty. That's what's impressive about this example, that you can start off in many different starting positions. We could do hours on various examples. I just wanted to show you one example that I think is instructive.

This is a cross-section through a kidney tubule in a newt. About eight to ten cells normally work together to form this little tube, with a lumen in the middle. Now, one thing you can do with these embryos is prevent the cells from dividing early on while the DNA keeps replicating, so you end up with polyploid newts that have extra copies of the chromosome complement. When you do this, the first thing you find out is that it works: you get a living newt. That's kind of wild. It doesn't actually matter how many copies of your genetic material you have; you can still get a newt. Fine.

Second thing you find out is that the cells scale proportionally to the amount of DNA you have and they get bigger. Interesting. Then you find out that the newt is exactly the same size. How could that be? Well, that's because fewer cells are now building the exact same structure, but they're bigger, so fewer of them get to do this. Interesting. It scales.

And then, the most amazing thing: by the time you get to 5N or 6N newts, the cells get so gigantic that one single cell bends around itself to give you the same structure. What's interesting is that this is a completely different molecular mechanism. Normally it's cell-to-cell communication; here it's cytoskeletal bending within one cell. So what you have is a system that can use the different molecular affordances and mechanisms it has in the service of a large-scale goal in anatomical space. It's trying to traverse from that egg to that proper newt target morphology, and it will use different mechanisms when you do really strange things like make its cells much bigger.

So now think about what this means. This is... if you're a newt coming into this world, what can you rely on? Well, we already knew you can't really rely on the outside world. Things change all the time. But you can't even rely on your own parts. You don't know how many copies of your genetic material you're going to have. You don't know the size of your cells. You don't know how many cells you're going to have. You have to get your job done regardless in creative ways, given the problem that you have. So this is the sort of thing that we're interested in, not just the reliability of development which starts to look like it's a mechanical sort of feed-forward process. That isn't it at all. It's this ability to use the tools you have to reach your goal despite all kinds of weird things happening.

So now how could this work? We've been studying these kinds of systems for a long time, and we took our inspiration from the one more or less uncontroversial example of a system that can store a representation of a goal and then work to achieve it. I don't need to tell anybody here what the hardware and software are in the case of the nervous system, except to point out that the same kind of neural decoding strategy that neuroscientists use when mapping the cognitive content of a mind onto the electrophysiology they measure applies to a system that is actually incredibly ancient. Every cell in your body has ion channels. Most have electrical synapses with their neighbors. This idea of having electrical networks integrate information over space and time was invented around the time of bacterial biofilms; it certainly didn't wait for neurons and muscle to come on the scene. It is very old, and even bacterial biofilms coordinate their activity via these electrical networks.

So, the same kind of idea: could we decode this electrical activity and try to understand what problems it is solving, and in what space? If your brain is usually thinking about moving you through three-dimensional space, what are your somatic networks thinking about? What did your body's cellular electrical networks think about before there was a brain, given that they can't move the body through three-dimensional space? Well, it turns out that what they were thinking about is anatomy. They were thinking about shape. So what I'm going to argue is that most of the tricks we see happening in brains, with possibly a few exceptions, are really old things that were simply pivoted from other problem spaces into the familiar 3D space of behavior.

In order to actually do this, we have to develop some tools. The first thing we developed were ways to visualize the electrical activity in non-neural cell groups. We do this using voltage-sensitive fluorescent dyes, and so this is kind of like a scan of brain activity except this is an early frog embryo. There is no brain yet. You can watch all of the cells communicating to try to figure out who's going to be head, tail, how many eyes, all of those kind of things.

We do a lot of quantitative simulation, everything from the molecular biology of these ion channels through the tissue level, up to large-scale things like pattern completion during regeneration. And as it turns out, the tools and concepts from neuroscience port perfectly. Lots of the ways in which computational neuroscience studies decision-making, memory, visual illusions, and various disorders don't actually distinguish between neural and non-neural networks. We've ported most of these things; they're really useful and they work really well.

That leads me to conjecture that neuroscience really isn't about neurons at all. What it's about is scaling multilevel agency from very humble low-level mechanisms up through very high-level goals. There's actually an AI tool we created to mimic something I always used to have my students do: take a neuroscience paper, do a find-replace, and every time it says neuron, replace that with cell, and every time it says milliseconds, say hours, and you have yourself a developmental biology paper. It's kind of fun to play with.
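The find-replace exercise described above is trivial to mechanize. Here's a throwaway script in that spirit (illustrative only: the word list is my own, and there's no word-boundary handling or grammar fixing):

```python
# A toy version of the neuroscience -> developmental biology find-replace game.
import re

SWAPS = {
    "neurons": "cells",
    "neuron": "cell",
    "milliseconds": "hours",
    "millisecond": "hour",
}

def neuro_to_devbio(text: str) -> str:
    # Match longer keys first so "neurons" isn't clipped to "cell" + "s"
    pattern = re.compile("|".join(sorted(map(re.escape, SWAPS), key=len, reverse=True)))
    return pattern.sub(lambda m: SWAPS[m.group(0)], text)

sentence = "The neurons settle into a stable firing pattern within milliseconds."
print(neuro_to_devbio(sentence))
# -> "The cells settle into a stable firing pattern within hours."
```

The point of the joke, of course, is that the substitution so often yields a sensible paper: the concepts transfer even if the vocabulary doesn't.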

So the next thing we developed were the functional tools. How do we actually change the bioelectric content of these networks? We do it exactly the way neuroscience does: no magnets, waves, frequencies, or fields, none of that. We target the ion channels and the gap junctions that set the topology of these electrical networks. We can do this with optogenetics, we can do it with pharmacology, we can replace channels, all the same tools. This is how cells normally hack each other.

So now it's time for me to show you what happens when you do this. If we go in and we do a kind of non-neural decoding and a kind of inception of false memories into this electrical network, we try to communicate with it, what can we tell it to do? Well, here's an interesting prompt. We can take a certain bioelectrical state that actually occurs during normal face development that tells the cells where the eyes are supposed to go. We take that bioelectrical pattern and we can introduce it elsewhere in the body, and we do that by injecting RNA for specific ion channels. If we inject it here in a region that's going to be gut, what these cells end up doing is making a perfectly nice eye. These eyes have lens, retina, optic nerve, all that good stuff.

So from here, you learn a couple of things. First of all, these bioelectrical patterns, just like in the brain, are instructive for behavior, in this case morphogenetic behavior. We were able to prompt the system to build an eye. The second thing is that it's incredibly modular. We didn't have to tell it how to build an eye, any more than, when you train a dog or a horse, you have to tell it what to do with all its synaptic machinery. The system takes care of all that. You provide a high level of communication, and if you know what you're doing, you can convince the system to do very complicated things at the molecular level, and that's what happens.

Something else that's kind of interesting, and a reminder about humility in all this: the developmental biology textbook actually says that only the cells up here in the neurectoderm are competent to become eye. That's because traditionally people prompt them with Pax6, the so-called master eye gene, and if you do that, indeed, only the cells up here become eye. But the limited competency wasn't a problem with the cells; it was a problem with us, the scientists, because with a better prompt, in this case the bioelectric one, it turns out that pretty much any region in the animal can do it. So when your system looks like it's limited and unable to do specific things, the question may be: do we really understand how to prompt it? Do we know how to communicate with it?

It also does many other interesting things like scale itself to the task. So this is a lens sitting out in the tail of a tadpole. These blue cells are the ones that we injected, but there's not enough of them to make this organ, so what do they do? They recruit all their neighbors to help them finish the task. So it's a self-scaling kind of thing, like many other kinds of collective intelligences do.

I want to switch from this to a different model, planaria. Remember, you chop them into pieces, and each piece knows exactly how many heads it's supposed to have. It turns out that one way it remembers is by this bioelectrical pattern, which says: one head, one tail. This is very robust, but we can rewrite it. Using drugs that open and close ion channels, as guided by a computational model, we can say: no, you should have two heads. When you do this and then cut the animal, there you go: it builds a two-headed animal. This is not Photoshop or AI. These are real animals.

Now something very important and interesting here is that this bioelectrical map is not a map of the two-headed animal. It is a map of an animal that is perfectly normal, anatomically and molecular-biologically: the head markers are in the head, not in the tail. This memory is latent; it is not expressed until the animal is injured. In fact, the memory it holds disagrees with the current situation, because right now the animal has one head. So I think it's a very primitive kind of counterfactual. I think it's an example, in this unconventional system, of the kind of mental time travel we all enjoy: the ability to represent states that are not true right now, either from the past or from the future.

So a normal planarian body can hold at least two, probably many more, different representations of what it's going to do if it gets injured at a future time. One reason I keep calling this a memory is that it has all the properties of memory. In fact, if I take these two-headed worms and cut them into pieces, the fragments will continue to build two-headed animals in perpetuity. Remember, there's nothing wrong with the genetics here; we haven't touched the genome. The genome is unchanged. The question of how many heads you're supposed to have is in fact not really nailed down in the genome. Much like with the cognitive systems you're used to in brains, you can learn things that don't need to make it back into the DNA; they are stored stably, but we can rewrite them in either direction, like any good memory. Here they are, if you want to see what these animals look like. There's lots of interesting behavioral science we can do with animals that have multiple heads in the same body.

Interestingly enough, it's not just the number of heads that you can change, but even the type of head. We can ask this animal with a triangular head to build heads like those of other species, 100 to 150 million years of evolutionary distance away. No changes in the genome; just alter the bioelectrical pattern memory. You can get flat heads like a P. felina, round heads like an S. mediterranea. The shape of the brain changes and the distribution of stem cells changes, just as in those other species. So the hardware is perfectly capable of visiting these other attractors in the morphogenetic landscape that normally belong to other species, but you can only visit them if you change the content of the bioelectric memory.

The final example of this that I want to show you is what happens when you change the size of the self. If you think about our concept of the cognitive lightcone, which is basically the size of the biggest goals that a system actively pursues, individual cells have very small cognitive lightcones. All a cell cares about, in both time and space, is a very small region: maintaining its physiology, its metabolism, and so on. But what happens during both evolution and development is that cells join into networks, and then their cognitive lightcone becomes very large, so they're now cooperating toward a huge goal.

How do we know it's a goal? Because in this case, making a salamander limb, if you amputate anywhere along this axis, the cells will work very hard to rebuild it, and then they stop. It's a very clear homeostatic kind of process: they can tell when they've been deviated from their setpoint, they work hard to get back there, and then they stop. So individual cells work on very small goals, while the collective works on these grandiose construction projects: creating them, maintaining them, detecting deviations, and so on.
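The "work until the setpoint is reached, then stop" pattern is the signature of an error-driven homeostat, and it can be sketched in a few lines. The target length, gain, and tolerance below are invented numbers for illustration, not measurements:

```python
def regenerate(current_length, target_length=10.0, rate=0.5, tol=1e-3):
    """Grow toward a stored setpoint, then halt: an error-driven homeostat.

    The loop's 'effort' is proportional to the deviation from the setpoint,
    so cutting closer to the target means less work before stopping.
    """
    steps = 0
    while abs(target_length - current_length) > tol:
        error = target_length - current_length
        current_length += rate * error  # work proportional to deviation
        steps += 1
    return current_length, steps


# "Amputate" at different points along the axis; the loop always returns
# to the same setpoint and then stops on its own.
for start in (2.0, 5.0, 9.0):
    final, n = regenerate(start)
    print(f"start={start} -> final={final:.3f} after {n} steps")
```

The key property, mirrored from the salamander example, is that the stopping condition is a comparison against a stored target, not a fixed amount of growth.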

But that process has a failure mode, and that failure mode is cancer. When individual cells disconnect from this electrical network, they no longer have access to the enormous setpoints they were trying to reach. Everything reverts to their ancient unicellular concerns: metabolism, proliferation, and things like that. This is human glioblastoma. What's happening here is that these cells are not any more selfish than the others. A lot of the game theory of cancer focuses on cells being uncooperative and so on, but I don't think they're any more selfish. They just have smaller selves.

So what's happened here is a scale-up and then a scale-down of the dynamic border between self and world. The size of your goals determines the size of your self and the kind of cognitive capacity you're capable of reaching, but it can change. It's plastic; it can change during the lifetime of an individual. And that weird way of thinking about it leads directly to therapeutics. As with the bioelectric work I showed you earlier, we have a lot of regenerative medicine coming along these lines, aimed at regenerating organs and so on. I haven't shown you any of that, but here is a simple example of how this leads to therapeutic approaches in cancer.

When oncogenes are injected into these animals, you can see the cells bioelectrically decoupling from the rest of the network. What you can do is this: you don't kill them, you don't repair the DNA, you leave the hardware intact, but you force them to reconnect to the rest of the cells. You just inject an ion channel that keeps the voltage such that they stay connected to the rest of the cells. This is the same animal: here is the oncoprotein blazingly expressed, and there's no tumor. Because what drives the outcome is not the genetic hardware; what drives it are the physiological decisions and the scale of the self. These individual cells would like to crawl off and be amoebas, go where they like and eat what they want, but the collective is working on making nice skin, nice muscle, and so on. That's just an example of how we test some of these ideas and make sure there's some utility in these kinds of models.

Okay. The next thing I want to address is the question of where these patterns come from. What I showed you is the ability of groups of cells to form a collective intelligence that navigates anatomical space, and navigates it to reach specific patterns, specific anatomical structures. So now we want to ask: where do these come from? The obvious answer is, well, evolution, of course. Evolution shapes the anatomical morphospace; it rewards certain attractors and wipes out others. But what else is there?

In particular, we're interested in asking what happens when there's a new agent with a lifestyle that has never faced selection. We're interested in the plasticity of self-assembly. The first thing I want to talk about briefly is this question of where things come from, and what kind of answers we want to that question. We're used to saying, well, some of it comes from genetics, some of it comes from environment. I just want to remind us that there's a mathematical space that provides a really important third kind of input into this whole system.

This is called a Halley plot. It's a very simple way of graphing equations over the complex numbers. This is what you get when you plot something like this: a very small formula, about six characters. The compression is insane; hiding inside that formula is all of this, and these are just videos you can make by changing the formula slightly and rendering the frames. Where does this pattern come from? You're not going to find anything in the laws of physics, or in the history of life or of the universe, that tells you why this pattern is the way it is.
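The compression point can be made concrete with a root-finding fractal of the same family. The talk's exact formula isn't given, so `z**3 - 1` (six characters) is a stand-in: iterating Newton's method on it partitions the plane into infinitely intricate basins, none of which is written anywhere in the formula itself:

```python
import cmath


def newton_basin(z, max_iter=50, tol=1e-6):
    """Return which cube root of unity the starting point z converges to
    under Newton's method on z**3 - 1 (0, 1, or 2), or -1 if undecided."""
    roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    for _ in range(max_iter):
        if abs(z) < tol:
            return -1  # derivative vanishes here; the iteration is undefined
        z = z - (z**3 - 1) / (3 * z**2)  # one Newton step
        for k, root in enumerate(roots):
            if abs(z - root) < tol:
                return k
    return -1


# A coarse sample of the plane: three interleaved basins, all pinned down
# by the tiny formula z**3 - 1.
grid = [[newton_basin(complex(x, y) / 4) for x in range(-8, 9)]
        for y in range(-8, 9)]
print({v for row in grid for v in row})
```

Coloring each grid cell by its basin index (at a finer resolution) yields the familiar fractal boundary: an unbounded amount of structure pulled out of a six-character pointer.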

I think what evolution is doing is exploiting free lunches that come from a kind of platonic space of mathematics, computation, and some other things. What evolution actually makes are pointers into that space, which pull down patterns that are not directly found in the physical world. I'll take this weird idea one step further in a minute. But how do we test this? We could look for novel life forms with no history. No history is hard to arrange on Earth, but no selection for a new pattern is possible.

So here are some epithelial cells from the top of a frog embryo. We liberate them from the rest of the animal and put them in a Petri dish. What could they do? Well, a lot of things: they could die, they could crawl away from each other, they could spread into a 2D monolayer. Instead, they make this interesting thing, a bot (a xenobot, because Xenopus laevis is the name of the frog), and we think it is, among other things, a biorobotics platform. You can see what's happening here: it's using the little cilia, the little hairs on its surface, to swim. It coordinates them; it can go in circles; it can patrol back and forth. It has collective behaviors.

Here's one traversing a maze. It's going to go down here, take this corner without bumping into the opposite wall, and then here, for some spontaneous reason that no one knows, it turns around and goes back where it came from. It's fully self-motile; we're not pacing it, we're not activating it, it's doing its own thing, and it has various behaviors. If we study the calcium signaling here, it looks very interesting. Remember, there are no neurons here; this is just skin, just epithelial cells. But you can imagine deploying all sorts of interesting connectivity mathematics on it. We've already done some of that, various other tools, information metrics, so that will be forthcoming. You can ask: if these were neurons, what would we say about some of these patterns, both within and between bots?

There's something else I can show you about these bots. One thing we did was study the transcriptome of xenobots compared to the tissue they normally come from and to the embryo. Remember, these are made without any new synthetic biology circuits; there's no mutation, no new DNA added or changed. So what is their transcriptome like? It turns out that, of course, they're missing a lot of transcripts that embryos have, because they're missing a lot of endodermal and mesodermal structures and so on. But they also have hundreds and hundreds of newly upregulated genes. Just by removing the other cells and liberating these cells into their novel lifestyle, they turn on hundreds of genes.

Some of these are extremely interesting. One thing we found was a cluster of genes involved in hearing, genes that were very much upregulated over what happens in normal embryos. These xenobots are expressing a cluster of genes for hearing. We thought, "That's weird. What could that possibly mean?" We decided to test it. So here, we're tracking this little bot. What it normally does, this particular one, is spin in circles. Then we turn on a speaker underneath the dish to provide some sound, and you can see what happens: here's the track of its movement, and then the vibration goes off, and it's back to going in a circle. So by analyzing things these bots do differently, we start to gain insight into ways to interact with them, ways to provide signals and change their behaviors. Again, embryos do not do this. This is just a xenobot thing.

The other thing that they do is this fascinating thing called kinematic replication. So the xenobots can't reproduce in the normal froggy fashion. They don't have any of those organs. But if you provide them with a bunch of loose skin cells, then what you see is that they run around. They collect them into little balls. They polish the little balls. And because they're working with an agential material, these are not passive pellets, these are cells themselves, the little balls mature into the next generation of xenobots. And guess what they do. They run around and make the next generation of xenobots, and the next.

Now, in this system, to our knowledge, there is no strong heredity. In other words, the bots all look basically alike; they don't resemble their parents any more than they resemble other individuals. But it is a new kind of self-replication, and to our knowledge it doesn't exist anywhere else in the world. There is no other animal that reproduces this way. It looks a little like von Neumann's dream of a robot that builds copies of itself by finding parts in the environment.

So now we can ask: what did evolution learn during the process of evolving frogs? Well, it certainly learned how to do this, the standard developmental sequence, and here are some tadpoles. But apparently it also learned this, even though there have never been any xenobots. There has never been any selection to be a good xenobot. We're not yet making any claims about their level of intelligence, although we've done a bunch of experiments on memory and things like that, which we'll report soon. I'm not saying anything about what specific goals they have, and I'm not saying anything about their consciousness, because you can't tell any of that just from reading behavior. You have to do perturbative experiments, which we're doing, but they're not ready yet.

But what you have here is an interesting model system in which to ask where behavioral, not just morphological, patterns come from. If they weren't under specific evolutionary selection in this novel circumstance, where did they actually come from? One thing you might think is: okay, this is something very frog-specific. Embryos and amphibians are both plastic; maybe this is some kind of frog-specific thing. But I want to point out how general this is.

So I'll show you this and ask: what do you think it is? What sort of thing is this? You might think it's something we got out of the bottom of a pond somewhere. You could try to guess the genome. If you guessed something primitive, you would be wrong. This is 100% Homo sapiens. These are what we call anthrobots. They're made of adult human tracheal epithelial cells. There is nothing embryonic about this; they don't look like any stage of human embryonic development, but they do have these little cilia. Again, here is a little motile creature that does interesting things.

What does it do? Well, one thing it does is this: if you put it on a dish of human neurons with a big scratch through it, it will move down the scratch, and eventually it will settle down. When a bunch of them settle down together, they form this thing we call a superbot. If you lift it up four days later, you'll see that what the superbot was actually doing was knitting the two sides of the wound together.

Now, who would have thought that the tracheal epithelial cells sitting quietly in your airway have the ability to form this self-motile little creature with weird abilities such as healing neural wounds? This was the first thing we tried. It wasn't experiment 800 out of 1,000; it was the first thing, so you can imagine how many other things these bots might be doing that we have no idea about. They express about 9,000 genes differently than their tissue of origin, so their transcriptome is completely redone; the expression of about half the genome is altered. And they have four distinct behaviors from which you can build an ethogram, in terms of the transition probabilities between those behaviors.
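An "ethogram of transition probabilities" is just a Markov model over behavior bouts, and the analysis can be sketched in a few lines. The four behavior labels and all the matrix values below are invented for illustration; they are not measured anthrobot data:

```python
import random
from collections import Counter

# Four hypothetical behavior labels (the real categories may differ).
behaviors = ["circling", "straight", "curvilinear", "eclectic"]

# Invented transition probabilities between consecutive behavior bouts,
# row = current behavior, columns ordered as in `behaviors`.
P = {
    "circling":    [0.6, 0.2, 0.1, 0.1],
    "straight":    [0.2, 0.5, 0.2, 0.1],
    "curvilinear": [0.2, 0.2, 0.5, 0.1],
    "eclectic":    [0.25, 0.25, 0.25, 0.25],
}


def sample_ethogram(start, n, seed=0):
    """Sample a sequence of behavior bouts from the transition matrix."""
    rng = random.Random(seed)
    seq, state = [start], start
    for _ in range(n - 1):
        state = rng.choices(behaviors, weights=P[state])[0]
        seq.append(state)
    return seq


seq = sample_ethogram("circling", 2000)
# Re-estimate one transition probability from the observed sequence, the
# same way an ethogram is built from tracked behavior data.
pairs = Counter(zip(seq, seq[1:]))
est = pairs[("circling", "circling")] / sum(
    v for (a, _), v in pairs.items() if a == "circling")
print(f"estimated P(circling -> circling) = {est:.2f}")
```

In practice the direction is reversed: one observes the sequence of bouts in video tracking and estimates the matrix from the pair counts, exactly as in the last few lines.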

So now we see that there are the default kinds of form and function that we expect, but there are also some really interesting things you might call emergent, which we need to discover by interacting with and prompting these systems, and trying to guess what level of cognitive sophistication they have.

What I'm really interested in is this idea of a much wider continuum of beings. I don't think we should try to maintain sharp categories that lead us, when we're confronted by all sorts of novel beings, to ask things like: well, is it really human? Is it 51% human, or whatever? I think we have to start trying to understand the space of possible bodies and minds, because if we can't rely just on genetics, environment, and history, what do we have?

So my somewhat weird claim is this: in the same way that evolution exploits patterns of geometry, of computation, all the different kinds of things that mathematicians study, in the platonic space, that platonic space also contains structures that regulate different kinds of minds. I really think that what these xenobots and anthrobots are, among other things, are exploration vehicles for an enormous platonic space of form and cognition.

There are two ways you can think about this. The conventional way is that there are just some facts that hold: truths about network properties, numbers, computation, and so on. They're amazing facts, and when we find one, we write it down, and it's surprising and great. The good news is that this view is minimal, but the bad news is that I think it's a mysterian outlook. If we just want to stumble on surprising things that emerge, I think we're giving up on the best thing about science, which is the hypothesis that there's an order to the world that we can study.

I think option two is better: the assumption that there is an ordered, non-physical latent space of patterns, containing both very boring, low-agency patterns, like facts about triangles, and some much higher-agency patterns, and that this space can be studied systematically. That's the research agenda: figure out how to use anthrobots and xenobots and frogolotls and all the weird constructs we make as pointers into this space, to see what is actually there. That's what I think these synthetic morphology beings are.

Is there a precedent for this? Well, there is. Most mathematicians, I think, don't believe they're finding a grab bag of random facts. They're working on a map of mathematics; these things have a structure, and mathematicians think they're discovering it, not inventing it. So I think we can develop a model in which all the layers of biology we see are using all sorts of different things from this kind of space, and what's found there is not just facts about the body but actually different kinds of minds. That's my hypothesis.

Therefore, and I'm almost done: because life is so incredibly interoperable, because problem-solving at every level of organization allows it to form pretty much any combination of evolved material, engineered material, and software, what we're looking at is a space of possible agents. All of them take advantage of this incredible space of patterns that, as Whitehead said, ingress into the physical world. Everything Darwin was impressed by when he wrote "endless forms most beautiful" is a tiny little corner of this incredible space of bodies and minds.

Many of these already exist: hybrids and cyborgs, and you're going to hear from Wes a cool story about his hybrid. A lot of these things already exist, and I think in the coming decades there will be more and more of them. I think we need to start working on frameworks for an ethical "synthbiosys"; this is a word GPT came up with when I asked it for a simple word that captures a symbiosis with all of these novel creatures that are coming. We need to understand what it means to be in a beneficial relationship with beings that are nowhere on the tree of life with us, that are completely novel, that are pulling down very different patterns from the space of possible minds.

So I'm just going to give you a couple of quick things and then I'll stop. My main conclusion about many of these things is that we need a lot of humility about the idea that we know what we have once we've made it. That's because when we make things, we get out, in an important sense, more than we put in: by building these living or nonliving pointers into the space of patterns, we pull down things we did not know we were going to get. We've also, and I have to stop here, but I can answer questions about it, done studies of extremely simple systems, minimal things like sorting algorithms, bubble sort, which turn out, if you look at them the right way, to have interesting problem-solving behaviors that nobody had noticed before. They do these weird side quests, they show delayed gratification, and they do things that are literally not anywhere in the algorithm.
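One way to see what a "cell's-eye view" of sorting might mean is to treat each array position as an agent that applies the bubble-sort comparison locally. The "frozen cell" perturbation below is an invented illustration of that framing, not the exact experiment from the talk:

```python
def distributed_bubble_sort(values, frozen=None):
    """Each position repeatedly tries to swap with its right neighbor.

    Positions listed in `frozen` refuse to participate (a 'damaged cell'),
    yet the remaining agents still sort their reachable regions around it.
    """
    frozen = frozen or set()
    vals = list(values)
    changed = True
    while changed:
        changed = False
        for i in range(len(vals) - 1):
            if i in frozen or i + 1 in frozen:
                continue  # a damaged cell won't compare or swap
            if vals[i] > vals[i + 1]:
                vals[i], vals[i + 1] = vals[i + 1], vals[i]
                changed = True
    return vals


print(distributed_bubble_sort([5, 3, 1, 4, 2]))              # fully sorted
print(distributed_bubble_sort([5, 3, 1, 4, 2], frozen={2}))  # sorts around cell 2
```

With no frozen cells this is ordinary bubble sort; with cell 2 frozen, the regions on either side of it still reach their own sorted order, which is the kind of unplanned robustness the bottom-up view makes visible.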

So based on this and other work in minimal matter, I think it doesn't take cells, life, or large complexity to have emergent goal-directed competencies that are very hard to predict. Not just complexity, not just unpredictability, but emergent cognitive patterns we did not know about. So I think we have to be very careful: if we don't even know what bubble sort can do, we should be very careful about thinking we know what certain kinds of AIs can do, what linear algebra can do when used on large data. I think we really don't know.

For this reason, I really like this image. Magritte said this is not a pipe; it's a representation of a pipe. I think the same is true for computationalism: we have to be really careful. There are many limitations of things like Turing machine models, and people take those to be the limitations of "machines". But for the exact same reason that we don't think the story of biochemistry is a sufficient story of a conscious mind, we should not think that what machines do or can do is fully described by our models of what we think they are. The formal models have limitations, and the formal models are not the same thing as the actual thing.

I think that the new Garden of Eden is going to look something like this, and I think we really need to step it up in terms of dissolving some old categories that I think are doing us no practical good whatsoever, and really develop good stories of the scaling and propagation of cognitive behaviors into different problem spaces.

So I'll stop here and just say: here are some things I feel relatively confident about. I think very little in this field is binary; it's about scaling and transformation. I don't think it's about brains, and I don't think it's about embodiment in three-dimensional space. I think these kinds of properties, and probably consciousness, are all around us, and our formal models do not tell the whole story.

What needs to be worked on, and what we and others are working on, is: what aspects of an architecture manage the different types of ingressions we get? Does evolution have a monopoly on making minds? The only reasonable argument for that I've seen comes from Richard Watson. I used to think definitely not; I'm not so sure now, but this needs much more work. I think the research program is creating new tools for exploring the space of these minds.

What I have no idea about at this point is how well consciousness tracks intelligence. I'm not sure whether there really are any phase transitions, or whether it's completely smooth and all the phase transitions are in our minds. And I'm not sure how we gain first-person understanding of any minds, certainly not exotic minds.

So if anybody's interested in following this further, here are some papers where we go through all this in detail. I want to thank the people who did all the work. You'll hear from Wes momentarily. Here are all the post-docs and the students that did all the things I showed you today. I always thank our funders. Disclosures, here are three companies that have supported some of our work and the model systems get all of the credit. So thank you. I will stop here and let...

Thanks very much for that beautiful presentation of some really extraordinary work. I'll be chairing the discussion session, and while people formulate questions, let me kick off with two quick ones of my own.

You showed these examples, going back to Mike's talk primarily, but Wes's as well, where you never trained some of these systems and yet they start to do things. So there's a question of what's trainable. Mike and I talked about this before at some point, but there's a really interesting distinction in animal training between trace conditioning and delay conditioning. Delay conditioning is where the conditioned stimulus and the unconditioned stimulus occur at pretty much the same time, or one immediately follows the other, and trace conditioning is where there's a time gap between them. Trace conditioning has been associated, in normal animals, with awareness of the stimulus contingencies. So one question is: how far have you been able to push these conditioning paradigms?

And the second question, if I can just add one, I always ask two, I don't know why. I remember you mentioning the planaria at some point, and I'm thinking of the contrast between planaria and the xenobots and anthrobots. There seems to be this fascinating thing you describe in planaria, where they almost don't rely on their genome at all. I would imagine that's not the case for the anthrobots or the xenobots. How much does comparing those model systems tell you about the role of genetics in how you can probe this space of possible lives?

Okay, let's see. On the first question, yes, all of those different kinds of learning assays are very doable with all of these systems. We talked about collaborating on some of it, so let's do some of that. We've already done some, but we should do way more. We'll find out.

One thing I didn't get a chance to talk about today that's interesting, and this is a new preprint we just put up, is work by Federico Pigozzi in my group looking at causal emergence, IIT-style metrics, in gene regulatory networks that have been trained. In terms of what you mentioned about awareness of what you're learning: even in GRNs, causal emergence goes up with training. They actually become more of an integrated agent by virtue of being trained. So all of these questions are very tractable in these systems, just as they would be in a neural system.

On the genetics question, I would say this. If you want, I can take the time to go through what's going on with the planaria; it takes a couple of minutes to explain. But what I think is most interesting about both xenobots and anthrobots, in terms of genetics, is that they are not genetically modified. There are no new synthetic biology circuits in them. They're basically in the same environment they were in before, which in one case is pond water and in the other a cell culture medium, so there's nothing really informative in the environment. And yet, because of their new independent lifestyle, they upregulate many hundreds, and in the case of anthrobots thousands, of genes that they then deploy in that new lifestyle.

So what we're seeing, which is very consistent with the story we've developed in planaria, is that it's not so much that the DNA tells you what you're going to be and what you can do; it's more of a resource book. I realize this is an enormous claim, and I'm not saying we've proven it in the general case. But I think what we're seeing in this specific example is that, much like all the other molecular mechanisms that newts and these other systems use when we put them in weird scenarios, the genetic information, just like the molecular biology pathways, consists of tools that these systems can deploy in favor of their new lifestyle. They are resources, affordances, and how these systems deploy all those tools is one of the most interesting things here.

Thank you. Maybe you could introduce yourself when you're asking your question, which would be great.

Thanks very much, Michael and Wes. This is Jason Mattingley in Australia, at the Queensland Brain Institute. Michael, I'm really interested in this latent, non-physical space that you talk about when you discuss ingressing and so on. I just cannot get my head around what that space might look like, or how we would go about discovering it. You've probably thought deeply about this, and I'm wondering if you could say something about what this latent but non-physical space might look like.

Sure. I realize this is a wild idea, but it's not as wild as it sounds at first, and I'll give you a couple of examples.

First of all, mathematicians are already committed to the fact that there is a whole structure there: the truths of number theory, facts about certain kinds of logic being more or less powerful than others, and so on. All of these things are true no matter what the settings of the various constants of physics are. At the beginning of the Big Bang, you could have shuffled all the constants; the physics would be different, but all of those truths would still hold. And by the way, this is not the only view; there are philosophies of mathematics that don't believe this. But it is a common view that these things are non-physical in the sense that they do not derive their structure or their reality from any of the things you study in physics, and there's nothing you can do in physics to change them.

So imagine: here comes evolution. Imagine that in a certain world, the fittest thing is a certain kind of triangle; a particular triangle shape provides the highest fitness. So you run a bunch of generations and you get the first angle, and that's great. You run a bunch more generations and you get the second angle. Now something magical happens: you don't need another set of generations to get the third angle. You already have it. And this is a free gift from what? From the laws of geometry in flat space, where if you know two angles, you know the third. Evolution gets to save a third of the time.
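The "free third angle" is trivially codeable, which is exactly the point: the third value comes from a geometric identity rather than from any further search.

```python
def third_angle(a_deg, b_deg):
    """In flat (Euclidean) space a triangle's angles sum to 180 degrees,
    so two evolved angles determine the third with zero extra search."""
    return 180.0 - a_deg - b_deg


print(third_angle(60.0, 60.0))  # the equilateral triangle's "free" angle
print(third_angle(90.0, 45.0))
```

The identity holds regardless of how the first two angles were arrived at, which is what makes it a free lunch for any search process, evolutionary or otherwise.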

That happens all over the place. It's true for geometric facts, and it's true for things related to computation. When you evolve an ion channel, which is a voltage-gated conductance, you get to make logic gates with a truth table. You don't need to evolve the truth table or its properties; they're given to you for free. All of these mathematical facts, we already know, exist and are not determined by features of the physical world, and we already know that evolution exploits the heck out of them.
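The ion-channel point can be sketched as a toy: model the channel as a voltage threshold, sum two depolarizing inputs on the membrane, and a truth table falls out for free. All the voltages below (resting potential, threshold, input sizes) are invented round numbers for illustration:

```python
def channel_open(v_membrane_mv, v_threshold_mv=-40.0):
    """A voltage-gated channel as a step function: open above threshold."""
    return v_membrane_mv > v_threshold_mv


def gate(input_a_mv, input_b_mv, rest_mv=-70.0):
    """Two depolarizing inputs sum on the membrane (crude linear summation);
    the channel's threshold turns the summed voltage into a binary output."""
    v = rest_mv + input_a_mv + input_b_mv
    return channel_open(v)


# Truth table with 20 mV inputs: only both-on crosses the -40 mV threshold,
# so this parameter choice yields an AND gate; a lower threshold would give OR.
for a in (0.0, 20.0):
    for b in (0.0, 20.0):
        print(bool(a), bool(b), "->", gate(a, b))
```

The truth table itself (AND's behavior, its composability, De Morgan duality, and so on) was never evolved; only the threshold was, and logic came along for free.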

There's only one extra move that I make that's really weird, which is to say that the platonic space is not just for low-agency things like facts about triangles and computational kinds of things. It also contains what we normally recognize as kinds of minds. That's really the most controversial piece of what I just said. The rest of it, I think, is pretty regular.

So now the question is, what is the structure of that space? Well, we know part of it: the part mathematicians have been studying. They in fact have a pretty good map of at least some corner of that space. We do not yet have a map of the part that has to do with cognitive kinds of things. But I think we now have a very healthy research program to map out that space by making new kinds of constructs that dip into it and show us what else is in there.

For example, if you have a standard frog embryo, you have one point in that space, and it's a well-understood point; everybody's studied it for 100 years. But you don't know what's around it. So you can start to make these things, which are tools, which I see as periscopes for exploring that latent space. We can make certain changes. We can make a frogolotl, and now you get to find out what's in the space between a frog and an axolotl. If you make xenobots, you're somewhere else in that space, related but not really the same.

All of these things, to me, are constructions of pointers into that space, where we get to find out what comes forward. Eventually, with enough effort to understand the mapping, we build up a map of that space, and the goal is rational design. The goal is that if somebody says to me, "I want an organ that looks like this," or, "I want a biobot that can do this and that," or, "I want an AI that does this other thing," we have some idea of what to build to make those things appear.

And the final thing I'll say is that with all of these things, I think you get way more than you put in. If plain old bubble sort can do things we never had any idea about, things that are not at all obvious from the few lines of code that bubble sort is, then that space is rich and surprising. But I don't think it's random, and I don't like the mysterian approach of assuming these are just random things that show up from time to time. I think we should be mapping that space.

