Discussion with Daniel McShea and Gunnar Babcock 1

Daniel McShea and Gunnar Babcock join Michael Levin for a wide-ranging discussion of evolution, bioelectric aging, goal-directed behavior, death genes and multicellularity, and how machine metaphors and physicalism shape explanations in biology and mind.

Show Notes

This is a ~1-hour conversation with Daniel McShea (https://scholars.duke.edu/person/dmcshea) and Gunnar Babcock (https://gunnarbabcock.com/) on topics in biology, evolution, causation, explanation, and the machine metaphor.

CHAPTERS:

(00:01) Immortality, identity, bioelectric aging

(08:17) Goal encoding and memory

(18:01) Stress, goals, problem-solving

(27:56) Death genes and multicellularity

(39:12) Morphogenetic representations and goals

(48:04) Mental language and physicalism

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:01] Gunnar Babcock: There's some great stuff by the philosopher Bernard Williams on immortality, as I recollect, that raises some interesting identity questions. If you have a biological notion of an identity condition, then you might think that the biological entity that I am can be kept going indefinitely, and I can keep those processes running. But if you're committed to a psychological identity, then it gets more tenuous. There are good reasons to think it's tenuous.

[00:38] Michael Levin: We should be really careful about assuming that, assuming that about 100 years is great but anything more is too much, because it's all relative. You could imagine a species like us that lives kind of like an octopus: really smart, but very short-lived. To them: you guys live 100 years? That's crazy. Nobody could stay sane for 100 years. Maybe we're being chauvinistic, or maybe not.

[01:06] Gunnar Babcock: No, I think that absolutely makes sense. I think about somebody like Parfit. You might say that my identity over my lifetime is stable and consistent, but he would actually argue that my sense of values, the things that I hold, and everything else are just entirely different from three-year-old me. So he would say I'm actually just an entirely different person than I was at three. My identity shifts even over the course of a lifetime; it's not stable enough to point at one consistent thing that's there. So if you construe it in that looser sense, then again, the biological thing is stable enough to point at as, I think, a single trajectory. If it's the psychological identity, it gets tricky. On the biological side, Dan, you've read some of my stuff: I've been really interested in immortal cell lines, in tracking identity through those over time, and in some of the interesting things you get into when you look at cell division. Some of the latest work suggests that there isn't truly symmetrical division among cells, but you would perhaps know more about that than I do.

[02:28] Michael Levin: Well, it's really interesting, even the biological version. So, the kind of caterpillar-butterfly thing: there was a science fiction story a long time ago where they got life extension just a little bit. It wasn't much, but people were living to like 130, something like that. That was enough to get them over a hump where they would start transforming into whatever the next developmental stage is. It's just that nobody had ever noticed it because nobody lived that long. Once they got over that hump, they were growing wings or something. Even developmentally, if we did live longer biologically, who knows what further stages there might be. We study this all the time in terms of bioelectric pre-patterns that kind of get you to adulthood, but then what? Maybe if you were to prevent them from degrading, which is what happens during aging, maybe there's more transformation down the line and more stuff would happen biologically. We don't know. The other interesting thing with the cell line stuff is the anthrobots. We've made these anthrobots out of human cells donated by adult human donors. Many of these donors may be very elderly. By the time we make these anthrobots, they may or may not still be around, but the bots are around, and they're running around and doing interesting things. That's kind of like the cell lines, but even more so, because unlike the cell lines, these guys you can test for behavior. One of the things I'm interested in is, for example, we get a sheet on the donor: this was a white male. Some of them are smokers. Now my question is, do the anthrobots seek out nicotine? And if they do, what else that we don't know how to test for is being propagated from the donor? We know information moves through the body, and in the planaria it moves in and out of the brain pretty readily. Who knows. That's a whole other set of philosophical identity issues if it turns out that these bots are carrying some behavioral traits.

[04:49] Gunnar Babcock: That would be super fascinating if you could track behavioral traits that have been passed along like that and then track how long they maintain.

[05:04] Daniel McShea: Following up on your caterpillar-butterfly thing: there's some bioelectric field, or more likely a set of fields, that is decaying over time as we age, right? Are there any fields headed in the other direction, fields that are forming as we age? Because that would suggest that we're a larval stage, an earlier stage in something longer. These fields would be developing on different time scales, right?

[05:34] Michael Levin: We don't know, because whole-body imaging is still extremely hard, even in small model systems. So we have snapshots of specific stages in frog and planaria and things like that. We have hardly anything in human. We certainly do not have good timelines like that, but we're working on those technologies. I think that will exist. I think it's completely possible. The conventional suggestion would be that it probably isn't going to happen, because there are disposable-soma kinds of theories.

[06:18] Daniel McShea: Hen's teeth and horse's toes: it could be that the formulas are in there and serving other purposes as well, which could maintain them evolutionarily. In the tadpole, are there adult frog bioelectric fields that are measurable and changing in documentable ways?

[06:44] Michael Levin: Sure, yeah.

[06:45] Daniel McShea: Is the adult active during the tadpole stage? It's just invisible.

[06:49] Michael Levin: Adult animals absolutely have them. The question is, what are they doing? The functional evidence from the adult data is that they're operating in cancer suppression, in wound healing, in local tissue maintenance. It's unclear whether they're doing the same very large-scale instruction the way they do in development. If you take a rat and you de-nervate, cut the lingual nerve, the rosettes on the tongue, the papillae, fall apart histologically; they become disorganized. That suggests that you need upkeep. Now, that particular case happens to be neural, and I think the vast majority of it is probably not neural, but the point is it needs constant upkeep, and that's a multi-cell-type tissue histology thing. I wouldn't be surprised at all if these fields were involved in that. That's one of our hypotheses in our aging and regeneration program here: what's happening with age is that those patterns become fuzzier. They degrade, and they scale down to the point where you're just trying to keep individual cells' identity, but the tissue is starting to lose that precise pattern. They're absolutely measurable, and they do have tissue-level roles.

[08:17] Daniel McShea: I have one item on my agenda with you. Go for it. I have to make sure this doesn't come out as garbage. Let me look at what I wrote here.

[08:33] Gunnar Babcock: I have one agenda item after, but you go, Dan.

[08:36] Daniel McShea: You want to go first?

[08:38] Gunnar Babcock: Oh, no, you go. You go.

[08:40] Daniel McShea: Your notion of encoding of goals: I both like and dislike that language. I dislike it because it's the language of machines, and we want to get away from machine metaphors. On the other hand, if you're going to reach people who already have machine metaphors in their heads and communicate with them, it sure helps to use their language. Encoding of goals, I think I know what you mean: it's a top-down kind of thing. It's totally consistent with our field theory view of how fields govern lower-level things within them. We've got a lot to agree about there. One issue for us is how human motivation is encoded. It's going to be neurally encoded; there's going to be some large-scale pattern of neural activation that forms the higher-level field, in our terms, that governs lower-level neural activity: moving and thinking and talking. Is this way of thinking about it consistent with yours? Do you know anything about these neural fields?

[09:53] Michael Levin: Let's talk about that, the human case first. I think Earl Miller at MIT has done some of the most amazing recent work on that topic. His work is mostly on memory and other aspects of behavior. I'm not sure he covers goals specifically, but he's been a pioneer in really talking about how the field, the actual field of the neuronal network, not the individual neuronal states, but the field, is critical for behavior. You probably want to take a look at that. I think it's fantastic stuff and we're using a lot of it as an inspiration to look for that outside the brain.

[10:34] Daniel McShea: So the name again was the MIT guy.

[10:36] Michael Levin: Miller. Earl Miller. M-I-L-L-E-R. State-of-the-art neuroscience. I think that's perfectly reasonable, and quite likely. The other thing I would add is this: to the extent that goal mechanisms in the human brain depend on memory mechanisms, the substrate of memory is still a very contentious topic. There have been a bunch of conferences and lots of people recently putting cracks in the classic story that it's all in synaptic plasticity; Glanzman had an amazing conference at UCLA recently that covered a lot of this. If that's the case, then you've got the field and those kinds of things, but you also have subcellular substrates, whether they be cytoskeleton or RNA, as the basis of memories; there's going to be that multi-scale thing going on. My conjecture on all of this is that I don't think there is one mechanism of memory. I think the nervous system is using all kinds of things in the cell as a reservoir, and then interpreting what's there, interpreting the information and confabulating about it, because it can't have the ground truth. I agree with you that the field is an important top-down controller.

[12:21] Daniel McShea: Just to clarify what you just said, when we get down to the subcellular level, whether it's synapses, intracellular chemistry, or cytosol, the memory can't be there. Memory of my cat when I was a kid, there's no cats down in those cells. Nothing cat-like, nothing cat-image-like. What there is is a pattern of interaction with that cell and other cells based on their subcellular machinery. And that higher level pattern constitutes the memory. Yes or no?

[12:54] Michael Levin: I agree with you. I have recently been looking into the idea that all of those engrams, regardless of the encoding medium, are messages from your past self. To decode them you need a creative process, because squeezing rich experience down into some kind of compressed representation, where you throw away correlations and everything, gives you a compressed engram. Decoding it again later is a creative process because you don't know what it meant before; you have to do your best to guess. I think it's a constant construction, and the caterpillar-butterfly case is a drastic version of that. For all of us, we're constantly reconstructing from the evidence that has been left in the subcellular substrate.

[13:51] Daniel McShea: Right, all that makes sense, but let me pursue it another step further. If I were to pluck out one cell whose subcellular machinery is involved in the memory of my cat, there's nothing I could do to that cell, no experiment I could do that would derive the notion of my cat from it, because I need thousands or millions of those cells, but their subcellular machinery is programmed in the right way in order to get the cat idea.

[14:19] Michael Levin: I understand why you're saying that, and it may well be true, but I am not 100% sure. I don't have a brilliant theory of how it would be otherwise, so I know why you're saying this; it makes sense. But we've had a number of examples where very small pieces of tissue are transplanted from one animal to another. Most of this is not classic behavior; this is behavior in anatomical space, so it might not hold. One thing we see is that very small pieces from an animal in one state, put into another, tend to carry that state across, and not only get interpreted correctly but push the whole host in a completely different direction. I don't know why that is, but I think it has something to do with amplification of novelty. When you get new information, it's the flip side of surprise minimization. On the one hand, you want to minimize surprise; on the other hand, you need to explore too. When you get information that suddenly doesn't agree with the rest of the body, you might want to take that seriously. This is why we want to do things like this anthrobot experiment. Here was a human whose neurons were adapted to nicotine. You've made anthrobots; anthrobots have no neurons, they're entirely skin and tracheal cells. If they still have the behavior, that means it crossed over from the neural substrate. We don't have these data yet, so who knows how this will turn out. Other people have done cross-species memory transplants, and in planaria you can do tail-to-brain transfers and things like that. So if the engrams really are, for example, cytoskeletal states, maybe the interpretation machinery doesn't have to be tons of neurons. It could be a lot of the molecular networks inside a single cell, such that you get quite a bit already.

[16:49] Daniel McShea: Yeah.

[16:50] Michael Levin: I'm not sure.

[16:53] Daniel McShea: It runs against the grain of my thinking, but that doesn't make it false.

[16:58] Michael Levin: What you said is more probable, but I'm not sure that it has to be that way.

[17:06] Daniel McShea: So the basic notion that there are processes for encoding things at lower levels has to be right, because that's how DNA serves as a memory molecule. Natural selection over a gazillion years takes all this dynamical function and encodes it somehow over time into this stone-cold, dead molecule DNA. It can be extracted by the same dynamical processes when they come along. So we know this encoding at a lower level works. It's just that behavior seems to be operating at a high scale. There hasn't been evolutionary time to encode love of nicotine. It doesn't mean it didn't happen. It doesn't mean there aren't processes for doing it overnight, for all I know.

[17:57] Michael Levin: Yeah.

[17:58] Daniel McShea: It'd just be weird if there were.

[18:01] Michael Levin: I think a couple of things. One is that we should go back and talk about the goal encoding thing too, because I'm interested in your thoughts on this. We see, first of all, behavior, and I expand that to behavior in other spaces: for example, behavior in anatomical space, in transcriptional space, in physiological state space. In those spaces we also see incredible plasticity and an ability to solve problems that you have not seen evolutionarily before. So evolution produces these problem-solving agents that can solve problems in different spaces. And I think it's a hugely interesting and important question, for novel agents like synthetic biobots, chimeras, hybrids, all that kind of stuff, where their goals come from. Because in that case, you don't have eons of selection to lean on for a specific goal, or for the kind of really out-of-the-box novelty that we see them accommodating to. I don't say adapting; it doesn't take generations.

[19:21] Daniel McShea: Selection in the past produces all these dynamical processes with enormous capabilities, many of which are never realized in their time. Insects have the capability to generate an extra leg pair. Hell, where'd that come from? But leg pairs had to be selected at one point or another. And so there are these modules built into the system that can be invoked in novel contexts and do wildly different things. You can get legs on your head if you're a fruit fly. What you just said, I think, is totally consistent with some sort of combination of selection embedding capabilities, and those capabilities being deployed in wildly different contexts overnight, in a matter of minutes and hours to produce new sorts of behaviors. I'm totally on board with that.

[20:10] Michael Levin: I take it a little bit further. I agree with all of that: recombination of past modules in new ways is absolutely a part of it. But we see them doing new things; there's more to it than reuse of modules you've had before. Really, we're seeing behaviors that I don't think ever existed before. You can decompose them, and the pieces did exist, but eventually, if you keep going, you just shade into a completely novel reuse of physics. Eventually you end up bumping into two things: physics and, in general, the laws of mathematics, wherever those come from; these very basic things. Those things are absolutely being reused, but the modules get smaller and smaller as the perturbations you're doing to these systems get weirder.

[21:15] Daniel McShea: Tell me if the following story is along the lines you're talking about; it could be wildly off base, and don't be afraid to tell me that. I did some work with a behavioral human-evolution guy interested in primatology. He thinks that one of the bases of civilization and the success of human beings is our ability to recognize when interactions are working. You and I are carrying a couch upstairs; we've never done it before. It falls all over the place. We hurt ourselves. Finally, we find a pattern: you in front, me in back, turning it on its side. As it's happening in the moment, we have a module that says, "this is working." The behavior we come up with to actually get the couch upstairs is completely novel. There was no module for twisting large things in three dimensions at all. There's a "this is working" module. If you pitch these modules at a high enough level of generality, you'll get a lot of what you see out of it. It will look miraculous from the outside.

[22:21] Michael Levin: I think that's right. But I think that begins at the microbial level. So we see this kind of generic stress response that tells you, "How are we doing? Is this working? Is this not working?"

[22:35] Daniel McShea: Yes.

[22:36] Michael Levin: It's this generic stress response that starts extremely early on. It starts around things like protein folding and DNA damage, these very tiny things, and then evolution apparently scaled it up. This is a bunch of stuff that isn't published yet and will hopefully come out this year: what evolution is doing is reusing some of those same loops. The loop measures where we are, compares it to where you want to be, the stress is proportional to the error, and then you try to adjust. It's a homeostatic thing. What I think evolution is able to do is replace the wild cards in that basic loop: what do we measure? What do we remember? You can put in whatever you want and the loop still works. And it seems to be reusing a lot of those same things for very high-level stuff. How many eyes do we have? What is our primary axis? And then it keeps going, and you can do couch moving and everything else. I agree with you; I think that basic feedback of stress is driving a lot of this.

[23:46] Daniel McShea: Yeah.

[23:47] Michael Levin: Which allows you to do some crazy experiments. For example, you can use anxiolytics, because you can cut that loop. You can cut the loop with anxiolytic drugs and say: measure the error, but don't worry about it. If the error is high, meh, it doesn't matter. You can do that with embryos, with regenerating organs, with cells. You get some very interesting results when you decouple things, when you lower the effort that they put into reducing error. It becomes very interesting and very dangerous.
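The loop Levin describes (measure the current state, compare it to a set point, make stress proportional to the error, adjust with effort that scales with stress) is a standard homeostatic feedback scheme. Here is a toy sketch of that idea, not anything from the lab's actual models; the names and the single `stress_gain` knob, standing in for the anxiolytic manipulation that "cuts the loop", are illustrative assumptions only.

```python
# Toy homeostatic loop: measure state, compare to a set point,
# make "stress" proportional to the error, and correct with effort
# that scales with that stress.
#
# `stress_gain` is a hypothetical stand-in for the anxiolytic
# manipulation described above: with a low gain, the system still
# measures the error but puts little effort into reducing it.

def homeostatic_step(state, set_point, stress_gain=1.0):
    """One iteration of the loop. Returns (new_state, stress)."""
    error = set_point - state
    stress = stress_gain * abs(error)       # stress proportional to error
    correction = stress_gain * error * 0.5  # corrective effort scales with gain
    return state + correction, stress

def run(state, set_point, steps=20, stress_gain=1.0):
    """Iterate the loop and return the final state."""
    for _ in range(steps):
        state, stress = homeostatic_step(state, set_point, stress_gain)
    return state
```

With a normal gain the state converges on the set point; with the gain turned far down (the loop "cut"), the error is still measured each step but the system barely moves toward the target, which is the decoupling the conversation describes.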

[24:22] Daniel McShea: Loosening tolerances as a route to novelty.

[24:26] Gunnar Babcock: I want to jump in and ask, and forgive me if this is just an incredibly obvious question: when something's working, when you set the expectation of the goal, or what a system is driving towards, "working" still seems to be measured in terms of survivability or fitness, which makes sense for the lower-scale stuff. But once you get up to the couch, it's not necessarily "working" at that level, right?

[25:00] Daniel McShea: Yeah, go ahead. Say it again.

[25:03] Gunnar Babcock: Well, if regeneration increases something's fitness, you get a certain cycle, and I can evaluate whether a process is working on the basis of whether it's helping achieve fitness goals; then I'm using the same measurement scale as a natural selection process. But once you get up to the goal of getting a couch upstairs, it's unclear whether that's in fact increasing my fitness. So it seems like we're conflating something across those processes, because the goal, what I'm using to evaluate whether a system's working, in the story I tell down there seems entirely different from what it's going to be for the questions I might ask about higher-level processes.

[25:59] Daniel McShea: I'll throw in the story that I just told you: this working module, a general capability, is hypothesized to have evolved in early primates as a way of doing collaborative hunting, where one individual goes after the prey, it doesn't work, two of them go after it chaotically, it only sort of works. Five of them go after it. It works a little bit. Three circle around from the front, three from the back. There's some recognition of the fact that we're getting disproportionate returns on our labor here. Selection never envisioned that actual strategy of coming around from the front and coming around from the back of the prey animal. It's not wired in there. What is wired in there is the recognition strategy that some collaboration is producing results that are close to the target. Does that make sense?

[26:53] Gunnar Babcock: That affords a selective advantage.

[26:56] Daniel McShea: The whole process has a selective advantage. The recognition has a selective advantage. But how the recognition plays out, producing this strategy, which is inherited from one primate tribe to the next, this strategy of encirclement of prey was never wired in there and will never be wired in there. It's just social collaboration working to produce a goal is what's wired in there.

[27:25] Gunnar Babcock: Yeah, I think.

[27:29] Daniel McShea: Michael, jump in.

[27:31] Gunnar Babcock: I always want to step back and try to reflect on when we evaluate a process or a system as working versus not, and how the evaluation is being done, because it seems like there's slippage there sometimes.

[27:53] Daniel McShea: Interesting.

[27:54] Gunnar Babcock: Right.

[27:55] Daniel McShea: Yeah.

[27:56] Michael Levin: I understand the discipline. Dan Dennett was always very strong on this discipline of trying to put everything in the evolutionary point of view. I think there's great use to that, but I don't think it's the only way to see goals or working. I think it's one perspective. As Dan was saying, what is incredibly advantageous is problem solving in various guises. Individual instances of it may not themselves be widely applicable or adaptive, but the capacity to do it is, and that capacity then plays out in different ways. One of the things people are seeing now is the "thanatotranscriptome." When animals or even human patients die, as the body dies, the individual cells, most of which are perfectly fine, turn on a whole bunch of new genes. It's not the same as transcription going down because the cells are deprived of oxygen; this is a very active, energetic process. They're actually turning on new genes. One way to think about it is that in the case of a human or a mammal or a bird dying in dry air, that's a dead end. There's no second life for these cells. But there are weird scenarios: for example, our anthrobot collectives, and the xenobots. To make a xenobot, the original embryo may no longer be around, because you've dissociated it into cells. The original embryo has died, but the cells have not. They turn on hundreds of new genes that normal embryos don't, and they have a new life doing new things, like kinematic self-replication, which as far as we know no other animal does. Has selection ever seen that particular thing? I'm not sure that it ever has. I'm not sure that it in itself, especially in a mammal, would be of any use.
But the general ability to say, we're in a different scenario now and we need to figure out how to get along in some new anatomical configuration, some new environment: I think that was there from day one, because you can't even really count on your morphology being the same generation to generation. This is that plasticity; it's why, for example, the kidney tubules in the salamander still come out right even when cell size and genome copy number change. Coming into the world as a newborn, you don't know how many copies of your genome you're going to have, you don't know what your cell size is going to be, you don't know how many cells; you can't assume any of that. You just have to figure out how you're going to survive with whatever internal and external affordances you've got. I think that's incredibly adaptive, and that is what evolution is aiming for. But the individual instances of it will be like the humans: you try all kinds of stuff; some of it helps and some of it doesn't.

[31:29] Gunnar Babcock: Okay, yeah, And that's...

[31:34] Daniel McShea: Are these cells going protist? Are they turning on the ancient protist machinery? They're thinking: all the external constraints are gone, I'm not part of a multicellular body anymore, the signals coming in from the larger gradients are fading. I'm a protist. Let's do this.

[31:51] Michael Levin: They actually are, and we'll be pre-printing this in the next week or two. The thing that has been known along those lines is that's what cancer cells do too. That's the atavistic stuff that Paul Davies and others have worked on: when you're disconnected from the collective, the rest of the body is just outer environment to you. You're an amoeba. There are a number of scenarios in which that happens.

[32:24] Gunnar Babcock: But are you also suggesting that there might be enough affordances that they're trying to generate a new multicellular assemblage?

[32:36] Michael Levin: That's exactly what I think. Much more work on this is needed, but I do think that tumors and biobots are a kind of reboot of multicellularity. You first go single cell and you roll back and then again you try your best to assemble in some new functional form. You make something that is not any stage of normal human development, you have behaviors that are not normal behaviors for any human or human tissue, you have found a new way to be with stock hardware. One reason why we haven't immediately gone ahead and engineered synthetic circuits into all these things is because I want to know what the default hardware is able to do before we add all this stuff to it.

[33:24] Daniel McShea: If I've got a factory of well-trained workers doing their jobs and management goes home and never comes back, the system is going to fall apart, and we're leaning on the default hardware of individual workers deprived of their collective. I wouldn't expect anything constructive to come out of that. It would be interesting and weird. But is the analogy appropriate here?

[33:52] Michael Levin: I think the analogy is appropriate, except that what I think is happening in bodies is multi-level intelligence, and when one level goes home, the bottom levels will do what they can. They will continue to bend the levels below them; they distort those levels. What I see in our experiments and others' is a constant push at all the levels to make sense of their world, to do something that is halfway coherent and adaptive. When new levels show up above them that distort their option space, they can be hacked and they will go along. But if those levels aren't there, they will come up with something else. That novelty, that push: it's not just finding solutions to a problem, it's finding new problems to solve. More than that, I think it's finding new spaces in which to find those problems. You can get along by solving problems in anatomical space, or you can ditch that entirely and focus on solving them in metabolic space and being unicellular. I think there's always pressure to do this.

[35:17] Daniel McShea: The origin of all metamorphoses.

[35:22] Gunnar Babcock: I'll try to get at this again. In problem solving, in the way that you're talking about it, the problem being solved — can I generalize to a sense of saying something like regeneration or survival or something along those lines? Or are the problems being solved always going to be unique and specific, particular to whatever's going on in individual situations? I guess what I'm thinking of is, could I look at a system that rapidly falls apart and disintegrates as solving a problem in that action itself? Or is that incompatible with the way that you're thinking about problem solving?

[36:17] Michael Levin: Philosophically, I think what's important here is that everything is from the perspective of some observer. I don't think there's one objective answer to what problem this thing is solving or what space it lives in. I think that we all, we as scientists, but also the parasites, the conspecifics, the parts, the wholes, have some perspective on the problem. They are all making models, to different degrees of sophistication, of what the problem is and what the system's behavior is, which tells them how to hack it. If you're a parasite or an exploiter, or if you want to control your own parts, you need to understand how those parts work to some extent. They will all have a different view of what the problem is. For us, we could say that fundamentally all of this is about persistence or survival, bare persistence. I can't disprove it. But my gut feeling is that it's a lot more than that. I don't think it's about mere persistence; I think it's about transformation and expansion into new spaces. I think that's even more fundamental, because here's the thing with persistence. I don't know whose quote this is, but there's this paradox: if you try to persist by never changing, you're going to die out when circumstances change. But if you do change and adapt, then you're no longer yourself, so you're also gone. No matter what, you can't persist as an unchanging thing. More fundamentally, I think biology basically bought into this from the beginning, just realizing that everything is going to change: my parts are going to change, they'll be mutated, the environment changes. We're not going to try to persist anything in particular, which I know is not the way neo-Darwinian thinking is supposed to go. But I think it's much more about change and exploration and expansion into new spaces than it is about persistence of a single, identifiable thing.

[38:32] Gunnar Babcock: If I can track it down, I'll send it to you. I don't think there's a nice little quote that gets at it, but Aristotle identified this problem. He talked about the example of grass.

[38:47] Michael Levin: No, please go ahead.

[38:55] Gunnar Babcock: Dan, do you have any follow-ups? I don't want to shift gears.

[39:05] Daniel McShea: I've got three lunches and two dinners worth of follow-ups. We have limited time, so you go ahead.

[39:12] Gunnar Babcock: Okay. Here's the question that I wanted to ask. I was talking with Justin Garson at the Minnesota meeting, and he has this idea about goal-directedness that it's representational: that, fundamentally, any way that you're going to be able to talk about a goal-directed system, it has to be via some sort of representation. And I think for him, that's primarily mental representations. But I wonder if, in your way of thinking, you would be sympathetic to a view that's so liberal as to think of a morphogenetic field that's doing the directing, in the sense that you're often pointing at, Michael, as a kind of representation? Would something like Justin's thinking here possibly fit? Would you be okay with the idea of representations in that way? I'm not doing any justice to Justin's view.

[40:27] Daniel McShea: Sadly, you are.

[40:29] Gunnar Babcock: I think he is committed to a mental representation, but Dan and I were pressing him, saying it seems like there are all these developmental processes that don't clearly have representations in the way that you're thinking about them, Justin. But it definitely made me want to ask whether you think representation characterizes your work.

[40:52] Michael Levin: So here's my view, which he may or may not hate, I don't know; some people think it's worse than the alternatives. I support it, but maybe in a way that they wouldn't like. First of all, about goals in particular, I take a very engineering approach to this, which is: I think you're entitled to say something has a goal state when an efficient way to manipulate the system's behavior is by altering the goal state, as opposed to bottom-up rewiring. The only way I know anything has a goal is if I can tell you where and how that goal is specified, such that I can go in and change it, and now I change the goal. I don't change your hardware, I don't change anything about you, but I change the goal, and you are now off and running. It's a matter of degree: if I can change the goal with a minimal manipulation, and then you are autonomously off and running to implement that goal, now I think you have goals. If the only way to change what you do is to get in there and micromanage every aspect of your function, rewiring everything, then there's not much goal-directedness going on here. But if I can do anything from resetting your set point, to training you on something, to giving you a coherent argument if you're a human, and now you're off on this thing that I don't need to micromanage, then you've got goals. That's it. I take a very practical and engineering approach to this. Now, about representation. We absolutely have examples in...

[42:27] Daniel McShea: You just froze up for me. Both of you froze up.

[42:30] Michael Levin: Sorry, are we back?

[42:31] Daniel McShea: You're back. Sorry, I lost you. Both froze.

[42:35] Michael Levin: We absolutely have examples in regeneration that I have referred to as representation. Let's keep in mind, it's minimal by definition, because this is ancient evolutionary stuff. It isn't full-blown human "I know my goals" representation; I'm not talking about second-order metacognition. But what is a very simple example of representation? Not only representation, but I think counterfactual representation. I'll tell you what you can do. You take a planarian, and it has a particular bioelectrical gradient that tells you how many heads the planarian is supposed to have. If you cut it into pieces, the pieces consult the pattern and they make exactly the number of heads they should have. Now, we can go ahead and alter that pattern within an intact worm. When you do it within an intact worm, what you have is something very interesting. You have a schizophrenic situation where the body of the worm is one-headed, but the memory of what a correct worm is supposed to look like actually says two heads. Nothing happens until you injure it. That memory is latent. It is not active. It is just sitting there until you injure the animal. As soon as you injure the animal, the cells consult the pattern. That is ground truth as far as they're concerned. There's nothing else for them to compare to. They make two heads. So what I think is happening when you're looking at that pattern is that it's a goal state, because it is a suitable control knob for you to set how many heads this thing will autonomously make. I think it is a representation because you can read it out in a way that doesn't require you to look at the anatomy or to know what is there. The pattern guides this behavior. We have a mapping, at least in this case, if not all cases, between what the memory says and what the animal is going to do. I think it's counterfactual because, in the case of these cryptic worms that have that pattern, it is what you are going to do if you get cut in the future. It isn't what's true right now.
I spend a lot of time on this in my talks because people see the pattern and they go, oh, that's a pattern of the two-headed anatomy. In fact, the worm might never become two-headed if you never injure it. The pattern is not a readout of the current situation. I think it is an extremely early, minimal example of mental time travel. I think this is the kind of thing that later evolved into our ability to remember and envision things that are not true right now. It is the ability of the network to hold a state that does not reflect current circumstances, but reflects possible future circumstances. It's minimal. It's not self-reflective. That's the second thing. The third thing I would say is this. I think he's right to say that representation is mental, but not because you need neurons for this or because it only applies to brains. This is the part where people leave the train that I'm on. I say it's mental, but mental doesn't mean brainy mammals. I really think that the cellular collectives that are solving problems and navigating anatomical space are a kind of mind. I don't think they're a human mind. I don't think they do everything that brainy minds do, but I think they are a kind of primitive cognitive system. So I do think that it is not crazy to say that representation is mental, but not mental the way most people take it to mean, which is brainy animals.

[46:11] Gunnar Babcock: A possible response to Justin might be something to the effect of, "Yeah, Justin, you're right — maybe goal-directedness does rely on representation, and that's what you need for goal-directed states. But Justin, the problem is you have too constrained a conception of what counts as a mental representation."

[46:35] Michael Levin: Exactly.

[46:36] Gunnar Babcock: You're really committed to this idea of mental representation as being a neural process with the features we usually attribute to mental representations. But if you take a really much more expansive view, like the one you're taking, Michael, then you can get a representational model of goal-directedness going.

[46:53] Michael Levin: That's exactly what I would say. I've had these discussions at neuroscience conferences and things like that. I'll say, what's a neuron? They'll say, neurons are neurons. I'm like, no, really, what's a neuron? They say, well, it has these properties. They'll list some properties, and I'll say, every cell in the body has those. They say, well, neural networks do this stuff; I say, all the tissues in your body are doing it. They're doing it in a different space. They're not projecting it into three-dimensional space by moving muscles around; that's a newer innovation. But they're doing the same kinds of stuff. I've argued that a lot of what we study in cognitive science is just a pivot (an incredible speed-up, of course, but also a pivot) of ancient things that we were using to navigate anatomical space, such that now we can use the same tricks to navigate this three-dimensional space. So I would say almost all of that kind of stuff holds if you're willing to say that mental doesn't mean human, it doesn't mean brainy; it means some other things as far as perception-action cycles and problem-solving and memory and things like that. And those are extremely ancient.

[48:04] Daniel McShea: I think we've got a bad language problem here, and that's my objection to Justin. Everything you guys said is translatable, and most of it was in physical terms. You're not dualists. There's no mind-body separation. You have no notion of symbols in the sense of something abstract manipulating something physical. Representationalists do. That's the problem: when they're computer scientists, they do, and when they're psychiatrists and psychologists, they do. They're really dualists about these things. They think the abstract can affect the physical. And that's the way the language reads out there in those three worlds — AI, computer science generally, and psychology. Everything you just said is great. I just think it's exactly the wrong language to use out there. As Gunnar knows, this is a problem area of mine.

[49:10] Michael Levin: We're going to need another hour on this, because I would love to dig into it. I understand the reasons to try to avoid the dualist language and all of that. But actually there's a way in which it's really helpful. I don't think we should toss it entirely, and I'm not just saying keep it as a metaphor or colloquialism.

[49:41] Daniel McShea: You want command of the language. And that's a tactic to take. You can say, I know what you mean by mental. I know what I mean by mental. I want to command the scientific social environment to the point where my view of the mental, which is purely physicalist, is the one that everybody calls to mind. And if you can succeed at that, more power to you. Pan-mentalism, I get it.

[50:05] Gunnar Babcock: There is another option; you don't have to go the dualist route. A lot of people do go the dualist route, though.

[50:15] Daniel McShea: Yeah, that's the problem.

[50:17] Gunnar Babcock: In the teleosemantic literature you have people trying to make sense of the phenomenal experience of a mental representation. They're physicalists, but they're saying, I have this experience of mental representations; it's this phenomenon that you want to make sense of, and you want to say it's important and different somehow. I think they're the folks that would want to deny that you get that in the non-brainy stuff. Michael, correct me if I'm wrong; I think you're wanting to argue that, no, if you talk about that phenomenon in a graded way, it's going to go much lower, that you're going to get something similar at much lower levels.

[51:08] Michael Levin: I don't talk much about consciousness and things like that, but one of the papers that I'm writing now with Nick Rouleau takes all the existing theories of consciousness and looks at what they are saying about the brain that is supposed to give rise to this, pointing out that all of them, without exception, as far as I can tell, can say the exact same thing about very unconventional systems. And so, yes, if you take that stuff seriously, it absolutely has to go all the way to the bottom. If you're not looking for human-level hopes and dreams in your paramecium, but you're willing to scale appropriately, it goes all the way to the bottom. I think we should do more on this, Dan, because what you were saying before I want to push back on a little bit. I agree with you that we can tell physicalist stories, bottom-up physicalist stories. It's true. After everything we publish, the next thing that happens is somebody says, "I can see how this happens; it's just chemistry." But after the fact, you can tell a chemical story. It's never going to be fairies. It's always chemistry underneath. But I do think that there's some stuff that you would call dualist that is actually worthwhile. So I'd like to dig into this more.

[52:33] Daniel McShea: I have to go in a moment, but let me just throw in: you use bottom-up and physicalist together, as if they're a package. Anti-reductionists like me want to break that package. By physicalist, I don't mean just bottom-up; I mean top-down as well as bottom-up. So when we talk about fields or gradients or whatever causing lower-level events, that's physicalist. And if what you want to call mental has downward effects, that's because it's a pre-pattern of some kind, a physically instantiated thing (not reductionistically, but physically instantiated) with causal powers. I'm just worried about fighting too many battles at once.

[53:16] Michael Levin: I think that's fantastic. What I'm really interested in is getting your thoughts on this. So fields, for sure: top-down and physicalist. We can get into some other stuff: virtual governors and quasi-particles. I think a lot of people would say you can get even weirder. You can get gliders, and you get progressively less physical with this stuff, but I think you keep the important causality. So is that still physical? I don't know. I think we need another hour on this.

[53:52] Daniel McShea: To be continued.


Related episodes