Show Notes
This is a roughly one-hour discussion with Stuart Kauffman and Katherine Peil Kauffman, covering evolution, machines, metaphors in science, and more.
CHAPTERS:
(00:00) Kantian Wholes And Xenobots
(05:35) Anthrobots And New Genes
(13:16) Microbial Villages And Affordances
(18:48) Biology Reinterpreting Itself
(24:04) Machines, Wholes, And Coherence
(29:14) Emotion, Bioelectricity, Resonance
(35:12) Bioelectric Minds And Machines
(44:09) Valence, Memory, And Pain
(53:27) Entropy Paper And Cosmology
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Stuart Kauffman: Mike, I want to bring up something that I've mentioned to you that I think is really serious. I think we disagree, but I think that your Xenobots are the way to test it. I have sent you the Andrea Roli and Stuart Kauffman paper, "A Third Transition in Science." Let me just go through our argument. Organisms are Kantian wholes, said Kant, where the parts exist for and by means of the whole. So you're a Kantian whole. You get to exist. You've got livers and kidneys. You exist because you have them, but they exist because they're part of you. The definition of a Kantian whole allows us to talk about the function of a part: it is that subset of the causal properties of the part that sustains the whole. So for the heart, it's pumping blood, not making heart sounds. A given part's function is a subset of its causal properties, but it might have other subsets of causal properties that are of use. Those are Darwinian pre-adaptations or exaptations, à la Vrba and Gould. The critical step is this: from one use of a thing, you cannot deduce its other uses. My example is an engine block: you can clearly use an engine block as a paperweight. You can jury-rig it and use the engine block with its rigid corners to crack open coconuts. From the use of an engine block as a paperweight, you cannot deduce its use to crack open a coconut. If that step is right, and evolution has exaptations in it, and it does, they're not deductive, and therefore evolution cannot be deductive, and therefore there can be no entailing law. That already tells us that evolution is not a deductive process. It's a continuous jury-rigging process with no entailing law. And that means really complex things can come into existence without deduction. So hold that.
[02:47] Stuart Kauffman: Let me make the same claim about human invention. You may or may not know that I came up with this TAP equation, which basically says you make things out of things by combining them. One example: the printing press is a recombination of movable type and a wine press. What one is doing is using things for funny reasons and in funny ways. In general, you can't deduce it. Human innovation is also not deductive. You can see it in patents: an invention has to be creative over the prior art, which means it's not obvious, and typically that's because it's not deduced. Now if that's right, the notion that cognition is limited to deduction is wrong. All computation is algorithmic and therefore it's deduction. Therefore, the human mind and the evolution of the biosphere are not limited to deduction, which is more than Gödel's theorem, which says that within a formal system you can have formally undecidable statements. This is beyond Gödel's statement. That's a long preamble to something I fell in love with in your Xenobots. You take some skin tissues from frogs or something, and the cells organize themselves and do weird things. It's just stunning that you did that. They seem like superb places to look, Michael. Can you look at your Xenobots and establish that cells are using subsets of causal properties to achieve new functions and new functional integration in the Xenobot that are not the causal subsets the cells normally use? In other words, are they jury-rigging themselves so they get to exist? There's a final paper I'm writing with Andrea Roli and some other people, including some soil scientists, and the basic idea is "Life Will Find a Way." I'm really excited about this theme. If every part can get used in indefinitely many ways, how many parts do you have to have? They'll just manage to find a collective way of enabling one another, and so they get to exist with one another. Are your Xenobots a stunning example of that?
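For readers who have not met it, the TAP ("theory of the adjacent possible") equation Stuart mentions models how the number of distinct things grows when new things are made by combining existing ones. Below is a minimal Python sketch of one commonly used form of the recursion; the geometric discount a / mu**i and all parameter values are illustrative assumptions, not the exact published equation.

```python
def tap_step(M, a=1e-6, mu=2.0):
    """One step of a TAP-style recursion (a sketch, not the exact published form):
        M_{t+1} = M_t + sum_{i=2}^{M_t} (a / mu**i) * C(M_t, i)
    Each term counts i-way combinations of existing things, discounted by how
    unlikely a given combination is to yield something new and useful.
    With the geometric discount a / mu**i the sum has a closed form (M is
    treated as a real number here for simplicity)."""
    return M + a * ((1.0 + 1.0 / mu) ** M - 1.0 - M / mu)

def tap_trajectory(m0=30, steps=100, cap=1_000, **kw):
    """Iterate until the count of 'things' blows past `cap` (kept small so the
    floating-point power never overflows) or `steps` is exhausted."""
    traj = [float(m0)]
    while len(traj) <= steps and traj[-1] < cap:
        traj.append(tap_step(traj[-1], **kw))
    return traj

if __name__ == "__main__":
    for t, m in enumerate(tap_trajectory()):
        print(f"t={t:3d}  M = {m:.6g}")
```

With these toy parameters the count creeps along for a while and then explodes within a couple of dozen steps, which is the qualitative behavior the TAP literature emphasizes: long stasis followed by combinatorial blow-up.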
[05:35] Michael Levin: First of all, I kept waiting for the thing that we're supposed to disagree on. I don't disagree with anything you just said. In fact, I agree with it completely, with a couple of twists that we can talk about. But let's just talk about the bots for a second. I don't know if you've seen, beyond the Xenobots, there are the Anthrobots now. Have you seen those? Have you heard of this?
[06:02] Stuart Kauffman: Do you know, Kate? What are they?
[06:06] Michael Levin: So, let me tell you.
[06:08] Katherine Peil Kauffman: Human lung tissue.
[06:10] Michael Levin: Let me tell you how that came about. When we first made Xenobots... some people said, this has got to be a frog-specific embryonic thing. These are animal caps. This is what frog embryos might do. It's a very niche-specific thing. I said, what's the furthest we can get from amphibian embryos? Well, that would be an adult human. Because I didn't believe for a second that this was some frog-specific thing.
[06:34] Stuart Kauffman: Of course not. What a stupid belief.
[06:37] Michael Levin: Lots of people said it. We caught all kinds of heat for calling them anything other than animal caps, because developmental biologists have been looking at animal caps for decades. What we did, and this was the work of Gizem Gumuskaya, a PhD student in my lab, and a few other collaborators, was take tracheal epithelial cells from donors and put them in a different environment, where they did exactly the same thing the Xenobots do: they rebooted their multicellularity and created Anthrobots. If I showed you a video of an Anthrobot, it looks like some sort of unicellular prehistoric thing running around. They have autonomous motion. They have all kinds of interesting behaviors. Here's one other thing they do that's interesting. Take a Petri dish and seed a lawn of human peripheral neurons, so there's a nice neural network growing on the bottom of the dish. Take a scalpel and put a big scratch through it, a wound assay. Throw some Anthrobots in; they run around and can settle in the wound. Four days later, if you pick them up, you'll notice that what they've done in the meantime is take the two ends of that scratch and heal them together. They heal these neural wounds. Who would have thought that your tracheal cells, which sit there quietly for decades in your airway, have the capacity to run around, move autonomously, and heal peripheral nerve wounds when they come across them? It's quite new. And that's published. The thing that isn't published, which I can tell you a little bit about now (we're going to preprint it probably in a couple of weeks, if not earlier), is that we've looked at transcriptomes for both Xenobots and Anthrobots. One question you might ask is: what genes do these things express? The conventional story is that your genome tells you what your anatomy and your lifestyle are going to be. We looked at both Anthrobots and Xenobots, and we found thousands of genes expressed that the original source material does not express. These are things picked out of the genome, and once the preprint comes out you can see all the different categories, all kinds of amazing things that are perhaps relevant to their new lifestyle, which the native material does not express. Remember, in all of these there are no drugs, no synthetic circuits, no transgenes, no genomic editing. These are completely wild-type cells which have reassembled themselves into a new form factor, and now they have novel behaviors. What we have not yet published, and this will come later in the year, and I think it's important: as you said, Stu, yes, they reconfigure themselves into new anatomies and new behaviors, but what we haven't shown yet is any of the stuff that I'm really interested in, which is goal-directed behavior. We haven't published that yet. What we show right now, people can interpret as emergence, open-loop, feed-forward emergence: a bunch of low-level rules happen, and we know complex things happen from simple rules being iterated. We've been studying their behavior. We've been studying their various competencies. It's coming. There's a lot more. I'm 100% with you.
[09:56] Michael Levin: This is why we make these synthetic things: because we want to understand the plasticity. We want to understand the space of the adjacent possible and the bigger space it comes from: what is the morphospace of behavior, physiology, and morphology for these things? I agree completely. The one thing I want to say is that I'm on board with everything you've said, but there are two things that I tend to focus on in our work. One is this idea that we should be really careful not to confuse the formal models that we have with the actual thing. I'm not worried about Turing machines, models of Turing machines, models of the mind or any of that, because they're formal models. They don't necessarily capture everything that's of interest. There may be some use for them in thinking about cognition, but there's no reason to say that a brain or a human or a living thing is a Turing machine. I don't think anything is a Turing machine. I think we have these formal models that capture some slice of what we're looking at, and I think all these wranglings about what it really is are misguided. I don't think anything really is anything. We as observers have different formal models that we use; they are lenses through which we see these things. Some of them are better than others. Some of them are more useful in some contexts than others. I don't think we need to worry about what it really is. And where that really comes into play is in some of our latest work on simple algorithms. For the same reason that organicists look at living things and say your formal models of chemistry and so on are not sufficient to capture the beauty and the capabilities of life, for the exact same reason, we should be very careful about saying that our formal models of simple things, whether they be sorting algorithms like we study, or Turing machines, or anything else, fully and completely explain what simple things do. I don't believe any of that. I think all of these formal models have limitations, and we see those limitations as early as dumb sorting algorithms. We already see these things doing something that is not in the algorithm. And this is a paper; the preprint's been up for a while, and the real paper will come out in a couple of weeks. I think very early on in the spectrum of complexity, you already get an insufficiency of these formal models that we use. Can you look at something as if it were an algorithm? You can, and then that'll help you do certain things. But I don't think we should think that now we've plumbed its depths and that's all that it is. So that's why I really don't get too worked up about Turing machines or any of that, because it's one lens. Sometimes it helps, but I don't think it gives you everything you want to know about these things.
[13:16] Stuart Kauffman: I think it's lovely. Among the things to contemplate, I'd like to go someplace that's close to your Anthrobots, I think. There are more and more articles where people discuss going beyond the formal. What does it mean to be beyond the formal? We're still deeply in the Newtonian paradigm. Put that to the side. Kate knows all of this. I'm working with this wonderful guy you'll have to meet, our friend Jan Dyksterhaus. Jan lives in a house on a ****. Two years ago, we came up with the 140-species experiment, and we have funding for it. We're doing it in Holland. It's getting neater. We now have 70 fungal species, each DNA-identified and tagged at least to the genus level. We have 70 bacterial species, each DNA-sequenced, and each is stored separately right now at minus 80. Our plan, within months now (it's taken a long time to get here), is to mix these in equal numbers of each kind of species. We're going to have 140 species that have never seen one another before, plate them onto some sterilized dune sand, maybe 50 times onto a square meter, and watch them for two years. The question is, what do they do? Nobody's ever done this experiment. It's sort of like your Xenobot experiment: nobody's ever taken 140 species and said, "So what are you going to do with one another?" From the third transition, we can't deduce what they're going to do; that's step one. Step two, if pre-adaptations arise, mutations will arise over that time course.
[16:02] Stuart Kauffman: If we take 20 copies of the same system on 20 plots, do the same mutations arise or do different ones? Different ones. Can we see that a molecule that used to have one function has some other function? We can ask that in the Xenobots also. Meanwhile, Jan has an idea of villages. We put down this one-meter square, 20 or 50 times, and his guess is we're going to find little nodules in the soil, maybe a couple of millimeters in radius. Each is going to have some subset of the 140 species in it, say a specific seven bacteria and 11 fungi. Call that village one. How many times does village one occur? Maybe we'll find village two that has 18 bacteria and four fungi. How many villages are there? What's the size distribution of villages? Are they small, five to ten things? Are they huge? Do different copies of the same village accumulate the same mutations? We have no idea. I realized about a year ago that there's nothing special about catalysis. It just speeds things up. My example now is: I have a flat tummy, and you and I are bacteria. You can crawl over me to get food. I've afforded you a way to get to food, and an affordance is the same thing as an enzyme: it just lets something happen faster. So instead of my idea of collectively autocatalytic sets, it's a set of things that are a mutual affordance set, or a collective affordance set. My hope is that if the villages Jan is imagining are collective affordance sets, then we can ask of such a collective affordance set, roughly: how do you guys make a living with one another? That's also what's going on in your Xenobots and your Anthrobots. Then we can ask, is it easy for this to arise, Michael? And when it arises, are molecules and properties being used for their old functions or for wacky new ones that we could never have predicted? Is this idea of "life will find a way" truly generic? Are we seeing it, but we don't know to look for it, and it's been there all the time? Why didn't life vanish once it emerged? Maybe if you have enough species, it can always reconstitute itself. There's something beyond formalization that's emergent and collective, and so are the villages.
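Stuart's "collective affordance set" idea, a generalization of his collectively autocatalytic sets, can be made concrete with a toy pruning computation. The sketch below is an illustrative abstraction only: the species, their "needs," and what they "afford" are random made-up tokens, not anything from the soil experiment, and the pruning rule is a simplified stand-in for the formal RAF-style machinery.

```python
import random

def find_collective_affordance_set(species, environment):
    """Iteratively drop any species whose needs are not all covered by the
    environment plus what the *other* remaining species afford; repeat until
    nothing changes. What's left is a mutually enabling subset (a 'village')."""
    remaining = dict(species)
    changed = True
    while changed:
        changed = False
        for name in list(remaining):
            needs, _ = remaining[name]
            offered = set(environment)
            for other, (_, affords) in remaining.items():
                if other != name:
                    offered |= affords
            if not needs <= offered:
                del remaining[name]
                changed = True
    return set(remaining)

def random_community(n_species, n_resources=60, n_needs=3, n_affords=2, seed=3):
    """Random toy community: each species needs a few resources and affords a few."""
    rng = random.Random(seed)
    resources = [f"r{i}" for i in range(n_resources)]
    environment = set(rng.sample(resources, 10))         # what bare dune sand provides
    species = {
        f"s{i}": (set(rng.sample(resources, n_needs)),   # what this species needs
                  set(rng.sample(resources, n_affords))) # what it affords to others
        for i in range(n_species)
    }
    return species, environment

if __name__ == "__main__":
    for n in (10, 20, 40, 80, 140):
        community, env = random_community(n)
        village = find_collective_affordance_set(community, env)
        print(f"{n:4d} species seeded -> {len(village):4d} in the mutually enabled set")
```

Sweeping the number of seeded species is one way to ask Stu's question of "how many parts do you have to have": with only a few species almost nothing can sustain itself, while with many species a large mutually enabling subset, a "village," tends to survive.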
[18:48] Michael Levin: I think one of the things that we need to do with formalization is to remember that formalization assumes one observer who is trying to formalize. What's very different in biology, and this is the concept of polycomputing that Josh Bongard and I have been developing, is that in any kind of biological system you have an incredibly rich soup, a multi-scale soup of observers that are constantly trying to interpret and hack each other. So the interpretations, the models that they're making of each other, how they interpret the signals, it's a very pluralistic kind of vision, because they're doing their best. Every system is doing its best to understand what these other systems are doing. It makes these models, and there isn't one unique, objective, privileged one that is correct; everything is up for interpretation. This is in the entropy paper that I just put out a few weeks ago, and what you just said is really important. Not only do Xenobots and these synthetic chimeras have to handle novelty; I actually think every normal embryo and every normal standard biological system is figuring out what's going on from scratch every single time. I do not think any of these things overtrain on the past. I don't think they take the past too seriously in the sense of: here are my genetic affordances, here is my environment, I know exactly what it means and it's fixed. I think they reinterpret this stuff on the fly every time. Now, it just so happens that with the same starting conditions they end up doing the same thing, which gives us this idea that development is reliable, it's fixed, this is what they know how to do, this standard embryo. But the reason we see these amazing examples of plasticity, where we can put an eye on the **** of a tadpole and it can still see out of that eye, and we can scramble the face and all the organs still come back to where they need to be, is that all of these things work because the biology never expected anything to have a fixed meaning: not the genes, not the signaling factors, none of it. This is what we're working on now on the evolutionary scale, but also on the cognitive scale. Because if you think about it, at any given moment of life, you don't have access to the past. What you have access to is the engrams, the memory traces, that your past self has left for you. They're messages from your past self. Now, when you get these messages, you don't necessarily know what they meant. You have to reinterpret them, which gives rise to confabulation and to active construction of the cognitive self. So I don't think this showed up when brains evolved. I think all of biology is doing this all the time. I live forward, I don't live backward. I need to figure out a coherent story about my environment and myself that I can tell right now, based on whatever information I got from my past self. I don't care what my past self meant by it. My job is to reinterpret it right now in the most adaptive way possible so that I know what to do next. And that's the key aspect: biology doesn't optimize for fidelity of information the way that our computational devices do.
[21:25] Michael Levin: All of our computing devices, you better believe your bits need to stay what they were before. But I think biology assumes that the medium is unreliable and is not tied to one particular interpretation. Everything's going to change. Not only is your environment going to change, but your own parts are going to change. They're going to mutate, they're going to evolve. You have no idea. This is why you can do things like this: imagine a salamander. Very early on, you can make them have multiple copies of their genome, so they're polyploid newts. They end up with very large cells. The actual salamander is still the same size, so they use fewer, bigger cells to build the same structures. And if you make the cells really gigantic, structures that normally have 8 to 10 cells in a circle around a kidney lumen get built by one cell wrapping around itself, leaving a hole in the middle, and giving you the same lumen. Completely different molecular mechanism: no cell-to-cell communication; it's cytoskeletal bending. One cell bends around itself. Now, look, you're a salamander coming into this world. What can you count on? You can't count on having the right number of chromosomes. You can't count on having the right cell size. You can't count on having the right cell number. You need to have an algorithm that is going to figure out some way. And if you're completely messed up, you might be a Xenobot. You might not make it to be a salamander, but you're going to be able to put yourself together despite incredible variety. You can't count on any of that stuff. And that's why, cognitively, I think it's the same problem. I think all of this is cognition, just at different scales. It's uncertainty and reinterpretation on the fly. There is no fixed model. Everybody's doing the best they can and reinterpreting the information that they have.
[24:04] Stuart Kauffman: I'm loving this. I want to go in two directions. The second direction will be to come back to Kate, because the confabulation is making sense of your world willy-nilly. Somehow it's going to hang together; the criterion is that the whole hangs together somehow. But this also gets at whether or not a living organism is, at any moment, something you might call a machine. So let's take a typical machine, a cannon or a meat grinder. The parts do the same thing over and over again. It's a metal meat grinder, and the parts do the same thing. There's no reinterpreting of anything. In that sense, it's like a Turing machine. The cogs on the wheel turn the cogs on the other wheel, and the millstone grinds slowly. I think what you're saying is that we're really trying to say the same thing: at any moment the causal properties in an organism work together, and the causal features that get used at any moment might keep varying in all kinds of weird ways. So you get this huge polyploid cell that builds a hole through itself; it's still doing the lumen-making thing. In other words, maybe these are all stories of life finding a way, of a sufficiency of jury-rigging a way to make sense of it all in a holistic way. And Kate, it seems to me that this comes back so strongly to your notion of something like emotional coherence. There's a word, coherence, here, which is like the Kantian whole, like the Xenobot managing to do something, getting on with being a Xenobot, and that's totally missing from our physics. It's totally missing from a machine metaphor of clockworks. Try it, Kate. What do you think?
[26:23] Katherine Peil Kauffman: I've been trying to create the biggest picture, the simplest picture, that can be grounded in direct experience. I think that understanding the function of emotion is something that's going to change things. It reverberates down through biology and even physics, and all the way up into social stuff. We need to identify the agent in the machine, because all the way up and all the way down there is, as Michael suggests, this plasticity and this on-the-fly creativity in the moment that's adaptive and novel. It's part of how life works at a very deep level. That to me is what's most important, and it seems to come up in almost all the conversations we're having. We're working with a quantum physicist to bridge psyche, biology, and physics. It's all coming down to what the structure of a Kantian whole is. We need to clarify something that's happening at every level. To me, the way to do that is to think about boundaries, internal and external, up and down. It makes the most sense to think about internal and external on every scale, because external is going to be from the top down, from the outside in. So the fundamental boundary is between the self and its environment, whatever that self might be. I think the concept is really important, because a Kantian whole is an identity structure, and you have flexible activity and decision-making toward two purposes at every level. One of them is to retain the stability of the organization and the order of the internal, and the other is to adapt to the chaos and the requirements of the external world. So you have a self/not-self boundary that's built into the story. We're working on defining that boundary as part of this dance between res potentia and res extensa and the role that consciousness and intelligence play. I've been interested in Don Hoffman and how he's talking about the...
[29:14] Stuart Kauffman: Oh, Mike talked to Don Hoffman, you remember?
[29:16] Katherine Peil Kauffman: Mike has the coolest conversations with everybody. It's really an honor to have access to this and to participate in it. The idea is that at this nexus of what we mean by a Kantian whole, there is a bottom-up amplifying feedback and a top-down homeostatic feedback; that's the closure that gives you constraint closure, operational closure. It's information closure, because something is being amplified, and you see the parts doing the same thing as the wholes in their environment, and at every level you have both going on. In the paper I wrote about this in 2014, I used the chemistry of the E. coli bacterium and infotaxis as an example of what creatures can do, and of what early creatures must have been doing all along, to argue that there is sentience and intelligence that has always played out in the evolutionary process on here-and-now time scales. Alec had used a cybernetic feedback model. It's great because it fits what I'm saying and what flows upward in terms of the behavior of organisms. Then Mike's work came out, and I realized these feedback dynamics and the circuitry, this is bioelectrics. So there's this language that Mike has his thumb on the pulse of that can link this missing piece of the story to the network dynamics of systems science, the dance of parts and wholes. What I also see happening is Richard Watson's work with song. The idea of resonance is one of the best ways to get from the physics up to any kind of sentient experience, any kind of embodied way of harmonizing with the physics. What I see going on with Mike's work, and I'd love to chat with you alone about this sometime to straighten me out, is that the cell membrane at resting potential seems to be associated with some sense of home state. I like to say it's the edge-of-chaos state, because the cell knows that's where it wants to be.
[32:14] Katherine Peil Kauffman: It's least energy, but there's an awareness of that and an urge to maintain it, to get back to it, because you're constantly being tossed off of it by energy exchange and everything else. So that's the right state. But it's not just a state of an individual. There are two aspects of identity. It's the whole that always needs to be on the edge of chaos, but it's also the part that needs to be in a collective with others in the same space, such that they can all interact and do something communally. So I see that as the harmonics where we can sync together in a collective way. Richard Watson has thought about this carefully. He's talking about simple binary motions that have to do with quantum mechanics, rotations and trajectories, where the tiniest bistable rotation lets you see a whole bunch more of something that's close to self, the nearest neighbors, and you can sync up with them through harmonic resonance. What I was describing in my paper as a three-step loop, where there's an ongoing comparison between the self and the not-self environment, I see in Mike's work happening at the cell membrane. It's the use of the bioelectric polarity: more polarized is associated with more free energy that I can use, a dynamic that I call self-development, while loss of polarization is associated with less free energy and a "don't let this happen" situation. It's about plasticity. I'll get some of this written down better. But for the algorithmic bit of it, we need to clarify the machine, because there is a bistable switching happening that's common and allows this connectivity all the way up. I think the edge-of-chaos story and Stu's original idea of Boolean networks are central to cracking that algorithmic code. I think it's the edge of chaos. So that's a little bit of a splatter.
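Since Kate invokes Stu's Boolean networks and the edge of chaos, here is a minimal random Boolean network in Python, in the spirit of the classic NK-style models. It only illustrates the standard damage-spreading picture: with unbiased random update functions, networks with fewer than two inputs per node tend to be ordered, more than two tend to be chaotic, and K = 2 sits near the critical "edge of chaos." The network size, run length, and seeds are arbitrary demo choices.

```python
import random

def random_boolean_network(n, k, rng):
    """Each of n nodes reads k randomly chosen nodes and applies a random
    Boolean function, stored as a random truth table over the 2**k input rows."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs' current values."""
    new_state = []
    for ins, table in zip(inputs, tables):
        row = 0
        for node in ins:
            row = (row << 1) | state[node]
        new_state.append(table[row])
    return new_state

def mean_damage(n=200, k=2, t=50, trials=20, seed=0):
    """Average Hamming distance after t steps between two runs whose initial
    states differ by a single flipped bit (damage spreading)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        inputs, tables = random_boolean_network(n, k, rng)
        a = [rng.randint(0, 1) for _ in range(n)]
        b = list(a)
        b[0] ^= 1                                  # the tiny perturbation
        for _ in range(t):
            a = step(a, inputs, tables)
            b = step(b, inputs, tables)
        total += sum(x != y for x, y in zip(a, b))
    return total / trials

if __name__ == "__main__":
    for k in (1, 2, 3, 4):
        print(f"K={k}: damage after 50 steps spreads to ~{mean_damage(k=k):.1f} of 200 nodes")
```

In the ordered regime the flipped bit fizzles out; in the chaotic regime it spreads through a sizeable fraction of the network, which is the distinction the edge-of-chaos story turns on.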
[35:12] Michael Levin: No one loves bioelectricity more than I do, but one thing about bioelectricity is that it is really good at showing you how selves and their goals, which I take to be very closely related things, scale up. As you point out, starting from the homeostatic properties of a single cell's bioelectric pattern, when cells join into larger-scale electrical networks, the size of the goals they can pursue not only enlarges, so the cognitive light cone gets bigger, but the goals get projected into new spaces. For example, whereas a single cell was only concerned with metabolic, physiological, and bioelectric goals, the collective is now concerned with morphogenetic goals. Do we have a proper limb? If we don't, let's build more until we do. Bioelectrics is great at showing how minds can scale. That's one reason why we study it so much: it shows in a very clear, molecularly realistic way how the goal states of systems can enlarge and project into new spaces. But the interesting things that we care about in living things, and I would say some non-living things: cognition shows up way earlier than that. For example, we've shown that if you look at it the right way, and this is what Stu was saying before, you could think of it as a paperweight and then you'll miss a bunch of stuff. If you look at it the correct way, even very simple gene regulatory networks, we're talking five nodes and up, can do six different kinds of learning. They can do habituation, association, Pavlovian conditioning. You don't need the rest of the cell. You don't need a nucleus. You don't need bioelectricity. You don't need a membrane. You don't need any of it. Just from having certain specific relationships among the nodes, you get these transcription-independent behaviors: you don't need transcription, you don't need promoters, you don't need any of it. Just from a small number of molecules that turn each other on and off, you can already get learning. Some aspects of what we like about cognition, the ability to learn from experience, start up very, very early on. The other thing that starts up very early on is the ability to do things that were not explicitly in the algorithm. This creativity, Stu, when you called it "life will find a way," I think that's brilliant. In my paper, I called it "beginner's mind." Same kind of idea: you are not tied to some fixed interpretation of what's going on; you are an agent whose job it is, and whose opportunity it is, to figure out how to deal with new affordances: what can I make of this? That's the basics of intelligence. Use the things you have in some new way. I think that starts extremely early on the scale. I don't think it requires even cells necessarily. I think very simple mechanisms, in their own simplified way, already exhibit this ability to do more things than is apparent to us if we use an algorithmic lens on them. Here's my take on machines. If we think that these are binary categories, you've got machines, and then you've got living things that are not machines, I think we're in huge trouble. What I like is a spectrum, and I think that's perfectly fine. If your orthopedic surgeon wants to see you as a machine, that's cool, because he's got hammers and chisels; that is the frame they're using, and it's going to be okay for some things. If your psychotherapist thinks you're a simple machine, that's no good. There are different lenses that you can take on different things.
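The claim that a handful of interacting molecules can already habituate is easy to illustrate with a generic motif. The sketch below is not one of the specific gene regulatory networks from the papers Michael refers to; it is just the textbook incremental-inhibition motif (a stimulus drives a fast response node and a slow inhibitor of that node), with made-up rate constants, showing the response shrinking under repeated identical stimulation.

```python
def habituation_demo(n_pulses=8, period=20.0, pulse_len=5.0, dt=0.01):
    """Small motif: stimulus S activates response R quickly and inhibitor I
    slowly; I represses R's production. Because I integrates the stimulus
    across pulses, R's peak shrinks with each repetition (habituation).
    All rate constants are arbitrary illustrative values."""
    R, I = 0.0, 0.0
    peaks, peak, last_pulse = [], 0.0, 0
    for step in range(int(n_pulses * period / dt)):
        t = step * dt
        pulse = int(t // period)
        if pulse != last_pulse:            # a new pulse starts: record the last peak
            peaks.append(peak)
            peak, last_pulse = 0.0, pulse
        S = 1.0 if (t % period) < pulse_len else 0.0
        dR = 2.0 * S / (1.0 + 5.0 * I) - R   # fast, inhibitor-gated response
        dI = 0.05 * S - 0.01 * I             # slow build-up, even slower decay
        R += dR * dt
        I += dI * dt
        peak = max(peak, R)
    peaks.append(peak)
    return peaks

if __name__ == "__main__":
    for n, p in enumerate(habituation_demo(), start=1):
        print(f"pulse {n}: peak response {p:.2f}")
```

Association and Pavlovian-style conditioning need slightly richer wiring, but the basic point stands: a few molecules turning each other on and off can already show experience-dependent behavior, with no nucleus, membrane, or brain required.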
[38:37] Michael Levin: Is the meat-grinder kind of machine metaphor useful for biology? Not really. But there are other aspects of the machine metaphor that are useful for other things. They do not capture the whole thing in any way. I don't think we should have these binary categories where we say it really is a machine or it really isn't a machine. I don't believe in any of that. I think these are all frames that we take on. The beauty of it is that in the empirical testing, we can compare. You can bring your metaphor, I bring mine, and we say, what does this help you invent? Did you make Xenobots? Did you make Anthrobots? Did you make some sort of regenerative intervention? Well, no, because I was using a bottom-up machine metaphor where molecular rules percolate up to complexity. Well, I did, because I was using a top-down controller where I thought the cells actually have the ability to solve certain problems, and I've manipulated them to solve it in a different way. There it is. That's what I think about these metaphors. The other thing I wanted to come back to is this business of deductive and inductive. This is something I was trying to lay down in this latest entropy paper. If you think about the butterfly-caterpillar thing, there are memories in biology that survive drastic remodeling of the brain. Not only do they survive; it's much like any species: a species always has this paradox. If I don't change, I'm going to die out, because the environment will change. Or I can change, but then I'm not the same, so then I'm gone again. You have this paradox. Memories have the same thing: if you're a caterpillar, you learned to associate leaves with certain color stimuli, and you were trained for this memory, but those exact memories are of no use to the butterfly whatsoever. The butterfly doesn't care about leaves; it flies, it doesn't crawl, everything is different. For you to survive, you have to change. If you think about learning, you have all these distinct experiences. They get squeezed, by a process of generalization, into a simple rule, some engram that you've got. The particulars are gone; you've learned a pattern. And even algae can learn patterns. We have a paper coming out next week showing surprise in algae. They get surprised because they have expectations because of patterns. It's algae. So what happens is...
[42:02] Stuart Kauffman: That's the title of the book, "The Surprise Algae."
[42:06] Michael Levin: The Surprise Algae. Who would think? But they can be surprised because they catch on to patterns and then they have expectations. So you squeeze your experiences down. I visualize this as a kind of bow tie, like the middle of an autoencoder, where you take all these experiences and squeeze them down into a very compressed representation, because you've gotten rid of all the correlations. And so that's your past. Your past self has done that. It has left for you some engrams. But now your current self has to re-inflate those. And you can't do it deductively, because you're missing information. You don't have all the information you'd need to know exactly what it means; that's the whole point of the compression, you've gotten rid of all of that stuff. So now you have this thing, and you have to say, what the hell does this mean? What does this memory mean? If your brain stays exactly the same and everything else stays exactly the same, then your interpretation is likely to be what your past self's interpretation was. It doesn't have to be, and you have no guarantee that it will. And so that becomes a creative process, very much a creative process, because you cannot deduce what it meant; you don't have all the details. I agree completely. I think these deductive metaphors only take us so far. They're useful for some things, but they only take us so far, because the fundamental thing that you see in evolution, in development, and in cognition, and I think all of these are cognition, is this creative interpretation of the stuff around you, of affordances, and your memories are also your own affordances. You are handed these molecular or biophysical traces, whatever they are. You need to figure out how to use them, and you are not necessarily tied to how your past self used them. So I completely agree with you about this deductive versus inductive thing. The creativity and problem solving is at every level. As you said, Kate, it's in the homeostatic bioelectrics, but even below that and above that, it permeates everywhere.
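Michael's bow-tie/autoencoder picture of memory, lossy compression followed by creative re-inflation, can be made concrete in a few lines of NumPy. The sketch below uses plain PCA as a stand-in for the autoencoder waist; the "experiences" are synthetic random vectors and every number in it is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "experiences": 200 samples in 20 dimensions that mostly vary along
# 2 latent factors, plus idiosyncratic detail that will not survive compression.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 20))
detail = 0.3 * rng.normal(size=(200, 20))
X = latent @ mixing + detail

# The bow-tie waist: keep only the top two principal components (the "engram").
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
codes = Xc @ Vt[:2].T                      # compressed representation
recon = codes @ Vt[:2] + X.mean(axis=0)    # the best re-inflation the codes allow

lost = np.linalg.norm(X - recon) / np.linalg.norm(X)
print(f"fraction of the original detail lost in compression: {lost:.2f}")

# Two different experiences can land on nearly the same compressed code, so the
# re-inflating side cannot deduce which one actually happened; it must interpret.
d_code = np.linalg.norm(codes[0] - codes, axis=1)
twin = int(np.argsort(d_code)[1])          # nearest neighbour of experience 0 in code space
d_orig = np.linalg.norm(X[0] - X[twin])
print(f"experiences 0 and {twin}: code distance {d_code[twin]:.3f}, original distance {d_orig:.3f}")
```

The gap between the two printed distances is exactly the information the compression threw away, which is the gap that, in Michael's telling, the present self has to fill in by reinterpretation rather than deduction.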
[44:09] Katherine Peil Kauffman: Can I make one quick comment? All of the information is getting compressed down. I love your bow tie; it fits with the Tao story. But what would you say is the most important compact information that is stored and can be re-inflated? And does it include any kind of value system?
[44:37] Michael Levin: I think it must, because if you think about it, you train the caterpillar on some color cue, and then here are the leaves. The caterpillar learns: I crawl this way and then I get these leaves. The butterfly doesn't care about leaves; it wants nectar. So one of the things it has to remember is not "hey, I got leaves," maybe not even "I got food," maybe just "this was really nice, I enjoyed that." How much generalization? We don't actually know in that particular case, but this is something we can study. We're studying memory in Anthrobots, because we're going to check whether you can recover some of the donor's memories from these Anthrobots. That's a whole other thing. In that squeeze-down, maybe the valence of the experience is what's critical. What's the first thing you need to know about any kind of procedural memory? Was that a good thing I did or was that a terrible thing I did? That's the core of it. After that, you can layer on top: yes, it was good and it was delicious, or yes, it was good and something else happened. That's another thing: a lot of people think about what the memory mechanism is, where the encoding is, what the engrams are. At this point, I suspect there is no single memory medium. I think what's happening, and this is a total conjecture at this point, but we're doing some work on it, is that all of the levels, the molecules, the cytoskeleton, the signaling pathways, the lipids, are being used as a reservoir in the sense of reservoir computing, and what the neurons are doing is interpreting the patterns in the reservoir. Maybe there is no single memory mechanism. It's incredibly opportunistic, and it uses whatever scratch pads it can. Future you is good at reading those and trying to make a coherent story out of them. What the nervous system is doing is more interpretation than actual storage. The whole idea of synaptic memory, memory stored in synapses, is very contentious now. There are many cracks in that story. I think you're right. I think the emotional valence is probably key to these things.
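For readers who haven't met "reservoir computing": the idea is that a fixed, untrained dynamical system (the reservoir) is driven by inputs, and only a simple readout is trained to interpret its internal states. Below is a minimal echo state network in NumPy; nothing about it is specific to cells, cytoskeletons, or neurons, and the task (recovering a delayed copy of the input) is just a stand-in memory problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Input signal and target: the readout must recover a delayed copy of the input,
# i.e. extract a memory that is stored only implicitly in the reservoir's state.
T, warmup, delay = 2000, 200, 10
u = np.sin(np.arange(T) * 0.05)
target = np.roll(u, delay)

# A fixed random reservoir (never trained), scaled for stable echo-state dynamics.
N = 200
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9
W_in = rng.normal(size=N)

states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Only the linear readout is trained (ridge regression): it *interprets* the
# patterns left in the reservoir, which was never designed to store anything.
S, y = states[warmup:], target[warmup:]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
rmse = np.sqrt(np.mean((S @ w_out - y) ** 2))
print(f"readout error (RMSE) on the delayed-signal task: {rmse:.4f}")
```

The conjecture in the conversation is analogous: many physical layers of the cell could serve as the reservoir, and the nervous system's main job would be to train and apply the readout.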
[47:06] Katherine Peil Kauffman: I've been suggesting that there's a fundamental bit of semantic information that is binary. It gives a yes/no value system: no to anything that's going to destroy the structure and the stability, and yes to everything that's going to be a novel, adaptive opportunity. Those two things come from the basic senses of "Am I in my optimal state?" and "Am I able to connect with others?" There's a dual sense of identity at the heart of the structure, where I am at once whole and balanced as an individual, but I am also whole and balanced within a collective. You have a social, collective aspect and an autonomous aspect of identity all the way down.
[48:10] Michael Levin: I agree with all of that. There's one layer I think we need to think about, which is that as we all change, the valence can sometimes change. For example, think about the metamorphosis from dragonfly larva to dragonfly. When you're a dragonfly larva, you love being underwater. When you're a dragonfly, you don't like that at all. One of the things that has to happen under certain changes is that valence may also have to adapt to your new self. You can't just leave it as it was.
[48:54] Katherine Peil Kauffman: The valence itself isn't adapting, but the memory of it is. And it's because of the valence that you can have those memory categories. This is straight Pavlovian conditioning in that you get extinction after a while. But it's the fundamental binary information that stays the same.
[49:14] Michael Levin: If you have a memory that something was good for you, by the time you retrieve it and you have now changed, you're a different being than you were before, you're going to have to reinterpret. You can't just hold on to "this was good," because by now it may no longer be good. So it can't be constant. You can't hold all those things fixed as you change.
[49:38] Katherine Peil Kauffman: Well, it's not. That's why pain tells us, hey, this isn't working anymore. That's the reality sandwich of the fact that you're no longer the same. That's the way cognition actually works. But the substrate of emotion is a different process and they work together. And that's where some of the cognitive science can be misleading because you've got to make that distinction. Emotion is always from the embodiment. And there are specific things that the embodiment needs. And of course, when you have something as beautiful and dramatic as metamorphosis, those things are going to change. I want to understand.
[50:23] Michael Levin: Is it rare, though? That drastic example is rare, but all of us were kids once, and our preferences changed drastically across puberty; things that seemed really important no longer are, and vice versa. I think we all have a smaller version of that. We're all changing continuously.
[50:49] Katherine Peil Kauffman: This is where you get the physiological needs of the body as the basis of universal internal goal states and needs. That's what the basic emotions help us understand. Certain things, agency, autonomy, and community, are all necessary. But if community costs you your autonomy and your freedom, you break away from it in that moment. These things are mediated by the self-regulatory information that emotion provides. Once you get that, it weaves together and gives you a better understanding of the literature that's out there concerning the self, concerning motivations, concerning how other people can get into your OODA loop by providing rewards and punishments that engage your bodily responses with your mind out of the loop. There's a lot more going on there. But what we're talking about is the very beginning of whatever algorithmic aspect the machinery has. Because when I say machine, I'm talking about the fact that there are certain patterns of living systems that have evolved that are emergent from the laws and forces of nature. I'm not at all sure that bioelectricity doesn't go all the way down in terms of quantum biology. In fact, I think it does. But there clearly are things that are going to happen that are deterministic, whether we like it or not. I'm going to drop my pen and I know it's going to fall. One of the things that is part of that determinism that we are not creating is that when stuff goes wrong, we feel pain. We don't like it. If we were constructing everything, we wouldn't have any need for that. It's a reality sandwich that's incredibly important.
[52:50] Michael Levin: We're working now on some things around rewards and punishments for gene regulatory networks. How do you punish a GRN? There had better be a way, because you can do it with a paramecium, and that thing's chock-full of GRNs. We're trying to figure out the basement of it.
[53:12] Katherine Peil Kauffman: I'm so glad you're doing this work. It's just music to my heart that you are understanding the importance of what you're doing and making so many connections for so many of us. Thank you. Go ahead.
[53:26] Michael Levin: Thank you.
[53:27] Stuart Kauffman: What's the paper you've just published in Entropy, Mike?
[53:30] Michael Levin: I'll send it to you. It's about agential memories as a cognitive glue.
[53:41] Stuart Kauffman: I have written the story. I've been doing some cosmology, because every Jewish fruit fly geneticist should do something. It's a long, weird story. A new colleague in Botswana and I think we know what dark matter and dark energy are, and if we're right, they're a creation of space-time by matter. That's another story, but it's really weird. I published this paper with Sudip Patra in India, with the outrageous title "Cosmos, Mind, and Matter: Is Mind in Space-Time?" Mike, it might be right; it is really weird. It's online. It has been accepted for publication by Biosystems. It's out, except that all the references are screwed up, but that should be straightened out in a few days. I'd like to send it to you.
[54:38] Michael Levin: Please do. Yeah, please do. Yeah.
[54:41] Katherine Peil Kauffman: They're working on a project on the evolution of free will that gets at exactly these issues. Hopefully we'll be able to put something together.