
Discussion with Stuart Kauffman and Katherine Peil Kauffman

Stuart Kauffman and Katherine Peil Kauffman join a one-hour discussion on emergent cognition, emotion and bioelectricity, bottom-up collective intelligence, and Kantian wholes. They relate these ideas to self-construction, information, and bioelectric selves.





Show Notes

This is a roughly one-hour discussion with Stuart Kauffman and Katherine Peil Kauffman. Their two papers referenced in the discussion are:

Stu's: https://royalsocietypublishing.org/doi/10.1098/rsfs.2022.0063

Kate's: https://journals.sagepub.com/doi/pdf/10.7453/gahmj.2013.058

CHAPTERS:

(00:01) Sorting Algorithms, Emergent Cognition

(12:22) Emotion, Valence, Bioelectricity

(20:12) Bottom-Up Collective Intelligence

(34:11) Kantian Wholes, Bioelectric Selves

(48:13) Self-Construction And Information

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:01] Stuart Kauffman: Can we start with your sorting algorithm paper?

[00:08] Michael Levin: Sure.

Stuart Kauffman: You keep doing the most amazing things. Why don't you talk about it for a while, and then Kate. I think it ties into a bunch of stuff that I've done too, Michael, that I hadn't put together. If that's a possible agenda, other possible things to consider, from my end and not yet Kate's: I sent you the paper on a third transition in science. And the other thing is I've gotten into this very weird stuff where von Neumann's universal constructor, his self-reproducing system, is, I think, fundamentally wrong about life. It's very strange, but it's an awful lot to talk about. So why don't you take the lead?

[00:55] Michael Levin: That sounds great, and I'd love to hear your thoughts on all of that. I had seen your paper before, and I'd also seen the paper that you sent me, and there's a lot in common here. There are two fundamental things that I try to address with that algorithms paper. One is I'm interested in extremely basal cognition. In other words, I want to understand what are the simplest possible systems where features start to creep in that make the system amenable to the tools of the behavioral and cognitive sciences. Where does it come in? The thing with biology is we work on simple biological systems, but in biology there's always more mechanism to be discovered. So it's always possible that somebody says: you just wait long enough, you'll find a mechanism for it; evolution baked it in there somewhere. So I was looking for an extremely simplified minimal model, which is transparent and deterministic. These sorting algorithms are that; people have been studying them for many decades. Every computer science student plays with them. We think we know what they do. There are only about six lines of code. They're transparent, they're deterministic. There is nowhere to hide. There is no more explicit mechanism. The algorithm is what it is. That's it. There are no more baked-in features to be found. So basal cognition is one angle. The other thing I'm really interested in, which ties into what your two papers were about, has to do with the emergence of novel agents, and then the emergence of the goals of these novel agents. Typically, when we look at a standard biological system and ask why it has this structure or behavior, the answer usually is eons of selection. It's been selected to do specific things. So my question is: for systems that have never been here before, where do their goals come from? It's not just the emergence of complexity, which is easy (complexity can emerge from simple rules), but the emergence of basal intelligence: goal-directedness, some competency in William James's definition of intelligence, the ability to reach the same goal by different means. Where do those come from? Where do the goals of novel composite systems come from? That's what I'm very interested in. We face this in the lab with Xenobots and now with Anthrobots. With these algorithms, we tried to make it as simple as possible. You have these standard algorithms, and we only made two changes. Otherwise, they're exactly as they always have been. The first change is that they are now distributed bottom-up, meaning that every cell, holding a certain number, follows the algorithm based on what its neighbors are and what they are going to do. There is no omniscient, top-down, universal controller running all of it. Each one has its own local preferences and its own local view of the world. The second change is that we break the assumption of a reliable medium. In other words, typically with these algorithms, when the algorithm says swap two numbers, they swap, and you assume that's it. In our case, sometimes the cells are broken: they either don't initiate swaps or they refuse to be swapped. That's it.
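A minimal sketch of that setup (not the paper's actual code; the single bubble-style local rule, the string length, and the breakage rate here are all illustrative assumptions):

```python
import random

def step(values, algotypes, frozen):
    """One pass of a fully distributed bubble-style sort: each cell, visited
    in random order, compares itself with its right-hand neighbor and swaps
    if they are out of order. There is no global controller and no success
    checking; any swap involving a frozen (broken) cell silently fails."""
    for i in random.sample(range(len(values) - 1), len(values) - 1):
        j = i + 1
        if values[i] > values[j]:
            if frozen[i] or frozen[j]:
                continue  # broken cell: the swap just doesn't happen
            values[i], values[j] = values[j], values[i]
            algotypes[i], algotypes[j] = algotypes[j], algotypes[i]

# A random string of cells carrying two algotypes, with a few broken cells.
n = 30
values = random.sample(range(n), n)
algotypes = [random.choice("AB") for _ in range(n)]
frozen = [random.random() < 0.1 for _ in range(n)]
for _ in range(200):
    step(values, algotypes, frozen)
```

Because broken cells never move, they act as the barriers discussed next; the algotype labels simply ride along with the values they are attached to.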

[04:44] Michael Levin: We don't add any code to the algorithm to test whether an operation actually succeeded. We don't give them any way of knowing how well the sorting is going overall. We don't have any of that. It's the traditional algorithm. I think there are two especially important things in that paper. The first one is on the basal cognition end: one version of competency in navigating a problem space is what I call delayed gratification. It's the idea that when you come upon a barrier in your space, sometimes to go around that barrier you have to temporarily be doing worse. William James's example is two magnets separated by a piece of wood. The magnets are not able to go around the piece of wood, because to do that they would temporarily have to get further from each other. They're not smart enough to do that. They're always going to be minimizing that distance. So all they're ever going to do is sit there pressed up against the wood. He says, now look at Romeo and Juliet. They've got physical barriers, they've got social barriers, but they have more skills. They have planning and memory. They can temporarily get further from each other in order to get closer afterwards. Your ability to temporarily make things worse in order to later achieve certain gains is what I call delayed gratification. We simply asked what happens if you introduce barriers into these algorithms' journey towards being sorted. By barrier I mean a cell that is broken and just not going to move. You need to move it, but it isn't going to move. What we found is that when they come upon these barriers, they actually backtrack. The sorting of the whole string gets worse temporarily. Then they rearrange a bunch of other stuff and are able to do better. This is completely emergent. This isn't baked into the algorithm; there's nothing in the algorithm about any of this. The algorithm doesn't ask whether the cell has moved. It doesn't ask how you're doing. It doesn't say anything about being able to backtrack. It's the same old traditional algorithm that everybody's been playing with. It turns out that it has this unexpected capacity for delayed gratification in its problem space. That's one big thing we found. The other thing we found comes from giving the algorithm to each cell, as opposed to having one centralized one: we get to do the kind of experiments we do in biology, which is to make a chimera. In our lab, we make frogolotls. Frogolotls are some cells from a frog embryo and some cells from an axolotl; each of them has different hardware. They have different genetics. You smush them together and they get along perfectly well and make a new organism. You could ask the question: baby axolotls have legs, tadpoles don't. If I combine frog and axolotl cells, is a frogolotl going to have legs or not? We have all the genetics. You have the genome of the frog, you have the genome of the axolotl, and you still can't say whether they're going to have legs or not, because you can't directly read the collective decision-making in anatomical space from the protein-level hardware, which is what you get from reading the genomes. We made these chimeric strings and found they still sort perfectly well, even if they're made up of different algorithms. The amazing thing is what happens if you ask, during that whole process of sorting, what the distribution of the two algorithms is. Adam Goldstein calls them algotypes; I think it's a good word. What's the distribution of these algotypes within any given string?

[08:33] Michael Levin: You find something really crazy. At the beginning, let's define this notion of clustering. Clustering just means I look next to me and ask: what's the probability that the cell next to me is the same algotype as I am? Keep in mind that these algorithms do not have any notion of algotype in them. They don't store what algorithm they are. They don't know how to check their neighbors. None of this is explicit. The algorithm doesn't do any of that. This is all completely emergent somehow. It turns out that at the beginning of the whole process, the clustering is 50%. It's at its lowest point. And it has to be that way, because we assign the algotypes randomly to the numbers in the string. At the end, it's also 50%, because by the time you've sorted them according to number, the assignment is random again, just like it was at the beginning. So it's 50% at the beginning and 50% at the end. But in the middle, it's quite a bit higher than that, because during that whole process, cells with the same algotype try to hang out together. And they spend as much time as they can together. And then what happens, and this is crazy to say, is that it's almost a minimal model of the human condition, where you get a certain amount of time to do interesting things, but the laws of your universe eventually yank you from where you were trying to be, because the sorting algorithm can't be denied for too long. It's going to sort the numbers eventually. But in the middle, they get to hang out together. We also did this thing where I said: let's see how much effort they put in, how much do they really want to cluster? And the way to test it is to allow numbers to repeat. If I allow you to have multiples of every number, then as long as all your fives are in the correct location, you can cluster: half of the fives are one algotype, half of the fives are the other, and you don't get pulled apart. You can keep that. If you allow them to have multiples, then the clustering goes even higher, because it obviously wants to cluster more. It's just that eventually the sorting algorithm takes over; it can't be resisted anymore. So there's this crazy innate tendency for them to cluster. I don't know what causes it, but I have a hypothesis that we're not sure of yet. I think it has to do with surprise minimization. I think it's a Fristonian thing, where you cluster with your own algotype because they're the least surprising. They're the most like you, basically. So that's the thing. This is an extremely minimal model. And it's got this nice feature, which I think a lot of systems in this field have: you can take the mechanistic, reductionistic tack, and you can follow all the steps, and you're never going to see a miracle. The computer works correctly, the algorithm works correctly. If you insist on the micro-scale walkthrough, everything makes sense. But if you pull back and look at it from a larger scale, you see there's something going on here. It isn't because the laws are violated. It isn't because, at the micro scale, there's any magic. It's because there's a larger-scale pattern that's completely not obvious from the explicit algorithm that we put in.
And if that dumb six-line algorithm has these emergent capacities for novel problem-solving behaviors and novel patterns that the cells try to maintain, then of course the more complex ones, both in computer science and in biology, will have them through the roof: things that we can't even begin to expect. So that's my story of the algorithms.
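Both observations are easy to instrument on the sketch above. A hedged version of the two measurements (the paper's own metrics may differ in detail): a sortedness score whose dips below its running best operationalize delayed gratification, and the neighbor-agreement statistic for algotype clustering.

```python
def sortedness(values):
    """Fraction of adjacent pairs already in order; 1.0 means fully sorted."""
    pairs = list(zip(values, values[1:]))
    return sum(a <= b for a, b in pairs) / len(pairs)

def delayed_gratification_episodes(trace):
    """Count episodes where sortedness drops below its running best and the
    string later climbs past that old best: temporarily doing worse in order
    to do better afterwards."""
    episodes, best, dipped = 0, trace[0], False
    for s in trace[1:]:
        if s < best:
            dipped = True
        elif s > best:
            if dipped:
                episodes += 1
                dipped = False
            best = s
    return episodes

def clustering(algotypes):
    """Probability that a cell's right-hand neighbor shares its algotype.
    Random assignment of two types gives about 0.5; elevated mid-sort values
    are the emergent clustering described above."""
    pairs = list(zip(algotypes, algotypes[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)
```

Recording both numbers after every step is how one would look for the profiles described above: clustering near 0.5 at the start and end with a bump in the middle, and sortedness dipping at barriers before recovering.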

[12:22] Stuart Kauffman: Kate, do you want to go next? Michael, your Xenobots are fascinating, and so are your axolotl-frog chimeras. We can't possibly have time to talk about all of that today, but I'd love to talk about it, and Kate might also. Katie, do you want to say some things?

[12:53] Katherine Peil Kauffman: I first want to thank you for what you're doing, because I've been walking backwards from psychology for 30-ish years, wanting to understand what the valence of emotion really is and where it comes from. When I found Stu's book, "The Origins of Order," it was the first time I'd been exposed, coming from a background in clinical psychology, to the whole bottom-up emergent self-organization story. It was so evocative. People saw things in there that Stu didn't really intend, but that's just how his genius is. I experience your work as the next level of that, because I've been talking about emotion as a sensory system for a very long time. In the paper that I wrote, which was the culmination of working alone for 20 years, there's a lot of stuff crammed together. That's what I did when I was in the Harvard community. I wish I'd known about you right up the street at Tufts. At the chemical level I was pointing to ideas about self-regulation, the immune system, genetic regulation, epigenetics, all that stuff being balled up in this deep, deep sensory function of self-regulation. I had Candace Pert's "Molecules of Emotion" there. I couldn't get any lower than that, other than the concepts of feedback, a couple of positive and negative feedback loops. Then Michael Levin comes along, and I can see exactly where it is: membrane potential, and changes in membrane potential, going to polarized or depolarized, and ultimately more negative or more positive. That's a very clear one-to-one relationship with what I've been saying about the value system, and that's where my interest is in psychology. We're reductionists, and we're still stuck with Cartesian dualism and all of that. Sadly, even the best psychological stuff is based on this idea that there are dual processes in the brain. You've got the bottom-up, quick-and-dirty one, and it is flawed somehow; there's something wrong with that. There's a really, really deep evaluative system that comes up from these deeper things; that's the value system. And science isn't supposed to say anything about values at all.

[15:42] Katherine Peil Kauffman: I think your work is really the perfect example of how everything can be perfectly objective and observable, and yet it stays away from the subjective perspective, which is taboo. In psychology, that's what it's all about. It's about identity. It's about personality. It's about free will. I'm seeing implications in what you're doing for the concept of free will: there's clearly something going on there with the capacity to make decisions at all. The fact that all of this is emergent is amazing. The one thing I want to say about Friston: you mentioned Mark Solms and Friston as where you're heading in this direction. I'm absolutely right there with what they're saying. But the one thing I want to pitch at you today is that surprise reduction is only monopolar. Surprise reduction means there's an internality here, whatever the memory system is; however it's encoded in the algorithm, what matters is whether it matches the external challenges, the environment. That's definitely part of what I'm talking about. But the signaling system itself that organisms use is bipolar, and when you're getting that positive feedback signal, it's got a valence to it: when membrane potential is depolarizing, it's going up, and when it's hyperpolarizing, it's going down. And when you get back to positive and negative charge: negative charge is associated with healing and regeneration, right? Have I got that right? Positive charge is associated with damage and degeneration. So I'm finding an exact link between what I'm talking about, the source of values, and why we experience feelings in pleasurable or painful categories. You do have to go to subjectivity. That's a key part of what basal intelligence is about, because without that valence you don't really have decision-making: okay, what am I? What state am I in? What state is my neighbor in? It gets down to the idea of a self versus not-self comparison between yourself and the environment. I tried to wrap that into the paper, but from what you're saying (please correct me if I'm bastardizing your work in any way), the implications for values and ethics and that entire realm of social systems are buried right here in what you're doing, as far as I can see.

[18:33] Michael Levin: I think you're absolutely right, in the sense that when you have simple emergence, like you get in fractals or cellular automata, it's free of valence, because complex things happen, but a glider goes this way, a glider goes that way; it's all equivalent. But as soon as you get emergence of these homeostatic or homeodynamic systems, which expend effort to try to keep specific states, now you're in the land of valence and values and everything else, even at the micro level, because not all states are equally preferable. They will actively try to reach a specific one, and sometimes they have all kinds of competencies in doing that. Already you start to see it; the question becomes: where did those specific preferred states come from? Why is it that this is the one they like instead of that one? I really think that's one of the next grand mysteries we need to deal with: not just the emergence of complexity, but specifically the emergence of goal-directed intelligence, and developing some kind of science to try to guess these goals, because we make things all the time: swarm robotics, the Internet of Things, financial institutions, social institutions, biological chimeras, and so on. We have very little ability to guess what the goals of the system are going to be and what the competencies of that system will be to implement those goals in the face of resistance. We need to develop a science of that, I think.

[20:12] Stuart Kauffman: So can I take a turn? I want to focus on your bottom-up setup and your allowing things to crap out and die. But before that, Michael, and thanks, Kate, just briefly, and it's in our third transition in science paper: the focus is on living cells. The goal of a living cell is to continue to exist through its progeny. That's what gets selected. And the goals that emerge aren't set from the outside, like we do when we're programming; it's what's useful to the cell. That's why your chimeras and your Xenobots seem so fascinating to me. But now put that aside. Michael, when I read your paper I realized that for some years I've been doing things that are cousins of what you've done, and there may be something really general going on. You take your algorithm and you make each number an agent, so it's now bottom-up. There's no outside control of the total system. They're now, in a sense, co-evolving with one another, trying to do whatever they're trying to do. Years ago, I found myself doing something similar in another way. I made this NK fitness landscape model. It's a rugged landscape. This was in 1995, in At Home in the Universe. We made a big square lattice, N by N, where there are N squared points, and it's just the NK model. You can think of that as one big patch. You implement a cousin of your letting things die: it's just a finite temperature. The system goes downhill in energy, but every now and then it goes uphill in energy. It's a Monte Carlo simulation. We ran this and asked how long you go downhill until you get to some minimum; then, because errors are made, you wander around. You can ask how low an energy you get to. Then we took this big patch that's N by N and broke it up into four quadrants, four patches. The rule now changes: each patch (northwest, northeast, southwest, southeast) makes moves that are good for it. But when it does so, it screws things up for the patches on its boundaries. The patches are now co-evolving with one another. They become separate agents. Then we made the patches smaller and smaller in size and more and more numerous.

[23:47] Stuart Kauffman: And we asked: what happens to the energy you get to? The remarkable thing is just what you found: as the patches get smaller and more numerous, the total system does better and gets to ever lower energy. Breaking the thing up into a bunch of co-evolving pieces does something really good. Then it turns out something amazing happens. There's a phase transition. When the patches get too small, the thing becomes chaotic, and that screws it all up. The optimal behavior is found right at that boundary. What's going on in this case is like yours: there's something about breaking things up so that it's bottom-up that allows better behavior to emerge. It's like yours in that, given the Monte Carlo simulation and finite temperature, the pieces find alternative ways of getting to wherever they're going. About 12 years ago, I was at the University of Vermont. I'm a doc, and I was talking to a friend who's a doctor, and we were thinking about fitness landscapes. There are randomized clinical trials, which are the be-all and end-all, and they're just big t-tests. In the NK model you can tune the structure of the landscape. We asked: if you have a single-peak landscape like Fujiyama, or a rugged landscape, do randomized clinical trials work well? They work well on a single-peak landscape and just screw up all over the place on a multi-peak landscape. I expected that. But Jeffrey Horbar, my colleague, had an idea related to what you did with your algorithm, the bottom-up part. He said there are quality improvement centers emerging in medicine, where if you have 100 hospitals, they break up into 10 groups of 10 hospitals. Each one is a quality improvement center. Within a quality improvement center, the 10 hospitals are trying to do something neat. They want to find a good combination of procedures; you could do a given procedure or not. The deal is they try a given procedure within one of these centers. If, on anecdotal evidence (not statistically secure), it looks like a good idea, they say, okay, let's do it. If it doesn't look good later on, they just take it back. We implemented it on a computer. The anecdotal evidence is incomplete, noisy information; it's not quite knowing what you're doing, it's messing around. What we found in the computer model is that it radically outperforms randomized clinical trials. We think, therefore, the large message of your bottom-up approach is: if you can connect a large number of people to try to solve a hard problem, and you break them up into little patches where different groups try things and, based on anecdotal evidence, say, "that looks like a good idea, let's try it," and if not, take it back, they'll solve really hard combinatorial optimization problems that you'll never solve if it's all one big system.
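A rough sketch of the patch idea, using a random-coupling spin lattice as a stand-in for the NK landscape (this is not Kauffman's original model; the lattice size, temperature, couplings, and step counts are all illustrative assumptions). The key move is that each Metropolis step is judged only by the energy contributions inside the mover's own patch, so moves at patch borders can silently make things worse for neighboring patches:

```python
import math
import random

N, T = 24, 0.2  # lattice side and temperature; both values are illustrative

# Random couplings on a torus: a crude stand-in for a rugged NK-style landscape.
J = {}
for x in range(N):
    for y in range(N):
        J[((x, y), ((x + 1) % N, y))] = random.uniform(-1, 1)
        J[((x, y), (x, (y + 1) % N))] = random.uniform(-1, 1)

def neighbors(s):
    x, y = s
    return [((x + 1) % N, y), ((x - 1) % N, y),
            (x, (y + 1) % N), (x, (y - 1) % N)]

def bond(a, b):
    return J[(a, b)] if (a, b) in J else J[(b, a)]

def site_energy(spins, s):
    """A site's own contribution: the couplings it shares with its neighbors
    (contributions overlap, much as the NK model's fitness contributions do)."""
    return sum(bond(s, n) * spins[s] * spins[n] for n in neighbors(s))

def global_energy(spins):
    return sum(j * spins[a] * spins[b] for (a, b), j in J.items())

def patch_step(spins, p):
    """One Metropolis move judged only by the mover's own p-by-p patch:
    energy changes at sites outside the patch are invisible to the decision,
    so patches co-evolve, each helping itself and sometimes hurting others."""
    s = (random.randrange(N), random.randrange(N))
    mine = (s[0] // p, s[1] // p)
    seen = [s] + [n for n in neighbors(s) if (n[0] // p, n[1] // p) == mine]
    before = sum(site_energy(spins, q) for q in seen)
    spins[s] *= -1
    delta = sum(site_energy(spins, q) for q in seen) - before
    if delta > 0 and random.random() >= math.exp(-delta / T):
        spins[s] *= -1  # reject: undo the flip

for p in (24, 12, 6, 3):  # one big patch, then progressively smaller patches
    spins = {(x, y): random.choice((-1, 1)) for x in range(N) for y in range(N)}
    for _ in range(50_000):
        patch_step(spins, p)
    print(p, round(global_energy(spins), 1))
```

Sweeping the patch size is how one would look for the effect described above: global energy falling as patches shrink, then degrading when patches get too small; where the transition lands in this toy stand-in depends on the landscape's ruggedness.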

[27:22] Michael Levin: Fascinating.

[27:23] Stuart Kauffman: And that's what you're finding too. There's something really general going on. To finish up this line of thought: I'm involved in trying to get global soil restoration going. The idea is that we're trying to create a global Creative Commons computational network for, hopefully, millions of farmers, where people can upload data, share it, own it, share it with whom they want, and try to solve really hard problems. And the same thing would work in clinical medicine. So there's something about this bottom-up, noisy, sloppy trying of things on anecdotal evidence that actually works. The final thing to say about this: if it were 30,000 years ago and you had a toothache and you were in the south of France, you'd go to the medicine woman. She would say, Michael, you need the following six herbs. And they basically would work. Not from a randomized clinical trial. How did we learn that? I think evolution is something like that too.

[28:31] Michael Levin: Very interesting. The clinical trial thing — I've always wondered. Acupuncture, which I've benefited from many times: how long would it take, and how many patients would you need if you were starting from scratch and needed to know where the points were for a specific disease? I can't imagine the size of the data set you would need.

[28:54] Stuart Kauffman: Nobody did a randomized clinical trial in developing acupuncture. We learned t-tests, so we thought we were being scientific.

[29:06] Michael Levin: There are two things. One is experimental: we put out this preprint a while back, and the real paper should be out soon, looking at groups of embryos responding to teratogens. It turns out that embryos are communicating with each other. There's a whole hyper-developmental biology here: in standard developmental biology, you look at how a single embryo develops, where individual cells work together to build a nice embryo. But groups of embryos are also working together, and we can now track them communicating with each other. We have techniques to watch the information go back and forth. Literally, large groups do better than singletons or small groups at resisting certain teratogenic influences. The larger groups have gene expression that small groups don't have. That meta-embryo has its own transcriptome, distinct from the transcriptome of any of its members. They can solve problems. One possibility is that if you had bigger groups, they wouldn't do as well. There may be a critical range. We don't know. All we know is that the groups we work on, which are about 300 embryos or so, do way better than smaller groups.

[30:42] Stuart Kauffman: It's just amazing. There are cousins of this. I'm working with a guy, Jan Dijksterhuis, in Holland, and his colleagues. We're doing the 140-species experiment. It's underway: 70 DNA-sequenced bacteria, 70 genotyped fungi. It's hard to sequence the whole thing. We're mixing them. We have a couple of hundred thousand, 250,000 EUR, to make this mixed community, plate out 50 aliquots of it on sterilized soil, and watch it for a year or two. They're going to do all kinds of things with one another. These guys are going to learn with one another too. They're going to solve problems in all kinds of weird ways. What are the transcriptomic and gene activity patterns? Would they be identical in the 50 different aliquots? Almost certainly not. One of the fascinating things from the third transition in science paper, which Michael and Kate have heard me talk about, is that, in Jan's phrase, an evolving community creates bubbles of new possible ways to exist together. We can't predict what they'll be, but we can see the ones that were useful, that were seized by heritable variation and natural selection. We can ask: do the same mutations occur in the 50 aliquots, and in the same sequences? Of course they won't. This is a co-evolutionary assembly, and one can begin to look at the genetics of it. It feels like: what's going on with 300 embryos talking to one another? What's going on in a tissue? These things are talking to one another. If you took random bits of technology from around the world and stuck them together, they wouldn't work together. These things work together because they came into existence together. You're looking at all kinds of Darwinian pre-adaptations in Xenobots, which is so amazing.

[33:04] Michael Levin: One of the things that we've been studying, and some of this is out and some of this hasn't come out yet, is how evolution works when your material is agential. If you have a hardwired material that does what it does, then we know what evolution does. It searches through these rugged spaces. But when your material is itself an agential material with a multi-scale nature, where every scale has its own agendas and is solving various problems, the whole process of evolution is different, and all kinds of things don't look at all classical. You can keep the random mutation — the intelligence isn't in the mutations — but when the material is smart, the whole process comes out completely different. We've got some cool computational studies coming out on that.

[34:05] Stuart Kauffman: There's so much to talk about. Can I? Go ahead. Please do.

[34:11] Katherine Peil Kauffman: Stu and I talk about Kantian wholes. You're familiar with that concept, where you have organizational closure; in Stu's work, you have the bottom-up and top-down pieces we need to consider in introducing the idea of the Kantian whole, because that's how we've talked about it before. The idea is that there are parts doing something that gives rise to a global whole, which then provides constraints on those parts to keep doing what they were doing. There's freedom from the bottom up and constraint from the top down, which gets at the coupling of positive and negative feedback I was talking about, the homeodynamic functions of emotion. The idea of work needs to go in here, because intelligent doing is work. When he talks about bottom-up work and top-down constraints as part of constraint closure, the idea is that energy release within a few degrees of freedom is how you define work. When you put agency in there, the idea of work becomes a self-regulatory thing, where the parts are doing something and the whole is doing something too. You have this dance of parts and wholes that I think is necessary for not only constraint closure, operational closure, and organizational closure, but informational closure. What you're talking about is the role of bioelectricity as the glue that works at every level. With that idea in mind, I watched some beautiful stuff by a friend of yours, Richard Watson.

[36:19] Michael Levin: Oh yeah.

[36:20] Katherine Peil Kauffman: His whole business is on resonance, song, and depth.

[36:23] Michael Levin: Amazing, amazing, right?

[36:25] Katherine Peil Kauffman: He's getting at exactly what my emotion thing is about, because there's a phase-locked loop going on with the three-step compare-and-signal cybernetic loop that I've described. I met a guy in music who said, "Have you ever heard of a phase-locked loop?" That's what we use in music to tame positive feedback. When I see what you're doing with the bioelectric layer and the signaling, it's basically that same three-step process, with the positive feedback signal being either the increase or decrease in membrane potential, or positive and negative charge. That would be a manifestation of how phase locking occurs, because when you think about an electromagnetic field, it's the same as a whole bunch of individual oscillators. The idea is to sync them together, where the connectivity is the sync: the individuals are able to do something simple that matches what their nearest neighbors are doing. Along the same lines, I have an edge-of-chaos story that's probably very much like the least-energy story as well: the resting state is the edge of chaos, the home state that one goes back to. But since we're non-equilibrium systems, we need to be off the edge of chaos on a regular basis. We're either going to conserve our energy to preserve our form, or we're going to exploit that energy, that entropy, to do something creative and do some growth. That's the dance of self-preservation and self-development that I talk about as a really deep thing going on in networks. Those two things are being balanced all the time, like the Tao. The idea of a Kantian whole and the idea of informational closure mean that the simple rule for the part would be to get back to the edge of chaos. But we go off it to exploit, to try new things out: when a bacterium is in a stressed environment, its genetics becomes really loose; it's trying new things out. Whatever sticks gives a new level of global organization, which then feeds back down as that Kantian whole. What I'm suggesting is that your friend's story of songs is exactly that, because at the level of the parts you're exploiting the chaos either to preserve and duplicate what you're doing, or to try something new. There's flexibility in our genetic system for all of that. You've got redundancy, you've got all kinds of novelty, but it's the whole that feeds back down. Staying at the edge of chaos is a simple rule for the part to get back to, but in order to maintain that connection with the whole, there's a harmonic resonance thing going on at the same time. That's where you get into things like pink noise, which is a classic marker for criticality, the whole edge-of-chaos thing. All of that is wrapped into my questions to you about this beautiful layer of bioelectricity and how the interacting binary opposites give rise to the value system and valence. I think there's a pretty clear story there. If I can articulate it, I'll share it.
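The picture of a field as many coupled oscillators has a standard minimal model: a Kuramoto-style ring in which each unit does something simple, nudging its phase toward its two nearest neighbors. This is a generic sketch of phase locking, not a model taken from either paper; the population size, coupling strength, and frequency spread are assumptions.

```python
import math
import random

N, K, DT = 64, 1.5, 0.05  # oscillators, coupling strength, time step (assumed)
omega = [random.gauss(1.0, 0.1) for _ in range(N)]          # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases

def coherence(phases):
    """Kuramoto order parameter r: near 0 when incoherent, near 1 when locked."""
    re = sum(math.cos(t) for t in phases) / len(phases)
    im = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(re, im)

for _ in range(2000):
    # Each oscillator drifts at its own frequency, pulled toward its neighbors.
    theta = [theta[i] + DT * (omega[i]
             + K * (math.sin(theta[(i - 1) % N] - theta[i])
                    + math.sin(theta[(i + 1) % N] - theta[i])))
             for i in range(N)]

print(round(coherence(theta), 3))  # should climb toward 1 as the phases lock
```

With the coupling strength well above the spread of natural frequencies, nearest-neighbor nudges alone should pull the whole ring into lockstep: connectivity is the sync.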

[40:38] Michael Levin: That's great. We track this business of the parts and the whole from the very beginning of embryogenesis, because initially, let's say you're looking at a blastoderm of a mammal or a bird or something, and it's 50,000 cells, and we all look at it and say: there's an embryo. But what are you counting when you say there's one embryo? What is there one of? There are 50,000 cells. What is there one of? What there's one of is actually a kind of alignment: literally physical alignment of the cells in terms of planar polarity, but also morphological alignment, in that all of them are going to work towards the same journey in anatomical space. They're going to make this particular structure. This is something I used to do as a grad student: in duck eggs, you can take a little needle and put some scratches in that blastoderm. For the next four to six hours, each of the islands that you've created doesn't feel the presence of the others. They decide that they are alone and they're going to make an embryo. They align to be an embryo. Each one of them does it. When they heal up, you've got conjoined twins and triplets. The question of how many individuals are in an embryo is not set by the genetics. It's not known from the beginning. It could be anywhere from zero to probably half a dozen or more. It's an autopoietic process; they put themselves together, and each one has to decide its boundaries, because every cell is some other cell's neighbor, so they have to decide: where do I end and where does the outside world begin? This is the reason why, in human conjoined twins, one of the twins often has laterality defects. That's because the cells that are in between, on the border, have a hard time deciding: am I the left side of this one, or am I the right side of that one? Sometimes they make a mistake, and you can get organs that go the wrong way. During that process, bioelectricity, but also other modalities, is used by the cells to synchronize into a coherent whole that's going to take a particular path in anatomical space and physiological space. It has a self-model that tries to demarcate it from the outside world, and everything it does is in the service of maintaining the goals that the new system is going to have. For traditional systems, we say evolution gave it those goals, and we call it a day. That doesn't do the trick, because now we can make completely new ones that have never existed before. They have goal states too, which they exert lots of effort to maintain. They have all kinds of competencies to maintain them. This business with first the Xenobots, and now the Anthrobots, the human-derived ones, is just the beginning of having these novel systems with a perfectly wild-type genome (Homo sapiens in one case, Xenopus laevis in the other) that have different competencies and different behavioral repertoires. We're just beginning. We don't know what their learning capacity is, what their actual behavioral goals are. We're going to find out all of that. We're trying to train them and test what their preferences are. We'll see. None of it is easily predictable at all; nobody saw any of that coming.

[44:08] Katherine Peil Kauffman: Have you ever tried human skin cells rather than lung cells?

[44:15] Michael Levin: Yeah, the reason...

[44:16] Katherine Peil Kauffman: The epithelial.

[44:18] Michael Levin: I'm sure you can do it with lots of different cell types. The reason we went with lung is because, being ciliated, they move physically. When people see physical movement, it's easy for them to understand. I think that a lot of important behaviors take place in other spaces, like transcriptional space, physiological state space, and anatomical morphospace. It's just that people are not comfortable recognizing that as behavior. I suspect that a lot of what people call organoids, that they're making now, have locked-in syndrome. They're in there trying to do all kinds of things. We can't see it because they're not running around. That's all we're tuned to: movement in three-dimensional space. I think if we understood how to measure the states of these things properly, we would see that they're solving all kinds of problems in other spaces.

[45:06] Katherine Peil Kauffman: That's why I asked about the epithelial cells, because that's how we directly encounter our environment and move about in it. There's a bioelectric effect. I love what you're doing, because in alternative healing modalities, including acupuncture, which you mentioned, there's always this idea of subtle energy. Some of the new age ideas have gone crazy with it. I'm looking for a really natural spirituality and where our values come from, divorced from the dogma of religion, but those traditions do have their thumb on the pulse of something that's really true and real about human healing. This layer of bioelectricity, that's energy. Would you say that's subtle energy? What would you call it? Is it chi?

[46:14] Michael Levin: We've had a bunch of discussions with those folks. I don't have any evidence that the phenomenon they're calling chi is specifically what we study as bioelectricity. I don't know that those are the same things. But I do think they have a couple of things right. One of the things they were right about very early is this notion of multiple intelligences within the body: the idea that literally, with a very specific definition of intelligence as problem-solving in various spaces, our body is full of sub-agents that have agendas and problem-solving competencies. I think they were 100% on the money about that one. The other thing is this notion of persistent physiological states as objects. If you do massage therapy or something, they'll say, "Oh, you've got this blockage; this thing is sitting here. I'm going to try to move it, or we're going to try to get rid of it." It's the idea that these other spaces, physiological state space and so on, have persistent patterns that are the equivalent of objects in three-dimensional space, and they have causal power. They do things, they have their own dynamics, they're persistent over some amount of time, they can be moved and modified, and some of them contribute to disease, no doubt. I think that's a very promising direction, and we're starting a bunch of work on being able to detect them and on developing predictive technologies to then manipulate them. That's going to be a part of real medicine.

[48:11] Katherine Peil Kauffman: Very cool.

[48:13] Stuart Kauffman: Quick question: are you interested, Michael and Kate? I'm running across this very strange thing in which I think von Neumann is wrong in a fundamental and very puzzling way. It would take a few minutes to talk about, or we don't need to bring it up at all. Why don't you make the call, Michael?

[48:44] Michael Levin: I'd love to hear about it. I've got another 9 minutes before I have to hop off, so we could talk about it. I'm also very happy to schedule another one of these so I can hear in more detail. I'm guessing it's going to take more than 9 minutes to talk about it.

[48:59] Stuart Kauffman: So, I can get the puzzle started.

[49:02] Michael Levin: Yeah, let's do it.

[49:04] Stuart Kauffman: Von Neumann's paper on self-reproducing automata was published posthumously, and it's brilliant. Here he comes, having helped invent the stored-program computer, the von Neumann architecture, building on the universal Turing machine, and he wants to make a machine that can copy itself. Here are the logical steps. This is going to work in the real physical world. It's actually going to build things; it's not going to manipulate bits. He starts with the universal constructor. The universal constructor can construct anything. For the universal constructor to construct anything in particular, you'd better have some instructions. He imagines the instructions as some physical system, like a bunch of steel I-beams jointed together in some way. For the universal constructor to make anything specific, it had better have access to these instructions. You put the instructions, this structure, inside the universal constructor. Once you do that, something magical happens. The instructions now play a dual role. They are used to direct the universal constructor to construct something specific, namely a copy of itself. It could have made a choo-choo train or a rabbit, but it made a copy of itself. Now there's this copy of the universal constructor sitting over there, but it can't make anything until it has inside of it some instructions on what to make. Von Neumann's move is: the universal constructor constructs a physical copy of the instructions and sticks that physical copy of the instructions into the second, new universal constructor. That's the basic idea. And that's what DNA does. It template-replicates, and it's used as a code to synthesize proteins; then the DNA gets replicated by the machinery and stuck in the daughter cell. That's true. And we've been stuck there. But it's entirely wrong for at least some self-reproducing molecular systems. I can get us started, Michael.

[51:49] Stuart Kauffman: Kate's heard me on this. I've been thinking about autocatalytic sets for over 50 years. Gonen Ashkenazi at Ben-Gurion literally has a nine-peptide collectively autocatalytic set. Let me define more precisely what a Kantian whole is. It's Kant's idea, not mine: the parts exist for and by means of the whole. So you're a Kantian whole. You exist by means of your liver, your kidney, and your spleen, but they exist by means of you. So all living things are Kantian wholes. Gonen's autocatalytic set of nine peptides is a Kantian whole. Each peptide catalyzes the formation of a second copy of the next peptide around a cycle of the nine peptides. Each peptide gets to exist by virtue of being a member of the nine-peptide set. The nine-peptide set is the Kantian whole, and each peptide is a part. So it's true: it's a Kantian whole. More notions. Catalytic closure, which I had a long time ago: catalytic closure means that every reaction that has to happen to make anything is catalyzed by some member of the set. So it's collectively autocatalytic. It is. Then there's this idea of constraint closure, which Kate hinted at. When I was writing Investigations, I was wondering: what's work? Work is the constrained release of energy into a few degrees of freedom, as Atkins says in The Second Law. I finally understood: that's a cannon, with the powder at the base of the cannon and the cannonball. The cannon is the constraint on the release of energy, and it's a boundary condition. When the powder explodes, you don't get a spherical wave. The powder can only blast down the bore of the cannon, and it does thermodynamic work on the cannonball. I understood that. Then I thought: where the **** did the cannon come from? I realized that it took work to make the cannon. No constraints on the release of energy, no work. I got stuck there, except that I realized that the release of energy could construct a new constraint. Now I'm going to give you the brilliant, transformative idea that Matteo Mossio and Maël Montévil had in 2015. There's a lot to pack into nine minutes, but here it is.

[54:33] Katherine Peil Kauffman: Three minutes.

[54:35] Stuart Kauffman: Three minutes. I'll get it across. If I want to get some constrained release of energy, I'd better have some non-equilibrium processes. These two guys say: let there be three processes, one, two, and three. They're non-equilibrium processes. But there had better be some constraints on the release of energy. Let there be three constraints, A, B, and C. Pause and really hear this. A constrains the release of energy in process one, and it makes a B. B constrains the release of energy in process two, and it makes a C. C constrains the release of energy in process three, and it makes an A. This is an amazing thing, Michael, and it's not mine, so I can really brag about it. This is a new organization of matter in process. There is a set of constraints. The constraints are boundary conditions. The three boundary-condition constraints constrain the release of energy in three non-equilibrium processes to construct the same boundary conditions. The system constructs itself. This is the heart of life right here. We construct our choo-choo trains; they don't construct themselves. Gonen Ashkenazi's set does this, and I'll try to say it quickly. Each peptide binds two fragments that are half copies of the next peptide. By binding them, it orients them in three-space, so it lowers the activation energy for the reaction. There's a constrained release of energy: a peptide bond is formed, and work is done, when peptide one makes a second copy of peptide two around the cycle. This system Gonen's got is a Kantian whole that achieves constraint closure and catalytic closure. I think that is probably it, and I think it's life. Now look at what this system does, Michael; Kate's certainly heard this. This system constructs specifically itself. The nine peptides construct themselves, and so does a cell; it specifically constructs itself. It's not a universal constructor. And the amazing thing is there is no separate description. There are no instructions in Gonen Ashkenazi's nine-peptide set. It's nine peptides, and each peptide is a boundary condition on the release of energy making the next peptide. That's the whole thing. There's nothing more to be said. There's no separate description. There's nothing acting as instructions that get copied and stuck into a second copy. I'm not getting to DNA yet; this is just a catalytic peptide set. This system specifically constructs itself, because it's the constraints on the release of energy that construct the same constraints. It's absolutely not von Neumann's universal constructor. And it doesn't change very much when you get DNA into it. There's something very strange going on about what we mean by information. There's something that we call information that von Neumann's talking about, and it's really puzzling. Do you see the distinction?
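A toy dynamical sketch of the Montévil and Mossio cycle just described (the rates and step sizes are invented, and real constraint closure concerns work and boundary conditions, not just concentrations): each constraint decays, and is rebuilt only by the process that the previous constraint channels.

```python
DECAY, RATE, DT = 0.1, 0.1, 0.1  # balanced build/decay rates (illustrative)

def run(a, b, c, steps=600, knockout=None):
    """Three constraints A, B, C: A channels process 1 to make B, B channels
    process 2 to make C, and C channels process 3 to make A. Production
    happens only insofar as the enabling constraint is present."""
    for _ in range(steps):
        da = RATE * c - DECAY * a  # process 3, constrained by C, rebuilds A
        db = RATE * a - DECAY * b  # process 1, constrained by A, rebuilds B
        dc = RATE * b - DECAY * c  # process 2, constrained by B, rebuilds C
        a, b, c = a + DT * da, b + DT * db, c + DT * dc
        if knockout == "A":
            a = 0.0  # hold one constraint at zero: closure is broken
    return round(a, 3), round(b, 3), round(c, 3)

print(run(1.0, 1.0, 1.0))                # closure intact: holds at (1.0, 1.0, 1.0)
print(run(1.0, 1.0, 1.0, knockout="A"))  # closure broken: the others decay away
```

The point is only the logical shape: nothing in the loop is a separate description or program; each constraint is rebuilt by the very release of energy it constrains, and removing any one of them dissolves the whole.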

[58:04] Michael Levin: I absolutely do. And even the story you contrasted with the DNA story is partly this, because DNA is not a description of the organism. DNA does not have a description of the anatomy.

[58:21] Stuart Kauffman: DNA can be thought of as a code because it specifies polypeptides. And Paul Davies points out that given the translation apparatus of the code, the DNA is a universal constructor for encodable polypeptides.

[58:41] Michael Levin: Yeah, polypeptides, not anatomy.

[58:43] Stuart Kauffman: Not a cell. If you were to take a yeast cell and clone random DNA sequences into all the genes, it'd make a bunch of polypeptides, but the cell would die. It's lethal.

[58:57] Michael Levin: That's the whole thing; we should have another talk about what level of reprogrammability we see in biology, because there are some cool examples and it's not about DNA at all.

[59:14] Stuart Kauffman: We have a notion of what information is. There's a Mondrian painting, and I want to know how much information is in it. I'm asking how many bits it takes to describe the Mondrian painting. I break it up into centimeter-by-centimeter squares, and there are 10,000 of them, because I did 100 by 100. I use 4 bits to say what color is there, so there are 4 × 10,000 = 40,000 bits as the description of the Mondrian painting. And it's really true that I can send that over a computer to printing machines all around the world that use totally different, physically different ways of making copies of the Mondrian painting. So the information really is separable from the Mondrian painting, and we just did it. That has nothing to do with what Gonen Ashkenazi's peptide set is doing. There's no separate description. There's something funny about our notion, and I've got a loose hunch that I've mentioned to you too, Kate. It's something about a description from the outside. We are giving a description of the painting from the outside. Von Neumann is giving a description from the outside and sticking that outside description in, where it's then used in a dual way as a program and gets copied. That's not how cells build themselves, Michael. There's something fundamentally wrong with that. That means there's something wrong with our notions of information. It's the wrong word; it's confused. We could describe the Mondrian painting in 40,000 bits, and we can send it over computer wires around the world and make ten zillion copies of it. But we're using those machines all over the world to do it, and there's nothing specific about them; we could have made a copy of anything. Life isn't doing that. It's doing something fundamentally different. At this point, I really get confused.
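The arithmetic of the Mondrian example, with the separability point made executable (a sketch; the 16-color palette is an assumption standing in for "4 bits per square"):

```python
import random

cells = 100 * 100     # one-centimeter squares over a 100 x 100 grid
bits_per_cell = 4     # 4 bits index 16 possible colors
print(cells * bits_per_cell)  # 40000 bits to describe the painting

colors = [random.randrange(16) for _ in range(cells)]    # the "painting"
bits = "".join(format(c, "04b") for c in colors)         # external description
copy = [int(bits[i:i + 4], 2) for i in range(0, len(bits), 4)]
assert copy == colors  # any machine holding the description can rebuild it
```

The description is a string of bits that any physically different machine can decode, which is exactly the externality the peptide set lacks.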

[1:01:23] Michael Levin: That's a great place to take up the next one. That's a very good point. Unfortunately, I have to run, but let's set up another one and keep going. I think you're on to something very key.

[1:01:37] Stuart Kauffman: Well, there's all kinds of stuff.

