Watch Episode Here
Listen to Episode Here
Show Notes
This is a ~52 minute conversation with philosopher David Resnik (https://www.niehs.nih.gov/research/resources/bioethics/bioethicist) about the Platonic Space model and our plan for a new paper about a different variant of it than what I've been writing so far.
CHAPTERS:
(00:00) Morphogenesis and latent space
(15:19) Bridging Platonism and science
(23:30) Explanation, emergence, understanding
(32:50) Research program and models
(42:16) Mathematical constraints in evolution
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Michael Levin: One thing that might be useful is for me to say a couple of things about what I am and am not claiming and why, because I've wrestled for a long time about whether I should use the platonic name at all, and it has benefits, but it also has drawbacks because it makes people think of things that I'm not actually saying. I haven't decided if I'm going to have to rename all this, but for now I just want to say a couple things about how this interacts with the biology. Do you want to do that off the bat, and then you can react, or do you want to talk about your thing first?
[00:41] David Resnik: I think that's a good approach, because I was trying to think about that too. And I think empirically, experimentally, that's the bottom line. You want a conceptual framework that works for you. And then the philosophy will just come in later and try to clean everything up. That's my view of it.
[01:01] Michael Levin: Let me just say a couple things then about how I think this relates to the biology. The first thing is that the goal directedness that we have found in various aspects of morphogenesis, and by goal directedness I mean some degree on the cybernetic continuum, not magic, but some degree of homeostasis, homeodynamic goal-seeking, problem-solving kinds of things, by itself, I don't think motivates necessarily any Platonic view like this. That is not why I'm arguing for this. I think it's perfectly possible to have a simpler, more traditional view of it and still maintain that open-loop emergent models are not sufficient for morphogenesis, that there is a reduction of error toward a set point, and that's fine. I don't think you need that; that is not why I'm going in this direction. However, when we start looking at systems that have not been around before on Earth, in the sense that they don't have an evolutionary history of selection at that scale (the cells do, but the organisms don't), we're talking about things like xenobots, anthrobots, and chimeras of various kinds. Then one has to ask the question: let's assume morphogenesis has set points, and so do behavior and physiology; they all have set points. The question, typically we think, is where do those set points come from? Selection. For long eons of interaction with the environment, evolution sets those set points. For systems that have not had that experience, you want to know: what option space are those set points drawn from? What are the possible options there? That right away raises the issue of a latent space, so that we don't simply get surprised when these things show up. We understand where they're coming from. Specifically, it raises the issue of computational cost. This is important because for typical animals and plants we know when the computational cost was paid to define that specific set of features: during the eons of the genome bashing against the environment.
That's when we paid the computational cost. When we have these new kinds of beings that have very specific properties and capabilities that were neither engineered by us nor selected from a large set to be exactly what they are, then we have to ask: when did we pay that computational cost? That's when the research program really kicks in. This is, for sure, a metaphysical option. Some people would rather be very minimal physicalists; they say, "I don't want any extra spaces. I don't want to think about latent space. These are simply emergent." When you ask what that means, they say these are regularities, just things that are true in our world. For example, why do some gene regulatory networks have memory properties? They just do. This is just something that happens in the physical world. I can't disprove that. That's a metaphysical position. There's nothing I can do about that except to say that I find it defeatist and anti-science. Seeing these things as a random grab bag of surprises versus a structured latent space that we can systematically investigate: I go with the latter, and I can't prove it. That's just a metaphysical position. I think it's more useful.
[04:47] David Resnik: Well, can I stop you there for a second?
[04:49] Michael Levin: Sure.
[04:52] David Resnik: That's perfectly compatible with philosophy of mathematics. That math is in the physical world because these patterns are actually just there in the world. They're real. They preexist the organism. I think that's the most important thing you want: that these patterns are somehow real and exist before the organism comes on the scene.
[05:19] Michael Levin: Yes, just one more thing, and then let's get into this issue of whether it's physical or not physical or what it is. Okay, here's where I think the research program is going to kick in, and we can talk about what that is: this question of whether, quantifiably, you get more out than you put in. That is very hard to do in biology because in biology, complexity is huge. There are always more mechanisms you haven't found yet. It's very hard to make any kind of quantitative argument, because people say you just haven't found it yet; there's some kind of mechanism underneath that does it. That's why we've done a lot of this in extremely simple minimal systems, computational systems, where you can see all the steps and exactly what you paid for computationally, and then you can quantify: this is what I paid for, what did I get? My research program is not just the biological, where it's pretty hard to quantify this aspect, but also the computational. The issue that you raised about whether math is in the physical world is again related to this issue of what you put in and what you get out. I am fundamentally very impressed with the following thing. You start with set theory and you do something like: I have an empty set and I can add one more to it, and now we have successors. You start with set theory, and then before you know it, you discover a very specific value for the number e; it's a specific value. You didn't start with it, you didn't have a choice about it. I realize that you can make different assumptions and find different things, and maybe aliens start somewhere else and find something else. But having set up certain assumptions, you then get handed a whole baggage of truths of number theory, very specific things, e and Feigenbaum's constant and all this stuff. So you get all these things.
If we say these are part of the physical world, they are part of the physical world in the sense that they causally interact with the physical world. There's a causal interaction. We have to talk about what causal means. But I don't think we can say that physical facts fix all the facts. Because if you ask a physicist to tell you what experiments they did to get e, or what they would do to change e if you were at the beginning of the Big Bang and could twist all the constants, you end up in the math department. You leave the physics department at that point and you're in the math department. We could say it's also part of physics, but I think that's stretching what physicists do.
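A minimal sketch of the "free lunch" being described here, purely as an editorial illustration (the function name is hypothetical, not from the conversation): starting from nothing but repeated "add one more" and exact rational arithmetic, the partial sums of 1/n! converge on the specific value e, which none of the starting assumptions mention.

```python
# Illustrative sketch: from nothing but succession (repeated "add one"),
# a very specific constant falls out. We build factorials by repeated
# multiplication and sum the series 1/0! + 1/1! + 1/2! + ...,
# whose limit is e -- a value the starting assumptions never name.

from fractions import Fraction

def partial_sum_of_e(terms):
    """Exact partial sum of sum_{n=0}^{terms-1} 1/n! as a rational number."""
    total = Fraction(0)
    factorial = 1  # 0! = 1
    for n in range(terms):
        if n > 0:
            factorial *= n  # n! built from (n-1)! by one more multiplication
        total += Fraction(1, factorial)
    return total

# Twenty terms already pin down e to full double precision:
approx = float(partial_sum_of_e(20))
print(approx)  # ~2.718281828459045
```

The point of using `Fraction` is that every step is exact: nothing about the value of e is smuggled in through floating-point machinery; it emerges from the series alone.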
[08:00] David Resnik: So what I want to say is that math is like the meta-constraint on all the rest of science. I'm saying that math is actually above the physics; that's how I see math structuring our universe: the universe has a mathematical structure. That's the point. Not that it's going to be the same thing as the Schrödinger wave equation or some equation from physics, but it's beyond that. Let me probe your mind about this: what if the Big Bang never happened? What if you could imagine a situation where there was no existence of any kind that we know? Would the math still be the same?
[09:39] Michael Levin: That's a tricky question because then there aren't any observers to know it. If we put all that aside, I would say yes. I'm in complete agreement that we only know certain kinds of math because of our physical and biological history. That's fine. Aliens may differ. I'm okay with all of that. How I see it is there is an existing latent space of information. Depending on where you start your exploration, certain things are accessible to you and certain things are not. If you're an intuitionist versus not, some things you can get to, some things you can't, depending on what axioms you make. The thing that really inspires me is how much more you get. It's what physicists call a free lunch. You start with something extremely minimal and you get a specific value of e. It's this exact value. And this is very important. I don't think we need to commit to too many things about what this is. People often say, "I hate the existence of other realms." We don't have to say much about these other realms except to acknowledge that these things do not come from what we typically say is the province of physics. Physical facts do not fix all the facts as far as I'm concerned. You can have all the physical facts you want, and mathematical facts are another set of facts that you have to deal with. To whatever extent physicalism holds, I don't think it entails Cartesian interactionism or anything that breaks physics. I think physicalism was dead in Pythagoras' time and before that, because already we knew that physical facts don't fix all the facts. That's the minimal thing. I don't need to say much more about physicalism than that. I don't think it's a closed system. Then we have this other set of facts. Now we have to ask ourselves, what else does that latent space hold? It can hold simple data patterns, like the values of e and π. But might it also have more complex patterns that are, for example, behavioral policies or algorithms, or can it even do compute?
Does it offer you virtual machines? We don't know yet. These are all empirical questions. But I think math already tells you that physicalism is not true in an important way. And that, I think, is much more powerful for all this stuff. I'm not trying to argue for Platonism from the existence of xenobots. That's really hard, although I think it's totally compatible with it. I think where the hard evidence is going to come from is very minimal models where we can quantify the computational effort that we've put into our universe and what we get out, and whether there's a mismatch here.
[12:46] David Resnik: That makes sense.
[12:48] Michael Levin: Yeah.
David Resnik: Let's get back to the model of how this all works too. But while we're on philosophy, there's one way of formulating Platonism: just what you said, there are mathematical facts that do not reduce to physical facts. Maybe that's it. And we don't have to say where numbers are, or something like that. We don't have to get into too much of the ontology of it. We just want to say there are mathematical facts that are not reducible to physical facts or are not identical to physical facts or something like that.
[13:44] Michael Levin: Yes, I think that's where we have to start. I don't think we can say where they are because the whole where thing gets us back to space-time. So we don't have to say where, but I do think we have to do two things. First, we have to be open to the idea that the math, what we have considered math up until now, does not exhaust all the interesting things in that space. There may be behavior science that studies things in that space. So it's not just math, but mathematicians have the best start on it. The other thing I think we have to say is we have to commit to whether it's a total random grab bag of things that just shows up and there's no systematic order to it. If so, we don't try to call it a space at all. When I call it a space, I don't mean it's a physical space that we can traverse the way we traverse our current reality. I mean that it's a space because the contents in it are related in some way. There's a metric to it. The mathematicians certainly think they're exploring this; they jump from one thing to another. So we have to decide whether we're going to assume it's just random or whether that space has some kind of structure to it. I would think that there has to be some kind of structure to it.
[15:07] David Resnik: Oh, I agree.
[15:09] Michael Levin: Even though that, by itself, I don't know if it's provable. That's a metaphysical assumption.
[15:19] David Resnik: So I'm thinking about selling this to scientists; that's the thing: scientists are physicalists, and in your work and in the paper we wrote, we didn't really commit to vitalism or any other metaphysical stuff. Going in this direction is fine. I think it just has to be done in a way that doesn't seem like you're going far out on a limb, that you're still really sticking within the framework of hard science.
[16:05] Michael Levin: But when we say hard science, what I don't mean is sticking with the frameworks that all scientists in 2025 are currently committed to, right? Hard science just means you have to work on things that are helping your discovery along in some sense. I'm completely okay with couching it in a way that freaks out as few people as possible. There's totally room. I can write wild things somewhere and write much more constrained things here. But I think it's impossible to really just say, don't worry guys, we can all be physicalists. And that's not because of anything I've done; it's because of the math. I just don't see how anybody who believes in math can be a strict physicalist. I know people are, but then when I say, well, what about the value of e and things like this, they say that just happens to hold. That to me is a total cop-out, and we don't have to irritate that angle, but I don't know that we can ignore it completely. Some people, the more sophisticated folks, say: okay, look, that's true about math, but that's it. None of these other things you're interested in have anything to do with it. Only the math, whatever is not among the physical facts, it's only the math facts. We don't have to get into that, but to me, that assumption shouldn't be an axiom. It should be a testable thing. How can you possibly say that mathematicians are the only ones who get to explore that space? That just seems totally arbitrary. I know that's what people have assumed. I don't think these things are unscientific at all. I get the strategic value of not pushing them too far in this piece, and that's fine. I'm okay with that. I was talking to Elan Barenholtz this morning; he said some of this sounds a bit woo. I said I don't know what's more woo than starting with set theory and finding a very specific value of e that then turns out to explain a whole bunch of stuff in physics. Is there something more woo than that?
[18:26] David Resnik: Or the fact that pi mysteriously appears in so many places in math. Where does pi come from?
[18:34] Michael Levin: Right? Yeah.
[18:35] David Resnik: So, I think this definitely can be done. In the paper, we can talk about how we're not constructivists, we're not formalists about this; we think it is real. It boils down to how you look at it, and that way of looking at it can be framed and motivated the way we've been talking. Let's get back to your experimental work and the cash value of all this, curing cancer or regrowing limbs, right? To me, what's so cool about it is that once you start thinking in a mathematical direction, the math is more than just a grab bag that emerges from evolution; the math is really there as a structure. Then you can use the tools of mathematics to model what you think would happen under certain conditions when you play around with xenobots and other things. Is this likely to regrow a limb, or what's it likely to do? So that's what gives you the power of the math: you have these tools available to you.
[20:16] Michael Levin: Yeah.
[20:17] David Resnik: And, um, yeah.
[20:20] Michael Levin: There's one thing we have to head off at the pass up front, because what often happens with these things, and I've seen this again and again: we do something interesting, and then after the fact someone looks at it and says, well, that's got to be compatible with physics and chemistry, no surprise, nothing to see here. If you zoom in far enough, all you're ever going to see is the chemistry doing what chemistry does. What can you draw from that? This is what's very important: it's never inscrutable magic underneath. It always looks like chemistry if that's how you want to look at it after the thing has been done. I love what you did in the paper with the Thompson stuff. The value of it is in the forward-looking: what experiments does it make you do? It doesn't escape the fact that somebody can always look back and say, well, just chemistry doing what chemistry does, which is where a lot of people retreat. I have an analogy to this: in the Game of Life, the cellular automaton, you can say that you don't believe in gliders, that all there are are individual pixels that go black and white, and that's it. You can be a reductionist about that. But if you don't believe in gliders, you're never going to build a Turing machine made of gliders as communication elements, which people have done. You can look at that and say it's still only following the rules of Life. But the reason you didn't make this and somebody else did is because you didn't have that particular perspective on it. Perspectives matter. We will be able to say we can do all this stuff, but we have to be open to the fact that after we do the experiments, some folks will still look at it and say, well, that's just chemistry doing what chemistry does. There's no way around that.
[22:32] David Resnik: Another thing I think about this too is the whole emergence paradigm that you're up against. What we're talking about is that somehow the system is getting information about something from somewhere else. The information is not coming from the genome. It's not coming from the geometry of the proteins; it's coming from somewhere else. It's ordering the cells in a certain way. In the beginning of the paper, I describe this sort of battle, the holists versus the emergence people, and now we have stronger experimental evidence that the cells are getting information from somewhere else.
[23:30] Michael Levin: That I think is critical. Being able to quantify that, or at least semi-quantitatively say how much information you have gained, is super critical, though it's harder in biology than in some of the simple computational models. I think there are two issues we want to talk about, and I'd love your help on some of this in the paper. One is: back to this question of xenobots and anthrobots, when did you pay the computational costs for those things? Here's the evolutionary history that taught you to be a frog or a human; here's all the stuff that happened; but that never happened for those. So when? Some people will say, at the same time that you learned to be a frog or a human, the genome also learned to do these things. It just happened along the way at the same time. The trouble with that is that it seems to undermine the whole point of evolutionary theory. The whole point of the theory was to draw a tight specificity between the way that a creature is now and the history of the environment that got you there. It was supposed to explain that you are green and frog-shaped, and that you have the physiology and the behavior, because of all of these historical things that happened. If you say, oh yeah, at the same time you can be this whole completely different other thing, there's something wrong here. It goes back to the issue of extra information here that the standard paradigm does not capture. It's not enough to say that, well, emergently it happened at the same time; evolution requires us to do more than that. It requires us to say why this particular thing and why this set of propensities. I think we have to say something about evolution and why you can't just say that it happens at the same time. The other thing we have to address is the nature of explanation in these things. Consider if I ask somebody to explain the glider in the Game of Life: one thing they can do is simply take me through the four steps.
There are four stages, and you can just show me each step. But is that an explanation? It looks to me like a simulation. What you've shown me is that, in fact, that's what happens. In this world, that is exactly what happens. You've taken me through the steps. Is that an explanation? We have to agree in advance, or at least in the paper, ask the reader to commit to what we expect of an explanation. Because saying that something is emergent, or if somebody had an amazing simulator and they could simulate frog cells and roll the tape forward real fast and say this is what it's going to be, that does not seem like an explanation. That seems like a simulation: after the fact, looking at your simulation and saying, "This is what happened." We have to say what we want that's more than that. What are we asking of a good theory that's not just a prediction, a black-box prediction of what's going to happen? I don't know what that means in the case of a glider. I don't know what it would mean, but I'm pretty sure that for biology it's not just that we can roll the tape faster than reality does and just look at the output. That doesn't seem like we've understood anything. Those two issues we have to stick in the paper somewhere.
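The four-step walkthrough being discussed can be run explicitly. Below is a minimal, illustrative Game of Life sketch (the helper name is an editorial invention): stepping the standard glider four times reproduces its own shape, translated one cell diagonally, which is exactly the "simulation, not explanation" the speakers describe.

```python
# Minimal Game of Life on a set of live (x, y) cells.
from collections import Counter

def step(live):
    """One Game of Life generation: count neighbors, apply birth/survival rules."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has 3 neighbors,
    # or 2 neighbors and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic glider (y increases downward):
#   .O.
#   ..O
#   OOO
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After exactly 4 steps the same shape recurs, shifted by (+1, +1):
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Running this shows each intermediate stage really does follow from the pixel rules; the open question raised in the conversation is whether tracing those stages ever amounts to an explanation of the glider.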
[27:06] David Resnik: So explaining what the cells are doing during morphogenesis. What an explanation of that looks like.
[27:22] Michael Levin: An explanation such that we would be satisfied? I claim that the standard story of selection, and even of developmental plasticity and things like that, does not explain the very specific features of xenobots. They have behaviors, they have healing, they have a few things. I say the standard model doesn't explain it. The question is what would make us happy? When could we say, now it's good, now we've explained it? Simply saying that it's emergent from other things the cells happened to have been doing, I don't think that does the trick. What would a mature model of this actually look like? The ingredients: if we could map out the latent space, knew what the possibilities were, and understood why it went to one form versus another. Just to know a success case when we reach it, what does that look like? I think it's totally non-trivial. I think people don't have a good understanding of what it means to explain something in a satisfactory way. We have to unpack that.
[28:31] David Resnik: There are different things people could look for in an explanation. One is something that allows you to predict things very well and model things very well; for all you care, it's a black box. But there are also people who look for understanding. They think that explanation gives you a deeper understanding of things.
[28:53] Michael Levin: That's what we need to flesh out. If you buy into our realist commitments to the properties of this mathematical-plus space, what does understanding look like then? What does a successful example of understanding look like? Maybe the Thompson thing is useful there. This is part of conversations I've been having with some mathematicians, Edward Frenkel and some other people, around what understanding is in math. What does it mean? You're in biology, you start asking questions, and eventually you end up in the math department over some symmetries or something. Now the question is, for you guys, what does it mean to have explained something? Is there a version of looking deeper in, or is it lateral at that point? Do you say all we can say is that this looks like this other thing in topology? What does that look like? I'm still looking for more information on that one, because eventually I think we're going to have to tie into that.
[30:12] David Resnik: I think so. But I think just claiming that you want understanding by itself is not necessarily something that earns money. What you're ultimately talking about is, I think, an explanation that is applicable. It gives you this kind of understanding that we can predict what these xenobots and other things are going to do and we can use them and apply them. And that kind of an explanation is much more useful than just post hoc after the fact saying, oh, well, it was all physics and chemistry. I think we should.
[31:03] Michael Levin: I think we should, but I think we need to be careful about prediction alone because of what's happening with some of these AI models in science. Sometimes the thing is crackerjack at giving you predictions. And so then some scientists will say, "Okay, that's all there is: making predictions. This thing makes predictions." Other scientists are not buying into that, saying: because it's a black box, you have no idea why it made the prediction, so we haven't learned anything. It's not science.
[31:33] David Resnik: No, we could say more than that, or you're going to just end up describing it. We could say it's doing this kind of computation: the bioelectric network is like a little computer inside the cells doing these computations, and that would be a deeper explanation. To me, that would be the deeper explanation of what's going on. We relate it to the morphospace and some computation going on in the bioelectric network, and then you get a planaria head or tail.
[32:16] Michael Levin: That, I think, is ideal: to be able to say what an explanation looks like that involves going deeper. On the Thompson thing, when you have a merger of these kinds of non-physical structures and some physical embodiment, what does an explanation look like that spans both worlds? What's a satisfying spanner of those two worlds?
[32:50] David Resnik: I think that kind of discussion might go in the latter part of the paper; I envision the latter part describing the research program. What's the research program that comes from all this? This is where we start getting to the experimental value of it and the explanation that really helps us understand what's going on. I think that's fine.
[33:26] Michael Levin: I think we can put it there, but I think we have to at least nod to it or presage it early in the paper, because traditional readers start to see the experiments and immediately come up with a different, again post-hoc, perspective. And it's entirely unclear at that point why you need all this other stuff.
[33:53] David Resnik: No, I think in the paper early on there will be something: why the move to Platonic space, right? What's this all about? And then some explanation of why we would go in this direction and the evidence for it; the research program and what could be done with it would come later. I could certainly see setting that up.
[34:30] Michael Levin: Outline the rest of the paper briefly and say we're going to take you through these steps.
[34:40] David Resnik: So am I correct in thinking that you're also trying to model mathematically the bioelectric network?
[34:52] Michael Levin: We have a bunch of papers on this.
[34:55] David Resnik: So that is the way I see it. That is a mathematical space, and so is the morphospace; that's a mathematical space. And I see it as a project of trying to associate or align the two (Karl Friston was talking about this), so that there could be some kind of mapping from the morphospace to the bioelectrochemical space. That's the real cash value, because then you can independently study the morphospace to predict what should be going on in that other space, and then ultimately what the organism is going to do. I agree with you totally that these forms could be behavioral too. It could be a lot of things. We have decision theory, game theory; all that stuff is behavioral modeling, and I could see how that could be useful too. But the way I'm thinking of this so far is to try to narrow the focus to morphogenesis, and then later in the paper, in that later section, say that this possibly has applications to a lot of other areas of biology and things like that if we start thinking in this direction.
[36:51] Michael Levin: That's fine. We can absolutely do that. In some of our very minimal computational models (there's only one paper out so far, but a couple more are coming this fall), we basically look for the same kinds of things, but in very simple algorithmic systems where it's completely transparent. There are no new mechanisms to be found. You can see the steps. It's doing something, whatever it's doing. And yet you get these additional behaviors that are nowhere in the algorithm. These behaviors are not just complexity or unpredictability; that's cheap and easy, it's very simple to do that. What we find instead are things that are recognizable to any behavioral scientist. They're very simple kinds of proto-cognitive capacities, things like delayed gratification and associative conditioning. These are things that, if you were going to write code to implement them, would take a whole bunch of code. It isn't there. There isn't any code for it. So you can actually quantify this to the point where you can think about commercial applications where there are multiple things going on in the same algorithm: you charge one customer for one set of things and a different customer for the other stuff it's doing, but there's only one set of steps. That's a commercial application where you're breaking the standard view that you have to pay for every useful thing that an algorithm does. You have to pay for it with energy, and erasing bits costs you money. That's also part of the research program: these features seep into even very minimal things; you don't have to be a complex cell or a complex tissue to benefit from these remarkable features. They soak into even the most minimal kinds of things, where they're less impressive but nevertheless easier to quantify. You can actually say: what did you get for free, or for cheaper anyway?
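The remark that "erasing bits costs you money" refers to Landauer's bound: erasing one bit of information must dissipate at least k_B · T · ln 2 of energy. As an editorial aside, here is the back-of-the-envelope arithmetic at room temperature (variable names are illustrative):

```python
# Landauer's bound: minimum energy to erase one bit is k_B * T * ln(2).
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact by SI definition)
T = 300.0            # room temperature, K

min_energy_per_bit = k_B * T * math.log(2)
print(min_energy_per_bit)  # ~2.87e-21 joules per erased bit
```

Tiny per bit, but it sets a hard thermodynamic floor on computation, which is why "getting more out than you put in" is a quantifiable claim rather than a figure of speech.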
[38:49] David Resnik: Let's think about where ultimately we would want to publish this, and the length and everything, because the last paper we had got huge and had to be divided in half, and we had all these problems with the references; it was like Chinese water torture with the references.
[39:19] Michael Levin: There's a journal called "Life" from MDPI, which has no length limits. That's one possibility. There's Adaptive Behavior, which Tom Froese is the editor of. I don't remember if they have length limits. There's Biological Reviews, which has a length limit of 20,000 words, which is pretty generous. They're a really, really good journal. They tend to have more traditional stuff, so I don't know if they would be up for this, but we can certainly try.
[40:07] David Resnik: What about one of the Frontiers journals?
[40:09] Michael Levin: My TAME paper in Frontiers was 30,000 words. I had to fight them on it, but eventually they said fine. It did raise the question in my head of why I wrote 30,000 words, had to fight them on the length, had to fight the reviewers on the comments, and then had to pay thousands of dollars to publish. I was like, why? I should have just put it in a book or something. But yes, Frontiers should be doable.
[40:43] David Resnik: Maybe we don't have to worry too much at this point about the length.
[40:50] Michael Levin: What I've done many times is write exactly what we want to write, pre-print it exactly how we want to pre-print it at whatever length it is. Then if the journal asks us to cut, fine, because for anybody who's actually interested, the original, the pre-print is up. We're going to pre-print it anyway. I'm not terribly worried about it. I think those are reasonable options.
[41:18] David Resnik: All right. So how do you want to proceed on this? Do you want me to revise what I've sent you?
[41:33] Michael Levin: I think that would be great. I am completely tied up through most of December. I've got a couple of deadlines that I'm so behind on already. I can't do terribly much in the next few weeks. But I think it would be great if you did what you could with the stuff that you've already written in light of what we talked about today. And then I'll highlight the places for me to fill in. And then I'll jump in.
[42:01] David Resnik: Because there's going to be a section summarizing your research in general. So I think that'll be fairly easy for you to write.
[42:16] Michael Levin: And all this stuff up front, motivating all of it and explaining why: we don't necessarily need to go deep into math and physicalism, but I think it is important to say that we didn't conjure up this enormous conceptual baggage just because of Xenobots. I've only started talking about this stuff in the last year, and some people say, from Xenobots you're bringing in all this other stuff. It's important to realize that no, it's not just Xenobots. This has been cooking for a long time; there just hasn't been a way for us to take it into the empirical work. And I think some of the mathematics issues actually tell you this is the case long before you get to any of this. So I think having some of that in the introduction matters: that we didn't bring in this enormous metaphysical baggage because of a recent result. It's been needed for a long time, and now we can finally pursue it.
[43:33] David Resnik: It goes back to the whole issue of developmental constraints on evolution. These issues have been around a long time. I have another example and want to see what you think. What do you think about convergent evolution, like the octopus eye being very similar in structure to the mammalian eye even though the two lineages are very distant? Do you think that the mathematics of refraction and reflection, the materials you have to work with, and the mathematics of how a lens works are factors here? I've been thinking about that.
[44:36] Michael Levin: I think that's very interesting. And I know what the traditionalists are going to say about why there are convergences and so on. But I think you're onto something here. In one of our simple computational models, one of the surprising things we found is a tendency that is nowhere in the algorithm (we didn't put in any code for it): behavioral subunits preferentially associate with other subunits that are like them, a stick-with-my-own-kind thing. Chris Fields and I had a paper years ago trying to explain this from the perspective of surprise minimization: the least surprising thing around is a copy of you, so it's a force for multicellularity; you want to surround yourself with others that are less surprising than the outside environment. And that thing (talk about convergence) is super biological; it's all over the place in biology. But it also crops up in this completely non-biological, artificial, synthetic, computational thing. So I would not be surprised to find that there are patterns in that space, whether morphological patterns, behavioral propensities, or policies for doing things, that crop up again and again because they are attracted, in some functional sense, to certain kinds of embodiments in which they can successfully make a difference in the real world. So maybe there's something about being an eye: maybe it's the shape of an eye, or maybe it's the computational processing that these kinds of eyes can do. I think you're onto something. I don't know if we can prove a strong result from that yet, but it's definitely suggestive, I think.
[46:34] David Resnik: Another example is wings. We do have a lot of different kinds of wings, but they're all bound by aerodynamics. Ultimately, it may be that the physics is doing all the explaining of the emergence, but that's still not a bad thing; it's pointing at something. I agree that we should talk about some more motivations besides Xenobots, because we don't want it to look like we're building this huge structure just because of the Xenobots.
[47:38] Michael Levin: Part of it is that you said something really interesting at the beginning of that point about constraints. Lots of people have talked about constraints. But what I see is that physics is the domain of things that are constrained by these mathematical patterns. Why are there only so many kinds of fermions? Because there are symmetries. Biology, I think, is the domain of things that are not just constrained but enabled by them. In other words, I think evolution exploits the hell out of these things as free lunches. Biological systems are amazing at using these patterns and then leveraging them into novel capabilities. Some people also talk about enablements, but I really think they tend to get short shrift: the physics means you can only do certain things, so of course there are only so many ways to fly, so of course you're going to have these kinds of wings. I think it's much more interesting to think about all the things that evolution gets to do without having to spend the effort to micromanage them, because of these free lunches. You make a minimal interface and immediately you get things: you make a voltage-gated ion channel and suddenly you get all the truth tables and their properties and the fact that NAND is special. You didn't have to evolve those truth tables; you get that for free. All you made was a voltage-gated current conductance. And so biology, I think, exploits the hell out of these things.
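The "NAND is special" point refers to a standard mathematical fact: NAND is functionally complete, so once any switch-like element implements it, every other Boolean function comes along for free. A minimal sketch of that free lunch:

```python
# NAND as a "free lunch": a single NAND-like element suffices to build
# NOT, AND, and OR, because NAND is functionally complete.

def nand(a, b):
    return not (a and b)

# Every other gate derived purely from NAND, no new primitives needed.
def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))
def OR(a, b):  return nand(nand(a, a), nand(b, b))

# Verify the derived gates against their truth tables.
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
    assert NOT(a) == (not a)
print("NOT, AND, OR all recovered from NAND alone")
```

None of the derived truth tables had to be specified separately; they are consequences of the one primitive, which is the sense of "you didn't have to evolve those truth tables" above.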
[49:14] David Resnik: In the paper (I don't know if you read it; I'll talk about it in my PowerPoint in a couple of weeks) I talk about two different ways that math explains. One is the traditional constraint way, but the other is by creating a space of possibilities that can be explored by the organism and by evolution over time. Think about the cicadas and the prime-numbered life cycles of 13 and 17 years. What math has done there is create a potential evolutionary strategy, a strategy that wouldn't have been there if there were no prime numbers. The strategy is: I want a life cycle that maximizes my evasion of predators, because that's what prime numbers do. If your cycle is 15 years, predators can time their life cycles to hit you, killing you at years 5, 10, and 15. With prime-length cycles, it's much harder to do that. What I'm saying is that evolution found that. There's no reason cicadas couldn't have had other life cycles, and some do; there are annual cicadas. But these cicadas have prime cycles, and we're pretty sure we know why: because of this. The math creates a space of possibilities; that's the way I see it.
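The arithmetic behind this strategy is easy to check directly. The following sketch (an illustration, not from the paper) counts how often a periodic predator coincides with a cicada emergence: a 15-year cycle is hit at every emergence by a 3- or 5-year predator, while a 13- or 17-year cycle caps the worst short-cycle predator at half the emergences.

```python
# Why prime life cycles help evade periodic predators: count the
# fraction of cicada emergences that a predator with an n-year cycle
# coincides with, over a long horizon. Illustrative arithmetic only.

def coincidences_per_emergence(cicada, predator, years=1000):
    """Fraction of cicada emergences a periodic predator coincides with."""
    emergences = range(cicada, years + 1, cicada)
    hits = sum(1 for y in emergences if y % predator == 0)
    return hits / len(emergences)

# Worst-case overlap against any short predator cycle (2-9 years).
for cycle in (12, 15, 13, 17):
    worst = max(coincidences_per_emergence(cycle, p) for p in range(2, 10))
    print(cycle, worst)
# prints: 12 1.0 / 15 1.0 / 13 0.5 / 17 0.5
```

Composite cycles (12, 15) are synchronized with at every single emergence by some small-cycle predator, while the prime cycles meet any such predator at most every other emergence, which is the evasion advantage described above.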
[51:06] Michael Levin: No, I think that's absolutely right. And if you want to know specifically why 17 and 13 work so well for that purpose, you have to go to the math department and ask about the distribution of primes: why are they distributed like that rather than some other way? Why didn't you have to count to 1000 to find the first usable one? That's not a biological question, and it's not a physics question; you're not going to get the answer in the physics department.
[51:33] David Resnik: Well, there's some biology in there too. Take a higher prime, say 73: biologically, you just can't get away with a really long life cycle. So there are constraints on the biological side as well.
[52:04] Michael Levin: But the actual presence of the primes: you can imagine the number line and where the primes fall, and isn't it nice that some primes are available early on, so the biology doesn't have to wait? And as you pointed out, it wouldn't wait; it can't wait. So it's very nice that we get some low-order primes, and who can tell you why that is? Only the mathematicians can tell you that.