
Discussion #1 at the Platonic Space Symposium

Contributors to the Platonic Space Hypothesis discuss math, identity, abstract realms, attractors, simulation, mind and agency, exploring how content, creativity, scale and boundaries might fit into a unified view of reality.

Show Notes

This is a ~1 hour 40 minute discussion among contributors to the Platonic Space Hypothesis (https://thoughtforms.life/symposium-on-the-platonic-space/)

CHAPTERS:

(00:00) Math, identity and realms

(16:08) Convergence, abstraction and attractors

(34:05) Attractors, stress and observers

(41:29) Realms, impossibility and simulation

(51:40) Simulation, explanation and understanding

(01:04:39) Mind everywhere and agency

(01:19:19) Content, communication and creativity

(01:30:42) Boundaries, scale and space

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


[00:00] Chris Fields: I'm happy to raise the question I raised in the registration form, which was a Gödelian question. Since, as soon as we want to achieve some level of precision and definition, we're forced to use mathematics to talk about our own states and our own interactions with the world, however you want to define that: what are the consequences for our view of mathematics of this fact that we have to use mathematics to describe ourselves and our states as physical systems, our behavior as physical systems, our physical interactions with our environment? I have to use mathematics to describe my interaction with all of you, for example. How does that bias, if it does bias, our thinking about what mathematics is? What does it mean to claim that we are entities that are not only amenable to mathematical description, but for which mathematical description is required for a certain kind of discourse, the sort of discourse that we regard as science or as explanatorily productive?

[02:23] Olaf: If I can extend that, Chris. I've seen a few talks address this. How much of mathematics is internalized and used as an extension of our senses, versus something that is completely usable but external, or very low bandwidth with our subjective awareness and computation? How do people feel they are on that spectrum? I think it extends what Chris is asking.

[03:20] Michael Levin: Well, I hear two different but related questions there. One is: if we take the thing that we currently identify as "this is what we think of as math," to what extent is that applicable to the things that we're interested in here, and where does it fail to capture the things we're doing when we relate to each other? So that's one question. But the thing that I keep coming to is: do we in fact have a fixed thing where we know "this is mathematics and here are the borders of it," and if you go beyond that, you're somewhere else; it's not math, it's something else? Or is it that our attempts to formalize interactions between agents are actually stretching math? Is it changing the definition, changing the borders of what we thought? Maybe certain things that weren't thought of as part of math then have to become part of math. So is that changing the definition? Or is this a fixed thing? Then we can argue about whether it's applicable. And if it's not applicable, then we have to pick something else, some other kind of formalism. I don't know what you guys think of that.

[04:49] Chris Fields: I should say I'm neither a mathematician nor a historian of mathematics professionally. This is only an observation from the outside. Certainly, if one looks from the outside, how math has been described by humans has changed quite a bit with the introduction, for example, of non-Euclidean geometry. This was something that no one even imagined up to then. There are now many kinds of algebras in addition to what was originally regarded as algebra. When one reformulates mathematics in set theory, it looks different. When one reformulates mathematics in category theory, it looks different. It becomes much broader. Many things that in earlier formulations looked like distinct entities or distinct systems or organizations turn out to be notational variants. You say this thing and that thing are in fact exactly the same thing. All we've done is redescribe them in a different language. It seems to me from this outside perspective that how we think of math is constantly changing. That doesn't address the question of whether there's some fixed entity called mathematics somewhere outside of our conceptualization at all.

[06:58] Michael Levin: By the way, about what you just said on notational variants: when we do notice that, hey, this thing is actually the same as this other thing, what's the meta level there? What are the tools that you have to take on board to even be able to make that judgment?

[07:23] Mariana: So, I would say there are a lot of tools you can use and there are weaker or stronger forms of proofs. You can prove by contradiction, which is fun to do. But in the end, you're reasoning. You're reasoning with agents because ultimately you're going to publish it and you're going to have a community review it. Mathematicians in principle are also the first ones to say, I made a mistake. I thought we could do it this way, but after all, I thought about it, and I found a loophole. And so it's almost like a continuous dialogue of agents' reasoning. But then, of course, you have representation tools that will help you verify, and ultimately you can also have geometrization, for example, of two objects, and then you can see that they relate by some measure, and this ultimately, for example, can work in favor of the proof or against it. But I'm with Chris Fields. I think it was a good intuition. But I also ask why we are asking this question in terms of what's the assumption? I want to tackle the assumption, perhaps work it from a different angle. Is it related, for example, to patterns or to our notion of patterns because they find expression in our mathematics?

[09:20] Chris Fields: If you're asking me, since I posed the question at first, one of my major obsessions is the notion of identity. And in physics, that's the notion of identity over time, since we parameterize this thing with this parameter we call time. But without this notion, physics stops. There's nothing to say anymore. And indeed, lots of other things stop. Psychology stops because we can no longer talk about memory if we can't talk about identity. And identity is a key assumption, or an axiomatic assumption of category theory, that there's an operator that we call identity. And without that notion of identity, mathematics stops. I suppose the underlying question, or the question that underlies my question at the beginning was, what is this notion of identity? What does it mean that we try to formalize it in these various ways?

[11:00] Michael Levin: This is also something that's very fundamental to what we do as developmental biologists, because as developmental biologists we really want to understand what does it mean that you have an embryo, which is the same through some period of time and that things happen to it, but yet this is the thing that's undergoing change. This is a very fundamental where does it come from? How does it come to be and so on. The only reason I bring it up at all is that it seemed to me to be a simpler domain in which to try to make the claim, which some people at least already believe, that not all facts are physical facts. If you try to do that in biology, it's really hard because it's very complex. People will say there's some mechanism you just haven't found yet. That's probably always true because there's always more to be discovered. But in math, other people for a really long period of time have already made the claim that there are facts that are not derived from nor changeable within physics. There's this other domain of important information that exists. That was my strategy: we already know this is the case, or at least many people believe this is the case. Now we can ask the question of whether some of these things are also relevant for biology, for behavior science, and so on, and move on from that foundation. That was my motivation for mentioning mathematics at all, because at least there we have a bedrock where some people already bought into the idea that not all important facts are facts of physics.

[13:01] Mariana: I agree with you. The question was to raise the assumptions so that we could discuss them. Chris spoke of identity; I'm really fond of this topic as well. It is interesting to think of identity all the time and everywhere, but in no particular place. These two change. I know this may seem hard in biology to think of things that do not happen in time or that happen all the time. It's more like it happened all the time. They are the same all the time. So it's within a time range. In development you see this a lot. You would have an embryo. In principle it will grow to Stage 22; it will have 36,000 cells as an open embryo. So this happens all the time. When there's a variation, we note it down. But in principle this happens all the time. Time can be expressed also from a time-independent perspective. Sometimes this is helpful because if there is a structured space of patterns independent of us, then our assumptions of time may be wrong, and this can hinder our understanding of their development if they do. What would it be like to develop not in time? This also ties back with the notion of memory, that memory is a temporal thing. Suppose that this structure, space of patterns, is a space where memory is retrieved from. All states that already happened and will happen live there. What you have is agents that loop around. This is a hypothesis. Depending on a local state, they will fetch preferential points in this structure. If you want to call this a temporal structure, there are physical models that could do this. They may not represent the standard model, but they exist and they're mathematically influential. Then we are no longer speaking of time that passes. You're speaking of agents that are atemporal. I find this notion interesting. I don't know what you think about it. Another thing: we speak a lot of physical facts, and I would like to bring to the table this notion of relational facts. 
Both in mathematics and in physical models, what we are asserting is relational facts.

[16:08] Olaf: How much do you think mathematics and related areas, or intersections of your own fields with mathematics, are converging or diverging on the mathematical level? And how far, if they're diverging, can it stray from the current sets of axioms and concepts? I'm saying this because I see mathematics as this historically negotiated corpus, from Euclidean geometry to algebras, analysis, and category theory. It feels inter-subjective and tends toward convergence in most cases. In my field, neuroscience, we see connectivity matrices becoming less interesting, and we have to converge toward higher-level abstractions to make something of them; that's the most exciting part to me, at least. So do we tend toward divergence or convergence?

[18:02] Mariana: It depends on what your parameters are. I've dwelled a lot on this notion because I feel it's very important for us and for the research program in general, this distinction between abstract and concrete. There are good proposals from logic to speak of this in terms of properties and sets, but I ask, for example, in terms of a combination. When you abstract something, ultimately it really feels semantic, but also, if you look under the hood, it feels that you're saying less to address more. Suppose you have a high feature density. This speaks to the corpus. Feature density means you can distinguish something in your data set or in your model, and it is unique. Suppose you have lots of these unique features that don't repeat. This would be very rich and you would have less redundancy, for example. Suppose the other way, where you have one feature that repeats 360,000 times. This would give you another kind of ratio. This is my question for you: have you ever thought of a spectrum between something that is abstract and something that is concrete along these lines, if you were to place things between these endpoints?
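One way to make this ratio tangible: the sketch below (my own illustrative operationalization, not a definition Mariana gives; the `redundancy` score and the example feature lists are hypothetical) scores a description by the fraction of its feature occurrences that are repeats of an already-seen feature, placing it between her two endpoints.

```python
from collections import Counter

def redundancy(features):
    """Fraction of feature occurrences that repeat an already-seen
    feature: 0.0 when every feature is unique ("rich", low redundancy),
    approaching 1.0 when one feature is instantiated many times."""
    counts = Counter(features)
    total = sum(counts.values())
    return 1 - len(counts) / total

# Many distinct, non-repeating features: low redundancy.
concrete = redundancy(["f%d" % i for i in range(100)])  # 0.0
# One feature repeated many times: high redundancy.
abstract = redundancy(["pattern"] * 100)                # 0.99
```

Whether low redundancy should count as "concrete" and high redundancy as "abstract" (saying less to address more) is exactly the open question she poses; the score only gives the spectrum a number.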

[20:33] Michael Levin: Having seen the different talks and everything that everybody has been saying in the symposium, what do you guys think about how many different views we have here? Fundamentally, how many different—obviously, everybody's got a different perspective. I'm going to send out a table for people to comment on, and I'm trying to think of what the columns of the table should be, the primary axes that people would have different opinions on in this collection of thought. How many different views do you think we have and what does the conceptual space look like? What are the primary axes where people agree and disagree? Just to give you an example, one basic one that comes up all the time is people say, "I agree with what you said about ABC, but I really don't like having a separate realm." This notion that some people like a monism where everything is in one space and they really don't like the idea that there's a separate realm in some cases, and we can argue about what it really means to be a realm as opposed to something else, some weaker form of it. But that's one axis, I think, where people differ: to what extent are there multiple realms? There are probably other axes. I'd be curious to know what you guys took away from all the discussions as to other fundamental dimensions.

[22:36] Olaf: I have one other axis, which is I think something like physics-boundedness, being constrained by laws of physics or not, as in a fixed set of rules. And it feels like my talk is on the extreme, on both ends of this, which is interesting. But something about dependency on substrate. Let me continue those questions. Let's assume that everything that we want from the representation exists. What then?

[23:32] Michael Levin: What will be the end goal? Because what we are representing is some subset of the real world. Let's assume that we have everything in there, technically create two worlds, then what? Or if we have some reduction of concept in that Platonic space, then what can we do with this? What are the best options to do with this?

[24:08] Mariana: I would say map it. Depends on the assumptions. But if we have agents that can come and go, then in terms of experiments, as you guys have shown, it is possible to have a source and target map. This would be what I'm most interested in.

[24:34] Yvette: For you, the fact that we succeeded in mapping everything that we would like to do in the world is good.

[24:48] Michael Levin: Success of that test, of that theory.

[24:54] Mariana: No, but it would give us some truth bounds for experimental means or for managing expectations for experiments. What would be yours, for example, your end goal?

[25:16] Olaf: I will try to predict something that I didn't put there.

[25:23] Yvette: There is object A and object B, and object C is inferred from all of those. I know that, but I don't know if my tool can do that.

[25:39] Olaf: I'm hearing, in the metric of verifying whether it is or is not, a subjectivity notion; but you can have that, or you can zoom out to an objective, godlike view of those agents that you just mentioned, Mariana. I think that makes all the difference. If you switch to math that we haven't invented yet, or that is alien math, the question of whether we consider this part of the subjective perspective that we are holding right now is, I think, important. Maybe that's another axis for you, Mike.

[26:27] Michael Levin: For me, what I'm really interested in is mapping the space, but also figuring out what is it, what degree of, I call it a free lunch, but what does it actually give you? Because there's a wide range of options. So it might just give you static patterns, here's the value of E and that's all you get. It's just there. Or it might give you dynamic behavior or algorithms or compute, what's the range of complexity that you get out of it that you didn't put in and where? And so we're doing some things in our lab, giving bodies, whether physical or simulated, to simple mathematical objects to see what they encode. If you treat them as behavioral propensities, what do you get? But more generally, because that has implications for evolution. I think, to what extent can evolution exploit things that it pulls out of that space without having to take the time to micromanage them and evolve all the components. What do you get for free? You get some stuff, as Stuart Kaufman showed us, for free. But I think my suspicion is that's just the tip of the iceberg and you actually get a lot more. And ultimately in the lab, we need to be able to say, here are some anthrobots. There's never been selection to be a good anthrobot and to do all the weird things that they do. Where do their specific properties come from? Why did we not see this coming? How can we have predicted it? What are the options? And what's the relationship between the thing you make and the stuff that then comes through? If we tweak, can we tweak certain things about the anthrobots if we want other types of patterns to come through? That's what I'm interested in, is what do you actually get and what's the relationship between the interface that you build, whether that be technological or computational or biological or some combination thereof, and what is going to come through that you have no idea about.
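A toy version of "giving a body to a simple mathematical object" and treating it as behavioral propensities might look like the following (this sketch is my own illustration, not the lab's actual protocol; the digit-to-turn mapping is an arbitrary assumption): read off digits of √2 and let each digit steer a 2-D walker, so the number's internal structure becomes a trajectory one can study as behavior.

```python
import math

def digits(n):
    """First n decimal digits of sqrt(2); any 'simple mathematical
    object' could serve as the behavioral tape here."""
    return [int(ch) for ch in str(math.isqrt(2 * 10 ** (2 * n)))[:n]]

def embody(tape):
    """Treat each digit as a behavioral propensity: turn by
    digit * 36 degrees, then take a unit step. The returned path
    is the 'behavior' the number encodes under this interface."""
    x = y = heading = 0.0
    path = [(x, y)]
    for d in tape:
        heading += d * math.pi / 5  # digit -> turn angle
        x += math.cos(heading)
        y += math.sin(heading)
        path.append((x, y))
    return path

trajectory = embody(digits(50))
```

The interesting question in the passage above is precisely that the "interface" (here, the digit-to-turn rule) partly determines what comes through: a different mapping would pull different behavior out of the same object.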

[28:52] Mariana: One of the things that I've been thinking a lot about is exactly the work that you do, and precisely this mapping. It seems to me that this mapping, based also on your experiments, under the assumption that developmental states are pulled in, can also allow you, in very practical terms, to address, for example, regenerative procedures at a late stage, so you don't need to catch something at an early stage, because you already know how to pick up that pattern in case there is a topological defect that, in your sense, is a developmental stage. This is why I find the mapping really relevant. It might not be the best approach, but when you speak of free lunches, these are the free lunches, the low-hanging fruit that we could use. I've been thinking a lot about it. It seems that when you speak of perturbations, abrupt perturbations, things that were unforeseen so far, they then output these developmental novelties, like the anthrobots. I'm very puzzled about the tail onto the flank. Why not the tail? Why not keep the tail? It's so much cheaper. Why reject the tail? Why build the limb? I know there are some changes that are more helpful or more useful. Sometimes I wonder: it just feels like a limb is more complex in terms of edges than the tail, right? In physics, you would call it relational mechanics. There are some proposals in exactly these terms, where whenever there's a chance, go for partition. Go for something that is different. This would be the measure of complexity. So complexity is just a relational measure between you and your neighbors around you. I like it.

[31:41] Michael Levin: It's a very interesting question. I don't want to dominate this thing, so please, Yvette, Juan, Brian, Carl, please chime in. This question of at what point does the thing give up on the standard implementation and shift over to something else. There's a standard, you can call it an attractor, but I don't think that's all it is; there's a standard version of an embryonic body plan that it will try to hold to. If you deviate from it, it will work pretty hard to get back there. If you put on an extra tail, it will try to make it a limb, and things like that. But at some point, you can push it so far that it basically says: forget it. I'm now an anthrobot. I'm not going to try to make a human embryo. This is my new life. One of the ways that we're trying to address that is to look at stress markers, because we have a project looking at systemic stress as a measure of distance to your goal state. There are scenarios where the tendency to try to reduce stress is what pushes you to get back to where you need to be. So we're interested in this question of, okay, are xenobots and anthrobots stressed out about being those things? Or at some point do they adopt that as the new set point? Being a xenobot is my set point; I'm now a great xenobot, so my stress can fall. That's an experimentally detectable thing, and we're doing those measurements. That's one way of doing it. In general, I think that's a great question: at what point does it shift? And I don't think it's about utility or anything like that, at least certainly not in the short term.

[33:41] Mariana: So I misunderstood.

[33:43] Michael Levin: I don't know. We don't know how a lot of these decisions are made. There's so much that these systems will tolerate and try to accommodate to still get back to what they need to be. But there are also scenarios in which they just flip to something else. Carl, please.

[34:05] Carl: Some wonderful questions there. I wanted to pick up on this notion of stress and attractors, but try to frame it in response to some of the questions that have been rehearsed, going right back to Chris's question about the nature of maths. I noticed that he used the word dialogue. There was also Mariana's notion of discourse, and then we had Olaf's negotiated corpus. I think Olaf speaks to maths just being a particular kind of co-constructed language that has an enormous amount of explanatory power in terms of accounting for things accurately with the minimum complexity. In so doing, the question about convergence touches upon, Mike, what you were asking: is there something else, or is this just another version of the same thing? And if you pursue the notion that the right kind of language and the right kind of maths is going to explain everything as simply as possible, but no simpler, then you're looking for exactly that convergence. I think that speaks to a lot of what people were saying in terms of your maths being a continual process of basically model building, a co-constructive model building. The notion of identity, in my world, would be self, and it would be the self you find in self-organisation. It would be exactly the same thing you find in information theory in terms of self-information, right through to self-evidencing and the free energy principle. I mention that because that stress is, mathematically, simply the self-information or the implausibility of finding this kind of thing away from its attracting set. So coming back to the attracting set, the notion of a pullback attractor probably has everything that you need in order to accommodate all the questions that I've heard thus far. That really commits you to a particular kind of maths; well, it probably wouldn't be maths, it would be physics.
But certainly, mathematically framed, you would be seeking out that convergence that people were talking about: the kind of maths that allows you to explore all of the issues we've been talking about, and also provides that nidus of convergence that will enable a certain consensus. It strikes me that the notion of an attracting set has everything that you need. Think about Mariana's questions about things that recur in time, memory, persistence in time, having characteristic states: you can express all of these things in terms of attracting sets. So you need the physics of attracting sets, and that's basically the pullback attractor. Within that you can now define self. You don't need to be axiomatic and assume the existence of identity; there is a self that is constituted by the attractor, and everything else follows. To bring that to closure, the stress is a measure of the distance, or how far outside your attracting set you are, and what will happen is you'll go back to your attracting set. That was my breathless summary of the thoughts that were induced by the conversation.
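Carl's identification of stress with self-information can be sketched in a few lines. In the toy model below (the density is hypothetical, chosen only to show the shape of the idea), a steady-state density concentrates probability mass on a small "attracting set" of states, and stress is read off as surprise, -log p(state): small inside the set, large for implausible excursions away from it.

```python
import math

# Hypothetical steady-state density over a 1-D state space of 10 states:
# most probability mass sits on a small "attracting set" (states 4-6).
density = {s: 0.3 if s in (4, 5, 6) else 0.1 / 7 for s in range(10)}

def self_information(state):
    """Surprise, i.e. stress in Carl's framing: -log p(state).
    Grows as the state becomes less plausible under the density."""
    return -math.log(density[state])

low = self_information(5)   # inside the attracting set: small surprise
high = self_information(0)  # outside it: large surprise
```

A gradient-descent dynamics on this surprise would, as Carl says, pull the system back toward its attracting set; that return-to-set tendency is the "reduce stress" behavior discussed above.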

[38:02] Michael Levin: Brian.

[38:03] Brian: I just wanted to add, from this notion of perturbations: I think one of the issues with platonic spaces that I always wrestle with is whether this is all observer-dependent in some sense. And I think the notion of perturbations is a nice way to think about making the observer aspects as weird as possible. In the computational realm, you can do this. Mariana talked about this notion of experiencing time, and the aspect that maybe you can actually have agents experience time in a very different way than we experience time. We already have these in the AI space; they're called diffusion models. If you've ever read the Ted Chiang stories or seen the movie Arrival, there's this kind of gap between how we personally oftentimes see time in a linear fashion; but diffusion models in the sequence space see time in a completely different way, where everything appears at once. When everything appears at once, you can imagine that this is something diffusion does, where it generates the entire story everywhere at once. We're now exploring this because we're interested in whether these same systems learn algorithms that we're familiar with, or completely different algorithms, in the space of, for example, games like Sudoku. So I think we should make the observations as different and as weird as possible. We're always going to be locked to some notion of observer dependence, but we can generalize that observer dependence further and further out.

[39:37] Michael Levin: That's super interesting. Can you say any more about that? What are you actually doing with these diffusion models?

[39:43] Brian: We are training diffusion models to play Sudoku, because Sudoku is one of those games that has a lot of computational advantages, and it's an NP-complete problem. The algorithms that we usually use for Sudoku have a very particular causal structure: if you play Sudoku, it's "let me find the most constrained square and work from the most constrained squares to the least constrained ones, and solve the puzzle that way." If you train a diffusion model to solve Sudoku puzzles, it solves them very differently. We don't quite understand how it solves them right now, but it definitely doesn't choose the obvious strategy of marching from the most constrained square down to the least constrained one. It's something that almost feels random right now. We haven't done enough analysis on this yet, but the way it solves Sudoku could also lead to new algorithms in that space. Maybe the time complexity of those algorithms is very different from the time complexity of the algorithms that we have created in that space. And because it's an NP-complete problem, there's a lot of computational complexity analysis that we could potentially do in the long term from this. So we're training diffusion models on things where we already have an algorithm that we know works, and seeing what the diffusion model discovers as a different algorithm. It's also worthwhile studying in the language space, because there are diffusion models for language now. You could look at the algorithms diffusion models acquire, and the representations they acquire, by training on sequences that we believe are generally temporally linear.
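The classical "most constrained square first" strategy Brian contrasts with the diffusion model's behavior is the standard minimum-remaining-values backtracking heuristic. A compact sketch follows (the puzzle is a commonly used example instance, not one from the discussion; the diffusion side would require a trained model, so only the classical baseline is shown):

```python
def candidates(grid, r, c):
    """Digits still legal at (r, c) given its row, column, and 3x3 box."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    """Backtracking with the 'most constrained square first' heuristic:
    always branch on the empty cell with the fewest legal candidates.
    Fills grid in place; returns True if a solution was found."""
    empties = [(r, c) for r in range(9) for c in range(9) if grid[r][c] == 0]
    if not empties:
        return True
    r, c = min(empties, key=lambda rc: len(candidates(grid, *rc)))
    for d in candidates(grid, r, c):
        grid[r][c] = d
        if solve(grid):
            return True
        grid[r][c] = 0  # undo and backtrack
    return False

puzzle = [[int(ch) for ch in row] for row in [
    "530070000", "600195000", "098000060",
    "800060003", "400803001", "700020006",
    "060000280", "000419005", "000080079"]]
solved = solve(puzzle)
```

The explicit sequential causal structure here, one cell decided at a time in constraint order, is exactly what a diffusion model does not exhibit: it refines a guess for every cell simultaneously over denoising steps.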

[41:29] Yvette: Hi, I apologize for just jumping in. It's my first time at the meeting, so I wanted to say hi to everyone and thank you for the invitation. I will try to join regularly now that my schedule is a bit more organized with my teaching, so it doesn't interfere. Maybe I just wanted to quickly introduce myself. I'm a physicist and I work at the interface of quantum mechanics and general relativity. I'm going to be listening for a while and then see when I can chip in with something more meaningful, but maybe just an interesting connection: one of the things I'm interested in is the interface between the classical world and the quantum world. I tend to see these as two different realms. Maybe this is just a perspective that can be helpful because at the end of the day you could say it's all part of one whole, so why divide it? Sometimes these divisions can be useful. The thing I'm starting to see is that the classical world emerges from the quantum world. They are different realms in the sense that they follow different rules, but they interact. That's important because in the lab we can prepare a superposition of an atom going to the left and to the right. What I'm working on is the emergence of mass from, let's say, rules of the quantum world. Some things that have made understanding the quantum world difficult are that we try to put the rules of the classical world into the quantum world somehow. I was thinking about some things you've mentioned: what's physical, what's physics? Where do you draw the line of what's physical or not? We have a lot of arguments about that within quantum mechanics, because even within our community some people argue that quantum mechanics is physical and other colleagues say it's just information and a mathematical tool to make predictions, which I don't understand very well. Because we haven't understood quantum mechanics well yet, it's still very debatable what is physical. 
I think the question about different realms and what's outside the realm of the physical is definitely something that interests me very much. I'll just say hi, and hopefully I will be able to contribute later on. Thank you.

[44:47] Mariana: It was lovely hearing you, Yvette.

[44:50] Yvette: Thank you.

[44:51] Mariana: I agree. I rarely use the term "physical" and don't find it very useful. There are a lot of mathematicians contributing immensely to mathematical biology who end up contributing a lot to quantum theories or field theories. Because morphogenesis pinpoints the questions of the first mass and how mass behaves at different scales, we have all these effective theories and models. That's half the predictive power. But on the other hand, we cannot find the physicality for them. The temperatures are off. They don't reproduce anything but that behavior. It was lovely hearing you.

[45:50] Yvette: Thank you very much. Yes, I'm excited about joining this discussion.

[46:04] Chris Fields: Can I go way back to one of the previous topics, which had to do with what we're actually trying to achieve with both this discussion and with science in general: mapping our space. In a sense, one can look at it from two fairly different perspectives. One is the perspective of trying to predict what we will see, and the other is the perspective of trying to characterize what we won't see. If you think of mathematics as a formal system, or if you think of physics as a postulated set of symmetries, where everything beyond the statement of what the symmetries are is a relational fact of the kind Mariana was describing earlier, then what comes out of those two ways of proceeding is a list of things that you can't do, or a list of structures that don't occur, that don't make sense. A list of no-go theorems, like Gödel's theorem, for example. And everything else is subsumed under T.H. White's law of the ants, or what Gell-Mann called the totalitarian principle: everything not explicitly forbidden is mandatory. If you can't prove that something won't exist, then you're going to bump into it somewhere. So it's a possible pattern, or a possible attractor in Carl's terms. It may be entirely unclear how to construct that attractor, but in the absence of a proof that it's impossible, you can expect it to be constructed somehow. That's a very different, very negative way of describing what one is trying to do: to say that what one is trying to do is characterize the impossible. Carl, please.

[48:52] Carl: That was an excellent point. I'm just thinking of one more positive take on that. You could invert the problem and just assume that we want to characterise systems that can exist and then work from there. In that sense, what you are doing is saying that if a system is characterised by these characteristics, then it can't be over here in some state space. And that's just the surprise — the self-information or stress — that we were talking about before. And that comes for free with just writing down a density over states that can be occupied. Of course, there are many more ways of being dead than alive. It's a very small attracting set that remains there. In answer to that question, in my world, the utility of having the right kind of maths means you can simulate things in silico. If you can simulate things in silico, then you can do forecasting, scenario modelling, interventions; you can test hypotheses about perturbations to the system. This becomes really practically relevant, certainly in things like computational psychiatry, where you start to simulate people, for example, and when their sense-making or decision-making goes wrong, or any self-organising system like climate, financial services, fintech. Getting the right maths and the right kind of self-organisation in play opens up a whole world of practical and important ways of intervening and testing hypotheses via simulation. The other application is that you can test hypotheses about the unknowable mechanisms of the system that you're interested in, because you can't observe it directly. You can start to test hypotheses about the mechanisms because you've got the right simulation testbed.
And then finally, if it's all working, you can put it into a diffusion model and make artifacts and you can then move into the world of autonomous vehicles, artificial intelligence research, using the right kind of maths to drive and to create artifacts that somehow embellish or endorse our ecosystem and the things that we actually play with and use. So practically, from my perspective, it's really important to get the maths right, because once you get the maths right, you can now answer all sorts of really important questions.
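Carl's remark that surprise "comes for free with just writing down a density over states" can be sketched in a few lines. This is a toy illustration with invented state names and probabilities, not anything from the discussion: most probability mass sits on a small "attracting set" of viable states, and self-information (surprisal) falls out of the density for free.

```python
import math

# Toy density over a system's states: most mass on a small "attracting set"
# (the viable configurations), a sliver spread over everything else --
# "many more ways of being dead than alive".
p = {
    "alive_1": 0.45, "alive_2": 0.40,           # the attracting set
    "dead_1": 0.05, "dead_2": 0.04, "dead_3": 0.03,
    "dead_4": 0.02, "dead_5": 0.01,             # the many ways of being dead
}

def surprise(state):
    """Self-information (surprisal) of occupying a state: -ln p(state)."""
    return -math.log(p[state])

# States inside the attracting set carry low surprise; states outside, high.
for s in p:
    print(f"{s}: surprise = {surprise(s):.2f} nats")
```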

[51:40] Michael Levin: Could I ask about what you guys think about the distinction between simulation and explanation? This comes up in biology in the following way. I'll say we need to understand why this thing is doing that. And people say, well, it's emergent. And I say, what does that mean? And they say, what it means is that if we were to simulate the micro rules that are driving it, this is what we would see. And it's a regularity that holds in the world. That's what it is. I say, but what does that mean, that it's a regularity that holds? And what they mean is if we simulate the low-level rules, then out it will come. We can show that that's the case and get a catalog of these things. I'm interested in what the relationship is between being able to simulate it and thus show that, yes, in fact, that is what happens versus understanding what's going on. I was thinking about an extremely minimal case of this, the glider in the Game of Life. What does it mean to understand the glider, say the rate at which it moves or the angle at which it moves? What somebody can very easily do is show me the four steps, the cycle of the thing moving over. We can all agree that, sure enough, with the physics of this world, that is exactly what happens. Nothing more to say about that. But have you explained it or have you simulated it? With enough simulation, it seems like we could get the answers and not understand much of anything. What do you all think? Is there a distinction? And if so, what is it?

[53:18] Brian: For me, there's a very pragmatic distinction — it's a continuum between explanation and simulation — which is whether you can accelerate the simulation to a point where it doesn't have to run in real time or doesn't have to run in reality. The idea is that explanation allows you to skip a lot of steps. It allows you to go way faster than a simulation would if it had to simulate the entire universe to get to the same point that you want. In the cellular automaton case, I can predict gliders ahead of time without running all the intermediate steps required for a glider to be formed in the Game of Life.

[54:00] Olaf: An explanation is a shorter simulation — the usual route is much longer in terms of time, space, compute, energy, or whatever you want. That makes sense to me. I wrote this thing about LLMs being frustrating: you put so much data and compute into them that you can't possibly regard them as highly intelligent, because you invested so much in them. What is impressive about intelligence is that you put in not much and you get out a lot, which is how you phrased it earlier, Mike. Math is impressive because you put something in and you get so much more out of it. I think there is this notion of compression and efficiency that we expect, and otherwise we are frustrated.
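Olaf's compression point can be illustrated with a toy experiment (`zlib` standing in for any compressor; the data below are invented for illustration): a stream generated by a short rule compresses enormously, while noise with no shorter description barely compresses at all.

```python
import os
import zlib

# Structured data from a 10-byte generative rule vs. incompressible noise.
structured = b"0123456789" * 1000    # 10,000 bytes, tiny underlying rule
noise = os.urandom(10_000)           # 10,000 bytes with no shorter description

short = len(zlib.compress(structured, level=9))
long_ = len(zlib.compress(noise, level=9))

# The structured stream shrinks dramatically; the noise does not.
print(f"structured -> {short} bytes, noise -> {long_} bytes")
```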

[55:00] Carl: I just wanted to endorse that. It reminds me of arguments for universal computation, inheriting from, say, Kolmogorov complexity and compression. When I hear compression, I hear maximizing the likelihood of your model. I'd mention, Mike, that you can look at simulations as a glorified statistical test. When we do a t-test and we're asking a question about the mechanisms, or our hypothesis about the causes of some measurement or some data, we are doing a simulation. We are building a little generative model — a general linear model, for the t-test — and we're simulating what could have caused the data, and then we're identifying the best hypothesis. When I talked about simulations, I wasn't talking about using maths and computers to reproduce behaviour. I was talking about simulations used as a statement of your hypothesis about the underlying cause-effect structure that generated those data. Then you can compare different simulations, i.e. different observation or statistical models, and quantify the evidence for your hypothesis here and hypothesis there. But to do that, you've got to have the right kind of model. And you've got to have the right kind of maths that underpins that model, which is an open question and an ongoing challenge, which I'm sure we're all contending with.
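Carl's claim that a t-test is already a little generative model can be sketched directly: the two-sample t-test is the general linear model y = b0 + b1·group, and the t statistic asks whether the slope b1 (the hypothesized causal shift) is distinguishable from zero. The numbers below are invented for illustration.

```python
import math
from statistics import mean

# Two groups of illustrative measurements (not from the discussion).
group0 = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0]
group1 = [5.6, 5.9, 5.4, 5.8, 5.7, 5.6]
n0, n1 = len(group0), len(group1)

# General linear model y = b0 + b1 * group:
b0 = mean(group0)                  # intercept: baseline mean
b1 = mean(group1) - mean(group0)   # slope: the hypothesised causal shift

# Pooled residual variance around each group's fitted mean.
ss = (sum((y - b0) ** 2 for y in group0)
      + sum((y - (b0 + b1)) ** 2 for y in group1))
s_pooled = math.sqrt(ss / (n0 + n1 - 2))

# The classic two-sample t statistic is exactly the test on b1.
t = b1 / (s_pooled * math.sqrt(1 / n0 + 1 / n1))
print(f"estimated shift b1 = {b1:.2f}, t = {t:.2f}")
```

The "simulation" here is the generative model itself: a baseline mean plus a group effect plus noise, compared against the data it could have caused.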

[56:53] Chris Fields: I'll throw in a different way of answering that question, which is to say that a simulation may reproduce some effects that you're interested in, but it doesn't force you to change your conceptualization of the effect. It doesn't force you to change your language. Whereas a really good explanation often forces you to change your concepts. For example, go back to 1900 and consider the question of how there can be atoms. Why are there any atoms that are stable? The distinction between protons and electrons — that language hadn't been invented yet, but it was known that there was positively charged stuff and negatively charged stuff. It was also known that if the electrons actually moved in the atom, they would radiate away their energy and the atom would collapse, so there wouldn't be any stable atoms around. That was the question that the Bohr model ended up answering. Bohr's point was that electrons don't move: electrons can have particular energies, but they're not moving, so they don't radiate unless they change their energy in very precise ways. And they can only change their energy by finite amounts — in fact, only by particular finite amounts — so they'll only radiate by these particular finite amounts, and that radiation is not due to motion. Bohr's picture introduced a radically different conceptualization of what an atom was. And that's why it was a good explanation. It made sense because it got rid of an entire problem: the problem of the energy radiating away and the atom collapsing and being unstable. But getting rid of that problem required a conceptual change, an abandoning of intuitions about motion. So I would say a really good explanation is something that causes us to abandon some intuition.

[59:54] Michael Levin: I like that.

[59:55] Chris Fields: Simulations don't do that.

[59:57] Michael Levin: I like that a lot. It sort of evaporates certain problems. It inevitably raises new ones. And is that okay? Is there some sort of ratio? I've seen reviewers' comments on papers where they say, "this raises more questions than it answers." I'm like, that to me seems like that's what we're supposed to do, but maybe not. So what should be the ratio between the problems that you've evaporated and the ones that you've now unearthed?

[1:00:41] Chris Fields: I suppose the new problem should be more interesting than the ones you've evaporated.

[1:00:48] Olaf: You can sometimes solve a problem by switching languages. You create a new field sometimes, in the nice, successful cases. But you can always backtrack into the other language, which is what happens when science splits — and science is very split. That's why we have hope for things like category theory and things that bridge between the fields. But essentially, you have one cake, and you eat either the one or the other; you can't eat both. You can't cash them out both at the same time, until you unify. But that's how I usually see it. You can backtrack. You can always speak the other language. And otherwise, we solve the other problem.

[1:01:58] Carl: If it's a common language, though, the ratio that Mike was asking about would be a log odds ratio; in Bayesian statistics and post-Popperian hypothesis testing, it would be a Bayes factor: the relative probability of these data, measurements, or observations conditioned upon this simulation or that simulation, where the simulations embody your hypotheses, your mechanistic explanations, your conceptions. So the art of being a good Bohr, or a good scientist, is exactly to deny your intuitions, your prior beliefs, and explore other hypotheses. Then you put that hypothesis into some converged maths and make it into an observation model. And then you evaluate the likelihood of your data under that observation model relative to another one. And indeed, you may find yourself going back to the null hypothesis. If you commit to classical or frequentist statistics, failure to reject the null keeps you stuck. But if you use a more generic model selection approach based upon the Bayes factor or log odds ratio, I think that provides a really nice space in which you can think of the scientific process as elaborating new hypothesis spaces. And then you've got the technology to evaluate how you proceed, how you take a path through the space of hypotheses. The challenge, of course, is to elaborate new hypotheses. And ultimately you come back to natural selection and evolution and the law of requisite variety and the like to elaborate those. But notice that depends upon having converged maths, so you can take two concepts or hypotheses or models and simulate data generation under these two hypotheses in a comparable way, so you can evaluate the probability of your data given these two observational concepts or hypotheses; you couldn't use category theory to do t-tests.
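The log odds ratio / Bayes factor Carl describes can be shown with the simplest possible pair of hypotheses — a coin that is fair (H0) versus one with unknown bias (H1) — standing in for two competing mechanistic simulations. The data (16 heads in 20 tosses) are invented for illustration; each hypothesis assigns the data a marginal likelihood, and their log ratio is the evidence.

```python
import math

# Toy data: 16 heads in 20 tosses.
n, k = 20, 16

# Marginal likelihood of the data under each "simulation":
#   H0: fair coin, p = 0.5
#   H1: bias unknown, uniform prior over p in [0, 1]
m_h0 = math.comb(n, k) * 0.5 ** n
m_h1 = 1 / (n + 1)  # integral of comb(n,k) * p^k (1-p)^(n-k) dp over [0,1]

# Positive log Bayes factor = evidence in favour of the biased-coin model.
log_bayes_factor = math.log(m_h1 / m_h0)
print(f"log Bayes factor (H1 over H0) = {log_bayes_factor:.2f}")
```

Unlike a frequentist failure-to-reject, this comparison is symmetric: the same quantity can accumulate evidence *for* the null if the data favour it.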

[1:04:39] Mariana: I'm not very good with the term simulation. I really like breaking assumptions, or trying to think across borders. The first time I heard your framework, Mike — TAME — I thought it gave so many disciplines a fresh set of tools, even thinking tools. I've been thinking a lot about them for our research program. Because when you have a technological approach to mind everywhere and you try to find where "everywhere" is, where the limit is, then perhaps we find ourselves with this opportunity to do these experiments outside of biology. This is no easy thing to conceptualize. I feel that I've been working at the limit, or at least at my capacity, because it requires this interpretative competency to go into all these different fields and try to match the framework. It's a real mind-bender. I don't know where you all are regarding it, if you have thoughts concerning this aspect.

[1:07:08] Chris Fields: Well, I think one thing that the framework requires of us is to abandon this notion that we are special in a very particular way as cognitive beings and to allow ourselves to consider all sorts of other systems as cognitive beings, and that's, in a sense, deeply deflationary. I'm not sure whether that's what you really mean by it, being a mind bender, but that's what I see as the primary challenge of it.

[1:08:17] Mariana: I'm not sure why I said mind-bender, but now I like that I said it. And I think it's a good thing. I don't know how to evaluate the behavior of humans; their intentions I do not know. I find it a bit puzzling. It is easier for me to see from Mike's perspective — the technological approach to mind everywhere. There are ultimate preferences. It's easier for me to see it in animals. Transposing these classical conditioning techniques — behavioral changes seen in animals — into mathematics, or even into electrons, as if we were to classify their behavioral profile, their preferences, or their skills: this is what I've been calling a mind-bender. It seems very conceptual and it has absolutely no use, but I feel that through TAME and the multi-scale competence architecture, we have at least a way to transpose these conceptual ideas about agency into other kinds of spaces and test for them. What I mean by this is: if you have an observation that an object X always behaves the same way, is this uninteresting, or is this an extreme preference — a preference X would not let go of? No matter what you do, X will do that. I think this raises a lot of good questions. Maybe there is something that X likes more, and we can test for that. Then suddenly X has two behaviors, and you don't have the same paradigm you had before. I'm not sure.

[1:10:40] Chris Fields: I think there's another way to look at that, which is perhaps equally interesting: if you look at X and X only has one behavior, then perhaps that mostly reflects your capabilities to look. X may be doing all kinds of things that you can't see, because you're choosing to observe it only in some particular way. And it may be that the way you observe it is what leads you to call it X as opposed to something else. One's method of observation is inevitably what one uses to cut up the world into systems with boundaries, or Markov blankets, or however we want to describe them. And that automatically restricts the measurements that we can make, as well as restricting the aspects of the system's behavior that we can regard as behavior, because behavior gets encoded on the boundary that we have identified.

[1:12:10] Mariana: I completely agree. This poses another question. Suppose you got a new set of goggles — scientific goggles — and you have a new framework, and you see that X always chooses something in this space. Now we find out that it has a different behavior in some other space, but it behaves quite similarly: it has one preference in that space. You keep doing this. My question is, how can we state this preference? How can we pose things in terms other than perturbation or stress, in trying to figure out what they want, what would be beneficial for them, instead of trying to use them? Because that approach has brought us so far. But I wonder, if we are to shake all assumptions and take these frameworks seriously, even as an experimental means, it feels interesting to think: what would they want? Because then, perhaps, if we saw a different kind of behavior, we would have more ground truth to judge such behavior. I agree with everything you said. I don't think this is a simple problem. I think it's a very interesting one.

[1:13:53] Carl: Could I ask — you're using words like preference and want, which I like, because I talk about prior preferences a lot, but it does anthropomorphize things. It sounds to me as if you're talking about things that conform to a variational principle of least action. They have a most likely path, and that path is the path they prefer. How much do they deviate from that path of least action? It makes me wonder whether the answers you would supply, which could be articulated in terms of preferred paths and wanting to get to the end point of the path of least action, are scale-dependent. I want to ask Chris: let's just take two extreme scales, the electron and the moon, both of which have lawful behaviour. Do either of them want, or have a preferred course of action or a preferred behaviour?
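Carl's appeal to a variational principle of least action can be demonstrated numerically. A small sketch (not tied to the moon or the electron specifically): for a free particle with fixed endpoints, the constant-velocity path has lower discretised action than any perturbed path sharing those endpoints — the "preferred path" in exactly the sense being discussed.

```python
import math

def action(path, dt=0.01, m=1.0):
    """Discretised action sum(0.5 * m * v^2 * dt) for a free particle."""
    return sum(0.5 * m * ((x1 - x0) / dt) ** 2 * dt
               for x0, x1 in zip(path, path[1:]))

# Fixed endpoints x(0) = 0, x(1) = 1, on 101 time points.
steps = 100
t = [i / steps for i in range(steps + 1)]
straight = list(t)  # constant-velocity path

# Perturbed paths sharing the endpoints (sin vanishes at t = 0 and t = 1).
perturbed = [[ti + a * math.sin(math.pi * ti) for ti in t]
             for a in (0.05, 0.1, 0.2)]

print(action(straight), [action(p) for p in perturbed])
```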

[1:15:09] Chris Fields: We certainly model them as if they do in terms of least action principles.

[1:15:19] Carl: I'm trying to get at the mind-bending thing. So why doesn't that apply to you and me then?

[1:15:28] Chris Fields: I'm not sure it doesn't apply to you and me.

[1:15:31] Carl: I'm trying to get at the mind-bending issue. Because you could certainly talk about the moon having a preferred trajectory or path, and likewise an electron at the quantum level. But what is special about the self-organization that Mike commits his life to trying to understand, that this platonic series is trying to address? Why is that principle of least action not sufficient, or is it?

[1:16:07] Michael Levin: I don't know about the moon, but we've been looking at it in very minimal computational systems, like the sorting algorithms that we've all talked about. What we've observed, for anybody who hasn't seen it, is basically that these are simple deterministic algorithms. Yes, they sort like they're supposed to, but it turns out they also do something else: if you look at it from a different perspective, which hadn't been done in all the years that people have been studying these things, you see something quite different. You see them doing some other stuff that is very surprising. I don't know if the right way to look at that stuff is as also goals that they're attempting to achieve via some least-action thing, or whether that whole framework is limiting. Maybe that's the part that's exploration and play — these other things that are not actually goal-directed behavior. Maybe that's the more interesting volitional, playful aspect, where the sorting is what you forced it to do, but in between, in the spaces between that, there's some stuff that the system likes to do. I think we can start to look at some of that in these very minimal models. I don't know if that's more minimal than the moon — I'm not sure what you think — but it seems very minimal because it's deterministic. We have control over all of it. Yet there is some stuff happening in the spaces between the thing you actually wanted it to do. So this is still something where I'm playing with different frameworks: maybe it's another set of objectives it's optimizing, or maybe that's the exploration part, the intrinsic motivation, which then scaled up is what we see in us and so on. Maybe that's the simplest version of what it actually looks like when you push it all the way to the left of the spectrum. Santos just wrote in the chat that the ingression of the abstract into the physical maybe also follows a principle of least action. I think that's quite reasonable.
I've been thinking about those ways too. The really simplistic way I started thinking about that is that these things in that latent space are under positive pressure. Basically, you don't have to work that hard to get them to ingress. If you make an interface, there they are. They're in some strange sense pushing out. There's a baseline pressure in which they are going to get into this world. There may be a more sophisticated least action kind of thing that could be used to describe what shows up relative to what interface you've made.
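The sorting-algorithm observation Mike describes can be gestured at with a toy: run a plain deterministic sort but record the intermediate positions of each element — the "stuff happening in the spaces between" the task. This is a simplified illustration, not the actual experiments Mike refers to; values must be distinct here, since trajectories are tracked by value.

```python
def bubble_sort_with_trajectories(values):
    """Bubble sort that also records each value's position after every pass,
    i.e. the intermediate behaviour that the final sorted output hides."""
    arr = list(values)
    trajectories = {v: [arr.index(v)] for v in arr}
    for _ in range(len(arr) - 1):
        for i in range(len(arr) - 1):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
        for v in arr:
            trajectories[v].append(arr.index(v))
    return arr, trajectories

final, paths = bubble_sort_with_trajectories([5, 1, 4, 2, 3])
print(final)        # the task we "forced it to do"
for v, path in paths.items():
    print(v, path)  # where each value wandered in between
```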

[1:19:19] Mariana: I've been looking into it, because it's hard to discern whether what we observe as a principle of least action is because of the configuration of the space, or because it's the best transportation means, the most effective path. You have this notion of path, transportation means, and spatial constraints. This in itself gives us a lot of predictive power in other areas. For us, for example — tail and not limb, or limb and not tail — there's no minimization of surprise there, quite the contrary. In these cases least action would serve to model the path of whatever it is that ingresses, according to how we model the space, but ultimately it would not tell us the content. I'm very interested in the content. What is being ingressed? Not in terms of the geometric path, perhaps not even in terms of the conflicting dynamics that may happen at ingression — because both X and Y want to ingress, so who wins? — but more the content. There are, of course, very good mathematical constructions to do this, and you can kind of glue them all together, and then you can look inside, and you can see. But I keep wondering: what would we do with it, with the content, for example?

[1:21:20] Carl: Sorry, say it again, what would we do with the what?

[1:21:23] Mariana: Suppose that we can not only model the ingression — we have the path, everything — but we also have the content.

[1:21:34] Carl: I think we then get back to the utility of having a simulation. When you've got the simulation, you can do intervention experiments, test hypotheses, ask questions about what would happen if I did that, how it would respond in this context, what the emergent behaviors are, and deploy them in the way that we've been talking about. Not that I'm a physicist, but from a physicist's perspective, numerical analyses are the thing that gets at the content, and they reflect the application of a principle, where a principle is read, from the perspective of the physicist, as a method. So you get the right principles — the right maths — that equip you with the right methods and tools. Then you apply that to ask questions about the different content under those principles. For example, why a tail and not a limb? That's just an expression of a path of least action expressed at an evolutionary scale, where you now read natural selection as a path of least action, where the Lagrangian is adaptive fitness. In fact, adaptive fitness can, in this instance, be equated with the marginal likelihood of finding this kind of phenotype in this eco-niche, and the log marginal likelihood is just the negative self-information.

[1:23:08] Mariana: I agree. I agree with you, and it makes sense. If it wasn't for this research project, I would not be debating this. But now I find it very useful to debate, because of this question of why this phenotype and not another. I feel that our models are very good at predicting a lot of things, and we can do a lot of things with them. It is true. But why is it not a preference of some pattern that we don't see, but with whom we could obviously interact? Because if we see it this way, then we have the possibility to interact. For example, on this question of content: if we have the content, what we could do is build a code book, and we can communicate. Perhaps this is absolutely bonkers, but in two years, maybe we have a new telecommunication system — because this was nonsense, but something nice came out of it. It feels really good to be able to speak about this. What do you think of a code book?

[1:24:15] Michael Levin: I would add there's the third-person perspective, which is what Carl just nicely described, where you look at this thing and you see I'm going to model it. I see this particular pattern coming through. I'm observing it in a third-person perspective. But there's also the other end of it: what does the world look like from the perspective of the thing that's coming through? Close to that is the issue that we are also potentially fundamentally patterns that are now manifesting through whatever interface, biological and so on. When we talk about communication, I think there's a whole research direction here, which is different and harder than typical third-person science, which is to look at the agency of the patterns themselves and of us as these patterns and the communication that takes place, conventionally in third person through the physical world, but also possibly directly lateral. This is what some mathematicians have said to me and maybe what Darwin meant when he said mathematicians have a different sense, this idea that when they perceive mathematical truth, it doesn't dip into the physical world and come back. They're not doing experiments, they're not making observations; they have some kind of other interaction. You could imagine a direct interaction between patterns from that space. This is very weird stuff, but maybe that also is an aspect of the communication when we start looking at it from the perspective of the thing that's coming through, not just from us evaluating it as passive data, like here's the thing I see.

[1:26:15] Carl: It wouldn't be weird from an evolutionary psychology perspective. You're talking about cultural niche construction, a new-wave aspect of evolution. If we have co-constructed a culture of good maths — of providing the simplest or least complex explanation for everything — and that can be inherited from generation to generation, then that is a perfectly consistent process with a path of least action, where action in this instance has an elevated meaning in the context of evolutionary psychology, or of evo-devo constructs taken into the cultural domain. Which brings us back to the convergence and the discourse and the negotiated corpus of calculus that we commonly accept as the right kind of maths.

[1:27:19] Mariana: Thanks for bringing that up. It is true. I'm no authority, but there is a lot of love and devotion, and almost even sacrifice that goes into doing maths; perhaps it is a bit different. I would also say that one sets up experiments. And then it's a bit of pen and paper. There is a lot of "what if", and then you state the problem in a different way and you use a different example. I don't know if it's the same for everyone, but it feels there is some proportionality between this love and this devotion and, at the same time, this plain curiosity — that it's never really about you. It comes to you. It feels the more you put in, in terms of being alone and thinking about the problem, the more it comes.

[1:28:53] Michael Levin: I'm looking forward to adding that — it hasn't happened yet, it just hasn't been scheduled — but there are a couple of conversations coming soon for this symposium with musicians. I've had a lot of outreach from people who are not scientists or mathematicians; they make music and they've been watching this stuff. They said this is what it's like when a particular song makes itself known to you, when it comes through. Their creative discovery process I find very interesting to compare and contrast with what we all do. I think that'll be a nice addition. I'll try to get hold of some more artists and people in that space.

[1:29:43] Mariana: I've been discussing this informally with some of them, asking about exactly this ingression: I let them talk, and I try to map whatever it is that they're saying onto this ingression mechanism. Sometimes I feel perhaps you can model it with priors. But there are things that are novel — like in morphogenesis — things they've never heard before. Has it happened? Have you been recording it?

[1:30:17] Michael Levin: It hasn't happened yet. It will be recorded. I'm trying to set it up. I thought our schedules were weird, but theirs are even more challenging, apparently. We'll make it happen.

[1:30:42] Chris Fields: If I can make one more comment from a physics perspective. We move back and forth in physics from a theoretical stance in which the world has been divided up in some particular way into systems that have certain internal processes, and hence interaction capabilities — which we can describe as observational capabilities or action capabilities, or treat as interactions. We can move from a perspective that cuts the world up in one way to a perspective that cuts the world up in some completely different way. Moving between those perspectives changes all of the interactions that one describes between the components we've made by the cutting-up process. But it doesn't change the assumed behavior of the whole system. So whatever model is constructed has to be consistent with the underlying principle that if you erase the boundaries — take your pencil and erase all the lines you've drawn — nothing has changed. The point I'm trying to get to is this: to what extent does our perspective, in which we each view ourselves as bounded entities, bias our thinking about how patterns interact? Because if we take our erasers and erase these boundaries, we haven't changed the patterns. We haven't changed the overall pattern. We've just changed how we describe different pieces of it interacting, by creating the pieces. If we don't have those pieces, that interaction, of course, doesn't exist, but that doesn't change what's going on in the background. When one thinks that way, the question of what it means to think about patterns interacting takes on at least a different flavor, because we describe things in radically different ways when we think of one big box with a bunch of stuff happening inside it versus the big box cut up into a lot of little boxes — little entities that have to exchange information across their boundaries.

[1:34:12] Carl: Chris, is that not easily resolved just by picking a particular scale? You're talking about little boxes in big boxes — little Markov blankets within a big Markov blanket. The only difference is that there always has to be a carving of an independency structure, to have a cause-effect structure, to describe anything. So the difference between the big box and the little box is just the scale that you pick in order to articulate and model your system, or explain your system. Interestingly, those little boxes are all trying to do exactly what you are doing: trying to understand the boxology of the system at hand. But my main point is, are you not just stating that there is a choice here — that you have to pick the scale at which you want to characterise or model or understand or explain your system?

[1:35:16] Chris Fields: I don't think so. The notion of statistical independence that allows you to talk about things causally is itself an approximation that we get by assuming that the overall dynamics of the system is naturally multipartite. The quantum language makes it easy to say: to the extent that the whole state is fairly entangled, the boundaries that we draw are completely artificial. They can be regarded as boundaries only for a very short time, until regarding them as boundaries no longer works. Whereas in a classical system, that's not true. In a classical system, all the interactions are causal, and any boundaries you draw really are boundaries. From a quantum perspective, that's no longer the case. The boundaries staying boundaries is pragmatic. And that's independent of scale.

[1:37:04] Carl: But may I briefly pursue that? When I use the notion of scale, I include temporal scale. What I hear you saying is that certain carvings exist at specific scales. At the quantum scale, the carvings are only valid as an approximation, possibly over very short periods of time. But as you increase the scale, the duration over which that boundary is in play, in a classical or statistical sense, extends. Again, one could argue that if you include separation of temporal scales in that scale invariance, then it's a question of picking the scale you want to work at. And in a sense, I was asking you to think about the two ends of the scales we could consider — the moon and the electron — and why they are not relevant to biotic self-organization, which is characterized by curious behavior and exploration and playfulness and breaking of detailed balance, because you don't find that either at the quantum level or at the level of heavenly bodies, for example. Just to summarize: yes, it is certainly true that statistical independences dissipate and fluctuate, and indeed a pullback attractor is itself a random variable. But over a certain time scale, they are in play, and that time scale increases as you increase the scale of the system.

[1:38:54] Chris Fields: From a global quantum theoretic viewpoint, as you increase the temporal scale, the approximation of things being statistically independent always becomes worse. The only question is how much, how quickly it becomes worse, which is dependent on energy density. So I do think that the reason we see the world as classical at large scales is because we don't know how to look at it. We don't know the right way to look at it.

[1:39:59] Mariana: You can have all these agents in a time-independent manner and also a background-independent manner. So it's not only that you would choose the scale or the metric; you can literally let the cohomology tell you what the metric is, or the homology class or character. So you can definitely do all that with no time and no particular conception of space besides the one you're observing. Isn't this what you were saying, Chris?

[1:40:35] Chris Fields: I think that we'll eventually be forced to consider space as something that we impose as observers. I'd love to be able to make that more precise.
