
Platonic Space discussion 3

This 1h44m roundtable on the Platonic Space Hypothesis explores Platonic forms in biology, xenobots, symbiosis, competency spaces, play and plasticity, Markov blankets, thermodynamics, and will-to-live dynamics.

Show Notes

This is a ~1 hour 44 minute discussion among contributors to the Platonic Space Hypothesis (https://thoughtforms.life/symposium-on-the-platonic-space/).

CHAPTERS:

(00:00) Platonic forms and interaction

(07:01) Xenobots and evolutionary cost

(11:23) Symbiosis, rectifiers, embeddings

(18:06) Free lunches in biology

(27:59) Topology of competency space

(39:53) Exploration, play, affordances

(48:01) Markov blankets and evolution

(57:51) Relational life and observers

(01:04:10) Defining play and plasticity

(01:20:35) Thermodynamics and domesticated play

(01:29:32) Will to live dynamics

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


[00:00] Michael Levin: Welcome, everybody. We're open for another discussion of the Platonic space. If anybody has any issues or questions for each other, please, now's the time.

[00:15] Unknown: I've got questions or comments. I don't want to monopolize things.

[00:19] Michael Levin: Go for it.

[00:20] Unknown: When I prepared my talk, I had not read what you wrote, Mike, because I wanted to just comment: what's the biology, unbiased, what is it telling me? I've since gone back and read these things. My impression with regard to the Plato concept is that unless there's a proposal for where these forms are stored and what the mechanisms are for pointing and ingressing, how you consult these forms, the whole project is indistinguishable from mathematics, physics, and biology all having patterns and rules. Why there are any rules and how they get consulted, we don't know. I was wondering whether the intent of this whole session, the series of lectures, was to see if somebody could come up with, what is the storage site? What are mechanisms for pointing and ingressing? My impression from at least a number of the talks, certainly mine and Gordana's, is that you don't really need to have forms outside somewhere that are being consulted. You can have them just as rules and constraints within the biological organism itself. Or, as my quantum mechanics professor used to joke, it took you a little while to do the homework, but every electron and proton in the whole universe is doing it like that all the time. Has there been any insight into whether there is consulting or whether it really just comes back to there are laws somewhere and we just need to find them out?

[02:09] Michael Levin: I would say there's a couple of things. First of all, I didn't think that these other talks would address that question in particular, because fundamentally I think there are many people who don't agree with my framing of it in the first place. So that's step one: to even say whether this whole thing holds. I'm arguing for a strong interactionist model. Before you can worry about the interaction, you have to think that there is interaction, and some people don't. However, what I would focus on, and what our research program actually focuses on, is trying to understand what it is that you actually get during this interaction. For example, are they just constraints? Are they enablements? Lots of people say it's not just constraints, it's also enablements. But I think enablements can be taken much more seriously. That is, not just that by closing off some stuff over here, I forced you into this other set of things that you're going to do. In the sense of free lunches, or heavily discounted lunches, you get more than you put in. I'm really interested in this idea of what you get out of such a thing. By putting in some amount of effort to make an interface, what you actually get through it is in some quantifiable way more than you put in. In other words, what you get are not simply constraints on things that you can't do, nor being shuttled into other modes; you actually get policies, maybe information — static patterns, maybe actual compute in the form of virtual machines. You get something that you didn't pay for, in an important sense, because I think the current way of calculating what you paid for only takes into account this side of the interaction. My strong supposition, my hypothesis, is that you get way more than you paid for. I think the things we call biology tend to be systems that exploit that; they are very good at exploiting these things, saving effort on things that they did not need to evolve or find or search for. In biology it's very hard to quantify that, because it's always complex. There are always mechanisms you don't know, and it's really hard to prove any of that. But in simpler systems and in simple computational systems we may be able to. That's one of the things we're doing: trying to quantify how much you put in and what you get out in these simple systems.

[04:55] Unknown: Since you put it that way, I'll mention something that I didn't put in my talk, but it is in the causality paper from a couple of years ago, which is Richard Levins's idea from across town from you. He was at Harvard School of Public Health. The idea, which goes all the way back to Waddington's theoretical biology books in the '70s — there are a couple of nice papers back there — is that essentially the way you get some structure is by crystallizing out of an amorphous mass. In other words, what happens is biology starts off, it's not that you build new things, it's just that you start off doing many things badly. By adding constraints, you exclude many of those things. Now the remaining ones are done well. You can see this, for example, in neurobiology, where in lower animals there'll be a brain nucleus that does two or three things connecting to two or three places. In higher organisms, it splits into two, each one of which takes on one of those tasks. Or in the genome: in E. coli, the whole genome's accessible. In eukaryotes, you suppress everything with histones, and now you selectively de-repress stuff. What you've done then is essentially increase the signal-to-noise ratio of stuff you already have. Enzymes are another classic. All these chemical reactions can go on without the enzyme, but 12 other side reactions also happen. If you have an enzyme, you're essentially preventing some of those by the nature of the active site. Only a couple of them happen, and they happen much faster and much better. This signal-to-noise ratio, which is what you're talking about — the ratio of inputs to output — is probably exactly the thing to be looking at.

[07:01] Michael Levin: I think that's certainly one subset of those phenomena. In biology, even though it's much harder to prove anything in this scenario and hard to quantify these things, now, with some of the synthetic models that we and others have made, xenobots, anthrobots, there's an opportunity and a challenge for biologists to be able to say: when was the computational cost paid to design these things? In other words, we know when the frog and the human design was paid for: it was in the millions of years of selection for specific features. But when you create something that's never been here before, and it has certain competencies, you want to know where those came from and when we paid for them. I don't think the usual answer is good enough. When I ask people this question, they generally say, well, it has an evolutionary history. It just learned to do that when it was being selected for other stuff. That's OK, except A, it provides zero explanatory value. It just means that whatever other weird thing pops up, you'll just chalk it up to the history. And B, it rips up a large part of what I thought evolutionary theory was supposed to do, which is provide a tight specificity between the history of environments and the properties that you got out the other end. You're supposed to be able to say, this thing looks and acts this way because it has a history of selection going back, and everything else died out. So if you're willing to rip that up and just say, well, whatever your history was, you can end up with pretty much anything, then I think we're supposed to do better than that. I think we're supposed to have some kind of theory that says more than just developmental plasticity: a theory able to say why it is that we selected for all of these things, and also why, by the way, in a novel configuration, all of this other new stuff works that's never been evaluated before. Hard to quantify, but at least we can start looking for a theory that does better than "it's emergent, it just showed up."

[09:10] Unknown: I thought that was the Evo-Devo program, or maybe it's because I only talk to people like Günter Wagner who think that environment is not the whole story. There is a set of rules somewhere. There's some other set of constraints on how you build an organism that functions.

[09:35] Michael Levin: The constraints. I think this is more than constraints. Andreas Wagner gets really close to this. He doesn't quite come out saying it, but he has this book, "Arrival of the Fittest," which I think asks exactly this question: okay, you can sort of select out the bad stuff, that's great. Where does the good stuff come from? Specifically, constraints are one thing. But when you get significant competencies out of it, maybe it's more than constraints. Maybe by building certain interfaces you're tapping into something that provides a bigger return on investment. I can think of a number of examples of that. I think learning to predict it, facilitate it when we want it, suppress it when we don't want it, because there are scenarios where that happens. Everybody knows you get complexity like that, you get unpredictability, maybe you get perverse instantiation in an ALife context, but it's not just that. It's not just complexity and unpredictability. It's competencies that would be recognizable to any behavior scientist. Somewhere along the spectrum of maybe low to higher, that requires explanation, because if you don't have a long history of selection for it and you don't have direct engineering or design for it, we're looking at something additional to that. There are knowledge gaps around where that stuff comes from.

[11:23] Unknown: Not that I have the answer. I'm simply asking. Do we really need the environment for that? Meaning a symbiotic relationship between two species can explain creating noise in one species, which the other species will use in a different way. So for species A it's noise, but for species B that's food. By increasing that, you're actually increasing the other. The fitness function is not only on group A; it's actually on group B that influences group A. So you're creating a symbiotic connection that really complicates the way to describe what is good and what is not good.

[12:16] Unknown: The question is, where is the directionality coming from? You're saying you need a rectifier. Here's somebody who's making noise. Here's somebody else who can use that noise and put direction on it, which is exactly a Carnot heat engine. You've got random motion, and you now put it in a piston and you can direct it into a higher level of work, which is a macroscopic thing. It seems to me that what biology needs to have done then is have invented some little module that is a rectifier and could take noise of whatever kinds and make something useful out of it. I had wondered whether a couple of theorems that showed up in these talks might be clues as to how you design such a thing. What would it look like? How would nature have designed it? The two things that struck me were, one, in that Platonic representation paper by Huh, Cheung, Wang, and Isola. You have this theorem about if you have vector embeddings, you want a vector embedding in which the similarity between the observation and the constructs you're trying to make is the same as pointwise mutual information. I don't understand that, but I get the flavor of it. It seems like the sort of computation or constraint or requirement that might tell you what is the kind of thing you have to build that is guaranteed to make useful stuff. Then selection can go figure out which useful stuff. So it's like Lego blocks — you're always gonna make something 'cause the way they go together. The other one was this thing that came up, the Markov blanket theorem, which sounds like it goes back to Ross Ashby's thing about if you're gonna have a regulator, it has to have a model of what's being regulated. I was familiar with Ashby, but I see that there are debates about whether he ever actually proved that. Has there subsequently been a proof of that? That would be another kind of thing that you could imagine being a requirement for making biological modules that are very likely to make useful stuff, and all you have to do is recombine them or whatever.

[15:16] Brian Cheung: The paper we had in the appendix discussed this notion that the kernel itself, the relationships between embeddings, maps to pointwise mutual information of two events in probability space. Pointwise mutual information is a log ratio of probabilities, and the idea is that the kernels are becoming the reflection of the mutual information shared between those two embedded objects. This assumes bijectivity and other things that are not necessarily practical, so it's not an ideal proof. That was the kind of thing we were getting at: the kernel meaning that the inner embeddings and their relationships are converging to something equivalent to pointwise mutual information, given some mathematical assumptions.

[16:15] Unknown: That's selecting out a special kind of object.

[16:20] Brian Cheung: The embedding is not necessarily all possible objects. It's whatever the model chose to compress its representation towards.

[16:37] Unknown: Is there an idiot's version? Say, a biophysicist's version of the mathematics of that, one could wade through and maybe understand in detail how it all plays out? In other words, for those of us who don't think about embeddings all day.

[17:03] Brian Cheung: Unfortunately, I don't know about biophysics. I'm on the other side; I think about embeddings. If there are notions of co-occurrence and the probability of co-occurrence, that's what pointwise mutual information is reflecting. That is what a kernel is. It's saying that these embeddings had to be embedded close together because they co-occur frequently. So the notion in language is that words' meaning derives from the company they keep. Objects' meanings derive from the things they co-occur with.
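For the concrete version of the kernel-to-PMI claim, there is a classical analogue in word embeddings: Levy and Goldberg showed that skip-gram training implicitly factorizes a shifted PMI matrix. Below is a minimal toy sketch of that idea; the co-occurrence counts and all names are invented for illustration, and this is not the construction from the paper under discussion.

```python
# Toy sketch: embeddings whose inner products recover pointwise mutual
# information (invented numbers; not the construction from the paper).
import numpy as np

# Invented co-occurrence counts for four "words".
counts = np.array([[20.0, 8.0, 1.0, 1.0],
                   [8.0, 15.0, 2.0, 1.0],
                   [1.0, 2.0, 12.0, 9.0],
                   [1.0, 1.0, 9.0, 10.0]])

p_xy = counts / counts.sum()              # joint probability p(x, y)
p_x = p_xy.sum(axis=1, keepdims=True)     # marginal p(x)
p_y = p_xy.sum(axis=0, keepdims=True)     # marginal p(y)

# PMI(x, y) = log[ p(x, y) / (p(x) p(y)) ]: high for frequent co-occurrence.
pmi = np.log(p_xy / (p_x * p_y))

# Factorize PMI so that inner products of embeddings reproduce it exactly.
U, S, Vt = np.linalg.svd(pmi)
word_emb = U * np.sqrt(S)                 # "word" vectors
ctx_emb = Vt.T * np.sqrt(S)               # "context" vectors

# <word_i, ctx_j> == PMI(i, j): things that co-occur embed close together.
assert np.allclose(word_emb @ ctx_emb.T, pmi)
```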

[17:40] Unknown: I'll incubate on that some more. Thank you. Nobody else wants to jump in on it? I can keep throwing in comments. You jump in. I'm wondering.

[18:06] Unknown: Oh.

[18:07] Unknown: Yeah, go ahead. My bad.

[18:10] Unknown: I'm wondering whether we have a better sense of which mathematical objects have free lunches. Attractors might have them. If we're doing a difference, the sorting algorithms paper has a transitive global objective, and we can see the algorithm getting to some sub-objective. I'm wondering if we have a set of objects that we know are potent.

[18:40] Michael Levin: We've been playing with this, taking different ones and trying to see what they offer. There will be some work on this coming soon. The difficulty with all of that is that it's a two-way IQ test. As always, when you're trying to gauge what that is, it's only as good as we know how to notice it. The clustering thing — the only reason we found it is because I thought to look for it, but there's probably 1,000 other things we haven't thought to look for. For the biological side, for cells, tissues, gene regulatory networks, we are still very much looking for suites of tools to identify novel competencies that we haven't found yet. And ideally, there's no such thing as unbiased, but ideally as differently biased as possible from what humans have been looking for all these years. The same thing applies here. I would like to deploy the exact same tools on all of this so that we could try to find as many competencies as we can in different spaces. But I think we are limited primarily by our imagination.

[19:50] Unknown: I suppose my other question is about free lunches; I think it's really clear to me what a free lunch is at a very low level. In an attractor, it feels like a free lunch. I'm within the basin, now I know where I'm going. But as a human, when I think about a free lunch, I just think of doing something that costs me less to get more. It's not really a free lunch, but it's a place where we're not dealing with a zero-sum game. What I'm wondering is, what's the difference there?

[20:28] Michael Levin: I don't mean literally free, because you still have to build the interface, so it's not gonna be free, but some sort of heavily discounted lunch. Here's a dumb example that I've used. Let's say that in some universe, the highest fitness belongs to a triangle of a particular shape. You crank a bunch of generations and you find the first angle, and you crank a bunch more generations and you find the second angle. Now, the third one you don't have to look for, because you get this amazing free gift that once you know two angles, you know the third. In some sense, evolution just saved 1/3 of its time, because if you didn't have that, you would have to go find the third angle. That kind of thing is a constraint, but for biology it's not so much a constraint as an enabling feature. It means you can go faster. There's tons of stuff like that, where you get these mathematical relationships, these facts of computation, for free. You don't have to do the whole truth table: once you have your voltage-gated ion channels, you've got your transistors, and the truth table comes naturally after that. These properties you don't have to go look for; they're handed to you. Biology is precisely the set of things that exploit those kinds of things.
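A back-of-the-envelope version of that triangle example, with everything invented for illustration, makes the bookkeeping of the "discounted lunch" explicit:

```python
# Toy bookkeeping for the triangle example: blind search pays for two angles,
# and the identity a + b + c == 180 hands over the third one for free.
import random

random.seed(0)
target = (50.0, 60.0, 70.0)   # invented "highest-fitness" triangle

def guesses_to_find(true_value, tol=0.5):
    """Blind guess-and-check; returns how many guesses were spent."""
    n = 0
    while True:
        n += 1
        if abs(random.uniform(0.0, 180.0) - true_value) < tol:
            return n

cost_a = guesses_to_find(target[0])
cost_b = guesses_to_find(target[1])
third = 180.0 - target[0] - target[1]   # zero search cost: geometry pays
print(f"paid {cost_a + cost_b} guesses for two angles; third = {third} for free")
```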

[21:51] Brian Cheung: So I want to add to this notion of the free lunch, because this interesting phenomenon that we see in the AI models is that algorithms that don't normally work are working a lot better now as a model gets to a certain level of competency. So things like evolutionary search and reinforcement learning—if you try to train your model from scratch, it'd be hopeless. But if you do it on a model that's already pre-trained, it works remarkably well. So there are papers now showing that if you do evolutionary search on models that are 7 billion parameters or more, you would think that would not work at all because it's 7 billion parameters. That's a very high dimensional space. But evolutionary search perturbations can give you performance improvements on downstream things, which raises the question: as things become more competent, things that didn't work previously seem to be working a lot more effectively now.
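A minimal sketch of the kind of Gaussian-perturbation evolutionary search being described, in the style of natural evolution strategies: the objective here is an invented stand-in for downstream task performance, and all hyperparameters are invented. The papers mentioned run a loop like this over billions of real model parameters.

```python
# Evolution-strategies-style search with Gaussian perturbations (toy sketch;
# the fitness function stands in for a pretrained model's task performance).
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    # Stand-in objective: higher is better, optimum at theta == 1.
    return -np.sum((theta - 1.0) ** 2)

theta = rng.normal(size=64)          # "pretrained" starting parameters
sigma, pop, lr = 0.1, 32, 0.05       # perturbation scale, population, step
print(f"start fitness: {fitness(theta):.1f}")

for _ in range(300):
    eps = rng.normal(size=(pop, theta.size))          # random directions
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    advantage = (scores - scores.mean()) / (scores.std() + 1e-8)
    theta += lr / (pop * sigma) * advantage @ eps     # ES gradient estimate

print(f"end fitness:   {fitness(theta):.1f}")         # substantially improved
```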

[22:47] Michael Levin: Would you mind popping some links into the chat? I haven't seen those from the computer, from the CS side of things, but I'll tell you from the biology: this is something that we've been writing about for a while, that evolution, I think, works quite differently on a competent substrate. So when you have cells that can actually solve problems on their own, it's a completely different story, because if the mapping between a genotype and phenotype is not hardwired, if it's actually an interpretation and intelligent interpretation process, then some very interesting things happen to evolutionary search: it goes much faster and it finds much more interesting things. So having that middle layer, the translation layer, which is morphogenesis, basically, having that be competent greatly potentiates evolutionary search.

[23:41] Brian Cheung: I imagine these LEGO blocks: the chance of the molecules forming a cube is very low, but LEGOs forming a cube is much higher, in the sense that random perturbations create something that is structured.

[23:53] Michael Levin: There's another effect here, which is this. Imagine: One of the things that morphogenesis is very good at is getting to the same final outcome even when things change. If you change up the circumstances, it's really good at getting to the same thing. For example, if you make a tadpole where the mouth is on the back of the head, eventually that mouth will come around to where it needs to be and you get a normal frog. We made these things called Picasso tadpoles where we scramble the facial organs: the mouth is out here, the eyes in the back. They still make normal frogs, because all this stuff moves around until you get a nice frog face and then that's it. Imagine what happens with evolution then. Most mutations are deleterious because it's much easier to screw things up than to do good things. Also, most mutations have more than one effect. You have your tadpole, you make a mutation; the mutation does two things. It moves the mouth off to the side, but it also has some other beneficial effect somewhere else. If the material was a direct mapping from genotype to phenotype, you would never see the consequences of this other mutation, because the mouth is off to the side, the thing would starve, and that's the end of that. You would have to wait until you get that same mutation without the mouth effect, and that would take a lot longer. Instead, you make the mutation, the mouth fixes itself, and you get to explore the consequences of the other side effects, because the system makes up for a lot of those things. That aspect turns a lot of deleterious mutations into neutral ones. We have a bunch of computational work on this. If you simulate that process, it becomes very hard for selection to actually see the genome. If you have a beautiful-looking tadpole, you don't know if the genome was amazing or if the structural genome wasn't so good but the developmental process fixed everything along the way. If you look at where evolution is doing most of the work, it ends up doing more work on the competency mechanisms instead of the structural stuff. If you do that, it becomes even harder to see the structural genome. You get onto a positive feedback loop where eventually you get a really unreliable medium, but it doesn't matter, because the algorithm is amazing and it fixes whatever happens. If you take that to its logical conclusion, you end up with something like a planarian. There is a whole spectrum of where these things end up in evolution: C. elegans is super hardwired, then mammals, then amphibians, then planaria. In planaria the material is incredibly junky because, for reasons we could describe, cells have different numbers of chromosomes. They're mixoploid. But they're the ones that have the most regenerative capacity, cancer suppression, immortality. They don't age. It's not because they have a beautiful genome. It's the exact opposite. It's because the material is so unreliable. All the effort went into the algorithm, to be able to say: we already know that the hardware is going to be iffy, so you're going to have to fix yourself. Other creatures can do that to different extents. I think it comes about as a consequence of the fact that the more of that you do, the harder it is to select on the genome, so all the effort goes into the competency part. That's just one of the things that happens. But having that problem-solving competency means that even though the mutations are random, the outcome actually is not like that at all. It looks quite different.
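A cartoon of that argument, as an invented toy rather than the lab's actual simulations: give development a "competency" budget for repairing phenotypic errors, and phenotypic fitness decouples from genome quality.

```python
# Toy model: a developmental "competency layer" repairs up to `competency`
# wrong loci, turning many deleterious mutations into neutral ones, so
# selection sees the repair algorithm rather than the structural genome.
import random

random.seed(1)
L = 20
TARGET = [1] * L                      # the "correct" phenotype

def develop(genome, competency):
    phenotype = list(genome)
    wrong = [i for i in range(L) if phenotype[i] != TARGET[i]]
    for i in wrong[:competency]:      # repair what the budget allows
        phenotype[i] = TARGET[i]
    return phenotype

def score(x):
    return sum(a == b for a, b in zip(x, TARGET))

for competency in (0, 5, 10):
    genomes = [[random.randint(0, 1) for _ in range(L)] for _ in range(1000)]
    pheno_fit = sum(score(develop(g, competency)) for g in genomes) / 1000
    geno_fit = sum(score(g) for g in genomes) / 1000
    # As competency rises, phenotypic fitness climbs and flattens while the
    # underlying genome quality stays the same: selection can't "see" it.
    print(f"competency={competency:2d}  phenotype={pheno_fit:.1f}  genome={geno_fit:.1f}")
```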

[27:57] Brian Cheung: Great. That's awesome. Yeah.

[27:59] Unknown: So, Mike, if I can jump in — sorry, I was a little late, I was teaching. One of the things this reminds me of is the deep pertinence of understanding the genotype–phenotype map. Some of you may know this old paper by Stadler, Wagner, and Fontana called "The Topology of the Possible," which is about exactly that. Basically what it is arguing is that what is really important to understand is this process that takes you from genotype to phenotype, and that, because of certain neutralities that you can have, that process is much more consequential for evolutionary search and so on. These questions about local search around competent states make me think that one of the key challenges we face in this platonic space view more broadly is understanding what you might call, rather than a genotype–phenotype map, a substrate–competency or substrate–platonic-form map. I think you have provided us with plenty of evidence in your own work that the structuring of that map is highly non-trivial, such that you have a lot of equifinality: you can have very different substrates pointing to very similar competencies. You can also have some kinds of substrates that are very tolerant to some perturbations in their mapping into competency space, and some that are very intolerant. I'm curious if you or anyone else has looked back at some of that old machinery and thought about it in the context of this problem: if we want to engineer in competency space, which is what matters, we really need to understand the topology of that mapping.

[30:26] Michael Levin: The people who have—those guys and some others who have spoken about that mapping—generally focus on its complexity. There's redundancy, there's pleiotropy, there's degeneracy. But I have a more radical view. I think it's more than that. I think it's intelligence, literally. I think it's problem solving competencies. Most of those descriptions are still at a lower dynamical systems level. I think a lot of what's going on is isomorphic with paradigms from behavioral science, where this is classic anticipation, habituation, Pavlovian conditioning. I think then you get more—it's like that, but on steroids, because then once you really have some competencies about navigating that space, the material itself is helping out. Every layer is doing something, and all you have to do is deform the option space for your parts to get interesting things to happen at your level.

[31:56] Unknown: Going back to Brian Cheung's multi-billion-dimensional space, or that many parameters, therefore that size space, is it mostly empty?

[32:12] Brian Cheung: I think mathematically, there's a dynamic that might be at play where most perturbations aren't harmful, but they do something meaningful in some sense: they don't destroy every capability so much as change one specific capability, without ruining the whole process. As Mike was implying, there's a lot of structure to the perturbations for some reason in this parameter space, despite the curse of dimensionality and other things saying that there shouldn't be ways of doing this with just Gaussian perturbations. It's one of those things where this is very recent work. This work came out a few months ago. I don't think there's a good understanding yet of why this is possible in models that have reached a certain level of competency when it wasn't possible before.

[33:00] Unknown: There's an interesting paper that I saw a couple of days ago around how the connections within regions in the brain compress data, or compress the useful information. They did the study on 96 participants' fMRI data. They found that most of the connections between regions were redundant; four or five percent of the connections were basically the very important ones, and the rest was redundant for the overall computation. So my thought is, when we're thinking about these spaces computationally, are we thinking about some dynamic where exploitation and exploration of that space work in lockstep with the compressibility of that data object, such that that data object makes that observer or that agent able to do more, as in gives them more variation in what they can do next? I was thinking about when Brian said that you get certain competencies that don't work in a baseline model and you have to train it until that competency becomes useful. Is that because we're getting to this point where you have to have some foundational structure or some foundational organization that's very basic, that gives you some low-level foundational brick understanding before you can build a wall on top of it? Once you have that layer, you can start. One of the things in Blaze's paper is the phase transition that happens when a group of things come together. In that example, is there enough training to then be able to take an abstraction a layer up in that map, from the base layer to the satellite view, and now in the satellite view I can do slightly more? The thing I'm working on, Observer Theory, is how to model a computational possibility space in the set of all possible computations. When you look at some of the work done on evolutionary algorithms with one-dimensional Turing machines, and on bulk orchestration, when they take properties from the whole and not the part, there is a dynamic where you get these step-ups in the very basic forms of those computational systems, where new things don't go in a linear fashion. They have some dynamic equilibrium on a linear line and then they jump up. They find some novel rule that was already there that they could select from. And that gives you free lunches straight away, because you've gone up to a different level of competence. From then on, whenever you change a rule, you're never going back down that exponential curve if you're surviving and continuing. And that's one of the ideas that I thought speaks to not only how we construct differences in categories as observers like us, but also how we balance those evolutionary strategies within this ingression model, because if we have some top-down pull, whether it's an attractor or something else, the balance is between exploration — pointing at an attractor and going for it — or exploitation — exploiting something within that attractor well. These are two different things: one is discovery of something we're already in, which costs very little because we're already there, and the other is quite high cost because we have to make many guesses about how to get to the next jump, the next exponential jump in capability.

[36:58] Brian Cheung: Reminds me of this notion of functional information, which I've just started reading about: the idea of re-posing the notion of information or complexity around the fact that once things become composable, the number of combinations that can possibly exist explodes, and that's much more functional. You can imagine that if you create a binary system, suddenly you can create permutations that outnumber the atoms in the universe. This composability and the dramatic growth of permutation space create a lot more functionality in that space as well.
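The arithmetic behind that permutation explosion is easy to check, taking the usual rough estimate of ~10^80 atoms in the observable universe:

```python
# 2**n distinct strings from n composable binary parts overtakes the ~10**80
# atoms commonly estimated for the observable universe before n reaches 300.
n = 300
print(2 ** n)             # ~2.04e90
print(2 ** n > 10 ** 80)  # True
```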

[37:33] Unknown: This speaks to information. There's Shannon information, Fisher information, Kolmogorov information, etc. But the thing that is missing from Kolmogorov information, which I think is really important, is this notion of compositionality or scale. The experiments that I showed give one a sense of how noise or random bits can get turned into algorithmic information. In a way, I think of life as an engine for turning random encounters into algorithmic information that gets baked into how to make stuff, how to do stuff in the future. But also the fact that it composes hierarchically — you get a composition, and then those things compose, and those things compose, and so on — gives all of that a multi-scale and compositional quality that isn't captured by the normal Kolmogorov sense of things. So I feel like there's some pretty basic theory work to do there to understand scale in information as well that would give us a much better handle on the information-theoretic properties of life.

[38:55] Unknown: There's the epiplexity paper, the things in that paper about structured information, an AI-inspired paper. Do you think that measure can be adapted for this function? That was one of the most interesting ones, because you have a problem with Kolmogorov information on infinite systems, so those measures are not hugely useful in the way people apply them.

[39:22] Unknown: I think epiplexity is one route. There are also things one can do with conditional Kolmogorov complexity, or conditional Kolmogorov information, that Eric Elmoznino has done some playing with. There are definitely some promising directions. I'm not sure any of them has a complete theory as yet.
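For something runnable, one standard, crude stand-in for conditional Kolmogorov information, offered here as an editorial sketch in the spirit of compression-based approximations rather than a method proposed by the speakers, is to ask how much a real compressor's output grows when x is appended to y:

```python
# Compression proxy: approximate K(x | y) by C(y + x) - C(y), where C is the
# compressed length under a real compressor (here, zlib at max effort).
import zlib

def c(s: bytes) -> int:
    return len(zlib.compress(s, 9))

def cond_info(x: bytes, y: bytes) -> int:
    return c(y + x) - c(y)

pattern = b"abcabcabc" * 50
noise = bytes(range(256)) * 2

print(cond_info(pattern, pattern))  # small: x is predictable given y
print(cond_info(noise, pattern))    # larger: y doesn't help compress x
```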

[39:53] Michael Levin: I like the inclusion of exploration here, because one thing that has been completely missing from our efforts to recognize diverse intelligence is that, for practical reasons, they've been entirely focused on goal-directed competencies, which is only one half of the equation, because then there's exploration and play. Part of cognition is non-goal-directed, just messing around to see what happens. We know what creative play looks like in mammals and birds. We're terrible at detecting it in unconventional embodiments. And so I've spent a lot of time recently thinking about what that would look like in cells and tissues and molecular networks. If we step away from the idea that it has to be this way because it serves some very practical purpose, or it's going to this goal, or the whole thing evolved because it did something important, what does creative play look like in some of these systems? And how would we know? How do you know if an unconventional system is playing?

[41:13] Unknown: Can I chip in? I've been listening to the discussion and, on the one hand, there's this language around competencies, which is very agent-focused in terms of skills or knowledge; on the other hand, we've got the notion of interfaces and functional information. It comes down to this idea of where you put your focus: on the agent, the environment, or the interaction. I was thinking about affordances, those Gibsonian possibilities for action. That puts your attention very much on the relational properties. So, if you look at something and say it's highly competent, maybe it's found its affordances in a given context, in its environment. In that relation, you could look at it as an observer and say: I draw a distinction around that entity, and I'll say it is competent because I observe its capacity to adopt some affordances which I recognize, and I say it's good at doing these things. Now, if I put it in another context, change its environment (and you've been doing that in your experiments), you suddenly find this entity, which may not have had competencies in one environment, suddenly picks up these things in another. There's also talk about structured spaces: the idea that evolution or competency might increase in a more structured environment which has richer affordances, i.e., when you interface with an affordance, you get more out of it because it's already semi-structured and already capable. Relating that to play: maybe there are different ways of acting. If you're in a given environment — say, an evolutionary view where we look at something in a relatively consistent environment — the affordances in the organism-environment relation may get locked in a bit. Maybe you go from a play mode into a more restricted mode: it's working well; you start to reorganize your internal processes and are not in exploratory or play mode. You're happy here; it's all working well. If you place or perturb it into another environment, you are now stripped of the affordances which were working well for you. You could perish, drop into some low mode, or go into an exploratory, playful mode where you explore the affordances in this new relational context to try and pick up those things. And I think the richer that environment is with latent — I'm going to make up the term — active affordances, affordances which may already be there (someone mentioned an attractor), something you can attach to which gives you a lot for free, the better off you are. So there could be a hierarchy of affordances. Depending on your view, do you look at competencies, which is agent-based? Do you look at functional information, the term for when you're thinking about the reception, the capacity to receive an affordance? Or do you look at affordances themselves? I think the language of the discussion may benefit from drawing distinctions about what perspective you're taking: the agent, the environment, or their relationship. For me, when we're observing the discussion, that's a useful thing to bear in mind: the language changes and what you look for changes. When you're talking about interfaces, it sounds relational. Competency is more agent-based, agent-focused. That was just an observation I wanted to throw in.

[45:53] Unknown: One of the things I wanted to come back to is, when we think about play: the way observers are modeled in computational possibility spaces is second-order cybernetic, so they have sensory input from the environment and they have an internal model which they update. I'm wondering if play is exploration that's more internal-model focused, where you're playing within a bounded area of the space you're in, and that is therefore, with low risk, letting you update your internal model to then expand the accessible space you've got in the future, versus actual exploration, where you take all of those learnings from play and training to get those skills and affordances. Real exploration is when you're moving it into the real world. So you're taking that updated internal model, that thing you've looped around through play, to then attack some problem in the real space that was riskier. You want to embed more or find more equivalences from when you were doing that play loop. Is that what functional play is doing? Is it a bounded version of the whole environment, shrunken down into your internal model, trying to take advantage of something you've come across before that's sparsely related within your internal model, a few relations practiced by doing that loop, just like practicing and learning how to cook or learning how to play an instrument? By running that loop internally, that is effectively functional play that's letting you do something in a future state where your internal model predicts that these equivalences will be valuable later for growing the size of your accessible space.

[47:57] Michael Levin: Carl, did you want to say something? You had your hand up.

[48:01] Unknown: Yes, no, I just wanted to endorse the last few points. In the context of a non-biological, more physics-based approach to self-organization, you can derive the most likely path into the future, which looks like how this system chooses to behave. Interestingly, the imperatives for the most likely paths do have this epistemic, playful aspect as well as an instrumental aspect. In my world, that's called epistemic affordance. And to my mind, that would be exactly the sort of curiosity, this play, that is quantified by the information gain that you can write down as a relative entropy or a KL divergence. The interesting thing, though, is that this is only the case, that you can only derive the epistemic affordance as the natural behaviour of certain kinds of things, when you've got exactly what you were talking about before, which is this sparse, deep hierarchical structure. So when the internal model deep inside can no longer see the interface, the actions that it's prosecuting or exchanging at its interface with the world, then you can interpret the inside as exactly a good regulator or a generative model. But crucially, one which looks as if it is planning into the future to maximise the information gain. But this can only happen when you've got this deep, sparse structure. You've got nested Markov blankets. If you're just a single-cell organism with just one blanket, you have direct access to your action upon the world. But if you've got a deep structure, a complex structure, structurally speaking, very much like a deep learning model, then you get this as an emergent property. So there's a nice connection between the notion of play and information seeking, and one could even argue reasoning: I'm going to do this, because then I will know, or it'll look like that. And this deep, sparse structure that you were talking about before in terms of the brain, for example, the brain being empty of connections. I have friends who work on connectomics using anatomy. If you think about the brain as a collection of connections, they say it's almost empty. Of all the connections you could have, there are hardly any there. And I think that speaks again to this minimum description length compression, minimizing Kolmogorov complexity, but at different scales, where the scales are defined by the hierarchical structure. I couldn't resist joining; everything you've said makes entire sense. I have to go and do a podcast now. It's not that I'm bored; it's brilliant. I will have to slide away in a few minutes.
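In standard active-inference notation, supplied here as an editorial gloss since the speaker gave no formula, the epistemic value of a policy is the expected information gain: the average KL divergence between posterior and prior beliefs about hidden states.

```latex
% Editorial gloss in assumed active-inference notation: for a policy \pi,
% hidden states s, and anticipated observations o,
\[
  \text{EpistemicValue}(\pi)
  = \mathbb{E}_{q(o \mid \pi)}
    \left[ D_{\mathrm{KL}}\!\left( q(s \mid o, \pi) \,\|\, q(s \mid \pi) \right) \right],
\]
% the expected relative entropy between what you would believe after seeing o
% and what you believe now: "I'm going to do this, because then I will know."
```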

[51:11] Michael Levin: Thanks, Carl. Does that mean we could take what you just said and apply it on an evolutionary scale and say that the many Markov blankets between the genotype and the phenotype mean that overall the whole process might exhibit competencies that aren't as blind and dumb as it's supposed to be?

[51:39] Unknown: That also occurred to me because you've got the separation of scales. I think the whole point about deep models in deep RL or deep models in multicellular organisms speaks to the fact that as you get deeper into the system, time and scale slow down or get bigger. The perfect example of that is the scale-free or scale-invariant aspects of evolution in and of itself.

[52:12] Unknown: Mike, this reminds me of something that I know Sam Kriegman has been thinking about, which is the idea that, although we often tend to boil down the evolutionarily salient part of some entity to a fixed or static point in time, what is actually seen by selection is a trajectory of the organism through its possible configuration space. Fitness is not computed at one point in organismal configuration space. It's an integral along this family of possible trajectories through organismal configuration space. Does that make sense?
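One compact way to write that intuition down, as an editorial gloss with assumed notation rather than a formula from the conversation:

```latex
% Fitness as a functional of the whole trajectory \gamma(t) through
% configuration space, rather than an evaluation at a single time t^{*}:
\[
  F[\gamma] = \int_{0}^{T} f\big(\gamma(t)\big)\,dt
  \qquad \text{rather than} \qquad
  F = f\big(\gamma(t^{*})\big),
\]
% with selection acting on the distribution of achievable trajectories.
```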

[53:16] Michael Levin: It makes sense to me, but I think that's a pretty controversial claim in the standard neo-Darwinian synthesis. The idea is supposed to be that both the foresight and the hindsight are pretty much zero. We've toyed with models where there's metadata on each allele to say, what was it before and how did that work out? You can make those things, and we're playing with some of those models. But at least the standard view, and obviously there are people who disagree with this, is supposed to be that all you have is what you're doing right now. The only thing that fitness can see is whatever you're doing right now. So I do think that's restrictive. But I think that's highly controversial.

[54:06] Unknown: Do you think that, when you think about evolution and fitness, given the type of discussion around Platonic spaces, survival and continuing genes is enough to bring evolution into this discussion, or do you think it needs to have some informational component? One thing that came out of the last talk was Tim Jackson's discussion on convergent evolution, and those convergent structures seem to maximize sense data, at least in the very basic sense of how much of a certain space or a certain type of space you can access, whether that's through sensor data or being able to fly or echolocation if you're a bat or a dolphin. Does this conception of Platonic space need to have some informational component? When I think about observers, not just animals all the way down, we think about persistence, which is that survival point, but also computational boundaries, like how much you can do, what your computational capacity is. One of the things that needs to be considered, if we're thinking about a space of attractors, is what the properties of those attractors are in relation to the properties of the agent—what are we measuring against? That is really critical here, and there is a lot of work being done, because of LLMs and AI, on different statistical measures for useful information or how things can go in phase space. Can those ideas be adapted for biological evolution? I'm not a biologist, so I don't know the answer, but I thought it would be interesting to put to the group.

[56:06] Michael Levin: I personally am very suspicious of the idea that survival and replication is the main driver. I know that's how it's supposed to be. I'm not sure that's true at all. One thing is that in order to have that in the first place, you have to already have your replicator and already have the thing that has differential fitness — the thing has to persist and defend itself. There are some very interesting dynamics, which we'll preprint in about a week, of what happens before you get replicators. Blaze has some stuff on this as well, but there are things happening before you can point to something and say, that's a thing that will have differential success. So whatever's happening before isn't driven by that. There's some underlying dynamic, which for us seems to be a positive feedback loop between learning and causal emergence. The thing ratchets itself up by learning and causal emergence. What comes before that is weird. I don't even know that we have a proper vocabulary for it yet, because it's happening in a pool of this pregnant medium that you can't really draw circles around and say, that's the thing reproducing, because the materials are all over the place. They come and go and there's not a single thing, but you can already see that these loops are pulling themselves up by their bootstraps. Eventually the causal emergence hits and suddenly you get a replicator. Now you're off to the more conventional optimization part.

[57:51] Unknown: There's a really interesting thing. First of all, I'd love to see those results. Mike, that sounds fascinating. They certainly jibe with a lot of things that I've been seeing too. And they point out something that I think is actually really important in all of this, which is that in normal biology, the coarse graining is always given. There's just this presumption that you know what the thing is that is replicating. And obviously, the Dawkins "Selfish Gene" thing was very provocative because it proposed a different coarse graining that people weren't used to: that it was the gene that was the thing. In addition to emphasizing competition, etc., it was just an alternative coarse graining. But obviously, a coarse graining is just a model. There's nothing that says one coarse graining is correct and another is not. Any given coarse graining allows you to write down equations, to look at dynamics, to ask about reproduction. And it's not trivial, because a coarse graining requires that you be able to say when something is or isn't an entity, when something is or isn't another instance of the same class of entity. Is something a transformation? Is it reproduction? Is it another of the same? And for organisms with complex life cycles, the question of whether this next stage is the same species, or is it actually one species giving rise to another, giving rise to another? These questions come up all the time. Or is a hive an instance of a thing, or only the bee? And the answer, of course, is both and all of the above. In the period before you get cell membranes, especially, you really don't have an obvious coarse graining at all. You just have all these loops and interactions that seem to be autocatalyzing each other. And then there's some point at which I think our intuition is that the thing has a model of itself. For me, autopoiesis is something that we recognize when we assert that the thing that is doing the autopoiesis actually has a self-model, and therefore is following that self-model in order to construct more of that self. But again, it takes a model to recognize a model. So all of these things are completely relational. You can't make any truth statements about them without presupposing a coarse graining. This relational view of what life is strikes me as the equivalent of relational quantum mechanics. It's something that hasn't really been well theorized and would make a lot of the paradoxes go away by just pointing out that you can't make any of these statements without positing a perspective and, of course, the coarse graining that goes along with it.

[1:01:02] Michael Levin: I think that's really critical. Josh Bongard and I have been playing with this in terms of polycomputing and this notion of different observers who see the same physical events as different computations. Some recent work, which isn't out yet either, tries to simulate this using a model of gene regulatory networks. The idea is: if evolution has a choice between scaling up the competencies of the material, the individual networks, versus leaving the material in place and instead working on adding different observers who see the exact same thing going on but are able to map a different coarse graining and a different set of interpretations onto it, the answer is that evolution prefers to be able to do both, but if it has to choose one, it'll scale the observers rather than mess with the material. Part of it is because if you start messing with the material, you screw up dependencies. If something else was dependent on it, now things downstream are going to go wrong. Whereas if you leave the material in place and simply add perspectives, then you can overload meaning onto the same thing and not mess up anybody else, and keep adding perspectives. Quantitatively, that looks like what it prefers to do.

[1:02:28] Unknown: This is a question on that: would something like that prefer more observers? Because when you pull a bunch of those observers together with the same properties, they can form a component, a small network component, where they get parallelization, computational competencies, those free lunches from making more, versus a fixed substrate where they already know it's persistent (evolution knows it's persistent), and then you go, well, that's persistent: if I can pick between changing that to make it more persistent and less bounded, or just making more of the same, this is the competency from the group of things. This all, again, to me, screams: I'm optimizing for my computational power primarily, and then I look, as a second order, at whether I can maintain persistence while optimizing that. It's a computational view of what's going on. But I wondered what your view of that is, because that's something that you see in some of the basic experiments around cellular automata, and they are far away from that. The reason I'm interested is because the dynamic is similar. I wondered if that explanation was interesting or was thought about in the context of that result.

[1:03:53] Michael Levin: I think that's very interesting. We haven't gotten to that yet. Right now, none of the observers talk to each other. After we characterize all of that, we'll do exactly what you said and let them form a network too. Katrina, did you want to say something?

[1:04:10] Brian Cheung: I just wanted to follow on that comment about the importance of the relational nature of what we're talking about, and how, Leo, you had brought up affordances in the environment. I think equally important there are the affordances of other agents in the environment. Back to that earlier example of play: Jaak Panksepp, the neuroscientist, has this widely shared model of play, which is more that it's a relational activity between organisms. Something like learning to cook actually isn't play under some definitions. Play is emotion regulation, social engagement that we do in order to create alignment between us and other agents in the world. It increases synaptic plasticity and gets us into a mentally labile state. The reason I think that's important to bring up is because when we've been talking about information and how information gets shared and where the free lunches come from, I think of that as being critical in humans: our free lunches come via human communication. I'm getting all kinds of information right now for very low cost, or at a highly discounted cost, because I'm putting my cognitive architecture in a state where I'm receptive to that. I think that could be what's accelerating our human evolution in intelligence, taking us further and further away from our genome and more into this information-sharing social space.

[1:05:32] Unknown: Can I just respond, because this might be old territory, but there's the notion of play being analogous to raising the temperature of a system to explore more of its performance space. In our context, in a stable environment, you might get locked into particular kinds of affordances. But then when the environment changes, or we go into a room with new people, we have to explore how to build bridges, how to couple with that environment including other agents, and play might be the notion of raising the temperature a little bit to explore the space of potential couplings or affordances and where they might lead. I think analogies have been made in the other direction too: in statistics, people have borrowed evolutionary algorithms to search complex, multi-dimensional, rugged energy landscapes. But equally, transporting some of the concepts of statistical physics back into biology, which has been done lots of times, is also valuable in thinking about some of the processes that we're looking at as search mechanisms in finding the optimal engagement with your environment. Optimal is a hard word. It's something less than that, I think, but something that provides a way of hooking up with our environment that might, by exploration and finding those key affordances, create a ramp. The more competent the environment, or the more affordances it offers, the higher, I guess, we can ascend the ramp of possibilities.

[1:07:58] Brian Cheung: Yeah, Jacob.

[1:07:59] Unknown: I really love this idea. Alison Gopnik certainly has made some connections between this notion of exploration and a kind of annealing point of view. I nonetheless don't think we pay enough attention to the role of behavioral plasticity in learning. Just a simple example that really drove this point home to me. My wife, Erica Cartmill, will sometimes, when she's trying to explain very basic conditioning of an animal to audiences that are not a bunch of physicists, do an experiment where she tries doing simple reinforcement on a human subject to try to shape some arbitrary behavior. There is a very strong relationship between the base behavioral plasticity that this person will exhibit and how easily they can be shaped into the appropriate kind of target behaviors. Someone who just sits there like a wet fish not doing anything provides very few probes into this possible affordance space that is in this case being shaped by a rewarding human interactant, but that you could think of more broadly as any kind of relational source of potential reward. If you're not exhibiting that kind of plasticity, you're not going to discover these sorts of affordances in the environment. I think this raises a real methodological challenge, which comes back to something you raised very early on, Mike, about the difficulty of what we can and can't recognize. I think we're very limited empirically in our ability to probe the capacities of intelligent systems, because we have to be able to read the design of the task in the same way that the system in question is reading it. This is one way to say that a lot of the shortcut learning, for example, that we see is exploiting an affordance that we were unaware of in the design of the particular task. How do you think about that? What I, in my language, often talk about as the as-relation: this interpretive layer involved in all of this behavior, where the environment is read as having this set of options, and our very limited capacity to read the option spaces that are interpreted by—it's hard enough to do it with other humans, let alone something that's radically different.

[1:11:04] Michael Levin: Sorry, Katrina, had you had your hand up before? Did I miss that? No. Blaze.

[1:11:12] Unknown: One of the reasons that I have some problems with the play concept is that I think it carries with it the assumption that what we normally do is something other than that: that work is the default, or that we are optimizing for something, or that there is some other thing. The reality is that any living system stays alive by virtue of staying alive. It doesn't mean that it has to be optimizing something. There is a dynamical loop that is stable enough that it continues to exist, and the range of things that can happen in the context of such a dynamical loop is very, very large. This Darwinian-Spencerian idea that if you're not working hard at it, you're going to die because something else is going to eat your lunch, we know is not really the case for a lot of organisms in a lot of situations. There are many things that create lunches for each other. There are networks that mutually reinforce each other in various ways. And that just leaves a lot of space for other stuff to happen. So it's not that I think seeking information or curiosity isn't something that things with intelligence do; certainly they do it. But any of these definitions of play as only stuff that satisfies your curiosity, only this, only that: it's a little like trying to define art. There's the form of play that is just bumping your head against something. Is it play, is it not play, is it just a tic, what is it? It's very particular, very value-laden, very anthropomorphic. I think that when we look at a worm doing something fun and we say it's play, we may be doing something that is usefully empathic. It may be that there is pleasure being experienced, that there's something about that experience subjectively that is like what we associate with play. It also may not be, but whether or not that is valid, to me, doesn't speak to whether it is serious or not. Stuff does all kinds of stuff. So I guess that would be my take on that question, for what it's worth. By the way, I need to switch to phone mode, so I'm still here but may not be on the same video. David.

[1:13:53] Unknown: So let me chime in here. As someone who plays music and teaches children how to play, and from my perspective as a musician, play is fundamental to music, to learning how to play music well, learning how to compose music. The way that I experience play in music is not necessarily information seeking, but almost pleasure seeking, maybe just boredom, taking up time, something to do just to do, that kind of thing. I would say that play is not necessarily exploration or information seeking at all. It can have multiple purposes. Maybe it's almost like a will to power: just to do something. I am fascinated by this question of how to distinguish between play and other behaviors. Early in my life I collected a lot of ants and spent a lot of time observing them. It seems like some ant behavior is exploration, like when they're foraging and it's almost randomly driven, just a random walk through the environment. But some of it could be characterized more as play. I think it may be very specific to the organism how you make this distinction. And it probably has to be within an understanding of what the goals of the organism are — what it's trying to do. I think a functional approach will get you somewhere in trying to understand what play is, how to characterize it, and how it differs from other things. Thanks.

[1:16:26] Michael Levin: Just watching cells build an embryo, especially in time lapse, that's one experience. And then a different experience is watching a bunch of cells explanted in a dish, or in some other context, where you watch them running around with not much happening, at least apparent to us. I always think about this sphere of television broadcasts that has been spreading out from the Earth: 80 light years of Three Stooges and football games and things like that, spreading out into the universe. I imagine aliens somewhere getting some of that and trying to figure out: what is this? Are they doing something? Are they just messing around? And it's basically that. We're in that position, watching these cells, trying to figure out: is this a poor attempt to build something, or is it not even that at all, and it's a fantastic attempt at having an enjoyable time exploring the dish? Or what the heck is it? Jacob.

[1:17:37] Unknown: This is riffing off of a remark that Douglas Brash made in the comments about when your kid plays: play is coming up with your own goal and pursuing that as an end in itself. I do think that this describes a capacity that is clearly very important to humans, namely our ability to choose essentially an arbitrary thing and pursue it as an end in and of itself. I think there's a very beautiful theory of culture in the work of the early 20th century sociologist-philosopher Georg Simmel in his book. I can't remember what the German name is. I will look it up and put it in the chat. But in any case, he has this theory that culture is built by identifying basic, core forms of life and pursuing them as ends in and of themselves. The mathematician David Mumford has an interesting account, very similar, of the origins of different parts of mathematics, along the same lines: geometry comes from pursuing the idea of space and fixing on that and exploring it in all of its possible variations. Analysis comes from taking notions of motion and putting them through their paces. I think this is something we can clearly do as humans. The challenge is recognizing that capacity in radically different embodiments. As Blaise said with the worm, and as you've said, Mike, with the cells, we know from our own introspective experience that there are cases where we are choosing some arbitrary end and pursuing it simply as an exercise in pursuing that end. We know what that looks like in other people and they can tell us that's what they're doing. Can we recognize that activity, which seems to me to be absolutely fundamental to our basic cultural and scientific capacities, in other embodiments?

[1:20:35] Unknown: Let me throw something on this. Play takes energy. That is something that is going to be selected against: too much expenditure of wasted energy, right?

[1:21:06] Unknown: On the other hand, play is fun. I was wrestling with this. There's a comment in the chat now from Leo that there's a correspondence to a temperature scale. Shouldn't play be low temperature, because it's fun and easy? Or is it really an entropy thing rather than an energy thing: that there's low constraint?

[1:21:41] Unknown: I see exploratory play as more like the high-temperature regime, but we need to generalize temperature, don't we? It could be that in a population, a small number might be exploring or playing more on the fringe of the potential spaces of interaction, and that their numbers relative to the bulk of the population would follow a Boltzmann-type relation, which you can only access at higher effective temperature. The fringe is less populated, and they're the more playful. As you go lower in temperature, you get to the more uniform, regular, habituated modes of interaction. That was the thing I was grasping at in that analogy.
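
A minimal numeric sketch of the Boltzmann-type relation being gestured at here, with an assumed "energy" gap between habituated modes and playful fringe modes (the gap and the temperatures are invented for illustration):

```python
import numpy as np

# Assumed effective energy gap between habituated modes (E=0) and
# playful fringe modes (E=delta), in arbitrary units.
delta = 2.0

def fringe_fraction(T, n_bulk=1, n_fringe=1):
    """Boltzmann weight of fringe modes relative to the whole population."""
    w_bulk = n_bulk * 1.0                    # exp(-0/T)
    w_fringe = n_fringe * np.exp(-delta / T)
    return w_fringe / (w_bulk + w_fringe)

for T in (0.5, 1.0, 2.0, 5.0):
    print(f"T={T}: playful fringe ~ {fringe_fraction(T):.1%}")

# At low effective temperature almost everyone sits in habituated modes;
# raising it populates the fringe, per the speaker's analogy.
```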

[1:22:55] Unknown: I get the analogy. You're saying I should readjust my thinking: I've been treating focusing on a single bit of work that you have to get done by five o'clock today, which is not fun, as the high-energy thing, when I should look at it instead as low temperature, highly constrained. That's what I meant by the entropy business.

[1:23:23] Unknown: But I think that being too thermodynamic about this just presumes that we're in too constrained a situation. Let's take a chemotactic bacterium close to its point of starvation. Then you may be close to a limit where if it doesn't tumble at just the right times, it significantly increases the likelihood that it will not exist in the future. Something like that is going to have to behave like an optimizer, which is to say it doesn't have a lot of space to have fun. If its space of behaviors is tumble or don't tumble and making the wrong decision means there's no more bacterium, then there's not a lot of agency or fun in a system like that. It'll only continue if it does exactly the right thing. But for the huge majority of organisms, including unicellular ones, there's such a range of behaviors. There are so many behaviors that are consistent with continuing to exist. If you're doing your chores, skinning the animal that you just killed, and you're bumping your **** a little bit while you do it and dancing around a little bit, the idea that the energetic difference between bumping your **** and not bumping your **** is going to make a difference in your survival is just ridiculous. Of course we're not that constrained, and I think that's true of the vast majority of life. I think that this whole teleological question of fun kind of vanishes. You just see there's a lot of turbulence in the system. There's a lot of stuff that happens. It's sometimes emotionally loaded. It's informationally interesting. It can develop cultural dimensions. But there's nothing unusual about this. The idea that everything is so constrained, I think, is just a wrong idea about how life works.

[1:25:24] Michael Levin: It sounds like what you've just described is something like the Maslow hierarchy.

[1:25:33] Unknown: Almost everything is above the baseline of the Maslow hierarchy when you look at it.

[1:25:39] Unknown: Yeah. So I think what you bring up is the importance of making some analytic distinctions, using the case of human play as the paradigm example: between what is behaviorally visible, which might be the unpredictability of the behavior given some circumstance, and which I think is closest to the temperature; the enabling conditions, which in the case of human play are typically situations where there is a sense of protection, a sense of lower risk, that permit more exploratory behavior; and the motivational status, in other words, what the agent who is playing is trying to do in that activity. I think you're absolutely right, Blaise, with the point about the degree of constraint and the enabling conditions. There's a beautiful example with the domestication of these birds, the white-rumped munia: their wild-type song is very, very stereotyped, but under domestication, as an epiphenomenon of the reduced selection pressure, they started to develop much, much more variable song behavior. I don't know if anyone has heard them, but they have these amazing songs; there's lots of variation, lots of variability. This is presented by folks who work on the evolution of language and on domestication as evidence that reduced selection pressures, reduced constraints, open up at least some degrees of freedom for greater variation and, in this sense, play across evolutionary time. I think it is very important to keep in mind how rare it is that we actually see organisms against the bare metal of survival.
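
The bird here is almost certainly the white-rumped munia, whose domesticated form, the Bengalese finch, is the standard example in this literature of song becoming more variable under relaxed selection. The qualitative claim, that relaxing selection lets variance accumulate, falls out of a toy mutation-selection balance; all parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def song_variance(selection_strength, pop=500, gens=300, mutation=0.1):
    """Toy mutation-selection balance: a song trait drifts by mutation each
    generation; stabilizing selection of the given strength weights
    reproduction against deviation from the wild-type song (0)."""
    songs = np.zeros(pop)
    for _ in range(gens):
        songs = songs + rng.normal(0, mutation, pop)       # mutation
        fitness = np.exp(-selection_strength * songs**2)   # stabilizing selection
        parents = rng.choice(pop, size=pop, p=fitness / fitness.sum())
        songs = songs[parents]
    return songs.var()

for s in (2.0, 0.2, 0.0):
    print(f"selection={s}: song variance ~ {song_variance(s):.3f}")

# Strong selection keeps the song stereotyped; relaxing it (domestication)
# lets variance accumulate, opening degrees of freedom for variable song.
```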

[1:27:58] Unknown: Jacob, did they find anything in the composition of the songs? Were they more complex, or richer? So they explored that space once it was, in a way, beyond the survival mechanic; it really opened into more space, more computational capacity. I don't have to find food anymore, therefore I can now put more energy into making these songs richer, more complex, more coherent.

[1:28:31] Unknown: Absolutely. Okay.

[1:28:36] Brian Chung: You know what fits nicely with that idea is the prevalence of play in human children versus adults, because at least if you're a human child in a relatively safe environment, you've got that domestication situation and you have the ability to play and explore a lot. And then as you get older, you're, oh, ****, I better get serious about my life.

[1:28:54] Unknown: And the selection pressures are more apparent to you.

[1:28:59] Unknown: That is also consistent with what is supposed to be happening in the academy. The term scholar comes from the Greek schole, which means leisure. The idea is supposed to be that you're protected from some of these forces so that you have time to play intellectually. And a lot of what we do is creating these spaces for play, like this one.

[1:29:32] Unknown: When we become adults we play less, partly because of constraints, financial constraints and other kinds, but could it also be that we get bored with life? Lose the will to live, maybe. There's Nietzsche's characterization of life as "will to power": that's really what life is ultimately about, exerting some kind of power over your environment, and play is just one of those ways of exerting your power. That's what life is fundamentally about.

[1:30:35] Unknown: Very postmodern view of what life is about.

[1:30:42] Unknown: What?

[1:30:43] Unknown: A very postmodern view: taking everything as a power game. There's a biological limit. My reaction, from personal experience, is that it's quite a shrunken-down version; it feels like there's probably a bit more to it than that. We reduce it to that because it's easier to investigate in finite time, with metrics and with tests where you can make such claims. But when you extend time out, that reduction might just be a function of the fact that the tool we have today sees it that way, and the tool of the future may see it differently. I think that's an error in that line of postmodernist thinking: it's tuned to particular timescales. But that's a personal view, just to offer the counterpoint that I don't think you can simply make that reduction.

[1:31:45] Unknown: I think what Blaise said in the chat is that life is just doing stuff. Power in the sense of that kind of will to power, the will to live. It's very fundamental. And maybe it even precedes reproduction. Maybe the fact that life forms reproduce is the manifestation of something deeper. Why even bother to reproduce? Why even bother to go on living?

[1:32:36] Unknown: I think a different way to see it is that we lose the open-endedness of play. When we're children we don't know the limitations; we are exploring the limitations. We don't know the low-level details that constrain us. Today we have great ideas we've thought about, but we don't even dig deep into those ideas to see why they cannot work; we assume they can't, instead of finding a way to make them work. So maybe play is about not knowing too much.

[1:33:20] Unknown: One of the really interesting things about children is that they both play a lot and love repeating things. They exhibit quite different properties vis-a-vis adults with respect to both variability of behavior and getting bored: they love having the same thing happen over and over again. And there's a construal that says that in both cases, what it's about, in a Nietzschean register, is just affirmation. They affirm whatever they're doing: I'm singing this song for the 16th time, and I'm very excited about that; I'm now going to go do some random other behavior, and I'm also very excited about that. From an existential standpoint, I do agree with you, David, about getting tired of life, but I think the way to look at children, at least as models of this kind of radical capacity for not getting bored with things, is that they both play and radically affirm what's happening.

[1:34:43] Michael Levin: There's an interesting piece of data that I think is deep and hasn't been dealt with, and that speaks to something David was bringing up. This guy did experiments where he would take a rat and throw it in a bucket of water; the rat can tread water for a couple of minutes, and then it drowns. That's what happens. Then he would throw the rat in, wait a minute or 45 seconds, take the rat out, dry him off, and put him back in. You do that a couple of times, and basically the rat learns that he's going to be rescued, and then you find out that a rat can actually tread water for about an hour. So this is very interesting. The physiological reserves are sufficient to keep going for an hour; why do most rats drown after a minute and a half or two? There's some version of giving up. I don't know that that's available to insects, but it seems to be available to at least some mammals. In the hopelessness of it, you would think that evolution would strongly select for a terminator-like behavior: if you've got the physiological reserves, go to the last moment, because one time out of 1,000 something will happen and you'll get rescued; that certainly should be the favorable phenotype. And yet that's not what happens. At least in the mammalian case, and there are other examples of this in birds, they have the ability to actually give up and say: forget it, I could keep going, but I'm not going to. I think that's interesting, and how that interplays with evolution is interesting. You wouldn't predict it from standard Darwinian principles.

[1:36:28] Unknown: One of the tools in observer theory is the idea of a limit on your possibility space from the observer's perspective: what you think can possibly happen, versus what you're predicting right now, what normally happens. The second space is smaller than the edge of your full state space. When you pick the rat up after a minute, you're creating an equivalence whereby that predicted space gets bigger, approaching the boundary of what it thinks is possible. Because you have that equivalence, after reinforcing it enough times it becomes part of the rat's possibility space. The rat can then go to its internal model when it's in that loop: this happened before, and it can happen again; if I hold on a bit longer, I can keep going. Once that possibility has been actualized in its internal model (a rat might need more reinforcement, or more direct reinforcement, than, say, us), it can do it. It accesses that fuller possibility space because you've created an equivalence for it by interacting with it, by effectively coupling with it, by giving the rat a proposition: you will physically get lifted out of this tub. The rat doesn't choose whether it gets lifted out, but with enough reinforcement that proposition becomes part of its world order, and its state space accepts it. So it can then do the thing, because you've effectively given it top-down knowledge; its possibility space was bigger than it knew. That dynamic of reinforcement and coupling between observers, where accepting and rejecting propositions changes the morphisms accessible, the choices accessible between states in the internal model, can apply at all scales, not just in this example, and it's an interesting way of investigating that difference. It also suggests a practice for the idea of Platonic space: when we do new things, or introduce a new element to something that doesn't have that element, we are ingressing into its Platonic space, or its state space or data space, whichever term you want to use. We're changing it. Through ingression from you, you've changed the things it can do, so it now thinks it can do more; it's updated. And that loop is a way to play with the idea of ingression in a tight, physical way.
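
One way to make the rat story concrete, purely as a toy model (the decision rule and all numbers are invented here, not drawn from observer theory or the original experiments): let the probability of continuing each minute rise with the rat's belief that rescue is possible, and let each rescue push that belief up.

```python
import numpy as np

rng = np.random.default_rng(1)

def treading_minutes(belief, reserve=60):
    """Each minute the rat keeps treading with a probability that rises
    with its belief that rescue is possible; physiological reserves allow
    up to `reserve` minutes in total."""
    for minute in range(1, reserve + 1):
        if rng.random() > 0.5 + 0.5 * belief:   # gives up despite reserves
            return minute
    return reserve                               # used the full reserve

belief = 0.05                                    # naive rat: rescue seems impossible
for rescue in range(4):
    print(f"belief={belief:.2f}: treads ~{treading_minutes(belief)} min")
    belief += 0.7 * (1.0 - belief)               # each rescue expands what seems possible

# After a few rescues the same physiological reserve supports far longer
# treading: the actualized possibility now lives in the internal model.
```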

[1:39:18] Unknown: I do think there is something to that. When I kept ant colonies, when the queen died, the colony just fell apart: the ants continued living, but they weren't foraging, and eventually they would just die out. It seemed to me, watching it, that they lost the will to live once the queen was dead. Maybe that just has a complete biochemical explanation waiting to be found, but behaviorally, that's what it looked like: they lost the will to live. We have a lot of interesting things going on in the chat. Someone brought up galaxies earlier. I wonder, at a deep metaphysical level, whether maybe that's what existence is. Why is there something rather than nothing? We've all thought about that question, but I don't think anyone has a good handle on it. Maybe there's something rather than nothing because the universe wants to do stuff.

[1:41:15] Michael Levin: Dave, to your previous point about the ants as to whether there would be a biochemical explanation, I think there's always a biochemical story to be told of anything, or a physical story to be told. To me, it's like the neural correlates of consciousness. You could tell that story. It's not false exactly, because it does accompany and it does implement the thing you're talking about. But in most interesting cases, that low level story is not the most insightful story. I'm sure there's some biochemical fact about it to be found, but there's probably a more interesting level to it, I would think.

[1:42:07] Unknown: The ants get pheromones from the queen, giving them instructions to do different behaviors.

[1:42:18] Michael Levin: No doubt, if you watch two brilliant mathematicians discuss some proof and you come away saying, look here, there was a bunch of air molecules and they moved like this and then that, you're not wrong exactly, but you've missed the whole point. You haven't facilitated the next interesting thing that might happen there. It's just you've picked poorly as far as the level of description.

[1:42:44] Unknown: It would be an interesting experiment to try: a robot queen that secretes all the right pheromones, injected into an ant colony. Does it play the exact functional role of a real live queen in the colony?

[1:43:04] Michael Levin: Do you know the book "The Soul of the White Ant" from the '20s, by Eugène Marais? Have you seen that? Well worth it. If you're into ants: "The Soul of the White Ant" by Eugène Marais, back from '23 or something. It's really amazing. He did all these experiments: if an ant from one colony goes to another colony, they kill it. But if he goes over there and the queen is dead, they take him in. He becomes part of the colony; there's all this stuff. Marais was trying to work out how they know, and at what distance, putting barriers in. Really, really remarkable.

[1:43:45] Unknown: In my own experiments, when a queen died, I would try to introduce a new queen into the colony to see if they would take it. Sometimes they would, sometimes they wouldn't. It may vary with the species.

[1:44:13] Michael Levin: I think this has been great. Does anybody else have any last thoughts?

