<?xml version="1.0" encoding="UTF-8"?>
<rss 
  version="2.0"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:media="http://search.yahoo.com/mrss/"
  xmlns:wfw="http://wellformedweb.org/CommentAPI/"
  xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
  xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
  xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
  xmlns:rawvoice="http://www.rawvoice.com/rawvoiceRssModule/"
  xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0">

  <channel>
    <atom:link href="https://thoughtforms-life.aipodcast.ing/rss/" rel="self" type="application/rss+xml" />
    <title>Thoughtforms Life Podcast</title>
    <link>https://thoughtforms-life.aipodcast.ing</link>
    <description>Exploring the unseen forces of life, cognition, and emergence — with Professor Michael Levin. Conversations on morphogenesis, bioelectricity, synthetic biology, and the nature of intelligence.</description>
    <language>en</language>
    <copyright>Thoughtforms Life Podcast Copyright 2026</copyright>
    <lastBuildDate>Sun, 12 Apr 2026 20:43:38 +0000</lastBuildDate>
    <itunes:author>Thoughtforms Life Podcast</itunes:author>
    <itunes:summary>Exploring the unseen forces of life, cognition, and emergence — with Professor Michael Levin. Conversations on morphogenesis, bioelectricity, synthetic biology, and the nature of intelligence.</itunes:summary>
    <itunes:owner>
      <itunes:name>Your Name</itunes:name>
      <itunes:email>youremail@example.com</itunes:email>
    </itunes:owner>
    <itunes:explicit>no</itunes:explicit>
    <itunes:image href="https://thoughtforms-life.aipodcast.ing/content/images/2025/04/TFL2-1.png" />
    <itunes:category text="Technology"></itunes:category>

        <item>
          <title>&quot;A Multiscale Logic of Collective Intelligence&quot; by Donald Hoffman and Chetan Prakash</title>
          <link>https://thoughtforms-life.aipodcast.ing/a-multiscale-logic-of-collective-intelligence-by-donald-hoffman-and-chetan-prakash/</link>
          <description>Donald Hoffman, Chetan Prakash, Robert Chis-Ciure, and Chris Fields discuss a multiscale logic of collective intelligence, covering observers, agency, causal emergence, quantum logic, and consciousness-first models.</description>
          <pubDate>Sat, 11 Apr 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 69d9d7a6983bbd0001fafd27 ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/YnfaT5APPB0" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/cb7a379b/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1.5 hour talk + discussion, titled "A Multiscale Logic of Collective Intelligence" by Donald Hoffman and Chetan Prakash, with Robert Chis-Ciure, Chris Fields, and me.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:01) Beyond space-time physics</p><p>(10:10) Minimal observer participants</p><p>(18:49) Recursive trace logic</p><p>(35:08) Actions and trace blankets</p><p>(43:45) Physics and agency together</p><p>(52:32) Causal emergence and joins</p><p>(59:52) Contextuality and quantum logic</p><p>(01:06:00) Unitarity and positive geometries</p><p>(01:16:29) Consciousness-first mind theories</p><p>(01:24:39) Testing models together</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a 
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:01] Donald Hoffman:</strong> A multi-scale logic of collective intelligence, and it's what we call the recursive trace logic. We've had the trace logic for a couple of years, but in the last couple of months discovered a recursive aspect to it that will lead into a notion of agency that's novel. This is different, Chris, than the conscious agent theory. It's a different notion of agency than we've had before. The big topics I'd like to talk about are: can you guys see me? Yeah. Okay. So I'm going to talk a little bit about collective intelligence, our model of collective intelligence, how it involves coarse-graining, which is important to you guys, how it involves generative models, minimizing surprise automatically, bending problem spaces, a recursive notion of agency and self, a new intelligence metric for agents, which we'll call lambda sub 2, and its relationship to your measure K. And then how this is all beyond space-time and quantum theory. And I'll start there just briefly about why I'm thinking entirely outside of space-time and quantum theory. The idea is that high-energy theoretical physicists are done with space-time. They say it's not fundamental. 
So here's Nima Arkani-Hamed at the Institute for Advanced Study. Space-time is doomed. There's no such thing as space-time fundamentally in the actual underlying description of the laws of physics. And he makes it very, very clear that he's saying space-time and anything inside space-time, and that includes anything with unitary evolution, quantum theory in particular. So he's going beyond space-time and quantum theory. And it's not just him: because of the success of him and his collaborators, the ERC has funded a 10 million euro initiative called Universe Plus. And it's all about going entirely beyond space-time and entirely beyond quantum theory and looking for what they're calling positive geometries. And so there's over 100 high-energy theoretical physicists and mathematicians now working on this, and they're finding stuff. And I can talk a little bit about how it's related to what we're finding, but they're finding these positive geometries that give you scattering amplitudes without any quantum theory whatsoever. And you get it much more easily and more simply than with quantum theory. So it's clearly quite striking. So I'm stepping entirely outside of space-time. Yeah.</p><p><strong>[02:42] Michael Levin:</strong> Sorry, just a quick question. Maybe naive, but I just want to understand this idea of space-time being doomed. So on that view, if that were correct, what is the status of, let's say, general relativity? What does it refer to? Will it be completely supplanted? What is that theory about then?</p><p><strong>[03:01] Donald Hoffman:</strong> Right, so the idea is that the very notions of space and time, even the combination of them as space-time, is not fundamental at all. So general relativity will go the way of all theories. It will be like Newton: we still use Newton for certain cases, and we'll still use GR for certain cases, but we need a much deeper theory. 
The hard fact is that when you bring together GR and quantum theory, you find that space-time has no operational meaning at the Planck scale, 10 to the minus 33 centimeters, 10 to the minus 43 seconds. It simply has no operational meaning. So that means we have to find a deeper foundation. So these are only, at best, approximation theories.</p><p><strong>[03:50] Robert Chis-Ciure:</strong> So Don, just to be clear, in the obviously Kantian vein in which I know much of your theorizing, this is not only eliminating space-time at the empirical level, it's also ejecting it from any transcendental-style considerations. It is just a placeholder, until we have something better, but it will die as a concept in our economy of thinking, even about our experience, let alone the empirical physical world.</p><p><strong>[04:22] Donald Hoffman:</strong> Absolutely. That's the idea. We thought space and time were the fundamental nature of reality. We might have even thought they were a priori true or something, but that's just wrong. That's just plain wrong. And science has a way of forcing us.</p><p><strong>[04:37] Chris Fields:</strong> If one formulates basic ideas of quantum theory outside space time completely, then there are many routes, which are under study by huge numbers of people, again, for generating space time as a consequence of basically assumptions about quantum information theory, and also many, many routes for generating Einstein's equations as either approximations or again, outcomes of other kinds of assumptions. So GR turns into something like the status that classical physics has with respect to quantum theory in space time, which is a limiting case, an approximation that's good in some circumstances for doing some things, which is basically how Don just characterized it. 
So yeah, there's lots and lots and lots of physics underlying this, both in the high energy community and the quantum information community.</p><p><strong>[06:07] Donald Hoffman:</strong> Right, and what Nima and the ERC group are doing is even going beyond that because they're saying we're not going to even start with quantum information theory. Anything quantum itself is going to arise joined at the hip with spacetime from something far deeper. So they want to show quantum information theory and general relativity arise together from something that couldn't care less about unitarity at all. So that's what they're after. So there is no locality and there is no unitarity, period, in these new positive geometries. And they don't care about unitarity. And they show that quantum information theory then comes out as an approximation, in a special case, at the same time that you get spacetime. So it's different than the Carlo Rovelli kind of approaches and so forth. So that's the direction they're trying to go here.</p><p><strong>[07:03] Chris Fields:</strong> You should actually say that it contradicts most of quantum information theory, because quantum information theory actually has nothing to do with spacetime. So the two arising together would be very unusual.</p><p><strong>[07:23] Donald Hoffman:</strong> Well, what Nima wants to show is that unitarity and locality together arise from these positive geometries. And then, because you get unitarity arising from it, you get the foundations for quantum information theory. But we'll see; the proof is whether you can do it, right? So yeah, I'm just trying to be clear about where they stand with respect to current approaches to trying to build up. As you say, Chris, most approaches that are trying to build space-time are starting with something quantum, and these guys are not. They're saying, we're not even having quantum. We're starting with what they just call positive geometries. 
So I just wanted to make clear how out of the box their thinking is. So John Wheeler, of course, was trying to think out of the box, and he was saying, you know, someday, this is in 1990, in his wonderful book on gravity and space-time, he says, someday surely we'll see a principle underlying existence so simple, so beautiful, so obvious that we'd all say to each other, oh, how could we all have been so blind so long. And so that's what we're looking for. And let's see. I'm not able to, can you guys hear me?</p><p><strong>[08:38] Michael Levin:</strong> We can hear you, but the slides are not advancing.</p><p><strong>[08:40] Donald Hoffman:</strong> Let's see. Okay, I guess it advanced now. And he said about the same time in his...</p><p><strong>[08:52] Michael Levin:</strong> Sorry, we're still seeing the title slide.</p><p><strong>[08:56] Donald Hoffman:</strong> Okay, let me try this again.</p><p><strong>[09:01] Robert Chis-Ciure:</strong> We only saw the title page so far.</p><p><strong>[09:05] Donald Hoffman:</strong> I'll go back and try the... That's weird. So I'll... Let's see. So I'll go back to share. Sorry about that. That's weird. Can you see that?</p><p><strong>[10:01] Michael Levin:</strong> We can, but it's still...</p><p><strong>[10:08] Donald Hoffman:</strong> Yep. Okay.</p><p><strong>[10:10] Michael Levin:</strong> Yeah, there we go.</p><p><strong>Donald Hoffman:</strong> Okay, so Wheeler suggested that the notes struck out on the piano by the observer participants of all places and all times, bits though they are, in and of themselves constitute the great wide world of space and time and things. So he was trying to start with what he calls observer participants. And he thought that maybe somehow, and that was in his It from Bit paper in 1989. And he actually, in his paper, cited work that Chetan and I were doing, our book Observer Mechanics. 
So he was already thinking about the stuff we were doing with observers and participants back then. So what's a minimal observer participant? We're going to start with just the absolute bare basics. They have experiences, like smell of garlic, taste of mint, and these experiences can change. That's all I'm going to assume. That's the foundation of everything. So my ontology is there are experiences and they can change. So for example, maybe I have four experiences, a very, very simple observer: red, green, blue, and, let's say, yellow, and they change. So now I'm seeing yellow, now I'm seeing green, now I'm seeing blue, and so forth. They keep changing. So the simplest, and in fact most general, way of talking about that is just to talk about Markov chains. So in the Markov matrix there, the first row has 0.2; that means, if I see red now, that's the probability I'll see red next. The 0.3 is, if I see red now, a 30% chance I'll get green next, and so forth. So it's just a transition matrix, probability of seeing the next color, given that I'm seeing the current color. So that's all Markov chains are; they're these matrices. Of course, a lot of complications come out of that. And one aspect of Markov chains is that they immediately instantiate a very interesting kind of goal-directed behavior. No matter what state you start the Markov chain in, it has a target stationary measure. In this case, it's the thing on the left, 0.33, 0.30, 0.16, and 0.21. No matter what state you start this matrix in, it is eventually going to go to that stationary measure. And you can perturb it as much as you want. It will resist the perturbation and head back to that target state. So already we have, in the very structure of this, a goal-directed behavior. And, as you guys talk about in your papers, William James mentions that intelligence is achieving a fixed goal with variable means of achieving it. 
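This goal-directedness is easy to check numerically. In the sketch below (Python with NumPy; the 4x4 matrix is illustrative, matching only the 0.2 and 0.3 first-row entries quoted in the talk), iterating a 4-state chain from two different starting distributions drives both to the same stationary measure:

```python
import numpy as np

# A row-stochastic transition matrix for a 4-state observer
# (states: red, green, blue, yellow). Only the 0.2 / 0.3 entries
# in the first row come from the talk; the rest are made up.
P = np.array([
    [0.2, 0.3, 0.4, 0.1],
    [0.5, 0.2, 0.1, 0.2],
    [0.3, 0.4, 0.1, 0.2],
    [0.3, 0.3, 0.1, 0.3],
])

# Iterate the chain from a distribution concentrated on "red" ...
pi = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(200):
    pi = pi @ P

# ... and from a completely different, uniform starting point.
other = np.ones(4) / 4
for _ in range(200):
    other = other @ P

# Both converge to the same target: the stationary measure.
assert np.allclose(pi, other)
assert np.isclose(pi.sum(), 1.0)
print(np.round(pi, 3))
```

Perturbing the distribution mid-run and continuing to iterate returns it to the same target, which is the "resists perturbation" behavior described above.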
So, that's the stationary measure, and if you have an ergodic Markov chain, then you will have a stationary measure. Now, the idea is I want to have multiscale collective intelligence, and so we need a notion of scale. So, I'm just going to take an observer that sees a subset of the states that this one sees. So the first observer I was talking about has four colors, as you can see. Let's consider one that has only two. So that's my notion of scale: the subset relationship among the experiences that you have. Now, so here's the key idea of everything we're going to be doing now. Suppose I take the matrix on the right as describing, quote unquote, the reality. This is what's happening. And those are the transitions. But this observer on the left only sees two, red and green. What transition probabilities is it going to see? There should be a formula. Given the matrix on the right, there should be some kind of computation we could do to give us a two-by-two matrix for the transitions on the left, just in red and green. Does that idea make sense?</p><p><strong>[14:04] Donald Hoffman:</strong> Yep. Okay, good. So when you do the mathematics, it turns out that's the matrix. So you get this two-by-two matrix. Notice that the numbers are completely different from this matrix, right? It's not just copying; it's a computation that you have to do. And so here are the two matrices. The one on the right is the big matrix. And if you just restrict attention to the red and green, then you get the matrix on the left, induced by the matrix on the right. And this is called the trace, so the matrix on the left is called the trace of the matrix on the right. That's just standard in Markov theory that's been around for more than half a century. So this is not new to me or to us. Now, the trace formula is important. I'm going to go through it because it has an important conceptual thing for us. 
So here's the way you compute the trace. I'm going to take this matrix, I'll call it matrix P, and I want to get its trace on the red and green. So first I'll just notice that we can take this matrix and divide it into four sub-matrices. There's a two by two matrix that has 0.2, 0.3, 0.5, 0.2, that's for the red and green and so forth. That we'll call matrix A, so that's going to be the states that are visible to the trace observer, right? So A is the sub-matrix based on the states that are going to be visible to the sub-observer. C is the sub-matrix relating states that are dark to this new observer. It doesn't see this, so this is all dynamics that's dark to it, okay? B is the exit matrix. This is the exits from what you can see to the dark region. So B is the exits, and D is the re-entrance. This is getting from the invisible world back into the visible world. So those are the sub-matrices that we're going to be using, and here is the formula. This works universally. The trace, so the trace matrix on A, which is the visible states, is you just take the original matrix A, so 0.2, 0.3, 0.5, 0.2, and you add this interesting thing on the right. The I is the identity matrix. So you take the identity matrix minus the dark matrix. So I minus C is the identity minus the dark matrix. And you take its inverse. That has the effect of being able to explore all possible paths. There's an infinite number of paths through C that you could take. So the quantity I minus C, inverse, explores the infinite number of paths there. And then you pre-multiply by the exits and post-multiply by the entrances. And you add that all up and that's your trace. So that's the idea. You're basically, you get the trace by looking at all the ways that you could go outside of the trace states and then come back into them, okay? That's the general formula. So that's been around, again, that's not us, that's been around for a long time. 
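The formula described here can be written out directly. In this sketch (Python/NumPy), only the A block 0.2, 0.3, 0.5, 0.2 comes from the talk; the dark-state entries are invented for illustration. The trace on the visible states is T = A + B(I - C)^-1 D:

```python
import numpy as np

# Block structure of the full chain P over states (r, g | b, y):
#   A: visible -> visible     B: visible -> dark (the exits)
#   D: dark -> visible        C: dark -> dark (hidden dynamics)
P = np.array([
    [0.2, 0.3, 0.4, 0.1],   # top-left A block is from the talk
    [0.5, 0.2, 0.1, 0.2],
    [0.3, 0.4, 0.1, 0.2],   # remaining entries are illustrative
    [0.3, 0.3, 0.1, 0.3],
])
A, B = P[:2, :2], P[:2, 2:]
D, C = P[2:, :2], P[2:, 2:]

# T = A + B (I - C)^{-1} D: direct transitions plus every excursion
# through the dark states, summed over all path lengths (the
# (I - C)^{-1} factor is the Neumann series I + C + C^2 + ...).
T = A + B @ np.linalg.inv(np.eye(2) - C) @ D

# The trace is itself a valid (row-stochastic) Markov matrix.
assert np.allclose(T.sum(axis=1), 1.0)
assert (T >= 0).all()
```

Note that the exits B pre-multiply and the re-entrances D post-multiply the inverse, exactly as in the spoken description.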
So you have hidden memories and controls. B, C, and D are going to be hidden layers of control that the agent A cannot see, but that will be influencing its behavior. So there are going to be interesting hidden-memory kinds of possibilities now with B, C, and D. So there are explicit memory changes when you change A directly, but then there's going to be hidden...</p><p><strong>[18:00] Robert Chis-Ciure:</strong> Don, just one second, can you please go back? In this BCD, so the exits, the entrances, and the invisible, is there any particular mapping to memory or control, or is "hidden memory or controls" more of a blanket category you're using? Is memory, for example, the hidden dynamics in C? And what's the control here? I suspect exits and entrances would be more like control.</p><p><strong>[18:27] Donald Hoffman:</strong> Well, it turns out that there are different ways to control. You can screw around with B, you can screw around with C, screw around with D, or all of the above in any combination you want. All of them together give you different ways of controlling. So it's really quite fascinating, the possibilities here.</p><p><strong>[18:46] Robert Chis-Ciure:</strong> Very cool, thanks.</p><p><strong>[18:49] Donald Hoffman:</strong> So all of that is old; here's the new stuff. We discovered just a couple years ago that the trace relationship gives you a partial order on all Markov chains. That was the discovery, and that's what sort of launched this whole thing. So it's a partial order, which means that there is a logic. So the definition is that the matrix M is less than or equal to a matrix N in the trace order if and only if M is a trace of N. That's it, one trivial definition, but no one saw it before. And it turns out that that definition gives you a multiscale logic of minimal surprise. And the reason it's minimal surprise is because the trace is the zero-surprise view of the bigger matrix. That's the key idea. It is the zero-surprise subset view. 
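The "zero-surprise view" reading of the trace can be checked empirically: if you run the full chain and record only the moments it visits the visible states, the transition statistics of that recorded subsequence should match the trace matrix. A sketch, reusing the same illustrative 4-state matrix as above (only its top-left block is from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.array([
    [0.2, 0.3, 0.4, 0.1],
    [0.5, 0.2, 0.1, 0.2],
    [0.3, 0.4, 0.1, 0.2],
    [0.3, 0.3, 0.1, 0.3],
])
A, B = P[:2, :2], P[:2, 2:]
D, C = P[2:, :2], P[2:, 2:]
T = A + B @ np.linalg.inv(np.eye(2) - C) @ D

# Simulate the full chain, recording only visits to the visible
# states {0, 1}: the small observer's view of "the reality".
s, visible = 0, []
for _ in range(50_000):
    s = rng.choice(4, p=P[s])
    if s < 2:
        visible.append(s)

# Empirical transition frequencies between consecutive visible visits.
counts = np.zeros((2, 2))
for a, b in zip(visible, visible[1:]):
    counts[a, b] += 1
T_hat = counts / counts.sum(axis=1, keepdims=True)

# The observed statistics match the trace: no surprise for the
# observer who only ever sees red and green.
assert np.allclose(T_hat, T, atol=0.02)
```

In the Markov-chain literature this construction is usually studied under the name of the censored (or watched) chain; "trace" is the speakers' term for it.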
And we'll talk about the stationary measures as well. The stationary measure of the trace is a normalized restriction of the original stationary measure. So you have minimal surprise in the dynamics. In fact, zero surprise in the dynamics. And in the stationary measure, again, zero surprise. So the trace logic is the logic of minimal surprise for arbitrary dynamical systems. So that's the power of this, because minimizing surprise is, of course, a key to intelligence. But this is multiscale. So this is the multiscale logic of minimal surprise. So what about this trace logic? The set of all Markov chains forms a non-Boolean logic under the trace order. It's non-Boolean. That means that there's no global top, there's no global negation, many matrices do not have meets and joins, or ands and ors. It does have a notion of meet, join, not, and entails generally, but many matrices are not compatible, so they may or may not have meets and joins. So it's a very, very complex logic. However, if you take any particular Markov chain P, and you look at all of its traces, they form a Boolean sub-logic. So I can pick any Markov chain I want to, any one at random, look at all of its traces, and all those Markov matrices together form a Boolean logic. So the notions of and, or, not are completely well-defined. And this Boolean logic has 2 to the n members. If there are n experiences, then there are 2 to the n members in this Boolean logic of traces. So, if you think about it, we don't have agency yet, although I showed you that notion of goal-directed behavior, which is sort of like a proto-agency kind of thing. Already, these matrices are going toward their stationary measure, no matter how you perturb them. So already, there is this interesting notion of some kind of agency going on there. But now, here's the key idea. And this is only now two months old, this idea. 
And that's why, when I had this idea, I realized that it was time to talk with you guys. The trace logic I've talked about is a logic on observer windows. So it's an infinite space of all possible observer windows. There's this minimal surprise logic on all of it, the trace logic, cleanly well-defined.</p><p><strong>[22:53] Donald Hoffman:</strong> Now, how do I want to model agency? And this is the new idea just in the last few weeks. Agency is a matter of changing which window I want to look through. I want to have a policy for, if I'm looking at the world this way, how do I want to look at the world next? And how do I do that? Well, with another Markov chain. The Markov kernel will say, what's the probability, if this is my current window, that my next current window will be such and such? The way you write that down is, again, a Markov matrix. So what we have is: a policy is a Markov matrix on the trace logic itself. So the trace logic is the entire logic of minimal surprise on possible conscious observers. That's what it is. And the first step of agency is to say, let's crawl along the trace logic. That's the first baby step in agency, the first ability to crawl along the trace logic. Now, we look at the collection of those Markov kernels; I'll call them policies. Each Markov kernel is a policy; it's a first order of agency. And since they're Markov matrices, they satisfy their own trace logic. So we have the first trace logic, of observer windows. Now we start crawling on that trace logic of observer windows. That's our first layer of agency, and it has its own trace logic. That's why I call this recursive trace logic. It's recursive now. And you can see we can do this ad infinitum. Once we have the trace logic of policies, I can now crawl on it and get meta-policies. And so I can take agency to whatever layer of complexity I want. We can start with the baby layer. 
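That baby layer, a policy as a Markov kernel whose states are observer windows, can be sketched as a toy (the window contents and policy probabilities here are invented purely for illustration): sampling from the kernel produces a path through the trace logic, a "crawl" from window to window.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three observer windows, each a subset of 4 underlying states.
windows = [(0, 1, 2), (0, 1), (0,)]

# A policy is itself a Markov matrix, but its "states" are the
# windows: P(next window | current window).
policy = np.array([
    [0.2, 0.6, 0.2],
    [0.3, 0.4, 0.3],
    [0.5, 0.3, 0.2],
])
assert np.allclose(policy.sum(axis=1), 1.0)

# Crawling along the trace logic: sample a short path of windows.
w = 0
path = [windows[w]]
for _ in range(5):
    w = rng.choice(3, p=policy[w])
    path.append(windows[w])
print(path)
```

Because the policy is again a Markov matrix, the same trace construction applies to it, which is the recursion: meta-policies are kernels over policies, and so on.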
We can start with just the observer windows and explore those. Then study policies and then meta-policies and build up recursively to ever deeper notions of agency. So just at top level, we can think of a policy as simply a path through the trace logic of observer windows. That's the simplest case, right? So I started off with a three-state window, and maybe I moved to a two-state window, and then I moved to a one-state window, and that was what my policy was. And so I've got a Markov kernel that does that. And then a meta-policy would say, I've got thousands and thousands of policies. I now have the flexibility to choose my policies based on whatever goals I might have. So policies can model attention shifts, scale shifts, reparameterizations. A policy can maybe describe a subsystem that I think is now driving my future decisions, my policies. So the recursive trace logic is the collection of all policies with their trace logics, and then recursed, recursed, recursed again. So it's a whole hierarchy of trace logics. Each trace logic itself is infinite.</p><p><strong>[26:57] Donald Hoffman:</strong> So we have a choice of policy, meta-policy, meta-meta-policy, and so forth. So we've talked about stationary measures. And there's sort of a minimal kind of notion of goal-directed behavior. We can write down a simple intelligence metric based on Markov chains. So it turns out that for any probability measure pi, there are many Markov chains for which pi is a stationary measure. So if you specify a stationary measure and you ask, what is the Markov chain that has that stationary measure? That's its own question. There's an infinite class of Markov chains that will have that stationary measure, and they vary in very interesting ways. For one, they have different rates of convergence. So some will have this goal-directed behavior where they're going almost immediately to the goal. 
No matter where you start them, they will go almost in just a couple steps to the goal, and others will converge very, very slowly. So we get to choose, in the trace logic, how quickly we want to converge to our goal, right? So this is going to be very interesting, because search efficiency is, of course, how your measure K models intelligence. So we have a dial here with which we can dial the intelligence, and it may be that you might have high intelligence with respect to a goal, but there may be some sub-goals. It turns out that if you go quickly to this stationary measure, you may not do other things intelligently. So we're going to have to be careful which Markov chain we choose, depending on what goals we're trying to get to. So there are many goals that you can get, and I want to talk about that, the possibilities. So there's differing rates of convergence, and the convergence rate is dominated by lambda 2, lambda sub 2, which is the second-largest eigenvalue of the Markov matrix. You take the Markov matrix and do its eigenvalue analysis. The largest eigenvalue is one. But then you find the largest eigenvalue that's less than one, and that pretty much tells you the rate of convergence for that particular Markov chain. So there are Markov chains with different lambda twos that all have the same stationary measure, and so they converge to it at different rates. So there is then a connection between this Markov notion of intelligence, which is the lambda 2 convergence, and your metric, which is K. And the relationship is just a simple equation, where T sub M would be essentially our lambda 2, the rate of convergence. And T blind would be, say, just a random walk that's not smart. Right, so there is a deep connection. But now here's a little trick. We want to have, as you guys talk about, you talk about different layers. It's hierarchical, and higher layers can bend the geometry of the problem space for lower layers. 
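The lambda-2 "intelligence dial" can be made concrete by comparing two chains with the same stationary measure but different second-largest eigenvalue moduli. One standard way to get such a pair (an assumption of this sketch, not something from the talk) is a lazy mixture a*I + (1-a)*P, which keeps the stationary measure but pulls the spectrum toward 1, slowing convergence:

```python
import numpy as np

def lambda2(M):
    """Second-largest eigenvalue modulus of a stochastic matrix:
    it governs how fast the chain converges to its stationary measure
    (smaller lambda_2 means faster convergence)."""
    return np.sort(np.abs(np.linalg.eigvals(M)))[-2]

P = np.array([
    [0.2, 0.3, 0.4, 0.1],
    [0.5, 0.2, 0.1, 0.2],
    [0.3, 0.4, 0.1, 0.2],
    [0.3, 0.3, 0.1, 0.3],
])

# Lazy version: if pi P = pi, then pi (0.9 I + 0.1 P) = pi too,
# so the goal (stationary measure) is unchanged...
lazy = 0.9 * np.eye(4) + 0.1 * P

# ...but the lazy chain's lambda_2 is larger: it converges to that
# same goal far more slowly. Same goal, different "intelligence".
assert lambda2(lazy) > lambda2(P)
assert lambda2(P) < 1.0
```

This is the dial: among all chains sharing a stationary measure, lambda_2 separates the ones that reach the goal in a couple of steps from the ones that wander.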
And so how do you model that with Markov chains? Well, it turns out that you can have lots of different so-called community structures. So again, for any stationary measure pi, there are an infinite number of Markov chains that have pi stationary, but that have differing community structures. Now, a community structure, you probably know about it, but I'll just say briefly. You could have thousands and thousands of states in this Markov chain; maybe a few hundred are tightly connected over here, a few hundred are tightly connected over there. There are just a few cross-links. The whole thing is ergodic, but basically you might have 10 communities that are tightly knit. Now, within each of those communities, maybe my 100-state community, if I look at it more closely, it itself is composed of maybe three new sub-communities. In other words, you can have an infinite number of communities, sub-communities, sub-sub-communities, all the way down.</p><p><strong>[31:01] Donald Hoffman:</strong> as far as you want, and all having the same stationary measure. So what this gives us is you might have one big goal, reach the stationary measure, but you could have sub-goals: the way you get there is via the different communities that you might emphasize as you go down. So it gives you this multi-scale flexibility. And the community structure, it turns out mathematically, is dictated by the eigenvectors. When you do the analysis of the matrix again, the community structure shows up in the eigenvectors with eigenvalues close to 1, because those involve slow mixing between communities. So the communities mix inside themselves, but they don't mix between each other very much. So we can have policies then that are trying to focus on stationary measures, community structures, convergence rates, particular dynamical models, and so forth. So policies can be looking at all these things and trying to optimize. And then the meta-policies can explore different policies. 
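The claim that community structure lives in the eigenvectors with eigenvalues near 1 can be seen in a minimal example (the matrix below is invented for illustration): two two-state communities with a small leak between them give a lambda-2 close to 1, and the signs of the matching eigenvector recover the communities.

```python
import numpy as np

# Two tightly-knit two-state communities with a small 0.01 leak
# between them; the full 4-state chain is still ergodic.
inner = np.full((2, 2), 0.49)
leak = np.full((2, 2), 0.01)
P = np.block([[inner, leak], [leak, inner]])
assert np.allclose(P.sum(axis=1), 1.0)

vals, vecs = np.linalg.eig(P.T)
order = np.argsort(-np.abs(vals))
lam2 = np.real(vals[order[1]])     # second-largest eigenvalue
v2 = np.real(vecs[:, order[1]])    # its eigenvector

# Slow mixing between communities: lambda_2 is close to 1, and the
# eigenvector's signs split the states into the two communities.
assert lam2 > 0.9
assert np.sign(v2[0]) == np.sign(v2[1]) != np.sign(v2[2])
```

This is the basic mechanism behind spectral clustering: near-1 eigenvalues count the slow, between-community modes, while the fast within-community mixing lives in the small eigenvalues.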
We can have meta-policies and meta-meta-policies exploring at different rates. So this starts to give us a recursive notion of agency. And in some sense, the reason I'm bringing this up is here is a framework of mathematical tools that's incredibly simple. There's one definition, the trace. That's the only mathematics there. And then there's one observation, the trace logic. And then the third observation is it's recursive. That's it. And then all the tools, that's it, and all the tools are at your disposal. So meta-policies can explore different policies, and the deeper the recursion that we go, in terms of making deeper and deeper trace logics, we get deeper and deeper notions of agency. So we can actually explore just policies for our simplest notion of agency, and then go to meta-policies to discover deeper notions of agency and so forth. So we can take it one baby step at a time. Now, in terms of how this relates to notions of Markov blankets and the self versus the world, Markov blankets, as you well know, are strictly speaking defined for directed acyclic graphs. And there, they define a boundary between self and the world. And I want to upgrade these notions to Markov chains, right? So the idea of the upgrade is, Markov chains are graphs, but they're not acyclic. They allow cycles. So this is one upgrade. We're upgrading from acyclic graphs to cyclic graphs, and then we're upgrading to labeled cyclic graphs, namely labeled by the transition probabilities. So that's what I mean by upgrade. We're going beyond directed acyclic graphs to something that's far more general. So we have to... So we want to move from the standard notion of Markov blanket to what I would call a trace blanket. And here now, we have to actually construct the self and the world. And we need to, the way we will do that is, and by the way, now, you know, I'm just saying at top level, we have to do a lot of hard work here. But it's going to be policies and meta-policies and what they do. 
And certain experiences, like experiences of pleasure and pain, will be part of the experiences agents have. And to the extent that certain actions lead to a higher stationary measure for the pleasure states, they'll be sought; and to the extent they lead to higher stationary measures for the pain states, they will be avoided. So the idea will be that there'll be pleasure and pain guides, but there will also be, I'm thinking, policies. What policies do is they say: given that I'm looking through this particular observer window, what's the probability that I'll now look through that window over there, or that window over there?</p><p><strong>[35:08] Chris Fields:</strong> Can I interrupt with a question? A couple of sentences ago was the first time you used the word action. Is an action in this framework just a change in policy?</p><p><strong>[35:23] Donald Hoffman:</strong> It is, but it's not just a change in policy. A policy itself gives you an action on observer windows, because your action is to change observer windows.</p><p><strong>[35:37] Chris Fields:</strong> Okay.</p><p><strong>[35:38] Donald Hoffman:</strong> A meta-policy gives you a higher level of action because you're now changing policies, right? And then a meta-meta-policy would be an even higher level of action because you're changing your meta-policy.</p><p><strong>[35:51] Chris Fields:</strong> Actions are all either changing what you're looking at or changing how you decide what you're looking at.</p><p><strong>[36:01] Donald Hoffman:</strong> That's right. Recursively.</p><p><strong>[36:03] Chris Fields:</strong> Great.</p><p><strong>Donald Hoffman:</strong> That's a recursive notion of action now. Right. So now, and I'm just thinking through this last bit, it seems like some policies, for example, and here I'm looking at just the smallest level of action. 
Some policies will have certain things that always appear in your observer window. So, for example, in my observer windows, my hands and my body often appear, whereas other things that I call the external world don't appear that often. And I also notice that I seem to be able to directly control my hands and my body. But if I want to have my phone move, I need to move my hand so that I can pick up the phone to move the phone. So what I am going to say is, in the Markov blanket approach, the Markov blanket has a clean definition: give me a set of nodes, and their blanket is the parents of the nodes, the offspring of the nodes, and the parents of the offspring of the nodes. End of story. That is your blanket, that's your skin. That's your boundary between you and the world. Here, it's much more complicated. Now, I have to use the notion of agency in a non-trivial fashion, and learn probabilistically what features of the sequence of observer windows I'm having remain there most of the time. My hands are there most of the time. And certain actions with my hands are associated with pleasure signals, others with pain signals. So I'm learning to do certain things with my hands and not to stick them in the fire, things like that. Other things are much more contingent. So I can use probabilities of what I'm seeing in my observer windows as a way of starting to construct myself versus the outside world, plus the pleasure and pain guides.</p><p><strong>[38:03] Chris Fields:</strong> Don, can I ask another question? Sure. You talked about actions with your hands. What does that mean in terms of changing what you're looking at? 
Since the only action is changing what you're looking at, what does it mean to control what your hands are doing?</p><p><strong>[38:20] Donald Hoffman:</strong> Right, so that's a great question, Chris, because what that means is: I have an observer window where my hand is touching my ear. Now I want an observer window in which my hand is touching my leg. And so I transition to that observer window. So what's happening is I'm choosing what I want to see in my movie next, and that's what we call moving my hands. You have to really think out of the box now. It's really a choice of what I want to see next, and that's what the actions are.</p><p><strong>[38:58] Chris Fields:</strong> Okay, okay, great.</p><p><strong>[39:00] Donald Hoffman:</strong> It's very austere. What I love about it is it's austere. There's only one equation and one logic, and so you have very, very tight guides, and yet the claim is we should be able to get everything out of it. But that's what I love: a theory that forces you to do it in a principled way. Now, Bayesian inference, we can talk about it more if you want, but I'll just mention briefly that Bayes' rule falls out of the meet of the trace logic. And we can go into how that's the case. It's beautiful and non-trivial, but Bayesian inference is effectively a special case of the meet of the trace logic. And if you want, we can go into that. And you guys talk about bending the option space, and I want to say that, yeah, I'm taking the notion of space seriously. Of course, it was metaphorical when you talked about bending the option space, but there is a real sense in which I want to get space and space-time itself. And what I'm working on quite heavily, with a couple of others, is this: I believe that we can actually boot up special and general relativity entirely from the trace logic. 
And so that's the claim: that relativistic space-time can be constructed entirely from the trace logic, and this would then fulfill John Wheeler's goal, that starting with only observer participants, we can build up all of space-time physics. And that's the goal of where we're headed. And I'll just give you, this will be the last thing I do, and then we can have a conversation about it, a hint about how that would happen. It's standard in Markov chain theory to have what are called enhanced Markov chains. So you have a Markov chain, but you also have a counter. Every time your experience flips, every time you change experience, your counter increments. So here I've got a case where I've got the four-color agent, and then there's the sub-agent of just red and green. And notice that there's a counter for the red and green, and there's a counter for all four. And notice the counter on the left is going much faster than the counter on the right, because it's seeing more experiences. So the counters for sub-agents, or I'm sorry, sub-windows, sub-observers, are going at a slower rate than the ones above them. So if I'm less than you in the trace logic, my time counter is going slower than yours. So the trace logic is also giving you a relationship among counters, and that, we claim, is the time dilation of special relativity and general relativity. That's where it comes from. So it's all about observer windows and their counters. And it turns out that distances can also be derived. And it turns out that the distances you will get in the trace window are different than the distances you'll get in the bigger window. And so this is where we're hoping to get general relativity coming out of this. There are notions of essentially something like the commute time between states, and similar notions. The commute time, I'll just give you that concretely. 
It's the expected time of starting at green, getting to blue, and then back to green. What's the expected number of steps? Starting at green, I'll get to blue and then back to green. And it turns out that expected time can be viewed as the square of a Euclidean distance. So there are canonical ways of getting Euclidean distances from commute time properties and others. There are Dirichlet measures which are even more to the point, but more complex; I won't go into them. But the idea is that there are ways of going from the trace logic to, effectively, the time dilation and length contraction of special and general relativity. So time runs slower on the trace, in the gaps between ticks. So I'll just leave it at that. I think that's enough for us; I'll stop the share so we can talk about it. But I just wanted to give you guys a feel, and I can send you some papers on this, but I wanted us to have a little time to talk, because we haven't solved the agency thing. What we've got is a language now that's principled for talking about agency.</p><p><strong>[43:45] Michael Levin:</strong> Thanks very much, Don. That was amazing. Question, a kind of general question. What do you make of the fact that you're apparently pulling out descriptions of physics and descriptions of agency out of the same starting material? Does that surprise you?</p><p><strong>[44:03] Donald Hoffman:</strong> I think, well, something I've been saying for quite a while is that space-time's just a headset. And we're effectively saying we can build a headset. Space-time is not a reality that's independent of us. Our typical view is, Hoffman is this tiny little 160-pound thing inside a massive, massive space-time universe. And I'm saying, no, what we call Hoffman is just an avatar inside a space-time headset that's being created by consciousness. And the proof of the pudding is: can we build the headset? 
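The commute time Hoffman defines a moment earlier (expected steps from green to blue and back) can be computed exactly by solving the standard first-hitting-time equations. A small sketch, with states and transition probabilities invented here for illustration:

```python
# Illustrative sketch: the chain below is made up, not from the talk.
import numpy as np

def hitting_time(P, target):
    """Expected number of steps to first reach `target`, from every state.
    Solves h(i) = 1 + sum_k P[i,k] h(k), with h(target) = 0."""
    n = P.shape[0]
    idx = [s for s in range(n) if s != target]
    Q = P[np.ix_(idx, idx)]                     # dynamics among non-target states
    t = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    h = np.zeros(n)
    h[idx] = t
    return h

def commute_time(P, i, j):
    """Expected steps to go i -> j -> i (the quantity likened to distance^2)."""
    return hitting_time(P, j)[i] + hitting_time(P, i)[j]

GREEN, RED, BLUE = 0, 1, 2
P = np.array([[0.0, 0.5, 0.5],                  # a symmetric 3-state walk
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(commute_time(P, GREEN, BLUE))             # 4.0 for this chain
```

For reversible chains, commute time is proportional to effective resistance, which is one canonical route from a Markov chain to a Euclidean, squared-distance geometry of the kind mentioned here.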
So the idea is that, for this approach to go through, we have to be able to show that we can get special relativity, no hand-waving, just from the trace logic, and also general relativity and quantum theory. We have to be able to show that we can get entanglement and all of this stuff simply from the trace logic of Markov chains. Now, one objection someone might have is to say: look, in quantum theory we have unitary matrices. What are you dealing with? You just have Markov matrices. You don't have these nice unitary matrices. So how are you going to do that? And the idea is: most Markov matrices are not unitary, but there are some that are. The ones that are unitary are a measure-zero subset of the Markov matrices. And when you look at the long-term behavior of a Markov matrix, the asymptotic behavior, it turns out, and this is now when we go to those enhanced Markov chains, and this is work that Chetan and I did back in 2014, Chetan discovered that the eigenfunctions of the enhanced Markov chains are identical in form to the quantum wave functions of free particles. Identical. So the idea is going to be that quantum theory arises as an asymptotic description of a Markov dynamics. The Markov dynamics gives you a step-by-step analysis of agency and consciousness; quantum theory only gives you the asymptotic behavior, not the step-by-step behavior. So that's going to be the connection. Again, this is all a matter of theorem and proof. Either we're right or we're wrong. It's theorem and proof, or theorem and disproof. Now, one might say, well, you have the no-cloning theorem in quantum theory. What about that in Markov chains? And it turns out, if you look carefully at the no-cloning theorem, the proof of it does not require unitarity; it only requires linearity. Markov chains are linear, and they have their own no-cloning theorem. So I see no obstruction right now. 
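A quick check of the measure-zero aside, as I understand it (the example matrices are mine, not from the talk): among real stochastic matrices, the ones that are also unitary are exactly the permutation matrices, so a generic Markov matrix fails unitarity.

```python
# Illustrative sketch; the matrices below are invented examples.
import numpy as np

def is_stochastic(M):
    """Row-stochastic: nonnegative entries, every row summing to 1."""
    return bool(np.all(M >= 0) and np.allclose(M.sum(axis=1), 1.0))

def is_unitary(M):
    """Real case: M @ M.T equals the identity."""
    return bool(np.allclose(M @ M.T, np.eye(M.shape[0])))

perm = np.array([[0., 1., 0.],       # cyclic permutation of 3 states:
                 [0., 0., 1.],       # a Markov matrix that IS unitary
                 [1., 0., 0.]])

lazy = np.full((3, 3), 1 / 3)        # generic Markov matrix: NOT unitary

print(is_stochastic(perm), is_unitary(perm))   # True True
print(is_stochastic(lazy), is_unitary(lazy))   # True False
```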
We just have to do the hard work. But I see we have a principled notion of agency, and it shows us how the nested community structure can give us nested goals and nested bending of problem spaces. And we can not only talk metaphorically about bending the problem spaces; I think we'll be able to show that we can actually have real curved space-time representations of bending, general relativistic descriptions of bending.</p><p><strong>[48:01] Robert Chis-Ciure:</strong> Mike, may I share? You remember we were taking this project and embedding it into the variational free energy principle and all that. So may I share the screen now to show Don and Chetan what we already have? I mean, mind you all, this is work from one year ago. I still didn't get to develop it in full. It's, let's say, maybe 80% done. So on the left, you see this book. This will come soon. It's Karl's book on The Free Energy Principle and the Nature of Things. It's a big monograph of the latest version, I suspect. So we didn't push this paper, because I also wanted to have access to the latest form of this before we would push. The synthesis paper only name-dropped variational free energy and the decomposable way in which you can assess intelligence and true scale-free quantification and recursive decomposition in that sense. So in this project, we try to do it within the free energy principle framework, so within the variational framework, right? We take all these problem space operators and embed them into a variational physics description. 
And we also end up on some of the things that you, Don, and Chetan mentioned, like, for example, the issue of renormalization, and then getting ways in which you can decompose and quantify, across scales, additive gains in efficiency at different scales, and then do it globally for the whole of the system, depending on how efficiency gains are cashed out at different levels. So that can be embedded in the variational logic of the free energy principle, for sure. And using the more pedestrian route, in the sense of the NESS assumption and the Helmholtz decomposition, and building on that, building a minimal Landauer-style floor of cost per unit operator and efficiency gains. So this is not done. But what I want to say is that, seeing you present this just now, it is clear to me that you provide a way finer algebraic structure for us to probe even deeper into this decomposition, and then look at what you mentioned, the community structure and all these sorts of effects you would get. What would appear to us, at the scale and metric of observation, as an efficiency gain might come from innumerable ways of tiling the problem space, or bending the problem space, or just, in your terms, changing the observation frame by communities, with certain communities having different policies, and then meta-policies based on the higher-order aggregates, and so on. So I think it's definitely valuable, even if it's something that's built after this is pushed, because this will be finished quite soon. Right. It's definitely worth looking into it. And the connection with physics is certainly impressive, for sure. That's the hard work. This is less hard.</p><p><strong>[51:41] Donald Hoffman:</strong> I would certainly welcome any interactions you guys want to have on this once your current projects get to a certain place. 
Because I think, as I listened to your work, I realized our ideas are really converging quite nicely here. And I think that there's a synergy. The nice thing about the Markov stuff is that it's so well studied. You just look at the eigen-analysis of these matrices to get a lot of this stuff. So there are lots of papers out there about the community structure and so forth. We would just have to do our homework and understand it; a lot of that stuff we could then just port in here. And it's really quite well understood. The only thing they didn't have was the trace logic and the fact that it can recurse. That's what they were missing to pull this whole picture together.</p><p><strong>[52:30] Robert Chis-Ciure:</strong> Interesting, interesting. Yeah.</p><p><strong>[52:32] Michael Levin:</strong> What do you think would happen, or maybe you've already done it, but to apply some of the causal emergence metrics to the dynamics of these things, or some of the stuff that you guys do, Robert, or some of the more conventional stuff that we have. Phi and all that kind of stuff, have you done that at all?</p><p><strong>[52:56] Robert Chis-Ciure:</strong> You mean Don?</p><p><strong>[52:58] Michael Levin:</strong> Don, on his stuff. And if they haven't done it yet, then I'm going to say maybe we should.</p><p><strong>[53:03] Donald Hoffman:</strong> So say a little bit more about the causal emergence question. I want to make sure I understand it.</p><p><strong>[53:09] Michael Levin:</strong> Well, Robert's the better person to speak to it, but there are a variety of newish metrics in information theory that basically try to quantify the important aspects of agency, right? The extent to which the whole is, in some causal sense, more than its parts, phi, all that stuff.</p><p><strong>[53:31] Donald Hoffman:</strong> Right, that kind of thing, exactly right.</p><p><strong>[53:33] Michael Levin:</strong> That's what I'm getting at. 
Have you tried any of those metrics on the dynamical path of these things?</p><p><strong>[53:39] Donald Hoffman:</strong> Well, a lot of that work, the motivation behind it, Tononi and Koch and so forth, has been to somehow have consciousness be a function of the amount of causal emergence, right? To the extent that phi picks out the system with the greatest causal measure. And those are very, very useful. I think they have nothing to do with consciousness.</p><p><strong>[54:12] Michael Levin:</strong> I'm not making any claims about consciousness. I'm just asking, just step one of getting the measurements and seeing what's going on as far as...</p><p><strong>[54:21] Donald Hoffman:</strong> Oh, sure. As long as there are no claims about consciousness, I'm all for it. I think that stuff is really good work, absolutely. What I think is bogus is saying that consciousness has something to do with it. But we haven't done that ourselves. So the answer is we haven't gone there yet.</p><p><strong>[54:43] Michael Levin:</strong> It may be interesting to do, just to get some data and do some measurements. I mean, we've been doing it on gene regulatory networks and all sorts of weird things, and there's some really interesting stuff. We haven't said a word about consciousness with respect to that, but the data alone, I think, are already interesting, whatever the interpretation.</p><p><strong>[55:03] Donald Hoffman:</strong> But your data has really inspired me the last few months. Your work, your whole team, has really inspired me. It really forced me to think out of the box about what this thing can do. So thank you. 
I mean, it's been really quite fun.</p><p><strong>[55:17] Michael Levin:</strong> Thanks.</p><p><strong>[55:18] Donald Hoffman:</strong> Your podcast with Lex Fridman, I've listened to it five times or something like that.</p><p><strong>[55:25] Robert Chis-Ciure:</strong> Don, on this topic specifically, do you think you could use the inverse trace as a sort of hypothesis generator for these kinds of experiments and this kind of data? Because, I mean, in the paper at least, you do the construction mostly forward, but you do have a calculus and an algebra for doing it backwards. So you have, let's say, an observed effective kernel P sub A on the visible states A, and then the trace chain theorem says that any consistent extension P with the hidden states, A prime, was it? Yeah, A prime, must satisfy the theorem. So then the (A, B, C, D) tuples are a sort of parameterized hypothesis space about hidden mechanisms. And then it just becomes an inverse problem, right? It's just that. So if that's the case, and we have a lot of data; in our synthesis paper we use the planarian regeneration example, and we don't use the data in that literature, but there is also way more data on GRNs and stuff like that. So do you think we could have a sort of model in the inverse trace that would distinguish between different hypotheses that best explain the data that we see?</p><p><strong>[56:47] Donald Hoffman:</strong> Yes, in the following sense. The trace logic, because it's a logic, has the notion of not only the meet, but also the join.</p><p><strong>[56:55] Robert Chis-Ciure:</strong> The join, yeah.</p><p><strong>[56:56] Donald Hoffman:</strong> So I can take two matrices, and if they are compatible, if they're, for example, part of a Boolean sub-logic, then they have a join. Now, Chetan has done the hard work of getting a closed-form solution for very special cases. 
And it turns out that getting a general closed-form solution is an open problem. And the interesting thing is, so there can be a join; the join is the least upper bound, right, between two. But you could also have, in some cases, Chetan has pointed out, a whole, say, one-parameter family of minimal upper bounds. So it's going to be very, very interesting. In the trace logic, for certain matrices, there may not be a unique least upper bound. There could be a family of minimal upper bounds, and then we may be able to use other factors, other criteria for what we want, to choose one. Maybe we want something with some kind of minimized complexity, or maximum complexity, or greatest causal structure, or something like that. So there are all sorts of things that we could do. So we don't know if there is a general closed-form formula for computing the join. Chetan has it for a special case. It's a fair bet that there is not one. It's an interesting open mathematical problem to study the join of this thing. And I can give you the unpublished paper we have so far, where Chetan's stuff is, and you can see what we've got. And it's open as to how to generalize that, or to prove that it cannot be generalized.</p><p><strong>[59:05] Robert Chis-Ciure:</strong> It's super interesting. I would definitely love to see those papers. And I read some of your stuff, and also the latest on traces of consciousness. I went through most of it very...</p><p><strong>[59:18] Donald Hoffman:</strong> You've seen the traces of consciousness paper, right? So it's in the appendix at the back of that paper that you already have. You'll see Chetan's work on what we have so far on the join.</p><p><strong>[59:31] Unknown:</strong> Yeah, work in progress.</p><p><strong>[59:33] Robert Chis-Ciure:</strong> I know of your work because one of my best friends is Robert Prentner. 
So we talk a lot. We just submitted a paper on IIT and conversations theory.</p><p><strong>[59:44] Donald Hoffman:</strong> Oh, yes, right, right.</p><p><strong>Robert Chis-Ciure:</strong> One month ago. Yeah.</p><p><strong>[59:49] Donald Hoffman:</strong> Yeah, of course. I know Robert quite well. Yeah.</p><p><strong>[59:52] Chris Fields:</strong> So I have a question, Don, about the definition of the trace. The trace of any Markov process is also a Markov process, right?</p><p><strong>[1:00:06] Donald Hoffman:</strong> Yes.</p><p><strong>[1:00:07] Chris Fields:</strong> Do you have an available model of observations, or sequences of observations, that are not Markov? Sequences of observations that violate constant probability of switching from one state to the next, so that the probability is not well defined?</p><p><strong>[1:00:39] Donald Hoffman:</strong> Right, it depends on your time window and how you want to coarse-grain the states, right? So one could say: look, there are many, many systems where it's not the case that, for the states I've given you, the probability of the next state can be given exactly based on the current state alone. You might need to look at three states or five states or ten states, a bigger window, to get the probability. But in those cases, you can always create new states and make it Markov.</p><p><strong>[1:01:19] Unknown:</strong> So basically, you expand the state space; you actually multiply it. There can be a combinatorial explosion, though. You do need to be aware of that possibility. And this only works for finite memory. It doesn't work for infinite memory. 
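The state-expansion trick just described can be made concrete. Below is a sketch (the order-2 transition table is made up for illustration): a process whose next symbol depends on the last two symbols is not Markov on {A, B}, but it becomes a genuine Markov chain on ordered pairs, at the cost of multiplying the state space.

```python
# Illustrative sketch; the order-2 probabilities here are invented.
import numpy as np

symbols = ["A", "B"]
# P(next symbol | last TWO symbols): not expressible as a 2x2 Markov chain.
order2 = {("A", "A"): [0.9, 0.1],
          ("A", "B"): [0.2, 0.8],
          ("B", "A"): [0.5, 0.5],
          ("B", "B"): [0.1, 0.9]}

# Expanded state space: ordered pairs. Pair (x, y) emitting z moves to (y, z).
pairs = [(x, y) for x in symbols for y in symbols]
P = np.zeros((len(pairs), len(pairs)))
for i, (x, y) in enumerate(pairs):
    for k, z in enumerate(symbols):
        P[i, pairs.index((y, z))] = order2[(x, y)][k]

print(np.allclose(P.sum(axis=1), 1.0))   # True: a valid 4x4 Markov chain
# Cost: order-m memory over s symbols needs s**m expanded states,
# the combinatorial explosion mentioned above. Finite memory only.
```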
But even for finite memory, you do have to be a little careful that the combinatorics can get nuts, which seems to often be the problem in consciousness research.</p><p><strong>[1:01:55] Chris Fields:</strong> I'm, of course, mainly interested in issues like contextuality in quantum theory, where you have groups of observations for which joint probabilities can't be defined, and so you can't build a single self-consistent hidden variable theory. So I don't know whether the formalism will handle that sort of situation or not, since you do seem to always be assuming well-defined probability distributions.</p><p><strong>[1:02:46] Donald Hoffman:</strong> Can you say more about the system that doesn't work, Chris?</p><p><strong>[1:02:54] Chris Fields:</strong> Well, contextuality is defined as a phenomenon: sets of observations for which a joint probability distribution is undefinable, for which the statistics violate the Kolmogorov axioms.</p><p><strong>[1:03:24] Unknown:</strong> I suspect that the fact that joins don't always work might have something to do with that.</p><p><strong>Donald Hoffman:</strong> That's what I was thinking too. Yeah, that's what I originally thought your question was about: having the same probabilities every time. And of course, you don't need that with Markov chains; they don't have to be homogeneous. But that's not your question. It's much more abstract than that. My hope would be to somehow find sub-logics which actually look like quantum logics. Right. If that happens, then we could possibly answer your question in the affirmative. And that would be one way to do it.</p><p><strong>[1:04:12] Chris Fields:</strong> So another way to think of it in your formalism might be if there are paths, yeah, this would be where it would look like a quantum logic. 
If you have paths in the trace logic network that don't commute.</p><p><strong>[1:04:27] Donald Hoffman:</strong> Oh, easily.</p><p><strong>[1:04:32] Chris Fields:</strong> That may be a way of approaching this join question, then.</p><p><strong>[1:04:38] Donald Hoffman:</strong> Yeah, that case has no join.</p><p><strong>[1:04:40] Chris Fields:</strong> To get back to your comment about unitarity at the very beginning: unitarity is really just conservation of information. And Kolmogorov probability is really just conservation of information. So if you don't have situations in which information is actually lost in some global sense, informational singularities, if you will, then the system satisfies unitarity as it's used as an axiom within information theory, which is just conservation of information. So this is where I was trying to emphasize the complete dissociation of unitarity from any spatial considerations. But anyway, that's sort of an aside. The real question is about contextuality.</p><p><strong>[1:06:00] Donald Hoffman:</strong> It is striking that Nima Arkani-Hamed and these high-energy theoretical physicists are strident that they're not assuming unitarity. They're saying we don't need it, and they'll show that it arises from these positive geometries that are entirely outside of space-time. So they're getting space-time and unitarity together.</p><p><strong>[1:06:28] Unknown:</strong> I've never seen a demonstration of that fact in itself. What I have seen is they derive scattering amplitudes which match what's understood from the Feynman approach. That doesn't mean you've derived space-time, and it doesn't mean you've derived unitarity. 
It just means that you've matched something.</p><p><strong>[1:06:56] Chris Fields:</strong> Yeah, I think they're referring to...</p><p><strong>[1:06:57] Unknown:</strong> The principle is absent.</p><p><strong>[1:07:00] Chris Fields:</strong> Yeah.</p><p><strong>Unknown:</strong> Sorry.</p><p><strong>[1:07:01] Chris Fields:</strong> I think they're referring to unitary processes in space-time.</p><p><strong>[1:07:06] Unknown:</strong> Right.</p><p><strong>[1:07:07] Donald Hoffman:</strong> That's what they're referring to. Absolutely.</p><p><strong>[1:07:10] Chris Fields:</strong> That's very different from unitarity as a strictly information-theoretic concept.</p><p><strong>[1:07:18] Donald Hoffman:</strong> And yet the way they wave it around and say, we don't assume space-time or unitarity, right?</p><p><strong>[1:07:24] Chris Fields:</strong> It's just a disconnect in language, I think.</p><p><strong>[1:07:26] Unknown:</strong> I mean, it's fine not assuming a collection B when you're doing a collection A. But if somebody like Dyson comes around and says they're equivalent, then you can't say that you don't need space-time; it's just another way of looking at it. So their claim is unfounded as far as I know. I mean, somebody needs to sit down and say, this is how space-time emerges from the amplituhedron. Otherwise, it's like saying Schwinger could say I don't need to assume Feynman, and Feynman could say I don't need to assume Schwinger, and they're both right, because the two pictures are equivalent.</p><p><strong>[1:08:11] Donald Hoffman:</strong> Right now they don't give you space-time; they give you scattering amplitudes. That's what they give you.</p><p><strong>[1:08:15] Unknown:</strong> Which I think are defined in space-time. Fair enough. 
It was on a facet of something.</p><p><strong>[1:08:28] Chris Fields:</strong> I suspect that eventually we'll generally be able to identify amplituhedron-like structures with quantum error-correcting codes.</p><p><strong>[1:08:48] Unknown:</strong> You wrote a paper on that, didn't you, Chris?</p><p><strong>[1:08:50] Chris Fields:</strong> Well, we have a preprint of it that was revised as of a few months ago, and we're still working on it. But the current available preprint isn't bad. The hypothesis would be that we can go the other way, from amplituhedron-like structures to quantum error-correcting codes. And there are many ways to get space-time from quantum error-correcting codes. So.</p><p><strong>[1:09:28] Donald Hoffman:</strong> Interesting.</p><p><strong>[1:09:30] Chris Fields:</strong> It may be that the inference ends up going in that direction.</p><p><strong>[1:09:41] Donald Hoffman:</strong> What I'm hoping to be able to show is that some of these positive geometries, like the associahedron, are sub-polytopes of the Markov polytope. Because the Markov polytope could be describing the probabilities of certain interactive processes that we would think of as scattering processes. 
If that's the case, then there may be a deep connection between some of these positive geometries and the Markov polytope, which is itself a positive geometry.</p><p><strong>[1:10:14] Chris Fields:</strong> Yeah.</p><p><strong>Donald Hoffman:</strong> The set of all possible Markov chains is a positive geometry.</p><p><strong>[1:10:18] Chris Fields:</strong> I suspect that any such structure defined over any space of possibilities, or any such dynamics defined over any space of possibilities, can be thought of as scattering in some metaphorical, but formally sensible, way.</p><p><strong>[1:10:52] Unknown:</strong> Yeah.</p><p><strong>[1:10:53] Chris Fields:</strong> I mean, we can think of computation as scattering in interaction space.</p><p><strong>[1:11:00] Unknown:</strong> Interaction is scattering.</p><p><strong>[1:11:03] Chris Fields:</strong> Yeah.</p><p><strong>[1:11:04] Unknown:</strong> I mean, scattering is how interactions look in a physics lab.</p><p><strong>[1:11:12] Donald Hoffman:</strong> What's surprising is how restricted the scattering events are that you find in physics, right? A very restricted set, and it turns out that something like the standard model gives you all the components that you're ever going to find in any scattering that you ever do.</p><p><strong>[1:11:30] Unknown:</strong> We hope.</p><p><strong>[1:11:31] Donald Hoffman:</strong> So far. That's so far.</p><p><strong>[1:11:34] Chris Fields:</strong> Well, all the ones that you see with the sorts of things that we call elementary particles.</p><p><strong>[1:11:42] Donald Hoffman:</strong> What I find interesting is that just as with the Ptolemaic system, where we had epicycles upon epicycles, you could get all the orbits of the planets, but it was ugly and just a mess, because you had to add all these epicycles and correcting epicycles to correct those epicycles, and so forth. And the same thing happens with space-time and scattering. 
When you look at the Feynman diagrams, it's loop after loop, and you have three or four particle interactions and 500 pages of algebra because you have all these Ptolemaic loop after loop after loop where you're enforcing locality and unitarity. So Feynman is forcing locality and unitarity, and so we have to do all this stuff. And all of a sudden, 400 pages of algebra turns into two terms when you let go of space-time. And also, it feels, again, like we've got this Rube Goldberg machine called space-time. And that's why things look so ugly in space-time and the mathematics. And all of a sudden, we're seeing some hint. I mean, it was a big hint when we went from Ptolemy to Newton. That's, you know, all of a sudden the formulas got a lot simpler. We were on to something much deeper here than Ptolemy was on to. And now when we go from, you know, Feynman scattering diagrams to these positive geometries, once again, we're getting 400 pages of algebra down to two terms. A clear hint that we're on to something deeper beyond space-time.</p><p><strong>[1:13:08] Chris Fields:</strong> No, space-time's a kluge. I think that's clear.</p><p><strong>[1:13:14] Donald Hoffman:</strong> What's interesting is that theories of consciousness, all the main theories of consciousness assume otherwise. We start with space-time. We try to figure out what physical systems in space-time could possibly have the right structure to give rise to consciousness. So all of our theories start with the kluge as the assumption and then try to go from there.
So they're doomed, completely doomed to failure.</p><p><strong>[1:13:42] Chris Fields:</strong> Well, I mean, all of science has done this before about, what, 1970?</p><p><strong>[1:13:48] Donald Hoffman:</strong> You're talking about the standard model since about 1970.</p><p><strong>[1:13:57] Chris Fields:</strong> Well, no, I'm roughly dating at least the first things I saw from Wheeler with the notion of observer participants in it.</p><p><strong>[1:14:15] Donald Hoffman:</strong> In the 70s, right? He saw it. He knew, I mean, he wrote the book on space-time. He wrote the book on gravity. Misner, Thorne, and Wheeler. That is the Bible. And he knew space-time and he knew it was a clue. He was looking for something entirely beyond.</p><p><strong>[1:14:35] Chris Fields:</strong> Yeah.</p><p><strong>[1:14:37] Donald Hoffman:</strong> And he was going to call it observer participants. That's what he called it.</p><p><strong>[1:14:43] Chris Fields:</strong> May I ask something? Yeah.</p><p><strong>[1:14:44] Robert Chis-Ciure:</strong> Go ahead.</p><p><strong>[1:14:46] Chris Fields:</strong> I was just going to say good to see you guys again. Good to meet you, Robert.</p><p><strong>[1:14:50] Robert Chis-Ciure:</strong> Good to meet you, Chris. And I think we will meet in person in Spain, if I'm not mistaken, in July.</p><p><strong>[1:14:55] Chris Fields:</strong> Yes, hopefully. That sounds very exciting.</p><p><strong>[1:14:58] Robert Chis-Ciure:</strong> I told them to invite you specifically. Thank you. What is happening in Spain? There will be a workshop organized by some people, the Takena Foundation, and it's a workshop on the known unknowns in our fields of interest. And there will be, yeah, more than one field. So that's why Fields is excellent. And yeah, we'll discuss several things. But Don, I wanted to say that I think this is the biggest myopia in consciousness science. And I work in Anil Seth's lab. I did my PhD with Giulio Tononi.
I did a postdoc with David Chalmers. I work with Georg Northoff on temporo-spatial theory. I know all the people, and all of them are sort of either physicalists or they still think that consciousness is something you squeeze with enough dexterity at the end of a long tube that you operationally construct. And they really don't get the point that consciousness is the starting point, and everything else must come afterwards. You build everything else from consciousness, not the other way. It's hard. I know the pain.</p><p><strong>[1:16:29] Unknown:</strong> Do you see the platonic space as having something to do with consciousness in that sense, or being embedded within it?</p><p><strong>[1:16:42] Robert Chis-Ciure:</strong> That's Mike's answer.</p><p><strong>[1:16:45] Unknown:</strong> Yeah, that's why I'm asking Mike.</p><p><strong>[1:16:48] Michael Levin:</strong> Okay, so I'm not fundamentally a consciousness researcher. I don't have any strong claims on this yet. But if I had to say right now, I would say that I don't think that from the point of the Platonic space formalism that I'm investigating, I don't think that we are beings that occasionally get visited by Platonic patterns that are something else. I think we are the patterns. And I think these patterns are a wide range of static, dynamic, low agency, high agency things. And I think what we call consciousness is the perspective from the Platonic space outwards into the physical world. So what a pattern from the Platonic space experiences when it interacts with the physical world is what we tend to call consciousness. Now, I'm not sure that, like, I don't think that exhausts all the possibilities. I think there are probably lateral interactions within that world. I suspect that when mathematicians think about abstract mathematical objects, what happens is you have two, there are basically two different patterns in residence there, the human one and whatever it is that they're studying.
So there may be lateral interactions that don't even require the physical world per se. But basically what I think we mean when we talk about consciousness is what it's like to be a Platonic pattern projecting into a physical world through some interface. So that might be, you know, the sense organs that we have or something completely different and so on.</p><p><strong>[1:18:14] Donald Hoffman:</strong> So the Platonic patterns transcend our experiential notion of consciousness.</p><p><strong>[1:18:21] Michael Levin:</strong> In the sense that, say more. What do you mean by that?</p><p><strong>[1:18:26] Donald Hoffman:</strong> But there's a Platonic realm that maybe we shouldn't describe as conscious, but we perceive, but a human perspective on it is we experience it as consciousness looking at it somehow. So consciousness is just perspective.</p><p><strong>[1:18:42] Michael Levin:</strong> I ultimately think some version of idealism is probably the more accurate thing. I do think that consciousness is fundamental, but I don't know what to do with that on a practical level right now. I don't have a way of making use of that in the lab or anything like that. What I see is a way to make progress with a somewhat more dualistic version, which I agree that it would be nice to have some kind of simple monism, but what I see right now is that we have different but interacting realms, so to speak, and that model helps us to do new experiments and make new discoveries. So for now, I do think that they're sort of two separate things. But ultimately, if I had to guess, I would say that consciousness is primary. 
I don't know how to do that reduction right now, so I'm sticking with two because that's what we can handle right now.</p><p><strong>[1:19:41] Donald Hoffman:</strong> One of our goals is to show that we can actually get space-time, special and general relativity, and quantum theory from this theory of Markov chains of consciousness, in which case then we could inherit all the work that's been done in physics, but see it arising from a consciousness-first point of view. So that's where I'm headed.</p><p><strong>[1:20:04] Michael Levin:</strong> I mean, for us, one of the important pieces of our research program is to understand the mapping between the interfaces that we construct, and those are anything from sorting algorithms through cyborgs, through Xenobots, embryos, you name it, all of this stuff, to understand what are the properties of these things that facilitate the ingression of specific patterns from the space. So, what is it about this thing you've made that pulls down this rather than that? And then that allows different degrees and different kinds of mind states to interact through it and also to quantify. And so we've got some wild stuff that's kind of ripening in the next few months that I'd love to run by all of you. We should all have another meeting so you can see. But we've been trying to quantify one of the things that I think is really important about this Platonic space thing is that it's not just a redescription, and a bunch of sort of useless philosophy that you sprinkle on top of things that work perfectly well. It actually makes very new predictions in the sense that it suggests that what you're getting from that space is things that using conventional theories that we have now of doing the accounting of effort, of computational cost, of physical cost, these kinds of things, that you get free lunches, or at least heavily discounted lunches. You get more than you put in. Our accounting isn't adding up everything that you get.
We're missing something very significant. And we now have the ability to quantify how much are we getting, how much free memory, free compute, free whatever. We can, at least in simple toy, sort of minimal, models, quantify that. In biology, you can see it, but it's hard to quantify or prove anything. It's just too complicated. Whereas in these minimal models, we can actually quantify what did we get that we didn't pay for according to the conventional way of totaling up effort. And so that's really important to find out how much and what do you get. Do you just get static patterns? Do you get behavioral propensities? Do you get algorithms? Do you get virtual machines? Do you get free compute that you can do in that space? That's kind of a crazy prediction of mine that I think you can actually get, you can do compute in that space that you don't pay for in this space, so to speak. So that's, you know.</p><p><strong>[1:22:31] Donald Hoffman:</strong> I've been listening to a lot of your podcasts on this and thinking about it. And I think that there's connection with the hidden states in this Markov system, so that if we just see a trace, most of the intelligence is something you don't see. And so to the trace observer, that's all in a platonic realm, because you literally cannot see it. And yet, what you're seeing is entirely a trace of that world. So your visible world is controlled by this quote unquote platonic space that you cannot see. And so that's why I was thinking, that's what your stuff about the planaria, for example, you cut off the head and cut off the tail and you can change the electric fields and make it have two heads and so forth. I mean, it's just how does it know how to do that? Right, where is that? So I've been thinking, so somehow all we're seeing is the planarian in our trace. We're not seeing beyond what we can see.
There's a whole Markov realm of intelligence out there that is projecting down into what we can see, which is just the planarian. So I'd really love to explore with you guys, as this formalism matures, how we might use the formalism in concrete ways to model specific platonic spaces for specific memories, biological memories. Because I think this gives us the tools, right? There are the exits, the dark states, and the entrances. All those tools are part of the platonic space. And if we learn, so we're new to this ourselves, right? We don't know how to use those tools yet, but to learn how to use those tools to model specific things, we might be able to get that platonic space, not just a hand wave, but here is the Markov chain, here's the trace, and this is why it looks like this platonic intelligence that's guiding you. But it would be multi-scale, right? There's going to be multi-scale in the dark space.</p><p><strong>[1:24:39] Michael Levin:</strong> What might be really fun is, so definitely we should do that with some of the biology examples that we have. In particular, for example, with some of the synthetic things that we have. So, Xenobots, Neurobots, because they raise what I think is perhaps the most interesting part of this is, where do the goals and properties of novel beings come from, where you can't just pin it on selection, eons of selection. I guess where do they come from? So, but I would complement that with, which I think would be even easier, the study of basically applying these things to some of the minimal computational models that we have. Yes. Right, we have sorting. Sorting is one, and there's going to be a bunch of new work on that coming soon. 
But we have others, we have some really interesting stuff that'll come out soon on giving embodiments to various very weird sources like mathematical objects and making robots that are driven not by conventional algorithms and sensors and whatever, but their entire behavior is driven by sort of pre-cooked static mathematical constants or mathematical objects, and watching how those things end up navigating a world and adapting, and what cognitive features they end up having and so on. So I think those things are simple enough that we could-- I think we could actually make a pretty tight mapping onto what you have in terms of states and things like that.</p><p><strong>[1:26:13] Donald Hoffman:</strong> Interesting. It may be, though, when you get into pure mathematics and sort of a hidden platonic realm that's doing stuff, like sorting algorithms that are doing other things you didn't expect. I'm having my mind stretched to think how Markov chains could do that. It seems like that might be something even deeper somehow.</p><p><strong>[1:26:34] Michael Levin:</strong> Well, what we could, that was the other thing. I was thinking what is totally doable now is to take your system and apply the tests that we have. So we have a range of assays that basically are taken right out of the behaviorist handbook. Because the one thing I think behaviorists got right is that they weren't worried about what the implementation was. And so it's very easy to apply their tools to anything, right? So we could actually look for habituation, sensitization, associative conditioning, delayed gratification, path planning, illusion, counterfactuals, all this kind of stuff. We have assays now that we can look for all that stuff.</p><p><strong>[1:27:15] Donald Hoffman:</strong> And it's clear to me that there's always going to be Markov chains that we can build to do that. So it'll be, they're universal Turing machines.
Markov chains are universal.</p><p><strong>[1:27:28] Michael Levin:</strong> So I'm not talking about building ones that do it. I'm talking about finding it in simple or random ones that you don't think should be doing it. That's the trick. We're finding these capacities in very simple and no design, no selection. Usually the three things you think you normally need, you need rational design by an engineer, selection or evolution or learning, right? Those are the three things you need. We're not doing any of that. We're pulling it out of, I don't know, you'll judge for yourself where you think it's coming from, but I think we could do that with random matrices or whatever.</p><p><strong>[1:28:13] Donald Hoffman:</strong> That's right. We may be able to find matrices in which what we're seeing in the organism is the trace, but the invisible states are having a lot of the intelligence that leads to what you're seeing. And so we write down the matrix that even though the trace, you can't see why it's doing that, but it does it. But in the big matrix, you see why it's doing it.</p><p><strong>[1:28:35] Michael Levin:</strong> Yeah.</p><p><strong>[1:28:37] Chris Fields:</strong> I'm going to have to jump off, guys. Great conversation.</p><p><strong>[1:28:41] Michael Levin:</strong> Thanks, Chris.</p><p><strong>[1:28:43] Chris Fields:</strong> Good to see you.</p><p><strong>[1:28:44] Michael Levin:</strong> Thanks, Chris.</p><p><strong>[1:28:46] Unknown:</strong> Good to see you, Chris.</p><p><strong>[1:28:51] Robert Chis-Ciure:</strong> Mike, will we have the recording of this?</p><p><strong>[1:28:54] Michael Levin:</strong> It is being recorded. I'll send you guys a link. If everybody's okay with it, I'll put it up on our center channel. But regardless, you can all have a copy.</p><p><strong>[1:29:03] Donald Hoffman:</strong> That's fine with me. 
Yeah.</p><p><strong>[1:29:04] Michael Levin:</strong> Great.</p><p><strong>[1:29:05] Robert Chis-Ciure:</strong> Perfectly fine.</p><p><strong>[1:29:07] Michael Levin:</strong> I think there's a ton of stuff to do. So why don't we go off and think about some specific directions and let's come back. I already have some thoughts, but there'll be lots more in a few weeks when I can send around some pre-prints. I'm having my students write these things up and so then I'll send them out.</p><p><strong>[1:29:27] Donald Hoffman:</strong> Very good. And if we maybe then pick a particular simple problem system that we can see what the trace logic might do on it and see where we go from there. That would be fun.</p><p><strong>[1:29:35] Michael Levin:</strong> Yeah, that'd be great.</p><p><strong>[1:29:37] Robert Chis-Ciure:</strong> And I think a natural extension of our former work after we did the FEP thing is basically try to apply all this stuff to it and see how you can get even deeper, more interesting things you can say about all this computing intelligence across scales in a more fine-grained manner than even the FEP allows.</p><p><strong>[1:30:01] Donald Hoffman:</strong> I agree that this seemed to be a natural connection with the project you guys are doing right now.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>&quot;A Multiscale Logic of Collective Intelligence&quot; by Donald Hoffman and Chetan Prakash</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Donald Hoffman, Chetan Prakash, Robert Chis-Ciure, and Chris Fields discuss a multiscale logic of collective intelligence, covering observers, agency, causal emergence, quantum logic, and consciousness-first models.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/YnfaT5APPB0" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/cb7a379b/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1.5 hour talk + discussion, titled "A Multiscale Logic of Collective Intelligence" by Donald Hoffman and Chetan Prakash, with Robert Chis-Ciure and Chris Fields and me.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:01) Beyond space-time physics</p><p>(10:10) Minimal observer participants</p><p>(18:49) Recursive trace logic</p><p>(35:08) Actions and trace blankets</p><p>(43:45) Physics and agency together</p><p>(52:32) Causal emergence and joins</p><p>(59:52) Contextuality and quantum logic</p><p>(01:06:00) Unitarity and positive geometries</p><p>(01:16:29) Consciousness-first mind theories</p><p>(01:24:39) Testing models together</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:01] Donald Hoffman:</strong> A multi-scale logic of collective intelligence, and it's what we call the recursive trace logic. We've had the trace logic for a couple of years, but in the last couple of months discovered a recursive aspect to it that will lead into a notion of agency that's novel. This is different, Chris, than the conscious agent theory. It's a different notion of agency than we've had before. The big topics I'd like to talk about are: can you guys see me? Yeah. Okay. So I'm going to talk a little bit about collective intelligence, our model of collective intelligence, how it involves coarse-graining, which is important to you guys, how it involves generative models, minimizing surprise automatically, bending problem spaces, a recursive notion of agency and self, a new intelligence metric for agents, we'll call lambda sub 2, and its relationship to your measure K. And then how this is all beyond space-time and quantum theory. And I'll start there just briefly about why I'm thinking entirely outside of space-time and quantum theory. The idea is that high-energy theoretical physicists are done with space-time. They say it's not fundamental.
So here's Nima Arkani-Hamed at the Institute for Advanced Study: "Space-time is doomed. There's no such thing as space-time fundamentally in the actual underlying description of the laws of physics." And he makes it very, very clear that he's saying space-time and anything inside space-time, and that includes anything with unitary evolution, quantum theory in particular. So he's going beyond space-time and quantum theory. And it's not just him, it's because of his success and his collaborators, the ERC has funded a 10 million euro initiative called Universe Plus. And it's all about going entirely beyond space-time and entirely beyond quantum theory and looking for what they're calling positive geometries. And so there's over 100 high-energy theoretical physicists and mathematicians now working on this, and they're finding stuff. And I can talk a little bit about how it's related to what we're finding, but they're finding these positive geometries that give you scattering amplitudes without any quantum theory whatsoever. And you get it much more easily than, and more simply than with quantum theory. So it's clearly quite striking. So I'm stepping entirely outside of space-time. Yeah.</p><p><strong>[02:42] Michael Levin:</strong> Sorry, just a quick question. Maybe naive, but I just want to understand this idea of space-time being doomed. So on that view, if that were correct, what is the status of, let's say, general relativity? What does it refer to? Is it completely different, will it be supplanted? What is that theory about then?</p><p><strong>[03:01] Donald Hoffman:</strong> Right, so the idea is that the very notions of space and time, even the combination of them as space-time, is not fundamental at all. So general relativity will go the way of all theories. It will be, like Newton, we still use Newton for certain cases, we'll still use GR for certain cases, but we needed a much deeper theory.
The hard fact is that when you bring together GR and quantum theory, you find that space-time has no operational meaning at the Planck scale, 10 to the minus 33 centimeters, 10 to the minus 43 seconds. It simply has no operational meaning. So that means we have to find a deeper foundation. So these are only, at best, approximation theories.</p><p><strong>[03:50] Robert Chis-Ciure:</strong> So Don, just to be clear, in the obviously Kantian vein in which I know much of your theories, this is not only eliminating space-time at the empirical level, it's also ejecting it from any transcendental-style considerations. It is just a placeholder, until we have something better, but it will die as a concept in our economy of thinking, even about our experience, let alone the empirical physical world.</p><p><strong>[04:22] Donald Hoffman:</strong> Absolutely. That's the idea that we thought space and time were the fundamental nature of reality. We might have even thought they were a priori true or something, but that's just wrong. That's just plain wrong. And science has a way of forcing us.</p><p><strong>[04:37] Chris Fields:</strong> If one formulates basic ideas of quantum theory outside space-time completely, then there are many routes, which are under study by huge numbers of people, again, for generating space-time as a consequence of basically assumptions about quantum information theory, and also many, many routes for generating Einstein's equations as either approximations or again, outcomes of other kinds of assumptions. So GR turns into something like the status that classical physics has with respect to quantum theory in space-time, which is a limiting case, an approximation that's good in some circumstances for doing some things, which is basically how Don just characterized it.
So yeah, there's lots and lots and lots of physics underlying this, both in the high energy community and the quantum information community.</p><p><strong>[06:07] Donald Hoffman:</strong> Right, and what Nima and the ERC group are doing is even going beyond that because they're saying we're not going to even start with quantum information theory. Anything quantum itself is going to arise joined at the hip with space-time from something far deeper. So they want to show quantum information theory and general relativity arise together from something that couldn't care less about unitarity at all. So that's what they're after. So there is no locality and there is no unitarity, period, in these new positive geometries. And they don't care about unitarity. And they show that then quantum information theory comes out as an approximation in a special case at the same time that you get space-time. So it's different than the Carlo Rovelli kind of approaches and so forth. So that's the direction they're trying to go here.</p><p><strong>[07:03] Chris Fields:</strong> You should actually say that it contradicts most of quantum information theory, because quantum information theory actually has nothing to do with space-time. So the two arising together would be very unusual.</p><p><strong>[07:23] Donald Hoffman:</strong> Well, what Nima wants to show is that unitarity and locality together arise from these positive geometries. And then, because you get unitarity arising from it, then you get the foundations for quantum information theory, so that, but we'll see; the proof is in whether you can do it right, so yeah, but I'm just trying to be clear about where they stand with respect to current approaches to trying to build up. As you say, Chris, most approaches that are trying to build space-time are starting with something quantum, and these guys are not. They're saying, we're not even having quantum. We're starting with what they just call positive geometries.
So I just wanted to make clear how out of the box their thinking is. So John Wheeler, of course, was trying to think out of the box, and he was saying, you know, someday, this is in 1990, in his wonderful book on gravity and space-time, he says, someday surely we'll find, we'll see a principle underlying existence so simple, so beautiful, so obvious that we'd all say to each other, oh, how could it have been otherwise? How could we all have been so blind so long? And so that's what we're looking for. And let's see. I'm not able to, can you guys hear me?</p><p><strong>[08:38] Michael Levin:</strong> We can hear you, but the slides are not advancing.</p><p><strong>[08:40] Donald Hoffman:</strong> Let's see. Okay, I guess now advanced. And he said about the same time in his...</p><p><strong>[08:52] Michael Levin:</strong> Sorry, we're still seeing the title slide.</p><p><strong>[08:56] Donald Hoffman:</strong> Okay, let me try this again.</p><p><strong>[09:01] Robert Chis-Ciure:</strong> We only saw the title page so far.</p><p><strong>[09:05] Donald Hoffman:</strong> I'll go back and try the... That's weird. So I'll... Let's see. So I'll go back to share. Sorry about that. That's weird. Can you see that?</p><p><strong>[10:01] Michael Levin:</strong> We can, but it's still in.</p><p><strong>[10:08] Donald Hoffman:</strong> Yep. Okay.</p><p><strong>[10:10] Michael Levin:</strong> Yeah, there we go.</p><p><strong>Donald Hoffman:</strong> Okay, so Wheeler suggested that the notes struck out on the piano by the observer participants of all places and all times, bits though they are, in and of themselves constitute the great wide world of space and time and things. So he was trying to start with what he calls observer participants. And he thought that maybe somehow, and that was in his It from Bit paper in 1989. And he actually, in his paper, cited work that Chetan and I were doing, our book Observer Mechanics.
So he was already thinking about the stuff we were doing with observers and participants back then. So what's a minimal observer participant? I'm going to have, we're going to start with just the absolute bare basics. They have experiences, like smell of garlic, taste of mint, and these experiences can change. That's all I'm going to assume. That's the foundation of everything. So my ontology is there are experiences and they can change. So for example, maybe I have four experiences, a very, very simple observer, red, green, blue, and I'll call that yellow, and they change. So now I'm seeing yellow, now I'm seeing green, now I'm seeing blue, and so forth. They keep changing. So a simple, in fact, the simplest and most general way of talking about that is just to talk about Markov chains. So the Markov matrix there, the first row has 0.2, that means if I see red now, what's the probability I'll see red next? The 0.3 is if I see red now, that's a three-tenths chance that I'll get, a 30% chance I'll get green next and so forth. So it's just a transition matrix, probability of seeing the next color, given that I'm seeing the current color. So that's all Markov chains are, there are these matrices. Of course, a lot of complications come out of that. And one aspect of Markov chains is that they immediately instantiate a very interesting kind of goal-directed behavior. No matter what state you start the Markov chain in, it has a target stationary measure. In this case, it's the thing on the left, 0.33, 0.30, 0.16, and 0.21. No matter what state you start this matrix in, it is going to go eventually to that state. And you can perturb it as much as you want. It will resist the perturbation and head back to that target state. So already we have, in the very structure of this, a goal-directed behavior. So, and as you guys in your papers talk about, William James mentions that intelligence is achieving a fixed goal with variable means of achieving it. 
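</p><p>The goal-directed convergence just described can be sketched in a few lines of Python. The 4x4 matrix below is hypothetical (the transcript only quotes the first row's 0.2 and 0.3 and the stationary measure on the slide), so this illustrates the mechanism, not the exact numbers from the talk:</p>

```python
import numpy as np

# Hypothetical 4-state transition matrix (states: red, green, blue, yellow).
# Row i gives the probabilities of the next experience given the current one;
# each row sums to 1. Entries are illustrative, not the values on the slide.
P = np.array([
    [0.2, 0.3, 0.3, 0.2],
    [0.4, 0.3, 0.1, 0.2],
    [0.3, 0.3, 0.1, 0.3],
    [0.5, 0.3, 0.1, 0.1],
])

# The stationary measure pi solves pi = pi P: it is the left eigenvector of P
# for eigenvalue 1, normalised to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Start the chain in any distribution; repeated steps converge to pi, and a
# perturbed distribution heads back -- the "target state" behaviour in the talk.
start = np.array([1.0, 0.0, 0.0, 0.0])
after = start @ np.linalg.matrix_power(P, 100)
print(np.allclose(after, pi))
```

<p>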
So, that's the stationary measure, and if you have an ergodic Markov chain, then you will have a stationary measure. Now, the idea is I want to have multiscale collective intelligence, and so we need a notion of scale. So, I'm just going to take an observer that sees a subset of the states that this is. So the first observer I was talking about has four colors, you can see. Let's consider one that has only two. So that's my notion of scale. How many, you know, the subset relationship among the number of experiences that you have. Now, so here's the key idea of everything we're going to be doing now. Suppose I take the matrix on the right as describing, quote unquote, the reality. This is what's happening. And those are the transitions. But this observer on the left only sees two, red and green. What transition probabilities is it going to see? There should be a formula. Given the matrix on the right, there should be some kind of computation we could do to give us a two-by-two matrix for the transitions on the left, just in red and green. Does that idea make sense?</p><p><strong>[14:04] Donald Hoffman:</strong> Yep. Okay, good. So when you do the mathematics, it turns out that's the matrix. So you get this very two by two matrix. Notice that the numbers are completely different from this matrix, right? It's not just copying, it's the computation that you have to do. And so here are the two matrices. The one on the right is the big matrix. And if you just restrict attention to the red and green, then you get the matrix on the left, induced by the matrix on the right. And this is called the trace, so the matrix on the left is called the trace of the matrix on the right. That's just standard in Markov theory that's been around for more than half a century. So this is not new to me or to us. Now, you can actually, the trace formula is important. I'm going to go through it because it has an important conceptual thing for us. 
So here's how you compute the trace. I'm going to take this matrix, call it matrix P, and I want its trace on the red and green. First, notice that we can divide this matrix into four sub-matrices. There's a two-by-two sub-matrix that has 0.2, 0.3, 0.5, 0.2; that's for the red and green. We'll call that matrix A: the sub-matrix on the states that are visible to the trace observer. C is the sub-matrix relating states that are dark to this new observer. It doesn't see those, so that's all dynamics that's dark to it. B is the exit matrix: the exits from what you can see into the dark region. And D is the re-entrance matrix: getting from the invisible world back into the visible world. So those are the sub-matrices we're going to be using, and here is the formula. It works universally. The trace matrix on A, the visible states, is the original matrix A, so 0.2, 0.3, 0.5, 0.2, plus this interesting thing on the right. That I is the identity matrix. You take the identity minus the dark matrix, I minus C, and take its inverse. That has the effect of exploring all possible paths: there's an infinite number of paths through C that you could take, and the quantity (I minus C) inverse sums over all of them. Then you pre-multiply by the exits and post-multiply by the entrances, add that all up, and that's your trace. So that's the idea: you get the trace by looking at all the ways you could leave the trace states and then come back into them. That's the general formula. Again, that's not us; that's been around for a long time. 
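The trace formula walked through above (the visible block A, plus exits B, dark excursions through C, and re-entrances D) can be checked numerically. A sketch in Python, with a stand-in 4-state matrix in which states 0 and 1 play the role of red and green:

```python
import numpy as np

# Hypothetical 4-state chain; states 0 and 1 are the visible "red" and "green".
P = np.array([
    [0.2, 0.3, 0.4, 0.1],
    [0.5, 0.2, 0.1, 0.2],
    [0.3, 0.4, 0.1, 0.2],
    [0.4, 0.3, 0.2, 0.1],
])

vis = [0, 1]   # states visible to the sub-observer
dark = [2, 3]  # states dark to it

A = P[np.ix_(vis, vis)]    # visible-to-visible transitions
B = P[np.ix_(vis, dark)]   # exits from the visible into the dark
C = P[np.ix_(dark, dark)]  # dynamics entirely inside the dark
D = P[np.ix_(dark, vis)]   # re-entrances from the dark back to the visible

# Trace = A + B (I - C)^{-1} D. Since (I - C)^{-1} = I + C + C^2 + ...,
# this sums over every excursion of every length through the dark states.
trace = A + B @ np.linalg.inv(np.eye(len(dark)) - C) @ D

print(trace)
print(trace.sum(axis=1))  # each row sums to 1: the trace is again a Markov chain
```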
So you have hidden memories and controls. B, C, and D are going to be hidden layers of control that the agent A cannot see, but that will be influencing its behavior. So there are interesting hidden-memory possibilities now with B, C, and D. There's explicit memory, which changes when you change A directly, but then there's going to be hidden memory as well.</p><p><strong>[18:00] Robert Chis-Ciure:</strong> Don, just one second, can you please go back? In this B, C, D, so the exits, the entrances, and the invisible, is there any particular mapping to memory or control, or is it more of a blanket category you're using, hidden memory or controls? Is memory, for example, the hidden dynamics in C? And what's the control here? I suspect exits and entrances would be more like control.</p><p><strong>[18:27] Donald Hoffman:</strong> Well, it turns out that there are different ways to control. You can screw around with B, you can screw around with C, screw around with D, or all of the above in any combination you want. All of them together give you different ways of controlling. So it's really quite fascinating, the possibilities here.</p><p><strong>[18:46] Robert Chis-Ciure:</strong> Very cool, thanks.</p><p><strong>[18:49] Donald Hoffman:</strong> So all of that is old; here's the new stuff. We discovered just a couple of years ago that the trace relationship gives you a partial order on all Markov chains. That was the discovery, and that's what launched this whole thing. So it's a partial order, which means that there is a logic. The definition is that a matrix M is less than or equal to a matrix N in the trace order if and only if M is a trace of N. That's it, one trivial definition, but no one saw it before. And it turns out that that definition gives you a multiscale logic of minimal surprise. And the reason it's minimal surprise is because the trace is the zero-surprise view of the bigger matrix. That's the key idea. It is the zero-surprise subset view. 
And we'll talk about the stationary measures as well. The stationary measure of the trace is a normalized restriction of the original stationary measure. So you have minimal surprise in the dynamics, in fact, zero surprise in the dynamics, and in the stationary measure, again, zero surprise. So the trace logic is the logic of minimal surprise for arbitrary dynamical systems. That's the power of this, because minimizing surprise is, of course, key to intelligence. But this is multiscale. So this is the multiscale logic of minimal surprise. So what about this trace logic? The set of all Markov chains forms a non-Boolean logic under the trace order. Non-Boolean means there's no global top, there's no global negation, and many matrices do not have meets and joins, ands and ors. It does have notions of meet, join, not, and entails in general, but many matrices are not compatible, so they may or may not have meets and joins. So it's a very, very complex logic. However, if you take any particular Markov chain P and you look at all of its traces, they form a Boolean sub-logic. I can pick any Markov chain I want to, any one at random, look at all of its traces, and all those Markov matrices together form a Boolean logic. So the notions of and, or, not are completely well-defined. And this Boolean logic has 2 to the n members: if there are n experiences, then there are 2 to the n members in this Boolean logic of traces. Now, if you think about it, we don't have agency yet, although I showed you that notion of goal-directed behavior, which is sort of a proto-agency kind of thing. Already, these matrices are going toward their stationary measure, no matter how you perturb them. So already there is an interesting notion of some kind of agency going on there. But now, here's the key idea. And this idea is only two months old. 
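The zero-surprise claim about stationary measures can also be verified numerically: the stationary measure of a trace is the normalized restriction of the big chain's stationary measure to the visible states. A sketch with the same kind of stand-in matrix (the numbers are mine, not the slide's):

```python
import numpy as np

# Hypothetical 4-state chain; visible states {0, 1}, dark states {2, 3}.
P = np.array([
    [0.2, 0.3, 0.4, 0.1],
    [0.5, 0.2, 0.1, 0.2],
    [0.3, 0.4, 0.1, 0.2],
    [0.4, 0.3, 0.2, 0.1],
])

def stationary(M):
    """Stationary measure: left eigenvector with eigenvalue 1, normalized."""
    vals, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

vis, dark = [0, 1], [2, 3]
A = P[np.ix_(vis, vis)]
B = P[np.ix_(vis, dark)]
C = P[np.ix_(dark, dark)]
D = P[np.ix_(dark, vis)]
trace = A + B @ np.linalg.inv(np.eye(len(dark)) - C) @ D

pi = stationary(P)
pi_trace = stationary(trace)

# Zero surprise: the trace's stationary measure equals the big chain's
# stationary measure restricted to the visible states and renormalized.
restricted = pi[vis] / pi[vis].sum()
assert np.allclose(pi_trace, restricted)
print(pi_trace, restricted)
```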
And that's why, when I had this idea, I realized that it was time to talk with you guys. The trace logic I've talked about is a logic on observer windows. It's an infinite space of all possible observer windows, and there's this minimal-surprise logic on all of it, the trace logic, cleanly well-defined.</p><p><strong>[22:53] Donald Hoffman:</strong> Now, how do I want to model agency? And this is the new idea, just in the last few weeks. Agency is a matter of changing which window I want to look through. I want a policy for: if I'm looking at the world this way, how do I want to look at the world next? And how do I do that? Well, with another Markov chain. The Markov kernel will say: what's the probability, if this is my current window, that my next window will be such and such? The way you write that down is, again, a Markov matrix. So a policy is a Markov matrix on the trace logic itself. The trace logic is the entire logic of minimal surprise on possible conscious observers. And the first step of agency is to say: let's crawl along the trace logic. That's the first baby step in agency, the first ability to crawl along the trace logic. Now, look at the collection of those Markov kernels; I'll call them policies. Each Markov kernel is a policy; it's a first order of agency. And since they're Markov matrices, they satisfy their own trace logic. So we have the first trace logic of observer windows, and we start crawling on that trace logic of observer windows. That's our first layer of agency, and it has its own trace logic. That's why I call this recursive trace logic. And you can see we can do this ad infinitum. Once we have the trace logic of policies, I can crawl on it and get meta-policies. And so I can take agency to whatever layer of complexity I want. We can start with the baby layer. 
We can start with just the observer windows and explore those, then study policies and then meta-policies, and build up recursively to ever deeper notions of agency. So at top level, we can think of a policy as simply a path through the trace logic of observer windows. That's the simplest case, right? So I started off with a three-state window, and maybe I moved to a two-state window, and then I moved to a one-state window, and that was my policy. And I've got a Markov kernel that does that. And then a meta-policy would say: I've got thousands and thousands of policies, and I now have the flexibility to choose my policies based on whatever goals I might have. So policies can model attention shifts, scale shifts, reparameterizations. A policy can maybe describe a subsystem that I think is now driving my future decisions. So the recursive trace logic is the collection of all policies with their trace logics, and then recursed, recursed, recursed again. It's a whole hierarchy of trace logics, and each trace logic is itself infinite.</p><p><strong>[26:57] Donald Hoffman:</strong> So we have a choice of policy, meta-policy, meta-meta-policy, and so forth. Now, we've talked about stationary measures, and there's a minimal kind of notion of goal-directed behavior there. We can write down a simple intelligence metric based on Markov chains. It turns out that for any probability measure pi, there are many Markov chains for which pi is a stationary measure. So if you specify a stationary measure and ask, what is the Markov chain that has that stationary measure? That question has no unique answer: there's an infinite class of Markov chains that will have that stationary measure, and they vary in very interesting ways. For one, they have different rates of convergence. So some will have this goal-directed behavior where they're going almost immediately to the goal. 
No matter where you start them, they will get to the goal in just a couple of steps, and others will converge very, very slowly. So in the trace logic, we get to choose how quickly we want to converge to our goal. This is going to be very interesting, because search efficiency is, of course, what your measure K captures as a model of intelligence. So we have a dial here that we can use to dial the intelligence. And it may be that you have high intelligence with respect to a goal, but there may be some sub-goals: it turns out that if you go quickly to the stationary measure, you may not do other things intelligently. So we're going to have to be careful which Markov chain we choose, depending on what goals we're trying to reach. There are many goals you can have, and I want to talk about the possibilities. So there are differing rates of convergence, and the convergence rate is dominated by lambda sub 2, the second-largest eigenvalue of the Markov matrix. You take the Markov matrix and do its eigenvalue analysis. The largest eigenvalue is one. Then you find the largest eigenvalue that's less than one, and that pretty much tells you the rate of convergence for that particular Markov chain. So there are Markov chains with different lambda-2s that all have the same stationary measure, and they converge to it at different rates. So there is a connection between this Markov notion of intelligence, which is the lambda-2 convergence, and your metric K. And the relationship is just a simple equation, where T sub M would essentially be the lambda-2 rate of convergence, and T-blind would be, say, just a random walk that's not smart. So there is a deep connection. But now here's a little trick. As you guys talk about, there are different layers; it's hierarchical, and higher layers can bend the geometry of the problem space for lower layers. 
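The "intelligence dial" described here can be made concrete: two chains with the same stationary measure but very different lambda-2 values. The construction below (lazy mixtures of the identity with a rank-one chain) is my illustration, assuming only the stationary measure quoted in the talk:

```python
import numpy as np

# The stationary measure quoted in the talk; the "lazy chain" construction
# around it is an illustration, not from the slides.
pi = np.array([0.33, 0.30, 0.16, 0.21])
Pi = np.tile(pi, (4, 1))  # rank-one chain: every row is pi itself

def lam2(M):
    """Second-largest eigenvalue modulus: dominates the convergence rate."""
    return np.sort(np.abs(np.linalg.eigvals(M)))[-2]

# Same goal, different dial: both chains have pi as their stationary
# measure, but their lambda-2 values (hence convergence rates) differ.
fast = 0.1 * np.eye(4) + 0.9 * Pi  # lambda_2 = 0.1: near-immediate convergence
slow = 0.9 * np.eye(4) + 0.1 * Pi  # lambda_2 = 0.9: very slow convergence

for M in (fast, slow):
    assert np.allclose(pi @ M, pi)  # pi is stationary for both

print(lam2(fast), lam2(slow))
```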
And so how do you model that bending with Markov chains? Well, it turns out that you can have lots of different so-called community structures. Again, for any stationary measure pi, there are an infinite number of Markov chains that have pi stationary but that have differing community structures. You probably know about community structure, but I'll just say briefly what it is. You could have thousands and thousands of states in a Markov chain; maybe a few hundred are tightly connected over here, a few hundred are tightly connected over there, and there are just a few cross-links. The whole thing is ergodic, but basically you might have 10 communities that are tightly knit. Now, within each of those communities, maybe my 100-state community, if I look at it more closely, is itself composed of maybe three new sub-communities. In other words, you can have communities, sub-communities, sub-sub-communities all the way down.</p><p><strong>[31:01] Donald Hoffman:</strong> as far as you want, and all having the same stationary measure. So what this gives us is: you might have one big goal, reach the stationary measure, but you could have sub-goals; the way you get there is through the different communities that you might emphasize as you go down. So it gives you this multi-scale flexibility. And the community structure, it turns out mathematically, is dictated by the eigenvectors: when you do the eigenvalue analysis of the matrix again, the community structure shows up in the eigenvectors with eigenvalues close to 1, because those involve slow mixing between communities. The communities mix inside themselves, but they don't mix between each other very much. So we can have policies that focus on stationary measures, community structures, convergence rates, particular dynamical models, and so forth. Policies can be looking at all these things and trying to optimize. And then the meta-policies can explore different policies. 
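A quick way to see the spectral signature of community structure mentioned here: build a chain with two tightly knit communities and rare cross-links, and look for an eigenvalue just below 1. The numbers are illustrative, not from the talk:

```python
import numpy as np

# Two tightly knit two-state communities with rare cross-links (eps).
eps = 0.01
P = np.array([
    [0.5 - eps/2, 0.5 - eps/2, eps/2,       eps/2      ],
    [0.5 - eps/2, 0.5 - eps/2, eps/2,       eps/2      ],
    [eps/2,       eps/2,       0.5 - eps/2, 0.5 - eps/2],
    [eps/2,       eps/2,       0.5 - eps/2, 0.5 - eps/2],
])
assert np.allclose(P.sum(axis=1), 1.0)  # row-stochastic

# Community structure shows up spectrally: an eigenvalue just below 1
# signals slow mixing BETWEEN communities (here it equals 1 - 2*eps).
vals = np.sort(np.real(np.linalg.eigvals(P)))[::-1]
print(vals[:2])

# The eigenvector for that near-1 eigenvalue separates the communities:
# roughly constant within each block, flipping sign between blocks.
```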
We can have meta-policies and meta-meta-policies exploring at different rates. So this starts to give us a recursive notion of agency. And in some sense, the reason I'm bringing this up is that here is a framework of mathematical tools that's incredibly simple. There's one definition, the trace. That's the only mathematics there. Then there's one observation, the trace logic. And the third observation is that it's recursive. That's it, and then all the standard tools are at your disposal. So meta-policies can explore different policies, and the deeper we go in the recursion, making deeper and deeper trace logics, the deeper the notions of agency we get. So we can explore just policies for our simplest notion of agency, and then go to meta-policies to discover deeper notions of agency, and so forth. We can take it one baby step at a time. Now, in terms of how this relates to notions of Markov blankets and the self versus the world: Markov blankets, as you well know, are strictly speaking defined for directed acyclic graphs. And there, they define a boundary between self and the world. I want to upgrade these notions to Markov chains. The idea of the upgrade is: Markov chains are graphs, but they're not acyclic, they allow cycles. So this is one upgrade. We're upgrading from acyclic graphs to cyclic graphs, and then we're upgrading to labeled cyclic graphs, namely labeled by the transition probabilities. That's what I mean by upgrade. We're going beyond directed acyclic graphs to something far more general. So we want to move from the standard notion of Markov blanket to what I would call a trace blanket. And here, we have to actually construct the self and the world. And by the way, I'm just speaking at top level; we have to do a lot of hard work here. But the way we will do that is with policies and meta-policies and what they do. 
And certain experiences, like experiences of pleasure and pain, will be part of the experiences agents have. To the extent that certain actions lead to a higher stationary measure for the pleasure experiences, they'll be sought; to the extent they lead to a higher stationary measure for the pain experiences, they'll be avoided. So pleasure and pain will be guides. And what policies do is say: given that I'm looking through this particular observer window, what's the probability that I'll next look through that window over there, or that window over there?</p><p><strong>[35:08] Chris Fields:</strong> Can I interrupt with a question? A couple of sentences ago was the first time you used the word action. Is an action in this framework just a change in policy?</p><p><strong>[35:23] Donald Hoffman:</strong> It is, but it's not just a change in policy. A policy itself gives you an action on observer windows, because your action is to change observer windows.</p><p><strong>[35:37] Chris Fields:</strong> Okay.</p><p><strong>[35:38] Donald Hoffman:</strong> A meta-policy gives you a higher level of action because you're now changing policies, right? And then a meta-meta-policy would be an even higher level of action because you're changing your meta-policy.</p><p><strong>[35:51] Chris Fields:</strong> Actions are all either changing what you're looking at or changing how you decide what you're looking at.</p><p><strong>[36:01] Donald Hoffman:</strong> That's right. Recursively.</p><p><strong>[36:03] Chris Fields:</strong> Great.</p><p><strong>Donald Hoffman:</strong> That's a recursive notion of action now. Right. Now, I'm just thinking through this last bit, but let me look at just the smallest level of action. 
Some policies will have certain things that always appear in your observer window. For example, in my observer windows, my hands and my body often appear, whereas other things, the things I call the external world, don't appear that often. And I also notice that I seem to be able to directly control my hands and my body. But if I want my phone to move, I need to move my hand so that I can pick up the phone to move it. Now, in the Markov blanket approach, the Markov blanket has a clean definition: give me a set of nodes, and their blanket is the parents of the nodes, the offspring of the nodes, and the parents of the offspring of the nodes. End of story. That is your blanket, that's your skin, that's your boundary between you and the world. Here, it's much more complicated. I have to use the notion of agency in a non-trivial fashion, and learn probabilistically which features of the sequence of observer windows I'm having remain there most of the time. My hands are there most of the time. And certain actions with my hands are associated with pleasure signals, others with pain signals. So I'm learning to do certain things with my hands, and not to stick them in the fire, things like that. Other things are much more contingent. So I can use the probabilities of what I'm seeing in my observer windows as a way of starting to construct myself versus the outside world, plus the pleasure and pain guides.</p><p><strong>[38:03] Chris Fields:</strong> Don, can I ask another question? Sure. You talked about actions with your hands. What does that mean in terms of changing what you're looking at? 
Since the only action is changing what you're looking at, what does it mean to control what your hands are doing?</p><p><strong>[38:20] Donald Hoffman:</strong> Right, that's a great question, Chris. What that means is: I have an observer window where my hand is touching my ear, and now I want an observer window in which my hand is touching my leg. And so I transition to that observer window. What's happening is I'm choosing what I want to see in my movie next, and that's what we call moving my hands. You have to really think out of the box now. It's a choice of what I want to see next, and that's what actions are.</p><p><strong>[38:58] Chris Fields:</strong> Okay, okay, great.</p><p><strong>[39:00] Donald Hoffman:</strong> It's very austere. What I love about it is that it's austere. There's only one equation and one logic, so you have very, very tight guides, and yet the claim is we should be able to get everything out of it. That's what I love: a theory that forces you to do it in a principled way. Now, Bayesian inference, we can talk about it more if you want, but I'll just mention briefly that Bayes' rule falls out of the meet in the trace logic. And we can go into how that's the case. It's beautiful and non-trivial, but Bayesian inference is effectively a special case of the meet of the trace logic. If you want, we can go into that. And you guys talk about bending the option space. Of course, that was metaphorical, but there is a real sense in which I want to get space, and space-time itself. And what I'm working on quite heavily, with a couple of others, is this: I believe that we can actually boot up special and general relativity entirely from the trace logic. 
And so that's the claim: that relativistic space-time can be constructed entirely from the trace logic. This would then fulfill John Wheeler's goal, that starting with only observer-participants, we can build up all of space-time physics. That's where we're headed. This will be the last thing I do, and then we can have a conversation about it. Just to give you a hint about how that would happen: it's standard in Markov chain theory to have what are called enhanced Markov chains. You have a Markov chain, but you also have a counter. Every time your experience flips, your counter increments. So here I've got a case where I've got the four-color agent, and then there's the sub-agent of just red and green. And notice that there's a counter for the red and green, and there's a counter for all four. And notice the counter on the left is going much faster than the counter on the right, because it's seeing more experiences. So the counters for sub-agents, or, I'm sorry, sub-windows, sub-observers, are going at a slower rate than the ones above them. So if I'm less than you in the trace logic, my time counter is running slower than yours. So the trace logic is also giving you a relationship among counters, and that, we claim, is the time dilation of special relativity and general relativity. That's where it comes from. It's all about observer windows and their counters. And it turns out that distances can also be derived, and the distances that you get in the trace window are different from the distances you get in the bigger matrix. This is where we're hoping to get general relativity coming out of this. There are notions of essentially something like the commute time between states. I'll just give you that concretely. 
The commute time is the expected number of steps, starting at green, to get to blue and then back to green. And it turns out that this expected time can be viewed as the square of a Euclidean distance. So there are canonical ways of getting Euclidean distances from commute-time properties. There are Dirichlet forms, which are even more to the point but more complex; I won't go into them. The idea is that the trace logic effectively gives us the time dilation and length contraction of special and general relativity. Time runs slower on the trace; there are gaps between ticks. So I'll just leave it at that. I think that's enough; I'll stop the share so we can talk about it. I just wanted to give you guys a feel, and I can send you some papers on this, but I wanted us to have a little time to talk. We haven't solved the agency problem; what we've got now is a principled language for talking about agency.</p><p><strong>[43:45] Michael Levin:</strong> Thanks very much, Don. That was amazing. A kind of general question: what do you make of the fact that you're apparently pulling out descriptions of physics and descriptions of agency from the same starting material? Does that surprise you?</p><p><strong>[44:03] Donald Hoffman:</strong> Well, something I've been saying for quite a while is that space-time is just a headset. And we're effectively saying we can build the headset. Space-time is not a reality that's independent of us. Our typical view is: Hoffman is this tiny little 160-pound thing inside a massive, massive space-time universe. And I'm saying, no, what we call Hoffman is just an avatar inside a space-time headset that's being created by consciousness. And the proof of the pudding is: can we build the headset? 
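The commute-time idea mentioned just before the share ended can be computed exactly for a toy chain by solving the standard hitting-time equations. This is a sketch of the general notion with made-up numbers, not the talk's derivation:

```python
import numpy as np

# Toy 3-state chain over (red, green, blue); illustrative numbers.
P = np.array([
    [0.2, 0.5, 0.3],
    [0.4, 0.2, 0.4],
    [0.3, 0.5, 0.2],
])

def hitting_time(P, target):
    """Expected number of steps to first reach `target`, from each state."""
    n = len(P)
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]
    # On the non-target states, h = 1 + Q h, i.e. (I - Q) h = 1.
    h = np.zeros(n)
    h[others] = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return h

GREEN, BLUE = 1, 2
# Commute time: expected steps for the round trip green -> blue -> green.
commute = hitting_time(P, BLUE)[GREEN] + hitting_time(P, GREEN)[BLUE]
print(commute)  # at least two steps in expectation
```

In the spectral-graph literature this commute time is, up to a constant, the squared Euclidean distance between the states' embeddings, which is the sense in which distances can be read off a chain.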
So for this approach to go through, we have to be able to show that we can get special relativity, no hand-waving, just from the trace logic, and also general relativity and quantum theory. We have to be able to show that we can get entanglement and all of this stuff simply from the trace logic of Markov chains. Now, one objection someone might have is: look, in quantum theory we have unitary matrices, and you just have Markov matrices. You don't have these nice unitary matrices, so how are you going to do that? And the idea is: most Markov matrices are not unitary, but there are some that are. The unitary ones are a measure-zero subset of the Markov matrices. And when you look at the long-term behavior of a Markov matrix, the asymptotic behavior, this is where we go to those enhanced Markov chains. This is work that Chetan and I did back in 2014. Chetan discovered that the eigenfunctions of the enhanced Markov chains are identical in form to the quantum wave functions of free particles, identical. So the idea is going to be that quantum theory arises as an asymptotic description of a Markov dynamics. The Markov dynamics gives you a step-by-step analysis of agency and consciousness; quantum theory only gives you the asymptotic behavior, not the step-by-step behavior. So that's going to be the connection. Again, this is all a matter of theorem and proof. Either we're right or we're wrong: it's theorem and proof, or theorem and disproof. Now, one might say, well, you have the no-cloning theorem in quantum theory. What about that in Markov chains? It turns out, if you look carefully at the no-cloning theorem, the proof of it does not require unitarity, only linearity. Markov chains are linear, and they have their own no-cloning theorem. So I see no obstruction right now. 
We just have to do the hard work. But we have a principled notion of agency, and it shows us how the nested community structure can give us nested goals and nested bending of problem spaces. And we can not only talk metaphorically about bending problem spaces; I think we'll be able to show that we can have real curved space-time representations of bending, general relativistic descriptions of bending.</p><p><strong>[48:01] Robert Chis-Ciure:</strong> Mike, may I share? You remember we were taking this project and embedding it into the variational free energy principle and all that. So may I share the screen now to show Don and Chetan what we already have? Mind you, this is work from one year ago. I still didn't get to develop it in full; it's, let's say, maybe 80% done. So on the left you see this book, which will come out soon. It's Karl's book on The Free Energy Principle and the Nature of Things, a big monograph of the latest version, I suspect. We didn't push this paper because I also wanted access to the latest form of this before we pushed. The synthesis paper only name-dropped variational free energy and the decomposable way in which you can assess intelligence, true scale-free quantification and recursive decomposition in that sense. So in this project, we try to do it within the free energy principle framework, within the variational framework. We take all these problem-space operators and embed them into a variational physics description. 
And we also land on some of the things that you, Don, and Chetan mentioned, like, for example, the issue of renormalization, and then ways in which you can decompose and quantify across scales, additive gains in search efficiency at different scales, and then do it globally for the system as a whole, depending on how efficiency gains are cashed out at different levels. So that can be embedded in the variational logic of the free energy principle, for sure. And we use the more pedestrian route, in the sense of the NESS assumption and the Helmholtz decomposition, building on that a minimal Landauer-style floor of cost per unit operator and efficiency gains. So this is not done. But what I want to say is that, seeing you present this just now, it is clear to me that you provide a much finer algebraic structure for us to probe even deeper into this decomposition, and then look at what you mentioned, the community structure and all these sorts of effects you would get. What appears to us, at the scale and metric of observation, as an efficiency gain might come from innumerable ways of tiling the problem space, or bending the problem space, or just, in your terms, changing the observation frame by communities, with certain communities having different policies and then meta-policies based on the higher-order aggregates, and so on. So I think it's definitely valuable, even if it's something built after this is pushed, because this will be finished quite soon. It's definitely worth looking into. And the connection with physics is certainly impressive, for sure. That's the hard work; this is less hard.</p><p><strong>[51:41] Donald Hoffman:</strong> I would certainly welcome any interactions you guys want to have on this once your current projects get to a certain place. 
Because I think, as I listened to your work, I realized our ideas are really converging quite nicely here, and I think there's a synergy. The nice thing about the Markov stuff is that it's so well studied. You just look at the eigen-analysis of these matrices to get a lot of this. So there are lots of papers out there about community structure and so forth. We would just have to do our homework, and a lot of that stuff we could then port in here. It's really quite well understood. The only thing they didn't have was the trace logic and the fact that it can recurse. That's what they were missing to pull this whole picture together.</p><p><strong>[52:30] Robert Chis-Ciure:</strong> Interesting, interesting. Yeah.</p><p><strong>[52:32] Michael Levin:</strong> What do you think would happen, or maybe you've already done it, if you applied some of the causal emergence metrics to the dynamics of these things, or some of the stuff that you guys do, Robert, or some of the more conventional stuff that we have, you know, phi and all that kind of stuff? Have you done that at all?</p><p><strong>[52:56] Robert Chis-Ciure:</strong> You mean Don?</p><p><strong>[52:58] Michael Levin:</strong> Don, on his stuff. And then if they haven't done it yet, then I'm going to say maybe we should.</p><p><strong>[53:03] Donald Hoffman:</strong> So say a little bit more about the causal emergence question. I want to make sure I understand it.</p><p><strong>[53:09] Michael Levin:</strong> Well, Robert's the better person to speak to it, but there are a variety of newish metrics in information theory that basically try to quantify the important aspects of agency, right? The extent to which the whole is in some sense causally more than its parts, phi, all that stuff.</p><p><strong>[53:31] Donald Hoffman:</strong> Right, that kind of thing, exactly right.</p><p><strong>[53:33] Michael Levin:</strong> That's what I'm getting at. 
Have you tried any of those metrics on the dynamical paths of these things?</p><p><strong>[53:39] Donald Hoffman:</strong> Well, the motivation behind a lot of that work, Tononi and Koch and so forth, has been to somehow have consciousness be a function of the amount of causal emergence, right? The idea being that the conscious system is the one with the greatest causal measure, phi. Those measures are very, very useful. I think they have nothing to do with consciousness.</p><p><strong>[54:12] Michael Levin:</strong> I'm not making any claims about consciousness. I'm just asking about step one of getting the measurements and seeing what's going on as far as...</p><p><strong>[54:21] Donald Hoffman:</strong> Oh, sure, as long as there are no claims about consciousness, I'm all for it. I think that stuff is really good work, absolutely. What I think is bogus is saying that consciousness has something to do with it. But we haven't done that ourselves. So the answer is we haven't gone there yet.</p><p><strong>[54:43] Michael Levin:</strong> It may be interesting to do, just to get some data and just to do some measurements. I mean, we've been doing it on gene regulatory networks and all sorts of weird things, and there are some really interesting results. We haven't said a word about consciousness with respect to that, but the data alone, I think, are already interesting, whatever the interpretation.</p><p><strong>[55:03] Donald Hoffman:</strong> But your data has really inspired me the last few months. Your work, your whole team, has really inspired me. It really forced me to think out of the box about what this thing can do. So thank you. 
I mean, it's been really quite fun.</p><p><strong>[55:17] Michael Levin:</strong> Thanks.</p><p><strong>[55:18] Donald Hoffman:</strong> Your podcast with Lex Fridman, I've listened to it five times or something like that.</p><p><strong>[55:25] Robert Chis-Ciure:</strong> Don, on this topic specifically, do you think you could use the inverse trace, so to speak, as a sort of hypothesis generator for these kinds of experiments and this kind of data? Because it seems like in this paper you do the construction mostly forward, but you do have a calculus and an algebra for doing it backwards. So you have, let's say, an observed effective kernel P sub A on the visible states A, and then the trace chain theorem says that any consistent extension P with the hidden states, A prime, was it? Yeah, A prime. Any such extension must basically satisfy the theorem. So then the ABCD tuples are a sort of parameterized hypothesis space about hidden mechanisms, and it just becomes an inverse problem, right? It's just that. So if that's the case, and we have a lot of data, in our synthesis paper we use the planarian regeneration example, and we don't use the data in that literature, but there is also way more data on GRNs and stuff like that. So do you think we could have a sort of model, in the inverse trace, that would distinguish between the different hypotheses that best explain the data we see?</p><p><strong>[56:47] Donald Hoffman:</strong> Yes, in the following sense. The trace logic, because it's a logic, has the notion of not only the meet but also the join.</p><p><strong>[56:55] Robert Chis-Ciure:</strong> The join, yeah.</p><p><strong>[56:56] Donald Hoffman:</strong> So I can take two matrices, and if they are compatible, if they're, for example, part of a Boolean sub-logic, then they have a join. Now Chetan has done the hard work of getting a closed-form solution for very special cases. 
And it turns out that getting a general closed-form solution is an open problem. And the interesting thing is, the join is the least upper bound between the two, right? But in some cases, Chetan has pointed out, there could be a whole, say, one-parameter family of minimal upper bounds. So it's going to be very, very interesting. In the trace logic, for certain matrices, there may not be a unique least upper bound. There could be a family of minimal upper bounds, and then we may be able to use other criteria to choose one, depending on what we want. Maybe we want something with minimized complexity, or maximum complexity, or the greatest causal structure, or something like that. So there are all sorts of things we could do. We don't know if there is a general closed-form formula for computing the join. Chetan has it for a special case. It's a fair bet that there is not one. It's an interesting open mathematical problem to study the join of this thing. And I can give you the unpublished paper we have so far; Chetan's stuff is there and you can see what we've got. And it's open as to how to generalize that, or to prove that it cannot be generalized.</p><p><strong>[59:05] Robert Chis-Ciure:</strong> It's super interesting. I would definitely love to see those papers. And I read some of your stuff and also the latest on traces of consciousness. I went through most of it very...</p><p><strong>[59:18] Donald Hoffman:</strong> You've seen the traces of consciousness paper, right? It's in the appendix at the back of that paper, which you already have. You'll see Chetan's work on what we have so far on the join.</p><p><strong>[59:31] Unknown:</strong> Yeah, work in progress.</p><p><strong>[59:33] Robert Chis-Ciure:</strong> I know of your work because one of my best friends is Robert Prentner. 
So we talk a lot. We just submitted a paper on IIT and conscious agents theory.</p><p><strong>[59:44] Donald Hoffman:</strong> Oh, yes, right, right.</p><p><strong>Robert Chis-Ciure:</strong> One month ago. Yeah.</p><p><strong>[59:49] Donald Hoffman:</strong> Yeah, of course. I know Robert quite well. Yeah.</p><p><strong>[59:52] Chris Fields:</strong> So I have a question, Don, about the definition of the trace. The trace of any Markov process is also a Markov process, right?</p><p><strong>[1:00:06] Donald Hoffman:</strong> Yes.</p><p><strong>[1:00:07] Chris Fields:</strong> Do you have an available model of sequences of observations that are not Markov? So sequences of observations that violate constant probability of switching from one state to the next, so that the probability is not well defined.</p><p><strong>[1:00:39] Donald Hoffman:</strong> Right, it depends on your time window and how you want to coarse-grain the states. One could say, look, there are many, many systems where it's not the case that, if you look at the states I've given you, the probability of the next state can be given exactly just based on the current state. You might need to look at three states or five states or ten states or whatever; you need a bigger window to get the probability. But in those cases, you can always create new states and make the process Markov.</p><p><strong>[1:01:19] Unknown:</strong> So basically, you can expand the state space; you actually multiply it. There can be a combinatorial explosion, though. You do need to be aware of that possibility. And this only works for finite memory. It doesn't work for infinite memory. 
But even for finite memory, you do have to be a little careful that the combinatorics can get nuts, which often seems to be the problem in consciousness research.</p><p><strong>[1:01:55] Chris Fields:</strong> I'm, of course, mainly interested in issues like contextuality in quantum theory, where you have groups of observations for which joint probabilities can't be defined. And so you can't build a single self-consistent hidden variable theory. So I don't know whether the formalism will handle that sort of situation or not, since you do seem to be always assuming well-defined probability distributions.</p><p><strong>[1:02:46] Donald Hoffman:</strong> Can you say more about the system that doesn't work, Chris?</p><p><strong>[1:02:54] Chris Fields:</strong> Well, contextuality is defined as a phenomenon: sets of observations for which a joint probability distribution is undefinable, for which the statistics violate the Kolmogorov axioms.</p><p><strong>[1:03:24] Unknown:</strong> I suspect that the fact that joins don't always work might have something to do with that. That's what I was thinking too. Yeah, that's what I originally thought your question was about, having the same probabilities every time. And of course, you don't need that with Markov chains. They don't have to be homogeneous. But that's not your question. It's much more abstract than that. My hope would be to somehow find sub-logics which actually look like quantum logics. Right. If that happens, then we could possibly answer your question in the affirmative. And that would be one way to do it.</p><p><strong>[1:04:12] Chris Fields:</strong> So another way to think of it in your formalism might be if there are paths, yeah, this would be where it would look like a quantum logic. 
If you have paths in the trace logic network that don't commute.</p><p><strong>[1:04:27] Donald Hoffman:</strong> Oh, easily.</p><p><strong>[1:04:32] Chris Fields:</strong> That may be a way of approaching this join question, then.</p><p><strong>[1:04:38] Donald Hoffman:</strong> Yeah, that does have no join.</p><p><strong>[1:04:40] Chris Fields:</strong> To get back to your comment about unitarity at the very beginning, unitarity is really just conservation of information. And Kolmogorov probability is really just conservation of information. So if you don't have situations in which information is actually lost in some global sense, informational singularities, if you will, then the system satisfies unitarity as it's used as an axiom within information theory, which is just conservation of information. This is where I was trying to emphasize this complete dissociation, actually, of unitarity from any spatial considerations. But anyway, that's sort of an aside. The real question is about contextuality.</p><p><strong>[1:06:00] Donald Hoffman:</strong> It is striking that Nima Arkani-Hamed and these high-energy theoretical physicists are strident that they're not assuming unitarity. They're saying we don't need it, and we'll show that it arises from these positive geometries that are entirely outside of space-time. So they're getting space-time and unitarity together.</p><p><strong>[1:06:28] Unknown:</strong> I've never seen a demonstration of that fact in itself. What I have seen is they derive scattering amplitudes, which match what's understood from the Feynman approach. That doesn't mean you've derived space-time, and it doesn't mean you've derived unitarity. 
It just means that you've matched something.</p><p><strong>[1:06:56] Chris Fields:</strong> Yeah, I think they're referring to...</p><p><strong>[1:06:57] Unknown:</strong> The principle is absent.</p><p><strong>[1:07:00] Chris Fields:</strong> Yeah.</p><p><strong>Unknown:</strong> Sorry.</p><p><strong>[1:07:01] Chris Fields:</strong> I think they're referring to unitary processes in space-time.</p><p><strong>[1:07:06] Unknown:</strong> Right.</p><p><strong>[1:07:07] Donald Hoffman:</strong> That's what they're referring to. Absolutely.</p><p><strong>[1:07:10] Chris Fields:</strong> That's very different from unitarity as a strictly information-theoretic concept.</p><p><strong>[1:07:18] Donald Hoffman:</strong> And yet the way they wave it around and say, we don't assume space-time or unitarity, right?</p><p><strong>[1:07:24] Chris Fields:</strong> It's just a disconnect in language, I think.</p><p><strong>[1:07:26] Unknown:</strong> I mean, it's fine not assuming a collection B when you're doing a collection A. But if somebody like Dyson comes around and says they're equivalent, then you can't say that we don't need space-time; it's just another way of looking at it. So their claim is unfounded as far as I know. Somebody needs to sit down and say, this is how space-time emerges from the amplituhedron. Otherwise, it's just like Schwinger saying he doesn't need to assume Feynman, and Feynman saying he doesn't need to assume Schwinger, and they're both right, because the two are equivalent.</p><p><strong>[1:08:11] Donald Hoffman:</strong> Right now they don't give you space-time, they give you scattering amplitudes. That's what they give you.</p><p><strong>[1:08:15] Unknown:</strong> Which I think are defined in space-time. Fair enough. 
It was on a facet of something.</p><p><strong>[1:08:28] Chris Fields:</strong> I suspect that eventually we'll generally be able to identify amplituhedron-like structures with quantum error-correcting codes.</p><p><strong>[1:08:48] Unknown:</strong> You wrote a paper on that, didn't you, Chris?</p><p><strong>[1:08:50] Chris Fields:</strong> Well, we have a preprint of it that was revised as of a few months ago, and we're still working on it. But the currently available preprint isn't bad. The hypothesis would be that we can go the other way, from amplituhedron-like structures to quantum error-correcting codes. And there are many ways to get space-time from quantum error-correcting codes. So.</p><p><strong>[1:09:28] Donald Hoffman:</strong> Interesting.</p><p><strong>[1:09:30] Chris Fields:</strong> It may be that the inference ends up going in that direction.</p><p><strong>[1:09:41] Donald Hoffman:</strong> What I'm hoping to be able to show is that some of these positive geometries, like the associahedron, are sub-polytopes of the Markov polytope. Because the Markov polytope could be describing the probabilities of certain interactive processes that we would think of as scattering processes. 
If that's the case, then there may be a deep connection between some of these positive geometries and the Markovhedron, which is itself a positive geometry.</p><p><strong>[1:10:14] Chris Fields:</strong> Yeah.</p><p><strong>Donald Hoffman:</strong> The set of all possible Markov chains is a positive geometry.</p><p><strong>[1:10:18] Chris Fields:</strong> I suspect that any such structure defined over any space of possibilities, or any such dynamics defined over any space of possibilities, can be thought of as scattering in some metaphorical, but formally sensible, way.</p><p><strong>[1:10:52] Unknown:</strong> Yeah.</p><p><strong>[1:10:53] Chris Fields:</strong> I mean, we can think of computation as scattering in interaction space.</p><p><strong>[1:11:00] Unknown:</strong> Interaction is scattering.</p><p><strong>[1:11:03] Chris Fields:</strong> Yeah.</p><p><strong>[1:11:04] Unknown:</strong> I mean, scattering is how interactions look in a physics lab.</p><p><strong>[1:11:12] Donald Hoffman:</strong> What's surprising is how restricted the scattering events are that you find in physics, right? A very restricted set, and it turns out that something like the standard model gives you all the components that you're ever going to find in any scattering experiment that you ever do.</p><p><strong>[1:11:30] Unknown:</strong> We hope.</p><p><strong>[1:11:31] Donald Hoffman:</strong> So far, that's so far.</p><p><strong>[1:11:34] Chris Fields:</strong> Well, all the ones that you see with the sorts of things that we call elementary particles.</p><p><strong>[1:11:42] Donald Hoffman:</strong> What I find interesting is that when we had the Ptolemaic system, we had epicycles upon epicycles; you could get all the orbits of the planets, but it was ugly and just a mess, because you had to add all these epicycles and correcting epicycles to correct those epicycles and so forth. And the same thing happens with space-time and scattering. 
When you look at the Feynman diagrams, it's loop after loop, and you have three- or four-particle interactions and 500 pages of algebra, because you have all these Ptolemaic loops after loops where you're enforcing locality and unitarity. Feynman is forcing locality and unitarity, and so we have to do all this stuff. And all of a sudden, 400 pages of algebra turns into two terms when you let go of space-time. It feels, again, like we've got this Rube Goldberg machine called space-time, and that's why things look so ugly in space-time and in the mathematics. And all of a sudden, we're seeing some hint. I mean, it was a big hint when we went from Ptolemy to Newton: all of a sudden the formulas got a lot simpler. We were on to something much deeper than Ptolemy was on to. And now, when we go from Feynman scattering diagrams to these positive geometries, once again we're getting 400 pages of algebra down to two terms. A clear hint that we're on to something deeper beyond space-time.</p><p><strong>[1:13:08] Chris Fields:</strong> No, space-time's a kluge. I think that's clear.</p><p><strong>[1:13:14] Donald Hoffman:</strong> What's interesting is that all the main theories of consciousness do the opposite. We start with space-time. We try to figure out what physical systems in space-time could possibly have the right structure to give rise to consciousness. So all of our theories start with the kluge as the assumption and then try to go from there. 
So they're doomed, completely doomed to failure.</p><p><strong>[1:13:42] Chris Fields:</strong> Well, I mean, all of science did this before about, what, 1970?</p><p><strong>[1:13:48] Donald Hoffman:</strong> You're talking about the standard model since about 1970.</p><p><strong>[1:13:57] Chris Fields:</strong> Well, no, I'm roughly dating at least the first things I saw from Wheeler, with the notion of observer-participancy in it.</p><p><strong>[1:14:15] Donald Hoffman:</strong> In the 70s, right? He saw it. He knew, I mean, he wrote the book on space-time. He wrote the book on gravity. Misner, Thorne, and Wheeler. That is the Bible. And he knew space-time, and he knew it was a kluge. He was looking for something entirely beyond.</p><p><strong>[1:14:35] Chris Fields:</strong> Yeah.</p><p><strong>[1:14:37] Donald Hoffman:</strong> And he was going to call it observer-participancy. That's what he called it.</p><p><strong>[1:14:43] Chris Fields:</strong> May I ask something? Yeah.</p><p><strong>[1:14:44] Robert Chis-Ciure:</strong> Go ahead.</p><p><strong>[1:14:46] Chris Fields:</strong> I was just going to say good to see you guys again. Good to meet you, Robert.</p><p><strong>[1:14:50] Robert Chis-Ciure:</strong> Good to meet you, Chris. And I think we will meet in person in Spain, if I'm not mistaken, in July.</p><p><strong>[1:14:55] Chris Fields:</strong> Yes, hopefully. That sounds very exciting.</p><p><strong>[1:14:58] Robert Chis-Ciure:</strong> I told them to invite you specifically. Thank you. What's happening in Spain? There will be a workshop organized by the Takena Foundation, a workshop on the known unknowns in our fields of interest. And there will be, yeah, more than one field. So that's why Fields is excellent. And yeah, we'll discuss several things. But Don, I wanted to say that I think this is the biggest myopia in consciousness science. I work in Anil Seth's lab. I did my PhD with Giulio Tononi. 
I did a postdoc with David Chalmers. I work with Georg Northoff on the temporo-spatial theory. I know all the people, and all of them are either physicalists, or they still think that consciousness is something you squeeze, with enough dexterity, out of the end of a long tube that you operationally construct. And they really don't get the point that consciousness is the starting point, and everything else must come afterwards. You build everything else from consciousness, not the other way around. It's hard. I know the pain.</p><p><strong>[1:16:29] Unknown:</strong> Do you see the platonic space as having something to do with consciousness in that sense, or being embedded within it?</p><p><strong>[1:16:42] Robert Chis-Ciure:</strong> That's for Mike to answer.</p><p><strong>[1:16:45] Unknown:</strong> Yeah, I'm asking Mike.</p><p><strong>[1:16:48] Michael Levin:</strong> Okay, so I'm not fundamentally a consciousness researcher. I don't have any strong claims on this yet. But if I had to say right now, from the point of view of the Platonic space formalism that I'm investigating, I don't think that we are beings that occasionally get visited by Platonic patterns that are something else. I think we are the patterns. And I think these patterns are a wide range of static, dynamic, low-agency, high-agency things. And I think what we call consciousness is the perspective from the Platonic space outwards into the physical world. What a pattern from the Platonic space experiences when it interacts with the physical world is what we tend to call consciousness. Now, I don't think that exhausts all the possibilities. I think there are probably lateral interactions within that world. I suspect that when mathematicians think about abstract mathematical objects, what happens is there are basically two different patterns in residence there: the human one and whatever it is that they're studying. 
So there may be lateral interactions that don't even require the physical world per se. But basically, what I think we mean when we talk about consciousness is what it's like to be a Platonic pattern projecting into a physical world through some interface. That might be, you know, the sense organs that we have, or something completely different, and so on.</p><p><strong>[1:18:14] Donald Hoffman:</strong> So the Platonic patterns transcend our experiential notion of consciousness.</p><p><strong>[1:18:21] Michael Levin:</strong> In what sense? Say more. What do you mean by that?</p><p><strong>[1:18:26] Donald Hoffman:</strong> There's a Platonic realm that maybe we shouldn't describe as conscious in itself, but from a human perspective we experience it as consciousness looking at it somehow. So consciousness is just perspective.</p><p><strong>[1:18:42] Michael Levin:</strong> I ultimately think some version of idealism is probably the more accurate thing. I do think that consciousness is fundamental, but I don't know what to do with that on a practical level right now. I don't have a way of making use of that in the lab or anything like that. What I see is a way to make progress with a somewhat more dualistic version. I agree that it would be nice to have some kind of simple monism, but what I see right now is that we have different but interacting realms, so to speak, and that model helps us to do new experiments and make new discoveries. So for now, I do treat them as two separate things. But ultimately, if I had to guess, I would say that consciousness is primary. 
I don't know how to do that reduction right now, so I'm sticking with two, because that's what we can handle right now.</p><p><strong>[1:19:41] Donald Hoffman:</strong> One of our goals is to show that we can actually get space-time, special and general relativity, and quantum theory from this theory of Markov chains of consciousness, in which case we could inherit all the work that's been done in physics, but see it arising from a consciousness-first point of view. So that's where I'm headed.</p><p><strong>[1:20:04] Michael Levin:</strong> I mean, for us, one of the important pieces of our research program is to understand the mapping between the interfaces that we construct, and those are anything from sorting algorithms through cyborgs, through Xenobots, embryos, you name it, all of this stuff, and to understand what the properties of these things are that facilitate the ingression of specific patterns from the space. So, what is it about this thing you've made that pulls down this pattern rather than that one? That allows different degrees and different kinds of mind to interact through it, and it also lets us quantify. We've got some wild stuff that's kind of ripening in the next few months that I'd love to run by all of you; we should all have another meeting so you can see it. One of the things that I think is really important about this Platonic space idea is that it's not just a redescription, a bunch of sort of useless philosophy that you sprinkle on top of things that work perfectly well. It actually makes very new predictions, in the sense that it suggests that, when you do the accounting of effort, of computational cost, of physical cost, with the conventional theories that we have now, what you're getting from that space amounts to free lunches, or at least heavily discounted lunches. You get more than you put in. Our accounting isn't adding up everything that you get. 
We're missing something very significant. And we now have the ability to quantify how much we are getting: how much free memory, free compute, free whatever. At least in simple toy, sort of minimal models, we can quantify that. In biology, you can see it, but it's hard to quantify or prove anything; it's just too complicated. Whereas in these minimal models, we can actually quantify what we got that we didn't pay for, according to the conventional way of totaling up effort. And so that's really important: to find out how much, and what, you get. Do you just get static patterns? Do you get behavioral propensities? Do you get algorithms? Do you get virtual machines? Do you get free compute that you can do in that space? That's kind of a crazy prediction of mine: I think you can actually do compute in that space that you don't pay for in this space, so to speak. So that's, you know.</p><p><strong>[1:22:31] Donald Hoffman:</strong> I've been listening to a lot of your podcasts on this and thinking about it. And I think there's a connection with the hidden states in this Markov system: if we just see a trace, most of the intelligence is something you don't see. And to the trace observer, that's all in a platonic realm, because you literally cannot see it. And yet, what you're seeing is entirely a trace of that world. So your visible world is controlled by this quote-unquote platonic space that you cannot see. And that's why I was thinking about your stuff with the planaria, for example: you cut off the head and cut off the tail, and you can change the electric fields and make it have two heads and so forth. How does it know how to do that? Where is that? So I've been thinking, somehow all we're seeing is the planarian in our trace. We're not seeing beyond what we can see. 
There's a whole Markov realm of intelligence out there that is projecting down into what we can see, which is just the planarian. So I'd really love to explore with you guys, as this formalism matures, how we might use it in concrete ways to model specific platonic spaces for specific memories, biological memories. Because I think this gives us the tools, right? There are the exits, the dark states, and the entrances. All those tools are part of the platonic space. We're new to this ourselves; we don't know how to use those tools yet. But if we learn how to use them to model specific things, we might be able to make that platonic space not just a hand wave: here is the Markov chain, here's the trace, and this is why it looks like there's a platonic intelligence guiding you. But it would be multi-scale, right? There's going to be multi-scale structure in the dark space.</p><p><strong>[1:24:39] Michael Levin:</strong> What might be really fun is, so definitely we should do that with some of the biology examples that we have. In particular, for example, with some of the synthetic things that we have, Xenobots, Neurobots, because they raise what I think is perhaps the most interesting part of this: where do the goals and properties of novel beings come from, when you can't just pin it on eons of selection? Where do they come from? But I would complement that with, and I think this would be even easier, applying these things to some of the minimal computational models that we have. Right, we have sorting. Sorting is one, and there's going to be a bunch of new work on that coming soon. 
But we have others. We have some really interesting stuff that'll come out soon on giving embodiments to various very weird sources, like mathematical objects: making robots that are driven not by conventional algorithms and sensors, but whose entire behavior is driven by sort of pre-cooked, static mathematical constants or mathematical objects, and watching how those things end up navigating a world and adapting, and what cognitive features they end up having, and so on. So I think those things are simple enough that we could actually make a pretty tight mapping onto what you have in terms of states and things like that.</p><p><strong>[1:26:13] Donald Hoffman:</strong> Interesting. It may be, though, that when you get into pure mathematics and sort of a hidden platonic realm that's doing stuff, like sorting algorithms that are doing other things you didn't expect, I'm having my mind stretched to think how Markov chains could do that. It seems like that might be something even deeper somehow.</p><p><strong>[1:26:34] Michael Levin:</strong> Well, that was the other thing I was thinking: what is totally doable now is to take your system and apply the tests that we have. We have a range of assays that are basically taken right out of the behaviorist handbook. Because the one thing I think the behaviorists got right is that they weren't worried about what the implementation was. And so it's very easy to apply their tools to anything, right? So we could actually look for habituation, sensitization, associative conditioning, delayed gratification, path planning, illusions, counterfactuals, all this kind of stuff. We have assays now with which we can look for all of that.</p><p><strong>[1:27:15] Donald Hoffman:</strong> And it's clear to me that there are always going to be Markov chains that we can build to do that. So it'll be possible; they're universal, like Turing machines. 
Markov chains are universal.</p><p><strong>[1:27:28] Michael Levin:</strong> So I'm not talking about building ones that do it. I'm talking about finding it in simple or random ones that you don't think should be doing it. That's the trick. We're finding these capacities in very simple systems, with no design, no selection. Usually you think you need one of three things: rational design by an engineer, selection or evolution, or learning, right? Those are the three things you need. We're not doing any of that. We're pulling it out of, I don't know, you'll judge for yourself where you think it's coming from, but I think we could do that with random matrices or whatever.</p><p><strong>[1:28:13] Donald Hoffman:</strong> That's right. We may be able to find matrices in which what we're seeing in the organism is the trace, but the invisible states carry a lot of the intelligence that leads to what you're seeing. And so we write down the matrix such that, even though in the trace you can't see why it's doing that, it does it. But in the big matrix, you see why it's doing it.</p><p><strong>[1:28:35] Michael Levin:</strong> Yeah.</p><p><strong>[1:28:37] Chris Fields:</strong> I'm going to have to jump off, guys. Great conversation.</p><p><strong>[1:28:41] Michael Levin:</strong> Thanks, Chris.</p><p><strong>[1:28:43] Chris Fields:</strong> Good to see you.</p><p><strong>[1:28:44] Michael Levin:</strong> Thanks, Chris.</p><p><strong>[1:28:46] Unknown:</strong> Good to see you, Chris.</p><p><strong>[1:28:51] Robert Chis-Ciure:</strong> Mike, will we have the recording of this?</p><p><strong>[1:28:54] Michael Levin:</strong> It is being recorded. I'll send you guys a link. If everybody's okay with it, I'll put it up on our center channel. But regardless, you can all have a copy.</p><p><strong>[1:29:03] Donald Hoffman:</strong> That's fine with me. 
Yeah.</p><p><strong>[1:29:04] Michael Levin:</strong> Great.</p><p><strong>[1:29:05] Robert Chis-Ciure:</strong> Perfectly fine.</p><p><strong>[1:29:07] Michael Levin:</strong> I think there's a ton of stuff to do. So why don't we go off and think about some specific directions and let's come back. I already have some thoughts, but there'll be lots more in a few weeks when I can send around some pre-prints. I'm having my students write these things up and so then I'll send them out.</p><p><strong>[1:29:27] Donald Hoffman:</strong> Very good. And if we maybe then pick a particular simple problem system that we can see what the trace logic might do on it and see where we go from there. That would be fun.</p><p><strong>[1:29:35] Michael Levin:</strong> Yeah, that'd be great.</p><p><strong>[1:29:37] Robert Chis-Ciure:</strong> And I think a natural extension of our former work after we did the FEP thing is basically try to apply all this stuff to it and see how you can get even deeper, more interesting things you can say about all this computing intelligence across scales in a more fine-grained manner than even the FEP allows.</p><p><strong>[1:30:01] Donald Hoffman:</strong> I agree that this seemed to be a natural connection with the project you guys are doing right now.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Discussion: Lisa Maroski, Michael Levin, Richard Watson</title>
          <link>https://thoughtforms-life.aipodcast.ing/discussion-lisa-maroski-michael-levin-richard-watson/</link>
          <description>Lisa Maroski, Michael Levin, and Richard Watson discuss how language shapes thinking in diverse intelligence, covering systems thinking, recursion, biological agency, patterns, and belief formation.</description>
          <pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 69d8a23d983bbd0001fafd1f ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
<content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/0U9gOKqGxwY" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/833b1daa/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour discussion with Lisa Maroski and Richard Watson about the role of language in shaping our thinking in the field of diverse intelligence and beyond.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Language and systems thinking</p><p>(04:25) Parts wholes and resonance</p><p>(07:34) Who versus what</p><p>(12:45) Recursion and new structures</p><p>(17:47) Searching for a word</p><p>(25:16) Holding multiple polarities</p><p>(29:41) Nested biological agency</p><p>(35:42) Patterns and Platonic minds</p><p>(45:41) Cross-level observers and time</p><p>(52:45) Knowing and owning beliefs</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Lisa Maroski:</strong> Let me just run down some of the list, and if any of them strike you as, yeah, I want to start there.</p><p><strong>[00:10] Michael Levin:</strong> Sure. And also, if you wanted to take a few minutes and just talk about your work and tell us where you're coming from and what you've been working on, that's great, too.</p><p><strong>[00:22] Lisa Maroski:</strong> OK, I'll give a-- since I'm the unknown quantity here. I'll do that. So my work has been very transdisciplinary, but focused particularly on language. When I was in college reading systems theory, von Bertalanffy, I kept thinking, if only he had developed a better language for this. And so I'm not setting out to do that for von Bertalanffy, but I started seeing things, seeing aspects of both language and our worldviews, our cognitive models that I think keep people constrained. I saw both and everywhere. I saw the interconnectedness of nature and nurture and mind and body. And it just seems silly that people would argue about, is it this or that, instead of looking for a way to language the dynamics, the interpenetration, the both-and-ness of them. 
And I was also influenced by topology, the Möbius strip and the Klein bottle, and saw them as interesting metaphors for doing just that, for maintaining both the distinction: a Möbius strip seems to have two different sides, just like a piece of paper, but looked at globally, it only has one side. And same for the Klein bottle and inside and outside. And so I started looking for ways to bring this kind of multi-layered thinking into language itself. Is that enough? Yeah, that's great. So, yeah, where it overlaps with your work, this can be used for both local and global perspectives, multiple scales, and specifying... What I'm really interested in is seeing how, particularly in biology, when you are cognizant of the multiple scales of intelligence and goal setting and cognition that are going on, how do you talk about all of that together? But I'm not asking you for answers. I'm saying this is what I'm working on, and maybe you have some insights that could help me, and I have some insights that could help you. I don't know. Let's find out.</p><p><strong>[04:25] Richard Watson:</strong> I would very much like to be able to talk about the relationship between parts and wholes and nested selves in a way that respected the autonomy of the parts and the autonomy of the whole and the relationship between them. I guess we're relatively comfortable with the idea that some concepts are relational, that we're not just talking about things, but we're talking about relationships between them. And we're relatively comfortable with talking about processes rather than structure or material. But when it has that nested relationship, it's usually treated in a very dull way of just parts and wholes. And I feel like there's something quite deep about what we need to get to in our understanding of mechanisms of cognition and what cognition is and how cognition works. That's intrinsically about the relationship between parts and wholes and the...
But those aren't the right words, right? Between the selves within and the self without and the self that is the two together. That it's something to do with-- because I think about cognition as being about causes that operate on different time scales. So multiple instances of a process that is rapid versus one instance of a process that is slow and that-- memory is about bringing causes from the past into the here and now, which is just another way of saying there's multiple timescales involved in the causes that you're talking about. And that does, yeah, well, at the very least, I agree with you that the existing language is insufficient to be able to talk about those things, and I would like to be able to talk about them more easily. I often resort recently I've been resorting to metaphorical language or possibly advocating for the literal interpretation in terms of things like resonance. In particular, resonance between a tune played at one frequency and the same tune played at a lower frequency that nonetheless are the same tune and resonate with each other and hold the shape of each other, right? Which has a sort of the insides reaching out to the outside and the outside reaching into the inside sort of feeling to it. So maybe if there were other words available to talk about such concepts, then I wouldn't have to call it a metaphor and I wouldn't have to say it was literal either. I could say it was that kind of thing that I'm talking about.</p><p><strong>[07:34] Michael Levin:</strong> From my side, in terms of language, I've been thinking that one of the fundamental limiting aspects of our language, and I don't know if this is true in other languages, the ones that I know all have the same problem, but I don't know very many. So maybe other languages do this better. But we only have two options. We have a what and we have a who. And that's it. Everything is either a what or a who. And I mean, I don't even think that works when you have a dog. 
You're like, well, it may not be a who, but it's definitely not a what. And so this idea that we're going to just divide the world into two sharp categories, and that's all that our language allows. So I started thinking, and I don't know what the answer is, but my crazy version for this was to put a little exponent on the O. So, like, you could have, I'm a level 10 who, right? So, and maybe if I go to some meditation retreat or whatever, I'll gain a, I'll be a level 11 or something, and maybe my dog is a level 7, and maybe my Xenobots are a level 3. I have no idea. But this notion that at least, even if we don't agree on what the exponent is or whatever, this idea that it just can't be two sharp categories for this sort of thing, I think. You learn that so early in your language, and it just freezes everything from then on, so that we have to keep having the same arguments again and again about the spectrum of cognition. And maybe that's why it made, right? Because it's just baked into the language. So I don't know. I'd be interested to know if there were other languages that have wider options, but it's, yeah, I think that's one of those things that has to be melted down and redone.</p><p><strong>[09:26] Lisa Maroski:</strong> Other languages do divide the categories differently. Some indigenous languages include many more beings, types of beings, in the who category. They will give beavers and mountains and trees personhood, knowing that they're not human persons, they're beaver persons or mountain persons. So it's not just a language issue. It's a category structure issue.</p><p><strong>[10:07] Richard Watson:</strong> Yeah.</p><p><strong>[10:08] Lisa Maroski:</strong> Which relates back to culture then.</p><p><strong>[10:11] Richard Watson:</strong> Yeah. I guess in English, we talk about the spirit of something. And in some contexts, we mean that in a quite who-like way. In another context, we mean that in a quite what-like way.
So part of what we're talking about here is language in the sense of, hey, wouldn't it be useful if we had a word for this? And together with that, if we had a word for this, it might change the way that we think about things and the ontological structures that we impose over things. But also part of it is perhaps the thing that we want to talk about is a linguistic thing, that the thing that we want to talk about is how can one part refer to another, or how can one part establish identity or non-identity with another? And that we're talking about, for example, when we're talking about the sort of strange loops that you mentioned at the beginning, in Hofstadter's term, like the Möbius loop and the Klein bottle, that have the idea, the feeling of the inside reaching out to be the outside or that the inside and outside isn't clearly defined or that it's flipping back and forth or something like that. That's a thing that you can do when you can do language. But when you take everything literally non-declaratively, non-referentially, it's just a concept you can't have. It's all, do you know what I'm reaching for? He says, lack of language.</p><p><strong>[12:08] Lisa Maroski:</strong> I think you're reaching for the distinction between metaphoric language and literal language, and in some circumstances, like science, we try to reach for literal more often than metaphoric, even though I know you both understand that science is full of metaphor at multiple levels, both in the level that we talk about things and also at the level of scientific models are a kind of metaphor as well.</p><p><strong>[12:45] Richard Watson:</strong> So take Chomsky's notions of linguistic structures, productivity, compositionality, systematicity, all of which are involved in recursion. And it feels like there's, it's not just that, it's not just that we need words for those things as though, as observers looking at those things, we need words for it.
But the thing that we want to talk about is intrinsically of that kind. It's intrinsically linguistic in nature, that the kind of concepts like center embedding, systematicity, compositionality, and things like those are like the kind of construct that we want to be able to talk about when we're saying these Mobioid, I just made that up, structures where the inside reaches out to the outside and vice versa. And that's not just because we need a word for that thing that's out there, but because the thing that we're talking about is a sort of abstraction, sort of something that can only exist when abstractions are possible. Maybe that's what I'm trying to say. Like you can only create a paradox by using words which label things in a particular way that creates paradoxes, right?</p><p><strong>[14:18] Lisa Maroski:</strong> I'd also like to add another caveat to my interests. In looking at how, as you know, language and culture and the different aspects of language are so interconnected themselves, I'm not just looking for new words. I'm actually looking for new structures for language to be able to express this kind of mobile, mobioid, I'll use your new word, types of relationships and ways of expressing the complexity of certain types of, again, there's no word, experiences, processes that we're trying to have a way to discuss without having to reduce them to the old categories. So it's kind of a difficult project because it involves change at multiple levels simultaneously, which is often difficult to do. So yes, change at both the cognitive level, meaning how we think about and categorize the world and how we speak about it and write about it. All of those forms, I think, need simultaneous changing. Otherwise, the system, language is a system that likes to maintain some level of homeostasis or homeodynamics. The various already existing structures help to keep the whole intact when one part of it wants to go off and do something different. Does that make sense? 
In other words, while I'm all for neologisms, I don't think they're enough. I think language has to really embrace, or we as language users have to create or evolve our own language to be able to express the kind of things that you're doing at interesting multi-level systems dynamics.</p><p><strong>[17:46] Richard Watson:</strong> Yeah.</p><p><strong>Lisa Maroski:</strong> Yeah.</p><p><strong>[17:47] Richard Watson:</strong> So I'm gonna have a go at describing the concept that I want a word for.</p><p><strong>[17:52] Lisa Maroski:</strong> Okay.</p><p><strong>[17:56] Richard Watson:</strong> So building on those words for the kind of properties that one might want from a systematic language, compositionality, productivity, and systematicity. And concepts like being able to refer to something rather than already being it, being a reference to something. From that, we build up to an idea of something being self-referential, that it refers to itself. And then I don't quite want something that refers to itself. I want something like whole referential parts and part referential wholes so that they're referring between levels. But I don't just want either one of those either. I want both of them at the same time. The whole is referring to the parts and the part is referring to the whole at the same time. But I don't quite want that either. What I really want is one where you can't really tell which is the whole and which is the parts because it keeps turning inside out. I'd like a word for that.</p><p><strong>[19:08] Lisa Maroski:</strong> That's beautiful.</p><p><strong>[19:14] Richard Watson:</strong> Maybe that is the word, just beauty.</p><p><strong>[19:19] Lisa Maroski:</strong> Yeah, so you also have notions in there that are holographic and fractal. So one of the neologisms, new forms that I did make up, I'm not sure it fits everything, but I think it at least fits perfectly. 
For part of what you're looking for, I invented a glyph that I call Mobi, which means distinct but not separate from. And so the distinct part is that linguistically, you can distinguish this bit of a whole, but ontologically, they're not separate. So it's a way of capturing something about a system that allows you to say, well, okay, this part of the system does this and this part of the system does that, without turning both of those into different what's, to use Mike's distinction earlier. It's a way of retaining the wholeness and the partness simultaneously. And so we are Mobi at many different levels. I am Mobi my microbiome. So I am distinct, but not separate from my microbiome. My microbiome makes up me. I would not be me without my particular microbiome. But yet those are also whole organisms within themselves and collectivities within themselves, within me. And I can also say I am Mobi my place here in California, or I am Mobi the Earth because I am not separate from the Earth. If I got separated from the Earth, and then I just thought, oh, we just sent astronauts, maybe this isn't going to work. But while I'm on Earth, I am interdependent with it. I need it for my sustenance. It needs me as well. And so I think a term like that may be heading in the direction you're looking for. I don't think it fully captures what you're looking for though.</p><p><strong>[22:34] Richard Watson:</strong> So I often these days return to the notion of things being the same and different at the same time.</p><p><strong>[22:42] Lisa Maroski:</strong> Yes.</p><p><strong>Richard Watson:</strong> Which is not quite the same as being distinct, but not separate from, because that doesn't necessarily imply that there's a symmetry there, right? That there's a sameness there. There's an interdependent parts-ness and so a separateness and a non-separateness at the same time, but not necessarily a sameness and a distinction at the same time.
So an example of a sameness, same and different at the same time, is an object and its reflection. Yes. So an object's reflection, if it was different, it wouldn't be its reflection, right? It has to be the same, but it also isn't the same because this is the object and that's its reflection. Unless, of course, I was inside the looking glass and then that would be the object and this would be the reflection, right? So there are two different things there, two different things that I can refer to and also they're the same, right? Or maybe they are different, but only in one respect, right? That there's a line of, there's a plane of symmetry. So the distances are all opposite in that one dimension. And what I would really like is to be better able to articulate that same and different at the same time, but with the nested whole. You know, the whole is different from the parts, but it's also a reflection of the parts and the parts are different from the whole, but they're a reflection of the whole and that they are, there's a sameness there and a difference there, but in that, in that scale relationship, that containment relationship, more particularly than an object in its reflection. And also still keep that in a, and I don't really know which one is the parts, and which one is the whole, and which one is the whole, and which one is the parts, right? I don't know if I can really articulate why I'm attached to that last bit, but I am. So I could, but I don't know if it would help.</p><p><strong>[25:16] Lisa Maroski:</strong> So one of the ways that I tried to address that kind of wanting to hold both at the same time, whether it's sameness and difference or some other concept, is to put concepts like that in a structure like a yin yang symbol, just to present both of them simultaneously so that when you refer to one, the other is right there. 
That sameness can't be sameness without difference.</p><p><strong>[26:15] Richard Watson:</strong> Well, it in particular has that foreground-background ambiguity and the contained-containing ambiguity as well. That does do a lot of the work.</p><p><strong>[26:33] Lisa Maroski:</strong> And you can combine multiple ones. For example, and this is where I think our culture really needs some help to be able to hold multiple polarities like that simultaneously so that we can think about, for example, I'm just going to use the example in the book because it's simplest. It's one that, in American culture, we talk a lot about freedom and that word gets bandied about, but freedom can't be freedom without some sort of responsibility behind it. And when we talk about freedom, it's not just every person's freedom. The individual's freedom is essentially constrained by and given by the collective freedom. So there's a freedom responsibility polarity. There's a self-other polarity. And then there's also a temporal one, like my freedom right now to do X versus, and considered along with, how is that gonna play out in the long term? So there's like a short-term, long-term. So how can we think about all of these multiple polarities simultaneously and be able to express them? I don't know. I mean, I'm sure that sort of thing comes up in biology as well, just to try to loop Mike back into the conversation. Because the body is doing, is balancing all kinds of different polarities, whether it's the sympathetic and parasympathetic nervous systems working simultaneously along with interactions with the environment, along with cognitive interactions, emotions and feelings, and all of those things in play simultaneously. I should probably come up with a question here.</p><p><strong>[29:35] Richard Watson:</strong> Why is Mike scowling?</p><p><strong>[29:41] Michael Levin:</strong> Oh no, I'm not scowling. Yeah, no, please, if you have a question, let's do it.
I mean, you're right, of course. I think in any body there are numerous different agents with agendas and priors and different capabilities, and they're all hacking each other, and the higher levels are bending the lower levels, and the lower levels are constraining and then enabling things at higher levels too. I mean, this is a huge ecosystem for that kind of stuff. And I don't just mean bacteria versus cells. All of these things are nested and whatnot.</p><p><strong>[30:16] Richard Watson:</strong> But also, you as a whole can feel the stress of your parts, and I think your parts can feel the stress of the whole, that they tune in with one another directly, that your identity and the identity of your parts is not just a containment relation, but that there's a sort of skipping levels. Like, it ****** my cells off, it ****** me off, and I was like, why? That doesn't have to equate, does it?</p><p><strong>[30:53] Michael Levin:</strong> Yeah.</p><p><strong>[31:00] Richard Watson:</strong> I can just know that they are without connecting with them in that way, without identifying with their affect.</p><p><strong>[31:21] Lisa Maroski:</strong> Yeah.</p><p><strong>[31:24] Michael Levin:</strong> And isn't there a language issue there too, in the sense that in order to have the kind of relation that Richard just mentioned, there has to be some impedance match? Like, at the very least, you need to share a concept of being ****** *** and whatever, so that you can be in vaguely similar states. And so then you wonder, what are the states that we don't share that we don't know about, right? And that's also a language issue. 
There could be all kinds of, and in fact, they're almost guaranteed to be all kinds of things that the cells and then the molecular networks inside of them and the bioelectric gradients and the tensile forces and everything else, they could have all sorts of other states that we can't really, you know.</p><p><strong>[32:08] Richard Watson:</strong> They're feeling all mobilacking right now and we're struggling to tune in with that.</p><p><strong>[32:18] Lisa Maroski:</strong> And they might have differing goals than what the whole organism has. If I had a bunch of candida, I might be craving sugar. And while the whole organism, me, is trying to diet and not wanting to eat sugar. And so the other language issue in this scenario that I find interesting is the agency at multiple levels. The candida in my gut have their own agency. They're trying to live their life in their environment. Their environment just happens to be me. I'm trying to live my life in my environment. There's probably all sorts of other bifidobacteria and other creatures trying to live out their lives, needing different things, wanting different things. And how all of this maintains balance and to stay healthy, to continue the infinite game of life. There are so many infinite games going on, just in every single organism. Do you, are you familiar with Carse?</p><p><strong>[34:05] Michael Levin:</strong> Sorry, with what? Say that again.</p><p><strong>[34:06] Lisa Maroski:</strong> James Carse's Finite and Infinite Games, the distinction between them.</p><p><strong>[34:12] Michael Levin:</strong> I don't know the name. I mean, I think I know about the distinction, but I don't know who James Carse is. Tell us.</p><p><strong>[34:18] Lisa Maroski:</strong> He was a philosopher. I think he was at NYU, no longer with us.
Wrote a wonderful little book called Finite and Infinite Games.</p><p><strong>[34:26] Michael Levin:</strong> Interesting.</p><p><strong>[34:28] Lisa Maroski:</strong> You know that some games are made to be played to win or lose. And some games are made to be played so that the game can continue to be played. And those have very different kinds of rules than the games that are made to be won or lost. And yet, also that there are finite games that keep the infinite game going. So the finite games within our body are that cells senesce and die, and to keep the infinite game going, we have other cells that come along and process them and take their parts and recycle the parts. So there's a living and dying game going on, both at the cellular level and at the more than cellular level, that keeps the infinite game of life itself going.</p><p><strong>[35:42] Michael Levin:</strong> Yeah, I mean, I think it gets even more, like, the combinatorics get even weirder because I think it's not just the tangible things like cells and the bacteria and whatnot, but you can think about the patterns as well, right? And so this is something that we've been working on lately is kind of fuzzing out that distinction between thoughts and thinkers in a sense, between real beings and just patterns and excitable media, that sort of thing. So when you have the butterfly-caterpillar, the caterpillar-butterfly transition, there are multiple, so you can take the perspective of the caterpillar and face a kind of singularity and sort of think about what it means, whether you're going to exist or not. And you can take the perspective of the butterfly and ask, because they do inherit some of the memories of the caterpillar, in fact, remapping them to their new embodiment. And so the butterfly might ask, like, I have these, I have some weird, you know, feelings about certain stimuli. Why is that? I don't remember ever having encountered it. 
Like there's no specific encounter, but I have these, I've been saddled with these odd behavioral propensities for some reason that I don't think I own, but clearly I do, and so on. And so that's bad enough, but you can also take the perspective of the memory itself as a pattern within the cognitive medium of the caterpillar and the idea that you're not going to survive as a caterpillar memory. You can't because you're about things the butterfly doesn't care anything about. But if you are plastic to the extent that you can remap and generalize, and sort of now you can be about things. So whereas before it was the kind of motion that a soft-bodied creature can do in order to reach some leaves, in the butterfly you might be the kind of motions that a hard-bodied kind of creature like a butterfly can do in 3D, by the way, with an extra dimension to your thing. And it's no longer about leaves. Now it's about nectar. And the perception is different because your eyes are different and everything is different. But the associative conditioning that you received as a larva passes on, right? So you can persist. And so this notion that we've been exploring around that, like, spectrum, you know, we have fleeting patterns, so fleeting thoughts, these sort of like a wave sort of comes and goes. And then you have these sort of persistent thoughts, which are hard to get rid of. They do a little bit of work to keep themselves going, maybe a little niche construction in your brain, you know, as depressive and those kind of repetitive thoughts do. And then there's some other stuff. And then eventually you get to something that's maybe a personality fragment, from dissociative identity kind of scenarios where you're not a full human personality, but you're way more than a simple thought pattern because you can plan and you can have, you know, preferences and so on. And then there's a full human personality and then who knows what's on the other side of that. 
So, you know, whatever vocabulary we have for all these things has to take those kinds of things into account as well, potentially.</p><p><strong>[38:59] Lisa Maroski:</strong> So I've heard you talk about platonic spaces a lot. So it seems like you're making distinctions between different types of, let's just use Plato's words, platonic forms within a platonic space.</p><p><strong>[39:25] Michael Levin:</strong> So I am not trying to stick close to Plato's ideas to whatever extent we even know what they are. The reason, and this may need a total vocabulary refresh at some point, I went with Platonic because I wanted to anchor it first in mathematics and go from there. And when you say Platonic patterns to mathematicians, they know exactly what we mean, and some percentage of them agree with this idea that there are important facts, or more broadly, patterns that are not physical facts. These are things that you would not discover as a physicist. These are things that you can't just disband the math department and hope the physicists find these things. More importantly, you can't change these facts by tweaking the fundamental constants of physics. You're not going to change why quaternions don't obey the whatever it was, the property of multiplication and so on. You're not going to change that by changing the fine structure constant and things like that. So that's the idea: you start with the simple notion that, like it or not, there appear to be, at least temporarily, I think we have to say that there are at least two realms. I say it on purpose because people hate this notion that there's more than one realm. But I think it's important to say that if realms is to have any meaning at all, you have to be able to say that there are things in this realm that are quite different than the things that we're used to dealing with. And then from there, some other stuff, but basically then I want to drop the assumption, because that's all it is. I think it's not a result. 
It's an axiom that people add for some reason, and I don't think we should add it: that these patterns are only relevant for mathematics, that basically it's only the low-agency static forms that mathematicians study. My suspicion is that once you've accepted, and I don't see any way around it, once you've accepted that there are these kinds of patterns that are important for physics and biology and so on, but their origin is not in the physical world as we study it, then you might ask: could you have patterns that have various degrees of agency, including ones that we might recognize as kinds of minds? And so now you sort of shade smoothly into an old class of theories in the philosophy of mind, where minds are simply not of the physical world. They're something else. And then you have this interaction. And then, of course, you run into the interaction problem. But you already had an interaction problem between math and physics, is what I claim. So this is not new. Pythagoras, you know, already had all this. So that's kind of the idea. And we can even show some of the intermediate steps. So one of my favorites, do you know Patrick Grim's work at all? No. So he's a philosopher. I think he's at SUNY in New York. And he basically started out by saying, well, you've got this liar paradox, and this is interesting because it gets at language and so on. The reason it's a paradox is that you insist on one truth value, and then it's a problem.</p><p><strong>[42:30] Michael Levin:</strong> But if you treat it as a dynamical system, no problem. You've got an oscillator: true, false, true, false, true, false, right? You just have an oscillator. And once you do that, you can start making dynamical systems maps of English sentences that have various degrees of paradoxical self-reference, and you can have multiple ones.
So if you have two sentences, and sentence A is, I am 80% as true as sentence B is false, and sentence B is, well, I'm only true if sentence A is less than 70%, whatever. You have these things, and you can plot them. And so he shows these beautiful fractal structures that these things have, and some of them settle down, and some of them don't settle down. But what's interesting is, so you have your static patterns, like, e is a certain number, pi is more than three, and that's like a rock. It's not going anywhere. It's just how it is. It's the electron of that world. And then you have these little oscillators, like the liar paradox, just kind of buzzing up and down. But once you have those things, you can take the next step and make sets of sentences that act exactly like the gene regulatory network models that we studied: they can be trained. And so I have a student who's actually training sets of English sentences, because in the end they're just dynamical systems of that kind. And some of them, if you give them stimuli, and what I mean by stimuli is a temporary bump in one of the values. So you have, let's say, 10 sentences. They're all sort of about each other. And you can give a little bump to one. So that stimulates it, and some stuff happens. And then it settles down, and you do it again and you do it again. And you just ask the question, as you keep doing it: is there habituation? Is there sensitization? Can you condition stimuli on each other and get a placebo-effect kind of thing, as in associative conditioning? Turns out you can, and more. And then they don't have to be closed off and only be about each other. Some of the sentences can refer to things in the outside world. So you can give them an embodiment by saying, okay, here are your sentences, and also there's a lamp or a clock or whatever.
And one of those sentences might be, I'm only true if that thing is green, or I'm only true if the car is running. And so now they're about the outside world, but they have their own learning capacity, right? And it affects how they interact with the outside world. So you can build all of this grounded in these kinds of language-slash-logic systems. So you can imagine a whole set of these, and once you have associative conditioning, as far as I can see, possibly you could keep going. I don't know how far you can keep going. We haven't gone terribly far yet, but you can imagine things like that.</p><p><strong>[45:41] Lisa Maroski:</strong> So it seems that at least the liar paradox, and I don't know about the conditioning experiments that you're working on, relies on interaction between two different levels. So you have the level of the sentence: this sentence is false. But in order to judge the truth value of the sentence, you have to go to a higher level, which is the self-reflective judgment level. Let's take a different example. If you're looking at a written text and it says this sentence is red, but it's written in black ink, you have to jump out to that higher level to judge whether it's red or black, which sounds similar to what you're doing with the lamp: this is true if the lamp is green.</p><p><strong>[46:49] Michael Levin:</strong> Yeah, I mean, I think it's consistent with what we've been talking about here, which is numerous interacting observers at different levels. Because if you have multiple sentences, sentence B is just looking at sentence A, and that's okay. Different observers can, right? It's what Richard was saying: it crosses levels.
So the observations and the sensors and the effectors can cross levels, and biology is full of that kind of stuff, where you can sense something that might be mediated by a chemical signal, but the point isn't that you're sensing some specific chemical; you're sensing some systemic, high-level state that's mediated that way. So yeah, you can absolutely make that, and maybe you can even do crazy things and keep going further. Like, I am only as true as this whole thing is consistent, or worse, I am only as true as this whole thing is interesting. Or, you know, I'm true if and only if this thing doesn't settle to a stable point, right? Then maybe that's some kind of crazy Turing halting problem.</p><p><strong>[48:06] Richard Watson:</strong> Yeah, I was just writing those.</p><p><strong>[48:11] Michael Levin:</strong> Yeah, we haven't tried that yet.</p><p><strong>[48:12] Richard Watson:</strong> But this sentence is true only if those sentences are stable, and these sentences refer to each other in an unstable way. Only if that sentence is false, that kind of thing.</p><p><strong>[48:22] Michael Levin:</strong> And that introduces another degree of freedom, which is time. Because if you want to know that they're stable, it's not enough to take a snapshot. You have to watch them for some period of time; to know if it's settled down, you have to have multiple time points to compare it with.</p><p><strong>[48:43] Richard Watson:</strong> Like if it's in or out of the Mandelbrot set.</p><p><strong>[48:47] Michael Levin:</strong> You have to have observations over some period of time.
And now you're back to having different observers operating at different time scales and watching things at different time scales.</p><p><strong>[48:59] Richard Watson:</strong> What you said about that stimulus being a bump, which was to temporarily modify the truth of one of the statements, did that hold the truth values of all the other statements as they were while you did it?</p><p><strong>[49:15] Michael Levin:</strong> So we don't touch them externally, but of course the minute you do that, it's going to propagate. So we have to do the whole thing as discrete time, unfortunately, right? So during the time point that we're bumping it, we don't touch the others, but then we have to recalculate all the others, and the ones that are connected to it will immediately update state. So they will react to it themselves.</p><p><strong>[49:45] Richard Watson:</strong> So changing one of the variables without changing any of the others is a bit like a jump in time, right? So there's the state that you had before you bumped it and the state that you had after you bumped it. Imagine that those two states are related by running it forward in time: if you hadn't bumped it, and you just waited, you would have got to a state where everything was the same except for this bit, right? So that little bump is like nudging it in time by different amounts.</p><p><strong>[50:22] Michael Levin:</strong> It is. And I'm sure from inside the system, it looks like magic, because all of a sudden this one node bumped up. And if you're the node that normally feeds into it, you say, what is this? I didn't do that. How did this thing get bumped up all of a sudden, right? Because we're acting on it from outside the system.</p><p><strong>[50:45] Richard Watson:</strong> Or it looks like nothing at all. So it only looks like something if you have a time scale with which you can reflect upon what it was a moment ago, before you bumped it. Otherwise, you say, hey, that thing just moved.
And you say, what thing just moved? It's like, well, it used to be low and outside. No, it isn't. It's like it's already gone, right?</p><p><strong>[51:03] Lisa Maroski:</strong> So you're bringing up another important thing that I think language needs to have a little more specificity about, which is context and perspective. When you're looking at things from multiple perspectives, to be able to specify: okay, from this perspective, it looks like there's a bump; from this other perspective, it does not look like there's a bump. How do we reconcile these two perspectives? Or three or four or ten, when you're talking about more complex systems. And I'm not just thinking about language as the two-dimensional kinds of words that we write on a page. I'm thinking about expanding language to be more graphic, more fully two-dimensional, to be able to show relationships like that, not just say them one word after another like we're doing now. That's going to take a lot of work. That's going to take a lot of people coming together with a lot of different expertise. But I think what you're doing is helping to create a model system to start working on things like that.</p><p><strong>[52:45] Richard Watson:</strong> I'm reminded of what Mark Solms said in our conversation earlier today about forgetting why you believe something, forgetting that it becomes, what was the word he was using, automatized, that it becomes automatic. You don't know why, you don't know what the evidence was or what the thinking process was that brought you to that conclusion. And now it just becomes automatic. It's like, it's just who I am. It's not a belief I have. It's just who I am. The converse is a system that knows something about the process by which it arrives at its own truth, something like that, right? That's a weird kind of system already, right? How do you know? Because that already has that reaching-between-levels sort of feeling to it, that it knows.
It's not just the same as knowing something about your parts. Like, I can look in a microscope at my own belly fat. But that knowing something about the process that constructs my own knowing is a bit more strange-loopy than that. Yeah. And it feels like the real reason why I want this inside-outside thing to connect, or to flip, is because I want the recursion, I want a stopping clause for the recursion, right? It's like, well, you know, if I did know the process by which I knew this, how would I know the process by which I came to know the process by which I knew this, right? And that, you know, feels like it has a recursion that can't bottom out. And that bottom level has to somehow resonate all the way back to the top, just because that's what it means to know something, because that's what it means to say that you know what the process is, right? So it becomes not a material or substrate-dependent constraint that makes you think that, not a historical contingency or happenstance that makes you think that; it becomes a logical truth that you think that, because that's the only kind of thing you can really think that is self-consistent. So the platonic constraints that come from mathematical boundary conditions that you can't change, that just are truths, that are absolute truths, and the dirty, nitty-gritty historical contingencies, the substrate-dependency facts, they become the same, so that they're not really different from each other.</p><p><strong>[55:35] Lisa Maroski:</strong> So there is a structure in language, not very prevalent in English, not required in English, but present in some languages; it's called evidentials. In the grammar, you have to specify how you know what it is you're saying: whether you know it firsthand, so, I saw the bobcat; or through inference, there was a bobcat there because I saw the tracks that it left; or whether you got it third hand.
My neighbor told me he saw a bobcat around here yesterday, so be careful. Some languages require you to specify those sorts of things. And it sounds like it would be good to have a way to specify how we know other kinds of things that you were just talking about, from the nitty gritty to this is an abstract truth that is unchanging.</p><p><strong>[56:56] Richard Watson:</strong> Earlier, Mike mentioned the idea that the butterfly doesn't know where it got the odor aversion from. It's like, I don't know why. Why do I like this? Or why do I not like that? I find myself thinking about, well, whose preference does it think that is then? It's like, you can't think that it's somebody else's, right? In order to act, you have to own it, right? It has to be my preference in order for me to act on it, right? And then that sort of feels like, well, if, am I doing that when you tell me an idea that you have, right? When you tell me an idea or a concept that you have or a fact that you have, do I take that in in a way that it becomes, like I know it or am I still holding it? Like that's what Mike thinks, you know, or like if when I grok it, it's like something shifts, right? That it becomes my own thought. It's like, even though I also remember that you said it a moment ago and a moment ago I said I didn't get it. And now I do, but it still feels like my thought, right?</p><p><strong>[58:16] Lisa Maroski:</strong> I've had experiences like that where I've had an idea that I thought was mine, and then I go back and reread a book that I read 30 years ago, and it's like, oh, that's how I got it.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Discussion: Lisa Maroski, Michael Levin, Richard Watson</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Lisa Maroski, Michael Levin, and Richard Watson discuss how language shapes thinking in diverse intelligence, covering systems thinking, recursion, biological agency, patterns, and belief formation.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/0U9gOKqGxwY" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/833b1daa/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour discussion with Lisa Maroski and Richard Watson about the role of language in shaping our thinking in the field of diverse intelligence and beyond.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Language and systems thinking</p><p>(04:25) Parts wholes and resonance</p><p>(07:34) Who versus what</p><p>(12:45) Recursion and new structures</p><p>(17:47) Searching for a word</p><p>(25:16) Holding multiple polarities</p><p>(29:41) Nested biological agency</p><p>(35:42) Patterns and Platonic minds</p><p>(45:41) Cross-level observers and time</p><p>(52:45) Knowing and owning beliefs</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a 
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Lisa Maroski:</strong> Let me just run down some of the list, and if any of them strike you as, yeah, I want to start there.</p><p><strong>[00:10] Michael Levin:</strong> Sure. And also, if you wanted to take a few minutes and just talk about your work and tell us where you're coming from and what you've been working on, that's great, too.</p><p><strong>[00:22] Lisa Maroski:</strong> OK, I'll give a-- since I'm the unknown quantity here, I'll do that. So my work has been very transdisciplinary, but focused particularly on language. When I was in college reading systems theory, von Bertalanffy, I kept thinking, if only he had developed a better language for this. And so I'm not setting out to do that for von Bertalanffy, but I started seeing things, seeing aspects of both language and our worldviews, our cognitive models, that I think keep people constrained. I saw both-and everywhere. I saw the interconnectedness of nature and nurture and mind and body. And it just seems silly that people would argue about, is it this or that, instead of looking for a way to language the dynamics, the interpenetration, the both-and-ness of them.
And I was also influenced by topology, the Mobius strip and the Klein bottle, and saw them as interesting metaphors for doing just that, for maintaining both the distinction and the unity: a Mobius strip seems to have two different sides, just like a piece of paper, but looked at globally, it only has one side. And same for the Klein bottle and inside and outside. And so I started looking for ways to bring this kind of multi-layered thinking into language itself. Is that enough? Yeah, that's great. So, yeah, where it overlaps with your work, this can be used for both local and global perspectives, multiple scales, and specifying... What I'm really interested in is seeing how, particularly in biology, when you are cognizant of the multiple scales of intelligence and goal setting and cognition that are going on, how do you talk about all of that together? But I'm not asking you for answers. I'm saying this is what I'm working on, and maybe you have some insights that could help me, and I have some insights that could help you. I don't know. Let's find out.</p><p><strong>[04:25] Richard Watson:</strong> I would very much like to be able to talk about the relationship between parts and wholes and nested selves in a way that respected the autonomy of the parts and the autonomy of the whole and the relationship between them. I guess we're relatively comfortable with the idea that some concepts are relational, that we're not just talking about things, but we're talking about relationships between them. And we're relatively comfortable with talking about processes rather than structure or material. But when it has that nested relationship, it's usually treated in a very dull way of just parts and wholes. And I feel like there's something quite deep about what we need to get to in our understanding of mechanisms of cognition and what cognition is and how cognition works. That's intrinsically about the relationship between parts and wholes and the...
But those aren't the right words, right? Between the selves within and the self without and the self that is the two together. That it's something to do with-- because I think about cognition as being about causes that operate on different time scales. So multiple instances of a process that is rapid versus one instance of a process that is slow, and that-- memory is about bringing causes from the past into the here and now, which is just another way of saying there's multiple timescales involved in the causes that you're talking about. And, yeah, at the very least, I agree with you that the existing language is insufficient to be able to talk about those things, and I would like to be able to talk about them more easily. Recently I've been resorting to metaphorical language, or possibly advocating for the literal interpretation, in terms of things like resonance. In particular, resonance between a tune played at one frequency and the same tune played at a lower frequency, that nonetheless are the same tune and resonate with each other and hold the shape of each other, right? Which has a sort of the insides reaching out to the outside and the outside reaching into the inside sort of feeling to it. So maybe if there were other words available to talk about such concepts, then I wouldn't have to call it a metaphor, and I wouldn't have to say it was literal either. I could say it was that kind of thing that I'm talking about.</p><p><strong>[07:34] Michael Levin:</strong> From my side, in terms of language, I've been thinking that one of the fundamental limiting aspects of our language, and I don't know if this is true in other languages, the ones that I know all have the same problem, but I don't know very many, so maybe other languages do this better: we only have two options. We have a what and we have a who. And that's it. Everything is either a what or a who. And I mean, I don't even think that works when you have a dog.
You're like, well, it may not be a who, but it's definitely not a what. And so there's this idea that we're going to just divide the world into two sharp categories, and that's all that our language allows. So I started thinking, and I don't know what the answer is, but my crazy version for this was to put a little exponent on the O. So you could have, I'm a level 10 who, right? And maybe if I go to some meditation retreat or whatever, I'll be a level 11 or something, and maybe my dog is a level 7, and maybe my Xenobots are a level 3. I have no idea. But this notion that, even if we don't agree on what the exponent is or whatever, this idea that it just can't be two sharp categories for this sort of thing, I think. You learn that so early in your language, and it just freezes everything from then on, so that we have to keep having the same arguments again and again about the spectrum of cognition. And maybe that's why it's maintained, right? Because it's just baked into the language. So I don't know. I'd be interested to know if there were other languages that have wider options, but yeah, I think that's one of those things that has to be melted down and redone.</p><p><strong>[09:26] Lisa Maroski:</strong> Other languages do divide the categories differently. Some indigenous languages include many more beings, types of beings, in the who category. They will give beavers and mountains and trees personhood, knowing that they're not human persons; they're beaver persons or mountain persons. So it's not just a language issue. It's a category structure issue.</p><p><strong>[10:07] Richard Watson:</strong> Yeah.</p><p><strong>[10:08] Lisa Maroski:</strong> Which relates back to culture then.</p><p><strong>[10:11] Richard Watson:</strong> Yeah. I guess in English, we talk about the spirit of something. And in some contexts, we mean that in a quite who-like way. In another context, we mean that in a quite what-like way.
So part of what we're talking about here is language in the sense of, hey, wouldn't it be useful if we had a word for this? And together with that, if we had a word for this, it might change the way that we think about things and the ontological structures that we impose over things. But also, part of it is perhaps that the thing that we want to talk about is a linguistic thing: the thing that we want to talk about is how can one part refer to another, or how can one part establish identity or non-identity with another? And that we're talking about, for example, when we're talking about the sort of strange loops that you mentioned at the beginning, in Hofstadter's terms, like the Möbius loop and the Klein bottle, that have the idea, the feeling, of the inside reaching out to be the outside, or that the inside and outside isn't clearly defined, or that it's flipping back and forth, or something like that. That's a thing that you can do when you can do language. But when you take everything literally, non-declaratively, non-referentially, it's just a concept you can't have. It's all... do you know what I'm reaching for? He says, for lack of language.</p><p><strong>[12:08] Lisa Maroski:</strong> I think you're reaching for the distinction between metaphoric language and literal language, and in some circumstances, like science, we try to reach for literal more often than metaphoric, even though I know you both understand that science is full of metaphor at multiple levels, both at the level at which we talk about things and also at the level of scientific models, which are a kind of metaphor as well.</p><p><strong>[12:45] Richard Watson:</strong> So take Chomsky's notions of linguistic structures: productivity, compositionality, systematicity, all of which are involved in recursion. And it feels like it's not just that we need words for those things, as though, as observers looking at those things, we need words for it.
But the thing that we want to talk about is intrinsically of that kind. It's intrinsically linguistic in nature: the kinds of concepts like center embedding, systematicity, compositionality, and things like those are the kind of construct that we want to be able to talk about when we're saying these Mobioid, I just made that up, structures, where the inside reaches out to the outside and vice versa. And that's not just because we need a word for that thing that's out there, but because the thing that we're talking about is a sort of abstraction, something that can only exist when abstractions are possible. Maybe that's what I'm trying to say. Like, you can only create a paradox by using words which label things in a particular way that creates paradoxes, right?</p><p><strong>[14:18] Lisa Maroski:</strong> I'd also like to add another caveat to my interests. In looking at how, as you know, language and culture and the different aspects of language are so interconnected themselves, I'm not just looking for new words. I'm actually looking for new structures for language, to be able to express this kind of mobioid, I'll use your new word, types of relationships, and ways of expressing the complexity of certain types of, again, there's no word, experiences, processes, that we're trying to have a way to discuss without having to reduce them to the old categories. So it's kind of a difficult project, because it involves change at multiple levels simultaneously, which is often difficult to do. So yes, change at both the cognitive level, meaning how we think about and categorize the world, and at the level of how we speak about it and write about it. All of those forms, I think, need simultaneous changing. Otherwise, language, as a system, likes to maintain some level of homeostasis or homeodynamics. The various already existing structures help to keep the whole intact when one part of it wants to go off and do something different. Does that make sense?
In other words, while I'm all for neologisms, I don't think they're enough. I think language has to really embrace, or we as language users have to create or evolve our own language to be able to express, the kinds of things that you're doing with interesting multi-level systems dynamics.</p><p><strong>[17:46] Richard Watson:</strong> Yeah.</p><p><strong>Lisa Maroski:</strong> Yeah.</p><p><strong>[17:47] Richard Watson:</strong> So I'm gonna have a go at describing the concept that I want a word for.</p><p><strong>[17:52] Lisa Maroski:</strong> Okay.</p><p><strong>[17:56] Richard Watson:</strong> So building on those words for the kind of properties that one might want from a systematic language: compositionality, productivity, and systematicity. And concepts like being able to refer to something rather than already being it, being a reference to something. From that, we build up to an idea of something being self-referential, that it refers to itself. And then I don't quite want something that refers to itself. I want something like whole-referential parts and part-referential wholes, so that they're referring between levels. But I don't just want either one of those either. I want both of them at the same time. The whole is referring to the parts and the part is referring to the whole at the same time. But I don't quite want that either. What I really want is one where you can't really tell which is the whole and which is the parts, because it keeps turning inside out. I'd like a word for that.</p><p><strong>[19:08] Lisa Maroski:</strong> That's beautiful.</p><p><strong>[19:14] Richard Watson:</strong> Maybe that is the word, just beauty.</p><p><strong>[19:19] Lisa Maroski:</strong> Yeah, so you also have notions in there that are holographic and fractal. So one of the neologisms, new forms, that I did make up, I'm not sure it fits everything, but I think it at least perfectly fits part of what you're looking for. I invented a glyph that I call Mobi, which means distinct but not separate from. And so the distinct part is that linguistically, you can distinguish this bit of a whole, but ontologically, they're not separate. So it's a way of capturing something about a system that allows you to say, well, okay, this part of the system does this and this part of the system does that, without turning both of those into different whats, to use Mike's distinction earlier. It's a way of retaining the wholeness and the partness simultaneously. And so we are Mobi at many different levels. I am Mobi my microbiome. So I am distinct, but not separate from, my microbiome. My microbiome makes up me. I would not be me without my particular microbiome. But yet those are also whole organisms within themselves and collectivities within themselves, within me. And I can also say I am Mobi my place here in California, or I am Mobi the Earth, because I am not separate from the Earth. If I got separated from the Earth... and then I just thought, oh, we just sent astronauts, maybe this isn't going to work. But while I'm on Earth, I am interdependent with it. I need it for my sustenance. It needs me as well. And so I think a term like that is heading in the direction you're looking for. I don't think it fully captures what you're looking for, though.</p><p><strong>[22:34] Richard Watson:</strong> So I often these days return to the notion of things being the same and different at the same time.</p><p><strong>[22:42] Lisa Maroski:</strong> Yes.</p><p><strong>Richard Watson:</strong> Which is not quite the same as being distinct but not separate from, because that doesn't necessarily imply that there's a symmetry there, right? That there's a sameness there. There's an interdependent parts-ness, and so a separateness and a non-separateness at the same time, but not necessarily a sameness and a distinction at the same time.
So an example of a sameness, same and different at the same time, is an object and its reflection. Yes. So an object's reflection, if it was different, it wouldn't be its reflection, right? It has to be the same, but it also isn't the same because this is the object and that's its reflection. Unless, of course, I was inside the looking glass and then that would be the object and this would be the reflection, right? So there are two different things there, two different things that I can refer to and also they're the same, right? Or maybe they are different, but only in one respect, right? That there's a line of, there's a plane of symmetry. So the distances are all opposite in that one dimension. And what I would really like is to be better able to articulate that same and different at the same time, but with the nested whole. You know, the whole is different from the parts, but it's also a reflection of the parts and the parts are different from the whole, but they're a reflection of the whole and that they are, there's a sameness there and a difference there, but in that, in that scale relationship, that containment relationship, more particularly than an object in its reflection. And also still keep that in a, and I don't really know which one is the parts, and which one is the whole, and which one is the whole, and which one is the parts, right? I don't know if I can really articulate why I'm attached to that last bit, but I am. So I could, but I don't know if it would help.</p><p><strong>[25:16] Lisa Maroski:</strong> So one of the ways that I tried to address that kind of wanting to hold both at the same time, whether it's sameness and difference or some other concept, is to put concepts like that in a structure like a yin yang symbol, just to present both of them simultaneously so that when you refer to one, the other is right there. 
That sameness can't be sameness without difference.</p><p><strong>[26:15] Richard Watson:</strong> Well, in particular has that foreground, background ambiguity and the contained, containing ambiguity as well. That does do a lot of the work.</p><p><strong>[26:33] Lisa Maroski:</strong> And you can combine multiple ones. For example, and this is where I think our culture really needs some help to be able to hold multiple polarities like that simultaneously so that we can think about, for example, I'm just going to use the example in the book because it's simplest. It's one that, in American culture, we talk a lot about freedom and that word gets bandied about, but freedom can't be freedom without some sort of responsibility behind it. And when we talk about freedom, it's not just every person's freedom. The individual's freedom is essentially constrained by and given by the collective freedom. So there's a freedom responsibility polarity. There's a self-other polarity. And then there's also a temporal one, like my freedom right now to do X versus, and considered along with, how is that gonna play out in the long term? So there's like a short-term, long-term. So how can we think about all of these multiple polarities simultaneously and be able to express them? I don't know. I mean, I'm sure that sort of thing comes up in biology as well, just to try to loop Mike back into the conversation. Because the body is doing, is balancing all kinds of different polarities, whether it's the sympathetic and parasympathetic nervous systems working simultaneously along with interactions with the environment, along with cognitive interactions, emotions and feelings, and all of those things in play simultaneously. I should probably come up with a question here.</p><p><strong>[29:35] Richard Watson:</strong> Why is Mike scowling?</p><p><strong>[29:41] Michael Levin:</strong> Oh no, I'm not scowling. Yeah, no, please, if you have a question, let's do it. 
I mean, you're right, of course. I think in any body there are numerous different agents with agendas and priors and different capabilities, and they're all hacking each other, and the higher levels are bending the lower levels, and the lower levels are constraining and then enabling things at higher levels too. I mean, this is a huge ecosystem for that kind of stuff. And I don't just mean bacteria versus cells. All of these things are nested and whatnot.</p><p><strong>[30:16] Richard Watson:</strong> But also, you as a whole can feel the stress of your parts, and I think your parts can feel the stress of the whole, that they tune in with one another directly, that your identity and the identity of your parts is not just a containment relation, but that there's a sort of skipping levels. Like, it ****** my cells off, it ****** me off, and I was like, why? That doesn't have to equate, does it?</p><p><strong>[30:53] Michael Levin:</strong> Yeah.</p><p><strong>[31:00] Richard Watson:</strong> I can just know that they are without connecting with them in that way, without identifying with their affect.</p><p><strong>[31:21] Lisa Maroski:</strong> Yeah.</p><p><strong>[31:24] Michael Levin:</strong> And isn't there a language issue there too, in the sense that in order to have the kind of relation that Richard just mentioned, there has to be some impedance match? Like, at the very least, you need to share a concept of being ****** *** and whatever, so that you can be in vaguely similar states. And so then you wonder, what are the states that we don't share that we don't know about, right? And that's also a language issue. 
There could be all kinds of, and in fact, they're almost guaranteed to be all kinds of things that the cells and then the molecular networks inside of them and the bioelectric gradients and the tensile forces and everything else, they could have all sorts of other states that we can't really, you know.</p><p><strong>[32:08] Richard Watson:</strong> They're feeling all mobilacking right now and we're struggling to tune in with that.</p><p><strong>[32:18] Lisa Maroski:</strong> And they might have differing goals than what the whole organism has. If I had a bunch of candida, I might be craving sugar, while the whole organism, me, is trying to diet and not wanting to eat sugar. And so the other language issue in this scenario that I find interesting is the agency at multiple levels. The candida in my gut have their own agency. They're trying to live their life in their environment. Their environment just happens to be me. I'm trying to live my life in my environment. There's probably all sorts of other bifidobacteria and other creatures trying to live out their lives, needing different things, wanting different things. And somehow all of this maintains balance and stays healthy, to continue the infinite game of life. There are so many infinite games going on, just in every single organism. Do you, are you familiar with Carse?</p><p><strong>[34:05] Michael Levin:</strong> Sorry, with what? Say that again.</p><p><strong>[34:06] Lisa Maroski:</strong> James Carse's Finite and Infinite Games, the distinction between them.</p><p><strong>[34:12] Michael Levin:</strong> I don't know the name. I mean, I think I know about the distinction, but I don't know who James Carse is. Tell us.</p><p><strong>[34:18] Lisa Maroski:</strong> He was a philosopher. I think he was at NYU, no longer with us. 
Wrote a wonderful little book called Finite and Infinite Games.</p><p><strong>[34:26] Michael Levin:</strong> Interesting.</p><p><strong>[34:28] Lisa Maroski:</strong> You know that some games are made to be played to win or lose. And some games are made to be played so that the game can continue to be played. And those have very different kinds of rules than the games that are made to be won or lost. And yet, also that there are finite games that keep the infinite game going. So the finite games within our body are that cells senesce and die, and to keep the infinite game going, we have other cells that come along and process them and take their parts and recycle the parts. So there's a living and dying game going on, both at the cellular level and at the more than cellular level, that keeps the infinite game of life itself going.</p><p><strong>[35:42] Michael Levin:</strong> Yeah, I mean, I think it gets even more, like, the combinatorics get even weirder because I think it's not just the tangible things like cells and the bacteria and whatnot, but you can think about the patterns as well, right? And so this is something that we've been working on lately is kind of fuzzing out that distinction between thoughts and thinkers in a sense, between real beings and just patterns and excitable media, that sort of thing. So when you have the butterfly-caterpillar, the caterpillar-butterfly transition, there are multiple, so you can take the perspective of the caterpillar and face a kind of singularity and sort of think about what it means, whether you're going to exist or not. And you can take the perspective of the butterfly and ask, because they do inherit some of the memories of the caterpillar, in fact, remapping them to their new embodiment. And so the butterfly might ask, like, I have these, I have some weird, you know, feelings about certain stimuli. Why is that? I don't remember ever having encountered it. 
Like there's no specific encounter, but I have these, I've been saddled with these odd behavioral propensities for some reason that I don't think I own, but clearly I do, and so on. And so that's bad enough, but you can also take the perspective of the memory itself as a pattern within the cognitive medium of the caterpillar and the idea that you're not going to survive as a caterpillar memory. You can't because you're about things the butterfly doesn't care anything about. But if you are plastic to the extent that you can remap and generalize, and sort of now you can be about things. So whereas before it was the kind of motion that a soft-bodied creature can do in order to reach some leaves, in the butterfly you might be the kind of motions that a hard-bodied kind of creature like a butterfly can do in 3D, by the way, with an extra dimension to your thing. And it's no longer about leaves. Now it's about nectar. And the perception is different because your eyes are different and everything is different. But the associative conditioning that you received as a larva passes on, right? So you can persist. And so this notion that we've been exploring around that, like, spectrum, you know, we have fleeting patterns, so fleeting thoughts, these sort of like a wave sort of comes and goes. And then you have these sort of persistent thoughts, which are hard to get rid of. They do a little bit of work to keep themselves going, maybe a little niche construction in your brain, you know, as depressive and those kind of repetitive thoughts do. And then there's some other stuff. And then eventually you get to something that's maybe a personality fragment, from dissociative identity kind of scenarios where you're not a full human personality, but you're way more than a simple thought pattern because you can plan and you can have, you know, preferences and so on. And then there's a full human personality and then who knows what's on the other side of that. 
So, you know, whatever vocabulary we have for all these things has to take those kinds of things into account as well, potentially.</p><p><strong>[38:59] Lisa Maroski:</strong> So I've heard you talk about platonic spaces a lot. So it seems like you're making distinctions between different types of, let's just use Plato's words, platonic forms within a platonic space.</p><p><strong>[39:25] Michael Levin:</strong> So I am not trying to stick close to Plato's ideas to whatever extent we even know what they are. The reason, and this may need a total vocabulary refresh at some point, I went with Platonic because I wanted to anchor it first in mathematics and go from there. And when you say Platonic patterns to mathematicians, they know exactly what we mean, and some percentage of them agree with this idea that there are important facts, or more broadly, patterns that are not physical facts. These are things that you would not discover as a physicist. These are things that you can't just disband the math department and hope the physicists find these things. More importantly, you can't change these facts by tweaking the fundamental constants of physics. You're not going to change why quaternions don't obey the whatever it was, the property of multiplication and so on. You're not going to change that by changing the fine structure constant and things like that. So that's the idea: you start with the simple notion that, like it or not, there appear to be, at least temporarily, I think we have to say that there are at least two realms. I say it on purpose because people hate this notion that there's more than one realm. But I think it's important to say that if realms is to have any meaning at all, you have to be able to say that there are things in this realm that are quite different than the things that we're used to dealing with. And then from there, some other stuff, but basically then I want to drop the assumption, because that's all it is. I think it's not a result. 
It's an axiom that people add for some reason that I don't think we should add, that these patterns are only relevant for mathematics. That basically it's only the low-agency static forms that mathematicians study. My suspicion is that once you've accepted, and I don't see any way around it, once you've accepted that there are these kind of patterns that are important for physics and biology and so on, but their origin is not in the physical world as we study it, then you might ask, but could you have patterns that have various degrees of agency, including ones that we might recognize as kinds of minds. And so now you sort of shade smoothly into an old class of theories in the philosophy of mind, where minds are simply not of the physical world. They're something else. And then you have this interaction. And then, of course, you run into the interaction problem. But you already had an interaction problem between math and physics, is what I claim. So you're not, this is not new. This is, you know, Pythagoras already had all this. So that's kind of the idea. And we can even show some of the intermediate steps. So one of my favorites, do you know Patrick Grim's work at all? No. So he's a philosopher. I think he's at SUNY in New York. And he basically started out by saying that, well, you've got this liar. So this is interesting because it gets at the language and so on. So he says, you've got the liar paradox. And the reason it's a paradox is because you insist on one truth value and then it's a problem.</p><p><strong>[42:30] Michael Levin:</strong> But if you treat it as a dynamical system, no problem. You've got an oscillator, true, false, true, false, true, false, right? You just have an oscillator. And once you do that, you can start making dynamical systems maps of English sentences that have various degrees of paradoxical self-reference, and you can have multiple ones. 
So if you have two sentences, and sentence A is, I am 80% as true as sentence B is false, and sentence B is, well, I'm only true if sentence A is less than 70%, whatever. You have these things, and you can plot them. And so he shows these beautiful fractal structures that these things have, and some of them settle down, and some of them don't settle down. But what's interesting is, so you have your static patterns like, E is a certain number, pi is more than three, and that's like a rock. It's not going anywhere. It's just how it is. It's the electron of that world. And then you have these little oscillators. It's kind of like the liar paradox, it's just kind of buzz up and down. But once you have those things, you can do something, you can take the next step and you can make sets of sentences that act exactly as the gene regulatory network models that we studied, they can be trained. And so I have a student that's actually training sets of English sentences because in the end, they're just the dynamical systems kinds of things. And some of them, if you give them stimuli, and what I mean by stimuli is a temporary bump in one of the values. So you have, let's say, 10 sentences. They're all sort of about each other. And you can give a little bump into one. And so that stimulates and some stuff happens. And then it settles down and you do it again and you do it again. And you just ask the question, as you keep doing it, is there habituation? Is there sensitization? Can you condition to stimuli on each other and give it a placebo effect kind of thing, as associative conditioning? Turns out you can, and more. And so then you can, they don't have to be closed off and only be about each other. You can, some of the sentences can refer to things in the outside world. So you can give them an embodiment by saying, okay, here are your sentences. And also there's a lamp or a clock or whatever. 
And one of those sentences might be, I'm only true if that thing is green, or I'm only true if the car is running. And so now they're about the outside world, but they have their own learning capacity, right? And it affects how they interact with the outside world. So you can build all of this that's grounded in these kind of language slash logic systems. So you can imagine a whole set of, and once you have, as far as I can see, once you have associative conditioning, possibly you could keep going. I don't know how far you can keep going. We haven't gone terribly far yet, but so you can imagine things like that.</p><p><strong>[45:41] Lisa Maroski:</strong> So, it seems that, at least the liar's paradox, and I don't know about the conditioning experiments that you're working on. It relies on interaction between two different levels. So you have the level of the sentence. This sentence is false. But in order to judge the truth value of the sentence, you have to go to a higher level, which is the self-reflective judgment level. Let's take a different example. If you're looking at a written text and it says this sentence is red, but it's written in black ink, you have to jump out to that higher level to judge whether it's red or black, which sounds similar to what you're doing with the lamp: this is true if the lamp is green.</p><p><strong>[46:49] Michael Levin:</strong> Yeah, I mean, I think it's consistent with what we've been talking about here, which is numerous interacting observers at different levels. Because if you have multiple sentences, sentence B is just looking at sentence A, and that's okay. Different observers can, right? It's what Richard was saying is that it crosses levels. 
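The "sentences as dynamical systems" idea discussed here can be sketched as a toy discrete-time model. This is a speculative illustration only, assuming fuzzy truth values in [0, 1] and synchronous updates; the specific update rules, the `bump` stimulus, and the function names are assumptions of this sketch, not Grim's actual models or the lab's code.

```python
# Toy sketch: self-referential sentences as a discrete-time dynamical system.
# Assumption (illustrative): truth values live in [0, 1], all sentences
# update synchronously, and a "stimulus" is a one-off external bump.

def liar(v, steps=4):
    """The liar paradox as a dynamical system: v' = 1 - v.
    Instead of a contradiction you get a period-2 oscillator."""
    history = [v]
    for _ in range(steps):
        v = 1.0 - v
        history.append(v)
    return history

def step(state):
    """Two mutually referring sentences (illustrative rules):
    A: 'I am as true as B is false.'  ->  a' = 1 - b
    B: 'I am as true as A.'           ->  b' = a
    """
    a, b = state
    return (1.0 - b, a)

def run(state, steps, bump_at=None, bump=0.5):
    """Iterate the two-sentence system; optionally give sentence A
    a one-off external 'bump', as in the stimuli described above."""
    history = [state]
    for t in range(steps):
        state = step(state)
        if t == bump_at:
            a, b = state
            state = (min(1.0, a + bump), b)  # perturbation from outside
        history.append(state)
    return history

print(liar(0.25))          # oscillates: [0.25, 0.75, 0.25, 0.75, 0.25]
print(run((0.0, 0.0), 8))  # cycles with period 4 through corner states
```

Habituation, sensitization, or conditioning experiments like the ones described would need richer, trainable update rules (weights that change with repeated stimuli); this sketch only shows the oscillator and the bump mechanics.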
So the observations and the sensors and the effectors can cross levels, and biology is full of that kind of stuff where you can sense something that might be mediated by a chemical signal, but the point isn't that you're sensing some specific chemical, you're sensing some systemic state, a high-level state, that's mediated. So yeah, you can absolutely make that, and maybe you can even do crazy things and keep going further. Like, I am only as true as this whole thing is consistent, or worse, I am only as true as this whole thing is interesting. Or I'm only as, you know, I'm true if and only if this thing doesn't settle to a stable point, right? Then maybe that's some kind of crazy Turing, you know, halting problem.</p><p><strong>[48:06] Richard Watson:</strong> Yeah, I was just writing those.</p><p><strong>[48:11] Michael Levin:</strong> Yeah, we haven't tried that yet.</p><p><strong>[48:12] Richard Watson:</strong> But this sentence is true only if those sentences are stable, and these sentences refer to each other in an unstable way. Only if that sentence is false, that kind of thing.</p><p><strong>[48:22] Michael Levin:</strong> And that introduces another degree of freedom, which is time. Because if you want to know that they're stable, it's not enough to take a snapshot. You have to watch them for some period of time; to know if it's settled down, you have to have multiple time points to compare it with.</p><p><strong>[48:43] Richard Watson:</strong> Like if it's in or out of the Mandelbrot set.</p><p><strong>[48:47] Michael Levin:</strong> You have to have observations over some period of time. 
And now you're back to having different observers operating at different time scales and watching things at different time scales.</p><p><strong>[48:59] Richard Watson:</strong> What you said about that stimulus being a bump, which was to temporarily modify the truth of one of the statements, did that hold the truth of all the other statements true as they were while you did it?</p><p><strong>[49:15] Michael Levin:</strong> So we don't touch them externally, but of course the minute you do that, it's going to propagate. So we have to do the whole thing as discrete time, unfortunately, right? So during the time point that we're bumping it, we don't touch the others, but then we have to recalculate all the others, and the ones that are connected to it will immediately update state. So they will react to it themselves.</p><p><strong>[49:45] Richard Watson:</strong> So changing one of the variables without changing any of the others is a bit like a jump in time, right? So there's the state that you had before you bumped it and the state that you had after you bumped it. Imagine that those two states are states you could connect by running it forward in time: if you hadn't bumped it, you would have got to a state where everything was the same except for this bit, if you just waited, right? So that little bump is like nudging it in time by different amounts.</p><p><strong>[50:22] Michael Levin:</strong> It is. And I'm sure from inside the system, it looks like magic because all of a sudden, this one node bumped up. And if you're the node that normally feeds into it, you say, what is this? I didn't do that. How did this thing get bumped up all of a sudden, right? Because we're acting on it from outside the system.</p><p><strong>[50:45] Richard Watson:</strong> Or it looks like nothing at all. So it only looks like something if you have a time scale with which you can reflect upon what it was a moment ago before you bumped it. Otherwise, you say, hey, that thing just moved. 
And you say, what thing just moved? It's like, well, it used to be low and outside. No, it isn't. It's like it's already gone, right?</p><p><strong>[51:03] Lisa Maroski:</strong> So you're bringing up another important thing that I think language needs to have a little more specificity about, which is context and perspective. When you're looking at things from multiple perspectives, to be able to specify, okay, from this perspective, it looks like there's a bump. From this other perspective, it does not look like there's a bump. How do we reconcile these two perspectives? Or 3 or 4 or 10 when you're talking about more complex systems. And I think, I'm not just thinking about language as the two-dimensional kinds of words that we write on a page. I'm thinking about expanding language to be more graphic, more fully two-dimensional, to be able to show relationships like that, not just say them one word after another like we're doing now. That's going to take a lot of work. That's going to take a lot of people coming together with a lot of different expertise. But I think what you're doing is helping to create a model system to start working on things like that.</p><p><strong>[52:45] Richard Watson:</strong> I'm reminded of what Mark Solms said in our conversation earlier today about forgetting why you believe something, forgetting that it becomes, what was the word he was using, automatized, that it becomes automatic. You don't know why, you don't know what the evidence was or what the thinking process was that brought you to that conclusion. And now it just becomes automatic. It's like, it's just who I am. It's not a belief I have. It's just who I am. The converse is a system that knows something about the process by which it arrives at its own truth, something like that, right? That's a weird kind of system already, right? How do you know? Because that already has that reaching between levels sort of feeling to it that it knows. 
It's not just the same as knowing something about your parts. Like I can look in a microscope at my own belly fat. But it's like that knowing something about the process that constructs my own knowing is a bit more strange loopy than that. Yeah. And it feels so that the real reason why I want this inside outside thing to connect or to flip is because I want the recursion, I want a stopping clause for the recursion, right? It's like, well, you know, if I did know the process by which I knew this, how would I know the process by which I came to know the process by which I knew this, right? And then that, you know, that sort of, you know, feels like it has a recursion that can't bottom out. And that bottom level has to somehow resonate with right all the way back to the top of, that just because that's what it means to know something, because that's what it means to say that you know what the process is, right? So it becomes not a material or substrate-dependent constraint that makes you think that, not a historical contingency or happenstance that makes you think that; it becomes a logical truth that you think that, because that's the only kind of thing you can really think that is self-consistent. So the platonic constraints that come from mathematical boundary conditions that you can't change, that just are truths, that are absolute truths, and the dirty, nitty-gritty historical contingencies, substrate-dependency facts, they become the same, so that they're not really different from each other.</p><p><strong>[55:35] Lisa Maroski:</strong> So there is a structure in language. It's not very prevalent in English. It's not required in English, but in some languages, it's called evidentials. In the grammar, you have to specify how you know what it is you're saying, whether you know it firsthand, so I saw the bobcat, or whether you know it through inference, there was a bobcat there because I saw the tracks that it left, or whether you got it third hand. 
My neighbor told me he saw a bobcat around here yesterday, so be careful. Some languages require you to specify those sorts of things. And it sounds like it would be good to have a way to specify how we know other kinds of things that you were just talking about, from the nitty gritty to this is an abstract truth that is unchanging.</p><p><strong>[56:56] Richard Watson:</strong> Earlier, Mike mentioned the idea that the butterfly doesn't know where it got the odor aversion from. It's like, I don't know why. Why do I like this? Or why do I not like that? I find myself thinking about, well, whose preference does it think that is then? It's like, you can't think that it's somebody else's, right? In order to act, you have to own it, right? It has to be my preference in order for me to act on it, right? And then that sort of feels like, well, if, am I doing that when you tell me an idea that you have, right? When you tell me an idea or a concept that you have or a fact that you have, do I take that in in a way that it becomes, like I know it or am I still holding it? Like that's what Mike thinks, you know, or like if when I grok it, it's like something shifts, right? That it becomes my own thought. It's like, even though I also remember that you said it a moment ago and a moment ago I said I didn't get it. And now I do, but it still feels like my thought, right?</p><p><strong>[58:16] Lisa Maroski:</strong> I've had experiences like that where I've had an idea that I thought was mine, and then I go back and reread a book that I read 30 years ago, and it's like, oh, that's how I got it.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Discussion: Richard Watson, Alexey Tolchinsky, Mark Solms, Michael Levin, and Karl Friston</title>
          <link>https://thoughtforms-life.aipodcast.ing/discussion-richard-watson-alexey-tolchinsky-mark-solms-michael-levin-and-karl-friston/</link>
          <description>A roundtable with Richard Watson, Alexey Tolchinsky, Mark Solms, Michael Levin, and Karl Friston on memory and forgetting in human and unconventional intelligence. They also discuss overfitting, REM sleep, dreams, and collective cellular identity.</description>
          <pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 69d7506d983bbd0001fafd16 ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
<content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/w_ciA-yyF8M" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/d3acb073/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour meeting with Richard Watson, Alexey Tolchinsky, Mark Solms, and Karl Friston, where we discuss issues of memory (especially, the role of forgetting) in diverse intelligence (human patients and beyond), and a bit on dreams and psychoanalysis. The original question from me was motivated by some findings on the effects of induced forgetting in models of unconventional cognition (more coming soon).</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Role of forgetting</p><p>(06:22) Overfitting and generalization</p><p>(10:45) Accuracy minus complexity</p><p>(21:13) REM sleep and transference</p><p>(24:40) Choosing futures and pasts</p><p>(31:18) Cellular psychotherapy ideas</p><p>(34:58) Dreaming of cell phones</p><p>(39:47) Photographic memory costs</p><p>(44:18) Precision and future paths</p><p>(52:25) Collective cellular identity</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a 
href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Michael Levin:</strong> What I'm interested in is to get all of your thoughts on the following question, the role of forgetting in particular, the role of losing memories, if you even think that happens, but the role of forgetting in agency and the potentiation of agency, and just in general, what role you think forgetting plays in the mind and in the capacity to have a significant mind, like how important is forgetting? How do you see forgetting and so on? So that's what I'm interested in. And yeah, I can give you the context of why I'm asking this, but that's what I'd love to hear about.</p><p><strong>[00:42] Mark Solms:</strong> I'm sure we all remember the context. If I may, I will begin. 
When I read your original description, the thoughts that occurred to me were exactly the thoughts that Karl then articulated over the emails about model complexity and the need to balance accuracy with complexity, and Karl drawing attention to how during sleep, when a lot of memory consolidation goes on, consolidation, of course, involves both what we retain and what we forget. It's a selective process. And Karl drew attention to how we believe-- he says we believe-- actually, it began with him and Alan Hobson believing, and now we all agree with them, that during sleep, there's a reduction-- there's a getting rid of redundant synapses or synaptic connections, because otherwise, you have too complex a model. And this is the ideal time to do it because nothing's happening. There's no new incoming error. So those thoughts that Karl articulated were exactly the thoughts I had. So then I thought, well, now that Karl's expressed my thoughts, which were actually derived from his thoughts, I'll have to come up with new thoughts. And these were the additional thoughts. They actually are just two of them. The one is that there's an interesting problem in infancy when you've got a hell of a lot to learn and new things happen all the time. How do you balance this business at the very beginning of life? And how do you retain any kind of a stable model when the world is so utterly unpredictable? Then there must be some mechanism whereby there's some continuity in the kind of base model. Otherwise, you're just totally fragmented and every day wipes out your beliefs that you had established the day before. And I would like to link that with the fact that in the first two years of life in humans, there's pretty much no declarative memory. It's all non-declarative. So things go from short-term memory into non-declarative long-term memory. 
They can't retrieve those memories and rethink them because that's what non-declarative memory is.</p><p><strong>[03:24] Mark Solms:</strong> Things just go straight into these automatic memory systems. And the way that I think about those subcortical non-declarative memory systems is that they carry high precision. This is on the view that consciousness is uncertainty. That's what consciousness is for: to feel your way through situations where you're not so confident about your predictions. You're palpating them and testing them against the incoming errors. And so this is not happening in relation to the memory systems of infants. Everything goes into long-term non-declarative memory. So I think that there's some kind of biasing, some kind of excessive confidence. I don't know if that's right, but that's my thought. And then you can link that with the fact that there's so much REM sleep in infancy. It used to be thought that it's during REM sleep that all the memory consolidation is going on, but in fact, it turns out to be the opposite. It's during non-REM sleep that all the memory consolidation is going on during sleep. And REM sleep is a highly entropic state. So it's dealing with uncertainties and it's conscious. You know, you're dreaming during REM sleep. So you're in a state of uncertainty by physiological measures and by psychological measures in the sense of the subjectivity of a highly emotional, conscious state of mind. I have the view, and this is the last thing I'll say, that REM sleep, which incidentally is also characterized by highly unstable homeostasis, we go out of kilter across a great many homeostatic parameters during REM sleep. So it really is a state where you're in a lot of uncertainty, even at the level of autonomic homeostatic mechanisms. So I'm of the view that during REM sleep, we are actually resisting, like we do during infancy, resisting too much model updating, too much forgetting. 
We're wanting to retain non-declarative memories against the accumulating errors of the day. It's trying to explain away. In other words, trying to forget, trying to not remember, trying to not update the existing non-declarative model. So those are my opening shots.</p><p><strong>[06:10] Michael Levin:</strong> Great. I made a couple of notes because I want to come back to the whole sleep thing, but maybe we'll go around with this topic. Who wants to?</p><p><strong>[06:22] Alexey Tolchinsky:</strong> I mean, to build on what Mark just said, which is very useful. And to add to your work, Erik Hoel's overfitted brain hypothesis, which was new to me because I studied your dream and sleep work thoroughly and I watched your debate with Alan Hobson with great pleasure. So Erik Hoel suggests that one of the things dreams are useful for is they reduce overfitting, because what we've learned in the day is being placed in a wildly different context. It allows us to loosen the priors and to see what can generalize. And incidentally, he's a writer, he writes fiction. He said fiction has an additional function for that. When we fantasize, we do that. Because when we hold on to a very precise notion, we cannot generalize. And I think that's the general theme in forgetting and building. So what I think, Michael, you said, when we remember, when we recall, we build agency, we build a higher level, we build a macro. But when we forget, we sharpen the causal signal. This is the sculptor's chisel. So then one of the things we optimize is exactly generalization, because if we use the precise memories we've learned, we cannot use them in other instances. The metaphor for that is Funes the Memorious, the story by Jorge Luis Borges. A man fell from a horse and lost the ability to forget. And then he couldn't recognize his dog anymore because at 3:45 and 4:05, there was a slightly different angle of view and slightly different shade of the fur. 
So he lost concepts, he lost abstraction, he lost pattern recognition. And incidentally, speaking of agency, he lost self, because self is a mental object and we must abstract to retain some coherence and some continuity of the self. And in neurology, I suppose semantic dementia is close to that, where concepts are gone and we only have details. We sort of live in the here and now. It's the recent self without any continuity to the past. But generalization is a balancing act. So these are the cases where there's not enough generalization. But when there's too much generalization, we have another issue, like Alzheimer's when it starts, you know, we start losing the recent details. And in that sense, self lives in the past. You know, we have some concepts, but we will lose the recency. We stop updating the self. And also generalization can be skewed or biased. Like in PTSD, a flashback is re-experiencing now, in the same context, what happened back then in the circumstances of trauma. So this is incorrect generalization, overgeneralization of the phobic memory. And I suppose in depression or in OCD, when we ruminate, it's again, the negative experience of the past is casting a shadow on the present and on the planning for the future. So I think that this forgetting serves a function of optimizing generalization. And exactly like Mark said, there's also a metabolic function, because every memory trace is metabolically costly and we just can't afford to hold on to everything. I mean, I think in physics, the structure that remembers everything is a black hole. It encodes everything on the event horizon at maximum density. So that's the kind of structure that remembers everything. Without forgetting, we are dysfunctional, including the self-functioning. 
But I've talked too much, so these are my thoughts on what Mark said.</p><p><strong>[09:43] Richard Watson:</strong> Alexey, can I check that I understand the connection between what you were saying about reducing overfitting and what Mark was saying previously? So the connection is that by resisting the update of long-term memory with particular instances, that's what Mark was talking about, you are fostering an ability to avoid overfitting to those particular instances, right?</p><p><strong>[10:09] Alexey Tolchinsky:</strong> I think that memories are malleable, even Pavlovian memories. We update and change the context. We weaken and dampen them. If we cannot let go of some details, we cannot think, we cannot disambiguate, exactly like that dog that is different, if that's...</p><p><strong>[10:30] Richard Watson:</strong> Even the things which are the same.</p><p><strong>[10:31] Alexey Tolchinsky:</strong> Right.</p><p><strong>[10:40] Michael Levin:</strong> Karl, do you want to?</p><p><strong>[10:45] Karl Friston:</strong> Say anything about that? Yeah, so lots of themes here. Just to address that last question, from the point of view of machine learning and physics, that point about generalisation being the same thing as avoiding overfitting, I think it's absolutely fundamental. So, you know, it's fairly straightforward. I think David MacKay was the first person, or perhaps even before that, statisticians Cass and Stepley were able to prove that the ability to generalise is a measure of the evidence for your generative model of the way in which your data or your world supplies data. And the log of the evidence is just the accuracy minus the complexity. So coming back to Mark's point, which means that to generalize is to have the simplest explanation or model or account of an accurate sort of everything you're trying to explain. So mathematically, they are the same thing. 
And if one elevates that notion of model evidence or interprets it now in an evolutionary context, or, sorry, more generally in a selective context, again, coming back to Mark's notion that we are selecting things, then what is selected is just simply the thing that is most likely to be there. And the thing that is most likely to be there, with a nod to survival of the most likely, is those that have the greatest marginal likelihood. And model evidence just is the marginal likelihood. So I think mathematically all these things are the same thing. So to summarize that, the things that are selected, the last man standing, as it were, is just the most likely thing that you're going to see. That likelihood is always expressed as accuracy minus complexity. And thereby maximising the marginal likelihood means minimising the complexity. And that means that you will have the best model that is able to generalise. So the question then just, I think, resolves again formally to what timescale we're talking about. I mean, the selection process, you could argue, unfolds at all timescales, but is exactly the same kind of process. So you can have attentional selection over, say, 300 milliseconds to several seconds. You can have action selection. We select the most likely thing that we're going to do next over multiple time scales, right the way through to, well, you could even argue in neurodevelopment from the perspective of neural Darwinism and the theory of neuronal group selection if you wanted to, but you can jump right through to natural selection at a very, very slow time scale. So it's the same thing going on every time scale. It just looks different and we have different disciplines and different ways of talking about these things. But it's the same underlying, almost tautological explanation for the way things are.</p><p><strong>[14:14] Karl Friston:</strong> It couldn't be any other way from a mathematical perspective. 
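As a purely illustrative sketch (our own toy example, not anything presented in the conversation), Karl's decomposition of log model evidence into accuracy minus complexity can be demonstrated numerically. We score two models of data that are really just noise around a constant: a simple model (one global mean) and an over-complex one that memorizes a separate mean per bin. The complex model is more accurate on the data it has seen, but once a BIC-like complexity penalty is charged, the simpler, generalizing model has the higher evidence-style score. All names and numbers here are assumptions of the sketch.

```python
# Sketch of "log evidence = accuracy minus complexity" (illustrative only).
import math
import random

random.seed(1)
n = 200
data = [5.0 + random.gauss(0.0, 1.0) for _ in range(n)]  # truly constant world

def gaussian_loglik(residual_variance, n):
    """Maximum-likelihood Gaussian log-likelihood: the 'accuracy' term."""
    return -0.5 * n * (math.log(2 * math.pi * residual_variance) + 1.0)

def score(predictions, n_params):
    """Evidence-style score: accuracy minus a BIC-like complexity penalty."""
    var = sum((y - p) ** 2 for y, p in zip(data, predictions)) / n
    return gaussian_loglik(var, n) - 0.5 * n_params * math.log(n)

# Simple model: one parameter, the global mean.
mean = sum(data) / n
simple = score([mean] * n, n_params=1)

# Complex model: 20 bins, each memorizing its local mean (20 parameters).
bins = 20
width = n // bins
preds = []
for b in range(bins):
    chunk = data[b * width:(b + 1) * width]
    preds += [sum(chunk) / len(chunk)] * len(chunk)
complex_ = score(preds, n_params=bins)

# The complex model has lower residual variance (better accuracy), yet
# the simple model wins on accuracy minus complexity, i.e. it generalizes.
```

The same arithmetic underlies Karl's point that maximizing marginal likelihood means minimizing complexity: the accuracy gained by memorizing noise is smaller than the complexity it costs.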
Dreaming is interesting because that talks about a particular time scale of a diurnal sort. And it's interesting then to link that to memory. And something that I think both Alexey and Mark alluded to was that to consolidate is to forget selectively. And I often think of this in terms of a sculptor creating a figurine, for example. It's what you remove which gives it its form. And therefore, if I now read forgetting as removing the right stuff, minimizing the complexity in the right kind of way, then forgetting is just a particular kind of learning or model optimization that basically consolidates the stuff that is not removed. So it's not surprising that much of the process of selection is taking stuff away, either by death or by ignoring it, or by some synaptic homeostasis while we're asleep. So forgetting is just the other side of the coin from learning. Without forgetting, you couldn't learn; without learning, you couldn't forget. Both descriptions hold, I think. There's another conversation we could have here, which is not so much biological, but more you would find in economics and state-space modelling, which is Bayes-optimal forgetting under volatility, adapting the rate at which you learn certain things, and in particular the learning rate, which is just a precision. I think that's another sort of identity or isomorphism, which is important to remember. Precision is just a learning rate. So if you write down, if you just think about any differential equation and you apply some precision or some parameter to some prediction error that's driving changes in what you're representing or learning, then the units of precision are per unit time. So precision is a learning rate, which means that if Mark is right and children have to learn very, very quickly, then they're going to be assigning a lot of precision to their sensorium relative to their prior beliefs, for example. 
So on that view, there's a really interesting link between volatility in your environment and the right precision or learning rates that you bring to the table to match that volatility. And this, you know, I see this in many, many different fields, ranging from the Kalman gain in Bayesian filtering. If you've got very, very precise data, you pay a lot of attention to them, i.e. you afford them high precision, i.e. you increase your learning rate in the face of those data. But if the data are really, really noisy or you've got your eyes shut during sleep, then you wouldn't afford the same kind of precision in state estimation.</p><p><strong>[17:43] Karl Friston:</strong> In an evolutionary context, I first came across this in Ernst Mayr's The Growth of Biological Thought, where he was telling a story where if you have Drosophila fruit flies and you rear them in a volatile environment by manipulating the temperature, you increase the mutation rate. So they forget genetically or epigenetically the kind of environment to which they are best fit. I can't remember exactly. And then Stuart Kauffman came in with sort of second-order selection, selection for selectability. Again, it's just mathematically the same thing. It's just the selectability is just the rate of forgetting, which is just the precision at this particular level of optimization. I think Stuart Kauffman went on to actually revisit that second-order selection, which I think you could easily read as forgetting and just basically matching your learning rate, your precision, your rate constants to the actual volatility of the world which you're trying to explain. So to come back to the neurodevelopmental thing, which I hadn't really thought about, that basically means you'd expect things that have a lot of learning to do to get a consolidated, good, generalizing generative model of their world. They're going to learn very, very quickly. 
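The Kalman-gain point above can be made concrete with a minimal scalar sketch (our own toy, not from the discussion): in a Kalman-style update, the gain applied to the prediction error grows with the precision (inverse variance) of the data, so precise data are learned from quickly while noisy data barely move the estimate. Variable names and the specific variances are assumptions of the sketch.

```python
# Scalar sketch of "precision is a learning rate" via the Kalman gain.
def kalman_gain(prior_variance, obs_variance):
    """Gain = prior uncertainty relative to total uncertainty in [0, 1]."""
    return prior_variance / (prior_variance + obs_variance)

def update(estimate, observation, prior_variance, obs_variance):
    """One Bayesian update of a scalar estimate from one observation."""
    k = kalman_gain(prior_variance, obs_variance)
    # The gain k acts as the learning rate scaling the prediction error.
    new_estimate = estimate + k * (observation - estimate)
    new_variance = (1.0 - k) * prior_variance  # posterior uncertainty shrinks
    return new_estimate, new_variance

# Precise data (low observation variance): a big step toward the datum.
precise, _ = update(0.0, 1.0, prior_variance=1.0, obs_variance=0.01)

# Noisy data (high observation variance): the estimate barely moves.
noisy, _ = update(0.0, 1.0, prior_variance=1.0, obs_variance=100.0)
```

With eyes open and reliable input, the gain is near 1 and the estimate tracks the data; with eyes shut (effectively infinite observation variance), the gain collapses toward 0, matching the remark about state estimation during sleep.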
And that means that they're going to forget also very, very quickly, until they can weed out what things are invariant over time. The last thing, more of a question: in terms of declarative memory, it's interesting that during REM sleep, unless you've woken up, you don't actually remember your dreams. I think there's another sort of dynamic in play here: in order to not forget, you have to literally do reinforcement learning, in the way that the term reinforcement learning was originally introduced, which is to reinforce a synaptic connection. So although in dreaming, well, in my world, in terms of simulating these processes, you are generating some sort of fictive content in order to weed out the redundant synapses and associations to minimise the complexity. The imperative here is to get rid of synaptic connections. You really do not want to retain them. So there must be another neuromodulatory mechanism that says: no, okay, this was actually activity induced by real exposure to the sensorium. And I'm going to remember this. I'm going to lock it in in some way of the kind that we do during waking. But that's not what's going on in rapid eye movement sleep. But maybe during slow-wave sleep. I haven't kept up with that literature. Mark, you look as though you've got something.</p><p><strong>[21:13] Mark Solms:</strong> Well, I just agree with everything that you're saying. The slow-wave sleep is a much more predictable process. That's what the slow waves are. You know what waves are coming next. It's a much more passive process. There's much less mental work, predictive work going on. It's just, I imagine, accepting, as it were, the errors that have accumulated. The active process is resisting the updating. It's fighting against the errors, you know, so forgetting. And I agree with you. Dreaming is an eminently forgettable process. 
One of the most striking features of dreams is that you can't remember them. So, you know, they specialize in forgetting. It's trying to explain away everything that's trying to make me update my model that I don't want to. So I think that things that are relatively superficial, in other words, tolerable by the simple generalizable model, superficial things which don't actually question your core beliefs, those get encoded, but things which threaten your core beliefs, your generalizable, non-declarative model, those things you need to explain away. And I think that that's the main thing that's going on in dreaming. Now, Michael invited us in his original e-mail to link this to psychoanalysis. I normally am reluctant to bring in psychoanalysis because it's my own pet interest and it doesn't generalize to everyone else's interest. But since Mike asked me to or invited us to, I want to say this. The problem with childhood models, I mean, we can see why they must be high confidence models. They must be highly generalizable. You know, they must persist. They become our core beliefs. The problem is that they're models that we built a long time ago under very different circumstances to the ones that prevail in adulthood. And this is what we deal with in clinical psychoanalysis. The problem is that our patients are living in the present as if it were the past. That's what we call transference. They're transferring the past and their beliefs and predictions deriving from the past, which are the best solutions they could come up with to the world that they were living in then, or the least bad predictions they could formulate then. They then become non-declarative and automatized, and they perseverate into adulthood. And they're living in a world that isn't the world that's there. And I think that this is where Freud's wish fulfillment theory of dreams comes in. 
It's an attempt to explain away that which does not fit with your non-declarative generative model, your simple generalizable childhood model. And there's a lot that doesn't fit with that model precisely for the reason that I just said. And you're resisting it, you're resisting updating. I will just add one other little footnote, which is that, of course, as we age, I think I must be probably the oldest in the room. I can tell you that I don't do a hell of a lot of updating anymore.</p><p><strong>[24:34] Michael Levin:</strong> Thanks. That's great. I've got a bunch of stuff to ask about. Richard, did you want to say anything before we?</p><p><strong>[24:40] Richard Watson:</strong> Yeah, just a little, thank you. So I guess I mean all of us are on the same page. I think that the naive idea that it would be best if you could remember everything, because obviously you could make better informed decisions if you didn't forget anything, I think we're all of the opinion that that's naive and that forgetting is necessary in order to have a model of future behavior which is specific rather than retaining all possibilities. A way of thinking about it that occurred to me that may or may not be useful here is an idea of agency that is time reversible. So that it's similar to what Karl said, that forgetting the right stuff and deciding the right stuff are really the same kind of action. If you think about deciding something as decoupling the causal relationship between the state of things as they are and the consequences they're going to have for the future, right? Deciding something is as though I changed the state that I am now in such a way that I will do this action rather than that action. And forgetting is like a decoupling between the state that I am now and the causes that made me like that in the past. It's like I'm going to become the thing that was made by this history instead of the thing that was made by that history. 
So by choosing to have a particular history, which means forgetting something instead of holding both of those possibilities, I'm going to forget this one and be that one. Now that's the same as being something different now, which is the same as deciding a different path for the future. So that the choice that you make, if you think you can make choices about which path you go on in future, that's the same thing as making choices about which path you came from in the past. So a collapsing of possibilities going forward ought to be identically symmetric with a collapsing of possibilities from the past, because otherwise, you know, you've lost even more causation than free will thinks you've lost. Let me try that again, right? So imagine that you couldn't change the past, but you could change the future. I can make a decision and just decide to do this instead of that as though my free will intervenes on causation in some way, right? That's super weird because I'm somehow imagining that I can't change the past, but I can change who I am right now in this moment so that I can take a different path in the future. I think it's less inconsistent with causation. It's more consistent with causation to say that when I choose a different path for the future, I'm also choosing a different history. And it's like I'm stepping between train tracks, and one was going this way and one was going that way. And if I can make a decision about which way I'll go into the future, that's the same as making a decision about which history I come from. And so that's the same act. The act of deciding what you're going to do in the future is the same as the act of forgetting a particular path about where you came from in the past. So I think you can't have one without the other. So I'll try, I'll try one more time because I've been rambling a bit. It's about whether you think the state that you are now causes what's going to happen next. 
And if you can decide between one possible future and another possible future, that's a decoupling between what you are now and what happens next. And if you can do that, that's the same as saying, I'm decoupling what I am now from what caused me to be like this from the past. A word that we might use for that is attending: I'm attending to this thing from the past rather than attending to that thing from the past. And by attending to them, I change who I am in this moment and thus what I'm deciding for the future. So I'm just offering that view of it as a sort of a time reversible relationship between decisions and forgetting.</p><p><strong>[29:09] Karl Friston:</strong> I'd be interested to hear what Mark has to say from the point of view of psychotherapy on that, because I imagine most of his life is actually opening up that choice of paths into the future, given the past.</p><p><strong>[29:22] Richard Watson:</strong> What I'm suggesting is that being able to do a different future is tantamount to being able to see your history differently.</p><p><strong>[29:37] Mark Solms:</strong> Yeah, I don't want to go too far down the psychotherapy line because I'm sure that Michael has questions from his own field in relation to what we've already said. But I will just say that it's a hell of a hard job. Psychotherapy is very difficult. People don't want to change. That's what they resist. And it's because it's the non-declarative aspects of their predictive model that are causing all the trouble. It's not easy to change. So what we do is draw attention to the patterns of behavior, what they're enacting. They're enacting their beliefs, and they are enacting their predictions. If I do this, then that will happen. Of course, they're doing this automatically, and that isn't happening. That's why they suffer from emotional disorders. That's the error signal. But they're not using it to update what they're doing. 
So we draw attention to, can you see you're doing this all the time, and it's meant to have that outcome, and it's not having that outcome, and that's why you're suffering like this. That problematizes their generative model. And then they lay down new predictions. It doesn't extinguish the old ones. The bad old ways always stay there. That's why we can go back to our bad old ways. So we don't extinguish those core beliefs, but we supplement them with better ones, with new beliefs, which gradually get deeply consolidated. And that's why the treatment takes so long, working through, we call that. But over to you, Mike.</p><p><strong>[31:18] Michael Levin:</strong> A whole list of things. Let's see, just briefly, this business of forgetting or changing your story of the past is hugely relevant to some of our work on regeneration, for example, because one way to look at, for example, mammals not regenerating their limbs is that they have an evolutionary history in which it didn't make sense for them to try. It wasn't going to work, they would get infected and all these things. But now, with our wearable biodomes and various other things, there is a future that now makes sense, whereas it didn't before. And we spend a lot of time thinking about how to soften those priors. What are the signals that we could give the cells? Because it's not that they can't. It's just not the model of themselves and of their future that they have now, because it's been shut down for various practical reasons that we can now lift. And so I spent a bunch of time trying to understand what kind of stimuli we can give, right? So this is, not to be facetious, but some kind of psychotherapy at a somatic level for cells and organs and things like this that basically I think have a bunch of frozen priors about what they should and shouldn't do that are now limiting more than they are helpful. 
And if we can sort of guide them to a reinterpretation of what their past was into a new future, I think the mechanisms are all there. They have the tools to do it. I think they're just on a different path, so to speak. So, I don't know what the relevant version of therapy is in that case. I mean, we thought about plastogens and some things like that, but surely there are more techniques.</p><p><strong>[33:15] Mark Solms:</strong> Somehow what comes to mind, and it's a tangent, it's a free association to what you've just said. So it's relevant, but I don't know why. And this also builds on what Richard was saying earlier. It seems that once you've automatized, in other words, deeply consolidated, in other words, rendered very precise a belief, then you no longer need to know where that belief came from. I mean, that's what adds to the uncertainty. It's sort of like, well, step A, B, C, D, E led me to this, maybe B was wrong, I better go back and rethink it. But once you've automatized, deeply consolidated the outcome of that predictive work, then you don't need to know how you got there. And I think that's a big part of what you're talking about. As I say, I just intuitively, that seems relevant to what you just said, Mike. So forgetting, it's too general a word, it's selective forgetting: it's retaining the products of learning, but forgetting the course by which you got there, because you no longer need that information. If forgetting were to pertain equally to the products of the learning process, you'd have a very unstable system with much less agency. I mean, that's what your question was all about at the outset.</p><p><strong>[34:58] Michael Levin:</strong> Okay, a couple of things following up on this. First, back to the dreaming thing, and maybe you guys can fact check me on this. 
So my collaborator, Marca, whom you should meet at some point, was telling me this thing, which I had heard, that people don't dream of cell phones, despite how common this is, that nobody ever dreams of cell phones. So first of all, is that a fact? Is that a real issue? And if so, then I want to hear what you guys have to say about that, why you think that's the case. And more broadly, the reason I'm interested in this is because in thinking about novel beings, okay: humans in various cyborg configurations, and then ultimately some very, very different kinds of beings that are going to be around, what do you think about their sleep? Not so much the architecture of the sleep, but the content, the interpretation, the meaning of the dreams of beings who don't have the same evolutionary past as we do? And what does it mean when we do and don't dream of specific things? And how are we going to, for example, interpret dreams of these novel beings and so on?</p><p><strong>[36:13] Mark Solms:</strong> So I don't know the facts about whether we dream of cell phones. But what it brings to my mind is a slightly older literature from the 80s and the 90s when a lot of work was being done on typical dream content. And there were remarkable swathes of things that we don't dream about. And it included things like calculating, writing, typing, and it seems to be in the same ballpark as cell phones. These kinds of boring, repetitive things that we do all the time and that don't have much, there's not a hell of a lot to learn there. They're just taken for granted sort of things. It would be interesting to see if that is a finding. I'm not questioning it. I just don't know that data. But it would be interesting to see, does it apply equally across the age range for those of us for whom cell phones were a novelty, as opposed to those who were born into a world of cell phones? 
It would be interesting to see if there's a difference in their dream content that would tell us something about what we're talking about. As to the dreams of future cyborgs, sure. I'll pass on that one. Oh, go on.</p><p><strong>[37:41] Michael Levin:</strong> Just out of curiosity, show of hands, we're all roughly the same age. Has anybody here dreamt of cell phones? I don't think I ever have.</p><p><strong>[37:48] Richard Watson:</strong> I don't think so.</p><p><strong>[37:49] Michael Levin:</strong> I don't think so.</p><p><strong>[37:50] Mark Solms:</strong> I certainly can't bring a dream to mind of cell phones, but I'm going to pay attention to that question now. I'd never thought of it.</p><p><strong>[37:59] Richard Watson:</strong> I certainly do dream of activities that I didn't do until I was an adult. And to keep it clean, I'm talking about driving.</p><p><strong>[38:07] Michael Levin:</strong> That's a good point. Lots of driving, lots of driving dreams. I wonder.</p><p><strong>[38:12] Richard Watson:</strong> And with respect to the repetitive ones that Mark just mentioned of writing and typing and things like, but you do dream about walking, right? Let's say you dream about the environment you're walking in.</p><p><strong>[38:27] Mark Solms:</strong> He's dreaming about walking.</p><p><strong>[38:33] Richard Watson:</strong> My first thought, Mike, when you, but I'll do, I'll go where Mark dares not go. My first thought about chimeric dreaming was I wondered whether it might be more like multi-participant collective community dreams, right? So sometimes, you know, we maintain, generally, I think, a singular sense of identity even whilst we're dreaming. I accept that even though it's all in my head, I still get surprised by things. So something in my head is making things up that I wasn't expecting, right? So it's almost like there is something collective happening in any dream when you get surprised by things. 
So I was just trying to connect that with what would it be like if one entity that had multiple evolutionary histories, a chimeric being, was dreaming? Would it have more of that multi-participant dreaming sort of feel to it? Or is it the case that each participant can only have a singular identity in it? It's just that it's more surprising to them because there's other things going on.</p><p><strong>[39:47] Michael Levin:</strong> What do we make of the occasional person with an eidetic memory? I think that's a real phenomenon, right? Where people apparently can remember the most trivial details of any day. What do we make of that? Because some of them have apparently normal cognition. They get around, they live in society and all that. What do we make of that?</p><p><strong>[40:11] Mark Solms:</strong> Yes, I find it, so again, one must be careful talking outside of one's area of expertise. I'm not an expert on eidetic memory. But what comes to my mind is a famous case of Alexander Luria's. What was his name, Alexey? Was it Shereshevsky, the patient? Anyway, the book describing him is titled The Mind of a Mnemonist. And it's a man who can't forget. And reading that case study, you see how extremely inefficient that way of being is. It's an extremely concrete, extremely overly complex model, and the person was frankly autistic. Although Luria doesn't describe him as such, it's clear reading between the lines that he was autistic. So, you know, he doesn't... He doesn't generalise, he doesn't abstract, he doesn't get the big picture. And so you're saying that there are people who get by fine with that sort of memory. Just on the reading of that case, I find it hard to believe, though I really don't know that literature. And generally in development, kids have much more concrete sort of, you know, I don't mean babies. 
I mean, you know, once your declarative memory systems kick in, they remember a hell of a lot of trivial nonsense that we don't. But I think that then that all gets consolidated into a more generalisable picture. So I think generalisation, which means forgetting, as we were discussing earlier, just is obviously the efficient way to deal with the problem, the formal problem that Carl introduced us to at the outset, the fact that accuracy comes with complexity costs.</p><p><strong>[42:28] Richard Watson:</strong> I don't think it's just efficiency. I think it's literally the same thing. If I'm still causally affected by two things that happened in the past, then I'm not able to respond to one of them alone. If I'm still causally affected by two things that happened in the past, then I'm causally affected by two things that happened in the past, not by only one of them. It's like I haven't decided what's happening next if I haven't forgotten, if I haven't broken that causal dependence on one of those things. And I don't think that's just about efficiency.</p><p><strong>[43:14] Mark Solms:</strong> Well, I might be going off in the wrong direction now, but I also saw that Carl wanted to say something, so I'll be super brief. I think that what you're saying touches on what I said earlier, that you might be affected by two things, but once you've come up with one solution, you don't need to remember the two things. And I think that that's what we're talking about. We're talking about a generative model which doesn't have a solution for each and every thing. It has compromises. It has solutions which fuse, synthesize different problems. And then you can forget the two things. So what you're left with is the product. And the opposite is a problem. If you've got a solution for each and every situation, then it's not really a solution. It's not workable. 
And that is what I mean by efficiency.</p><p><strong>[44:09] Richard Watson:</strong> I see.</p><p><strong>[44:16] Michael Levin:</strong> Carl, did you want to comment?</p><p><strong>[44:18] Karl Friston:</strong> Yes. When I heard the word efficiency, I normally assume people are talking about the path of least action, which is just the most likely path into the future, given the kind of thing I am. So I quite like the efficiency word. But just to try and draw some of the things together, the ability to remember everything that you see at a very elemental sensorial level did remind me, as Mark was alluding to, of the idiot savant and the capacity to reproduce. And of course, what accompanies that remarkable ability is what some people call a lack of central coherence. So experts in autism say that this ability to remember everything that you've seen and reproduce it in a drawing comes at the price of failing to build a deep generative model, where you abstract those things that are required for generalisation. Mark actually articulated that very nicely in terms of, there is no abstraction. There is nothing that you can use for making sense of the coarse-grained carving nature at its joints in a much more fundamental way. So my suspicion is that the people who say they have photographic memory, they're either autistic or they've trained very much as Chinese children trained to do mental arithmetic. You can't do both. You can't have a deep generative model that is minimally complex in the right kind of way and remember all the fine-grained details, because that would entail too many degrees of freedom, and that would basically leave your model with low evidence, because it's too complex, because there are too many degrees of freedom available. So you would never generalize. You could just do that, like an idiot savant. 
But to try and get back to this notion of paths into the future and the like, if you are someone with severe autism or, well, any artifact that cannot disengage from the sensorium, then you are effectively affording too much precision, and thereby too high a learning rate, to the immediate moment, which means that, even if you're the kind of thing that has, as Richard was talking about, the ability to simulate into the future, to explore different paths so that you can select the one that is most likely for the kind of thing that you have learned you are, the depth into the future of those paths is severely curtailed. So one aspect of this lack of central coherence is the fact that if you're severely autistic, you just can't model yourself into the future. You can't predict yourself into existence in the future, which means that you become very, very reflexive. You become very tied to the moment, tied to the sensorium. So the depth of the path is severely compromised in things that don't have this kind of ability to forget about the sensory data, to reassign more precision to, you know, the deeper, slower aspects of the generative models. And for things like you and me, models of models of the future. I emphasize that to come back to Mike's question from about 10 minutes ago: if you want to apply these ideas to a cell, what kind of imperatives would you bring to the table in terms of scoring different paths into the future, selecting what you would do? Now, for you and me, because we have explicit models of the consequences of our actions, we can literally select what to do, to do this or to do that, to use Richard's words, we can choose, we can decide. 
But if you're a much simpler, say single-cell, organism that doesn't have a deep hierarchical structure, you don't actually have a jointed model of your future.</p><p><strong>[48:22] Karl Friston:</strong> You just have reflexes, which means you can't run out into the future. So there is no way of generating choices, of generating different paths into the future. You've just got to commit to one like a thermostat. What is that path? It's the path of least action. It's the most efficient path, given the kind of thing you are. So what I'm trying to work towards is that you can't do psychotherapy on cells. But what you can do is just look at the maths that determines the path into the future. And there's only one. In engineering, that would be something called path integral control. And basically what it is, is measuring, along the short-term path into the future, the difference, technically the relative entropy or KL divergence, between what you anticipate given the current circumstances is going to happen in the future and what a priori you think would happen to me as a cell, for example. And that balance basically determines the direction of travel. And if you want to now open up the directions of travel, then you have to decrease the precision of the preferred probability distribution over the kind of thing that I am. So if I was trying to simulate this or I was faced with this problem technically as an engineer, I'd be looking for where are my preferred states of being, and more specifically, where is my preferred distribution over paths, encoded physically. And more specifically, where is the precision of those sorts of sub-personal mathematical beliefs about my preferred paths into the future. So basically, if I was looking at a thermostat, I'd be looking for where it is that there is a sensitivity, a precision, a learning rate that controls the set point? 
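One way to make the path-scoring idea concrete is a minimal sketch, entirely my own construction rather than anything from the conversation (the set point, variances, and tolerance below are hypothetical): candidate paths are scored by the KL divergence between the predicted endpoint distribution and the preferred distribution over set points, and lowering the precision of the preference admits a wider range of paths.

```python
import math

def kl_gauss(m1, v1, m2, v2):
    """KL( N(m1, v1) || N(m2, v2) ) for univariate Gaussians."""
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

set_point = 20.0   # preferred temperature (hypothetical units)
pred_var = 0.5     # uncertainty about where each candidate path ends up

def acceptable_paths(endpoints, precision, tolerance=1.0):
    """Paths whose predicted endpoint distribution stays within `tolerance`
    nats of the preferred distribution N(set_point, 1/precision)."""
    pref_var = 1.0 / precision
    return [e for e in endpoints
            if kl_gauss(e, pred_var, set_point, pref_var) < tolerance]

candidates = [18.0, 19.0, 20.0, 21.0, 22.0]
tight = acceptable_paths(candidates, precision=4.0)   # fussy thermostat
loose = acceptable_paths(candidates, precision=0.25)  # tolerant thermostat
```

With the precise preference only the exact set point survives; relaxing the precision opens up neighbouring paths, which is the "knob" being described.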
Is this a very precise thermostat that gets really upset as soon as the temperature deviates? Or is it something that has a bit more latitude in it and can tolerate a greater range with less precision? So I'll be looking for the knob that encodes the precision on the set points over the paths of the kind of thing that I am. And if I'm a single cell, these will be responses to the world at different temporal scales. And then I relax that. And that in principle will allow you to take different paths that are not constrained by your very, very, very precise engineering and precise belief about the kind of thing that I am. And of course, what Mark was saying before, like he doesn't forget or learn anymore. He has now a very precise set of beliefs. He does exactly what he's going to do and say, given the kind of thing he is, because he has now learned a very precise self-model. So that if you wanted to get Mark to go bungee jumping or go to discos, you'd have to find the neurotransmitter basis of the precision on those particular paths into the future.</p><p><strong>[52:25] Richard Watson:</strong> You don't know, maybe that's what he always does.</p><p><strong>[52:27] Michael Levin:</strong> Yeah, right. Now I know the activity we're all going to do later this year. I got it. So that's very interesting, and it's not because, of course, I'm trying to deal with the intermediate case. It's not a single-cell thing. I'm trying to understand what are the possibilities open to the collective of cells in anatomical space, right? 
So I'm not so much thinking about single cells, and I don't think we know exactly what the collective can and can't do, but correct me if I'm wrong, one of the key parameters in what you just said is the kind of thing that I am, and that also, I think, is very interesting here, because if you think you're an axolotl, the kind of thing that regenerates organs, you might have a different future open to you. And so this is something I'm actually very interested in: what kind of a thing do you think you are as a cellular system? And the experimental models that we often have, so something like a frogolotl, right, where you combine a bunch of frog cells and a bunch of axolotl cells. It's a perfectly viable thing called the frogolotl. And now you can ask some interesting questions. What do you think you actually are in anatomical space? Because frog larvae don't have legs. Baby axolotls do have legs. As a frogolotl, do you think you should have legs? I mean, you can't answer that question from the genomics. You have all the genomes. That doesn't help you. We still need to understand: what do you really think you are, and what are you going to do? And maybe to some extent that's the trick: if you want to induce those kinds of outcomes that normally don't happen, you have to change the kind of thing you think you are. Maybe that's the control knob here, right?</p><p><strong>[54:17] Alexey Tolchinsky:</strong> I may add a quick thought based on what you said, Michael and Carl, going back to your GRNs, right, in your Pavlovian conditioning experiment, you gave a task which was somewhat stressful. It wasn't trivial. And then the learning built some agency, some intelligence. And how did that happen? These nodes learned how to work together. You've built collective intelligence essentially to accomplish this task, right? 
And you've done it by introducing some stress. And, with a nod to Mark and to Freud, the ego grows in frustration; we need a balancing act of some containment and some predictability. So when you build a biodome, you introduce some containment and some predictability in the foundation, saying, I will survive. But then we do need to introduce some stress, some frustration, not too much as in trauma, not too little as in triviality. And that may possibly shift the system into a new regime, such that this new collective needs to build something. But without stress, the change is not possible. We need an influx of energy for it to change.</p><p><strong>[55:21] Michael Levin:</strong> Carl, please.</p><p><strong>[55:23] Karl Friston:</strong> I have to go in 2 minutes to do a PhD in Montreal, but just to pursue that point. Mike, do you remember, very early on, pre-Frans, in fact, when we were doing that simulation of morphogenesis, it just struck me that you're talking about having the potential to be different kinds of things. I mean, that was exactly the whole point of that sort of pluripotentiality. All of the constituent cells could be anything. And all they had to do was to infer which particular kind of thing am I in this context. And that context was established by communication with the others. And the precision with which they commit to or select being this particular kind of thing in this particular sort of anatomical space, as it were, or their contribution to the ensemble of that space, was the bioelectric signalling. So that would be the knob you'd be looking at: to change, to relax the precision. Otherwise the whole thing just goes like that, because everything very precisely believes I should be a tail, I should be a head, I should be this and I should be that. 
But if you reduce the precision by just putting smaller gradients on the bioelectric communication or chemical communication, then you get a much, much more uncertain, much more pluripotent and diverse, and slower set of outcomes. We didn't simulate that other than sort of cutting things in half, but it might be interesting to revisit that, because there you know what the precision knob is. It's basically the strength of the signal from you to me, telling me, I'm over here, I've got to be a head, you're a tail, and then when we're both in agreement sending the right kinds of messages that are precise, we can commit our pluripotential to being this kind of thing.</p><p><strong>[57:21] Michael Levin:</strong> Similarly with anthrobots and xenobots, what kind of thing are you? Looking at your genome doesn't help because you've got, in the case of the xenobot, the same genome as the frog, so that's not going to help. But you are a different thing, and you end up upregulating genes for sound perception and doing things that normal frog embryos don't do, because in some way you've now changed, and this is something we're very interested in looking at. We have now calcium signaling data on all of these things and so on to try to figure out what does it think it is. And then I guess in subsequent chats, what I'd love to dig into is that we talked about a very general notion of sleep. And I'd love to talk about how one recognizes sleep in things that aren't typical brains, so you can't sort of lean on REM patterns and whatnot, but what does it look like? How do you know when a system is sleeping? What does that look like in different embodiments?</p><p><strong>[58:23] Karl Friston:</strong> I'll get Giulio to talk about his fruit flies sleeping. He loves that.</p><p><strong>[58:28] Michael Levin:</strong> The fruit fly is way more conventional than what I'm thinking of. I'm thinking of some really weird things. 
We'll have to go well beyond the fruit fly, and then come back to the whole GRN thing because...</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Discussion: Richard Watson, Alexey Tolchinsky, Mark Solms, Michael Levin, and Karl Friston</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>A roundtable with Richard Watson, Alexey Tolchinsky, Mark Solms, Michael Levin, and Karl Friston on memory and forgetting in human and unconventional intelligence. They also discuss overfitting, REM sleep, dreams, and collective cellular identity.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/w_ciA-yyF8M" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/d3acb073/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour meeting with Richard Watson, Alexey Tolchinsky, Mark Solms, and Karl Friston, where we discuss issues of memory (especially the role of forgetting) in diverse intelligence (human patients and beyond), and a bit on dreams and psychoanalysis. The original question from me was motivated by some findings on the effects of induced forgetting in models of unconventional cognition (more coming soon).</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Role of forgetting</p><p>(06:22) Overfitting and generalization</p><p>(10:45) Accuracy minus complexity</p><p>(21:13) REM sleep and transference</p><p>(24:40) Choosing futures and pasts</p><p>(31:18) Cellular psychotherapy ideas</p><p>(34:58) Dreaming of cell phones</p><p>(39:47) Photographic memory costs</p><p>(44:18) Precision and future paths</p><p>(52:25) Collective cellular identity</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a 
href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Michael Levin:</strong> What I'm interested in is to get all of your thoughts on the following question, the role of forgetting in particular, the role of losing memories, if you even think that happens, but the role of forgetting in agency and the potentiation of agency, and just in general, what role you think forgetting plays in the mind and in the capacity to have a significant mind, like how important is forgetting? How do you see forgetting and so on? So that's what I'm interested in. And yeah, I can give you the context of why I'm asking this, but that's what I'd love to hear about.</p><p><strong>[00:42] Mark Solms:</strong> I'm sure we all remember the context. If I may, I will begin. 
When I read your original description, the thoughts that occurred to me were exactly the thoughts that Carl then articulated over the emails about model complexity and the need to balance accuracy with complexity, and Carl drawing attention to how during sleep, when a lot of memory consolidation goes on, consolidation, of course, involves both what we retain and what we forget. It's a selective process. And Carl drew attention to how we believe-- he says we believe-- actually, it began with him and Alan Hobson believing, and now we all agree with them, that during sleep, there's a reduction-- there's a getting rid of redundant synapses or synaptic connections, because otherwise, you have too complex a model. And this is the ideal time to do it because nothing's happening. There's no new incoming error. So those thoughts that Carl articulated were exactly the thoughts I had. So then I thought, well, now that Carl's expressed my thoughts, which were actually derived from his thoughts, I'll have to come up with new thoughts. And these were the additional thoughts. They actually are just two of them. The one is that there's an interesting problem in infancy when you've got a hell of a lot to learn and new things happen all the time. How do you balance this business at the very beginning of life? And how do you retain any kind of a stable model when the world is so utterly unpredictable? Then there must be some mechanism whereby there's some continuity in the kind of base model. Otherwise, you're just totally fragmented and every day wipes out your beliefs that you had established the day before. And I would like to link that with the fact that in the first two years of life in humans, there's pretty much no declarative memory. It's all non-declarative. So things go from short-term memory into non-declarative long-term memory. 
They can't retrieve those memories and rethink them because that's what non-declarative memory is.</p><p><strong>[03:24] Mark Solms:</strong> Things just go straight into these automatic memory systems. And the way that I think about those subcortical non-declarative memory systems is that they carry high precision. This is on the view that consciousness is uncertainty. That's what consciousness is for, is to feel your way through situations where you're not so confident about your predictions. You're palpating them and testing them against the incoming errors. And so this is not happening in relation to the memory systems of infants. Everything goes into long-term non-declarative memory. So I think that there's some kind of biasing, some kind of excessive confidence. I don't know if that's right, but that's my thought. And then you can link that with the fact that there's so much REM sleep in infancy. It used to be thought that it's during REM sleep that all the memory consolidation is going on, but in fact, it turns out to be the opposite. It's during non-REM sleep that all the memory consolidation is going on during sleep. And REM sleep is a highly entropic state. So it's dealing with uncertainties and it's conscious. You know, you're dreaming during REM sleep. So you're in a state of uncertainty by physiological measures and by psychological measures in the sense of the subjectivity of a highly emotional, conscious state of mind. And, this is the last thing I'll say, REM sleep is incidentally also characterized by highly unstable homeostasis; we go out of kilter across a great many homeostatic parameters during REM sleep. So it really is a state where you're in a lot of uncertainty, even at the level of autonomic homeostatic mechanisms. So I'm of the view that during REM sleep, we are actually resisting, like we do during infancy, too much model revision, too much forgetting. 
We're wanting to retain non-declarative memories against the accumulating errors of the day. It's trying to explain them away. In other words, trying to forget, trying to not remember, trying to not update the existing non-declarative model. So those are my opening shots.</p><p><strong>[06:10] Michael Levin:</strong> Great. I made a couple of notes because I want to come back to the whole sleep thing, but maybe we'll go around with this topic. Who wants to?</p><p><strong>[06:22] Alexey Tolchinsky:</strong> I mean, I'll build on what Mark just said, which is very useful. And to add to your work, Erik Hoel's overfitted brain hypothesis, which was new to me because I studied your dream and sleep work thoroughly and I watched your debate with Alan Hobson with great pleasure. So Erik Hoel suggests that one of the things dreams are useful for is they reduce overfitting, because what we've learned in the day is being placed in a wildly different context. It allows us to loosen the priors and to see what can generalize. And incidentally, he's a writer, he writes fiction. He has said fiction serves a similar function: when we fantasize, we do the same thing. Because when we hold on to very precise notions, we cannot generalize. And I think that's the general theme in forgetting and building. So what I think, Michael, you said, when we remember, when we recall, we build agency, we build a higher level, we build a macro. But when we forget, we sharpen the causal signal. This is the sculptor's chisel. So then one of the things we optimize is exactly generalization, because if we use the precise memories we've learned, we cannot use them in other instances. The metaphor for that is Funes the Memorious, the story by Jorge Luis Borges. A man fell from a horse and lost the ability to forget. And then he couldn't recognize his dog anymore because at 3.45 and 4.05, there was a slightly different angle of view and slightly different shade of the fur. 
So he lost concepts, he lost abstraction, he lost pattern recognition. And incidentally, speaking of agency, he lost the self, because self is a mental object and we must abstract to retain some coherence and some continuity of the self. And in neurology, I suppose semantic dementia is close to that, where concepts are gone and we only have details. We sort of live in the here and now. It's the recent self without any continuity to the past. But generalization is a balancing act. So these are the cases where there's not enough generalization. But when there's too much generalization, we have another issue, like Alzheimer's when it starts, you know, we start losing the recent details. And in that sense, self lives in the past. You know, we have some concepts, but we will lose the recency. We stop updating the self. And also generalization can be skewed or biased. Like in PTSD, a flashback is re-experiencing now in the same context what happened back then in the circumstances of trauma. So this is incorrect generalization, overgeneralization of the phobic memory. And I suppose in depression or in OCD, when we ruminate, it's again the negative experience of the past casting a shadow on the present and on the planning for the future. So I think that this forgetting serves a function of optimizing generalization. And exactly like Mark said, there's also a metabolic function, because every memory trace is metabolically costly and we just can't afford to hold on to everything. I mean, I think in physics, the structure that remembers everything is a black hole. It encodes everything on the event horizon at maximum density. So that's the kind of structure that remembers everything. Without forgetting, we are dysfunctional, including the self-functioning. 
But I've talked too much, so these are my thoughts on what Mark said.</p><p><strong>[09:43] Richard Watson:</strong> Alexey, can I check that I understand the connection between what you were saying about reducing overfitting and what Mark was saying previously? So the connection is that by resisting the update of long-term memory with particular instances, that's what Mark was talking about, you are fostering an ability to avoid overfitting to those particular instances, right?</p><p><strong>[10:09] Alexey Tolchinsky:</strong> I think that memories are malleable, even Pavlovian memories. We update and change the context. We weaken and dampen them. If we cannot let go of some details, we cannot think, we cannot disambiguate, exactly like that dog that is different, if that's...</p><p><strong>[10:30] Richard Watson:</strong> Even the things which are the same.</p><p><strong>[10:31] Alexey Tolchinsky:</strong> Right.</p><p><strong>[10:40] Michael Levin:</strong> Carl, do you want to?</p><p><strong>[10:45] Karl Friston:</strong> Say anything about that? Yeah, so lots of themes here. Just to address that last question, from the point of view of machine learning and physics, that point about generalisation being the same thing as avoiding overfitting, I think it's absolutely fundamental. So, you know, it's fairly straightforward. I think David MacKay was the first person, or perhaps even before that, statisticians Cass and Stepley were able to prove that the ability to generalise is a measure of the evidence for your generative model of the way in which your data or your world supplies data. And the log of the evidence is just the accuracy minus the complexity. So, coming back to Mark's point, that means that to generalize is to have the simplest accurate explanation or model or account of everything you're trying to explain. So mathematically, they are the same thing. 
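The identity stated here, log evidence = accuracy minus complexity, can be checked numerically in a toy model. This is a sketch of my own for illustration, not anything from the conversation: a conjugate-Gaussian example where the decomposition is exact, with accuracy the expected log-likelihood under the posterior and complexity the KL divergence from the prior.

```python
import math

def log_npdf(x, mu, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

# Toy conjugate model: prior N(0, 1) over a latent theta, unit-variance noise.
# The data values are illustrative only.
y = [1.0, 2.0]
prior_mu, prior_var, noise_var = 0.0, 1.0, 1.0

# Exact posterior q(theta) = N(post_mu, post_var)
post_prec = 1.0 / prior_var + len(y) / noise_var
post_var = 1.0 / post_prec
post_mu = post_var * (prior_mu / prior_var + sum(y) / noise_var)

# Accuracy: expected log-likelihood under the posterior q
accuracy = sum(log_npdf(yi, post_mu, noise_var) for yi in y) \
           - len(y) * post_var / (2 * noise_var)

# Complexity: KL divergence from the prior to the posterior
complexity = 0.5 * (post_var / prior_var
                    + (post_mu - prior_mu) ** 2 / prior_var
                    - 1.0 - math.log(post_var / prior_var))

log_evidence = accuracy - complexity
```

For this conjugate case the bound is tight and equals the exact log marginal likelihood; more degrees of freedom inflate the complexity term, which is the sense in which "accuracy minus complexity" penalizes overfitting.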
And if one elevates that notion of model evidence or interprets it now in an evolutionary context, or, sorry, more generally in a selective context, again, coming back to Mark's notion that we are selecting things, then what is selected is just simply the thing that is most likely to be there. And the thing that is most likely to be there, with a nod to survival of the most likely, is those that have the greatest marginal likelihood. And model evidence just is the marginal likelihood. So I think mathematically all these things are the same thing. So to summarize that, the things that are selected, the last man standing, as it were, is just the most likely thing that you're going to see. That likelihood is always expressed as accuracy minus complexity. And thereby maximising the marginal likelihood means minimising the complexity. And that means that you will have the best model that is able to generalise. So the question then just, I think, resolves again formally to what timescale we're talking about. I mean, the selection process, you could argue, unfolds at all timescales, but is exactly the same kind of process. So you can have attentional selection over, say, 300 milliseconds to several seconds. You can have action selection. We select the most likely thing that we're going to do next over multiple time scales, right the way through to, well, you could even argue in neurodevelopment from the perspective of neural Darwinism and the theory of neuronal group selection if you wanted to, but you can jump right through to natural selection at a very, very slow time scale. So it's the same thing going on every time scale. It just looks different and we have different disciplines and different ways of talking about these things. But it's the same underlying, almost tautological explanation for the way things are.</p><p><strong>[14:14] Karl Friston:</strong> It couldn't be any other way from a mathematical perspective. 
Dreaming is interesting because that talks about a particular time scale of a diurnal sort. And it's interesting then to link that to memory. And something that I think both Alexey and Mark alluded to was that to consolidate is to forget selectively. And I often think of this in terms of a sculptor creating a figurine, for example. It's what you remove which gives it its form. And therefore, if I now read forgetting as removing the right stuff, minimizing the complexity in the right kind of way, then forgetting is just a particular kind of learning or model optimization that basically consolidates the stuff that is not removed. So it's not surprising that much of the process of selection is taking stuff away, either by death or by ignoring it, or by some synaptic homeostasis while we're asleep. So forgetting is just the other side of the coin from learning. Without forgetting, you couldn't learn; without learning, you couldn't forget. Both, I think, are apt descriptions. There's another conversation we could have here, which is not so much biological, but more you would find in economics and state-space modelling, which is Bayes-optimal forgetting and volatility, adapting the particular learning of certain things, and in particular, the learning rate, which is just a precision. I think that's another sort of identity or isomorphism, which is important to remember. Precision is just a learning rate. So if you write down, if you just think about any differential equation and you apply some precision or some parameter to some prediction error that's driving changes in what you're representing or learning, then the units of precision are per unit time. So precision is a learning rate, which means that if Mark is right and children have to learn very, very quickly, then they're going to be assigning a lot of precision to their sensorium relative to their prior beliefs, for example.
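The "precision is a learning rate" identity can be made concrete with a toy scalar Bayesian update. This is an editorial sketch with hypothetical numbers, not the speakers' model: the Kalman-like gain that weights the prediction error is set by the relative precision of data versus prior, and that gain is exactly an effective learning rate.

```python
def precision_weighted_update(estimate, observation, pi_prior, pi_data):
    """One scalar belief update: the prediction error is weighted by a
    Kalman-like gain, and that gain is a learning rate determined by
    the relative precision (inverse variance) of data versus prior."""
    gain = pi_data / (pi_prior + pi_data)   # effective learning rate
    return estimate + gain * (observation - estimate)

belief = 0.0
# Precise sensory data (high pi_data): big step toward the observation.
print(precision_weighted_update(belief, 1.0, pi_prior=1.0, pi_data=9.0))
# Noisy data, e.g. eyes shut during sleep: the belief barely moves.
print(precision_weighted_update(belief, 1.0, pi_prior=1.0, pi_data=0.1))
```

Raising the precision afforded to the sensorium raises the gain, so the same prediction error drives a faster change of belief, which is the sense in which fast-learning children would be assigning high precision to their sensory data.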
So on that view, there's a really interesting link between volatility in your environment and the right precision or learning rates that you bring to the table to match that volatility. And this, you know, I see this in many, many different fields, ranging from the Kalman gain in Bayesian filtering. If you've got very, very precise data, you pay a lot of attention to them, i.e. you assign high precision, i.e. you increase your learning rate in the face of those data. But if the data are really, really noisy or you've got your eyes shut during sleep, then you wouldn't afford the same kind of precision in state estimation.</p><p><strong>[17:43] Karl Friston:</strong> In an evolutionary context, I first came across this in Ernst Mayr's The Growth of Biological Thought, where he was telling a story where if you have Drosophila fruit flies and you rear them in a volatile environment by manipulating the temperature, you increase the mutation rate. So they forget genetically or epigenetically the kind of environment to which they are, most likely, best fit. So I can't remember. And then Stuart Kauffman came in with sort of second-order selection, selection for selectability. Again, it's just mathematically the same thing. It's just the selectability is just the rate of forgetting, which is just the precision at this particular level of optimization. I think Stuart Kauffman went on to actually revisit that second-order selection, which I think you could easily read as forgetting and just basically matching your learning rate, your precision, your rate constants to the actual volatility of the world which you're trying to explain. So to come back to the neurodevelopmental thing, which I hadn't really thought about, that basically means you'd expect things that have a lot of learning to do to get a consolidated, good, generalizing generative model of their world. They're going to learn very, very quickly.
And that means that they're going to forget also very, very quickly, until they can weed out what things are invariant over time. The last thing, more of a question, it's, you know, in terms of declarative memory, you know, it's interesting that during REM sleep, unless you've woken up, you don't actually remember your dreams, which is, which I think there's another sort of dynamic in play here that, in order to not forget, you have to literally do reinforcement learning, literally in the way that the word reinforcement learning was originally introduced, which is to reinforce a synaptic connection. So although in dreaming, well, in my world, in terms of simulating these processes, you are generating some sort of fictive content in order to weed out the redundant synapses and associations to minimise the complexity. The imperative here is to get rid of synaptic connections. You really do not want to retain them. So there must be another neuromodulating mechanism that says no, okay, this was actually activity induced by real exposure to the sensorium. And I'm going to remember this. I'm going to lock it in in some way of the kind that we do during waking. But that's not what's going on in rapid eye movement sleep. But maybe during slow-wave sleep. I haven't kept up with that literature. Mark, you look as though you've got something.</p><p><strong>[21:13] Mark Solms:</strong> Well, I just agree with everything that you're saying. The slow wave sleep is a much more predictable process. That's what the slow waves are. You know what waves are coming next. It's a much more passive process. There's much less mental work, predictive work going on. It's just, I imagine, accepting, as it were, the errors that have accumulated. The active process is resisting the updating. It's fighting against the errors, you know, so forgetting. And I agree with you. Dreaming is an eminently forgettable process. 
It's really one of the most striking features of dreams is you can't remember them. So, you know, they specialize in forgetting. It's trying to explain away everything that's trying to make me update my model that I don't want to. So I think that things that are relatively superficial, in other words, tolerable by the simple generalizable model, superficial things which don't actually question your core beliefs, those get encoded, but things which threaten your core beliefs, your generalizable, non-declarative model, those things you need to explain them away. And I think that that's the main thing that's going on in dreaming now. Michael invited us in his original e-mail to link this to psychoanalysis. I normally am reluctant to bring in psychoanalysis because it's my own pet interest and it doesn't generalize to everyone else's interest. But since Mike asked me to or invited us to, I want to say this. The problem with childhood models, I mean, we can see why they must be high confidence models. They must be highly generalizable. You know, they must persist. They become our core beliefs. The problem is that they're models that we built a long time ago under very different circumstances to the ones that prevail in adulthood. And this is what we deal with in clinical psychoanalysis. The problem is that our patients are living in the present as if it were the past. That's what we call transference. They're transferring the past and their beliefs and predictions deriving from the past, which are the best solutions they could come up with to the world that they were living in then, or the least bad predictions they could formulate then. They then become non-declarative and automatized, and they perseverate into adulthood. And they're living in a world that isn't the world that's there. And I think that this is where Freud's wish fulfillment theory of dreams comes in. 
It's an attempt to explain away that which does not fit with your non-declarative generative model, your simple generalizable childhood model. And there's a lot that doesn't fit with that model precisely for the reason that I just said. And you're resisting it, you're resisting updating. I will just add one other little footnote, which is that, of course, as we age, I think I must be probably the oldest in the room. I can tell you that I don't do a hell of a lot of updating anymore.</p><p><strong>[24:34] Michael Levin:</strong> Thanks. That's great. I've got a bunch of stuff to ask about. Richard, did you want to say anything before we?</p><p><strong>[24:40] Richard Watson:</strong> Yeah, just a little, thank you. So I guess I mean all of us are on the same page. I think that the naive idea that it would be best if you could remember everything because obviously you could make better informed decisions if you didn't forget anything, I think we're all of the opinion that that's naive and that forgetting is necessary in order to have a model of future behavior which is specific rather than retaining all possibilities. A way of thinking about it that occurred to me that may or may not be useful here is an idea of agency that is time reversible. So that it's similar to what Carl said, that forgetting the right stuff and deciding the right stuff are really the same kind of action. If you think about deciding something as decoupling the causal relationship between the state of things as they are and the consequences your actions are going to have for the future, right? It's like, deciding something is as though I changed the state that I am in now in such a way that I will do this action rather than that action. And forgetting is like a decoupling between the state that I am now and the causes that made me like that in the past. It's like I'm going to become the thing that was made by this history instead of the thing that was made by that history.
So by choosing to have a particular history, which means forgetting something instead of holding both of those possibilities, I'm going to forget this one and be that one. Now that's the same as being something different now, which is the same as deciding a different path for the future. So that the choice that you make, if you think you can make choices about which path you go on in future, that's the same thing as making choices about which path you came at from the past. So there ought to be a collapsing of possibilities going forward, ought to be identically symmetric with a collapsing of possibilities from the past, because otherwise, you know, you've lost even more causation than free will thinks you've lost. Let me try that again, right? So imagine that you couldn't change the past, but you could change the future. I can make a decision and just decide to do this instead of that as though my free will intervenes on causation in some way, right? That's super weird because I'm somehow imagining that I can't change the past, but I can change who I am right now in this moment so that I can take a different path in the future. I think it's less inconsistent with causation. It's more consistent with causation to say that when I choose a different path for the future, I'm also choosing a different history. And I'm like I'm stepping between train tracks and one was going this way and one was going that way. And if I can make a decision about which way I'll go into the future, that's the same as making a decision about which history I come from. And so that's the same act. The act of deciding what you're going to do in the future is the same as the act of forgetting a particular path about where you came from in the past. So I think you can't have one without the other. So I'll try, I'll try one more time because I've been rambling a bit. That it's about whether you think the state that you are now causes what's going to happen next. 
And if you can decide between one possible future and another possible future, that's a decoupling between what you are now and what happens next. And if you can do that, that's the same as saying, I'm decoupling what I am now from what caused me to be like this from the past. A word that we might use for that is attending: I'm attending to this thing from the past rather than attending to that thing from the past. And by attending to them, I change who I am in this moment and thus what I'm deciding for the future. So I'm just offering that view of it as a sort of a time reversible relationship between decisions and forgetting.</p><p><strong>[29:09] Karl Friston:</strong> I'd be interested to hear what Mark has to say from the point of view of psychotherapy on that, because I imagine most of his life is actually opening up that choice of paths into the future, given the past.</p><p><strong>[29:22] Richard Watson:</strong> I would imagine that being able to do a different future is tantamount. What I'm suggesting is that it's tantamount to being able to see your history differently.</p><p><strong>[29:37] Mark Solms:</strong> Yeah, I don't want to go too far down the psychotherapy line because I'm sure that Michael has questions from his own field in relation to what we've already said. But I will just say that it's a hell of a hard thing. Psychotherapy is very difficult. People don't want to change. That's what they resist. And it's because it's the non-declarative aspects of their predictive model that are causing all the trouble. It's not easy to change. So what we do is draw attention to the patterns of behavior, what they're enacting. They're enacting their beliefs, and they are enacting their predictions. If I do this, then that will happen. Of course, they're doing this automatically, and that isn't happening. That's why they suffer from emotional disorders. That's the error signal. But they're not using it to update what they're doing.
So we draw attention to, can you see you're doing this all the time, and it's meant to have that outcome, and it's not having that outcome, and that's why you're suffering like this. That problematizes their generative model. And then they lay down new predictions. It doesn't extinguish the old ones. The bad old ways always stay there. That's why we can go back to our bad old ways. So we don't extinguish those core beliefs, but we supplement them with better ones, with new beliefs, which gradually get deeply consolidated. And that's why the treatment takes so long, working through, we call that. But over to you, Mike.</p><p><strong>[31:18] Michael Levin:</strong> A whole list of things. Let's see, just briefly, this business of forgetting or changing your story of the past is hugely relevant to some of our work on regeneration, for example, because one way to look at, for example, mammals not regenerating their limbs is that they have an evolutionary history in which it didn't make sense for them to try. It wasn't going to work, they would get infected and all these things. But now, with our wearable biodomes and various other things, there is a future that now makes sense, whereas it didn't before. And we spend a lot of time thinking about how to soften those priors. What are the signals that we could give the cells? Because it's not that they can't. I think they've, it's just not the model of themselves and of their future that they have now, because it's been shut down for various practical reasons that we can now lift. And so I spent a bunch of time trying to understand what kind of stimuli we can get, right? So this is, not to be facetious, but some kind of psychotherapy at a somatic level for cells and organs and things like this that basically I think have a bunch of frozen priors about what they should and shouldn't do that are now limiting more than they are helpful. 
And if we can sort of guide them to a different, a reinterpretation of what their past was into a new future, I think the mechanisms are all there. They have the tools to do it. I think they're just on a different path, so to speak. So, I don't know, what the relevant version of therapy is in that case. I mean, we thought about plastogens and some things like that, but surely there are more techniques.</p><p><strong>[33:15] Mark Solms:</strong> Somehow what comes to mind, and it's a tangent, it's a free association to what you've just said. So it's relevant, but I don't know why. And this also builds on what Richard was saying earlier. It seems that once you've automatized, in other words, deeply consolidated, in other words, rendered very precise a belief, then you no longer need to know where that belief came from. I mean, that's what adds to the uncertainty. It's sort of like, well, step A, B, C, D, E led me to this, maybe B was wrong, I better go back and rethink it. But once you've automatized, deeply consolidated the outcome of that predictive work, then you don't need to know how you got there. And I think that's a big part of what you're talking about. As I say, I just intuitively, that seems relevant to what you just said, Mike. So it's not, so forgetting, it's too general a word, it's selective forgetting, it's retaining the products of learning, but forgetting the sort of course by which you got there, because that's no longer, you no longer need that information. If forgetting were to pertain equally to the products of the learning process, you'll have a very unstable system with much less agency. I mean, that's what your question was all about at the outset.</p><p><strong>[34:58] Michael Levin:</strong> Okay, a couple of things following up on this. First, back to the dreaming thing, and maybe you guys can fact check me on this. 
So my collaborator, Marca, whom you should meet at some point, was telling me this thing, which I had heard, that people don't dream of cell phones, despite how common this is, that nobody ever dreams of cell phones. So first of all, is that a fact? Is that a real issue? And if so, then I want to hear what you guys have to say about that, why you think that's the case. And more broadly, the reason I'm interested in this is because I'm thinking about novel beings, okay, so of course the humans in various cyborg configurations, and then ultimately some very, very different kinds of beings that are going to be around. What do you think about their sleep, not so much the architecture of the sleep, but the content, the interpretation, the meaning of the dreams of beings who don't have the same evolutionary past as we do? And what does it mean when we do and don't dream of specific things? And how are we going to, for example, interpret dreams of these novel beings and so on?</p><p><strong>[36:13] Mark Solms:</strong> So I don't know the facts about whether we dream of cell phones. But what it brings to my mind is a slightly older literature from the 80s and the 90s when a lot of work was being done on typical content. And there were remarkable swathes of things that we don't dream about. And it included things like calculating, writing, typing, and it seems to be in the same ballpark as cell phones. These kind of boring, repetitive things that we do all the time and that don't have much, there's not a hell of a lot to learn there. They're just taken for granted sort of things. It would be interesting to see if that is a finding. I'm not questioning it. I just don't know that data. But it would be interesting to see, does it apply equally across the age range for those of us for whom cell phones were a novelty, as opposed to those who were born into a world of cell phones?
It would be interesting to see if there's a difference in their dream content that would tell us something about what we're talking about. As to the dreams of future cyborgs, sure. I'll pass on that one. Oh, go on.</p><p><strong>[37:41] Michael Levin:</strong> Just out of curiosity, show of hands, we're all roughly the same age. Has anybody here dreamt of cell phones? I don't think I ever have.</p><p><strong>[37:48] Richard Watson:</strong> I don't think so.</p><p><strong>[37:49] Michael Levin:</strong> I don't think so.</p><p><strong>[37:50] Mark Solms:</strong> I certainly can't bring a dream to mind of cell phones, but I'm going to pay attention to that question now. I'd never thought of it.</p><p><strong>[37:59] Richard Watson:</strong> I certainly do dream of activities that I didn't do until I was an adult. And to keep it clean, I'm talking about driving.</p><p><strong>[38:07] Michael Levin:</strong> That's a good point. Lots of driving, lots of driving dreams. I wonder.</p><p><strong>[38:12] Richard Watson:</strong> And with respect to the repetitive ones that Mark just mentioned, of writing and typing and things like that, but you do dream about walking, right? Let's say you dream about the environment you're walking in.</p><p><strong>[38:27] Mark Solms:</strong> He's dreaming about walking.</p><p><strong>[38:33] Richard Watson:</strong> My first thought, Mike, when you asked, well, I'll go where Mark dares not go. My first thought about chimeric dreaming was I wondered whether it might be more like multi-participant collective community dreams, right? So sometimes, you know, we maintain, generally, I think, a singular sense of identity even whilst we're dreaming. I accept that even though it's all in my head, I still get surprised by things. So something in my head is making things up that I wasn't expecting, right? So it's almost like there is something collective happening in any dream when you get surprised by things.
So I was just trying to connect that with what it would be like if one entity that had multiple evolutionary histories, a chimeric being, was dreaming. Would it have more of that multi-participant dreaming sort of feel to it? Or is it the case that each participant can only have a singular identity in it? It's just that it's more surprising to them because there's other things going on.</p><p><strong>[39:47] Michael Levin:</strong> What do we make of the occasional person with an eidetic memory? I think that's a real phenomenon, right? Where people apparently can remember the most trivial details of any day. What do we make of that? Because some of them have apparently normal cognition. They get around, they live in society and all that. What do we make of that?</p><p><strong>[40:11] Mark Solms:</strong> Yes, I find it, so again, one must be careful talking outside of one's area of expertise. I'm not an expert on eidetic memory. But what comes to my mind is a famous case of Alexander Luria's. What was his name, Alexei? Was it Shereshevsky, the patient? Anyway, the title of the book describing him is The Mind of a Mnemonist. And it's a man who can't forget. And reading that case study, you see how extremely inefficient that form of, that way of being is. It's an extremely concrete, extremely overly complex model, and the person was frankly autistic. Although Luria doesn't describe him as such, it's clear reading between the lines that he was autistic. So, you know, he doesn't... He doesn't generalise, he doesn't abstract, he doesn't get the big picture. And so you're saying that there are people who get by fine with that sort of memory. Just on the reading of that case, I find it hard to believe, but I really don't know that literature, but I find it hard to believe. And generally in development, kids have much more concrete sort of, you know, I don't mean babies.
I mean, you know, once your declarative memory systems kick in, they remember a hell of a lot of trivial nonsense that we don't. But I think that then that all gets consolidated into a more generalisable picture. So I think generalisation, which means forgetting, as we were discussing earlier, just is obviously the efficient way to deal with the problem, the formal problem that Carl introduced us to at the outset, the fact that accuracy comes with complexity costs.</p><p><strong>[42:28] Richard Watson:</strong> I don't think it's just efficiency. I think it's literally the same thing. If I'm still causally affected by two things that happened in the past, then I'm not able to respond to one of them alone. If I'm still causally affected by two things that happened in the past, then I'm causally affected by two things that happened in the past, not by only one of them. It's like I haven't decided what's happening next if I haven't forgotten, if I haven't broken that causal dependence on one of those things. And I don't think that's just about efficiency.</p><p><strong>[43:14] Mark Solms:</strong> Well, I might be going off in the wrong direction now, but I also saw that Carl wanted to say something, so I'll be super brief. I think that what you're saying touches on what I said earlier, that you might be affected by two things, but once you've come up with one solution, you don't need to remember the two things. And I think that that's what we're talking about. We're talking about a generative model which doesn't have a solution for each and every thing. It has compromises. It has solutions which fuse, synthesize different problems. And then you can forget the two things. So what you're left with is the product. And the opposite is a problem. If you've got a solution for each and every situation, then it's not really a solution. It's not workable. 
And that is what I mean by efficiency.</p><p><strong>[44:09] Richard Watson:</strong> I see.</p><p><strong>[44:16] Michael Levin:</strong> Carl, did you want to comment?</p><p><strong>[44:18] Karl Friston:</strong> Yes. When I heard the word efficiency, I normally assume people are talking about the path of least action, which is just the most likely path into the future, given the kind of thing I am. So I quite like the efficiency word. But just to try and draw some of the things together, the ability to remember everything that you see at a very elemental sensorial level did remind me, as Mark was alluding to, of the idiot savant and the capacity to reproduce. And of course, what accompanies that remarkable ability is what some people call a lack of central coherence. So experts in autism say that this ability to remember everything that you've seen and reproduce it in a drawing comes at the price of failing to build a deep generative model, where you abstract those things that are required for generalisation. Mark actually articulated that very nicely in terms of, there is no abstraction. There is nothing that you can use for making sense of the coarse-grained picture, for carving nature at its joints in a much more fundamental way. So my suspicion is that the people who say they have photographic memory, they're either autistic or they've trained, very much as Chinese children train to do mental arithmetic. You can't do both. You can't have a deep generative model that is minimally complex in the right kind of way and remember all the fine-grained details, because that would entail too many degrees of freedom, and that would basically leave your model with low evidence, because it's too complex, because there are too many degrees of freedom available. So you would never generalize. You could do just that, like an idiot savant.
But to try and get back to this notion of paths into the future and the like, if you are someone with severe autism or you're, well, any artifact that cannot disengage from the sensorium, then you are effectively affording too much precision, and thereby learning rate, to the immediate moment, which means that, if you're the kind of thing that has, as Richard was talking about, the ability to simulate into the future, to explore different paths so that you can select the one that is most likely for the kind of thing that you have learned you are, then the depth into the future of those paths is severely curtailed. So one aspect of this lack of central coherence is the fact that if you're severely autistic, you just can't model yourself into the future. You can't predict yourself into existence in the future, which means that you become very, very reflexive. You become very tied to the moment, tied to the sensorium. So the depth of the path is severely compromised in things that don't have this kind of ability to forget about the sensory data, to reassign more precision, you know, to the deeper, slower aspects of the generative models. And for things like you and me, models of models of the future. I emphasize that to come back to Mike's question from about 10 minutes ago: if you want to apply these ideas to a cell, what kind of imperatives would you bring to the table in terms of scoring different paths into the future, selecting what you would do? Now, for you and me, because we have explicit models of the consequences of our actions, we can actually select literally what to do, to do this or to do that; to use Richard's words, we can choose, we can decide.
But if you don't have that, if you're a much simpler, say single-cell, organism that doesn't have a deep hierarchical structure, you don't actually have a jointed model of your future.</p><p><strong>[48:22] Karl Friston:</strong> You just have reflexes, which means you can't run out into the future. So there is no way of generating choices, or different paths into the future. You've just got to commit to one, like a thermostat. What is that path? It's the path of least action. It's the most efficient path, given the kind of thing you are. So what I'm trying to work towards is that you can't do psychotherapy on cells. But what you can do is just look at the maths that determines the path into the future. And there's only one. And in engineering, that would be something called path integral control. And basically what it is, it's measuring, along the short-term path into the future, the difference, technically the relative entropy or the KL divergence, between what you anticipate, given the current circumstances, is going to happen in the future and what a priori you think would happen to me as a cell, for example. And that balance basically determines the direction of travel. And if you want to now open up the directions of travel, then you have to decrease the precision of the preferred probability distribution over the kind of thing that I am. So if I was trying to simulate this, or I was faced with this problem technically as an engineer, I'd be looking for where my preferred states of being are and, more specifically, where my preferred distribution over paths is physically encoded. And more specifically, where is the precision of those sort of sub-personal mathematical beliefs about my preferred paths into the future? So basically, if I was looking at a thermostat, I'd be looking for where it is that there is a sensitivity, a precision, a learning rate that controls the set point?
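The KL-scored path selection Friston sketches can be illustrated with a toy thermostat. This is an editorial sketch under assumed numbers, not the speakers' implementation: each candidate path predicts a Gaussian outcome, paths are scored by their divergence from a preferred outcome distribution, and a precision knob on the preferences controls how sharply one path dominates.

```python
import math

def kl_gaussian(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) in nats."""
    return 0.5 * (math.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def score_paths(paths, preferred_mu, precision, pred_var=0.5):
    """Score each candidate path by the divergence of its predicted
    outcome N(mu, pred_var) from the preferred outcome distribution
    N(preferred_mu, 1/precision). Lower score = more preferred."""
    pref_var = 1.0 / precision
    return {name: kl_gaussian(mu, pred_var, preferred_mu, pref_var)
            for name, mu in paths.items()}

# Hypothetical setup: three paths predicting different mean temperatures,
# with a preferred set point of 37 degrees.
paths = {"hold": 37.2, "drift": 35.0, "spike": 40.0}

for precision in (4.0, 0.25):  # tight vs relaxed set-point precision
    scores = score_paths(paths, preferred_mu=37.0, precision=precision)
    best = min(scores, key=scores.get)
    gaps = {k: round(v - scores[best], 2) for k, v in scores.items()}
    print(f"precision={precision}: best={best}, score gaps={gaps}")
```

With tight precision the set-point-holding path dominates by a wide margin; relaxing the precision flattens the score differences, which is the sense in which turning down the precision of the preferences opens up other paths into the future.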
Is this a very precise thermostat that gets really upset as soon as the temperature deviates? Or is it something that has a bit more latitude in it and can tolerate a greater range with less precision? So I'll be looking for the knob that encodes the precision on the set points over the paths of the kind of thing that I am. And if I'm a single cell, these will be responses to the world at different temporal scales. And then I relax that. And that in principle will allow you to take different paths that are not constrained by your very, very, very precise engineering and precise belief about the kind of thing that I am. And of course, what Mark was saying before, like he doesn't forget or learn anymore. He has now a very precise set of beliefs. He does exactly what he's going to do and say, given the kind of thing he is, because he has now learned a very precise self-model. So that if you wanted to get Mark to go bungee jumping or go to discos, you'd have to find the neurotransmitter basis of the precision on those particular paths into the future.</p><p><strong>[52:25] Richard Watson:</strong> You don't know, maybe that's what he always does.</p><p><strong>[52:27] Michael Levin:</strong> Yeah, right. Now I know the activity we're all going to do later this year. I got it. So that's very interesting, and it's not because, of course, I'm trying to deal with the intermediate case. It's not a single-cell thing. I'm trying to understand what are the possibilities open to the collective of cells in anatomical space, right? 
So I'm not so much thinking about single cells, and I don't think we know exactly what the collective can and can't do, but, correct me if I'm wrong, one of the key sort of parameters in what you just said is the kind of thing that I am. And that also, I think, is very interesting here because, for example, if you think you're an axolotl, the kind of thing that regenerates organs, you might have a different future open to you. And so this is something I'm actually very interested in: what kind of a thing do you think you are as a cellular system? And the experimental models that we often have, so something like a frogolotl, right, where you combine a bunch of frog cells with a bunch of axolotl cells, and it's a perfectly viable thing called the frogolotl. And now you can ask some interesting questions. What do you think you actually are in anatomical space? Because frog larvae don't have legs. Baby axolotls do have legs. As a frogolotl, do you think you should have legs? I mean, you can't answer that question from the genomics. You have all the genomes. That doesn't help you. We still need to understand: what do you really think you are, and what are you going to do? And maybe to some extent the trick is, if you want to induce those kinds of outcomes that normally don't happen, you have to change the kind of thing you think you are. Maybe that's the control knob here, right?</p><p><strong>[54:17] Alexey Tolchinsky:</strong> I may add a quick thought based on what you said, Michael and Karl, going back to your GRNs, right. In your Pavlovian conditioning experiment, you gave a task which was somewhat stressful. It wasn't trivial. And then the learning built some agency, some intelligence. And how did that happen? These nodes learned how to work together. You've built collective intelligence, essentially, to accomplish this task, right? 
And you've done it by introducing some stress. And with a note to Mark, and with Freud, the ego grows in frustration: we need a balancing act of some containment and some predictability. So when you build a biodome, you introduce some containment and some predictability in the foundation, saying, I will survive. But then we do need to introduce some stress, some frustration, not too much like in trauma, not too little like in triviality. And that may possibly shift the system into a new regime, such as this new collective needs to build something. But without stress, the change is not possible. We need an influx of energy for it to change.</p><p><strong>[55:21] Michael Levin:</strong> Karl, please.</p><p><strong>[55:23] Karl Friston:</strong> I have to go in two minutes to do a PhD in Montreal, but just to pursue that point. Mike, do you remember very early on when, pre-Frans, in fact, when we were doing that simulation of morphogenesis, it just struck me that you're talking about having the potential to be different kinds of things. I mean, that was exactly the whole point of that sort of pluripotentiality. All of the constituent cells could be anything. And all they had to do was to infer which particular kind of thing I am in this context. And that context was established by communication with the others. And the precision with which they commit to being this particular kind of thing in this particular sort of anatomical space, as it were, or the contribution to the ensemble of that space, was the bioelectric signalling. So that would be the knob you'd be looking at to change, relax the precision. So the whole thing just goes like that, because everything very precisely believes I should be a tail, I should be a head, I should be this and I should be that. 
But if you reduce the precision by just putting smaller gradients on the bioelectric communication or chemical communication, then you get a much, much more uncertain, much more pluripotent and diverse set of outcomes, and slower ones. We didn't simulate that other than sort of cutting things in half, but it might be interesting to revisit that, because there you know what the precision knob is. It's basically the strength of the signal from you to me, telling me, I'm over here, I've got to be a head, you're a tail, and then when we're both in agreement, sending the right kinds of messages that are precise, we can commit our pluripotential to being this kind of thing.</p><p><strong>[57:21] Michael Levin:</strong> Similarly with anthrobots and xenobots, what kind of thing are you? Looking at your genome doesn't help because you've got the same, in the case of the xenobot, same genome as the frog, so that's not going to help. But you are a different thing, and you end up upregulating genes for sound perception and doing things that normal frog embryos don't do, because in some way you've now changed, and this is something we're very interested in looking at. We have now calcium signaling data on all of these things and so on to try to figure out what it thinks it is. And then I guess in subsequent chats, what I'd love to dig into is that we talked about a very general notion of sleep. And I'd love to talk about how one recognizes sleep in things that aren't typical brains, so you can't sort of lean on REM patterns and whatnot, but what does it look like? How do you know when a system is sleeping? What does that look like in different embodiments?</p><p><strong>[58:23] Karl Friston:</strong> I'll get Juliet to talk about his fruit flies sleeping. He loves that.</p><p><strong>[58:28] Michael Levin:</strong> The fruit fly is way more conventional than what I'm thinking of. I'm thinking of some really weird things. 
We'll have to go well beyond the fruit fly, and then come back to the whole GRN thing because...</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Conversation with Nic Rouleau, part 2: neuroscience, memory transfer, aging of cognition, and more</title>
          <link>https://thoughtforms-life.aipodcast.ing/conversation-with-nic-rouleau-part-2-neuroscience-memory-transfer-aging-of-cognition-and-more/</link>
          <description>Neuroscientist Nicolas Rouleau joins for a follow-up discussion on consciousness, memory transfer, cognitive plasticity and aging, goal decoding in the brain, and unusual experiments on conditioning and learning in materials like Play-Doh and neural tissues.</description>
          <pubDate>Fri, 03 Apr 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 69cf693ffb271c000155d4a7 ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
<content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/nYK4NvqyY0k" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/2ddc5dc7/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~55-minute discussion following up on Nic's talk and our brief conversation, comprising part 2 of a conversation with a really interesting young neuroscientist, as well as friend, collaborator, and our Center member, Nicolas Rouleau. We cover topics of consciousness, neural decoding, the meaning of neuroscience, memory transfer, cognitive plasticity and its relationship to rejuvenation therapies, intelligence throughout the universe, and the weirdest work Nic has done (he chose his work on memory in Play-Doh). For more information: Nic's website: X account: @DrNRouleau Recent papers to check out: Sellar, E.P., Rouleau, N. (In Review). A cybernetic framework for synthetic biological intelligence in the era of neural tissue engineering. Preprint doi: 10.31234/osf.io/md2wf_v1. Kansala, C., Cicek, E., Nkansah-Okoree, V., Golding, A., Murugan, N.J., Rouleau, N. (In Review). Superstitious conditioning forms the experience of free will under causal determinism. Preprint doi: 10.31234/osf.io/fk3yt_v2. Roskies, A. &amp; Rouleau, N. (Forthcoming, In Press). Research on brain organoids should prioritize questions of agency, not consciousness. AJOB Neuroscience. Rouleau, N. &amp; Levin, M. (In Press). Brains and where else? Mapping theories of consciousness to unconventional embodiments. Philosophical Transactions: A. Preprint doi:10.1098/rsta.2025.0082. Rouleau, N., Levin, M. 
(2024), Discussions of machine versus living intelligence need more clarity, Nature Machine Intelligence, doi:10.31219/osf.io/gz3km. Rouleau, N., and Levin, M. (2023), The Multiple Realizability of Sentience in Living Systems and Beyond, eNeuro, 10(11), doi:10.1523/eneuro.0375-23.2023. Rouleau, N., Cairns, D. M., Rusk, W., Levin, M., and Kaplan, D. (2021), Learning and synaptic plasticity in 3D bioengineered neural tissues, Neuroscience Letters, 750: 135799</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Rethinking unconscious experience</p><p>(07:50) Immortality, memory, and aging</p><p>(20:11) Regeneration, identity, and continuity</p><p>(34:52) Goal signals and decoding</p><p>(44:01) Conditioning strange materials</p><p>(49:15) Neuroscience and cosmic minds</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a 
href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Michael Levin:</strong> First of all, I'm wondering what you think about this. In the study of consciousness, for example, what people study are, they say, okay, here's conscious learning, non-conscious learning, right? There are processes that go on that they say, okay, the subject had no awareness that this happened, right? And it always surprises me, and tell me if I've got this wrong or there's a good explanation for it, because saying that it's not conscious because the human subject, i.e. typically the left hemisphere, has just told you they have no awareness of it, seems to completely beg the question. In other words, okay, the human subject you're looking at told you that there was no consciousness, but how do we know that the various components which were involved in perception, memory, all the things that took place, we don't actually know that those things didn't have a conscious experience during that time, right? It seems like you're assuming the very thing you're trying to prove. So I'm curious what your thought about that is, and if anybody studies this, and this, you know, is there really a good example of some sort of non-conscious behavior? Do we have any way of actually knowing that? What do you think about that?</p><p><strong>[01:31] Nicolas Rouleau:</strong> This is something that I think about often in the context of anesthesia. It is said of people who are under the influence of an anesthetic that they're not conscious, because when they are suddenly roused from their unconscious state, they have no memory of what had happened previously during that period of time. 
But it could just be that they were having experiences, but none of them were encoded. And I mean, the counter evidence for that is when you look at all the physiological markers of how one would respond physiologically in terms of heart rate or galvanic skin response. Are they sweating when you're administering pain or a noxious stimulus or something? And you don't see any of that. And so it's concluded that the person didn't have any conscious experience because they're not responding to external stimuli. And then also they don't have a memory of the thing. But it could just be that you respond differently in those states. You don't have an emotional response, for example. Maybe those parts of the complex response are attenuated during that period. So it's very difficult to know whether in fact there was no experience. The other way that I think about this is in the context of state-dependent learning in general. If you study for a test under the influence of a drug, and that drug isn't totally impairing and doesn't affect your memory, or at least not in a severe way, you actually perform slightly better on the test if you're under the influence of the drug that you used when you were studying, because your memories are encoded in a certain state.</p><p><strong>[03:26] Michael Levin:</strong> It's almost like place conditioning, right? That seems like...</p><p><strong>[03:30] Nicolas Rouleau:</strong> That's another thing that does happen: if you're in a lecture hall and you attended the lectures on the left side of the room, when you take the test, if you're on the left side of the room, you tend to do better than if you were on the right side of the room. And I mean, it has to do with cues and it has to do with all sorts of other things, but it comes down to whatever the state was that you were in when you learned the thing, that state seems to be the optimal state within which you can actually recall the information. 
And I think you can think of that in the context of this conscious versus unconscious learning. Instead of calling it unconscious and conscious learning, I mean, you could just say it's all state dependent. And when you're in one state, you tend to be able to retrieve that information more effectively. And it could be that there are whole sets of information that you can only access when you're in that other state. And so I often wonder if the catalog of our lifetime of dreams is actually accessible in the dream state. Right now, it's very difficult for me to recall all the things that I've ever dreamt about. I can really only remember the things that I recalled slightly after rousing from sleep. But it could be that in that sleep state, you actually have a whole inner life that you can access in the same way that I have an autobiographical memory in this conscious waking state.</p><p><strong>[04:55] Michael Levin:</strong> That's really interesting. I mean, there's the recall thing. And then for me, there's also the issue of the different sub-components, right? So whatever sub-modules had the experience in your mind, they may be permanently or completely inaccessible, not just because of a memory failure, but because you weren't the one that had the experience. And this comes up all the time. People say to me, well, this diverse intelligence stuff that we do. They say, well, I don't feel my liver being conscious. Well, of course, you don't feel your liver being, you also don't feel me being conscious. That's not shocking. If the liver were, you would not know about it, right? That makes sense. And so in a lot of these things, right, they seem to beg the question of, they focus on one subject and whatever that linguistic subject says, that's taken to be the conclusion, but yeah.</p><p><strong>[05:54] Nicolas Rouleau:</strong> Yeah, and I think you can interpret what used to be called multiple personality disorder. 
You can interpret the clinical presentation of that disorder as just the extreme version of what most of us experience when we have context-dependent responses. Like, I behave totally differently in the context when I'm speaking professionally versus when I'm speaking with a close friend or my child, or when I'm speaking to my parents, or I behaved differently in public than I do in private. And that's all normal. Context-dependent responses are a normal feature of human psychology, but in multiple personality disorder, you have inappropriate displays of context-inappropriate behavior in the wrong state. And so you could think of each one of these individuals as separate people, but in the average person, they're integrated. And if I was to behave suddenly right now with you, as I do with my child, you would really think I'm a different person. You would say, wow, you're so condescending, right? And it's just because that's just not how you speak to adults and it's just not how you speak to mentors and colleagues. Like, that's just not how you do it. And so, yeah, I think that the other version of that is, when we start talking about unconventional embodiments and unconventional minds, now we get into some really hairy territory where it's unclear how one ought to behave in these situations or how many different kinds of states they can hold or how many different kinds of repertoires of responses are available to them that are discretized, that aren't part of an integrated whole. Yeah, it's fascinating.</p><p><strong>[07:50] Michael Levin:</strong> Okay, something else related to this question of how many states. What's your prediction? If we were to, let's say, regenerative therapies get off the ground to the point where a standard human can live forever with brain rejuvenation and all of that stuff, indefinitely, let's say, right, let's say it were possible to just keep rejuvenating it. Do you think that, well, two questions. 
So memory capacity and learning capacity: finite or infinite? Like if you just sort of, this is the physical part, but we stave off decay forever: limited or unlimited?</p><p><strong>[08:36] Nicolas Rouleau:</strong> It's a great question. I think it's limited in the sense that you can't just infinitely increase the information that's encoded, but you could rewrite. So I mean, we already have that kind of system without a regenerative technology, where memories are forgotten or their resolution is diminished and then new memories take over the real estate. But your cranial capacity is a certain finite size, and neurons can only make a certain number of connections with their neighbors, and you can only pack a certain number of spheres of 10 microns into a given space. I mean, if you made it such that your technology allowed the cells to sprout more axons and form more synaptic spines than their genetically encoded blueprints allow them to, I mean, I think you could increase the amount of information.</p><p><strong>[09:51] Michael Levin:</strong> But your prediction is that it's limited by the physical capacity of whatever the encodings actually are.</p><p><strong>[10:04] Nicolas Rouleau:</strong> I mean, it has to be. So I would say that for a memory, for a long-term memory to remain crystallized and accessible, it has to occupy some space. And so space is your limiting factor. I mean, you could encode it in different ways. Perhaps the information is now encoded in the extracellular space, or maybe some of it is encoded in a higher dimensional plane in terms of how the cells are being connected. And so now you have this whole new layer that's not strictly physical, but still occupies some physical space. But the information content is not linearly related to the amount of space it's occupying. Maybe there are some things that are possible, but there is ultimately a space limiting factor. 
Because the way that I view memory is that memory is a trace of the environment encoded in a new space, and you require space. So I think space is the limiting factor.</p><p><strong>[11:20] Michael Levin:</strong> And do you think that, so let's say, the kind of loss of plasticity that we often see with age, do you think that's, is that a hardware problem or a software problem in the sense that if we did have rejuvenation therapies and you had an 80-year-old with the brain of a 20-year-old in terms of the cellular architecture, would they still be stuck in their ways and cranky and whatever it is that is happening to us? Or do you think that once you get the cellular medium refreshed, then we go back to that, we could keep that plasticity for long periods of time?</p><p><strong>[12:01] Nicolas Rouleau:</strong> That's a great question. I mean, suppose I was 70 years old and I had certain habits and I didn't want to change them. You have to ask yourself why you don't change your habits. And part of that is they're adaptive. I mean, you've created certain kinds of behavioral strategies to navigate through your life. And as long as the environment doesn't change, which it will, by the way, but as long as it doesn't change, you're actually optimized for the environment. Your brain is doing that all the time. So if I was suddenly given the motivation to change and the regenerative ability and the plasticity and the hardware space to adapt, then of course, I think you would do that. What do you think about this?</p><p><strong>[13:03] Michael Levin:</strong> Yeah, it's a good question in the sense that I've been thinking about what the social implications are of radical regenerative therapies. So at some point, you'll be 20, and I don't think it'll take all that long actually, but you'll be 20 and you'll meet another 20-year-old, somebody that looks like they're 20, and you find out that, yeah, actually they're 85. 
And so the question there is, physically, like, all good, compatible; mentally, what does that mean? In other words, when I say software problem, I mean that. Is it possible that just the fact of dealing with cognitive input in life and all of that for some number of decades just puts you in a mental state that cannot be, you know, there are some software states that you can't get out of with hardware, right? There are issues, computational issues. A related issue to this is one of the things we've been working on in our aging program is, so people think about aging as being fundamentally a physics problem, meaning you accumulate entropic errors, or it's a biology problem, meaning that evolution wants you to die. And so there's like certain clocks and stuff like that. But our simulation suggests that there's also a third problem, which is a cognitive problem. And a cognitive problem doesn't require damage and it doesn't require selection forces. It's basically a problem of goal-directed systems after they've completed their goal. What do they do after that? So you can imagine that the homeostatic process that creates the body, right? So the cellular collective intelligence creates the body, you're an adult. Well, it hangs out that way, minimizing disorder for a while, but eventually, if there is a second order, so some sort of metacognitive loop that says, okay, well, you've already done this goal, but you haven't been given a new goal. You're not like a planarian which basically refreshes, like sweeps the decks every two weeks, rips a thing in half, and you got to do it all over again. Is there, you know, basically almost like a boredom theory of aging, right? Where that part's not the conventional cognition, it's the cognition of the body, where morphogenetically, we've already done this, what is left to do? 
And they sort of, and we actually have data on this, both from simulations, from analyzing, this is Leo Pio-Lopez's work, analyzing what happens to the cells, and they start to, transcriptionally, they start to disband. They roll backwards, right? The phylostratigraphy shows they start expressing more ancient genes, but they diverge from each other. They're no longer in agreement about what should happen because the goal is the thing that was, right, the set point was the thing that was keeping it together. So I just wonder, right, so the way I think about this is like a silly sort of thought experiment. Let's say the standard sort of Judeo-Christian version of heaven, right? So you get there, everything is perfect forever. So you imagine, right? You get there and it's you and your pet snake and your dog. And so you get there, there's no damage from the bottom up. Nothing's getting degraded. Everything's perfect. The hardware is going to work great forever. So, I don't know. You tell me what you think. It seems to me the snake would be just fine doing snake things for a trillion years, like probably fine. The dog, I don't know. Maybe if the environment is good and every day is exactly like every other day, the dog may be fine too. I don't know if dogs are capable of some sort of existential ennui or something like that. But the human, like, okay, you know, it seems to me you can keep yourself busy for the first 10,000 years or 100,000 years. But a billion years in, are you still sane? And if you're not, that's not a physics problem and it's not a biology problem. That's some sort of cognition problem, right? So I don't know, that, it seems to, and maybe the real limit is way longer than, you know, than we have to ever worry about. But it gets to the fundamental problem of how much of this is the hardware and how much of this is the purely cognitive dynamics that are right on top of it. 
I don't know what you think.</p><p><strong>[17:20] Nicolas Rouleau:</strong> Super cool. I mean, I think we have to consider both the agent as well as their environment in this case. And if the heaven that you're describing is unchanging and it's just, like we often just say, well, it's just the best version of life, just whatever that means. And that could mean the same thing every day for someone, according to if you ask people, like, what's a perfect day, they might just say, well, it's the same thing every day. For some people, it might be something new every day. I suspect that you would be able to endure longer periods of heaven if there were, if things were changing and you had the hardware slash software to actually adapt to those new situations over and over. So you have to, I think you would have to actually wipe the slate at some point, partially or in whole, in order to maintain that cognitive engagement that you're describing. And I think it's really fascinating, this idea of the boredom-based model of disease or cancer, or I think that's really interesting. So do you think it's because the mechanisms that basically quiet those processes are then removed later on? Like in other words, like the system becomes less vigilant about quieting these sort of processes that would be a nuisance if they were generated? Because I've always thought of the brain as being fundamentally non-regenerative because its function is anathema to regeneration. Like you actually don't want a system that is endlessly flexible if you want it to be crystallized in such a way as to have representations that can build world models and can retain something like a stable personality and maintain memories that aren't always changing or aren't suddenly erased so that you can maintain your social bonds and so on. Like I see the brain as like, non-regenerative for a purpose. 
And so if it suddenly became regenerative, or if it was just given some degree more plasticity, I think it would cease to be the thing that it is currently. It would be more like a general learning machine, but without all the things that we seem to care about as humans, like self and personality and love and all these kinds of very personal things.</p><p><strong>[20:11] Michael Levin:</strong> Yeah, I don't know, axolotls, right? So axolotls, extremely regenerative, including the brain. Now, we could argue about whether axolotls have individual personalities. I suspect, like, I think they do to some extent, obviously not as rich as advanced mammals, but ground squirrels. So ground squirrels, when they hibernate, they have a significant reduction of brain volume. They basically chew up a lot of their brain cells. They come out in the springtime, regenerates, it comes back. And the cool thing about it is it's exactly what you said about the social bonds. They have, apparently, these ground squirrels have very intricate ledgers of who did what to whom and who's cooperating with these social structures, and all of that comes back. So right now, okay, they didn't chew up their whole brain. This is not a planarian story. Like, so, but I'm not sure, you know, I'm not sure. And I'm also, that's a whole conversation for, I think, for another meeting about, I'm not even convinced that all information is on board here. I have a feeling that, you know, I'm exploring some models in which, I mean, familiar things in which this is basically an interface, like a front-end thin client, and some of the action is on the back end, which means that it may well be possible to be regenerative and still index into the structures that cells that are elsewhere. So I don't know. But what's really, what was really wild to me is we did these, so Leo did these simulations where it's a simulation of morphogenesis. So you have individual cells, the collective has homeostatic states and so on. 
So they build an embryo. In that model, there is no noise. So there is no damage underneath, we don't have that, nor do we have any evolutionary pressures, there's no evolution, there's nothing telling you to die at any given moment. What we see is that already there, spontaneously, you have this error reduction that builds the embryo, and then it sits there for some time as a nice embryo, you know, continuously upkeeping and whatever. And then the whole thing basically spontaneously starts to disband and goes all to hell. And there is no underlying, we didn't have to put in any cause for that. And the other thing that's wild to me is it seems to me that takes 2 levels of cognition, because if you're just the thermostat, you'll be fine doing that same loop forever, basically. What you need is a metacognitive loop that says, well, this goal has been achieved for a really long time. Something is up, right? It's like, yes, surprise, minimizing surprise, yes, but eventually you need to generate some new surprises so that you can learn, do better. And so that second-order loop, we didn't put that in, right? So we did not explicitly encode that, and yet it has this dynamic, which I think is wild. And so, for, you know, I'm thinking that with these radical life regenerative technologies, maybe it'll be enough for the micro-level regeneration, so that as long as we sort of repair all the individual stuff, maybe that's enough to keep things exciting, as it were. Or maybe the answer is, you can't live forever as a caterpillar. But if you're willing to change things up every so often, then you can. And the magnitude of the degree to which you're going to have to change things up, I don't think we know. But it's not, I think what you said makes sense. 
It's quite reasonable that if you want to stick around longer periods of time, you're going to have to make significant changes and then force the adaptation, the accommodations to it.</p><p><strong>[23:56] Nicolas Rouleau:</strong> I think people would be willing, at least some people would be willing to take that gambit. But I think that what people are not willing to give up would be like a through line of consciousness that carries you from form A to form B. I think people would be willing to give up their memories eventually. If thousands of years had passed and whatever had happened in the past was now, perhaps you're no longer interacting with the same people, or you're not in the same environment, or that information is no longer relevant, I think just like the files on your computer that are 20-plus years old, you may be willing to purge them or at least offload them and really just never look at them ever again. But consciousness is not something people are going to want to give up. And so there needs to be some mechanism for the experience to continue from form A to form B. Do you think that it could? Well, first of all, do you think it does continue in the case of the caterpillar?</p><p><strong>[25:01] Michael Levin:</strong> So the one thing we know about the case of the caterpillar is that functional memories are not only retained, but I think even more to me, to this point, even more interestingly, they're remapped because the actual memories of the caterpillar are of no use in a butterfly body. You have to completely remap them onto new, not only new hardware. So caterpillar is a soft-bodied robot, meaning you can't push on anything, so your controller is all about inflating and deflating and stuff like that. Whereas the butterfly is a hard-bodied creature, which means you have to push and pull on things to fly around, so it's so completely different, but also the preferences, right? 
So the caterpillar got trained to, what was it, eat leaves at a particular color stimulus or something. Well, the butterfly didn't want leaves, it didn't care about leaves, it wants nectar. And so now you have to go from just like, you know, there has to be some generalization to take place that, right, that this was good. And now not only are your eyes different, who knows what the hell you see now that might be different, but also, I also don't want the thing I ate last time. How do I know that I'm going to get something new that actually is more appropriate, right? So all of that stuff. So that happens. I don't know about the consciousness. I don't know what it's, you know, obviously what it's like to be a caterpillar during the most interesting part of this, of course, is the middle part, right? It's like how they, during the remapping. But even if it maintained, I don't even know if it's possible that being in a butterfly body, you could have the same consciousness as a caterpillar. For one thing, you're living in a world that has an extra dimension. So you were this like two-dimensional thing crawling around. Now you can fly. Like if we had it, right, if we had an extra dimension, would that, you know, could you even say you have like continuity, I suppose. But I do think it's interesting that it sort of goes to sleep for a little while to some extent, right? I would say while everything's getting ripped up and rearranged. So what, right? What comes out on the other end between lives like that? There's all sorts of, you know, wacky things we could talk about there. But I, yeah, I don't know.</p><p><strong>[27:05] Nicolas Rouleau:</strong> I think, from the perspective of the child, it probably seems very unlikely that they may ever have the conscious experience of being an adult, and yet that transition occurs.</p><p><strong>[27:18] Michael Levin:</strong> No, you're right. 
And because one of the things that happens across puberty, for example, is a radical reprioritization. So things you really cared about before, now it's like, what, who cares? And things that before you thought were completely useless and irrelevant, now they're occupying tons of your time, right? So from that perspective, are you even the same being? To what extent?</p><p><strong>[27:45] Nicolas Rouleau:</strong> I get the sense that we're like, as you're describing the remapping and reprioritization, especially from the caterpillar to the butterfly, I sort of had this out-of-body experience where I'm supervising this conversation. And it's interesting that what we're describing is reproduction and just life cycle. When you reproduce and when you actually give rise to offspring, you might ask the question as a third-party observer, well, how did the consciousness travel from the parent to the offspring? Or how does that continuity actually happen there? Because clearly, this is the organism's mechanism to move on past death: it creates this little clone of itself. I mean, it's not exactly a clone, but it creates this little bud. How exactly does the consciousness move from one to the other? And yeah, I just think that there's something interesting here about when your body ceases to function and the parts that make up who you are are redistributed in the world and reintegrated with other organisms, we think that at least if some of those particles make it into the composition of other humans, that there is some sense that there has been a reorganization here that's taken place structurally and functionally that has now emerged as this new organism somewhere else that has a conscious experience. 
And although the memories and the conscious experience of that other organism are different and even quantifiably different, maybe it is the case that there is something that gets transferred over, even in this sort of very entropically guided case of you have really just complete dissolution and scattering of all the parts of the system. I mean, it's much more extreme than the caterpillar and the butterfly, but to some extent, you do have a kind of remapping of a cognitive system into another when you have ingestion of another organism. How do you think that relates to the McConnell studies?</p><p><strong>[30:25] Michael Levin:</strong> Yeah, I mean, I think I, and I haven't replicated the brain regeneration stuff with Tal Shomrat. We didn't try the cannibalism stuff. There are data on memory transfer by transplants, by tissue transplants. And if it can survive a tissue transplant, then going through the gut, all it has to do is not get digested, I suppose. So I'm not, it doesn't seem crazy to me that it would work. I think that in the end, I suspect that all of these things are pointers in an important sense. They're indexes into a different space, so I'm not sure what that model should look like. But there's an in-between case for this reproduction slash death thing, which is, I wrote this, it's called Life, Death, and something else, I forget what, it's a paper, where I start out by talking about an imaginary visit of scientists to an imaginary planet where they, you know, there's an ecosystem and they do a bunch of sequence, you know, they sequence the hell out of everything. And they find some amoebas that have the same genome as some of the large animals. And they're like, what the hell is this? 
And I basically go through this notion that you could have a life cycle that's basically a xenobot life cycle, where at some point, and you could even imagine, I don't know whether any creature on earth does this, but I think there's not any particular reason why a fish or a frog or something that already lives in water, it's hard for mammals, they need us to make anthrobots, they can't do it themselves. But I don't see any reason when a salmon beats itself to death on a rock somewhere, some of the cells that come off, there isn't any fundamental reason they couldn't live on as amoebas for some amount of time. And that's a viable life strategy in lakes, right? And potentially reassemble as some sort of a bio, like a xenobot or something. And who knows whether given enough time that thing can make some germ cells and go back to being a fish. I don't know. But in general, like that kind of thing, when we make a xenobot by taking apart the cells of an early frog embryo, what happened to that frog embryo? Like, is it dead? Well, not really. Is it still here? No, not really. You have this xenobot, it continues, right? And in the case of the anthrobots, we have plenty where the donor is deceased, but there is a being that continues. That's something we've talked about doing, these experiments where we can get anthrobots from smokers who had a nicotine addiction and just asking whether A, whether anthrobots pursue nicotine from those patients specifically. And if they do, whether implanting them, so here's your, there's your memory transplant studies, whether implanting them into a rat or something would then convey that behavior. I don't know. One of the weirdest things about it is that it doesn't seem to at all, which is consistent with this pointer notion, it doesn't seem to at all match the size of the, you might think, how's a tiny anthrobot going to redo the preferences of a giant rat body, right? It's not the same thing, but maybe it's relevant. 
In planaria, if we take a little tiny piece out of a two-headed worm and implant it into a one-headed worm, in something like 17% of the cases, the recipient becomes two-headed. And this is, to me, super interesting because, and again, maybe goes back to the boredom thing because why would this giant body listen to a few cells? All the other cells are in agreement that worms have one head. This little tiny piece is saying actually we should have two. Why even 17% of the time, why does it win? And maybe it's that novelty thing again. Maybe the other cells are willing to listen some percentage of the time because, well, we've already been a one-headed worm for 400 million years. Here's some new information. Maybe that, you know, maybe that's lit up as higher priority now.</p><p><strong>[34:27] Nicolas Rouleau:</strong> Yes, especially if the environment is really harsh or has changed suddenly, I imagine really extreme responses, maybe like a kind of Hail Mary. That's fascinating.</p><p><strong>[34:52] Michael Levin:</strong> So related to these issues of memory storage, memory interpretation, another thing I wanted to ask you is neural decoding. Why do you think third-person neural decoding, meaning that I'm going to measure your brain and try to figure out what you're thinking, is so much harder than first-person neural decoding, which is like most of the time under normal circumstances, we don't have a lot of difficulty knowing what the meaning of our engrams is? We sort of reconstruct it and whatever, but we're pretty good at accessing our own. But in third person, it's really hard, right? I mean, people have had some success, but it's really hard. What do you think is going on there? 
Why is it so hard?</p><p><strong>[35:35] Nicolas Rouleau:</strong> It just occurred to me what I wanted to ask a minute ago, if you don't mind.</p><p><strong>[35:38] Michael Levin:</strong> Sure, go for it.</p><p><strong>[35:40] Nicolas Rouleau:</strong> So it would be interesting if we were able to identify some molecule, like just imagine a hypothetical molecule that exists in systems, that the sole purpose of the molecule is to transfer goals. It doesn't transfer structural building blocks. It's not a physiological tool. It's literally just a goal. It's like a message that says, This is what your job is. And if that were the case, all this would be, it would make a lot of sense, right? If you take, because under a neurobiological explanation of what you're describing with the anthrobots, suppose they were able to acquire some kind of nicotine-pursuing strategy, you might say, well, that's because perhaps they have more of the nicotinic acetylcholine receptor, and that's being co-opted as its main chemotaxing module or something like that. And you can make some sort of argument like that. And then once you implant it, the whole question would be like, well, how then do you tell the rest of the system to take on this new goal when in fact the new system isn't equipped with the same concentration levels of the nicotinic receptors. But if it actually had this little message that it could pass on and say, well, this goal has been really useful for me. Why don't you try it out? Or maybe just add it to your repertoire of potential goal orientation strategies. I mean, I'm not saying that thing exists. I'm just saying, like, you would need some kind of goal messaging system that goes beyond just simple building blocks.</p><p><strong>[37:27] Michael Levin:</strong> For sure. And there's another interesting piece of data. This guy Hepper back in, I want to say, the 80s, did these experiments where he would take certain odorant molecules and inject them into a frog egg inside. 
So we're talking cytoplasmic. And then when that animal became big enough to have behavior, it would preferentially seek out those molecules in its food choices. Now, here again, so that interpretation, so again, the question is, well, what's the transduction? So you've got some sort of weird molecule inside the cell. It then has to convert that into a presumably multicellular neural something that will lead from smelling it to actually going to find it and all that. So you have to analyze it and then modify your large-scale nervous system somehow. So I feel like these systems have a ton of this plasticity interpretation, and they like to sort of pass it on. This information moves, it moves within bodies, it moves between bodies. That, yeah, I think that plasticity is going to be sort of massive, and I think it's underappreciated.</p><p><strong>[38:41] Nicolas Rouleau:</strong> That's interesting. Neural decoding.</p><p><strong>[38:46] Michael Levin:</strong> Why is neural decoding of somebody else's brain, as opposed to your own brain, so difficult? What do you think makes it so challenging?</p><p><strong>[38:57] Nicolas Rouleau:</strong> I mean, I think the way that we neurally decode from a third-person perspective between humans is usually through the medium of language. Do you agree?</p><p><strong>[39:08] Michael Levin:</strong> Well, I would say, so the comparison I'm making, and you don't have to buy into the comparison. You can just talk about neural decoding as it stands today in general. But for me, I see two versions of this. I see my own neural decoding, which means that most of the time in the absence of various defects and so on, I don't have a lot of problems knowing whatever memory, structures, molecules, processes, whatever we're using, we typically know what they mean, right? So, I access whatever structure that is, and they say, yes, that's because yesterday I had toast or something. 
Whereas if I were to, if in third person, if I were to figure out, okay, did Nick have toast yesterday? It would be, I would have a hell of a time trying to interpret, right? And there's only been some success, but it's really hard. That's the comparison, right?</p><p><strong>[39:56] Nicolas Rouleau:</strong> Again, the interpretation of. In your example, what's the connection? What's the information that I have access to that I'm trying to decode?</p><p><strong>[40:05] Michael Levin:</strong> In the third-person perspective, whatever neuro, whatever you want, electrical MRI, what do people typically use, right? They typically use physiological readings from brains of animals and human subjects to try and say, can I tell what, you know, you've seen 10, you've seen 10 pictures at some point, then I ask you to imagine one, and I try to guess, right, from doing brain readings, I try to guess which picture you're looking at.</p><p><strong>[40:34] Nicolas Rouleau:</strong> I think that if I was to take an EEG reading of my brain when asked the question, or an fMRI recording, I think is equivalent in this case. But if I were to take a recording from my brain when asked the question, what's your favorite food? And I took a recording from your brain, I think I would have just as much trouble interpreting both of those signals. Agreed. And in a sense, they're both third person in that case, right?</p><p><strong>[41:02] Michael Levin:</strong> Exactly, that's exactly what I'm getting at, right? So from the outside, whatever that means, it's really hard, but from the inside, whatever that means, it apparently is much more, much smoother, right?</p><p><strong>[41:12] Nicolas Rouleau:</strong> Yeah, I think that one way to answer this would be that all of these measures are imperfect analogs of what's actually happening. It's like the original analog computers, how you'd have a buoy and then the buoy would go up and down with a wave. 
And then you could have the buoy basically trace a line on a graph, on graph paper. So if I was to present you with the graph paper with the sine wave or whatever wave is on there, and I gave you no context at all, you may not just guess that actually what this is the analog of a wave in the ocean. I'm not sure that you'd be able to guess that with no context. So I think that all of these measures that we use as stand-ins for qualia, for phenomenal experience, for decoded memories are just extremely imperfect. If I was able to measure your brain and we had a new technology that actually converted your brain activity into a video, and you could affirm that video in fact was a pretty accurate representation of your experience, I think that if I presented that video to someone else, they would also be able to have a pretty accurate experience of it. They could describe the video, and they would be describing your experience. So I think that the tool is imperfect in these cases. And in that case, that would be an encoder that is getting very, very close to the actual experience in the form of a video. If we added audio, it would get even closer to your experience and would have more features of your memories. So you need something like that. Language is what humans use to describe all that. And it's very much an imperfect thing because you lose so much with words. But words evoke this kind of mental theater that is the cognitive, the low-resolution version of this technology I'm describing, where you would be able to convert your memories into these actual messages on screens. But I think that's really the problem is that it's, and you can have good decoders and bad decoders, and it's very difficult to infer anything about someone's experience based upon lines on a graph or numbers on a screen. Let's see.</p><p><strong>[44:01] Michael Levin:</strong> Cool. 
So I want you to describe what's the weirdest work that you've ever done.</p><p><strong>[44:08] Nicolas Rouleau:</strong> There's so much to choose from. I think the weirdest thing that I've ever done is, as part of my master's thesis, I asked the question, and maybe I should preface this, but I asked the question, can you classically condition materials? And the reason for that was I had read this really cool paper from the 1950s where these guys out in England, they classically conditioned an iron bar. You know about this. So the question was, could we do it with something like electroconductive Play-Doh? Because Play-Doh, you could run current through it. You know that the current is taking a certain path through the Play-Doh. So could it actually carve out a particular path that could be in a re-entrant kind of way, continuously carved out? And so, just like running current through a piece of wood and noticing that it has this kind of lightning pattern, could you do that sort of thing in a given material and have it dynamically respond to a previously neutral stimulus by having the kind of unconditioned stimulus, neutral stimulus pairing? I was able to classically condition Play-Doh. Basically, we took small bits of Play-Doh and it was like, you know, it's Play-Doh with lemon juice in it. And we ran certain current through it. And then that current was pair associated with a flashing light. And basically, you would have the current that goes through and then you would measure the current output. And so you could get, you could create a spectrogram based upon the electrical noise in the Play-Doh. And what we found was that when the light was on, when you actually flash the light after the pairing had occurred, the noise, the electrical noise in the Play-Doh seemed to correlate with just running current through the Play-Doh. And so this was just a light-induced, current-type response in the Play-Doh. So, we had successfully demonstrated a conditioned response. 
But then we went a little bit further and we started developing a histo technique on the Play-Doh. So we took the Play-Doh and ran it through histological analysis and sectioned it and stained all the little grooves and things like this. And we were actually able to find these little microstructures that corresponded to when you actually ran electricity through the Play-Doh. So it had more like little grooves inside of it. We actually ended up publishing it. So, you know, it's in PLOS ONE somewhere. And it's very weird. And I don't think anyone has cited it and probably no one will replicate it. It's just a very weird study. But yeah, that's my answer.</p><p><strong>[47:14] Michael Levin:</strong> Amazing. And so the fact that you found microstructures, does that mean that if you were to, or maybe you tried this, if you were to take the trained Play-Doh and rejigger it at a higher level, does it keep the information or no? What's the scale of the...</p><p><strong>[47:30] Nicolas Rouleau:</strong> We did exactly that. So you take the Play-Doh, have them paired, and then just deform it and reform it into a ball, and it didn't display the response. I see.</p><p><strong>[47:40] Michael Levin:</strong> So some kind of larger structure. Interesting. Out of the space of all possible materials, what's your guess as to what percentage? Presumably, we don't think there's something super lucky about Play-Doh, right? What percentage of materials out there do you think have these properties?</p><p><strong>[48:02] Nicolas Rouleau:</strong> I don't know. Percentage is difficult, but I think it would have to.</p><p><strong>[48:05] Michael Levin:</strong> Overall, is it a needle in a haystack thing or is it a general feature of matter or somewhere in between? What do you?</p><p><strong>[48:11] Nicolas Rouleau:</strong> On this planet, it's probably a relatively general feature, I would say, because of water and because of all the organics. 
I would assume that a system would have to have, because Play-Doh, of course, is made up of the stuff of plants, right? So I think you would have to have a system that is sufficiently plastic and responsive to some kind of deformation, be it electrical or photonic or mechanical. It would have to be changed by inputs of some sort and retain those changes for some duration of time. I think that describes a lot of materials. Like we have memristive materials now. We know that mushrooms do this kind of thing. And there's all sorts of living and non-living materials that have these basic properties, that they could be changed by inputs. I think it's pretty general, not some special feature of a small subset.</p><p><strong>[49:15] Michael Levin:</strong> I agree with that. Then two final quick questions before we have to wrap up. So I, for example, don't think that neuroscience is about neurons per se at all. Do you agree with that? And if so, in a sentence or two, what do you think neuroscience is really about?</p><p><strong>[49:39] Nicolas Rouleau:</strong> I agree with you because I know what you mean. And I know that, in the same way that plant neurobiology isn't really about the nerves of plants, we're describing what we might say are neural systems in the absence of neurons. Like we're talking about networks or we're talking about cognitive systems or we're talking about some functional label that isn't bound to a specific structure. I agree that much of neuroscience is actually about that. I totally agree. And yet the field is defined by whatever it is that most people are doing or saying in the field. And I would say most neuroscientists would probably disagree. They would say that, no, it really is just about cells and brains. But no, I agree with you. I have a more functionalist kind of view of these things. What do you think?</p><p><strong>[50:33] Michael Levin:</strong> I think fundamentally the deepest lessons of neuroscience are about cognitive glue. 
So they're about understanding, and of course, neurons are a great example of that, but as we said, there are many others, of ways in which competent smaller subunits get harnessed together and aligned towards larger scale causality goals, memories, preferences that none of the parts have, but the collective does. And I think that's, to me, that's one of the biggest things that neuroscience offers us is an example where we take seriously all the levels. We take seriously the synaptic proteins and the networks and eventually psychoanalysis, like all the, right, we know that all of these levels are interesting and important. And it's this amazing field where lots of people are working on the transitions between the levels, right? So that, you know, for example, for molecular biology and things like that, that's a deep lesson that they haven't yet, I think.</p><p><strong>[51:31] Nicolas Rouleau:</strong> Isn't that interesting that our fields are defined by all this matter stuff and not process? If we actually define the fields by process, we could have fields of study like multicellular connectomics. And then that would just describe regenerative biology and cancer and neuroscience and all sorts of things. And it would just be functionalist. There would be different cell types based upon certain structural markers within these fields. But that wouldn't matter because we're actually just talking about processes, shared processes.</p><p><strong>[52:05] Michael Levin:</strong> I think the problem there, or the resistance to that, is that you couldn't keep it in the biology department then. The departments would have to go too, which I think would be true. It would be completely fine.</p><p><strong>[52:17] Nicolas Rouleau:</strong> Immediately computer science and neuroscience departments are now the same department.</p><p><strong>[52:23] Michael Levin:</strong> That's right. And as you said, certain material science departments as well, right? 
What do you see as the, again, I'm going to say percentage, but I don't mean a number. What's the prevalence of intelligence in the universe? Is it like super rare and precious and maybe the earth is the only one? Is it a common feature? Is it embodiments beyond water and carbon and all of that? What's your take on the whole thing?</p><p><strong>[53:00] Nicolas Rouleau:</strong> Great question. My intuition is just to scaffold it almost a one-to-one correlation with whichever planets would host life. But not because I think that it's just a life thing, but because the kinds of planets that have the kinds of interactions that give rise to life would have the kinds of causal structures required for an intelligent system. And yet, I think you could maybe view all sorts of intelligent things at scales that are much larger than planets. I don't know how. I mean, so we have to ask what is intelligence, and that's a whole rabbit hole that we can go down. But if we're just talking about problem solving and adaptation and this kind of cognitive flexibility.</p><p><strong>[53:54] Michael Levin:</strong> Well, you can throw in consciousness as well, right? So first-person perspective, how common is that? I mean, you choose any of that.</p><p><strong>[54:05] Nicolas Rouleau:</strong> I think that you probably have to have, like, so I don't think this stuff is happening at the level of atoms. And I don't think it's happening at the level of galaxies, because I don't think galaxies can, I don't think there are enough units in a galaxy, like in the universe in terms of galaxies with connections between them that allow them to have sufficient causal structure to solve problems. So you probably do need to have something at a scale that is less than a planet to have intelligence, just because of the size of things. I think it's just sort of like a spatial problem, and how close things are to each other. 
In space, things are really spread out, but on a planet, because of gravity, everything's been kind of brought together. So I think if you have a planet where everything has been kind of brought together and squished together, you have the capacity for the kinds of interactions that can lead to problem solving. And then I think that, you know, that means, you know, if we're just talking about planets in the universe, now we're talking about a really small subset of the universe. And then if we're talking about like only the planets that are far enough away from the sun to have like, you know, organics and light and life or water, that's an even smaller percentage. So I think it's a small percentage in the universe, but how small it is, I'm not too sure.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Conversation with Nic Rouleau, part 2: neuroscience, memory transfer, aging of cognition, and more</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Neuroscientist Nicolas Rouleau joins for a follow-up discussion on consciousness, memory transfer, cognitive plasticity and aging, goal decoding in the brain, and unusual experiments on conditioning and learning in materials like Play-Doh and neural tissues.</itunes:subtitle>
<itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/nYK4NvqyY0k" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/2ddc5dc7/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~55 minute discussion following up on Nic's talk and our brief conversation, comprising part 2 of a conversation with a really interesting young neuroscientist, as well as friend, collaborator, and our Center member, Nicolas Rouleau. We cover topics of consciousness, neural decoding, the meaning of neuroscience, memory transfer, cognitive plasticity and its relationship to rejuvenation therapies, intelligence throughout the universe, and the weirdest work Nic has done (he chose his work on memory in Play-Doh). For more information: Nic's website: X account: @DrNRouleau Recent papers to check out: Sellar, E.P., Rouleau, N. (In Review). A cybernetic framework for synthetic biological intelligence in the era of neural tissue engineering. Preprint doi: 10.31234/osf.io/md2wf_v1. Kansala, C., Cicek, E., Nkansah-Okoree, V., Golding, A., Murugan, N.J., Rouleau, N. (In Review). Superstitious conditioning forms the experience of free will under causal determinism. Preprint doi: 10.31234/osf.io/fk3yt_v2. Roskies, A. &amp; Rouleau, N. (Forthcoming, In Press). Research on brain organoids should prioritize questions of agency, not consciousness. AJOB Neuroscience. Rouleau, N. &amp; Levin, M. (In Press). Brains and where else? Mapping theories of consciousness to unconventional embodiments. Philosophical Transactions: A. Preprint doi:10.1098/rsta.2025.0082. Rouleau, N., Levin, M. 
(2024), Discussions of machine versus living intelligence need more clarity, Nature Machine Intelligence, doi:10.31219/osf.io/gz3km Rouleau, N., and Levin, M. (2023), The Multiple Realizability of Sentience in Living Systems and Beyond, eNeuro, 10(11), doi:10.1523/eneuro.0375-23.2023 Rouleau, N., Cairns, D. M., Rusk, W., Levin, M., and Kaplan, D. (2021), Learning and synaptic plasticity in 3D bioengineered neural tissues Neuroscience Letters, 750: 135799</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Rethinking unconscious experience</p><p>(07:50) Immortality, memory, and aging</p><p>(20:11) Regeneration, identity, and continuity</p><p>(34:52) Goal signals and decoding</p><p>(44:01) Conditioning strange materials</p><p>(49:15) Neuroscience and cosmic minds</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a 
href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Michael Levin:</strong> First of all, I'm wondering what you think about this. In the study of consciousness, for example, what people study are, they say, okay, here's conscious learning, non-conscious learning, right? There are processes that go on that they say, okay, the subject had no awareness that this happened, right? And it always surprises me, and tell me if I've got this wrong or there's a good explanation for it, because saying that it's not conscious because the human subject, i.e. typically the left hemisphere, has just told you they have no awareness of it, seems to completely beg the question. In other words, okay, the human subject you're looking at told you that there was no consciousness, but how do we know that the various components which were involved in perception, memory, all the things that took place, we don't actually know that those things didn't have a conscious experience during that time, right? It seems like you're assuming the very thing you're trying to prove. So I'm curious what your thought about that is, and if anybody studies this, and this, you know, is there really a good example of some sort of non-conscious behavior? Do we have any way of actually knowing that? What do you think about that?</p><p><strong>[01:31] Nicolas Rouleau:</strong> This is something that I think about often in the context of anesthesia. It is said of people who are under the influence of an anesthetic that they're not conscious, because when they are suddenly roused from their unconscious state, they have no memory of what had happened previously during that period of time. 
But it could just be that they were having experiences, but none of them were encoded. And I mean, the counter evidence for that is when you look at all the physiological markers of how one would respond physiologically in terms of heart rate or galvanic skin response. Are they sweating when you're administering pain or a noxious stimulus or something? And you don't see any of that. And so it's concluded that the person didn't have any conscious experience because they're not responding to external stimuli. And then also they don't have a memory of the thing. But it could just be that you respond differently in those states. You don't have an emotional response, for example. Maybe those parts of the complex response are attenuated during that period. So it's very difficult to know whether in fact there was no experience. The other way that I think about this is in the context of state-dependent learning in general. If you study for a test under the influence of a drug, and that drug isn't totally impairing and doesn't affect your memory, or at least not in a severe way, you actually perform slightly better on the test if you're under the influence of the drug that you used when you were studying, because your memories are encoded in a certain state.</p><p><strong>[03:26] Michael Levin:</strong> It's almost like place conditioning, right? That seems like...</p><p><strong>[03:30] Nicolas Rouleau:</strong> That's another thing that does happen is if you're in a lecture hall and you attended the lectures on the left side of the room, when you take the test, if you're on the left side of the room, you tend to do better than if you were on the right side of the room. And I mean, it has to do with cues and it has to do with all sorts of other things, but it comes down to whatever the state was that you were in when you learned the thing, that state seems to be the optimal state within which you can actually recall the information. 
And I think you can think of that in the context of this conscious versus unconscious learning. Instead of calling it unconscious and conscious learning, I mean, you could just say it's all state dependent. And when you're in one state, you tend to be able to retrieve that information more effectively. And it could be that there are whole sets of information that you can only access when you're in that other state. And so I often wonder if the catalog of our lifetime of dreams is actually accessible in the dream state. Right now, it's very difficult for me to recall all the things that I've ever dreamt about. I can really only remember the things that I recalled slightly after rousing from sleep. But it could be that in that sleep state, you actually have a whole inner life that you can access in the same way that I have an autobiographical memory in this conscious waking state.</p><p><strong>[04:55] Michael Levin:</strong> That's really interesting. I mean, there's the recall thing. And then for me, there's also the issue of the different sub-components, right? So whatever sub-modules had the experience in your mind, they may be permanently or completely inaccessible, not just because of a memory failure, but because you weren't the one that had the experience. And this comes up all the time. People say to me, well, this diverse intelligence stuff that we do. They say, well, I don't feel my liver being conscious. Well, of course, you don't feel your liver being, you also don't feel me being conscious. That's not shocking. If the liver were, you would not know about it, right? That makes sense. And so in a lot of these things, right, they seem to beg the question of, they focus on one subject and whatever that linguistic subject says, that's taken to be the conclusion, but yeah.</p><p><strong>[05:54] Nicolas Rouleau:</strong> Yeah, and I think you can interpret what used to be called multiple personality disorder. 
You can interpret the clinical presentation of that disorder as just the extreme version of what most of us experience when we have context-dependent responses. Like, I behave totally differently in the context when I'm speaking professionally versus when I'm speaking with a close friend or my child, or when I'm speaking to my parents, or I behaved differently in public than I do in private. And that's all normal. Context-dependent responses are a normal feature of human psychology, but in multiple personality disorder, you have inappropriate displays of context-inappropriate behavior in the wrong state. And so you could think of each one of these individuals as separate people, but in the average person, they're integrated. And if I was to behave suddenly right now with you, as I do with my child, you would really think I'm a different person. You would say, wow, you're so condescending, right? And it's just because that's just not how you speak to adults and it's just not how you speak to mentors and colleagues. Like, that's just not how you do it. And so, yeah, I think that the other version of that is, when we start talking about unconventional embodiments and unconventional minds, now we get into some really hairy territory where it's unclear how one ought to behave in these situations or how many different kinds of states they can hold or how many different kinds of repertoires of responses are available to them that are discretized, that aren't part of an integrated whole. Yeah, it's fascinating.</p><p><strong>[07:50] Michael Levin:</strong> Okay, something else related to this question of how many states. What's your prediction? If we were to, let's say, regenerative therapies get off the ground to the point where a standard human can live forever with brain rejuvenation and all of that stuff, indefinitely, let's say, right, let's say it were possible to just keep rejuvenating it. Do you think that, well, two questions. 
So memory capacity and learning capacity: finite or infinite? Like if you just sort of, this is the physical part, but we stave off decay forever, limited or unlimited.</p><p><strong>[08:36] Nicolas Rouleau:</strong> It's a great question. I think it's limited in the sense that you can't just infinitely increase the information that's encoded, but you could rewrite. So I mean, we already have that kind of system without a regenerative technology where memories are forgotten or their resolution is diminished and then new memories take over the real estate. But your cranial capacity is a certain finite size and neurons can only make a certain number of connections with their neighbors and you can only pack a certain number of spheres of 10 microns into a given space. I mean, if you made it such that the cells, if your technology allowed the cells to sprout more axons and form more synaptic spines than their genetically encoded blueprints allow them to, I mean, I think you could increase the amount of information.</p><p><strong>[09:51] Michael Levin:</strong> But your prediction is that it's limited by the physical capacity of whatever the encodings actually are.</p><p><strong>[10:04] Nicolas Rouleau:</strong> I mean, it has to be. So I would say that for a memory, for a long-term memory to remain crystallized and accessible, it has to occupy some space. And so space is your limiting factor. I mean, you could encode it in different ways. Perhaps the information is now encoded in the extracellular space, or maybe some of it is encoded in a higher dimensional plane in terms of how the cells are being connected. And so now you have this whole new layer that's not physical strictly, but still occupies some physical space. But the information content is not linearly related to the amount of space it's occupying. Maybe there are some things that are possible, but there is ultimately a space limiting factor. 
Because the way that I view memory is memory is a trace of the environment encoded in a new space and you require space. So I think space is the limiting factor.</p><p><strong>[11:20] Michael Levin:</strong> And do you think that, so let's say, the kind of loss of plasticity that we often see with age, do you think that's, is that a hardware problem or a software problem in the sense that if we did have rejuvenation therapies and you had an 80-year-old with the brain of a 20-year-old in terms of the cellular architecture, would they still be stuck in their ways and cranky and whatever it is that is happening to us? Or do you think that once you get the cellular medium refreshed, then we go back to that, we could keep that plasticity for long periods of time?</p><p><strong>[12:01] Nicolas Rouleau:</strong> That's a great question. I mean, if suddenly I woke up, if I was 70 years old and I had certain habits and I didn't want to change them, and you have to ask yourself why you don't change your habits. And part of that is they're adaptive. I mean, you've created certain kinds of behavioral strategies to navigate through your life. And as long as the environment doesn't change, which it will, by the way, but as long as it doesn't change, you're actually optimized for the environment. That's your brain's doing that all the time. So if I was suddenly given the motivation to change and the regenerative ability and the plasticity and the hardware space to adapt, of course, I think of course you would do that. What do you think about this?</p><p><strong>[13:03] Michael Levin:</strong> Yeah, it's a good question in the sense that I've been thinking about what the social implications are of radical regenerative therapies. So at some point, you'll be 20, and I don't think it'll take all that long actually, but you'll be 20 and you'll meet another 20-year-old, somebody that looks like they're 20, and you find out that, yeah, actually they're 85. 
And so the question there is, physically, like, all good, compatible; mentally, what does that mean? In other words, when I say software problem, I mean that. Is it possible that just the fact of dealing with cognitive input in life and all of that for some number of decades just puts you in a mental state that cannot be, you know, there are some software states that you can't get out of with hardware, right? There are issues, computational issues. A related issue to this is one of the things we've been working on in our aging program is, so people think about aging as being fundamentally a physics problem, meaning you accumulate entropic errors, or it's a biology problem, meaning that evolution wants you to die. And so there's like certain clocks and stuff like that. But our simulation suggests that there's also a third problem, which is a cognitive problem. And a cognitive problem doesn't require damage and it doesn't require selection forces. It's basically a problem of goal-directed systems after they've completed their goal. What do they do after that? So you can imagine that the homeostatic process that creates the body, right? So the cellular collective intelligence creates the body, you're an adult. Well, it hangs out that way, minimizing disorder for a while, but eventually, if there is a second order, so some sort of metacognitive loop that says, okay, well, you've already done this goal, but you haven't been given a new goal. You're not like a planarian which basically refreshes, like sweeps the decks every two weeks, rips a thing in half, and you got to do it all over again. Is there, you know, basically almost like a boredom theory of aging, right? Where that part's not the conventional cognition, it's the cognition of the body, where morphogenetically, we've already done this, what is left to do? 
And they sort of, and we actually have data on this, both from simulations, from analyzing, this is Leo Pio-Lopez's work, analyzing what happens to the cells, and they start to, transcriptionally, they start to disband. They roll backwards, right? The phylostratigraphy shows they start expressing more ancient genes, but they diverge from each other. They're no longer in agreement about what should happen because the goal is the thing that was, right, the set point was the thing that was keeping it together. So I just wonder, right, so the way I think about this is like a silly sort of thought experiment. Let's say the standard sort of Judeo-Christian version of heaven, right? So you get there, everything is perfect forever. So you imagine, right? You get there and it's you and your pet snake and your dog. And so you get there, there's no damage from the bottom up. Nothing's getting degraded. Everything's perfect. The hardware is going to work great forever. So, I don't know. You tell me what you think. It seems to me the snake would be just fine doing snake things for a trillion years, like probably fine. The dog, I don't know. Maybe if the environment is good and every day is exactly like every other day, the dog may be fine too. I don't know if dogs are capable of some sort of existential ennui or something like that. But the human, like, okay, you know, it seems to me you can keep yourself busy for the first 10,000 years or 100,000 years. But a billion years in, are you still sane? And if you're not, that's not a physics problem and it's not a biology problem. That's some sort of cognition problem, right? So I don't know, that, it seems to, and maybe the real limit is way longer than, you know, than we have to ever worry about. But it gets to the fundamental problem of how much of this is the hardware and how much of this is the purely cognitive dynamics that are right on top of it. 
I don't know what you think.</p><p><strong>[17:20] Nicolas Rouleau:</strong> Super cool. I mean, I think we have to consider both the agent as well as their environment in this case. And if the heaven that you're describing is unchanging and it's just, like we often just say, well, it's just the best version of life, just whatever that means. And that could mean the same thing every day for someone, according to if you ask people, like, what's a perfect day, they might just say, well, it's the same thing every day. For some people, it might be something new every day. I suspect that you would be able to endure longer periods of heaven if there were, if things were changing and you had the hardware slash software to actually adapt to those new situations over and over. So you have to, I think you would have to actually wipe the slate at some point, partially or in whole, in order to maintain that cognitive engagement that you're describing. And I think it's really fascinating, this idea of the boredom-based model of disease or cancer, or I think that's really interesting. So do you think it's because the mechanisms that basically quiet those processes are then removed later on? Like in other words, like the system becomes less vigilant about quieting these sort of processes that would be a nuisance if they were generated? Because I've always thought of the brain as being fundamentally non-regenerative because its function is anathema to regeneration. Like you actually don't want a system that is endlessly flexible if you want it to be crystallized in such a way as to have representations that can build world models and can retain something like a stable personality and maintain memories that aren't always changing or aren't suddenly erased so that you can maintain your social bonds and so on. Like I see the brain as like, non-regenerative for a purpose. 
And so if it suddenly became regenerative, or if it was just given some degree more plasticity, I think it would cease to be the thing that it is currently. It would be more like a general learning machine, but without all the things that we seem to care about as humans, like self and personality and love and all these kinds of very personal things.</p><p><strong>[20:11] Michael Levin:</strong> Yeah, I don't know, axolotls, right? So axolotls, extremely regenerative, including the brain. Now, we could argue about whether axolotls have individual personalities. I suspect, like, I think they do to some extent, obviously not as rich as advanced mammals, but ground squirrels. So ground squirrels, when they hibernate, they have a significant reduction of brain volume. They basically chew up a lot of their brain cells. They come out in the springtime, it regenerates, it comes back. And the cool thing about it is it's exactly what you said about the social bonds. They have, apparently, these ground squirrels have very intricate ledgers of who did what to whom and who's cooperating with these social structures, and all of that comes back. So right now, okay, they didn't chew up their whole brain. This is not a planarian story. Like, so, but I'm not sure, you know, I'm not sure. And I'm also, that's a whole conversation for, I think, for another meeting about, I'm not even convinced that all information is on board here. I have a feeling that, you know, I'm exploring some models in which, I mean, familiar things in which this is basically an interface, like a front-end thin client, and some of the action is on the back end, which means that it may well be possible to be regenerative and still index into the structures that are elsewhere. So I don't know. But what's really, what was really wild to me is we did these, so Leo did these simulations where it's a simulation of morphogenesis. So you have individual cells, the collective has homeostatic states and so on. 
So they build an embryo. In that model, there is no noise. So there is no damage underneath, we don't have that, nor do we have any evolutionary pressures, there's no evolution, there's nothing telling you to die at any given moment. What we see is that already there, spontaneously, you have this error reduction that builds the embryo, and then it sits there for some time as a nice embryo, you know, continuously upkeeping and whatever. And then the whole thing basically spontaneously starts to disband and goes all to hell. And there is no underlying, we didn't have to put in any cause for that. And the other thing that's wild to me is it seems to me that takes two levels of cognition, because if you're just the thermostat, you'll be fine doing that same loop forever, basically. What you need is a metacognitive loop that says, well, this goal has been achieved for a really long time. Something is up, right? It's like, yes, surprise, minimizing surprise, yes, but eventually you need to generate some new surprises so that you can learn, do better. And so that second-order loop, we didn't put that in, right? So we did not explicitly encode that, and yet it has this dynamic, which I think is wild. And so, for, you know, I'm thinking that with these radical life regenerative technologies, maybe it'll be enough for the micro-level regeneration, so that as long as we sort of repair all the individual stuff, maybe that's enough to keep things exciting, as it were. Or maybe the answer is, you can't live forever as a caterpillar. But if you're willing to change things up every so often, then you can. And the magnitude of the degree to which you're going to have to change things up, I don't think we know. But it's not, I think what you said makes sense. 
It's quite reasonable that if you want to stick around longer periods of time, you're going to have to make significant changes and then force the adaptation, the accommodations to it.</p><p><strong>[23:56] Nicolas Rouleau:</strong> I think people would be willing, at least some people would be willing to take that gambit. But I think that what people are not willing to give up would be like a through line of consciousness that carries you from form A to form B. I think people would be willing to give up their memories eventually. If thousands of years had passed and whatever had happened in the past was now, perhaps you're no longer interacting with the same people, or you're not in the same environment, or that information is no longer relevant, I think just like the files on your computer that are 20-plus years old, you may be willing to purge them or at least offload them and really just never look at them ever again. But consciousness is not something people are going to want to give up. And so there needs to be some mechanism for the experience to continue from form A to form B. Do you think that it could? Well, first of all, do you think it does continue in the case of the caterpillar?</p><p><strong>[25:01] Michael Levin:</strong> So the one thing we know about the case of the caterpillar is that functional memories are not only retained, but I think even more to me, to this point, even more interestingly, they're remapped because the actual memories of the caterpillar are of no use in a butterfly body. You have to completely remap them onto new, not only new hardware. So caterpillar is a soft-bodied robot, meaning you can't push on anything, so your controller is all about inflating and deflating and stuff like that. Whereas the butterfly is a hard-bodied creature, which means you have to push and pull on things to fly around, so it's so completely different, but also the preferences, right? 
So the caterpillar got trained to, what was it, eat leaves at a particular color stimulus or something. Well, the butterfly didn't want leaves, it didn't care about leaves, it wants nectar. And so now you have to go from just like, you know, there has to be some generalization to take place that, right, that this was good. And now not only are your eyes different, who knows what the hell you see now that might be different, but also, I also don't want the thing I ate last time. How do I know that I'm going to get something new that actually is more appropriate, right? So all of that stuff. So that happens. I don't know about the consciousness. I don't know what it's, you know, obviously what it's like to be a caterpillar during the most interesting part of this, of course, is the middle part, right? It's like how they, during the remapping. But even if it maintained, I don't even know if it's possible that being in a butterfly body, you could have the same consciousness as a caterpillar. For one thing, you're living in a world that has an extra dimension. So you were this like two-dimensional thing crawling around. Now you can fly. Like if we had it, right, if we had an extra dimension, would that, you know, could you even say you have like continuity, I suppose. But I do think it's interesting that it sort of goes to sleep for a little while to some extent, right? I would say while everything's getting ripped up and rearranged. So what, right? What comes out on the other end between lives like that? There's all sorts of, you know, wacky things we could talk about there. But I, yeah, I don't know.</p><p><strong>[27:05] Nicolas Rouleau:</strong> I think, from the perspective of the child, it probably seems very unlikely that they may ever have the conscious experience of being an adult, and yet that transition occurs.</p><p><strong>[27:18] Michael Levin:</strong> No, you're right. 
And because one of the things that happens across puberty, for example, is a radical reprioritization. So things you really cared about before, now it's like, what, who cares? And things that before you thought were completely useless and irrelevant, now they're occupying tons of your time, right? So from that perspective, are you even the same being? To what extent?</p><p><strong>[27:45] Nicolas Rouleau:</strong> I get the sense that we're like, as you're describing the remapping and reprioritization, especially from the caterpillar to the butterfly, I sort of had this out-of-body experience where I'm supervising this conversation. And it's interesting that what we're describing is reproduction and just life cycle. When you reproduce and when you actually give rise to offspring, you might ask the question as a third-party observer, well, how did the consciousness travel from the parent to the offspring? Or how does that continuity actually happen there? Because clearly, this is the organism's mechanism to move on past death: it creates this little clone of itself. I mean, it's not exactly a clone, but it creates this little bud. How exactly does the consciousness move from one to the other? And yeah, I just think that there's something interesting here about when your body ceases to function and the parts that make up who you are are redistributed in the world and reintegrated with other organisms, we think that at least if some of those particles make it into the composition of other humans, that there is some sense that there has been a reorganization here that's taken place structurally and functionally that has now emerged as this new organism somewhere else that has a conscious experience. 
And although the memories and the conscious experience of that other organism are different and even quantifiably different, maybe it is the case that there is something that gets transferred over, even in this sort of very entropically guided case of you have really just complete dissolution and scattering of all the parts of the system. I mean, it's much more extreme than the caterpillar and the butterfly, but to some extent, you do have a kind of remapping of a cognitive system into another when you have ingestion of another organism. How do you think that relates to the McConnell studies?</p><p><strong>[30:25] Michael Levin:</strong> Yeah, I mean, I think I, and I haven't replicated the brain regeneration stuff with Tal Shomrat. We didn't try the cannibalism stuff. There are data on memory transfer by transplants, by tissue transplants. And if it can survive a tissue transplant, then going through the gut, all it has to do is not get digested, I suppose. So I'm not, it doesn't seem crazy to me that it would work. I think that in the end, I suspect that all of these things are pointers in an important sense. They're indexes into a different space, so I'm not sure what that model should look like. But there's an in-between case for this reproduction slash death thing, which is, I wrote this, it's called Life, Death, and something else, I forget what, it's a paper, where I start out by talking about an imaginary visit of scientists to an imaginary planet where they, you know, there's an ecosystem and they do a bunch of sequence, you know, they sequence the hell out of everything. And they find some amoebas that have the same genome as some of the large animals. And they're like, what the hell is this? 
And I basically go through this notion that you could have a life cycle that's basically a xenobot life cycle, where at some point, and you could even imagine, I don't know whether any creature on earth does this, but I think there's not any particular reason why a fish or a frog or something that already lives in water, it's hard for mammals, they need us to make anthrobots, they can't do it themselves. But I don't see any reason when a salmon beats itself to death on a rock somewhere, some of the cells that come off, there isn't any fundamental reason they couldn't live on as amoebas for some amount of time. And that's a viable life strategy in lakes, right? And potentially reassemble as some sort of a bio, like a xenobot or something. And who knows whether given enough time that thing can make some germ cells and go back to being a fish. I don't know. But in general, like that kind of thing, when we make a xenobot by taking apart the cells of an early frog embryo, what happened to that frog embryo? Like, is it dead? Well, not really. Is it still here? No, not really. You have this xenobot, it continues, right? And in the case of the anthrobots, we have plenty where the donor is deceased, but there is a being that continues. That's something we've talked about doing, these experiments where we can get anthrobots from smokers who had a nicotine addiction and just asking whether A, whether anthrobots pursue nicotine from those patients specifically. And if they do, whether implanting them, so here's your, there's your memory transplant studies, whether implanting them into a rat or something would then convey that behavior. I don't know. One of the weirdest things about it is that it doesn't seem to at all, which is consistent with this pointer notion, it doesn't seem to at all match the size of the, you might think, how's a tiny anthrobot going to redo the preferences of a giant rat body, right? It's not the same thing, but maybe it's relevant. 
In planaria, if we take a little tiny piece out of a two-headed worm and implant it into a one-headed worm, in something like 17% of the cases, the recipient becomes two-headed. And this is, to me, super interesting because, and again, maybe goes back to the boredom thing because why would this giant body listen to a few cells? All the other cells are in agreement that worms have one head. This little tiny piece is saying actually we should have two. Why even 17% of the time, why does it win? And maybe it's that novelty thing again. Maybe the other cells are willing to listen some percentage of the time because, well, we've already been a one-headed worm for 400 million years. Here's some new information. Maybe that, you know, maybe that's lit up as higher priority now.</p><p><strong>[34:27] Nicolas Rouleau:</strong> Yes, especially if the environment is really harsh or has changed suddenly, I imagine really extreme responses, maybe like a kind of Hail Mary. That's fascinating.</p><p><strong>[34:52] Michael Levin:</strong> So related to these issues of memory storage, memory interpretation, another thing I wanted to ask you is neural decoding. Why do you think third-person neural decoding, meaning that I'm going to measure your brain and try to figure out what you're thinking, is so much harder than first-person neural decoding, which is like most of the time under normal circumstances, we don't have a lot of difficulty knowing what the meaning of our engrams is? We sort of reconstruct it and whatever, but we're pretty good at accessing our own. But in third person, it's really hard, right? I mean, people have had some success, but it's really hard. What do you think is going on there? 
Why is it so hard?</p><p><strong>[35:35] Nicolas Rouleau:</strong> It just occurred to me what I wanted to ask a minute ago, if you don't mind.</p><p><strong>[35:38] Michael Levin:</strong> Sure, go for it.</p><p><strong>[35:40] Nicolas Rouleau:</strong> So it would be interesting if we were able to identify some molecule, like just imagine a hypothetical molecule that exists in systems, that the sole purpose of the molecule is to transfer goals. It doesn't transfer structural building blocks. It's not a physiological tool. It's literally just a goal. It's like a message that says, This is what your job is. And if that were the case, all this would be, it would make a lot of sense, right? If you take, because under a neurobiological explanation of what you're describing with the anthrobots, suppose they were able to acquire some kind of nicotine-pursuing strategy, you might say, well, that's because perhaps they have more of the nicotinic acetylcholine receptor, and that's being co-opted as its main chemotaxing module or something like that. And you can make some sort of argument like that. And then once you implant it, the whole question would be like, well, how then do you tell the rest of the system to take on this new goal when in fact the new system isn't equipped with the same concentration levels of the nicotinic. But if it actually had this little message that it could pass on and say, well, this goal has been really useful for me. Why don't you try it out? Or maybe just add it to your repertoire of potential goal orientation strategies. I mean, I'm not saying that thing exists. I'm just saying, like, you would need some kind of goal messaging system that goes beyond just simple building blocks.</p><p><strong>[37:27] Michael Levin:</strong> For sure. And there's another interesting piece of data. This guy Heper back in, I want to say, the 80s, did these experiments where he would take certain odorant molecules and inject them into a frog egg inside. 
So we're talking cytoplasmic. And then when that animal became big enough to have behavior, it would preferentially seek out those molecules in its food choices. Now, here again, so that interpretation, so again, the question is, well, what's the transduction? So you've got some sort of weird molecule inside the cell. It then has to convert that into a presumably multicellular neural something that will lead from smelling it to actually going to find it and all that. So you have to analyze it and then modify your large-scale nervous system somehow. So I feel like these systems have a ton of this plasticity interpretation, and they like to sort of pass it on. This information moves, it moves within bodies, it moves between bodies. That, yeah, I think that plasticity is going to be sort of massive, and I think it's underappreciated.</p><p><strong>[38:41] Nicolas Rouleau:</strong> That's interesting. Neural decoding.</p><p><strong>[38:46] Michael Levin:</strong> Why is neural decoding of somebody else's brain, as opposed to your own brain, so difficult? What do you think makes it so challenging?</p><p><strong>[38:57] Nicolas Rouleau:</strong> I mean, I think the way that we neurally decode from a third-person perspective between humans is usually through the medium of language. Do you agree?</p><p><strong>[39:08] Michael Levin:</strong> Well, I would say, so the comparison I'm making, and you don't have to buy into the comparison. You can just talk about neural decoding as it stands today in general. But for me, I see two versions of this. I see my own neural decoding, which means that most of the time in the absence of various defects and so on, I don't have a lot of problems knowing whatever memory, structures, molecules, processes, whatever we're using, we typically know what they mean, right? So, I access whatever structure that is, and they say, yes, that's because yesterday I had toast or something. 
Whereas if I were to, if in third person, if I were to figure out, okay, did Nick have toast yesterday? It would be, I would have a hell of a time trying to interpret, right? And there's only been some success, but it's really hard. That's the comparison, right?</p><p><strong>[39:56] Nicolas Rouleau:</strong> Again, the interpretation of. In your example, what's the connection? What's the information that I have access to that I'm trying to decode?</p><p><strong>[40:05] Michael Levin:</strong> In the third-person perspective, whatever neuro, whatever you want, electrical MRI, what do people typically use, right? They typically use physiological readings from brains of animals and human subjects to try and say, can I tell what, you know, you've seen 10, you've seen 10 pictures at some point, then I ask you to imagine one, and I try to guess, right, from doing brain readings, I try to guess which picture you're looking at.</p><p><strong>[40:34] Nicolas Rouleau:</strong> I think that if I was to take an EEG reading of my brain when asked the question, or an fMRI recording, I think is equivalent in this case. But if I were to take a recording from my brain when asked the question, what's your favorite food? And I took a recording from your brain, I think I would have just as much trouble interpreting both of those signals. Agreed. And in a sense, they're both third person in that case, right?</p><p><strong>[41:02] Michael Levin:</strong> Exactly, that's exactly what I'm getting at, right? So from the outside, whatever that means, it's really hard, but from the inside, whatever that means, it apparently is much more, much smoother, right?</p><p><strong>[41:12] Nicolas Rouleau:</strong> Yeah, I think that one way to answer this would be that all of these measures are imperfect analogs of what's actually happening. It's like the original analog computers, how you'd have a buoy and then the buoy would go up and down with a wave. 
And then you could have the buoy basically trace a line on a graph, on graph paper. So if I was to present you with the graph paper with the sine wave or whatever wave is on there, and I gave you no context at all, you may not guess that what this actually is, is the analog of a wave in the ocean. I'm not sure that you'd be able to guess that with no context. So I think that all of these measures that we use as stand-ins for qualia, for phenomenal experience, for decoded memories are just extremely imperfect. If I was able to measure your brain and we had a new technology that actually converted your brain activity into a video, and you could affirm that video in fact was a pretty accurate representation of your experience, I think that if I presented that video to someone else, they would also be able to have a pretty accurate experience of it. They could describe the video, and they would be describing your experience. So I think that the tool is imperfect in these cases. And in that case, that would be a decoder that is getting very, very close to the actual experience in the form of a video. If we added audio, it would get even closer to your experience and would have more features of your memories. So you need something like that. Language is what humans use to describe all that. And it's very much an imperfect thing because you lose so much with words. But words evoke this kind of mental theater that is the cognitive, low-resolution version of this technology I'm describing, where you would be able to convert your memories into these actual messages on screens. But I think that's really the problem: you can have good decoders and bad decoders, and it's very difficult to infer anything about someone's experience based upon lines on a graph or numbers on a screen. Let's see.</p><p><strong>[44:01] Michael Levin:</strong> Cool.
So I want you to describe what's the weirdest work that you've ever done.</p><p><strong>[44:08] Nicolas Rouleau:</strong> There's so much to choose from. I think the weirdest thing that I've ever done is, as part of my master's thesis, I asked the question, and maybe I should preface this, but I asked the question, can you classically condition materials? And the reason for that was I had read this really cool paper from the 1950s where these guys out in England, they classically conditioned an iron bar. You know about this. So the question was, could we do it with something like electroconductive Play-Doh? Because Play-Doh, you could run current through it. You know that the current is taking a certain path through the Play-Doh. So could it actually carve out a particular path that could be in a re-entrant kind of way, continuously carved out? And so, just like running current through a piece of wood and noticing that it has this kind of lightning pattern, could you do that sort of thing in a given material and have it dynamically respond to a previously neutral stimulus by having the kind of unconditioned stimulus, neutral stimulus pairing? I was able to classically condition Play-Doh. Basically, we took small bits of Play-Doh and it was like, you know, it's Play-Doh with lemon juice in it. And we ran certain current through it. And then that current was pair associated with a flashing light. And basically, you would have the current that goes through and then you would measure the current output. And so you could get, you could create a spectrogram based upon the electrical noise in the Play-Doh. And what we found was that when the light was on, when you actually flash the light after the pairing had occurred, the noise, the electrical noise in the Play-Doh seemed to correlate with just running current through the Play-Doh. And so this was just a light-induced, current-type response in the Play-Doh. So, we had successfully demonstrated a conditioned response. 
But then we went a little bit further and we started developing a histo technique on the Play-Doh. So we took the Play-Doh and ran it through histological analysis and sectioned it and stained all the little grooves and things like this. And we were actually able to find these little microstructures that corresponded to when you actually ran electricity through the Play-Doh. So it had more like little grooves inside of it. We actually ended up publishing it. So, you know, it's in PLOS ONE somewhere. And it's very weird. And I don't think anyone has cited it and probably no one will replicate it. It's just a very weird study. But yeah, that's my answer.</p><p><strong>[47:14] Michael Levin:</strong> Amazing. And so the fact that you found microstructures, does that mean that if you were to, or maybe you tried this, if you were to take the trained Play-Doh and rejigger it at a higher level, does it keep the information or no? What's the scale of the...</p><p><strong>[47:30] Nicolas Rouleau:</strong> We did exactly that. So you take the Play-Doh, have them paired, and then just deform it and reform it into a ball, and it didn't display the response. I see.</p><p><strong>[47:40] Michael Levin:</strong> So some kind of larger structure. Interesting. Out of the space of all possible materials, what's your guess as to what percentage? Presumably, we don't think there's something super lucky about Play-Doh, right? What percentage of materials out there do you think have these properties?</p><p><strong>[48:02] Nicolas Rouleau:</strong> I don't know. Percentage is difficult, but I think it would have to.</p><p><strong>[48:05] Michael Levin:</strong> Overall, is it a needle in a haystack thing or is it a general feature of matter or somewhere in between? What do you?</p><p><strong>[48:11] Nicolas Rouleau:</strong> On this planet, it's probably a relatively general feature, I would say, because of water and because of all the organics. 
I would assume that a system would have to have, because Play-Doh, of course, is made up of the stuff of plants, right? So I think you would have to have a system that is sufficiently plastic and responsive to some kind of deformation, be it electrical or photonic or mechanical. It would have to be changed by inputs of some sort and retain those changes for some duration of time. I think that describes a lot of materials. Like we have memristive materials now. We know that mushrooms do this kind of thing. And there's all sorts of living and non-living materials that have these basic properties, that they could be changed by inputs. I think it's pretty general, not some special feature of a small subset.</p><p><strong>[49:15] Michael Levin:</strong> I agree with that. Then two final quick questions before we have to wrap up. So I, for example, don't think that neuroscience is about neurons per se at all. Do you agree with that? And if so, in a sentence or two, what do you think neuroscience is really about?</p><p><strong>[49:39] Nicolas Rouleau:</strong> I agree with you because I know what you mean. And I know that, in the same way that plant neurobiology isn't really about the nerves of plants, we're describing what we might say are neural systems in the absence of neurons. Like we're talking about networks or we're talking about cognitive systems or we're talking about some functional label that isn't bound to a specific structure. I agree that much of neuroscience is actually about that. I totally agree. And yet the field is defined by whatever it is that most people are doing or saying in the field. And I would say most neuroscientists would probably disagree. They would say that, no, it really is just about cells and brains. But no, I agree with you. I have a more functionalist kind of view of these things. What do you think?</p><p><strong>[50:33] Michael Levin:</strong> I think fundamentally the deepest lessons of neuroscience are about cognitive glue. 
So they're about understanding the ways in which competent smaller subunits (and of course, neurons are a great example of that, but as we said, there are many others) get harnessed together and aligned towards larger-scale causality: goals, memories, preferences that none of the parts have, but the collective does. And I think that's, to me, one of the biggest things that neuroscience offers us: an example where we take seriously all the levels. We take seriously the synaptic proteins and the networks and eventually psychoanalysis, like all the, right, we know that all of these levels are interesting and important. And it's this amazing field where lots of people are working on the transitions between the levels, right? So that, you know, for example, for molecular biology and things like that, that's a deep lesson that they haven't yet, I think.</p><p><strong>[51:31] Nicolas Rouleau:</strong> Isn't that interesting that our fields are defined by all this matter stuff and not process? If we actually defined the fields by process, we could have fields of study like multicellular connectomics. And then that would just describe regenerative biology and cancer and neuroscience and all sorts of things. And it would just be functionalist. There would be different cell types based upon certain structural markers within these fields. But that wouldn't matter because we're actually just talking about processes, shared processes.</p><p><strong>[52:05] Michael Levin:</strong> I think the problem there, or the resistance to that, is that you couldn't keep it in the biology department then. The departments would have to go too, which I think would be true. It would be completely fine.</p><p><strong>[52:17] Nicolas Rouleau:</strong> Immediately computer science and neuroscience departments are now the same department.</p><p><strong>[52:23] Michael Levin:</strong> That's right. And as you said, certain material science departments as well, right? 
What do you see as the, again, I'm going to say percentage, but I don't mean a number. What's the prevalence of intelligence in the universe? Is it like super rare and precious and maybe the earth is the only one? Is it a common feature? Is it embodiments beyond water and carbon and all of that? What's your take on the whole thing?</p><p><strong>[53:00] Nicolas Rouleau:</strong> Great question. My intuition is just to scaffold it almost a one-to-one correlation with whichever planets would host life. But not because I think that it's just a life thing, but because the kinds of planets that have the kinds of interactions that give rise to life would have the kinds of causal structures required for an intelligent system. And yet, I think you could maybe view all sorts of intelligent things at scales that are much larger than planets. I don't know how. I mean, so we have to ask what is intelligence, and that's a whole rabbit hole that we can go down. But if we're just talking about problem solving and adaptation and this kind of cognitive flexibility.</p><p><strong>[53:54] Michael Levin:</strong> Well, you can throw in consciousness as well, right? So first-person perspective, how common is that? I mean, you choose any of that.</p><p><strong>[54:05] Nicolas Rouleau:</strong> I think that you probably have to have, like, so I don't think this stuff is happening at the level of atoms. And I don't think it's happening at the level of galaxies, because I don't think galaxies can, I don't think there are enough units in a galaxy, like in the universe in terms of galaxies with connections between them that allow them to have sufficient causal structure to solve problems. So you probably do need to have something at a scale that is less than a planet to have intelligence, just because of the size of things. I think it's just sort of like a spatial problem, and how close things are to each other. 
In space, things are really spread out, but on a planet, because of gravity, everything's been kind of brought together. So I think if you have a planet where everything has been kind of brought together and squished together, you have the capacity for the kinds of interactions that can lead to problem solving. And then I think that, you know, that means, you know, if we're just talking about planets in the universe, now we're talking about a really small subset of the universe. And then if we're talking about like only the planets that are far enough away from the sun to have like, you know, organics and light and life or water, that's an even smaller percentage. So I think it's a small percentage in the universe, but how small it is, I'm not too sure.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Conversation with Nic Rouleau, part 1: &quot;Some thoughts on the mind as material&quot;</title>
          <link>https://thoughtforms-life.aipodcast.ing/conversation-with-nic-rouleau-part-1-some-thoughts-on-the-mind-as-material/</link>
          <description>Neuroscientist Nicolas Rouleau joins Michael Levin for a wide-ranging discussion on the mind as a material process, exploring free will, agency, cybernetics, brain death, and how consciousness and information might be transmitted or realized in different systems.</description>
          <pubDate>Fri, 03 Apr 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 69cf679efb271c000155d49f ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
<content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/3talIGE_v9Y" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/b83a84be/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour talk and discussion, comprising part 1 of a conversation with a really interesting young neuroscientist, as well as friend, collaborator, and our Center member, Nicolas Rouleau. Nic goes over unconventional aspects of neuroscience touching on free will, cybernetics, consciousness, and a lot more. We start a discussion which is continued in part 2. For more information: Nic's website: X account: @DrNRouleau Recent papers to check out: Sellar, E.P., Rouleau, N. (In Review). A cybernetic framework for synthetic biological intelligence in the era of neural tissue engineering. Preprint doi: 10.31234/osf.io/md2wf_v1. Kansala, C., Cicek, E., Nkansah-Okoree, V., Golding, A., Murugan, N.J., Rouleau, N. (In Review). Superstitious conditioning forms the experience of free will under causal determinism. Preprint doi: 10.31234/osf.io/fk3yt_v2. Roskies, A. &amp; Rouleau, N. (Forthcoming, In Press). Research on brain organoids should prioritize questions of agency, not consciousness. AJOB Neuroscience. Rouleau, N. &amp; Levin, M. (In Press). Brains and where else? Mapping theories of consciousness to unconventional embodiments. Philosophical Transactions: A. Preprint doi:10.1098/rsta.2025.0082. Rouleau, N., Levin, M. (2024), Discussions of machine versus living intelligence need more clarity, Nature Machine Intelligence, doi:10.31219/osf.io/gz3km Rouleau, N., and Levin, M. 
(2023), The Multiple Realizability of Sentience in Living Systems and Beyond, eNeuro, 10(11), doi:10.1523/eneuro.0375-23.2023 Rouleau, N., Cairns, D. M., Rusk, W., Levin, M., and Kaplan, D. (2021), Learning and synaptic plasticity in 3D bioengineered neural tissues Neuroscience Letters, 750: 135799</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Free will, minds, transmission</p><p>(26:52) Material brains after death</p><p>(31:18) Defining free will experience</p><p>(38:00) Long-term agency and algorithms</p><p>(49:35) Causality, math, and consciousness</p><p>(57:33) Transmissive consciousness and information</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording 
or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Nicolas Rouleau:</strong> Yeah, thanks for inviting me, Mike. My name is Nic Rouleau. I am an assistant professor at Wilfrid Laurier University and an affiliate scientist at the Allen Discovery Center at Tufts. And today we're going to talk about some interesting topics. And I've put them under the umbrella of some thoughts on the mind as material. We'll be talking about free will, cybernetics, and this idea of transmissive consciousness. And I'll try to run through these slides pretty quickly because I'm excited to get to the discussion. So these are the three ideas that I sort of want to touch on today. How do we explain the experience of free will? What is a mind and how can we build it? And is the brain a transmissive organ? So brains are complex objects. They're probably the most complex objects we know of in the universe, but they are not impossible to understand. They're not uniquely composed, and they don't require any magical or non-physical mechanisms or new physics. I think that everything we've seen so far in terms of functions of the brain can be explained basically by the physics that we already have and mechanisms that we already know of. And then, of course, we can build on top of that, but I don't think it needs any special properties beyond what we're already investigating in biology and engineering. So I'll start with the first question, which is, how do we explain the experience of free will? And I'll start this by talking a little bit about the story that we've been sold about free will, which is the story that the brain makes decisions. So there are these things called intentions and desires that are said to initiate a sort of causal chain that leads to actions. We're often told that we plan and we organize events. 
And then you can look at brain imaging and see that there are certain areas of the brain that light up, demonstrate activity, either through fMRI or EEG, and those correlate with actions, but they seem to precede the actions. And then, of course, there are these things called decisions, which correspond roughly to the idea that you can actually select options in the world in terms of what you're going to do. And basically the story is that you're the conscious author of your actions. You get to make decisions and those decisions have impacts in the world and you really are making it happen in a conscious and intentional way. And the evidence for this is not great. So if you look at the neuroscientific literature over the past 20, 30 years, you'll find a whole bunch of evidence in the opposite direction, which basically suggests, you know, the totality of it suggests that we are more or less witnesses to actions that are happening. We seem to be conscious of decisions after they're made, which to me means that the decision isn't being made by the conscious agent. But of course, there are different ways to interpret all these results. I included one study here on the far right, which I think is a bit of a nail in the coffin for an idea of free will, just from the neuroscientific perspective. And I could talk a little bit about that, which is basically if you stimulate the brain with transcranial magnetic stimulation, if you stimulate the right side of the brain, you can get people to make left-sided decisions, like press a left button more often than the right side. And if you stimulate the opposite side, you can get the opposite reaction. Basically, you can determine people's decisions. But when you ask them why they made those decisions, they'll tell you that they wanted to make the decisions, which is really interesting. 
It's a preservation of the experience of free will, even though experimentally, you know that you're determining the outcome of the task.</p><p><strong>[04:28] Nicolas Rouleau:</strong> So I think this is all really interesting. And there are different ways to interpret these studies, of course, and many will disagree with my conclusions. But I basically think that this is interesting but irrelevant. And that's because I think that the question is not posed correctly and has been pursued incorrectly. The only evidence that we have for free will is the subjective experience of free will. And Bertrand Russell gave us this great analogy of the teapot, which he applied to different kinds of arguments, but here I think it applies to free will. And basically the idea is that, if I make the claim that there's a teapot that's orbiting the sun between Earth and Mars, it would be very difficult for you to actually measure that or disconfirm it. But actually, the burden of proof is on me to demonstrate that the teapot exists, not on you to demonstrate that the teapot doesn't exist. And what we've seen in the neuroscientific literature is that the burden of proof has been shifted to basically those who don't think free will exists. And there's a constant sort of struggle to develop experiments that continuously demonstrate that we're not the author of our actions. So assuming causal determinism is true and there really are no uncaused causes, I think that all we need to explain really is the experience of free will. And we can even make the assumption at the outset, whether it's true or not, we can make the assumption that we live in a causally deterministic universe, the brain is no different, thoughts and behaviors are basically the products of a chain of causes. And so really all we have left to explain is why people have this incredible experience of free will. And it is a very common experience. So it sort of demands an explanation. 
And the answer that we put forward in a recent preprint is the idea that basically free will is explained by superstitious conditioning. If you look at the original studies by B.F. Skinner in the mid-20th century, he did some incredible experiments where he demonstrated that you could basically get pigeons to have superstitious beliefs. And this is a form of learning called non-contingent reinforcement. So if you have a pigeon in a Skinner box and you make it so that when they peck a lever, they get some food, they'll peck the lever and they'll get the food. But if you make it so that they get food regardless of what they do every 10 seconds, 30 seconds, what ends up happening is the pigeon gets reinforced for the behavior that it was displaying right before it got the food. So whatever it was doing right before it got the food, now that behavior is reinforced. And so it will continuously display that behavior. And you can see naturalistic examples of this throughout history. So for example, rainmaking behavior, people doing rain dances and things like this to bring about a change in weather, to address a famine. So that is totally understandable in the context of superstitious conditioning. Basically, humans are stressed by either a famine or a drought. They engage in different kinds of behaviors to try to remove that stress or avoid that stress. And finally, as time goes on and they become more desperate, they do things that are more and more strange and incoherent, which is what most animals do. Animals will behave randomly until they get the response they desire. And whatever they're doing right before it rains, and eventually it will rain, whatever they're doing right before it rains gets reinforced. And so then whatever behavior was happening right before it rained, whether it's an animal sacrifice or a specific kind of dance, that gets reinforced. More of that over time. 
So basically, that's what we think free will is, and I'll get to the model of how that works. But this basically is saying that free will is a learned phenomenon.</p><p><strong>[08:57] Nicolas Rouleau:</strong> And what we find is that learning, superstition, and delusions, which are all related to one another, are actually related specifically through dopaminergic pathways in the brain, the mesolimbic pathways in particular. You find, for example, that among people with Parkinson's disease, superstitious beliefs are very low. I don't know why I've got the arrows backwards here. So in Parkinson's disease, where dopaminergic activity is very low, superstitious beliefs are also low. In schizophrenia, superstitious beliefs are very high, and that's a disorder characterized by excess dopamine activity. If you give people with schizophrenia a dopamine blocker, they end up having less superstitious and delusional beliefs. So basically, we think that it's a dopaminergic phenomenon, which is actually in line with the literature, the neuroscience literature around decision-making, which has focused on dopamine for the past 20 years. And the sensible question you might ask is, well, why is this? Why do we have this experience of desiring things and then planning and organizing? Because people really do report these kinds of experiences. And basically the answer we've come up with is that we think that you're predicting it. And because you have access to the content of your brain, whether conscious or unconscious, you are constantly forming predictions about what will happen next. And this is consistent with an active inference model of how the brain works. And basically, because you have access to what's happening next, when your predictions are realized, that actually reinforces whatever it is that you were predicting. 
And if you look at the model that we generated here, basically the idea is that whatever activity is happening in the premotor cortex and any area of the brain that signals to the motor cortex, and the motor cortex, by the way, signals directly out to the spinal cord into the muscles. So if you just stimulate the motor cortex, people move their bodies. So anything that's happening before the motor cortex is called premotor. But we think that that's basically a predictive, it's a predictive substrate, and it is anticipating what will happen next. And when that action is actually realized, the anticipation is reinforced. And because of the temporal contiguity, because prediction is always coming before realization, you have this automatic reinforcement of the prediction. And because it's embodied and because you witness it as your body in motion, there's a self-attribution. So you're attributing the causality to yourself. And if it's a learned phenomenon, it should extinguish when you have examples where a prediction is made, but then it's not realized. And we do actually have examples of this. So you can condition individuals to not feel free. When you do this in animals, it's called learned helplessness. So if you have a dog in a Skinner box with two compartments, a shocked floor and another shocked floor. And it can choose between these two compartments. And whatever it does, it will always get punished. The dog will eventually stop avoiding the punishment and will simply sit still. And you get different kinds of versions of this classic experiment where eventually, if it doesn't matter what you do, you are constantly getting a negative reinforcement. You can get basically the opposite of what we're suggesting, which is you get a complete lack of free will or the experience of free will. You don't think you're the author of your actions anymore. And there are clinical versions of this like avolition that you see in clinical populations. 
So why isn't it usually extinguished? It's because the way that the neural circuitry works in the brain is that premotor activity always precedes motor activity.</p><p><strong>[13:25] Nicolas Rouleau:</strong> You always have this prefrontal cortex activity that's happening before the motor cortices are activated. And so whatever predictive state is occurring is always happening beforehand. And that temporal contiguity, one before the other, means that you will always reinforce the prediction. So I'm happy to talk about this a little bit more, but I'll move on to the next idea here. So the next idea is what is a mind and can we build it? And this line of my research has very much to do with this idea of minds being much less complex than we give them credit for. They look incredibly complex, but basically what we have in the form of animal brains and minds is something that has been built up over billions of years from tissues and cells and the kinds of cognitive systems that can exist at smaller scales and at different time spans. But the basic building blocks of a cognitive system can be described using things like Braitenberg vehicles and basic cybernetic loops. And I think that cybernetics actually provides a great way to approach these problems. And I love this Braitenberg quote, which is the idea of uphill analysis and downhill invention. So if you're trying to understand the brain, you can pick apart all the different pathways and test everything until you're blue in the face. And you will eventually get answers, and these answers will tell you about the cognitive circuitry of the system. Alternatively, another way to understand the brain and another way to understand cognitive systems is to actually build them and then try to understand the thing that you built and map that onto the kinds of behavioral phenotypes that you see in nature. 
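As a minimal sketch of the Braitenberg-vehicle idea mentioned here (an illustration of the principle, not code from either lab): two light sensors cross-wired to two wheel motors are enough to produce stimulus-approaching behavior, with no internal model at all. Everything below, including the gains and geometry, is an arbitrary toy setup.

```python
import math

def sense(pos, heading, side, light):
    # Two sensors mounted at +/-45 degrees from the heading, slightly off-center.
    ang = heading + (math.pi / 4 if side == "left" else -math.pi / 4)
    sx = pos[0] + 0.2 * math.cos(ang)
    sy = pos[1] + 0.2 * math.sin(ang)
    d2 = (light[0] - sx) ** 2 + (light[1] - sy) ** 2
    return 1.0 / (1.0 + d2)          # intensity falls off with squared distance

def step(pos, heading, light, dt=0.1, gain=4.0, width=0.3):
    sl = sense(pos, heading, "left", light)
    sr = sense(pos, heading, "right", light)
    # Crossed (contralateral) excitatory wiring -- Braitenberg's "aggression":
    # the brighter side drives the opposite wheel, turning the vehicle toward the light.
    vl, vr = gain * sr, gain * sl
    v = (vl + vr) / 2                 # forward speed of a differential-drive base
    w = (vr - vl) / width             # turning rate from the wheel-speed difference
    heading += w * dt
    pos = (pos[0] + v * math.cos(heading) * dt,
           pos[1] + v * math.sin(heading) * dt)
    return pos, heading

light = (5.0, 5.0)
pos, heading = (0.0, 0.0), 0.0
start_d = math.dist(pos, light)
for _ in range(200):
    pos, heading = step(pos, heading, light)
end_d = math.dist(pos, light)
print(round(start_d, 2), round(end_d, 2))
```

Run long enough, the vehicle closes on the light, which is the "downhill invention" point: a behavior that looks goal-directed from the outside falls out of two sensors and two wires.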
So we can engineer these miniature brains in a dish using pluripotent stem cells and primary neurons, and we can put them together in different kinds of combinations, controlling how many layers there are, what kinds of spatial characteristics they have, whether it's a co-culture, monoculture. And this is some of the work that Mike and I did, where we found that if you create these miniaturized brains in a dish, you can get them to learn, you can get them to display these non-associative learning responses that also have spontaneous recovery. So these very basic responses that you can see. So the question is, are they capable of more complex cognitive phenomena? And we now use a single-cell-resolution microelectrode array. And what you're looking at here are action potentials within a network being displayed at very high resolution, resolution of about 10 microns. And it turns out that if you disembody neurons, if you remove them from a body and put them in a dish, as we've been doing in biology for over a century, they behave very differently in terms of their physiology than they would if they were actually hooked up to a body. Disembodied neurons display these stereotyped paroxysmal electrical discharges, this kind of burst-firing phenotype. And that's viewed in the context of electrophysiology as like a good sign. It's like, well, the cells are firing, great. But if that was happening in a body, we would call it a seizure. And in fact, it displays all the basic characteristics of a seizure. So these are basically cells in a dish that are aberrantly firing. They have this kind of seizure phenotype. And it turns out that if you just give them feedback, either feedback about their own activity, or you inject small amounts of current into the network, just stochastic inputs, that totally normalizes. So as long as the neural network is getting some kind of input from an environment, let's say, something that isn't itself, it tends to normalize. 
It tends to become much more like the kinds of neural activity you would see in a body, which I think is just fascinating and actually coincides with all these interesting things that you can do with neurons when you give them feedback. They seem to be able to learn autonomously. They seem to be able to problem solve, make decisions as far as that word means anything after the previous topic. But yeah, an embodied neuron is a very different thing than a disembodied neuron. And that should be pretty unsurprising because, of course, brains co-evolved with bodies. A brain is just part of an organism.</p><p><strong>[17:54] Nicolas Rouleau:</strong> And when these systems are functioning together and navigating the world, they behave incredibly different than when you separate them and have them interact with the same environment. So using closed-loop feedback to try to investigate questions of consciousness and intelligence and attention and all these other cognitive capacities, I think is a really important frontier. And it's something that we're interested in as a lab and looking at. So we're creating these modular brains in a dish, and we're doing some interesting things. One of the things we're doing right now is we're trying to actually bring three-dimensional cell culture and two-dimensional cell culture together into a kind of layered system where different parts of the network can function as reservoirs and as readouts. And if you do this, in the context of embodied cognition, the idea is that you're actually giving the system different kinds of cognitive resources to draw on to solve problems, perform computations. And what we're really interested in is whether, giving a two-dimensional monolayer access to a three-dimensional neural network, for example, whether that confers some kind of enhanced cognitive properties. Do they learn faster? Does it take fewer trials to learn the same task or solve the same puzzle? So that's some of the things we're doing. 
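One conventional way to quantify the burst-firing versus normalized-firing distinction described here is the coefficient of variation (CV) of inter-spike intervals: clock-like firing gives CV near 0, Poisson-like firing near 1, and bursting pushes CV well above 1. The sketch below uses made-up spike trains purely to illustrate the metric; it is not the lab's analysis pipeline or data.

```python
import statistics

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals (ISIs).
    CV << 1: regular, clock-like firing; CV ~ 1: Poisson-like; CV >> 1: bursty."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return statistics.stdev(isis) / statistics.mean(isis)

# Hypothetical "embodied-like" train: one spike every 100 ms.
regular = [i * 100.0 for i in range(50)]

# Hypothetical "disembodied-like" train: tight 5-spike bursts, long silences between.
bursty = []
t = 0.0
for _ in range(10):
    for k in range(5):
        bursty.append(t + k * 5.0)    # 5 ms intervals within a burst
    t += 1000.0                        # 1 s between burst onsets

print(round(isi_cv(regular), 3))  # 0.0 -- perfectly regular
print(round(isi_cv(bursty), 3))
```

The bursty train's CV comes out far above 1 because its interval distribution mixes many very short gaps with a few very long ones, which is the statistical signature of the paroxysmal discharge phenotype.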
And I'll move on to the last topic because I would like to just get into the discussion. And this is the topic of the brain as a potentially transmissive organ. And this idea sort of starts with what I was doing in grad school. So in grad school, I was working with Michael Persinger on the idea of brain electromagnetic interactions. And I was reading about William James's research and his thoughts on consciousness from the turn of the century, late 1800s. And he gave this great speech on human immortality, where he asked the question, what would be necessary for consciousness to survive bodily death? And basically the answer that he came up with, which I think is the right way to approach this scientifically, is the idea that if the brain is a productive organ, if consciousness is just a property of what the brain is doing, as such, neurons firing signals to one another, be they electrical, chemical, or a combination, if that is really the sufficient property that either realizes consciousness or from which consciousness emerges, then basically, you cannot, consciousness can't survive death because when the brain dies and it decays, consciousness would dissolve with the matter that gave rise to it. But he proposed that there are other kinds of functions, other kinds of functional categories that could exist that if it was discovered that the brain was within this category of function, that consciousness could survive bodily death. And he proposes transmissive function as the main function that would allow for the survival of consciousness. And what is transmissive function? Well, it looks a lot like these two examples. So there's the example of the pipe organ, for example. So there's air in the room and the air isn't music. It's not sound. But when that air is sifted through various compartments and compressed and changed, it can become these oscillating pressure waves, which are experienced as music. 
And in the same way, when you shine light through a prism, that light, when it's split into its constituent colors, the colors are not a result of the prism producing anything. The prism is simply filtering the light. And you wouldn't ask the question, well, where was the color in the prism before it was filtered by the prism? These are sort of nonsensical questions in the context of transmissive function. And so these are some of the examples that William James gave to describe what transmissive function is. And I think it was in 2021 or 2022, I wrote an essay that was a response to a challenge by the Bigelow Institute for Consciousness Studies that had the same question. What is the best scientific explanation, the best scientific evidence for the existence of consciousness after bodily death? And those who produced essays in this competition often drew on examples like, well, we can look at things like mediumship and we can look at post-mortem apparitions, like the existence of ghosts and things like this. I thought that William James basically got the question right. And so it was just a matter of identifying what kinds of scientific evidence existed that could be in support of the idea of transmissive consciousness.</p><p><strong>[22:22] Nicolas Rouleau:</strong> And what I identified was that basically the brain's interactions with electromagnetic fields constitute a genuine transmissive function of the brain. These are transductions and transmissions that are occurring without an intermediate sensory modality. They're happening directly at the level of the brain. They're changing brain function and they're changing experience. So I'll get into a couple examples of this. So there's evidence, for example, that the brain coheres in real time with oscillations of Earth's magnetic field. Everybody knows that Earth has a magnetic field. That magnetic field is generated by basically molten iron moving around in our core. 
You can see Earth's magnetic field when you have coronal mass ejections and basically the particles from the sun are dancing at the poles of Earth's magnetic field as the aurora borealis. But we know that those same perturbations of Earth's magnetic field distort all sorts of things like flight patterns in birds and different kinds of swarming behaviors in insects and so on. Well, it turns out that it also influences human activity. I'll get to that a little bit later. But one of the things that happens is that Earth's magnetic field is not static. It actually oscillates. And the reason it oscillates is that around the Earth right now and continuously, there are lightning discharges between the ionosphere and the surface. And these lightning discharges actually oscillate Earth's magnetic field with a modal frequency of about 7.83 Hertz. This is known as Schumann resonance. And this oscillation actually shows up on EEG and coheres in real time with EEG. So if you look at the brains of, you bring people into the lab, look at their brains in terms of EEG rhythms, you'll find that their brain activity actually coheres in real time with Schumann resonance measured as Earth's oscillating magnetic field. When Earth's magnetic field is oscillating during moments of perturbation, like when there's coronal mass ejections, there are more seizures that you see in psychiatric inpatients. So humans are affected just like other animals. We're really no different. And they also have sort of esoteric experiences around times of geomagnetic fluctuation. If you take a person and put them in a Faraday cage where you've blocked out the electromagnetic environment, you find that their brain rhythms change as well, specifically within the alpha band, which is around 10 cycles per second firing frequency, which is a brain rhythm associated with inhibition of various brain areas. 
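For context on the 7.83 Hz figure (this arithmetic is an editorial addition, not from the talk): treating the Earth-ionosphere gap as an ideal lossless spherical cavity gives resonant modes at f_n = (c / 2*pi*a) * sqrt(n(n+1)), where a is Earth's radius. The ideal model overestimates; the real, lossy cavity puts the observed fundamental near 7.83 Hz.

```python
import math

C = 2.998e8   # speed of light, m/s
A = 6.371e6   # mean Earth radius, m

def schumann_ideal(n):
    """n-th resonant mode of an idealized (lossless) Earth-ionosphere cavity, in Hz."""
    return (C / (2 * math.pi * A)) * math.sqrt(n * (n + 1))

for n in range(1, 4):
    print(n, round(schumann_ideal(n), 2))
# The idealized fundamental comes out near 10.6 Hz; losses in the real cavity
# pull the observed modes down to roughly 7.83, 14, 20 Hz and so on.
```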
And in grad school, I basically demonstrated that if you expose postmortem brain tissue, these are preserved brain specimens, if you expose them to electrical current or electromagnetic fields and you measure voltage fluctuations in the tissues, the different areas of the cortex filter those electromagnetic signals differently. So if you inject current into the parahippocampal gyrus, for example, it will amplify theta rhythms more than it will amplify beta rhythms, for example. So there's a certain kind of frequency selectivity. And basically, I think this is a material property of the brain. I don't think there's anything magical happening here. I think that the brain has material properties in addition to its biological living properties. And that the parahippocampal gyrus is one of these areas that has this really interesting sort of geometry, which could make it a great candidate for studying in terms of brain material and the brain material's interactions with electromagnetic fields. It could explain, for example, why the temporal lobes in particular are so sensitive to electromagnetic fields and why research involving brain exposures to electromagnetic fields are very often dominated by experiences that are consistent with activations of the temporal lobes, like hearing sounds, seeing colors, having these kinds of visceral experiences, not on the outside of the body, but on the inside of the body, and things like that. Okay, I'm going to end it there. This is my lab. Shout out to my institutional affiliations and funders, and I'll wrap it up there.</p><p><strong>[26:52] Michael Levin:</strong> Super. Thanks very much. Lots to chew on. Let's see. I have a bunch of questions. Let's start with just the very last thing you said, and then we'll circle around the back. So isn't it amazing that even after these brains that you were looking at, formaldehyde fixation, formalin, something like that? I mean, that's a lot to ask, right? 
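A cartoon of the frequency-selectivity idea being described, that passive material properties alone can favor some bands over others: a first-order RC low-pass filter attenuates a theta-band test frequency less than a beta-band one. The 10 Hz cutoff is an arbitrary illustrative choice, not a measured property of the parahippocampal tissue.

```python
import math

def rc_lowpass_gain(f_hz, f_cutoff=10.0):
    """Gain magnitude of a first-order RC low-pass filter: 1/sqrt(1 + (f/fc)^2)."""
    return 1.0 / math.sqrt(1.0 + (f_hz / f_cutoff) ** 2)

theta = rc_lowpass_gain(7.0)    # theta-band test frequency
beta = rc_lowpass_gain(20.0)    # beta-band test frequency
print(round(theta, 3), round(beta, 3))
```

The point is only that a network of conductors, insulators, and capacitors is intrinsically frequency-selective, so differential amplification of theta versus beta needs no living physiology.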
Even if the basic finding is true that, okay, after death, they can still respond to, like, let's say all that is true, you might still think that, my God, fixing all of the aldehydes and everything, like, good luck. Any thoughts on why it's still able to do that?</p><p><strong>[27:38] Nicolas Rouleau:</strong> Yeah, I think you're totally right. So when you fix the brain with an aldehyde-based fixative, you get all these sulfide bonds between all the proteins. And that's what's great about it, because when you put it under a microscope, even 20, 30, 40 years later, you have this incredible microarchitecture that's preserved. So there's no chance that whatever it is that we're observing has anything to do with biology in the sort of sense of there's no physiological response here happening. I think that it's a material property and it's a material property that is at least not fully attenuated by aldehyde fixation. It's basically happening at the level of the tissue, like conductors and insulators and capacitors.</p><p><strong>[28:26] Michael Levin:</strong> I mean, that's just wild, right? I can absolutely buy that there are these important material properties, but you would think that the fixation would change all those, right? You know, that's one of these things that strikes me as amazing that it actually works. The other one that always gets me is general anesthesia, right? If somebody said to me that, okay, we're going to come and decouple all your electrical synapses, but don't worry, afterwards, you'll probably settle in pretty much the same bioelectric state you were in before, I would say not a chance, right? No way. It's just incredible that works.</p><p><strong>[29:00] Nicolas Rouleau:</strong> We had the same idea. What is it about the fixative? Could the fixative be interfering with this? So we had a follow-up experiment. We actually did publish it. I think it's in the journal Cognitive Neurodynamics in 2017. 
And what we did was there were a whole bunch of mouse brains that were sitting in fixative for anywhere between one year and 20 years. And basically we just measured all of these and then looked at how the amount of time spent in the fixative and the pH of the fixative changed voltage fluctuations in the tissues themselves. And there were time-dependent differences. So I have no doubt that the fixative is changing something, but clearly not so much as to destroy any kind of differences between cortical areas, for example.</p><p><strong>[29:58] Michael Levin:</strong> That's amazing. If that's the case, then presumably we can expect evolution to have taken advantage of this wide range of weird things that can happen to the material and keep certain useful properties.</p><p><strong>[30:14] Nicolas Rouleau:</strong> Have you ever looked at the shapes of reptile and bird brains?</p><p><strong>[30:22] Michael Levin:</strong> I mean, I've seen them and I haven't studied them closely. What did you see?</p><p><strong>[30:26] Nicolas Rouleau:</strong> Well, it's interesting. They have these really interesting sandwich-like properties where you have three layers and the layers are alternating conductors and insulators. And we actually have that as a vestigial part of our brain anatomy in certain parts of our brain, like the hippocampus. The hippocampus basically is a three-layered cortex. And we eventually have developed more layers as we went along. But yeah, the birds and the reptilian brain, they have this really interesting sandwich structure, which I always thought was really interesting from a materials perspective.</p><p><strong>[31:18] Michael Levin:</strong> Let me circle back around to the beginning. Let's start with the free will stuff. 
Do you want to give a definition of free will and in particular, do you think the word is completely useless and should be gone, or is there some useful sense of it that does some useful work?</p><p><strong>[31:39] Nicolas Rouleau:</strong> I think that, so if I'm giving my definition of free will, I would say that it is a subjective experience of control. And if I go a little bit further, I'll say that it's a misattribution of causality. Because I don't think that you actually do have control. I think you're witnessing your predictions realized. What most people mean by free will is the ability to author their own actions. And I just don't think that that's a thing that really exists. I'm super curious. I'd love to speak to, I wonder what you think about this. Why do you, what is it about brain activity preceding actions that we think has anything to do with something like intention or planning.</p><p><strong>[32:35] Michael Levin:</strong> Yeah, I mean, without getting into my whole story of free will, I do have a theory on why it's a pervasive way of thinking. So imagine the earliest, simplest life forms, right? Because you're living in a highly energy- and time-constrained environment, meaning that everything's expensive, time is really expensive, food is expensive, what you can't afford to do is be a Laplacian demon. In other words, you can't say, I'm just going to pay attention to all the microstates of every ion and everything else around me, and that'll be my story. You'll be dead and eaten in no time. So what you have to do, what you're forced to do, is to coarse grain. You're forced to take ensembles of things and say, I'm just going to call all this stuff back. And as you do that, one really powerful way of doing that is to have models of agents doing things. 
So in other words, it's not just a bunch of random stuff that happens, but the way I'm going to coarse grain it is like, here's this thing, I'm going to call it a predator, I'm going to call it food, I'm going to call it a mate, like whatever it is. And I'm going to tell a story about this thing doing something. A very nice way of compressing what's going on, and it gives me the ability to make fast decisions under limited information. Well, if you do that long enough, eventually you turn that on yourself and you say, wait a minute, I'm an agent that does things. And so I would, without going into whether I think it's actually real or not, I would simply say that here's a theorem that one might put forward, that any being that arises under resource constraint is going to believe in free will. I'm not saying it has it, I'm not saying it doesn't have it. I'm saying that I think that kind of origin really induces and facilitates these kind of models. And then it makes sense to apply that to yourself, right? So I think that's okay. We could tell that story for the origin of it. But I wonder, so let's run with your definition of this misattribution of authorship. What do you think are the implications of how, what does that mean for how we should, or can conduct ourselves? So, somebody doesn't know that theory, then they heard you, they found you very convincing. What happens after that? And I remember being at the Danish pastry house with Dan Dennett, and the waitress came over and she's like, Oh, you know, Professor Dennett, what will you have? And he's looking at the menu, he's like, Well, let me see. I'm like, Dan. What are you doing? Are you going to choose a soup? That can't be right. So, but neither can we sort of sit there and just wait to see what the universe has in store. What do you think about that? 
Like if you find that convincing, how do you navigate life, or do you?</p><p><strong>[35:38] Nicolas Rouleau:</strong> Yeah, I think that I think you can have intellectual positions about how the world really works and then you can have the way that you privately conduct yourself in your day-to-day life. So when I'm interacting with others and when I'm navigating the world on a day-to-day basis, I do generally just move around with the assumption that everybody has free will, even though when I, and it's consistent with what you're describing, like the model that you were describing, which, by the way, I think is also consistent with the animism that you see in young children, like thinking that trees are alive, but that trees are agents that, like everyone else, like the humans that they interact with. And I think that every now and then, what often happens in my life is, like, I'll perceive someone to have slighted me or I'll perceive, like, someone to have done something, quote unquote, wrong. And I think my first knee-jerk reaction is to say, well, what a terrible thing to do. This person really is awful. But then I realize, like, very quickly, because I have, like, the intellectual position that the decisions they're making are the only decisions they could have made given all the preconditions. And even aside from that, like, the whole idea that this person is the author of all their actions and this can be completely held responsible for whatever it is they're doing is just not right. And so I try to, if I reflect about how other people behave and how I behave, I can often forgive people much more readily because I kind of view people's behavior as, like, no different than the weather or no different than how you would expect a crocodile to behave if you stuck your hand in their cage. What happens is the thing that was most likely to happen right then and there because of all the preconditions. 
And blame seems to be, I don't know, just a kind of vestige of something that's just not true. How do you navigate the world with this?</p><p><strong>[38:00] Michael Levin:</strong> So I have a couple of thoughts on that. One is, I do think that it's a useful heuristic too. So what I try to do is, in the example that you gave, when somebody does something bad, I usually go to the Sapolsky version and say, well, there's a long history that led that person to this. What are you going to do? On the other hand, when somebody does something amazing, right, or some act of courage or generosity or whatever, I usually flip the other way and I say, fantastic, you get full credit, like that was your magical inner nature doing that, right? I think that's fine and that's helpful. But also, my story of free will is not as deflationary, I think, and we can sort of, if we have time, we can talk about what that is. I do think that though, on a short time scale, like in these decision, if you're looking at individual decisions, that's not where you're going to find it. So at the micro scale, I really don't think, if you're looking at the micro scale of what's going on, you're going to find a bunch of causes that got you there, and that's fine. What I do think is a useful sense of free will is the long-term extended showing up. And what I mean is, if you apply consistent effort, whether that be education, meditation, anger management, therapy, I don't know what it is, whatever it is, if you're applying consistent long-term efforts so that your future reactions, instinctual though they may be at the time, but you're biasing the distribution through consistent applied effort, you're biasing your likely future behaviors. So you're not free for current you, but you have some freedom of what future you is going to look like. And I realize that then you say, but even that effort is, whether you can apply that effort or not, caused by something. I get all that. 
I see it as a kind of like summing infinitesimals under a curve in calculus. Like each thing is, yes, it's infinitely small, but altogether it actually adds up to a non, you know, to a non-zero thing. So I think that's a useful version of free will, where you say the freedom you have is not what's happening right now, right? Past you, in fact, a whole series of past you have done all kinds of stuff to get you here. Don't worry about any of that. Look forward, right? You can't do anything about any of that, but you can, although you can actually, I think, tell a more adaptive story about what happened. You can flip those stories around. But what you can do is now do the nice thing for future you and do whatever it takes to get yourself so that you're doing something in the future that's more, you know, more aligned with your values and things like that. So I think it has a more useful version that way, because what's your take on this, with that kind of view of it? Do you think the crisis of meaning is an issue? The kinds of, do you know what I'm talking about? Like the basic, right, where neuroscience and physics and evolutionary theory have really sort of taken the rug out from a lot of things that we might want in our relationship with others, but also in a kind of a social level. How do you see this story fitting into that?</p><p><strong>[41:21] Nicolas Rouleau:</strong> Yeah, that's so. I think that that's maybe one of the most relevant questions here because you have to ask yourself, what are you going to do with this? And I just think that this is one of these genuinely dangerous ideas because, and we talk about this a little bit in the preprint, but you could dedicate a whole research program to this, and people have. But when you tell people that their decisions are determined, they tend to cheat more on tests. They tend to slight people around them and undercut people. 
And that's concerning because if it's true that in fact your decisions are determined and really the only barrier between social anarchy and social order or harmony, however you want to characterize it, is really just people's belief in free will, from a social benefits side of things, you really do want to preserve that for the benefit of the world and for the species. On the other hand, the scientist's job is to figure out what's true and what is a model of the world that best describes it and has best predictive validity and in many cases allows you to control things. I think there are benefits on the side of viewing people's actions as not authored by themselves. For example, you might still have a prison system. When people do things that are harming themselves or others, you still have to remove them from the situation in order to reduce that harm. So quarantine is still a viable solution to the problem of antisocial behavior. But the prison system would look very different. You would basically have people in these boxes, but these boxes would be places of compassion. They'd be places of understanding. They'd be places where essentially you'd be treating people as people who are sick or people who have learned inappropriately maladaptive behaviors. And I think that is a compassionate outcome of really taking it seriously that we are not the authors of our actions. But then again, there's a balance here to be struck. And I don't know, on the balance of all things, whether this would be good or bad for the species.</p><p><strong>[44:06] Michael Levin:</strong> Yeah. There's a couple of other things. I just mentioned them, and then we don't have to dig into it. But there's a couple of things I think are relevant to this. One is that I actually think that there's a lot of the components of what we mean or what we want to have by free will has to do with causes that are at a higher level than the parts which are sort of underneath them. 
And from that perspective, I think your first act of free will is basically embryogenesis, right? It's when you've managed to, when the collective of cells begins to acquire goals in a different space, in a large-scale morphospace that the individual cells didn't have. And so you now have this causality at a different level that actually works downwards to bend the option space for the cells. And it makes them do things that they have no idea why or what they're doing. But there's a higher level at which, okay, now there's this larger goal state that we're all working towards. And so on. So I think those kinds of things are important. And the other thing that's interesting is, I think we have a weird new model system for a strange kind of free will, and that model system, I don't know if you've kept up with our stuff, and there'll be a bunch more this spring, but our stuff on the sorting algorithms. So you haven't seen it. It's pretty wild. So basically, like, okay, Dan in his old book on free will made a very sort of powerful analysis where he said, look, we only know of two kinds of things. We know causes, where A is caused by B, and then we know quantum randomness. And neither of those things is what we mean by free will. So then, you know, then that's the end. So I think there's something else going on. And long story short, what we've been looking at are, because I'm interested in the shock value of doing this for extremely simple, minimal models. Once you have something biological, there's always some new mechanism that you haven't found yet. There's going to be some quantum something. There's never an end to it. But what we looked at were very simple computational systems where you can see all the steps. And so we took simple sorting algorithms, like bubble sort. So these are things of five or six lines of code, people have been studying them for 80 years. And we looked at them in a way that basically drops the assumption that we know what they're doing. 
Because you assume, right, the theory of algorithms is, the whole point of it is the algorithm tells you exactly what you're going to do. And okay, people have studied unpredictability and sort of complexity and things like that. But it turns out that there's something else that comes out of it, which is not just unpredictability or complexity. It's actually things that are recognizable to any behavior scientist.</p><p><strong>[46:50] Michael Levin:</strong> So it turns out they have delayed gratification. It turns out that they can do some other stuff. I've been calling it these side quests because the algorithm tells you you're going to sort, and it forces you to sort, and yeah, you sort the numbers, but you're also doing this other thing, which I'm not, I won't use your time now to go into it. But it does this other thing where there are no steps in the algorithm for this other thing. They're not there. And so it isn't a miracle in the sense that the CPU is literally only doing the things in the algorithm, right? That part works. But it turns out that our formal model of the algorithm only captures one thing. The thing we are forcing it to do is there. But there's also this other thing, which you can, and probably many, that's just the one we found. There's probably a million others that we just haven't caught yet. That is a kind of, another way to look at it, it's a kind of intrinsic motivation. So it isn't the thing we forced it to do, and it doesn't have anything to do with randomness or it doesn't have a quantum interface, it's a deterministic algorithm. But yet there's this other thing. And I think that if we were looking for something, for like a minimal version of free will, it isn't the thing we forced it to do. Of course, like the mechanism makes it, that's obviously not it. But this other thing that we never asked it to do, that is in fact an easily recognizable, it's basically like homophily, it turns out, it's just like a biological homophily. 
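For reference, the bubble sort mentioned above really is only a few lines; the surprising behaviors come from analyzing runs of something this simple. This is the standard textbook version, not the lab's instrumented or "cell-view" variant.

```python
def bubble_sort(xs):
    """Classic bubble sort: repeatedly swap adjacent out-of-order
    pairs until a full pass over the list makes no swaps."""
    xs = list(xs)              # work on a copy; leave the input untouched
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(xs) - 1):
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
                swapped = True
    return xs

print(bubble_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```

The algorithm specifies nothing but local compare-and-swap steps, which is exactly why any additional structure observed in its trajectories is not written anywhere in the code.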
I don't know where that comes from. And that might be a very minimal model of the kind of thing we're looking at. It's neither chance nor necessity. It's the thing that you're doing in between the steps of the algorithm that force you to do specific things, or the rules of physics and chemistry. It's the stuff you do despite the algorithm, right? It's not what the algorithm makes you do. It's the stuff you manage to do despite the algorithm. So I don't know yet. You know, I'm working on a whole thing on free will here, but I haven't said anything about that yet. But I suspect there's something here. And I suspect that if something so simple is able to manifest it, evolution has exploited the hell out of it, and maybe this is the kind of thing we're looking at. But of course, in biological systems, it's very complex, right? It's hard to prove anything like that. Anyway, that's kind of how I've been looking at it. But I wonder, from what you've described today, it's an interesting mix of views: free will, no, but consciousness, yes. That's kind of interesting, right? You want to talk about that for a second? How do you see that working?</p><p><strong>[49:35] Nicolas Rouleau:</strong> Well, I just want to say one quick thing about what you were just talking about, because it's fascinating. I think the insight that Darwin had about natural selection and the environment as the selector was an incredibly powerful insight that will continue to be adopted by other areas of study as we trot along in science. And I wonder, if you don't mind, if I could ask you a quick question: for these algorithms, do you think that if you could account for all of the particles in the universe and their motions and their positions, like Laplace's demon, you would still find causal chains in those algorithms? 
Or do you think there's something happening there that's not accounted for by all the parts?</p><p><strong>[50:36] Michael Levin:</strong> Right. A couple of things, and we could spend hours on this. I'll send you some of the stuff that we've been working on recently. But what I don't think this requires is any weird new physics. I don't think this is an issue of physics. I don't think this is an issue of getting around conservation of mass-energy or anything like that. What I think is happening here is something much, much weirder, actually, which has to do with the following. And just to be really quick about it, there are certain facts that are not facts of physics. These are things like the specific value of e, the base of the natural logarithm, or the specific value of Feigenbaum's constant, that kind of stuff, right? And what I'm impressed by is this feature where, wherever you start, whether it be in biology or physics, if you just keep asking why, eventually you end up in the math department, right? So sooner or later, the answer is, oh, because the distribution of primes is like this and not like that, or because the symmetry of this group is this or that, right? Eventually, that's where you end up. So you have this weird thing where, basically, the explanation for what's going on takes you out of physicalism. Like, I think physicalism is wrong. I think the physical world is simply not closed. If all you look for is billiard-ball causation, then that's all you'll see. Of course, that's all you'll see. But I think that kind of causation is long dead. And I think there's a much more interesting aspect of it, where one half of that causal influence is really weird things like mathematical properties, and actually some of those properties, I think, are not simple static things like e and so on. I think they're actually active patterns that we would recognize as kinds of minds. 
So this is a really weird, almost platonic kind of view. But what I think is going on here is this. It's kind of like if you hear two mathematicians talking, and you say, okay, can you give me an explanation of what happened? You could give an explanation of what all the air molecules were doing. And it's not wrong exactly, but the more insightful reason for why things went the way they did is not any of that, right? It's not to be found in the molecules. It's in a completely different space. And eventually you end up in these things that aren't parts of physics at all. So I think that's what's going on here. It's not that these things are breaking physics. Although, once you go down that road and you ask, okay, what do we get from that space? We can get static things like e and so on. But once you start asking that question, I suspect, and we're doing experiments on this now, what you actually get is not static. I think you might get compute out of it. At which point you are going to break, for example, the known relationship between the cost of computing, or more accurately erasing, a single bit, that kind of stuff. I think that stuff might break, actually. But that's because I think there's a really weird kind of causation going on. And that's not new physics. Already at the time of Pythagoras, if you wanted to know why certain things were happening, the answer was going to be, well, that's how the math shakes out, right?</p><p><strong>[54:11] Nicolas Rouleau:</strong> No, I think that's totally fair. And you have similar problems in consciousness. What comes to mind is the Zen master coming over to the microphone when asked a very profound question and just tapping the microphone. There are things that are ineffable. They can't be described in terms of language. They just are experiences. 
And when you translate them into reference out in the world and you point to them, unless you're actually experiencing them as raw data, any representation or symbol that's pointing to it is just not the thing. You're not pointing to the thing itself. You have to go a little bit deeper. And so investigating and describing the world scientifically is a kind of step removed from reality. So yeah, no, it's very interesting. I can't wait to read it. But you asked the question about the free will, no; consciousness, yes.</p><p><strong>[55:24] Michael Levin:</strong> I mean, are we talking an epiphenomenalist position, or how do you put them together?</p><p><strong>[55:31] Nicolas Rouleau:</strong> Yeah, I just think that, in a nutshell, free will is not something that we ought to spend a lot of time trying to explain. The question of causality is interesting. It's like a question for physics, and I think that if the physicists solve it, we can just apply that broadly. But I think that, in the sense of Bertrand Russell, like I was talking about a little bit earlier, I don't think it makes sense to go down this rabbit hole trying to look for causation and for explanations about how experience maps onto that, because all we really need to do is explain the experience. And to me, what I'm really saying is just consciousness, yes. And that's what it is. And free will is just an experience within consciousness. It's just another experience that you can have that is basically illusory. You're witnessing something that isn't actually happening. And you form a belief about what's actually happening. There really is just a causal chain of events happening, and then your predictions are being realized, and you develop a superstition that is self-referential. You think that you are the author of your own actions, but really what you're just doing is witnessing a body in motion, an organism that is interacting with its environment in predictable ways. 
So yeah, I just don't think that free will is a coherent idea. And I think that consciousness is really what we have. Although I do think that an external environment does exist. There are some who think that consciousness is all we have and the world is really just a projection of consciousness. But that's a metaphysical thing.</p><p><strong>[57:33] Michael Levin:</strong> All right, in the four minutes that we have left, just real quick. The transmissive business, endogenously, in the absence of us applying things to brains: what do you think is being transmitted?</p><p><strong>[57:50] Nicolas Rouleau:</strong> Well, it's very likely information, but the question is: what is the carrier of that information? I think it's electromagnetic.</p><p><strong>[58:02] Michael Levin:</strong> But the content, right? So let's say it's electromagnetic: how much content specificity is there? How close are you, I guess, to theories of the brain in general as a receiver? Because if you say that, then you have to ask, well, what is it receiving and where does that come from, right? So I'm just curious where you're going with that.</p><p><strong>[58:23] Nicolas Rouleau:</strong> So I don't think that the transmissive and productive models are mutually exclusive. I think that the productive model can play nice with the transmissive model. And in a follow-up paper to that original essay, I argue that there is a middle ground here, where the brain really is doing a whole bunch of physiological things, and you really can just go in and poke the cells, so to speak, and have them do things. But I think that in addition to that productive physiology that the brain clearly does, there's also a layer, a functional layer that we haven't really investigated yet, which is transmissive in nature and really places the causal elements of what's going to happen to the cells outside of the brain. 
And so, if a cell is activated in a neural network, under the productive model the logical thing to do is just look at all the synapses and trace backwards, do this kind of retroactive analysis and see where the initial signal came from. But I think that part of what the brain is doing is that brain cells are being activated where, if you were to interrogate the presynaptic cells, you would find that there was no initiating event that led to that cell being activated. That cell is being activated by an extracerebral source, either Earth's magnetic field or something else. John Eccles thought it was a whole new particle. He thought it was a psychon, right? A particle that, when it interacted with the brain, conferred consciousness. I'm not sure what the mechanism is, but everything suggests that it's at least interacting with electromagnetic fields and that transmission and production are happening together in the brain.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Conversation with Nic Rouleau, part 1: &quot;Some thoughts on the mind as material&quot;</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Neuroscientist Nicolas Rouleau joins Michael Levin for a wide-ranging discussion on the mind as a material process, exploring free will, agency, cybernetics, brain death, and how consciousness and information might be transmitted or realized in different systems.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/3talIGE_v9Y" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/b83a84be/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour talk and discussion, comprising part 1 of a conversation with a really interesting young neuroscientist, as well as friend, collaborator, and our Center member, Nicolas Rouleau ( Nic goes over unconventional aspects of neuroscience touching on free will, cybernetics, consciousness, and a lot more. We start a discussion which is continued in part 2. For more information: Nic's website: X account: @DrNRouleau Recent papers to check out: Sellar, E.P., Rouleau, N. (In Review). A cybernetic framework for synthetic biological intelligence in the era of neural tissue engineering. Preprint doi: 10.31234/osf.io/md2wf_v1. Kansala, C., Cicek, E., Nkansah-Okoree, V., Golding, A., Murugan, N.J., Rouleau, N. (In Review). Superstitious conditioning forms the experience of free will under causal determinism. Preprint doi: 10.31234/osf.io/fk3yt_v2. Roskies, A. &amp; Rouleau, N. (Forthcoming, In Press). Research on brain organoids should prioritize questions of agency, not consciousness. AJOB Neuroscience. Rouleau, N. &amp; Levin, M. (In Press). Brains and where else? Mapping theories of consciousness to unconventional embodiments. Philosophical Transactions: A. Preprint doi:10.1098/rsta.2025.0082. Rouleau, N., Levin, M. (2024), Discussions of machine versus living intelligence need more clarity, Nature Machine Intelligence, doi:10.31219/osf.io/gz3km Rouleau, N., and Levin, M. 
(2023), The Multiple Realizability of Sentience in Living Systems and Beyond, eNeuro, 10(11), doi:10.1523/eneuro.0375-23.2023 Rouleau, N., Cairns, D. M., Rusk, W., Levin, M., and Kaplan, D. (2021), Learning and synaptic plasticity in 3D bioengineered neural tissues Neuroscience Letters, 750: 135799</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Free will, minds, transmission</p><p>(26:52) Material brains after death</p><p>(31:18) Defining free will experience</p><p>(38:00) Long-term agency and algorithms</p><p>(49:35) Causality, math, and consciousness</p><p>(57:33) Transmissive consciousness and information</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording 
or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Nicolas Rouleau:</strong> Yeah, thanks for inviting me, Mike. My name is Nic Rouleau. I am an assistant professor at Wilfrid Laurier University and an affiliate scientist at the Allen Discovery Center at Tufts. And today we're going to talk about some interesting topics. And I've put them under the umbrella of some thoughts on the mind as material. We'll be talking about free will, cybernetics, and this idea of transmissive consciousness. And I'll try to run through these slides pretty quickly because I'm excited to get to the discussion. So these are the three ideas that I sort of want to touch on today. How do we explain the experience of free will? What is a mind and how can we build it? And is the brain a transmissive organ? So the brain is a complex object. It's probably the most complex object we know of in the universe, but brains are not impossible to understand. They're not uniquely composed, and they don't require any magical or non-physical mechanisms or new physics. I think that everything we've seen so far in terms of functions of the brain can be explained basically by the physics that we already have and mechanisms that we already know of. And then, of course, we can build on top of that, but I don't think it needs any special properties beyond what we're already investigating in biology and engineering. So I'll start with the first question, which is, how do we explain the experience of free will? And I'll start by talking a little bit about the story that we've been sold about free will, which is that the brain makes decisions. So there are these things called intentions and desires that are said to initiate a sort of causal chain that leads to actions. We're often told that we plan and we organize events. 
And then you can look at brain imaging and see that there are certain areas of the brain that light up, demonstrate activity, either through fMRI or EEG, and those correlate with actions, but they seem to precede the actions. And then, of course, there are these things called decisions, which correspond roughly to the idea that you can actually select options in the world in terms of what you're going to do. And basically the story is that you're the conscious author of your actions. You get to make decisions, and those decisions have impacts in the world, and you really are making it happen in a conscious and intentional way. And the evidence for this is not great. So if you look at the neuroscientific literature over the past 20, 30 years, you'll find a whole bunch of evidence in the opposite direction, the totality of which basically suggests that we are more or less witnesses to actions that are happening. We seem to be conscious of decisions after they're made, which to me means that the decision isn't being made by the conscious agent. But of course, there are different ways to interpret all these results. I included one study here on the far right, which I think is a bit of a nail in the coffin for the idea of free will, just from the neuroscientific perspective. And I could talk a little bit about that, which is basically that if you stimulate the brain with transcranial magnetic stimulation, if you stimulate the right side of the brain, you can get people to make left-sided decisions, like pressing a left button more often than the right one. And if you stimulate the opposite side, you can get the opposite reaction. Basically, you can determine people's decisions. But when you ask them why they made those decisions, they'll tell you that they wanted to make the decisions, which is really interesting. 
It's a preservation of the experience of free will, even though experimentally, you know that you're determining the outcome of the task.</p><p><strong>[04:28] Nicolas Rouleau:</strong> So I think this is all really interesting. And there are different ways to interpret these studies, of course, and many will disagree with my conclusions. But I basically think that this is interesting but irrelevant. And that's because I think that the question is not posed correctly and has been pursued incorrectly. The only evidence that we have for free will is the subjective experience of free will. And Bertrand Russell gave us this great analogy of the teapot, which he applied to different kinds of arguments, but here I think it applies to free will. And basically the idea is that, if I make the claim that there's a teapot that's orbiting the sun between Earth and Mars, it would be very difficult for you to actually measure that or disconfirm it. But actually, the burden of proof is on me to demonstrate that the teapot exists, not on you to demonstrate that the teapot doesn't exist. And what we've seen in the neuroscientific literature is that the burden of proof has been shifted to basically those who don't think free will exists. And there's a constant sort of struggle to develop experiments that continuously demonstrate that we're not the author of our actions. So assuming causal determinism is true and there really are no uncaused causes, I think that all we need to explain really is the experience of free will. And we can even make the assumption at the outset, whether it's true or not, we can make the assumption that we live in a causally deterministic universe, the brain is no different, thoughts and behaviors are basically the products of a chain of causes. And so really all we have left to explain is why people have this incredible experience of free will. And it is a very common experience. So it sort of demands an explanation. 
And the answer that we put forward in a recent preprint is the idea that basically free will is explained by superstitious conditioning. If you look at the original studies by B.F. Skinner in the mid-20th century, he did some incredible experiments where he demonstrated that you could basically get pigeons to have superstitious beliefs. And this is a form of learning called non-contingent reinforcement. So if you have a pigeon in a Skinner box and you make it so that when they peck a lever, they get some food, they'll peck the lever and they'll get the food. But if you make it so that they get food regardless of what they do, every 10 or 30 seconds, what ends up happening is that the pigeon gets reinforced for the behavior it was displaying right before it got the food. So whatever it was doing right before it got the food, that behavior is now reinforced, and it will continuously display that behavior. And you can see naturalistic examples of this throughout history. So for example, rainmaking behavior, people doing rain dances and things like this to bring about a change in weather, to address a famine. So that is totally understandable in the context of superstitious conditioning. Basically, humans are stressed by either a famine or a drought. They engage in different kinds of behaviors to try to remove that stress or avoid that stress. And finally, as time goes on and they become more desperate, they do things that are more and more strange and incoherent, which is what most animals do. Animals will behave randomly until they get the response they desire. And eventually it will rain, and whatever behavior was happening right before it rained, whether it's an animal sacrifice or a specific kind of dance, gets reinforced. More of that over time. 
So basically, that's what we think free will is, and I'll get to the model of how that works. But this basically is saying that free will is a learned phenomenon.</p><p><strong>[08:57] Nicolas Rouleau:</strong> And what we find is that learning, superstition, and delusions, which are all related to one another, are actually related specifically through dopaminergic pathways in the brain, the mesolimbic pathways in particular. You find, for example, that among people with Parkinson's disease, superstitious beliefs are very low. I don't know why I've got the arrows backwards here. So in Parkinson's disease, where dopaminergic activity is very low, superstitious beliefs are also low. In schizophrenia, superstitious beliefs are very high, and that's a disorder characterized by excess dopamine activity. If you give people with schizophrenia a dopamine blocker, they end up having fewer superstitious and delusional beliefs. So basically, we think that it's a dopaminergic phenomenon, which is actually in line with the neuroscience literature around decision-making, which has focused on dopamine for the past 20 years. And the sensible question you might ask is, well, why is this? Why do we have this experience of desiring things and then planning and organizing? Because people really do report these kinds of experiences. And basically the answer we've come up with is that we think that you're predicting it. And because you have access to the content of your brain, whether conscious or unconscious, you are constantly forming predictions about what will happen next. And this is consistent with an active inference model of how the brain works. And basically, because you have access to what's happening next, when your predictions are realized, that actually reinforces whatever it is that you were predicting. 
And if you look at the model that we generated here, the idea concerns whatever activity is happening in the premotor cortex, or any area of the brain that signals to the motor cortex. The motor cortex, by the way, signals directly out to the spinal cord and into the muscles, so if you just stimulate the motor cortex, people move their bodies. Anything that's happening before the motor cortex is called premotor. We think that that's basically a predictive substrate, and it is anticipating what will happen next. And when that action is actually realized, the anticipation is reinforced. And because of the temporal contiguity, because prediction is always coming before realization, you have this automatic reinforcement of the prediction. And because it's embodied and because you witness it as your body in motion, there's a self-attribution. So you're attributing the causality to yourself. And if it's a learned phenomenon, it should extinguish when you have examples where a prediction is made but then not realized. And we do actually have examples of this. So you can condition individuals to not feel free. When you do this in animals, it's called learned helplessness. So if you have a dog in a Skinner box with two compartments, each with a shocked floor, it can choose between these two compartments, but whatever it does, it will always get punished. The dog will eventually stop avoiding the punishment and will simply sit still. And in different versions of this classic experiment where, no matter what you do, you are constantly getting punished, you get basically the opposite of what we're suggesting: a complete lack of the experience of free will. You don't think you're the author of your actions anymore. And there are clinical versions of this, like avolition, that you see in clinical populations. 
So why isn't it usually extinguished? It's because the way that the neural circuitry works in the brain is that premotor activity always precedes motor activity.</p><p><strong>[13:25] Nicolas Rouleau:</strong> You always have this prefrontal cortex activity that's happening before the motor cortices are activated. And so whatever predictive state is occurring is always happening beforehand. And that temporal contiguity, one before the other, means that you will always reinforce the prediction. So I'm happy to talk about this a little bit more, but I'll move on to the next idea here. So the next idea is what is a mind and can we build it? And this line of my research has very much to do with this idea of minds being much less complex than we give them credit for. They look incredibly complex, but basically what we have in the form of animal brains and minds is something that has been built up over billions of years from tissues and cells and the kinds of cognitive systems that can exist at smaller scales and at different time spans. But the basic building blocks of a cognitive system can be described using things like Braitenberg vehicles and basic cybernetic loops. And I think that cybernetics actually provides a great way to approach these problems. And I love this Braitenberg quote, which is the idea of uphill analysis and downhill invention. So if you're trying to understand the brain, you can pick apart all the different pathways and test everything until you're blue in the face. And you will eventually get answers, and these answers will tell you about the cognitive circuitry of the system. Alternatively, another way to understand the brain and another way to understand cognitive systems is to actually build them and then try to understand the thing that you built and map that onto the kinds of behavioral phenotypes that you see in nature. 
So we can engineer these miniature brains in a dish using pluripotent stem cells and primary neurons, and we can put them together in different kinds of combinations, controlling how many layers there are, what kinds of spatial characteristics they have, whether it's a co-culture, monoculture. And this is some of the work that Mike and I did, where we found that if you create these miniaturized brains in a dish, you can get them to learn, you can get them to display these non-associative learning responses that also have spontaneous recovery. So these very basic responses that you can see. So the question is, are they capable of more complex cognitive phenomena? And we now use a single-cell-resolution microelectrode array. And what you're looking at here are action potentials within a network being displayed at very high resolution, resolution of about 10 microns. And it turns out that if you disembody neurons, if you remove them from a body and put them in a dish, as we've been doing in biology for over a century, they behave very differently in terms of their physiology than they would if they were actually hooked up to a body. Disembodied neurons display these stereotyped paroxysmal electrical discharges, this kind of burst-firing phenotype. And that's viewed in the context of electrophysiology as like a good sign. It's like, well, the cells are firing, great. But if that was happening in a body, we would call it a seizure. And in fact, it displays all the basic characteristics of a seizure. So these are basically cells in a dish that are aberrantly firing. They have this kind of seizure phenotype. And it turns out that if you just give them feedback, either feedback about their own activity, or you inject small amounts of current into the network, just stochastic inputs, that totally normalizes. So as long as the neural network is getting some kind of input from an environment, let's say, something that isn't itself, it tends to normalize. 
It tends to become much more like the kinds of neural activity you would see in a body, which I think is just fascinating and actually coincides with all these interesting things that you can do with neurons when you give them feedback. They seem to be able to learn autonomously. They seem to be able to problem solve, make decisions, as far as that word means anything after the previous topic. But yeah, an embodied neuron is a very different thing than a disembodied neuron. And that should be pretty unsurprising because, of course, brains co-evolved with bodies. A brain is just part of an organism.</p><p><strong>[17:54] Nicolas Rouleau:</strong> And when these systems are functioning together and navigating the world, they behave incredibly differently than when you separate them and have them interact with the same environment. So using closed-loop feedback to try to investigate questions of consciousness and intelligence and attention and all these other cognitive capacities, I think, is a really important frontier. And it's something that we as a lab are interested in looking at. So we're creating these modular brains in a dish, and we're doing some interesting things. One of the things we're doing right now is trying to actually bring three-dimensional cell culture and two-dimensional cell culture together into a kind of layered system where different parts of the network can function as reservoirs and as readouts. And if you do this, in the context of embodied cognition, the idea is that you're actually giving the system different kinds of cognitive resources to draw on to solve problems and perform computations. And what we're really interested in is whether giving a two-dimensional monolayer access to a three-dimensional neural network, for example, confers some kind of enhanced cognitive properties. Do they learn faster? Does it take fewer trials to learn the same task or solve the same puzzle? So that's some of the things we're doing. 
And I'll move on to the last topic because I would like to just get into the discussion. And this is the topic of the brain as a potentially transmissive organ. And this idea sort of starts with what I was doing in grad school. So in grad school, I was working with Michael Persinger on the idea of brain-electromagnetic interactions. And I was reading about William James's research and his thoughts on consciousness from the turn of the century, late 1800s. And he gave this great speech on human immortality, where he asked the question: what would be necessary for consciousness to survive bodily death? And basically the answer that he came up with, which I think is the right way to approach this scientifically, is this. If the brain is a productive organ, if consciousness is just a property of what the brain is doing as such, neurons firing signals to one another, be they electrical, chemical, or a combination, if that is really the sufficient property that either realizes consciousness or from which consciousness emerges, then basically consciousness can't survive death, because when the brain dies and decays, consciousness would dissolve with the matter that gave rise to it. But he proposed that there are other kinds of functions, other functional categories, such that if it was discovered that the brain fell within one of those categories, consciousness could survive bodily death. And he proposed transmissive function as the main function that would allow for the survival of consciousness. And what is transmissive function? Well, it looks a lot like these two examples. So there's the example of the pipe organ. There's air in the room, and the air isn't music. It's not sound. But when that air is sifted through various compartments and compressed and changed, it can become these oscillating pressure waves, which are experienced as music. 
And in the same way, when you shine light through a prism and it's split into its constituent colors, those colors are not a result of the prism producing anything. The prism is simply filtering the light. And you wouldn't ask the question, well, where was the color before the light was filtered by the prism? These are sort of nonsensical questions in the context of transmissive function. And so these are some of the examples that William James gave to describe what transmissive function is. And I think it was in 2021 or 2022, I wrote an essay that was a response to a challenge by the Bigelow Institute for Consciousness Studies that posed the same question: what is the best scientific explanation, the best scientific evidence, for the existence of consciousness after bodily death? And those who produced essays in this competition often drew on examples like, well, we can look at things like mediumship and we can look at post-mortem apparitions, like the existence of ghosts and things like this. I thought that William James basically got the question right. And so it was just a matter of identifying what kinds of scientific evidence existed that could support the idea of transmissive consciousness.</p><p><strong>[22:22] Nicolas Rouleau:</strong> And what I identified was that basically the brain's interactions with electromagnetic fields constitute a genuine transmissive function of the brain. These are transductions and transmissions that are occurring without an intermediate sensory modality. They're happening directly at the level of the brain. They're changing brain function and they're changing experience. So I'll get into a couple of examples of this. So there's evidence, for example, that the brain coheres in real time with oscillations of Earth's magnetic field. Everybody knows that Earth has a magnetic field. That magnetic field is generated by, basically, molten iron moving around in our core. 
You can see Earth's magnetic field at work when you have coronal mass ejections, and basically the particles from the sun are dancing at the poles of Earth's magnetic field as the aurora borealis. But we know that those same perturbations of Earth's magnetic field distort all sorts of things, like flight patterns in birds and different kinds of swarming behaviors in insects and so on. Well, it turns out that it also influences human activity. I'll get to that a little bit later. But one of the things that happens is that Earth's magnetic field is not static. It actually oscillates. And the reason it oscillates is that around the Earth right now, and continuously, there are lightning discharges between the ionosphere and the surface. And these lightning discharges actually oscillate Earth's magnetic field with a modal frequency of about 7.83 Hertz. This is known as Schumann resonance. And this oscillation actually shows up on EEG and coheres in real time with EEG. So if you bring people into the lab and look at their brains in terms of EEG rhythms, you'll find that their brain activity actually coheres in real time with Schumann resonance, measured as Earth's oscillating magnetic field. When Earth's magnetic field is oscillating during moments of perturbation, like when there are coronal mass ejections, you see more seizures in psychiatric inpatients. So humans are affected just like other animals. We're really no different. And people also have sort of esoteric experiences around times of geomagnetic fluctuation. If you take a person and put them in a Faraday cage, where you've blocked out the electromagnetic environment, you find that their brain rhythms change as well, specifically within the alpha band, which is around 10 cycles per second, a brain rhythm associated with inhibition of various brain areas. 
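As a back-of-the-envelope check on the 7.83 Hz figure mentioned here (an editorial aside, not part of the talk): for an idealized lossless Earth-ionosphere cavity, the Schumann resonance modes follow f_n = (c / 2πa)·√(n(n+1)), where a is Earth's radius. The idealized formula overshoots; the real cavity is lossy, which pulls the observed fundamental down to roughly 7.83 Hz.

```python
import math

def schumann_mode(n: int,
                  c: float = 2.998e8,       # speed of light, m/s
                  a: float = 6.371e6) -> float:  # Earth's radius, m
    """Idealized (lossless-cavity) Schumann resonance frequency, in Hz."""
    return (c / (2 * math.pi * a)) * math.sqrt(n * (n + 1))

# The lossless estimate for the fundamental mode comes out near 10.6 Hz;
# the observed value is about 7.83 Hz because the real Earth-ionosphere
# cavity is a lossy resonator.
print(round(schumann_mode(1), 1))  # → 10.6
```

The higher modes (observed near 14, 20, 26 Hz...) show the same pattern: the idealized formula gives the right ordering and spacing but systematically high values.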
And in grad school, I basically demonstrated that if you expose postmortem brain tissue, these are preserved brain specimens, if you expose them to electrical current or electromagnetic fields and you measure voltage fluctuations in the tissues, the different areas of the cortex filter those electromagnetic signals differently. So if you inject current into the parahippocampal gyrus, for example, it will amplify theta rhythms more than it will amplify beta rhythms. So there's a certain kind of frequency selectivity. And basically, I think this is a material property of the brain. I don't think there's anything magical happening here. I think that the brain has material properties in addition to its biological, living properties. And the parahippocampal gyrus is one of these areas that has this really interesting sort of geometry, which could make it a great candidate for studying brain material and the brain material's interactions with electromagnetic fields. It could explain, for example, why the temporal lobes in particular are so sensitive to electromagnetic fields, and why research involving brain exposures to electromagnetic fields is very often dominated by experiences that are consistent with activations of the temporal lobes, like hearing sounds, seeing colors, having these kinds of visceral experiences, not on the outside of the body but on the inside of the body, and things like that. Okay, I'm going to end it there. This is my lab. Shout out to my institutional affiliations and funders, and I'll wrap it up there.</p><p><strong>[26:52] Michael Levin:</strong> Super. Thanks very much. Lots to chew on. Let's see. I have a bunch of questions. Let's start with just the very last thing you said, and then we'll circle back around. So isn't it amazing that these brains you were looking at still respond even after formaldehyde fixation, formalin, something like that? I mean, that's a lot to ask, right? 
Even if the basic finding is true that, okay, after death, they can still respond, like, let's say all that is true, you might still think that, my God, fixing with all the aldehydes and everything, like, good luck. Any thoughts on why it's still able to do that?</p><p><strong>[27:38] Nicolas Rouleau:</strong> Yeah, I think you're totally right. So when you fix the brain with an aldehyde-based fixative, you get all these cross-links between all the proteins. And that's what's great about it, because when you put it under a microscope, even 20, 30, 40 years later, you have this incredible microarchitecture that's preserved. So there's no chance that whatever it is that we're observing has anything to do with biology, in the sense that there's no physiological response happening here. I think that it's a material property, and it's a material property that is at least not fully attenuated by aldehyde fixation. It's basically happening at the level of the tissue, like conductors and insulators and capacitors.</p><p><strong>[28:26] Michael Levin:</strong> I mean, that's just wild, right? I can absolutely buy that there are these important material properties, but you would think that the fixation would change all those, right? You know, that's one of these things that strikes me as amazing, that it actually works. The other one that always gets me is general anesthesia, right? If somebody said to me that, okay, we're going to come and decouple all your electrical synapses, but don't worry, afterwards you'll probably settle into pretty much the same bioelectric state you were in before, I would say not a chance, right? No way. It's just incredible that it works.</p><p><strong>[29:00] Nicolas Rouleau:</strong> We had the same idea. What is it about the fixative? Could the fixative be interfering with this? So we had a follow-up experiment. We actually did publish it. I think it's in the journal Cognitive Neurodynamics, in 2017. 
And what we did was, there were a whole bunch of mouse brains that had been sitting in fixative for anywhere between one year and 20 years. And basically we just measured all of these and then looked at how the amount of time spent in the fixative, and the pH of the fixative, changed voltage fluctuations in the tissues themselves. And there were time-dependent differences. So I have no doubt that the fixative is changing something, but clearly not so much as to destroy any kind of differences between cortical areas, for example.</p><p><strong>[29:58] Michael Levin:</strong> That's amazing. If that's the case, then presumably we can expect evolution to have taken advantage of this: a wide range of weird things can happen to the material, and it keeps certain useful properties.</p><p><strong>[30:14] Nicolas Rouleau:</strong> Have you ever looked at the shapes of reptile and bird brains?</p><p><strong>[30:22] Michael Levin:</strong> I mean, I've seen them, but I haven't studied them closely. What did you see?</p><p><strong>[30:26] Nicolas Rouleau:</strong> Well, it's interesting. They have these really interesting sandwich-like properties where you have three layers, and the layers are alternating conductors and insulators. And we actually have that as a vestigial part of our brain anatomy in certain parts of our brain, like the hippocampus. The hippocampus is basically a three-layered cortex. And we eventually developed more layers as we went along. But yeah, bird and reptilian brains have this really interesting sandwich structure, which I always thought was really interesting from a materials perspective.</p><p><strong>[31:18] Michael Levin:</strong> Let me circle back around to the beginning. Let's start with the free will stuff. 
Do you want to give a definition of free will? And in particular, do you think the word is completely useless and should be gone, or is there some useful sense of it that does some useful work?</p><p><strong>[31:39] Nicolas Rouleau:</strong> I think that, if I'm giving my definition of free will, I would say that it is a subjective experience of control. And if I go a little bit further, I'll say that it's a misattribution of causality. Because I don't think that you actually do have control. I think you're witnessing your predictions being realized. What most people mean by free will is the ability to author their own actions. And I just don't think that that's a thing that really exists. I'm super curious what you think about this: what is it about brain activity preceding actions that we think has anything to do with something like intention or planning?</p><p><strong>[32:35] Michael Levin:</strong> Yeah, I mean, without getting into my whole story of free will, I do have a theory on why it's a pervasive way of thinking. So imagine the earliest, simplest life forms. Because you're living in a highly energy- and time-constrained environment, meaning that everything's expensive, time is really expensive, food is expensive, what you can't afford to do is be a Laplacian demon. In other words, you can't say, I'm just going to pay attention to all the microstates of every ion and everything else around me, and that'll be my story. You'll be dead and eaten in no time. So what you have to do, what you're forced to do, is coarse grain. You're forced to take ensembles of things and say, I'm just going to lump all this stuff together. And as you do that, one really powerful way of doing it is to have models of agents doing things. 
So in other words, it's not just a bunch of random stuff that happens; the way I'm going to coarse grain it is: here's this thing, I'm going to call it a predator, I'm going to call it food, I'm going to call it a mate, whatever it is. And I'm going to tell a story about this thing doing something. It's a very nice way of compressing what's going on, and it gives me the ability to make fast decisions under limited information. Well, if you do that long enough, eventually you turn that on yourself and you say, wait a minute, I'm an agent that does things. And so, without going into whether I think it's actually real or not, I would simply say that here's a theorem that one might put forward: any being that arises under resource constraint is going to believe in free will. I'm not saying it has it, I'm not saying it doesn't have it. I'm saying that I think that kind of origin really induces and facilitates these kinds of models. And then it makes sense to apply that to yourself, right? So I think that's okay. We could tell that story for the origin of it. But I wonder, so let's run with your definition of this misattribution of authorship. What are the implications? What does that mean for how we should, or can, conduct ourselves? So, somebody doesn't know that theory, then they hear you, and they find you very convincing. What happens after that? And I remember being at the Danish pastry house with Dan Dennett, and the waitress came over and she's like, oh, you know, Professor Dennett, what will you have? And he's looking at the menu, he's like, well, let me see. I'm like, Dan, what are you doing? Are you going to choose a soup? That can't be right. But neither can we sort of sit there and just wait to see what the universe has in store. What do you think about that? 
Like, if you find that convincing, how do you navigate life, or do you?</p><p><strong>[35:38] Nicolas Rouleau:</strong> Yeah, I think you can have intellectual positions about how the world really works, and then you can have the way that you privately conduct yourself in your day-to-day life. So when I'm interacting with others and when I'm navigating the world on a day-to-day basis, I do generally just move around with the assumption that everybody has free will, even though, and this is consistent with the model that you were describing, which, by the way, I think is also consistent with the animism that you see in young children, like thinking that trees are alive, that trees are agents, just like the humans that they interact with. And what often happens in my life is, I'll perceive someone to have slighted me, or I'll perceive someone to have done something, quote unquote, wrong. And I think my first knee-jerk reaction is to say, well, what a terrible thing to do, this person really is awful. But then I realize very quickly, because I hold the intellectual position, that the decisions they're making are the only decisions they could have made given all the preconditions. And even aside from that, the whole idea that this person is the author of all their actions and thus can be completely held responsible for whatever it is they're doing is just not right. And so when I reflect on how other people behave and how I behave, I can often forgive people much more readily, because I kind of view people's behavior as no different than the weather, or no different than how you would expect a crocodile to behave if you stuck your hand in its cage. What happens is the thing that was most likely to happen right then and there because of all the preconditions. 
And blame seems to be, I don't know, just a kind of vestige of something that's just not true. How do you navigate the world with this?</p><p><strong>[38:00] Michael Levin:</strong> So I have a couple of thoughts on that. One is, I do think that it's a useful heuristic too. So what I try to do is, in the example that you gave, when somebody does something bad, I usually go to the Sapolsky version and say, well, there's a long history that led that person to this. What are you going to do? On the other hand, when somebody does something amazing, some act of courage or generosity or whatever, I usually flip the other way and I say, fantastic, you get full credit; that was your magical inner nature doing that. I think that's fine, and that's helpful. But also, my story of free will is not as deflationary, I think, and if we have time, we can talk about what that is. I do think, though, that on a short time scale, if you're looking at individual decisions, that's not where you're going to find it. If you're looking at the micro scale of what's going on, you're going to find a bunch of causes that got you there, and that's fine. What I do think is a useful sense of free will is the long-term, extended showing up. And what I mean is, if you apply consistent effort, whether that be education, meditation, anger management, therapy, whatever it is, so that your future reactions, instinctual though they may be at the time, are shifted, then you're biasing the distribution through consistent applied effort; you're biasing your likely future behaviors. So you're not free for current you, but you have some freedom over what future you is going to look like. And I realize that then you say, but even that effort, whether you can apply that effort or not, is caused by something. I get all that. 
I see it as kind of like summing infinitesimals under a curve in calculus. Each thing is, yes, infinitely small, but altogether it actually adds up to a non-zero thing. So I think that's a useful version of free will, where you say the freedom you have is not in what's happening right now. Past you, in fact a whole series of past yous, have done all kinds of stuff to get you here. Don't worry about any of that. Look forward, right? You can't do anything about any of that, although you can actually, I think, tell a more adaptive story about what happened; you can flip those stories around. But what you can do is now do the nice thing for future you and do whatever it takes so that you're doing something in the future that's more aligned with your values and things like that. So I think it has a more useful version that way. But what's your take on this, with that kind of view of it? Do you think the crisis of meaning is an issue? Do you know what I'm talking about? Where neuroscience and physics and evolutionary theory have really sort of pulled the rug out from under a lot of things that we might want in our relationship with others, but also at a kind of social level. How do you see this story fitting into that?</p><p><strong>[41:21] Nicolas Rouleau:</strong> Yeah. I think that's maybe one of the most relevant questions here, because you have to ask yourself, what are you going to do with this? And I just think that this is one of these genuinely dangerous ideas because, and we talk about this a little bit in the preprint, but you could dedicate a whole research program to this, and people have. When you tell people that their decisions are determined, they tend to cheat more on tests. They tend to slight people around them and undercut people. 
And that's concerning, because if it's true that your decisions are in fact determined, and really the only barrier between social anarchy and social order or harmony, however you want to characterize it, is just people's belief in free will, then from a social-benefits side of things you really do want to preserve that belief, for the benefit of the world and for the species. On the other hand, the scientist's job is to figure out what's true: what model of the world best describes it, has the best predictive validity, and in many cases allows you to control things. I think there are benefits on the side of viewing people's actions as not authored by themselves. For example, you might still have a prison system. When people do things that are harming themselves or others, you still have to remove them from the situation in order to reduce that harm. So quarantine is still a viable solution to the problem of antisocial behavior. But the prison system would look very different. You would basically have people in these boxes, but these boxes would be places of compassion. They'd be places of understanding. They'd be places where essentially you'd be treating people as people who are sick, or people who have learned inappropriate, maladaptive behaviors. And I think that is a compassionate outcome of really taking it seriously that we are not the authors of our actions. But then again, there's a balance to be struck here. And I don't know, on the balance of all things, whether this would be good or bad for the species.</p><p><strong>[44:06] Michael Levin:</strong> Yeah. There's a couple of other things. I'll just mention them, and then we don't have to dig into them. One is that I actually think a lot of the components of what we mean, or what we want to have, by free will has to do with causes that are at a higher level than the parts that are sort of underneath them. 
And from that perspective, I think your first act of free will is basically embryogenesis, right? It's when the collective of cells begins to acquire goals in a different space, in a large-scale morphospace, that the individual cells didn't have. And so you now have this causality at a different level that actually works downwards to bend the option space for the cells. And it makes them do things that they have no idea why or what they're doing. But there's a higher level at which, okay, now there's this larger goal state that we're all working towards. And so on. So I think those kinds of things are important. And the other thing that's interesting is, I think we have a weird new model system for a strange kind of free will, and that model system is our stuff on the sorting algorithms; I don't know if you've kept up with it, and there will be a bunch more this spring. If you haven't seen it, it's pretty wild. So basically, Dan in his old book on free will made a very sort of powerful analysis where he said, look, we only know of two kinds of things. We know causes, where A is caused by B, and then we know quantum randomness. And neither of those things is what we mean by free will. So then that's the end. But I think there's something else going on. And long story short, we've been looking at extremely simple, minimal models, because I'm interested in the shock value of doing this there. Once you have something biological, there's always some new mechanism that you haven't found yet. There's going to be some quantum something. There's never an end to it. But what we looked at were very simple computational systems where you can see all the steps. And so we took simple sorting algorithms, like bubble sort. These are things of five or six lines of code that people have been studying for 80 years. And we looked at them in a way that basically drops the assumption that we know what they're doing. 
Because the whole point of the theory of algorithms is that the algorithm tells you exactly what it's going to do. And okay, people have studied unpredictability and sort of complexity and things like that. But it turns out that there's something else that comes out of it, which is not just unpredictability or complexity. It's actually things that are recognizable to any behavior scientist.</p><p><strong>[46:50] Michael Levin:</strong> So it turns out they have delayed gratification. It turns out that they can do some other stuff. I've been calling these side quests, because the algorithm tells you you're going to sort, and it forces you to sort, and yeah, you sort the numbers, but you're also doing this other thing, and I won't use your time now to go into it. But it does this other thing where there are no steps in the algorithm for this other thing. They're not there. And so it isn't a miracle, in the sense that the CPU is literally only doing the things in the algorithm, right? That part works. But it turns out that our formal model of the algorithm only captures one thing. The thing we are forcing it to do is there. But there's also this other thing, and probably many; that's just the one we found. There are probably a million others that we just haven't caught yet. Another way to look at it: it's a kind of intrinsic motivation. So it isn't the thing we forced it to do, and it doesn't have anything to do with randomness, it doesn't have a quantum interface, it's a deterministic algorithm. But yet there's this other thing. And I think that if we were looking for something, for like a minimal version of free will, it isn't the thing we forced it to do. Of course, the mechanism makes it do that, so that's obviously not it. But this other thing that we never asked it to do, that is in fact easily recognizable; it's basically like homophily, it turns out, just like a biological homophily. 
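For readers who haven't seen it, bubble sort really is an algorithm of about five or six lines. Here is a generic textbook sketch in Python (not the instrumented version Levin's group actually studies, which tracks the intermediate states of the list rather than just the final output):

```python
def bubble_sort(items):
    """Repeatedly sweep the list, swapping adjacent out-of-order pairs,
    until a full pass makes no swaps. Fully deterministic: every step
    is dictated by the comparison rule."""
    items = list(items)  # work on a copy
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items

print(bubble_sort([5, 1, 4, 2, 3]))  # → [1, 2, 3, 4, 5]
```

The point being made in the conversation is that even in a procedure this transparent, the trajectory of intermediate states (which elements cluster together when, in what order disorder gets resolved) exhibits regularities that the five lines of code never explicitly specify.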
And I don't know where that comes from. But that might be a very minimal model of the kind of thing we're looking at. It's neither the chance nor the necessity. It's the thing that you're doing in between the steps of the algorithm, or of the rules of physics and chemistry, that force you to do specific things. It's the stuff you do despite the algorithm, right? It's not what the algorithm makes you do. It's the stuff you manage to do despite the algorithm. So I don't know yet. I'm working on a whole thing on free will here, but I haven't said anything about that yet. But I suspect there's something here. And I suspect that if something so simple is able to manifest it, I'm sure evolution has exploited the hell out of it, and maybe this is the kind of thing we're looking at. But of course, in biological systems, it's very complex, right? It's hard to prove anything like that. Anyway, so that's kind of how I've been looking at it. But I wonder, I think from what you've described today, it's an interesting mix of views, in that free will, no, but consciousness, yes. That's kind of interesting, right? Do you want to talk about that for a second? How do you see that working?</p><p><strong>[49:35] Nicolas Rouleau:</strong> Well, I just want to say one quick thing about what you were just talking about, because it's fascinating. I think the insight that Darwin had about natural selection and the environment as the selector was an incredibly powerful insight that will continue to be adopted by other areas of study as we trot along in science. And I wonder, if you don't mind, if I could ask you a quick question: for these algorithms, do you think that if you could account for all of the particles in the universe and their motions and their positions, like Laplace's demon, you would still find causal chains in those algorithms? 
Or do you think there's something happening there that's not accounted for by all the parts?</p><p><strong>[50:36] Michael Levin:</strong> Right. A couple of things, and we could spend hours on this. I'll send you some of the stuff that we've been working on recently. But what I don't think this requires is any weird new physics. I don't think this is an issue of physics. I don't think this is an issue of getting around conservation of mass-energy or anything like that. What I think is happening here is something much, much weirder, actually, which has to do with the following. And just to be really quick about it: there are certain facts that are not facts of physics. These are things like the specific value of e, the base of the natural logarithm, or the specific value of Feigenbaum's constant, that kind of stuff, right? And what I'm impressed by is this feature where, wherever you start, whether it be in biology or physics, if you just keep asking why, eventually you end up in the math department. So sooner or later, the answer is, oh, because the distribution of primes is like this and not like that, or because the symmetry of this group is this or that. Eventually, that's where you end up. So you have this weird thing where the explanation for what's going on actually takes you out of physicalism, basically. I think physicalism is wrong. I think the physical world is simply not closed. If you want to try for billiard-ball causation, then that's all you'll see. Of course, that's all you'll see. But I think that kind of causation is long dead. And I think there's a much more interesting aspect of it, where one half of that causal influence is really weird things like mathematical properties. And actually, some of those properties, I think, are not simple static things like e and so on. I think they're actually active patterns that we would recognize as kinds of minds. 
So this is a really weird, sort of almost platonic kind of view. But what I think is going on here is this. It's kind of like if you hear two mathematicians talking and you ask, okay, can you give me an explanation of what happened? You could give an explanation of what all the air molecules were doing. And it's not wrong, exactly, but the actual, more insightful reason for why things went the way they did is not any of that, right? It's not to be found in the molecules. It's in a completely different space. And eventually you end up in these things that aren't parts of physics at all. So I think that's what's going on here. It's not that these things are breaking physics. Although, once you go down that road and you ask, okay, what do we get from that space? We can get static things like e and so on. But once you start asking that question, I suspect, and we're doing experiments on this now, what you actually get is not static. I think you might get compute out of it. At which point you are going to break, for example, the known relationship between the cost of computing a single bit, or erasing, more accurately, a single bit, that kind of stuff. I think that stuff might break, actually. But that's because I think there's a really weird kind of causation going on. And it's not new physics. Already at the time of Pythagoras, if you wanted to know why certain things were happening, the answer was going to be, well, that's how the math shakes out, right?</p><p><strong>[54:11] Nicolas Rouleau:</strong> No, I think that's totally fair. And it's like the problem of, you know, you have similar problems in consciousness, where what comes to mind is the Zen master coming over to the microphone when asked a very profound question and just tapping the microphone. There are things that are ineffable. They can't be described in terms of language. They just are experiences. 
And when you translate them into a reference out in the world and you point to them, unless you're actually experiencing them as raw data, any representation or symbol that's pointing to the thing is just not the thing itself. You have to go a little bit deeper. And so investigating and describing the world scientifically is a kind of step removed from reality. So yeah, it's very interesting. I can't wait to read it. But you asked the question: about free will, no; consciousness, yes.</p><p><strong>[55:24] Michael Levin:</strong> I mean, are we talking an epiphenomenalist position, or what's the, how do you put them together?</p><p><strong>[55:31] Nicolas Rouleau:</strong> Yeah, in a nutshell, I just think that free will is not something that we ought to spend a lot of time trying to explain. The question of causality is interesting. It's a question for physics, and I think that if the physicists solve it, we can just apply that broadly. But in the sense of Bertrand Russell, like I was talking about a little bit earlier, I don't think it makes sense to go down this rabbit hole looking for causation and for explanations about how experience maps onto that, because all we really need to do is explain the experience. And to me, what I'm really saying is just: consciousness, yes. And that's what it is. Free will is just an experience within consciousness. It's just another experience that you can have, and it is basically illusory. You're witnessing something that isn't actually happening, and you form a belief about what's actually happening. There really is just a causal chain of events happening, and as your predictions are realized, you develop a superstition that is self-referential. You think that you are the author of your own actions, but really what you're doing is witnessing a body in motion, an organism that is interacting with its environment in predictable ways. 
So yeah, I just don't think that free will is a coherent idea. And I think that consciousness is really what we have. Although I do think that an external environment does exist. There are some who think that consciousness is all we have and the world is really just a projection of consciousness. But that's a metaphysical thing.</p><p><strong>[57:33] Michael Levin:</strong> All right, in the 4 minutes that we have left, just real quick. On the transmissive business: endogenously, in the absence of us applying things to brains, what do you think is being transmitted?</p><p><strong>[57:50] Nicolas Rouleau:</strong> Well, it's very likely information. The question is: what is the carrier of that information? I think it's electromagnetic.</p><p><strong>[58:02] Michael Levin:</strong> But the content, right? Let's say it's electromagnetic: how much content specificity is there? And I guess, how close are you to theories of the brain in general as a receiver? Because if you say that, then, well, what is it receiving and where does that come from, right? So I'm just curious where you're going with that.</p><p><strong>[58:23] Nicolas Rouleau:</strong> So I don't think that the transmissive model is mutually exclusive with the productive model; I think the productive model can play nice with the transmissive model. And in a follow-up paper to that original essay, I argue that there is a middle ground here where the brain really is doing a whole bunch of physiological things, and you really can just go in and poke the cells, so to speak, and have them do things. But I think that in addition to that productive physiology that the brain clearly does, there's also a layer, a functional layer that we haven't really investigated yet, which is transmissive in nature and really places the causal elements of what's going to happen to the cells outside of the brain. 
And so, if a cell is activated in a neural network, under the productive model the logical thing to do is just look at all the synapses and trace backwards, do this kind of retroactive analysis, and see where the initial signal came from. But I think that part of what the brain is doing is this: brain cells are being activated such that if you were to interrogate the presynaptic cells, you would find that there was no initiating event that led to that cell being activated. That cell is being activated by an extracerebral source, either Earth's magnetic field or something else. John Eccles thought it was a whole new particle. He thought it was a psychon, right? A particle that, when it interacted with the brain, conferred consciousness. I'm not sure what the mechanism is, but everything suggests that it's at least interacting with electromagnetic fields and that transmission and production are happening together in the brain.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Conversation 2 with Lisa Barrett, Ben Lyons, and Karen Quigley</title>
          <link>https://thoughtforms-life.aipodcast.ing/conversation-2-with-lisa-barrett-ben-lyons-and-karen-quigley/</link>
          <description>Lisa Barrett, Karen Quigley, and Benjamin Lyons continue their discussion of relational realism, allostasis, predictive processing, and embodiment, exploring how brain, body, and world jointly shape emotion, perception, and scientific objectivity.</description>
          <pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 69c2707ac63a120001225354 ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
<content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/MGNJJe-apb0" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/5c4bcb71/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a second conversation with Lisa Barrett, Karen Quigley, and Benjamin Lyons about Relational Realism, allostasis, and questions of mind/body/behavior.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Rethinking emotion universals</p><p>(09:34) Brain evolution and allostasis</p><p>(17:25) Predictive processing and signaling</p><p>(30:05) Objectivity and first-person science</p><p>(38:45) Flexible body-world boundaries</p><p>(45:46) Relational meaning in perception</p><p>(52:06) Embodiment, morphospace and realism</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: 
<a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Lisa Barrett:</strong> What we could do is start at the beginning of how we got into the work that we're doing now, and where I'd like to end up is talking about the relationship between brain and body signaling and the contextual nature of that, and also the philosophy of science that it's led us to: this idea about what we're calling relational realism. It's really a solution to the problem of the dichotomy between traditional realism, where there's an objective world that is fixed and that you can only perceive through the veil of your own concepts, and something like idealism or any kind of anti-realism. So this is a realist view, but it's a realist view that's rooted in the idea that what is real is relational and that things don't have fixed meanings, they have relational meanings. And the things that we think of as having properties in the world are actually properties of relations, not properties of objects. I'll give a very brief overview. And Ben, you'll stop me if this is not what you think is useful.</p><p><strong>[01:36] Benjamin Lyons:</strong> I trust your judgment and Mike's ability to pick up on this stuff. When I was talking about economics, he started absorbing things very quickly, so I think we'll be good.</p><p><strong>[01:45] Lisa Barrett:</strong> Karen and I started off as colleagues. This was many years ago. 
We had very separate research programs. I fell into the question of trying to understand the nature of emotion because in psychology, in psychiatry, and in much of neuroscience there's this assumption that there are these fixed categories for emotion, fixed circuits for emotion, that emotions are essentially adaptations that are wired in, that are basically programmed into your genes. This is like the modern synthesis: DNA plus natural selection gives you these adaptations which exist in a fixed manner, and emotions are some of those. The idea is that there are these challenges to fitness which have persisted throughout millennia, and emotions evolved as solutions to these problems. And there are a set of universal categories that are shared by all humans on earth and also other animals; which ones depends on who you read. But the idea is that there's a circuit in the brain for fear, a circuit in the brain for anger, a circuit in the brain for happiness. People debate how many circuits and how many categories, at least six and maybe upwards of 20, but these hardwired things are there at birth, which means that everyone around the world will widen their eyes and gasp in fear, and that there is one cardiovascular pattern for fear, and so on and so forth. It sounds like a cartoon. The idea is that there might be some variability in what people look like and sound like when they're fearful, but that variation can be explained by or is epiphenomenal to emotion. People who study non-human animals will, for example, look at a fly that rubs its legs, or expose a rat to the scent of a predator, or do classical conditioning with an electric shock. So they'll pair a tone with a shock. They believe that what they're studying is fear. They're attempting to identify the neural circuit for fear, maybe the genes for fear. 
And the assumption is that's going to generalize across all animals of that species, but also across species, usually mammalian species, but sometimes all vertebrates; it just depends on who you read. They're usually citing Darwin as evidence, which is a whole other thing, given what Darwin actually said. When I was a graduate student, I needed to measure emotion, and I needed to measure it in what I thought was an objective way, meaning in a third-person way. I thought this was going to be convenient because there are all these expressions that are universal and physiological patterns that are universal. I systematically discovered that if you read what the introductions and the discussion sections of these papers say, it's inconsistent with what the data actually show.</p><p><strong>[05:39] Lisa Barrett:</strong> What the data actually show is contextual variation. Probably the first 20 years of my career was spent just documenting this variation in the brain, in the face, and in the body with Karen. I met Karen. I started off as a psychologist and then I needed to retrain as a psychophysiologist so that I could study peripheral physiological signals to actually test this hypothesis. I started to work with Karen and then I had to retrain as a cognitive neuroscientist. I had to keep picking up skills to try to test these different domains. And what we discovered across all of this time is that really the business problem that a brain and a body have to solve is not how do you read emotion in other people, not how do you inhibit these pre-potent emotional responses. There is meta-analytic evidence for this: in the West, when people are angry, they scowl 35% of the time. That's better than chance, but 65% of the time, people who are angry don't scowl; they express emotion on the face in some other meaningful way. Half the time when people are scowling, they're not angry. There is variability in how people experience emotion and how they express emotion. 
There's variability in the neural patterns for emotion that seems to be yoked to context. That is not random variation. There's variability that is structured within a person across situations, as well as across people, for example, across cultures. What this means is that there is no inherent meaning of a scowl, the raise of an eyebrow, or the curl of a lip. An increase in heart rate or a decrease in heart rate, even amygdala activity, this area of the temporal lobe, doesn't have inherent psychological meaning. Even the activation of individual neurons doesn't have inherent psychological meaning. They have relational meaning in the signals; for example, action potentials or the local field potentials around a set of neurons have meaning in a pattern of other signals, but they don't have an inherent meaning, like in a labeled-line sense. In fact, nothing in the brain that I can determine has a label. There are no fixed receptive fields anywhere in the brain. There are no labeled lines where a particular axon fires and it has a particular meaning every single time. The meanings are really relational. That's the punchline. Karen, do you want to say your part and then we'll catch Mike and Ben up to that point, and then we can talk about going forward.</p><p><strong>[09:34] Karen Quigley:</strong> What exactly would you like me to focus on?</p><p><strong>[09:39] Lisa Barrett:</strong> What part of the story haven't I told that is relevant?</p><p><strong>[09:49] Karen Quigley:</strong> Well, it seems you've done a pretty good job of telling the basic idea behind the story. We've spent the last decade trying to further enhance the empirical evidence for this idea and in saying more about what we mean contextually and what we mean more at the individual level.</p><p><strong>[10:12] Lisa Barrett:</strong> First, basically what we did for 20 years was just document the question and get people to accept the fact that these emotions, these kinds of fixed forms don't exist. 
There is no circuit in the brain for fear. There is no circuit in the brain for anger. There is no fixed chemical; dopamine isn't a reward chemical. These fixed meanings just aren't there. I think we spent a lot of time marshaling a lot of evidence from our own studies and also meta-analytic evidence from a lot of domains to basically try to frame what the business problem is that we have to solve here. Historically in psychology, this problem has been encountered before. This is probably the third time that people have encountered this problem with emotion, but it's a broader problem than just emotion. People have been starting with folk categories that they learn from their own experience, being socialized in a particular culture; those meanings and those categories are culturally inherited. When I say they're learned, what we mean is that they shape the patterns that people learn, so that it becomes possible for the brain to remember those meanings. We can put some biology on that, but here I'm just talking generally and colloquially. People learn certain categories, categorize their experiences into instances in particular ways, and then go searching for the physical basis of these categories in a fixed way. In cognitive neuroscience there were 30, 40 years where people were searching for specific localizations and specific sets of neurons for anger, sadness, fear, episodic memory, semantic memory, this kind of attention, that kind of attention. They were looking for fixed modules to map to these categories. What we decided to do is take a step back and say that what any animal has to do is deal with a tremendous amount of uncertainty. Animals move around. They have a particular body shape. They have a particular ecology. They have a particular set of metabolic demands. 
They're moving around in a highly uncertain, only partly predictable world.</p><p><strong>[13:48] Lisa Barrett:</strong> And they have to create meaning in such a way that they can survive and thrive. So we took a step back and said, well, instead of starting with these folk categories, why don't we start with brain evolution and metabolism, and not so much homeostasis but allostasis: this idea that what a system is doing is anticipating metabolic needs and preparing to meet those needs before they arrive. Different parts of the system might function by homeostasis, but really allostasis is what is most metabolically efficient. We started drawing from different lines of research: from electrical engineering on signal processing and what counts as energy-efficient signal processing, from brain evolution, from neuroanatomy, from various literatures, bringing them all together. We developed a set of hypotheses based on this integration of a lot of different literatures. There's quite a bit of evidence for these ideas now, but first, the traditional way of thinking; we're talking primarily now about vertebrates, which is much more simplified than what you deal with. The general idea is that sensory signals, which an animal detects with its sensory surfaces, register changes in the world. Those signals are ferried to the brain as small details that then have to be somehow compressed or integrated. So you have all these lines and edges in primary visual cortex that then have to be integrated, bound together into objects, which then have to be bound together with sounds and smells and so on until you get a representation of an object; then you retrieve from memory your understanding of what that object is, compare it, and categorize it. 
Then the object is meaningful, and then you plan an action. You're walking on the street, you're taking in all of these sensory signals, and your brain somehow is binding them. It actually used to be called the binding problem: how do you bind together all of these sensory signals into an object that you categorize? So you see some ball of fur that has whiskers on the street, and eventually you perceive a cat, and then you categorize the cat as a cat, and then you make an action plan. Are you going to bend down and pet the cat? This is the idea. It's a bit of a cartoon for how people understood it, but that is the general idea: you start with the details, eventually you get to objects and then scenes, and then action plans, and then you behave towards the object in some way. That's the general idea. Karen, anything to add there?</p><p><strong>[17:25] Karen Quigley:</strong> I think that's right. That's the cartoon version.</p><p><strong>[17:28] Lisa Barrett:</strong> That's the cartoon version of it. That is basically the version of perception and action people still use.</p><p><strong>[17:35] Karen Quigley:</strong> Yeah.</p><p><strong>[17:36] Lisa Barrett:</strong> For the most part, there is a literature that considers energy and metabolism and allostasis, but not this literature. When people are thinking about cognition or perception or emotion or decision making, even people who are studying reward, they don't typically think about the dynamics in the body that have to support those actions. What we do is challenge that view. We take a predictive processing approach where we say that the brain is running not so much a model of the world as a model of its own body. It's running a model of the sensory surfaces of the body. Your brain doesn't have a map of the world. It has a map of its retina. It has a map of the cochlea. It has a map of the skin. So it has some kind of very spatially degraded, compressed map of the body. 
It has a fine temporal map of signals inside the body. These signals are compressed as they make their way to the brain to various spatial and temporal degrees. What they meet when they get to the brain is a set of intrinsic signals in the brain that are a neural context that direct the compression of the incoming signals and give them meaning fundamentally in a metabolic sense. I could show you pictures to explain, but that's the general idea. What the brain is doing in any given moment, if you just stopped time, the brain is generating; it's remembering, essentially, re-implementing a set of past experiences similar to the present in some way. There are features of equivalence that the brain is using. It's not remembering or reinstating the signal patterns for particular instances. It can do what's called conceptual combination, the sort of flexible implementation of patterns. So it's creating patterns of activation. It's not remembering a single instance, it's remembering a collection of instances which are similar to the present in some way. In psychology, a bunch of things which are similar in some way for some function in some context is called a category. What it's doing is generating categories that are potential. If we take the cerebral cortex, for example, in any given part of the cerebral cortex, what's happening is that the neurons there are reinstating a pattern. The pattern is fundamentally a visceral motor pattern for regulating the body. There are axons that will leave a cortical column in layers five and six, but mostly five, that descend to the subcortical areas all the way to the spinal cord.</p><p><strong>[21:47] Lisa Barrett:</strong> That is essentially a visceral motor pattern. Literal collaterals off those axons make their way to other neurons in other parts of the cortex as prediction signals. So the motor pattern, the predicted motor pattern or plan, and then the predicted sensory consequences of those movements. 
That's happening across the entire expanse of the cerebral cortex. I'm picking the cortex because that's what people know the most about. There's much less known about how the subcortical areas are working together, but we're working on a paper about the hippocampus, for example, as also adhering to this kind of pattern. So what the brain is doing in any given moment is making an action plan, a visceral motor plan for regulating the body, for controlling metabolism, and it's also making a set of prediction signals that anticipate the incoming signals from the sensory surfaces of the body. An interesting aspect of this is that any given train of action potential spikes has no inherent meaning, because what the same set of spikes means depends on who's sending and who's receiving the signal: it can be an action, a motor plan, or it can be anticipating a sensory signal. It's the same set of action potentials, but it means something different depending on who's sending and who's receiving. So it has a relational meaning. We're not saying it has no meaning. We're saying it has a relational meaning. The meaning isn't inherent in the action potential itself. It's relational, depending on the pattern. Any given set of sensory signals has no inherent meaning; it has a meaning in relation to the neural context that's been created by the brain. That's one way to think about it. Another way to think about it is that those signals are constraining the brain. One way to think about it is that the brain is a network with intrinsic signaling that will continue until it runs out of energy, and things are perturbing it. If you think about it that way, then you would say these intrinsic signals are giving meaning to signals from the body, which are reporting on the sensory conditions in the body and the sensory conditions in the world. Another way to think about it is that these sensory signals are actually constraining the brain. 
Without them, all kinds of patterns could occur, some of which would not be beneficial. For example, part of what psilocybin is doing is relaxing those constraints, so the brain is not so constrained by signals from the body. Or when you go to sleep, your dream signals are not so constrained by exteroceptive signals from the retina and from the cochlea. I could go on, but I'll stop there and see what you want. Maybe Ben, you could say if any of this is what you had in mind?</p><p><strong>[25:58] Benjamin Lyons:</strong> This is exactly what I had in mind. You covered everything I was hoping you would. What I'm trying to accomplish is a further integration of these literatures. I think what y'all study and what Mike studies are the same thing at different scales and timescales. There are other literatures I think are relevant: economics, but also developmental psychology and the science of how motor behavior is produced and developed. That's also highly relevant. There's a bunch of stuff I'd love to show y'all there. It's a process of seeing that it's all the same pattern.</p><p><strong>[26:26] Lisa Barrett:</strong> I will say one other thing: the way that neurons signal each other is not unique to neurons; any cell does it. A cytokine is just one cell signaling another cell. It doesn't have a special meaning. That's an epiphany to a lot of psychologists and neuroscientists who think that cortisol is a stress hormone, as opposed to just one way of signaling. So you can think about the brain as a system and the body as a system, and they're interacting with each other. You could think about the brain and the body as one system that is interacting with things outside the body. You could think about the four of us as a system. You can place those boundaries wherever it makes sense to. But basically, any system is trading in signal patterns. The meaning of what's occurring is in the pattern; it's not in the individual parts. 
For example, Nick Lane and I talked about electromagnetic signals that mitochondria generate, and that may be another way the body can signal the brain about metabolic status. That would mean there would have to be a set of receptors for those if they were interoceptive signals. He was thinking about electromagnetic signals from mitochondria in neurons signaling each other, but I said they're in the heart too and they're in the gut. I didn't know any cell could generate electrical activity, but that means that if there were a receptor somewhere in the brain, that could be a global signal about metabolic status, some kind of allostatic signal that the brain could receive. He suggested nanoparticles, like iron. We have a scan where we scan for iron, but we use it as a control in an fMRI study to control for signal that will interfere with the magnetic signal of the scanner. We thought that if this is a metabolic signal, we would expect the concentrations to be in certain places more than others. We looked, and in fact, that is where there seems to be more concentration. That doesn't mean anything other than that this is a really important question to ask in a more controlled way. My point is that it's a really different way of thinking about things than people in our domains are used to. The small amount of work that I've been able to read of yours that I can understand the details of seems to me to show Ben is exactly right: we are talking about relational meaning, but at completely different scales.</p><p><strong>[30:05] Michael Levin:</strong> Could I ask a couple of questions? Going back to the first part of what you were saying about the debate on emotions, could you give me an idea how much of that is related to the hard problem? Are any of the issues about first-person perspective, or is the debate about behavior and physiology?</p><p><strong>[30:35] Lisa Barrett:</strong> Very few people think about it in terms of the hard problem. 
There is an assumption broadly in psychology and in neuroscience that science is objective. Here's how we would think about science. What we do is we create a condition under which we will experience things. Observations are experiences of scientists that then we quantify with numbers in some way. We don't bifurcate nature. We don't say, well, these things are objective and these things are subjective. And there's a whole history in psychology for how that happened. Basically, that's our view. The view of a large number of scientists in our field is that science is objective. What they mean by that, the kind of objectivity, has undergone a change historically over time. They're using a 19th-century definition of objectivity, which means that observations, because they are automated by technology, and because they are made publicly, are either free from human concepts and experience, or you're minimizing the bias of human concepts and experience. They're doing a third-person kind of science that assumes there is an objective, verifiable pattern to, in a perceiver-independent way, identify a state of anger or a state of fear or a state of sadness. The assumption is that when this putative circuit triggers for anger, there will be a definable physiological pattern, a definable pattern in the brain somewhere, a definable expression, and all of these things are very diagnostic of that state. In older versions of this, it was an essence: necessary and sufficient conditions for membership in the folk category anger. Now people would say it's a prototype. Your face might not look the same every single time. You might not scowl every time. Your blood pressure might not go up every time. But there's a family resemblance to this prototype. The prototype is fixed, so your response might not be fixed. 
And your response, my response, Ben's response, Karen's response, the response of people who live in Tanzania as hunter-gatherers, maybe even the response of a rat, will all have a family resemblance for some or all of these features. That's the view. Their epistemology is the belief that there is a viable third-person science. They demote the reports, the subjective experiences, of their human participants. From our perspective, every observation that you make as a scientist is an experience of some sort that you've created for yourself. If Ben is our subject, I look at Ben and observe and quantify his movements in some way. And I have the experience of Ben as angry. And we ask Ben, how do you feel?</p><p><strong>[34:29] Lisa Barrett:</strong> And Ben says, I feel sad. In that other view, we're right and Ben is wrong, because Ben can't possibly know his state. Even if we can assume we've created conditions where he will be as honest as he possibly can, there are moments where there's no way he could know his state, but we could know. Our view is that what is real in that moment is that we experience him as angry and he experiences sadness. That's what's real in that moment. That's what we have to try to figure out: that pattern. So buried in the definition of objectivity is an assumption that prioritizes certain experiences over other experiences. The experiences of scientists matter more. My experience of Ben as angry is taken to be closer to the ground truth than Ben's experience, in the moment, of being sad. Whereas what we would say is that we would use an older definition of objectivity, rooted more in Francis Bacon around the time of the scientific revolution, which would be to say that every human has a point of view. We all have concepts and categories. We can't escape them. The way you do science is you try to minimize any particular bias. 
The way you do that is by trying to come to consensus over the data among diverse points of view, and by using lots of methods, some of which would disadvantage you and others advantage you, and you use them all. Or if you're doing an analysis, you would do a multiverse analysis, where you vary every parameter in your analysis and end up with a distribution of results. Then you interrogate that distribution, as opposed to picking parameters so that you have one result, having potentially picked the parameters that favor your particular perspective in some way. This is called transformative interrogation: you have a community of scientists who are actively engaged in self-critical examination, but the community has to be diverse. I don't mean ethnically diverse, although I'm sure that matters; it's more that the community has to be diverse in its starting assumptions. That's the way to get to not truth, but usable, justified knowledge. That doesn't solve the hard problem either, but it does acknowledge the fact that all science is first-person science. It's just ******** that it's third person. That's just a way of saying that my experience as an expert counts more than your experience as a different kind of expert. That's our view.</p><p><strong>[38:25] Michael Levin:</strong> On the topic of managing the sensory interfaces, I'm thinking of Andy Clark types of ideas, the extended mind. How does the brain decide where the boundary actually is?</p><p><strong>[38:45] Lisa Barrett:</strong> That is a decision that is made continuously, and it varies. Do you want to say something about this, Karen?</p><p><strong>[38:54] Karen Quigley:</strong> I was going to use the example of getting in the car.</p><p><strong>[38:56] Lisa Barrett:</strong> Yeah. 
Yeah.</p><p><strong>[38:57] Karen Quigley:</strong> When you're walking around the world, the boundaries of your sensory surfaces are putatively at your skin, although depending upon what you're doing it could be quite different. Let's say you get in your car. Now the boundaries of your actions, the boundaries of your body, have gotten out to the edges of the car — peripersonal space, basically.</p><p><strong>[39:21] Lisa Barrett:</strong> Yeah.</p><p><strong>[39:22] Karen Quigley:</strong> We would see that as highly flexible based on the current context and what your actions are.</p><p><strong>[39:30] Lisa Barrett:</strong> Michael Graziano did these studies at Princeton, where he's doing electrical recordings in neurons, and he identified these neurons in prefrontal, premotor cortex that he called bubble wrap neurons, which start to fire very frequently (their action potential spike trains speed up a lot) the closer something comes to the animal's body. What's really interesting is that that boundary changes depending on the state of the animal. It looks like when the animal is metabolically compromised, the boundary is far out. When the animal is allostatically balanced and everything's running smoothly, the boundary is closer to the animal's body. There are a lot of "me, not me" systems in the body, like the immune system. So I think that boundary of where you end and where the world begins isn't always at the skin; in fact it rarely is, because it's always fluctuating. There are some interesting cases. Maybe you had this experience when you were an adolescent. I had this experience when I was pregnant. I was constantly whapping things with my belly. It's not that I forgot that I was pregnant, but there would be some growth and then I'd be walking into things; it was not otherwise explainable. You hear adolescents talk about how they don't know where their body is in space. 
I think there are also some interesting cases where people don't update when they should, when they need to. The car is an example of something that fluctuates, or a pen in your hand: it becomes part of the peripersonal space, but sometimes you don't update. I think that where this peripersonal boundary sits is related to time, the experience of time, like how long you think things take. You could create a just-so story about how this came to be when animals developed distance senses like vision and audition, senses where you're sensing something at a distance, as opposed to a proximal sense like olfaction or touch or anything interoceptive or gustatory. This framing is somewhat unique to us: a lot of people make a distinction between exteroceptive (meaning outside the body) and interoceptive (meaning inside the body) sensory signals. We think instead about proximal senses versus distal senses, because they're processed very differently: the signal compression and the temporal speed of the signals seem very different for distant senses versus proximal senses. Distance senses came last. Proximal senses were there first, and they're more tied to movement. There's increasing evidence that signals from proximal senses, in the way they're processed, are gating the sampling of distant senses like vision and audition.</p><p><strong>[44:06] Michael Levin:</strong> These are issues we grapple with all the time at the cellular and even subcellular level: exactly that change of that boundary, that flexible boundary between self and world. In particular, both in natural biological cases and in all the weird stuff we do, where we either instrumentize something and give it a sense that it never had before, or connect it to some crazy engineered thing in a hybrid mode.</p><p><strong>[44:36] Lisa Barrett:</strong> I will also say one other thing: there are senses that humans have for which we have no sensors, senses that the brain computes. 
Temperature is a really good one. Skin temperature — we have no sensors on the skin for temperature at all.</p><p><strong>[45:00] Karen Quigley:</strong> You mean wetness?</p><p><strong>Lisa Barrett:</strong> Wetness, right. We have no wetness sensors. You feel wet when you take a shower, when a raindrop hits you, or when you're swimming. But we have no sensory signals for wetness. It's a combination of temperature and touch. There are other examples too. We were talking about the kinesthetic sense of your head — where your head is positioned in space. That's a combination of five different sensory signals. Or flavor, which is a combination of olfaction and gustation, what's called taste. But what most people call taste is really flavor. Sam would know a lot about that.</p><p><strong>[45:46] Michael Levin:</strong> When you were talking about relational meanings, are there scenarios you know of where a given set of events has multiple relational meanings, where different observers look at the same thing and have different interpretations of it?</p><p><strong>[46:08] Lisa Barrett:</strong> I'll just use the very tired example of seeing red to make the point, because even though philosophers use it a lot, it's actually a really good example. So normally we see an object that's red. I'm looking around for an object that's red. I don't see one. But an apple is red. And you think redness is in the apple. But red, the property of red, is a property of the relation between the signals coming from the apple, the signals that your retina transduces, and the signals in your brain. Neurotypical people have three types of cones with three different types of opsins, and you need all three to see red in light reflecting off an object at 620 nanometers. That's not all you need, but that is necessary. And a person or an animal with only two types of cones, with two opsins, would experience that wavelength as a muddy, greenish brown. 
And so people say that they're colorblind, as if the red were in the apple: if you can't see the red, then you're colorblind to the reality of the red apple. But there are also some humans with four opsins. They're rare and they're mostly women, but they do exist, and they all have the same fourth opsin. They parse the visible light spectrum into many more categories than we do, so they would experience 620 nanometers in the same visual context as some other color. But if neurotypical humans had four cones, then that apple would not be objectively red. It would be objectively some other color. And we, those of us who have three cones, would be colorblind.</p><p><strong>[49:06] Lisa Barrett:</strong> What happens with objectivity is that we prioritize the biology of certain people over other people, and then we call it objective. That move happens everywhere, with lots of different examples. Some of them are very basic visual examples, and some are more social examples, where people come with a different neural context, a different set of categories that their brain is equipped to make, and they experience the world and the same signals extremely differently. In the Andy Clark predictive-processing way, if you combine that with anatomical evidence, what it seems like is that predictions are not for perception, they're actually for action. The action is planned first; the sensory prediction, and so the perception, is a consequence of the action plan. What's really happening under the hood is that the action plan is there first, and lived experience is a consequence of the action, not the other way around. When we say that people experience things differently, embedded in that is the assumption, based on the anatomy, that they will be having very different action plans when confronted with the same set of sensory signals. 
I can present stimuli to you, and you will experience the signals one way; then I can make one change (show you another image, take it away, show you the first set of signals exactly the same) and you will experience them completely differently. It's a party trick. I use it all the time on audiences. We have done careful brain imaging studies where we do this with subjects: we show them an initial visual image, then give them a second image, take that away, and show the first image again. The pattern of BOLD signal activity is different from the first time. We can also show them the same thing three times, and it doesn't change. An intervening experience changes their experience of the first pattern of signals, and it doesn't revert. The way they make meaning of the first set of signals has changed, and it's changed pretty much forever.</p><p><strong>[52:06] Michael Levin:</strong> We didn't get to Ben's stuff. Should we make a new one?</p><p><strong>[52:13] Lisa Barrett:</strong> We absolutely can. But I wouldn't mind hearing, in the last remaining two minutes, since I just talked the whole time without slides or showing you anything, what your initial thoughts are.</p><p><strong>[52:26] Michael Levin:</strong> I think it's very compatible with a lot of the stuff we're doing. If we change scale and substrate a little bit, a lot of this carries over. 
We could use some of these models, and vice versa, map this onto some really ancient cellular stuff that's going on in the body at all scales.</p><p><strong>[52:49] Lisa Barrett:</strong> That'd be really great.</p><p><strong>[52:51] Michael Levin:</strong> Yeah.</p><p><strong>Lisa Barrett:</strong> That'd be really exciting.</p><p><strong>[52:52] Michael Levin:</strong> Yeah.</p><p><strong>[52:53] Lisa Barrett:</strong> I also think there's an implication here for how we do science. Is the brain processing signals from the body, or is the body constraining the brain? Both are true; it just depends on what you're focusing on. We can use that to think about the epistemology and even the metaphysics of what we're doing as scientists. Karen is still rooted in the nuts and bolts of the science, but I've been dipping my toe into this other world of thinking about the epistemology and the metaphysics of how we do science and what we think we're doing exactly.</p><p><strong>[53:47] Michael Levin:</strong> I think that's a great area to get into. Our contact with it now is this weird thing we call Mom bot, which is a joint project with Josh Bongard's lab and Doug Blackiston. One way to see it is as a robot scientist. It's a thing that sits in our lab. It has an AI that makes hypotheses about which stimuli to give the cells to make certain biobots, and it physically makes the xenobots with those stimuli. Then it observes the biobots in terms of their shape and behavior, goes back, revises its hypotheses, and tries again. In that sense, it tries to make discoveries in morphogenesis. That's one way to think about it: an automated robotic discovery platform. But the other way I like to think about it is that this thing is basically a reverse hybrid. The typical hybrids people make take a brain from a fish and put it in a little cart that drives around. This is the reverse. 
What you have here is an AI that is exploring morphospace, and the body it has to explore with is the living cells—the frog cells. So basically, whatever level of intelligence it may or may not have, the body through which it experiences anatomical space is the living material. It uses the biobots as the outer surface to feel around.</p><p><strong>[55:23] Lisa Barrett:</strong> That's very cool.</p><p><strong>[55:25] Michael Levin:</strong> Isn't that wild? People always talk about embodiment, and they always think it has to be running around in physical space. This thing sits still as far as our obsession with 3D space is concerned, but it's exploring morphospace.</p><p><strong>[55:37] Lisa Barrett:</strong> I know we're out of time, but I have to say this one thing. This is really interesting to me because of the features of reality that we take to be fundamental: is this solid? We assume that this is real, objective, perceiver-independent; that's traditional realism. I think we experience this as solid because of the kinds of bodies we have. If we were subatomic particles, this would not be solid. This would be mostly empty space. The relational character of many of the things that we take to be primary properties in Kant's sense—shape, solidity—is hidden from us because we all have bodies that are very similar, experiencing these signals as solidity. And this is a fundamental aspect of relational realism, this thing that I was talking about, this metaphysics that I've been ... That is almost impossible to test; it's not possible to test with others—we're like fish in water; we can't escape the water. It's a bit like the hard problem: we have to study consciousness through consciousness. Anytime we make a new discovery, it's because there's a reverberation or a pattern in a signal that we didn't expect—for example, dark matter. There's some pattern that we don't expect, and that tells us that something else might be there. 
So it's almost impossible to expand our island of knowledge because we're limited by our sensory surfaces. This is really cool because it suggests a potential—not a solution, but maybe an avenue for dealing with this problem.</p><p><strong>[58:06] Michael Levin:</strong> Chris Fields and I have this paper on diverse spaces. As you said about solidity: what do barriers look like in transcriptional space? What does it feel like to be walking around a bent physiological state space, or, my favorite, anatomical morphospace? There's a metric of distance, you can send signals across, and you can wander around it. I think that's exactly what groups of cells do. They live in these weird spaces, with no doubt weird perceptions.</p><p><strong>[58:42] Lisa Barrett:</strong> One thing that we think is that some of the things we call illnesses are actually different physiological spaces for people, outside the biologically typical range of physiological spaces. That is an idea that we've had, but we haven't been able to figure out how to create the experiences for ourselves called observations; we haven't been able to figure out how to study it.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Conversation 2 with Lisa Barrett, Ben Lyons, and Karen Quigley</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Lisa Barrett, Karen Quigley, and Benjamin Lyons continue their discussion of relational realism, allostasis, predictive processing, and embodiment, exploring how brain, body, and world jointly shape emotion, perception, and scientific objectivity.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/MGNJJe-apb0" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/5c4bcb71/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a second conversation with Lisa Barrett, Karen Quigley, and Benjamin Lyons about Relational Realism, allostasis, and questions of mind/body/behavior.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Rethinking emotion universals</p><p>(09:34) Brain evolution and allostasis</p><p>(17:25) Predictive processing and signaling</p><p>(30:05) Objectivity and first-person science</p><p>(38:45) Flexible body-world boundaries</p><p>(45:46) Relational meaning in perception</p><p>(52:06) Embodiment, morphospace and realism</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a 
href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Lisa Barrett:</strong> What we could do is start at the beginning of how we got into the work that we're doing now, and where I'd like to end up is talking about the relationship between brain and body signaling and the contextual nature of that, and also the philosophy of science it's led us to: this idea we're calling relational realism. It's really a solution to the problem of the dichotomy between traditional realism, where there's an objective world that is fixed and that you can only perceive through the veil of your own concepts, and something like idealism or any kind of anti-realism. So this is a realist view, but one rooted in the idea that what is real is relational and that things don't have fixed meanings, they have relational meanings. And what we think of as properties of things in the world are actually properties of relations, not properties of objects. I'll give a very brief overview. And Ben, you'll stop me if this is not what you think is useful.</p><p><strong>[01:36] Benjamin Lyons:</strong> I trust your judgment and Mike's ability to pick up on this stuff. When I was talking about economics, he started absorbing things very quickly, so I think we'll be good.</p><p><strong>[01:45] Lisa Barrett:</strong> Karen and I started off as colleagues. This was many years ago. 
We had very separate research programs. I fell into the question of trying to understand the nature of emotion because I was in psychology, where, as in psychiatry and much of neuroscience, there's this assumption that there are these fixed categories for emotion, fixed circuits for emotion; that emotions are essentially adaptations that are wired in, basically programmed into your genes. This is like the modern synthesis: DNA plus natural selection gives you adaptations which exist in a fixed manner, with emotions being some of those. The idea is that there are these challenges to fitness which have persisted throughout millennia, and emotions evolved as solutions to those problems. And there is a set of universal categories shared by all humans on earth and also other animals; which ones depends on who you read. But the idea is that there's a circuit in the brain for fear, a circuit in the brain for anger, a circuit in the brain for happiness. People debate how many circuits and how many categories, but at least six and maybe upwards of 20 of these hardwired things are there at birth, which means that everyone around the world will widen their eyes and gasp in fear, that there is one cardiovascular pattern for fear, and so on and so forth. It sounds like a cartoon. The idea is that there might be some variability in what people look like and sound like when they're fearful, but that variation can be explained away or is epiphenomenal to the emotion. People who study non-human animals will, for example, look at a fly that rubs its legs, or expose a rat to the scent of a predator, or do classical conditioning with an electric shock, pairing a tone with a shock. They believe that what they're studying is fear. They're attempting to identify the neural circuit for fear, maybe the genes for fear. 
And the assumption is that's going to generalize across all animals of that species, but also across species, usually mammalian species, sometimes all vertebrates; it just depends on who you read. They're usually citing Darwin as evidence, which is a whole other thing, given what Darwin actually said. When I was a graduate student, I needed to measure emotion, and I needed to measure it in what I thought was an objective way, meaning in a third-person way. I thought this was going to be convenient because there are supposedly all these universal expressions and universal physiological patterns. I systematically discovered that what the introductions and the discussion sections of these papers say is inconsistent with what the data actually show.</p><p><strong>[05:39] Lisa Barrett:</strong> What the data actually show is contextual variation. Probably the first 20 years of my career were spent just documenting this variation in the brain, in the face, and, with Karen, in the body. I started off as a psychologist and then needed to retrain as a psychophysiologist so that I could study peripheral physiological signals to actually test this hypothesis. I started to work with Karen, and then I had to retrain as a cognitive neuroscientist. I had to keep picking up skills to test these different domains. And what we discovered across all of this time is that the real business problem a brain and a body have to solve is not how to read emotion in other people, and not how to inhibit pre-potent emotional responses. There is meta-analytic evidence: in the West, when people are angry, they scowl 35% of the time. That's better than chance, but 65% of the time people don't scowl when they're angry; they express emotion on the face in some other meaningful way. And half the time when people are scowling, they're not angry. There is variability in how people experience emotion and how they express emotion. 
There's variability in the neural patterns for emotion that seems to be yoked to context. That is not random variation: it is structured within a person across situations, as well as across people, for example across cultures. What this means is that there is no inherent meaning of a scowl, the raise of an eyebrow, or the curl of a lip. An increase or decrease in heart rate, even activity in the amygdala, this area of the temporal lobe, doesn't have inherent psychological meaning. Even the activation of individual neurons doesn't have inherent psychological meaning. The signals have relational meaning: action potentials, or the local field potentials around a set of neurons, have meaning within a pattern of other signals, but they don't have an inherent meaning in a labeled-line sense. In fact, nothing in the brain that I can determine has a label. There are no fixed receptive fields anywhere in the brain. There are no labeled lines where a particular axon fires and has a particular meaning every single time. The meanings are really relational. That's the punchline. Karen, do you want to say your part, and then we'll catch Mike and Ben up to that point, and then we can talk about going forward?</p><p><strong>[09:34] Karen Quigley:</strong> What exactly would you like me to focus on?</p><p><strong>[09:39] Lisa Barrett:</strong> What part of the story haven't I told that is relevant?</p><p><strong>[09:49] Karen Quigley:</strong> Well, it seems you've done a pretty good job of telling the basic idea behind the story. We've spent the last decade trying to further strengthen the empirical evidence for this idea and to say more about what we mean contextually and what we mean at the individual level.</p><p><strong>[10:12] Lisa Barrett:</strong> First, basically, what we did for 20 years was just document the problem and get people to accept the fact that these emotions, these kinds of fixed forms, don't exist. 
There is no circuit in the brain for fear. There is no circuit in the brain for anger. There is no fixed chemical; dopamine isn't a reward chemical. These fixed meanings just aren't there. We spent a lot of time marshaling evidence from our own studies, and also meta-analytic evidence from a lot of domains, to try to frame what the business problem is that has to be solved here. Historically in psychology, this problem has been encountered before. This is probably the third time that people have encountered this problem with emotion, but it's a broader problem than just emotion. People have been attempting to start with folk categories that they learn from their own experience. Being socialized in a particular culture, those meanings and those categories are culturally inherited. When I say they're learned, what we mean is that they shape the patterns people learn, patterns the brain comes to remember as those meanings. We can put some biology on that, but here I'm just talking generally and colloquially. Scientists learn certain categories, categorize instances of their experience in particular ways, and then go searching for a fixed physical basis for those categories. In cognitive neuroscience there were 30, 40 years where people were searching for specific localizations and specific sets of neurons for anger, sadness, fear, episodic memory, semantic memory, this kind of attention, that kind of attention. They were looking for fixed modules to map to these categories. What we decided to do is take a step back and say: what any animal has to do is deal with a tremendous amount of uncertainty. Animals move around. They have a particular body shape. They have a particular ecology. They have a particular set of metabolic demands. 
They're moving around in a highly uncertain, only partly predictable world.</p><p><strong>[13:48] Lisa Barrett:</strong> And they have to create meaning in such a way that they can survive and thrive. So we took a step back and said, well, instead of starting with these folk categories, why don't we start with brain evolution and metabolism, and not so much homeostasis as allostasis: the idea that what a system is doing is anticipating metabolic needs and preparing to meet those needs before they arrive. Different parts of the system might function by homeostasis, but allostasis is what is most metabolically efficient. We started drawing from different lines of research: electrical engineering research on signal processing and what counts as energy-efficient signal processing, brain evolution, neuroanatomy, various literatures, bringing them all together. We developed a set of hypotheses based on this integration of a lot of different literatures, and there's quite a bit of evidence for this now. But first, the traditional way of thinking goes like this (and we're talking primarily now about vertebrates, at a scale much more simplified than what you deal with). The general idea is that the sensory surfaces of an animal detect changes in the world. Those signals are ferried to the brain as small details that then have to be somehow compressed or integrated. So you have all these lines and edges in primary visual cortex that have to be integrated, bound together into objects, which then have to be bound together with sounds and smells and so on, until you get a representation of an object; then you retrieve from memory your understanding of what that object is, compare it, and categorize it. 
Then the object is meaningful, and then you plan an action. You're walking on the street, you're taking in all of these sensory signals, and your brain somehow is binding them. It actually used to be called the binding problem: how do you bind together all of these sensory signals into an object that you categorize? So you see some ball of fur with whiskers on the street, eventually you perceive a cat, you categorize the cat as a cat, and then you make an action plan. Are you going to bend down and pet the cat? It's a bit of a cartoon for how people understood it, but that is the general idea: you start with the details, eventually you get to objects and then scenes, and then action plans, and then you behave towards the object in some way. Karen, anything to add there?</p><p><strong>[17:25] Karen Quigley:</strong> I think that's right. That's the cartoon version.</p><p><strong>[17:28] Lisa Barrett:</strong> That's the cartoon version of it, but that is basically the version of perception and action people still use.</p><p><strong>[17:35] Karen Quigley:</strong> Yeah.</p><p><strong>[17:36] Lisa Barrett:</strong> For the most part. There is a literature that considers energy and metabolism and allostasis, but not this literature. When people are thinking about cognition or perception or emotion or decision making, even people who are studying reward, they don't typically think at all about the dynamics in the body that have to support those actions. What we do is challenge that view. We take a predictive processing approach where we say not so much that the brain is running a model of the world, but that it's running a model of its own body. It's running a model of the sensory surfaces of the body. Your brain doesn't have a map of the world. It has a map of its retina. It has a map of the cochlea. It has a map of the skin. So it has some kind of very spatially degraded, compressed map of the body. 
It has a fine temporal map of signals inside the body. These signals are compressed as they make their way to the brain to various spatial and temporal degrees. What they meet when they get to the brain is a set of intrinsic signals in the brain, a neural context, that directs the compression of the incoming signals and gives them meaning, fundamentally in a metabolic sense. I could show you pictures to explain, but that's the general idea. What the brain is doing in any given moment, if you just stopped time, is generating; it's remembering, essentially re-implementing, a set of past experiences similar to the present in some way. There are features of equivalence that the brain is using. It's not remembering or reinstating the signal patterns for particular instances. It can do what's called conceptual combination, the sort of flexible implementation of patterns. So it's creating patterns of activation. It's not remembering a single instance, it's remembering a collection of instances which are similar to the present in some way. In psychology, a bunch of things which are similar in some way, for some function, in some context, is called a category. What it's doing is generating categories that are potential. If we take the cerebral cortex, for example, in any given part of the cerebral cortex, what's happening is that the neurons there are reinstating a pattern. The pattern is fundamentally a visceral motor pattern for regulating the body. There are axons that will leave a cortical column in layers five and six, but mostly five, that descend to the subcortical areas all the way to the spinal cord.</p><p><strong>[21:47] Lisa Barrett:</strong> That is essentially a visceral motor pattern. Literally, collaterals off those axons make their way to other neurons in other parts of the cortex as prediction signals. So there is the motor pattern, the predicted motor pattern or plan, and then the predicted sensory consequences of those movements. 
That's happening across the entire expanse of the cerebral cortex. I'm picking the cortex because that's what people know the most about. There's much less known about how the subcortical areas are working together, but we're working on a paper about the hippocampus, for example, as also adhering to this kind of pattern. So what the brain is doing in any given moment, it's making an action plan, a visceral motor plan for regulating the body, for controlling metabolism, and it's also making a set of prediction signals that will anticipate the incoming signals from the sensory surfaces of the body. An interesting aspect of this is that any given train of action potentials has no inherent meaning, because what the same set of spikes means depends on who's sending and who's receiving the signal. It can be a motor plan, or it can be anticipating a sensory signal. It's the same set of action potentials, but it means something different depending on who's sending and who's receiving. So it has a relational meaning. We're not saying it has no meaning. We're saying it has a relational meaning. The meaning isn't inherent in the action potential itself; it's relational, depending on the pattern. Likewise, any given set of sensory signals has no inherent meaning; it has a meaning in relation to the neural context that's been created by the brain. That's one way to think about it: the brain is a network, it has inherent signaling that will continue until it runs out of energy, and things are perturbing it. If you think about it that way, then you would say these intrinsic signals are giving meaning to signals from the body, which are reporting on the sensory conditions in the body and the sensory conditions in the world. Another way to think about it is that these sensory signals are actually constraining the brain. 
Without them, all kinds of patterns could occur, some of which would not be beneficial. For example, partly what psilocybin is doing is relaxing those constraints, so the brain is not so constrained by signals from the body. Or when you go to sleep, your dream signals are not so constrained by exteroceptive signals from the retina and from the cochlea. I could go on, but I'll stop there and see what you want. Maybe Ben, you could say if any of this is what you had in mind?</p><p><strong>[25:58] Benjamin Lyons:</strong> This is exactly what I had in mind. You covered everything I was hoping you would. What I'm trying to accomplish is a further integration of these literatures. I think what y'all study and what Mike studies are the same thing at different scales and timescales. There are other literatures I think are relevant: economics, but also developmental psychology and the science of how motor behavior is produced and developed. That's also highly relevant. There's a bunch of stuff I'd love to show y'all there. It's a process of seeing that it's all the same pattern.</p><p><strong>[26:26] Lisa Barrett:</strong> I will say one other thing: the way that neurons signal each other is not unique to neurons; any cell can do it. A cytokine is just one cell signaling another cell. It doesn't have a special meaning. That's an epiphany to a lot of psychologists and neuroscientists who think that cortisol is a stress hormone, as opposed to just one way of signaling. So you can think about the brain as a system and the body as a system, and they're interacting with each other. You could think about the brain and the body as a system that is interacting with things outside the body. You could think about the four of us as a system. You can place those boundaries wherever you think best. But basically, any system is trading in signal patterns. The meaning of what's occurring is in the pattern; it's not in the individual parts. 
For example, Nick Lane and I talked about electromagnetic signals that mitochondria generate, and that may be another way the body can signal the brain about metabolic status. That would mean there would have to be a set of receptors for those if they were interoceptive signals. He was thinking about electromagnetic signals from mitochondria in neurons signaling each other, but I said they're in the heart too and they're in the gut. Any cell can generate electrical activity, and that means that if there were a receptor somewhere in the brain, that could be a global signal about metabolic status, some kind of allostatic signal that the brain could receive. He suggested nanoparticles, like iron. We have a scan for iron, but we use it as a control in an fMRI study to control for signal that will interfere with the magnetic signal of the scanner. We thought that if this is a metabolic signal, we would expect the concentrations to be in certain places more than others. We looked, and in fact, that is where there seems to be more concentration. That doesn't mean anything other than that this is a really important question to ask in a more controlled way. My point is that it's a really different way of thinking about things than people in our domains are used to. The small amount of your work that I've been able to read and understand in detail seems to me to show Ben is exactly right: we are talking about relational meaning, but at completely different scales.</p><p><strong>[30:05] Michael Levin:</strong> Could I ask a couple of questions? Going back to the first part of what you were saying about the debate on emotions, could you give me an idea how much of that is related to the hard problem? Are any of the issues about first-person perspective, or is the debate about behavior and physiology?</p><p><strong>[30:35] Lisa Barrett:</strong> Very few people think about it in terms of the hard problem. 
There is an assumption broadly in psychology and in neuroscience that science is objective. Here's how we would think about science: what we do is create conditions under which we will experience things. Observations are experiences of scientists that we then quantify with numbers in some way. We don't bifurcate nature. We don't say, well, these things are objective and these things are subjective. There's a whole history in psychology for how that happened. Basically, that's our view. The view of a large number of scientists in our field is that science is objective. What they mean by that, the kind of objectivity, has changed historically. They're using a 19th-century definition of objectivity, which means that observations, because they are automated by technology and because they are made publicly, are either free from human concepts and experience, or are minimizing the bias of human concepts and experience. They're doing a third-person kind of science that assumes there is an objective, verifiable pattern by which to identify, in a perceiver-independent way, a state of anger or a state of fear or a state of sadness. The assumption is that when this putative circuit triggers for anger, there will be a definable physiological pattern, a definable pattern in the brain somewhere, a definable expression, and all of these things are very diagnostic of that state. In older versions of this, it was an essence: necessary and sufficient conditions for membership in the folk category anger. Now people would say it's a prototype. Your face might not look the same every single time. You might not scowl every time. Your blood pressure might not go up every time. But there's a family resemblance to this prototype. The prototype is fixed, even if your response is not. 
And your response, my response, Ben's response, Karen's response, the responses of hunter-gatherers who live in Tanzania, maybe even the response of a rat, will all have a family resemblance for some or all of these features. That's the view. Their epistemology is that there is a viable third-person science. They demote the reports, the subjective experiences, of their human participants. From our perspective, every observation that you make as a scientist is an experience of some sort that you've created for yourself. If Ben is our subject, I look at Ben and observe and quantify his movements in some way. And I have the experience of Ben as angry. And we ask Ben, how do you feel?</p><p><strong>[34:29] Lisa Barrett:</strong> And Ben says, I feel sad. In that other view, the view is that we're right and Ben is wrong, because Ben can't possibly know his state. He has all kinds of reasons to misreport, and even if we can assume we've created conditions where he will be as honest as he possibly can, there are moments where there's no way he could know his state, but we could. Our view is that what is real in that moment is that we experience him as angry and he experiences sadness. That's what's real in that moment. That's what we have to try to figure out: that pattern. What is buried in the definition of objectivity is a prioritizing of certain experiences over other experiences. The experiences of scientists matter more. My experience of Ben as angry is taken to be closer to the ground truth than Ben's experience of his experience in the moment as being sad. Whereas what we would say is we would use an older definition of objectivity that is rooted more in Francis Bacon, around the time of the scientific revolution, which would be to say every human has a point of view. We all have concepts and categories. We can't escape them. The way you do science is you try to minimize any particular bias. 
The way that you do that is by trying to come to consensus over the data with diverse points of view and using lots of methods, some of which would disadvantage you and others would advantage you, and you use them all. Or if you're doing an analysis, you would do a multiverse analysis where you vary every parameter in your analysis, and then you have a distribution of results. Then you interrogate that distribution, as opposed to picking parameters so that you have one result, having potentially picked the ones that favor your particular perspective in some way. So it's called transformative interrogation, where you have a community of scientists who are actively engaged in a self-critical examination, but the community has to be diverse. I don't mean ethnically diverse, although I'm sure that matters; it's more that your starting assumptions have to be diverse. That's the way to get to not truth, but usable, justified knowledge. That doesn't solve the hard problem either, but it does acknowledge the fact that all science is first-person science. This is just ******** that it's third person. That's just a way of saying that my experience as an expert counts more than your experience as a different kind of expert. That's our view.</p><p><strong>[38:25] Michael Levin:</strong> On the topic of managing the sensory interfaces, I'm thinking of Andy Clark type of ideas, the extended mind. How does it decide where the boundary actually is?</p><p><strong>[38:45] Lisa Barrett:</strong> That is a decision that is made continuously and it varies. Do you want to say something about this, Karen?</p><p><strong>[38:54] Karen Quigley:</strong> I was going to use the example of getting in the car.</p><p><strong>[38:56] Lisa Barrett:</strong> Yeah. 
Yeah.</p><p><strong>[38:57] Karen Quigley:</strong> When you're walking around the world, presumably the boundaries of your sensory surfaces are putatively at your skin, although depending upon what you're doing it could be quite different. Let's say you get in your car. Now the boundaries of your actions, the boundaries of your body, have extended out to the edges of the car; it's very personal space, basically.</p><p><strong>[39:21] Lisa Barrett:</strong> Yeah.</p><p><strong>[39:22] Karen Quigley:</strong> We would see that as highly flexible based on the current context and what your actions are.</p><p><strong>[39:30] Lisa Barrett:</strong> Michael Graziano did these studies at Princeton, where he's doing electrical recordings in neurons, and he identified these neurons in premotor cortex that he called bubble wrap neurons, which start to fire very frequently; their action potential spike trains speed up a lot the closer something gets physically to the animal's body. What's really interesting is that that boundary changes depending on the state of the animal. It looks like when the animal is metabolically compromised, the boundary is further out. When the animal is allostatically balanced, everything's running smoothly, the boundary is closer to the animal's body. It's really clear that that boundary changes; there are a lot of "me, not me" systems in the body, like the immune system. So I think that boundary of where you end and where the world begins isn't always at the skin, and it rarely is; it's always fluctuating. There are some interesting cases. Maybe you had this experience when you were an adolescent. I had this experience when I was pregnant. I was constantly whapping things with my belly. It's not that I forgot that I was pregnant, but there would be some difference in the amount of growth and then I'd be walking into things; it was not explainable. You hear adolescents talk about how they don't know where their body is in space. 
I think there are also some interesting cases where people don't update when they should, when they need to. The car is an example of something that fluctuates, or a pen in your hand. It becomes part of the peripersonal space, but sometimes you don't update. I think that where this peripersonal boundary is is related to time, the experience of time, like how long you think things take. You could create a just-so story about how this came to be when animals developed distance senses like vision and audition, senses where you're sensing something at a distance as opposed to a proximal sense like olfaction or touch or anything interoceptive or gustatory. This framing is somewhat unique to us: a lot of people make a distinction between exteroceptive, meaning outside the body, and interoceptive, meaning inside the body, sensory signals. We think instead about proximal senses versus distal senses, because they're processed very differently. The signal compression that's happening, the temporal speed of the signals: these things seem very different for distant senses versus proximal senses. Distance senses came last. Proximal senses were there first, and they're more tied to movement. There's increasing evidence that signals from proximal senses, in the way they're processed, are gating the sampling of distant senses like vision and audition.</p><p><strong>[44:06] Michael Levin:</strong> These are issues we grapple with all the time at the cellular and even subcellular level: exactly that change of that boundary, that flexible boundary between self and world. In particular, both in natural biological cases and then all the weird stuff we do where we either instrumentalize something and give it a sense that it never had before or connect it to some crazy engineered thing in a hybrid mode.</p><p><strong>[44:36] Lisa Barrett:</strong> I will also say one other thing: there are senses that humans have that we have no sensors for, that the brain computes. 
Temperature is a really good one. Skin temperature — we have no sensors on the skin for temperature at all.</p><p><strong>[45:00] Karen Quigley:</strong> You mean wetness?</p><p><strong>Lisa Barrett:</strong> We have no wetness sensors. You feel wet when you take a shower, when a raindrop hits you, or when you're swimming. But we have no sensory signals for wetness. It's a combination of temperature and touch. There are other examples too. We were talking about the kinesthetic sense of your head — where your head is positioned in space. That's a combination of five different sensory signals. Or flavor is a combination of olfaction and gustation, what's called taste. But what most people call taste is really flavor. Sam would know a lot about that.</p><p><strong>[45:46] Michael Levin:</strong> When you were talking about relational meanings, are there scenarios you know of where a given set of events has multiple relational meanings, where different observers look at the same thing and have different interpretations of it?</p><p><strong>[46:08] Lisa Barrett:</strong> I'll just use the very tired example of seeing red to make the point, because I think even though philosophers use it a lot, it's actually a really good example. So normally we see an object that's red. I'm looking around for an object that's red. I don't see one. But an apple is red. And you think redness is in the apple. But the property of red is a property of the relation between the signals coming from the apple, the signals that your retina transduces, and the signals in your brain. Neurotypical people have three types of cones with three different types of opsins, and you need all three in order to take light reflecting off an object at 620 nanometers and see red. That's not all you need, but that is necessary. And if you have a person or an animal with only two types of cones, with two opsins, they would experience that wavelength as a muddy brown, greenish brown. 
And so we would say, or people say, that they're colorblind, meaning red is in the apple, and if you can't see the red, then you're colorblind to the reality of the red apple. But there are also some humans with four opsins. They're rare and they're mostly women, but they do exist, and they have the same fourth opsin. They parse the visible light spectrum with many more categories than we do. So they would experience 620 nanometers in the same visual context as some other color. But if neurotypical humans had four cones, then that apple would not be objectively red. It would be objectively some other color. And we, those of us who have three cones, would be colorblind.</p><p><strong>[49:06] Lisa Barrett:</strong> What happens with objectivity is that we prioritize the biology of certain people over other people, and then we call it objective. That idea happens everywhere with lots of different examples. Some of them are very basic visual examples, and some are more social examples where people come with a different neural context, a different set of categories that their brain is equipped to make, and they experience the world and the same signals extremely differently. In the predictive processing Andy Clark way, if you combine that with anatomical evidence, what it seems like is that predictions are not for perception, they're actually for action. The action is planned first; the sensory prediction, so the perception, is a consequence of the action plan. What's really happening under the hood is the action plan is there first, and lived experience is a consequence of the action, not the other way around. When we say that people experience things differently, embedded in that is the assumption, based on the anatomy, that they will be having very different action plans when confronted with the same set of sensory signals. 
I can present stimuli to you, and you will experience the signals one way, and then I can make one change and you will experience the signals completely differently. I can show you another image, take it away, show you the first set of signals exactly the same, and you will experience them completely differently. It's a party trick. I use it all the time on audiences. We have done careful brain imaging studies where we do this with subjects: we show them an initial visual image, then give them another image, take that away, and show the first image again. The pattern of BOLD signal activity is different than the first time. We can also show them the same thing three times, and it doesn't change. An intervening experience changes their experience of the first pattern of signals, and it doesn't revert. The way they make meaning of the first set of signals has changed, and it's changed pretty much forever.</p><p><strong>[52:06] Michael Levin:</strong> We didn't get to Ben's stuff. Should we make a new one?</p><p><strong>[52:13] Lisa Barrett:</strong> We absolutely can. But I wouldn't mind, in the last remaining two minutes, since I just talked the whole time without slides and showing you anything, hearing what your thoughts are initially.</p><p><strong>[52:26] Michael Levin:</strong> I think it's very compatible with a lot of the stuff we're doing. If we change scale and substrate a little bit, a lot of this carries over. 
We could use some of these models, and vice versa, to map this onto some really ancient cellular stuff that's going on in the body at all scales.</p><p><strong>[52:49] Lisa Barrett:</strong> That'd be really great.</p><p><strong>[52:51] Michael Levin:</strong> Yeah.</p><p><strong>Lisa Barrett:</strong> That'd be really exciting.</p><p><strong>[52:52] Michael Levin:</strong> Yeah.</p><p><strong>[52:53] Lisa Barrett:</strong> I also think there's an implication here for how we do science. Our way of understanding how the brain is processing signals from the body, or how the body is constraining the brain: are both true? It just depends on what you're focusing on. We can use that to think about the epistemology and even the metaphysics of what we're doing as scientists. Karen is still rooted in the nuts and bolts of the science, but I've been dipping my toe into this other world of thinking about the epistemology and the metaphysics of how we do science and what we think we're doing exactly.</p><p><strong>[53:47] Michael Levin:</strong> I think that's a great area to get into. Our contact with it now is this weird thing we call Mom bot, which is a joint project with Josh Bongard's lab and Doug Blackiston. One way to see it is as a robot scientist. It's a thing that sits in our lab. It has an AI that makes hypotheses about stimuli you give to the cells to make certain biobots. It physically makes the xenobots with those stimuli. Then it observes the biobots in terms of their shape and behavior, goes back, revises its hypotheses, and tries again. In that sense, it tries to make discoveries in morphogenesis. That's one way to think about it: an automated robotic discovery platform. But the other way I like to think about it is that this thing is basically a reverse hybrid. The typical hybrids people make involve taking a brain from a fish and putting it in a little cart that drives around. This is the reverse. 
What you have here is an AI that is exploring morphospace, and the body it has to explore with is the living cells—the frog cells. So basically, whatever level of intelligence it may or may not have, the body through which it experiences anatomical space is the living material. It uses the biobots as the outer surface to feel around.</p><p><strong>[55:23] Lisa Barrett:</strong> That's very cool.</p><p><strong>[55:25] Michael Levin:</strong> Isn't that wild? People always talk about embodiment and they always think it has to be running around in physical space. This thing sits still as far as our obsession with 3D space is concerned, but it's exploring morphospace.</p><p><strong>[55:37] Lisa Barrett:</strong> I know we're out of time, but I have to say this one thing. This is really interesting to me because I think about some of the features of reality that we take to be fundamental: is this solid? We assume that this is real, objective, perceiver-independent; that's traditional realism. I think we experience this as solid because of the kinds of bodies we have. If we were subatomic particles, this would not be solid. This would be mostly empty space. Many of the things that we take to be primary properties in the philosopher's sense—shape, solidity—are hidden from us, because we all have bodies that are very similar and we experience these signals as solidity. And this is a fundamental aspect of relational realism, this thing that I was talking about, this metaphysics that I've been ... That is almost impossible to test; we're like fish in water; we can't escape the water. It's a bit like the hard problem: we have to study consciousness through consciousness. Anytime we make a new discovery, it's because there's a reverberation or a pattern in a signal that we didn't expect—for example, dark matter. There's some pattern that we don't expect, and that tells us that something else might be there. 
So it's almost impossible to expand our island of knowledge because we're limited by our sensory surfaces. This is really cool because it suggests a potential—not a solution, but maybe an avenue for dealing with this problem.</p><p><strong>[58:06] Michael Levin:</strong> Chris Fields and I have this paper on diverse spaces. As you said about solidity: what do barriers look like in transcriptional space? What does it feel like to be walking around a bent physiological state space, or, my favorite, anatomical morphospace? There's a metric of distance, and you can send signals across, and you can wander around it. I think that's exactly what groups of cells do. They live in these weird spaces with, no doubt, weird perceptions.</p><p><strong>[58:42] Lisa Barrett:</strong> One thing that we think is that some of the things that we call illnesses are actually different physiological spaces for people, spaces that are not neurotypical, not within the biologically typical range of physiological spaces. That is an idea that we've had. We haven't been able to figure out how to create experiences for ourselves, called observations, that would let us study it.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Cancer: mitochondria and metabolism - a discussion with Thomas Seyfried and his group</title>
          <link>https://thoughtforms-life.aipodcast.ing/cancer-mitochondria-and-metabolism-a-discussion-with-thomas-seyfried-and-his-group/</link>
          <description>Thomas Seyfried, Derek Lee, Tomás Duraj and colleagues discuss cancer as a metabolic disease, examining mitochondrial dysfunction, metabolic therapies, ion channels, bioelectric control, and links to aging, regeneration, and disease models.</description>
          <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 69b96f3d83f5e500018d387f ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/JWAwBOdsOAc" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/9eedbe05/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a 1 hour 23 minute talk and conversation with Thomas Seyfried, Derek Lee and Tomás Duraj (and Juanita Mathews from my group) about their work on the metabolic and mitochondrial aspects of cancer.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Metabolic nature of cancer</p><p>(08:52) Mitochondrial theory of cancer</p><p>(34:00) Metabolic therapy in practice</p><p>(40:39) Mitochondria and morphogenetic fields</p><p>(45:21) Ion channels and metabolism</p><p>(53:01) Bioelectric control and regeneration</p><p>(01:09:03) Mouse versus human models</p><p>(01:15:19) Mitochondria, aging, and disease</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a 
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Thomas Seyfried:</strong> We have a lot of information now to show that cancer is predominantly a mitochondrial metabolic disease. We have looked at the number, structure, and function of the mitochondria in all major cancers, and you find abnormalities in the cristae, in the number of mitochondria, in the function of the mitochondria. It seems to be a common pathophysiological problem in all cancers. They transition from energy through oxidative phosphorylation to energy through what we have now defined as substrate-level phosphorylation, a very old, ancient way of getting energy. Before oxygen came into the atmosphere two and a half billion years ago, all the cells were fermenters. It was the development of the archaeobacteria and the original fusion between a primitive eukaryotic cell and a type of bacteria that became the mitochondria, which then led to metazoans and all of the development that we know about. Every cancer we look at, all the major ones, is blowing out large amounts of lactic acid and succinic acid. Warburg originally knew that cancer was a disorder of energy metabolism. 
First, he didn't know about the fermentation of amino acids inside the matrix of the mitochondria and that the TCA cycle itself can be fermented. Second, he made the mistake of assuming that oxygen consumption was an accurate biomarker for ATP synthesis through oxidative phosphorylation. Derek and others clearly showed that the consumption of oxygen in cancer cells is not directly linked to significant ATP production. Through our work with Christos Chinopoulos from Semmelweis University in Budapest, we found what he had long been suggesting: there is a substrate-level phosphorylation mechanism, the succinyl-CoA ligase reaction, in the matrix of the mitochondria. We know from our work in cancer that succinic acid, succinate, is released by tumor cells. Succinate should never be released from the TCA cycle. The reason it's being released is that it is the second major waste product of fermentation. It's through the glutaminolysis pathway. The cancer cells are essentially transitioning away from OxPhos and moving toward a fermentation mechanism, substrate-level phosphorylation: one existing in the cytoplasm — the glycolytic (Embden–Meyerhof–Parnas) pathway — and the other in the matrix of the mitochondria itself, which is a fermentation pathway. In his PhD work, Derek showed that if you grow cells in just glutamine, no glucose, even in hypoxia or in the presence of cyanide, you still produce ATP and succinic acid. That pathway is a second major pathway. We found that there is a powerful synergistic parallelism between the glycolytic pathway in the cytoplasm and the glutaminolysis pathway in the mitochondria. The two pathways feed off each other, leading to dysregulated growth. 
The control of the cell cycle, the checkpoints of the cell cycle, contact inhibition, and the behavior of cells in relation to their neighbors are all linked to the calcium currents that are controlled by the mitochondria-associated membranes and the ER–mitochondria connections. All of that controls whether a cell should remain quiescent or grow, and if it grows, it's regulated growth. If those calcium currents linking to the inner membrane of the mitochondria and the mitochondria-associated membranes are abnormal in any way, the cell loses its growth control and no longer responds accurately to environmental cues. The checkpoints of the cell cycle are all controlled by calcium–calmodulin signaling, which means the mitochondria control the destiny of the cell. With this behavior, the cells lose growth control; the cadherins that normally would be involved in contact inhibition no longer work. In the morphogenetic field itself, there are mitochondrial waves that link the behavior of cells across these morphogenetic fields.</p><p><strong>[04:24] Thomas Seyfried:</strong> All of this is related to how the energy and the integrity of the mitochondria work. As far as metastasis is concerned, we know that all metastatic cancers share characteristics with macrophages. Macrophages are genetically programmed to move in and out of tissues and live in the circulation. When we have disordered growth in the microenvironment, you have inflammation, and normal macrophages come in as wound-healing cells. They fuse with cancer stem cells, forming these hybrid cells. They have abnormal energy metabolism, but they're genetically programmed to move in and out of the bloodstream. They are rogue macrophages. We know how to kill them because they're so dependent on glutamine and glucose. 
Our therapeutic strategy, which is ketogenic metabolic therapy, is a press-pulse where we press down glucose with diet and exercise and elevate ketones to enhance mitochondrial function in normal non-neoplastic cells. Then we come in with drugs that will target the glutamine. There's no way these cells can grow without glucose and glutamine. We've interrogated the cells. Derek did beautiful work on going through all the amino acids and all the fuels that cells can possibly use. They're dependent heavily on glucose and glutamine. When you target that in people and mice with metastatic cancer, you slaughter these tumor cells. We know basically how cancer starts. We know what it's dependent on and we know pretty much how to manage it. Now we go into the clinic and start treating people with pancreatic, prostate, breast, and colon cancers. We're starting to see that when you do it the right way, the way Tomás Duraj and our groups have been defining it, people live much longer with a higher quality of life. We never say cure. We only say management because we don't know how long a person will remain in a managed state. As a matter of fact, we have people whose tumors never go away. They persist for years but are indolent. We've transformed them from a very aggressive neoplasm into an indolent one where they can be periodically resected. We did that with a brain cancer patient who lived over ten years having periodic debulkings. We never targeted the glutamine, but we kept the tumor in an indolent state. We know we're seeing remarkable outcomes in the clinic based on our new understanding of how this all works. Of course, I've just summarized massive amounts of data and the experimental evidence is in our papers. Derek is here and Tomás Duraj is here. Tomás is doing both the basic research and clinical work. Derek has defined the mitochondrial substrate-level phosphorylation as a major force in driving these tumor cells. 
So that gives you an overview of where we are and what we're doing with this cancer work.</p><p><strong>[08:52] Michael Levin:</strong> You guys want to show any slides at all, or are we just talking from here?</p><p><strong>[08:59] Thomas Seyfried:</strong> I could show you some slides on a screen share.</p><p><strong>[09:05] Michael Levin:</strong> Yeah, only if you want to.</p><p><strong>[09:06] Thomas Seyfried:</strong> I didn't want to give a formal presentation, but I certainly can show you some of the slides. We had some here yesterday, gave a big lecture on this, but I'll give it in a nutshell. Let's see here. Carol, Cindy Carroll. Let me go back here. I just got Margaret here. Okay. So can you see this? We have to do a slide up.</p><p><strong>[09:40] Michael Levin:</strong> You can just share. Yeah, just share your screen.</p><p><strong>[09:43] Thomas Seyfried:</strong> Share. Right here. Can you see that, Mike?</p><p><strong>[09:52] Michael Levin:</strong> Yep.</p><p><strong>Thomas Seyfried:</strong> So let me just go for it. Well, I always tell you how many people are dying. I don't know if that's really important for us. But I do go through all the scientific theories. What we're doing is changing the whole picture of what cancer actually is. Whenever you change the theory, like when you go from geocentrism to heliocentrism, you get massive paradigm changes in man's knowledge. We're going to do the same thing when it comes to cancer because we found that it's not a genetic disease. We use Louis Pasteur compared to Galen. We do the Darwin-Wallace theory of evolution as opposed to special creation. So when we look at the mitochondria in cancer, you see all the nice squiggly green. This is highlighting how these are very dynamic organelles. They have fission and fusion, and they really regulate the entire physiology of the cell through their interactions, not only within the cell, but outside the cell. 
So the question that we're confronted with now is, is cancer a genetic disease or a mitochondrial metabolic disease? This paper by Hanahan and Weinberg, I think it's the most highly cited, 75,000 or something like this. They clearly state that cancer is a genetic disease, and it's a silent assumption that people just accept automatically. We go through and show that the reason why they say it is dogmatic ideology. It's no longer linked to rational collection of data. What we've done is we find a number of papers where you can't find any genetic mutations in certain cancers. That's a big problem. Now you can't prove anything by negative data. But Vogelstein and others said it's driver genes that are the really bad ones, the ones that cause dysregulated cell growth. But now we're seeing expression of mutations in driver genes in normal tissues of people — p53 and a number of other mutations in cells that never become dysregulated in their growth. So you have cancers with no mutations, and driver genes in normal cells that don't form dysregulated growth. These are very serious challenges to a statement that cancer is a genetic disease. Some carcinogens like asbestos don't cause mutations, but they cause cancer. Aboriginal tribes rarely have cancer. Our closest relatives, chimps — no breast cancer ever recorded in a female chimp. The nuclear-mitochondrial transfer experiments are pretty much the strongest evidence against that. These experiments are from Israel and Schaeffer at the University of Vermont. He had an epithelial liver cell that he isolated from the liver of a rat, grew it in vitro, and it became dysmorphic in the way it looked. He took the cells and put them under the skin of the rat. Twelve out of thirteen rats developed a tumor. He didn't know if the original cell was tumorigenic, so he went back, grew up the original cells and put them under the skin of rats, and he didn't see any — 0 out of 11. 
He questioned whether the formation of neoplasia would be due to mutations in the nucleus or some cytoplasmic event like mitochondria. So he simply swapped nuclei: he took the nucleus of the tumor cell and put it into the cytoplasm of the normal non-neoplastic cell, and put the nucleus of the normal cell into the cytoplasm of the tumor cell. They got results directly opposite of those predicted if cancer were due to somatic mutations in the nucleus. The tumor nucleus in the cytoplasm of the normal cell produced tumors in only one out of 72 rats, whereas the normal nucleus in the tumor cytoplasm produced tumors in 66 out of 68. These are just the opposite of what you would have expected if cancer were due to somatic mutations. Also, McKinnell did a very similar kind of thing in vivo in frogs, the Lucké frog. He took cells from a lethal kidney cancer that kills the frogs. He takes the cells out of the frog, separates the cells individually, then isolates the nucleus of the tumor cell and implants it into an enucleated fertilized frog egg. He was able to clone tadpoles, all grown from the nucleus of the kidney tumor. When I spoke to him, he said he could cut the tails off these tadpoles, and they immediately grew a new tail in an exactly regulated way, directed by the nucleus of a tumor cell, which came from a cell that was dysregulated in its growth. I said to him, that's because it's not a genetic disease. He absolutely agreed. Before he passed away, I had long talks with him. Rudolf Jaenisch down here at MIT cloned mice from the nuclei of melanoma. He also showed these would abort; they wouldn't grow to completion as a whole mouse, because they had a number of chromosomal abnormalities and other mutations, but they never formed dysregulated cell growth.</p><p><strong>[14:42] Thomas Seyfried:</strong> All of these are consistent with cancer as a metabolic disease and inconsistent with cancer as a genetic disease. I wrote this big paper summarizing all of this. This figure now appears throughout the literature. 
It's a summary of dozens and dozens of in vivo and in vitro experiments replicated over and over again. Normal cells beget normal cells. They have a clean genome and they have normal and functional mitochondria. Tumor cells beget tumor cells. They do have genetic abnormalities in the nucleus, and they have abnormalities in number, structure, and function of mitochondria. When you place the nucleus of the tumor cell into the cytoplasm of the normal cell, you get regulated growth, not dysregulated growth. When you place the normal nucleus into the cytoplasm of the tumor cell, you either get dead cells or cells with dysregulated growth. These are the exact opposite. When you look at all of these data together, what I've just gone through, you have to be cognitively impaired to think that cancer could be a gene-driven phenomenon. It's not a gene-driven phenomenon. I blasted the NIH, the National Cancer Institute. We wrote letters saying, "Who's running the ship down there?" Because they say cancer is a genetic disease. People are writing them and saying the evidence, the theory, the data no longer support this. They mindlessly go through the process saying it is, and not carefully evaluating the data that say otherwise. I view them as greylag geese. They just do mindless things without any connection to reality. I go after them about that, because the evidence is not there. I told you Warburg had this pegged. He knew, but he didn't know all of the stuff that we now know. Derek cleared up a lot of this mess. So cells ferment glutamine. This is the second major fermentable fuel. All major cancers have OxPhos insufficiency linked to glucose and glutamine fermentation, which is the common pattern; we see it everywhere. If you look at the structure of the mitochondria in cancer—this is glioblastoma—you have the stripes, the cristae. 
We've never found cancer with normal content or composition of cardiolipin, the major lipid that controls the electron transport chain; it's abnormal in all major cancers. The structure is abnormal in all major cancers. What I find, Mike, is that you and I, most biologists, know that structure determines function. This is a foundational principle of biology. It seems like everybody except oncologists knows this. They all say mitochondria are normal in cancer. I don't know when they look at these pictures what they think. Here's breast cancer: nice stripes in the normal breast cells; abnormal mitochondrial fragmentation in the cancer. Colorectal cancers have ghost mitochondria. I went through and showed all the different cancers—abnormalities in number, structure, and function of mitochondria. All major cancers have accumulation of lipid droplets. Cancer cells can't burn lipids or the ketone body beta-hydroxybutyrate, because if they do they'll produce reactive oxygen species, blow up, and die. They store lipids in the cytoplasm as a protective mechanism against death from reactive oxygen species. We see that this is a signature of OxPhos insufficiency when you see lipid droplets. There are some beautiful studies that I can explain to show how they prove that. These lipid droplets are there because the mitochondria are defective, not the reverse. Most of the energy we get is from OxPhos, with CO2 and water as the predominant waste products. When you look at cancer, the waste products are lactic acid and succinic acid. These are the waste products of glucose and glutamine fermentation. We know that because we've measured it. Derek and others, a lot of us, measured it. We know that oxygen consumption in cancer cells—they consume oxygen, but it's not largely connected to ATP production. It is largely connected to ROS, reactive oxygen species. We showed this in iScience. You can transition away from OxPhos. This is a gradual process. It doesn't happen overnight. 
It's a slow progression away from OxPhos and a replacement using substrate-level phosphorylation. There's a direct relationship between how much energy comes from substrate-level phosphorylation and how malignant the tumor is.</p><p><strong>[19:31] Thomas Seyfried:</strong> The linkage between malignancy and fermentation is so strong; it's almost an absolute linkage. We have a threshold here because we think there's a point when it becomes irreversible, when you can no longer recover a sick cell. It's an arbitrary threshold; it might differ for different cancers, different types. This is what Derek put together. When you have glucose and glutamine coming through the two major glycolytic and glutaminolysis pathways, the byproducts of these pathways cross-connect, and thereby you can produce tremendous biomass, very rapid growth. The cells will survive a little bit and grow a little bit on glucose alone, and the same thing with glutamine alone, but when you have these two fuels working together in a synergistic way, when you put the two together, they just explode with growth. Lactic acid and succinic acid are the metabolic waste products. They acidify the extracellular microenvironment, preventing radiation, chemo, and immunotherapies from working effectively because you've acidified the microenvironment. Succinic acid also paralyzes the immune system to prevent immune system killing. This summarizes 100 years of cancer research. We know what causes cancer: any number of environmental causes, or even the rare inherited germline mutations. We showed in a big paper that they all damage OxPhos in one way or another. All of these—inflammations, oncogenes, carcinogens, intermittent hypoxia—produce reactive oxygen species causing the mutations in the nucleus: the chromosomal abnormalities, the point mutations; all of those are downstream effects. 
The retrograde signaling system, by which the mitochondria signal to the nucleus, turns on HIF-1 alpha and MYC, opening the floodgates to glucose and glutamine, so you can then drive dysregulated growth and energy metabolism. Sonnenschein and Soto talk about the default state. The default state of cells is proliferation. The default energy state is fermentation. That's linked to dysregulated cell growth, because it's the calcium currents controlled by the mitochondria that control the cell cycle. When the cell cycle is no longer under control, the cells fall back on their default state, which is proliferation driven by a fermentation metabolism. We went through and showed sustained angiogenesis. All of these things can be tracked back to mitochondrial dysfunction. For metastasis, it's a fusion hybridization between a macrophage or microglia and a cancer stem cell, forming a hybrid cell. This hybrid cell is driven by glucose and glutamine, and we now know how to kill and manage metastatic cancer. We've shown that in the mouse beautifully, and now we're seeing it in humans. This is Derek's big paper: amino acid–glucose fermentation maintains constant ATP. He did a magnificent job. We worked with Christos Chinopoulos, a world expert on mitochondrial substrate-level phosphorylation. Then we published a big paper on the Warburg hypothesis and the emergence of the mitochondrial metabolic theory. Tomás Duraj is showing the difference between the Crabtree effect and the Warburg effect and the confusion that everybody has made when talking about Warburg effects. If we know what to do, how do we manage it? We need to lower blood sugar and elevate ketone bodies. You can do that with water-only fasting or calorie restriction. This is one of our early studies with Purna Mukherjee. Just by cutting calories and cutting a high-carbohydrate diet by 40%, which is like water-only fasting in humans, you get a huge reduction in tumor growth. 
We know that when you lower blood sugar—each of these are mice on a different diet—ketone bodies go up as an evolutionarily conserved adaptation to food restriction, and when blood sugar goes down, tumor size goes down. We were the first to do this in the mouse. Now people are looking at prostate, colon, breast cancer. When people have high blood sugar, overall survival is generally much reduced; tumors grow much faster. When you put the patient or the animal into a state of nutritional ketosis, you reduce inflammation and angiogenesis, and you actually kill tumor cells. This is going to be the new tool Derek is working on. We're working on the glucose ketone index. You measure the millimolar ratio of glucose to ketones, and you get a quantitative biomarker that's linked to managing and preventing cancer. We've defined 2.0 or below as a biomarker for killing and managing cancer. This will be the new standard of care for cancer once people come to realize that it's a mitochondrial metabolic problem. So we press down glucose with ketone supplements, stress management, exercise, and then bring in specific low-dose targeting of glucose and glutamine.</p><p><strong>[24:20] Thomas Seyfried:</strong> Hyperbaric oxygen will kill tumor cells when the patient is in nutritional ketosis. Right now, our group and the group that we work with, we're perfecting dosage, timing, and scheduling. We bring people from a sick state to a managed state, and with improvements, we are hoping to get resolution on this. Proof of principle, a very aggressive glioblastoma. This is the VM-M3 mouse, a natural spontaneous glioblastoma. We showed how we managed it by targeting glutamine with the drug DON and putting the animals on a calorie-restricted ketogenic diet. We got a tremendous increase in overall survival. And again, all the papers that Purna put out, all the information is in the main paper. The blue line is high carbohydrate by itself. The green is diet by itself. 
The red is the drug DON by itself. When you put the drug and the diet together, you get long-term survival, and we're seeing much improved overall survival. How does it work in humans? Let's show you some evidence. This is a glioblastoma from a human. You can't cut these out in every person; it's a very deadly tumor. The purple cells are the tumor cells around blood vessels. They spread throughout the whole neocortex, so you can't really surgically resect glioblastoma. I always like to show these curves. These are the survival curves. We have a problem in science today. Half the stuff in the cancer field can't be reproduced when you go and try to reproduce it. But one thing can be reproduced and that's how fast glioblastoma kills you. These are the data from five different surgical institutions. You can see the standard of care; very few people survive beyond 40 months. I said, no improvement in a hundred years. Bailey and Cushing, 1926, 8 to 14 months with no therapy. Today, with all the stuff, we get 17 to 18 months, almost no improvement. I always show people why, because when you irradiate the brain of cancer patients, you break apart the glutamine-glutamate cycle, freeing up massive amounts of glutamine. You give steroids because the brain swells from the radiation. This causes hyperglycemia, and you get high glutamine, and these patients die from a combination of a bad tumor together with the absurdity of how they're trying to treat them. We tried this on a guy from Egypt; this is one of our first clinical papers. This is the corn guy. He was 28 years old. He had glioblastoma. We put him on metabolic therapy with an awake craniotomy. We pushed radiation off for three months. They wanted to push the radiation. After a couple of weeks, we said no. They eventually had to irradiate and give him temozolomide. He did well. At about 29 or 30, he had some headaches and he died. 
Elsakka did an autopsy on his brain to see what was going on, and they didn't find tumor cells. He died from radiation-induced brain liquefaction necrosis, so he was killed by the treatment rather than by the tumor. We always like to showcase reports of people who have suffered immensely, like this young girl, Brittany, diagnosed with GBM in 2014. Here's her husband. She received the standard of care. You can see her face is swollen. This is moon face from high-dose steroids, meaning that her blood sugar levels are going to be very high. She decides she doesn't want to live anymore. So she goes into People magazine. Brittany, 29 years old, plans to end her own life. In three weeks, she goes to Oregon, dies with dignity with her family. As I always say, it's bad when your patients kill themselves rather than go through the treatment you're offering them.</p><p><strong>[29:10] Thomas Seyfried:</strong> Now, this is our man here, Pablo. Thomas and I, we've all spoken to Pablo. We knew him really well. Nice guy. He came to me in 2014. He was one of these purist guys. He didn't want any radiation, no chemo, no steroids, nothing. They told him he'd be dead in nine to 12 months if he didn't do radiation and chemo. They told him his tumor was inoperable. He survived for 10 years. Here is his tumor when it was first detected; they said it was inoperable. He did metabolic therapy for a couple of years and you can see how big the tumor became, but it was indolent because these tumors will kill you in months — this was years. So we told him, yeah, take it out, so he went in and got it out, and he did really well. Then we measured; we have five years of measurements of his blood glucose and ketones. A massive amount of data on this one person. We recalculated his GKI. We take the ratio of blood sugar to blood ketones, and you can see we replotted his GKI. Look how low it was. Beautiful. So he was managed predominantly by metabolic therapy. Pablo died. 
Thomas and I had a long conversation with him a week before he passed away. He was laughing; we were having a good time. He was a very sharp guy. He was going in for his fourth surgical procedure on a tumor which they had originally told him was inoperable. He came out of the surgery really well, thumbs up. He had a big conversation with his wife that day, but he died that night from a cerebral hemorrhage as the result of the surgery. So he never died from the tumor. He lived 122 months. This is our most recent study out of Greece, where my colleagues compared standard of care, temozolomide plus radiotherapy, against temozolomide plus radiotherapy with ketogenic metabolic therapy. You can see this tremendous difference. Four out of six of the guys on the ketogenic diet survived three years or longer, whereas only one out of 12 on the standard diet survived three years. This diet wasn't bad: salmon, olive oil, sardines, avocado, and some of these kinds of things. Some of these guys refused to give up their sugar. They said, "I'd rather die than give up my sugar," and they did die. That's their choice. But you can see how powerful metabolic therapy is in managing these cancers. There's another little guy — this is Danny Sheen from Marshfield, Massachusetts — diagnosed with pineoblastoma. Look at his face, all swollen from steroids. Here he is a week before he died. Surgery, radiation, all the same **** they give to these little kids. It's tragic. It's just so terrible what they're doing to these kids. I have a big paper that's going to come out.</p><p><strong>[34:00] Unknown:</strong> It's Cell Reports Medicine.</p><p><strong>[34:03] Thomas Seyfried:</strong> Cell Reports Medicine. This is provisionally accepted. We took the glioblastoma cells and put them into young mice, 20 days old, representing basically what Danny Sheen's was. Here you see the control guys. We had a restricted ketogenic diet, DON, and bendazole. 
This is an antiparasitic medication that targets glutamine. We target glutamine and glucose at the same time. These damn mice live so much longer. They have a high quality of life. It's just so much different. This is a person, Robin; she's still alive with us today. She was from Cleveland, Ohio. She had breast cancer that metastasized to her lungs and brain, and her femur and bones. They said there's nothing more we can do. All the standard of care stuff wasn't working. She got on a plane and went to our colleague. We have a big clinic over there in Istanbul, Turkey, where we're using metabolic therapy. He put her on metabolically supported chemo. You bring in a very low dose of chemo when you're in a state of nutritional ketosis. All of this disappeared. The infiltration and the metastasis were all killed. This was 2025, but we were talking with Slocum recently. She's still doing really well. We're going to do a big follow-up seven years out now. We got a lung cancer guy who's still alive. Adam Amadotus had lung cancer spread to his brain and his liver and everything. My colleague Thanasis—even Julio—said, "We're going to put him on a high-fat diet." One of the attending oncologists said, "Oh, you've got to be careful. He'll have elevated cholesterol, and that would be bad." The guy was going to be dead in three weeks, for Christ's sake. They're worried about elevated cholesterol. The guy's still alive. He's still alive today with a little bit of dyslipidemia, but he's alive. It works tremendously in prostate cancer. This is Tomás Duraj's big paper here with all the folks that are now participating in managing cancer. We have a framework, a ketogenic metabolic framework for managing glioblastoma. We have a lot of nutritionists, dietitians, basic scientists, clinicians. They all want to get on board now and start working this out. I always like to show our dog. 
We put a dog that was destined for death on metabolic therapy.</p><p><strong>[37:20] Thomas Seyfried:</strong> He had a big mast cell tumor under his nose. Here's his face and nose. And the woman followed what we said. She cut the calories down, gave the dog raw chicken with the bones still in the meat, some fish oil, and some raw egg, and the tumor melted off the dog's face. The doctor said, you're going to have to have it cut off. The vet said the dog was going to get sick, get diarrhea and everything. They didn't do any of that. No surgery, no radiation, no chemo. What happens is when you take animals and people and you fast them, the body will attack the tumor and use the tumor as fuel for the rest of the body. It's called autolytic cannibalism. The dog lived 15 and a half years and died from cardiac failure in old age; the cancer never came back. So people say terminal cancer; it's not terminal. We know now we can keep people alive if you do it the right way. We published this big paper with Christos and myself showing that the somatic mutation theory is essentially like geocentrism, with deferents, equants, and epicycles piled on to try to make it work. If we move the mitochondria to the center of the problem, you're going to have a much greater opportunity to manage cancer because it's a mitochondrial metabolic disorder. So I conclude by saying I went through this kind of fast. I give a whole-semester course on this to the students so they can really dive deep into the science supporting all this. Thomas and I are starting this new International Society of Metabolic Oncology, where clinicians and dietitians are all getting together. We're going to standardize treatment for cancer based on cancer being a mitochondrial metabolic disease. We have to work out some of the dosage, timing, and scheduling issues. We're trying to formulate the society right now. 
Right now, the funding that supports my research in this lab is philanthropy and private foundations. We don't get money from the NIH. We're getting a lot of people coming on board who want to see this happen. We have a lot of case reports in the works. More and more people will be publishing this. We're not ready to do a large clinical trial because the only way we can do that is if we do it ourselves. Thomas, Derek, and I are the ones who do this, because we're the only ones who really understand all the nuances. We're trying to train these physicians to know what to do and how to do it. The dietitians need to know what kind of foods they should be using. Once we have that, then we're going to run bigger and bigger trials. That's the goal of this new society. We plan to drop the death rate of this disease, no question about it. The biggest problem standing in the way is that the people at NIH think it's a genetic disease. So as long as they consider it a genetic disease, you're not going to be able to make the kind of advances that you need to make. That's the biggest block right now. The NIH is part of the problem rather than the solution. Until they can get on board and recognize what's going on, we're going to have to suffer 1,700 people a day dying from cancer, or the 626,000 predicted this year. That's where we stand. I've given you an overview, but to do a deeper dive, you have to look at the science, the control experiments that we've done, and how we've run all these experiments. Tom is here. Derek did a lot of these experiments and got his PhD on this. They can answer any questions that you might have on that.</p><p><strong>[40:39] Michael Levin:</strong> Thanks very much. That's remarkable. Absolutely remarkable. I have one basic question, and probably a bunch more on the metabolism side. You mentioned at the beginning this notion of how the mitochondria participate in the morphogenetic fields that are multicellular in scale. 
Could you talk about that a little bit? Because we're very interested in that. I want to understand what you're thinking about the mitochondria.</p><p><strong>[41:05] Thomas Seyfried:</strong> We learned more about that from Picard, at Columbia University. He's one of the leaders on mitochondrial communication: not only how mitochondria communicate or regulate the internal physiological state of the cell, but, as I was surprised to see from his work, how they actually communicate across different cells and through the morphogenetic field itself, through regulatory bioenergetic signaling. I didn't know how extensive the knowledge was about how mitochondria communicate across fields. When Sonnenschein and Soto talk about the tissue organization field theory and how cancer originates that way, it's very clear from Picard's work how you could damage mitochondria in a group of cells by disrupting the morphogenetic field itself. That's new to me, and we're still working it out. What was clear to us within the cell itself, because ultimately what starts the dysregulated cell growth is an individual cell dumping out fermentation waste products, is that this transition from OXPHOS to fermentation was the key for driving the dysregulated growth. But what is the linkage that makes the cell dysregulated in its growth? That was what was most interesting. That relates to calcium signaling and the control of the cell cycle. It goes beyond that: why the cells are no longer responding to cues from the environment, why they lost contact inhibition. That's because calcium controls cadherins on the surface of the cells. Then I started looking more, and Picard says you've got communication signaling not only internally and to a few cells outside; there's a way, through water channels and all kinds of things that were unknown to me, that mitochondria control the overall bioenergetic system of the whole body. 
Because they've all derived from a population of mitochondria in the fertilized egg. The egg itself comes out of a maternal system. They all become slightly different in different organs; liver is a little different from brain or kidney, depending on what they have. But they all have a commonality in how they work. They communicate throughout the whole body through fields. I was shocked by this. When we look at chronic diseases, we see obesity, type 2 diabetes, coronary disease, neuropsychiatric problems, dementia, cancer. All of these are mitochondrial failures. All of these are attacks on mitochondrial function. We're learning about that. The best way to learn more would be to look up the work of Picard. He's discussed in great detail how these cells communicate with each other through mitochondrial bioenergetic linkages and signaling. So we're starting to put this together in a broader way. And as he says, the "powerhouse of the cell" is just the one little bit that people talk about. What he's describing is an altogether new kind of networking that exists for general health and what we call metabolic homeostasis throughout organs and systems, all linked to the capability of this one organelle. That organelle controls how genes are turned on and off in the nucleus. Through epigenetic signaling, the mitochondria control what the nucleus is doing from one cell to another. I think we're learning a lot more about this. This is the new horizon. This is going to be really important.</p><p><strong>[45:21] unknown:</strong> I want to say this is awesome. I'm a big mitochondria fan. When I did my PhD, it was on metabolic engineering for hydrogen production. I did a lot of fermentation experiments trying to get increased hydrogen. I've always kept in mind the metabolism and what was going on, looking at the mitochondria specifically. 
Everything that you're saying is resonating with things that I've read and looked into. I've also found that mitochondria communicate across cell membranes with one another. They have direct communication with mitochondria on opposing membranes, which I thought was really fascinating. One of the things I'm also interested in is that we work with bioelectric modulation, with a lot of different drugs that change different ion channels and their function. One of the things we're seeing is that mitochondria have their own ion channels, potassium channels and calcium channels. Some of those channels are really important for the function of the mitochondria. It's in preprint right now, so I can go ahead and tell you the name of the drug: we're working with clofilium, which is a potassium channel blocker. It's not just a potassium channel blocker; it has all these other promiscuous effects on different channels. One of the things it's been known for is that it can change the metabolic output of mitochondria. It switches cells more towards pentose phosphate pathway metabolism, and also apparently increases the pH around the cell; they're producing some other fermentation product from it. What they did find is that in models with a defect where you basically have impaired mitochondrial biogenesis, adding clofilium actually increases the mitochondrial output. It increases mitochondrial biogenesis, and it also increases oxidative phosphorylation. In my work, it increased the membrane potential of the mitochondria over time. There are things that our ion channel drugs are doing to the mitochondria, and they may actually be working directly on the metabolism of the mitochondria. It'd be really great to work with you guys to analyze those effects and also to screen different compounds that we work with to see what they do to those metabolic outputs. We don't have any equipment here. 
All the mitochondrial studies usually isolate the mitochondria first and then do the experiments on the isolated mitochondria. All we have are biosensors: we have calcium, we have ROS, and we can also look at turnover of the mitochondria. It'd be great to work with you guys if you have the specialized equipment to do the isolation and to actually look at what's going on when we block those ion channels.</p><p><strong>[48:53] Thomas Seyfried:</strong> Mike Kiebish, who is now the senior scientist at Berg, worked in my lab. He isolated and purified the mitochondria out of the cells. It's laborious, but we're going to be doing that. Mitochondrial transfer is another thing we plan to work on, to see whether we can reverse pathology. It's like putting a new engine in your car: can you transform everything back to a normal state by putting a new engine into the system? As for the potassium channel blockers and what they might be doing, the answer is we don't know; we would have to look at the systems that we have. How did you measure oxidative phosphorylation? Did you...</p><p><strong>[49:53] unknown:</strong> We're looking at the membrane potential of the mitochondria. It was the far red divided by green: you take MitoView Deep Red, which looks at the potential, and divide it by MitoView Green, which looks at the amount of mitochondria. That gives you what your mitochondrial potential is.</p><p><strong>[50:21] Thomas Seyfried:</strong> By looking at amounts.</p><p><strong>[50:23] Unknown:</strong> Are you doing this in a cancer cell model or?</p><p><strong>[50:28] unknown:</strong> Yeah, colon cancer cells.</p><p><strong>[50:30] Unknown:</strong> Isolated mitochondria or just--?</p><p><strong>[50:32] unknown:</strong> This is in the cells, intact cells. 
I was pretty surprised, because I thought if this was causing some ROS buildup, I would see a decrease in the mitochondrial potential from uncoupling, but I didn't see that at all. I found these papers on POLG mutations and how clofilium, even at low concentrations, was rescuing these POLG defects. It's definitely doing something to the mitochondria to make them more effective.</p><p><strong>[51:06] Unknown:</strong> I think this would be a topic of discussion that we could have. In our view, if you already have a cancer cell line, then the mitochondria are definitely there, depending on the model. But oxidative phosphorylation itself would be insufficient on its own to keep proliferation active, and that's why they shift to fermentation. Within a model, you can increase or decrease different parameters of what you would call OxPhos, oxidative phosphorylation, through different measurements. If you measure oxygen consumption with your treatment, maybe it goes up, but that doesn't really tell us much about the functional adaptability of those cells to different fuels, or whether oxidative phosphorylation would be sufficient to keep them proliferating or alive as you would see in a normal cell. Then you would need a positive control with a normal cell; perhaps that would be a much better comparison, between a normal cell and a tumor cell.</p><p><strong>[52:19] unknown:</strong> We have just scratched the surface on mitochondrial stuff. Right now, it's proliferation and membrane potential that we've looked at, and we haven't looked at any other parts of that. I would love to look at that, because these compounds we're using are very promiscuous, hitting all sorts of different ion channels, and I think they may be hitting mitochondrial ion channels in some cases. If that's the case, what does that do to the energetics? 
Is it something that could potentially boost metabolic therapy?</p><p><strong>[53:01] Unknown:</strong> I wonder about cell culture, and I think this was also discussed in one of Dr. Levin's papers: from your perspective, the bioenergetic field or the connection in cell culture is already altered somehow. The cells are not part of a larger morphogenetic field, but they grow out of control in a 2D plane. Maybe extrapolating from that to the in vivo system is also complicated. We could learn what happens inside the cell, but making the connection to the larger tissue is difficult.</p><p><strong>[53:40] unknown:</strong> Absolutely. I'm 100% with you on that. One of the things that I've been really trying to work on is developing a better in vitro model, something that is more clinically relevant. We use a bunch of different cell lines: the cancer cell line, endothelial cells, and fibroblasts, all from that same tissue. We mix those together, make a spheroid, and embed that in a fibrin gel containing human dermal fibroblasts that secrete the growth factors necessary for the endothelial cells to start sprouting. It's a very complex multicellular model that gives you more of what the tumor microenvironment would look like in that area. You could even go further than that and look at what natural killer cells are going to be doing in those types of systems. That's about as close as you can get unless you're constantly doing animal studies. We've got that system down pat. It's a beautiful system. You can see intravasation, you can see angiogenesis, and you can see proliferation of the cancer cells themselves in that type of system. We could potentially look into that.</p><p><strong>[55:05] Unknown:</strong> Yeah, absolutely. I think there are some basic metabolic requirements that cells need in general and cancer cells in particular. 
It's not often discussed, but most in vitro studies require either serum substitutes or dialyzed serum for the cells to actually divide. Otherwise, they just sit there. So there are some growth factors and other things; that's the whole discussion about the default state of the cell, but if some of those things are missing, the cell simply cannot progress through the cell cycle and proliferation. The same goes if you're going to be combining different cell types that all secrete different things into the microenvironment: it might get a little complicated to measure all these things in such a system, but it could definitely be interesting to see whether they behave differently from the subculture system. I have a question for Michael. I saw that most of the work on cancer that your group has been doing has focused on non-mammalian systems in the past. I know you had that paper where you overexpressed KRAS in Xenopus laevis, and then with depolarization you could control the proliferation in melanocytes; I also saw some other papers. Is that because these models are more available in your group? Or is it perhaps an evolutionary disconnect when you go from these invertebrates or, I wouldn't say less specialized, vertebrates, to mammals? Perhaps the mammalian systems through evolution lost the capacity for regeneration to hyperspecialize their tissues, so it's more difficult to alter some of these bioelectric fields or bioelectric states. Or is it simply that you're working on it, but you haven't got to these systems yet? I think you're muted.</p><p><strong>[57:23] Michael Levin:</strong> Certainly, we are moving into mammals and human tissue. As Juanita's prior work shows, and she has some papers coming soon that you'll see, we're absolutely going into mammals. We've also done work on breast cancer with Madeleine Oudin and some other collaborators. I don't think there's any evolutionary issue here. 
I think the basic mechanisms are very highly conserved. One of the reasons that we do like Xenopus and some other regenerative systems is that those regenerative kinds of responses are exactly what we need to normalize cells. Part of our lab does regenerative medicine approaches to try to induce regeneration in mammals. I suspect we can get it working there; we think it can work. There are mammals that regenerate: deer antlers, spiny mice, and things like this. I certainly think we can get it activated. I think that would ultimately be the kind of treatment that you would have. It would be a regenerative response that would grab strong morphogenetic control, but also metabolic control, over these cells. We treated Xenopus as a stepping stone because the optogenetics and everything else that we did was much easier to do there. We showed proof of concept. Now we're moving into mammals and humans. That's the future work with Juanita and others.</p><p><strong>[59:02] Unknown:</strong> I just wanted to ask out of curiosity: that was Rose and Wallingford, where they put the frog tumor into a salamander limb, and then they amputated the limb with the tumor in the middle. As it was growing, it integrated and normalized the tumor. I've seen you cite that paper a couple of times. I was thinking whether something similar couldn't be done in the regenerating liver, where you would put a hepatoma in the liver, do a partial hepatectomy, and see what would happen. I wonder if anybody has repeated it. Like the nuclear transfer experiments, this is a foundational piece of evidence that I feel should be getting more attention. Those were frog cells in a salamander. I don't know how the immune system works there, but the innate or adaptive immune system during regeneration, including macrophages, might have rejected those cells. 
I wonder if this could be done in mammalian systems such as the liver, and whether you have been doing some of these experiments.</p><p><strong>[1:00:28] Michael Levin:</strong> A couple of things. First, I think there are a couple of other papers using native salamander tumors that didn't involve any foreign cells. I can try to send you what I have. This is also known in planaria. Planaria are very cancer resistant, but if you do manage to give them a tumor, then amputation clears it up; in fact, the effect is non-local, so you can amputate at the other end of the animal, and as they regenerate, the tumors clear up. There's data like this, but I agree with you: I think the liver would be a fantastic test of this. I've heard claims, and I don't have the clinical knowledge to know if this is true, that the liver, because of its constant regenerative renewal, normalizes tumor foci all the time; that it's a rare case that one actually continues and becomes a problem, but that they come up from time to time and the regenerative processes basically normalize them. I don't know if they disappear or if they just become normal. The liver also has some very interesting bioelectric properties. It's been known for a long time that it hangs out in a middle position between the strongly polarized tissues—most of the tissues of the body are post-mitotic and quiescent—and the depolarized stem cells and cancer cells. So the liver's kind of in the middle. It retains some of that depolarized character. Are you guys in a position to do something like that? I think it'd be a great experiment.</p><p><strong>[1:02:02] Unknown:</strong> I was just thinking about it. Technically, yes, but we would need funding for that, which is a separate question. 
I think that's the point of discussion: whether, in your view, when you talk about the breakdown of communication from the morphogenetic fields and the larger goals of the tissue, even though it's mediated by second messengers and things inside the cell, in the bioelectric perspective that would be a reprogramming. If a cell loses the connection to the larger goals of the organism, that is not per se an irreversible defect inside the cell; it would simply be a disconnect. Speaking for myself, when you look at advanced tumors—perhaps more malignant ones that are selected for persistently malignant tumor cells—even if you fix them temporarily or inhibit proliferation, they tend to revert back to this proliferative state, which we would think is something happening inside the cell that cannot be recovered through signaling from the outside. I don't know what your thoughts would be on that, because even the oncogenes, KRAS and all these things, are happening inside the cell. From our perspective, the mitochondrial alterations would also be happening inside the cell. They have connections to the outside too, and they might be mediating the disconnect and the reversion to unicellular behavior, but they do happen inside. If you have a single cell in a single well, and you put oncogenes on top of it—which we would argue damage oxidative phosphorylation and mitochondrial function—you just have one cell, and you can make it a tumor cell without ever having any connection to the outside.</p><p><strong>[1:04:22] Michael Levin:</strong> I'm not saying that after the disconnection has happened, it doesn't accumulate additional defects that might really make it difficult to work it back into the collective. That's possible. The jury is still out on how much, and at what point, it really becomes a hardware problem, physically broken and irreversible. I'm not sure about that. 
One of the issues we study is the way that collectives versus single cells navigate various problem spaces. Anatomical space is one; transcriptional space is another. We've shown that groups of embryos actually have quite a different transcriptome from single embryos. One of the things they do is exchange information: we can see calcium waves passing between embryos that allow them to resist various teratogens much better. Large groups resist better than small groups, which resist better than singletons. What we haven't done is ask how larger versus smaller collectives navigate metabolic space. This may involve physical defects, but it may be a computational problem as well as a physical problem, because the way that you move in the space of metabolic possibilities, the way that you process metabolic information, and the decisions you make about what to do when certain things happen may be quite different in a group versus in disconnected cells. That's worth taking a look at: whether what we're looking at is really defects in the way they process metabolic information and move through the space. We have to be careful about how much of this is a hardware defect and how much is a software defect, bordering on a cybernetic, cognitive defect.</p><p><strong>[1:06:21] Thomas Seyfried:</strong> When you mentioned that, that's what Picard was talking about: the computational aspects of how mitochondria control the morphogenetic fields. He's going into the depth of what you've just described, but it's pretty much cutting edge; a lot of it is conceptual right now, with data that still needs to be collected. He was speaking exactly about that computational aspect. When you speak about hardware and software, it involves both of these aspects, and trying to get a handle on it has been difficult because of the types of experimental design you would need to separate hardware from software, which requires a very different perspective. 
Now you're asking a different kind of question, and when you start introducing new questions, you start thinking about how you're going to design experiments to answer them. Before you could even formulate a question, you had a lot of things that you saw that you couldn't really put together. I think now, in light of what we're seeing, these kinds of things become more relevant, and a lot more thinking needs to go into this.</p><p><strong>[1:07:47] Michael Levin:</strong> We're set up for a lot of that, though not on the metabolic side. We track other things: bioelectric states, transcriptional states, morphogenesis, in these exact kinds of experiments that track the computational capacities of smaller and larger groups. We haven't pursued the metabolic aspects, but we probably should.</p><p><strong>[1:08:09] Unknown:</strong> I think the bioenergetic question might be very important for your work, because when you talk about all the sodium and potassium channels, the chloride channels, the calcium channels: all of these things require energy. Our focus in the lab has been interrogating the bioenergetic states, which we feel is most relevant for therapy, because the idea is to alter and inhibit the energy production of the tumor cells on the therapeutic side. I don't know, with these electroceuticals that you have been testing—did you see any good results? I think you tested a proton pump inhibitor on some of these things?</p><p><strong>[1:08:52] unknown:</strong> Taprozol, we tested it; it worked really well, especially in combination with other things.</p><p><strong>[1:08:57] Unknown:</strong> On cells, or also in mice to inhibit tumor growth?</p><p><strong>[1:09:00] unknown:</strong> No, we only tested it in cells.</p><p><strong>[1:09:03] Michael Levin:</strong> Can I ask about that, the question of mice in general? What do you think of mice as a model system for this? I'm not an expert. 
What I hear from people is that mouse cancer has been solved 1,000 times; the hard part is getting it into humans. But everybody uses mice as an assay. What's your opinion on that?</p><p><strong>[1:09:23] Thomas Seyfried:</strong> I can speak to that. We've never cured any mouse in our lab. The mice that we work with are all natural. A lot of times you have these genetically engineered things, where you've made them and programmed them in a certain way. And when you grow human cells in a mouse, like in xenografts, the mice have a compromised immune system. You've got 50 million years of evolutionary difference between a human cell and a mouse cell, and you're putting them in these totally different environments. When you put them into mice, the human cells never grow with the aggression that you see in humans. That's why we work with all syngeneic, orthotopic kinds of models. It's a ***** trying to cure those things. We can't. A lot of people don't use that approach; they believe in patient-derived xenografts and all this kind of stuff. They're not natural. These are artificial. When you work with artificial systems, you get artificial information. You have to work with the natural host, the natural environment the cells come from. A lot of human cells don't metastasize. When you say, look at all the metastatic models: what they're doing is injecting human cells into the tail vein of a mouse. This is not metastasis. They do spread to different organs, but not because they spread naturally; they were forced to do that. I've broken down these models for many years, knowing what's the most informative model. When you talk to the field, they seem to be locked into the models that they've developed to get their answers. Often you get a lot of misinformation from that, and therefore discount the whole system. Dogs have cancer; you can use dog models. Or humans—the best model you have is the human. 
The reason why we've had so much success in humans is because we worked it out in natural models in the mouse. Ultimately the test is with the human. You can ferret out mechanisms in vitro and in natural systems. To ferret out molecular mechanisms, we usually have to go to the in vitro system, but you don't want to try to ferret out a molecular mechanism for a phenomenon that doesn't exist. You want to document the phenomenon and then try to break it down in another system. Then you put it back and test it in vivo, and ultimately you test it in the person that has the cancer. Our in vivo systems in the mouse are the most natural. The reason why we've made the advances we have is because we use only natural systems. When we get to humans, Mike, we get much, much better outcomes than in mice. We developed the system in mice, but when we tested it in humans, we got a much better response because of the difference in basal metabolic rate. This is so important, and people completely overlook it. The basal metabolic rate of the mouse is seven times faster than that of the human. The human body has a much greater opportunity to work on things, whereas the mouse is super-accelerated. You really have to be careful about knowing that. A mouse without food lives about six days if it's lucky. Humans, depending on how much body fat you have, can live for months. You have a very different metabolic environment in a human than you do in a mouse. If you have natural systems in the mouse, you can translate them into humans, as long as you understand the differences in basal metabolic rate, which comes down to bioenergetics and the bioelectric relationships to the energetics. We have to be aware of all those things.</p><p><strong>[1:13:38] Unknown:</strong> Gotcha. I definitely agree that not every mouse is created equal, especially with the different mouse models people are using. 
But if I could ask one question, Michael: I'm very interested in the differences between morphogenetic fields in culture, in different types of culture, and in the mouse itself. It seems clear — I believe from your work directly, or at least work you've cited — that changing the morphogenetic field can be an initiating factor that's necessary and sufficient to induce proliferation. Do you also feel that it's that way in vivo, in a mouse system as well? Is it an initiating factor at times? Is it just sufficient, but not always necessary? Could you talk a little bit about that?</p><p><strong>[1:14:30] Michael Levin:</strong> Our work is not so much in vivo in mice on this, although we've done human cells and MSCs and things like that in cell culture with David Kaplan and so on. As far as I can see, the evidence is that it plays that role in vivo normally, even in mammals. It's not the only thing, of course; there are chemical factors and biomechanical forces and things like that too, but it plays that role. Partly what it does is coordinate proliferation rates across distance — the kind of allometric scaling that makes things scale. I think so.</p><p><strong>[1:15:19] Thomas Seyfried:</strong> Okay, if you guys would like to consider how we could work together, that would certainly be an important thing. There are things that we can provide for you, and things that you can provide for us, to move the field in this general direction. We don't have answers to a lot of these things. I think the energetics of how the mitochondria control bioenergetics, the electrical signaling, the signaling cascades that I'm now learning about — we've always been nucleus-centric in everything we've been doing in cancer, and in biology in general: genomic sequencing and reductionism, to the point where we've lost sight of what the bigger issues are. 
There is an interesting connection between individual cells and the outside world, and the way this works is hard to quantify when you try to do a genomic screen, because you have no clue whether the gene expression changes are associated with protein production. It's hard to link those gene expression profiles to actual changes in the morphogenetic activities. The mitochondria, though, seem to be the organelle that offers the opportunity to make these connections for the first time. And the nucleus will just obey whatever the mitochondria are doing. People talk about epigenetics, and we've known that the mitochondria control the epigenetic signaling inside the nucleus. I always found it interesting that the mitochondria have relinquished most of their genes to the nucleus, but they've kept 13 of them, and those 13 control the destiny of the cell. They have a circular genome, present in multiple copies, so it's a fascinating organelle in that regard. Bits and pieces of the mitochondrial genome have become integrated into the nuclear genome as pseudogenes; they're really not expressed, but they integrate into the nuclear genome in many different ways. Why those 13 genes have never been allowed to become part of the nuclear genome, even though the nucleus encodes some of the proteins of the electron transport chain, is interesting. The key ones that determine the destiny of the cell are retained in that mitochondrial genome. If you can have someone else do the job for you, why waste the time doing it yourself? If the big *** nucleus, with a lot of DNA and a lot of chromosomes, can follow the directions of the mitochondria, that saves this organelle from having to replicate everything the other organelle is doing, and it gives the mitochondria much greater control. It's a controlling organelle. It really controls the destiny of the whole physiology. 
The other thing about aging is that we die from the second law of thermodynamics: entropy. All humans, mice, and other organisms have a defined life limit on the planet. The way you live longer is to keep your mitochondria healthy. That will just delay entropy, the second law of thermodynamics, because eventually people die, and they die from disorder. It's interesting when people die of old age: often they're pretty alert up until two or three days before they die. It's almost as if the entire mitochondrial energy system just turns off and you die. But as long as you can keep the system healthy, you can live longer and prevent a lot of the different diseases that you are confronted with. Each of these chronic diseases, in one way or another, increases entropy in a particular organ or in the system itself, and you don't live as long. Clearly, understanding aging is understanding mitochondrial energetics. It's hard to get cancer if your mitochondria are healthy. It's hard to get type 2 diabetes if your mitochondria are healthy.</p><p><strong>[1:19:13] Thomas Seyfried:</strong> You exercise, your mitochondria stay healthy. Ketone bodies, as we've written, are a super fuel. When you burn ketones, you reduce reactive oxygen species. You get more energy per breath of air when you're burning a ketone than when you break down pyruvate or even fatty acids; fatty acids uncouple the mitochondria and create more ROS. All of these are interesting points to consider when we study biological systems. I think the reason why we haven't spent as much time on the mitochondria is that they're hard to see. When you look under light microscopy, you see a big nucleus; you're focusing on that. The mitochondrion is a pleomorphic organelle dispersed through the cytoplasm. You didn't really start to see it until electron microscopy was developed. Warburg did all his work based on chemical measurements. He never looked at mitochondria. He didn't have the tools to do that. 
He based it entirely on readouts of fermentation. When you start looking at mitochondria in a more dynamic way with microscopy and other techniques, you take a deeper look, and I think you're going to find them to be controlling elements of how biological systems function. It's going to be related to the efficiency of energy use and the interaction of different organ systems. We're just beginning now to turn our attention in this direction, especially because it's related to the chronic diseases that are crippling our country and the world. This is now becoming a major problem, and a lot of it has to do with mitochondrial dysfunction in different ways. In the brain you have Parkinson's disease, which is a mitochondrial reactive oxygen process in the cells of the substantia nigra. Those cells die. They don't become cancer. We've always wondered why cardiac myocytes and neurons of the brain rarely, if ever, become tumorigenic. They can't switch to a fermentation metabolism. Their energetic requirements demand oxidative phosphorylation, and when that goes down, they die. They don't become cancer cells. We're starting to see why some cells are more prone to become neoplastic and others are not, and how all this works together. We're starting to see a lot of these connections for the first time. I'm looking forward to a dynamic future, but we have to start by addressing certain questions that we do not have answers to at this time.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Cancer: mitochondria and metabolism - a discussion with Thomas Seyfried and his group</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Thomas Seyfried, Derek Lee, Tomás Duraj and colleagues discuss cancer as a metabolic disease, examining mitochondrial dysfunction, metabolic therapies, ion channels, bioelectric control, and links to aging, regeneration, and disease models.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/JWAwBOdsOAc" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/9eedbe05/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a 1 hr 23 min talk and conversation with Thomas Seyfried, Derek Lee, and Tomás Duraj (with Juanita Mathews from my group) about their work on the metabolic and mitochondrial aspects of cancer.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Metabolic nature of cancer</p><p>(08:52) Mitochondrial theory of cancer</p><p>(34:00) Metabolic therapy in practice</p><p>(40:39) Mitochondria and morphogenetic fields</p><p>(45:21) Ion channels and metabolism</p><p>(53:01) Bioelectric control and regeneration</p><p>(01:09:03) Mouse versus human models</p><p>(01:15:19) Mitochondria, aging, and disease</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a 
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Thomas Seyfried:</strong> We have a lot of information now to show that cancer is predominantly a mitochondrial metabolic disease. We have looked at the number, structure, and function in the mitochondria of all major cancers, and you find abnormalities in these cristae, the number of mitochondria, the function of the mitochondria. It seems to be a common pathophysiological problem in all cancers. They transition from oxidative energy through oxidative phosphorylation to energy through what we have now defined as substrate-level phosphorylation, a very old, ancient way of getting energy. Before oxygen came into the atmosphere two and a half billion years ago, all the cells were fermenters. It was the development of the archaeobacteria and the original fusion between a primitive eukaryotic cell and a type of bacteria that became the mitochondria, which then led to metazoans and all of the development that we know about. Every cancer we look at, all the major ones, are blowing out large amounts of lactic acid and succinic acid. Warburg originally knew that cancer was a disorder of energy metabolism. 
First, he didn't know about the fermentation of amino acids inside the matrix of the mitochondria, or that the TCA cycle itself can be fermented. Second, he made the mistake of assuming that oxygen consumption was an accurate biomarker for ATP synthesis through oxidative phosphorylation. Derek and others clearly showed that the consumption of oxygen in cancer cells is not directly linked to significant ATP production. We worked with Christos Chinopoulos from Semmelweis University in Budapest, who had long been suggesting there is a substrate-level phosphorylation mechanism, the succinyl-CoA ligase reaction, in the matrix of the mitochondria. We know from our work in cancer that succinic acid, succinate, is released by tumor cells. Succinate should never be released from the TCA cycle. The reason it's being released is that it is the second major waste product of fermentation; it comes through the glutaminolysis pathway. The cancer cells are essentially transitioning away from OxPhos and moving toward a fermentation mechanism, substrate-level phosphorylation: one pathway existing in the cytoplasm — the glycolytic (Embden–Meyerhof–Parnas) pathway — and the other in the matrix of the mitochondria itself, which is a fermentation pathway. In his PhD work, Derek showed that if you grow cells in just glutamine, no glucose, even in hypoxia or in the presence of cyanide, you still produce ATP and succinic acid. That is a second major pathway. We found that there is a powerful synergistic parallelism between the glycolytic pathway in the cytoplasm and the glutaminolysis pathway in the mitochondria. The two pathways feed off each other, leading to dysregulated growth. 
The control of the cell cycle, the checkpoints of the cell cycle, contact inhibition, and the behavior of cells in relation to their neighbors are all linked to the calcium currents that are controlled by the mitochondria-associated membranes and the ER–mitochondrial connections. All of that controls whether a cell should remain quiescent or grow, and if it grows, it's regulated growth. If those calcium currents linking to the inner membrane of the mitochondria and the mitochondria-associated membranes are abnormal in any way, the cell loses its growth control and no longer responds accurately to environmental cues. Because the checkpoints of the cell cycle are all controlled by calcium–calmodulin signaling, the mitochondria control the destiny of the cell. With this behavior, the cells lose growth control; the cadherins that normally would be involved in contact inhibition no longer work. In the morphogenetic field itself, there are mitochondrial waves that link the behavior of cells across these morphogenetic fields.</p><p><strong>[04:24] Thomas Seyfried:</strong> All of this is related to how the energy and the integrity of the mitochondria work. As far as metastasis is concerned, we know that all metastatic cancers share characteristics with macrophages. Macrophages are genetically programmed to move in and out of tissues and live in the circulation. When you have disordered growth in the microenvironment, you have inflammation, and normal macrophages come in as wound-healing cells. They fuse with cancer stem cells, forming hybrid cells. These hybrids have abnormal energy metabolism, but they're genetically programmed to move in and out of the bloodstream. They are rogue macrophages. We know how to kill them because they're so dependent on glutamine and glucose. 
Our therapeutic strategy, ketogenic metabolic therapy, is a press-pulse approach: we press down glucose with diet and exercise and elevate ketones to enhance mitochondrial function in normal, non-neoplastic cells. Then we come in with drugs that will target the glutamine. There's no way these cells can grow without glucose and glutamine. We've interrogated the cells. Derek did beautiful work going through all the amino acids and all the fuels that cells can possibly use. They're heavily dependent on glucose and glutamine. When you target that in people and mice with metastatic cancer, you slaughter these tumor cells. We know basically how cancer starts. We know what it's dependent on, and we know pretty much how to manage it. Now we go into the clinic and start treating people with pancreatic, prostate, breast, and colon cancers. We're starting to see that when you do it the right way, the way Tomás Duraj and our groups have been devising it, people live much longer with a higher quality of life. We never say cure. We only say management, because we don't know how long a person will remain in a managed state. As a matter of fact, we have people whose tumors never go away. They persist for years but are indolent. We've transformed them from a very aggressive neoplasm into an indolent one that can be periodically resected. We did that with a brain cancer patient who lived over ten years having periodic debulkings. We never targeted the glutamine, but we kept the tumor in an indolent state. We're seeing remarkable outcomes in the clinic based on our new understanding of how this all works. Of course, I've just summarized massive amounts of data, and the experimental evidence is in our papers. Derek is here and Tomás Duraj is here. Tomás is doing both the basic research and clinical work. Derek has defined mitochondrial substrate-level phosphorylation as a major force driving these tumor cells. 
So that gives you an overview of where we are and what we're doing with this cancer work.</p><p><strong>[08:52] Michael Levin:</strong> You guys want to show any slides at all, or are we just talking from here?</p><p><strong>[08:59] Thomas Seyfried:</strong> I could show you some slides on a screen share.</p><p><strong>[09:05] Michael Levin:</strong> Yeah, only if you want to.</p><p><strong>[09:06] Thomas Seyfried:</strong> I didn't want to give a formal presentation, but I certainly can show you some of the slides. We had some here yesterday, gave a big lecture on this, but I'll give it in a nutshell. Let's see here. Let me go back here. Okay. So can you see this? We have to do a slide up.</p><p><strong>[09:40] Michael Levin:</strong> You can share. Yeah, just share your screen.</p><p><strong>[09:43] Thomas Seyfried:</strong> Share. Right here. Can you see that, Mike?</p><p><strong>[09:52] Michael Levin:</strong> Yep.</p><p><strong>Thomas Seyfried:</strong> So let me just go for it. Well, I always tell you how many people are dying. I don't know if that's really important for us. But I do go through all the scientific theories. What we're doing is changing the whole picture of what cancer actually is. Whenever you change the theory, like when you go from geocentrism to heliocentrism, you get massive paradigm changes in man's knowledge. We're going to do the same thing when it comes to cancer, because we found that it's not a genetic disease. We use Louis Pasteur compared to Galen, and the Darwin–Wallace theory of evolution as opposed to special creation. So when we look at the mitochondria in cancer, you see all the nice squiggly green; this highlights what a very dynamic organelle they are. They undergo fission and fusion, and they really regulate the entire physiology of the cell through their interactions, not only within the cell but outside the cell. 
So the question that we're confronted with now is: is cancer a genetic disease or a mitochondrial metabolic disease? This paper by Hanahan and Weinberg is, I think, the most highly cited — 75,000 citations or something like that. They clearly state that cancer is a genetic disease, and it's a silent assumption that people just automatically accept. We go through and show that the reason they say it is dogmatic ideology; it's no longer linked to the rational collection of data. What we've done is find a number of papers where you can't find any genetic mutations in certain cancers. That's a big problem. Now, you can't prove anything by negative data. But Vogelstein and others said it's the driver genes that are the really bad ones, that cause dysregulated cell growth. Yet now we're seeing expression of mutations in driver genes in normal tissues of people — p53 and a number of other mutations in cells that never become dysregulated in their growth. So you have cancers with no mutations, and driver-gene mutations in normal cells that don't form dysregulated growth. These are very serious challenges to the statement that cancer is a genetic disease. Some carcinogens, like asbestos, don't cause mutations, but they cause cancer. Aboriginal tribes rarely have cancer. Our closest relatives, chimps — no breast cancer ever recorded in a female chimp. The nuclear–mitochondrial transfer experiments are pretty much the strongest evidence against that. These experiments are from Israel and Schaeffer at the University of Vermont. Schaeffer had an epithelial liver cell that he isolated from the liver of a rat, grew it in vitro, and it became dysmorphic in the way it looked. He took the cells and put them under the skin of the rat. Twelve out of thirteen rats developed a tumor. He didn't know if the original cell was tumorigenic, so he went back, grew up the original cells and put them under the skin of rats, and he didn't see any tumors — 0 out of 11. 
He questioned whether the formation of neoplasia would be due to mutations in the nucleus or to some cytoplasmic element like the mitochondria. So he simply swapped nuclei: he took the nucleus of the tumor cell and put it into the cytoplasm of the normal non-neoplastic cell, and put the nucleus of the normal cell into the cytoplasm of the tumor cell. They got results directly opposite of those predicted if cancer were due to somatic mutations in the nucleus. The tumor nucleus in the cytoplasm of the normal cell produced tumors in only one out of 72 rats, whereas the normal nucleus in the tumor cytoplasm produced tumors in 66 out of 68. These are just the opposite of what you would have expected if cancer were due to somatic mutations. McKinnell did a very similar kind of thing in vivo in frogs, with the Lucké frog. He took cells from a lethal kidney cancer that kills the frogs, separated the cells individually, then isolated the nucleus of a tumor cell and implanted it into an enucleated fertilized frog egg. He was able to clone tadpoles, all grown from the nucleus of the kidney tumor. When I spoke to him, he said he could cut the tails off these tadpoles, and they immediately grew a new tail in an exactly regulated way, directed by the nucleus of a tumor cell — a cell that had been dysregulated in its growth. I said to him, that's because it's not a genetic disease. He absolutely agreed. Before he passed away, I had long talks with him. Rudolf Jaenisch down here at MIT cloned mice from the nuclei of melanoma cells. These embryos would abort; they wouldn't grow to completion as a whole mouse, because they had a number of chromosomal abnormalities and other mutations, but they never formed dysregulated cell growth.</p><p><strong>[14:42] Thomas Seyfried:</strong> All of these findings are inconsistent with cancer being a genetic disease. I wrote a big paper summarizing all of this. This figure now appears throughout the literature. 
It's a summary of dozens and dozens of in vivo and in vitro experiments, replicated over and over again. Normal cells beget normal cells: they have a clean genome and normal, functional mitochondria. Tumor cells beget tumor cells: they do have genetic abnormalities in the nucleus, and they have abnormalities in the number, structure, and function of mitochondria. When you place the nucleus of the tumor cell into the cytoplasm of the normal cell, you get regulated growth, not dysregulated growth. When you place the normal nucleus into the cytoplasm of the tumor cell, you get either dead cells or cells with dysregulated growth. These are exact opposites. When you look at all of these data together, what I've just gone through, you have to be cognitively impaired to think that cancer could be a gene-driven phenomenon. It's not a gene-driven phenomenon. I blasted the NIH, the National Cancer Institute. We wrote letters saying, "Who's running the ship down there?" Because they say cancer is a genetic disease. People are writing them and saying the evidence, the theory, the data no longer support this. They mindlessly go through the process of saying it is, without carefully evaluating the data that say otherwise. I view them as greylag geese: they just do mindless things without any connection to reality. I go after them about that, because the evidence is not there. I told you Warburg had this pegged. He knew, but he didn't know all of the stuff that we now know. Derek cleared up a lot of this mess. So cells ferment glutamine; this is the second major fermentable fuel. All major cancers have OxPhos insufficiency linked to glucose and glutamine fermentation, which is the common pattern; we see it everywhere. If you look at the structure of the mitochondria in cancer — this is glioblastoma — you have the stripes, the cristae. 
We've never found a cancer with normal content or composition of cardiolipin, the major lipid that controls the electron transport chain; it's abnormal in all major cancers. The structure is abnormal in all major cancers. What I find, Mike, is that you and I, like most biologists, know that structure determines function. This is a foundational principle of biology. It seems like everybody except oncologists knows this. They all say mitochondria are normal in cancer. I don't know what they think when they look at these pictures. Here's breast cancer: nice stripes in the normal breast cells; abnormal mitochondrial fragmentation in the cancer. Colorectal cancers have ghost mitochondria. I went through and showed all the different cancers — abnormalities in number, structure, and function of mitochondria. All major cancers show accumulation of lipid droplets. Cancer cells can't burn lipids or the ketone body beta-hydroxybutyrate, because if they do, they'll produce reactive oxygen species, blow up, and die. They store lipids in the cytoplasm as a protective mechanism against death from reactive oxygen species. Lipid droplets are a signature of OxPhos insufficiency. There are some beautiful studies that I can point to showing how they proved that. These lipid droplets are there because the mitochondria are defective, not the reverse. Most of the energy we get is from OxPhos, with CO2 and water as the predominant waste products. When you look at cancer, the waste products are lactic acid and succinic acid. These are the waste products of glucose and glutamine fermentation. We know that because we've measured it — Derek and others, a lot of us, measured it. We know that cancer cells consume oxygen, but that consumption is largely connected not to ATP production but to ROS, reactive oxygen species. We showed this in iScience. You can transition away from OxPhos. This is a gradual process. It doesn't happen overnight. 
It's a slow progression away from OxPhos and a replacement with substrate-level phosphorylation. There's a direct relationship between how much energy comes from substrate-level phosphorylation and how malignant the tumor is.</p><p><strong>[19:31] Thomas Seyfried:</strong> The linkage of malignancy to fermentation is that strong; it's a very strong linkage. We have a threshold here, because we think there's a point at which it becomes irreversible, when you can no longer recover a sick cell. It's an arbitrary threshold; it might differ for different cancers, different types. This is what Derek put together. When you have glucose and glutamine coming through the two major pathways, glycolysis and glutaminolysis, the byproducts of these pathways cross-connect, and thereby you can produce tremendous biomass, very rapid growth. As long as you have these two fuels working together in a synergistic way, the cells will survive a little bit and grow a little bit on glucose alone, and the same with glutamine alone; when you put the two together, they just explode with growth. Lactic acid and succinic acid are the metabolic waste products. They acidify the extracellular microenvironment, preventing radiation, chemo, and immunotherapies from working effectively, because you've acidified the environment. Succinic acid also paralyzes the immune system, preventing immune-system killing. This summarizes 100 years of cancer research. We know what causes cancer: any number of environmental factors, or even the rare inherited germline mutations. We showed in a big paper that they all damage OxPhos in one way or another. All of these — inflammation, oncogenes, carcinogens, intermittent hypoxia — produce the reactive oxygen species causing the mutations in the nucleus: the chromosomal abnormalities, the point mutations; all of that is a downstream effect. 
The retrograde signaling system by which mitochondria signal to the nucleus turns on HIF-1 alpha and MYC, opening the floodgates to glucose and glutamine, so you can then drive dysregulated growth and energy metabolism. Sonnenschein and Soto talk about the default state. The default state of cells is proliferation, and the default energy state is fermentation. That's linked to dysregulated cell growth, because it's the calcium currents controlled by the mitochondria that control the cell cycle. When the cell cycle is no longer under control, the cells fall back on their default state, which is proliferation driven by a fermentation metabolism. We went through and showed sustained angiogenesis. All of these things can be traced back to mitochondrial dysfunction. For metastasis, it's a fusion hybridization between a macrophage or microglia and a cancer stem cell, forming a hybrid cell. This hybrid cell is driven by glucose and glutamine, and we now know how to kill and manage metastatic cancer. We've shown that in the mouse beautifully, and now we're seeing it in humans. This is Derek's big paper: amino acid and glucose fermentation maintains constant ATP. He did a magnificent job. We worked with Christos Chinopoulos, a world expert on mitochondrial substrate-level phosphorylation. Then we published a big paper on the Warburg hypothesis and the emergence of the mitochondrial metabolic theory. Tomás Duraj is showing the difference between the Crabtree effect and the Warburg effect and the confusion everybody has made when talking about Warburg effects. If we know what to do, how do we manage it? We need to lower blood sugar and elevate ketone bodies. You can do that with water-only fasting or calorie restriction. This is one of our early studies with Purna Mukherjee. Just by cutting calories and cutting a high-carbohydrate diet by 40%, which is like water-only fasting in humans, you get a huge reduction in tumor growth. 
We know that when you lower blood sugar — each of these is a group of mice on a different diet — ketone bodies go up as an evolutionarily conserved adaptation to food restriction, and when blood sugar goes down, tumor size goes down. We were the first to do this in the mouse. Now people are looking at prostate, colon, and breast cancer. When people have high blood sugar, overall survival is generally much reduced; tumors grow much faster. When you put the patient or the animal into a state of nutritional ketosis, you reduce inflammation and angiogenesis, and you actually kill tumor cells. This is going to be the new tool Derek is working on. We're working on the glucose ketone index: you measure the millimolar ratio of glucose to ketones, and you get a quantitative biomarker that's linked to managing and preventing cancer. We've defined 2.0 or below as a biomarker for killing and managing cancer. This will be the new standard of care for cancer once people come to realize that it's a mitochondrial metabolic problem. So we press down glucose with ketone supplements, stress management, and exercise, and then bring in specific low-dose targeting of glucose and glutamine.</p><p><strong>[24:20] Thomas Seyfried:</strong> Hyperbaric oxygen will kill tumor cells when the patient is in nutritional ketosis. Right now, our group and the groups we work with are perfecting dosage, timing, and scheduling. We bring people from a sick state to a managed state, and with improvements we are hoping to see resolution. As proof of principle, take a very aggressive glioblastoma. This is the VM-M3 mouse, a natural spontaneous glioblastoma model. We showed how we managed it by targeting glutamine with the drug DON and putting the animals on a calorie-restricted ketogenic diet. We got a tremendous increase in overall survival. Again, all the information is in the papers that Purna put out, in the main paper. The blue line is high carbohydrate by itself. The green is diet by itself. 
The red is the drug DON by itself. When you put the drug and the diet together, you get long-term survival, and we're seeing much improved overall survival. How does it work in humans? Let me show you some evidence. This is a glioblastoma from a human. You can't cut these out in every person; it's a very deadly tumor. The purple cells are the tumor cells around blood vessels. They spread throughout the whole neocortex, so you can't really surgically resect glioblastoma. I always like to show these survival curves. We have a problem in science today: half the stuff in the cancer field can't be reproduced when you go and try to reproduce it. But one thing can be reproduced, and that's how fast glioblastoma kills you. These are the data from five different surgical institutions. You can see that with the standard of care, very few people survive beyond 40 months. I said, no improvement in a hundred years: Bailey and Cushing, 1926, 8 to 14 months with no therapy; today, with all the stuff we have, 17 to 18 months — almost no improvement. I always show people why. When you irradiate the brain of cancer patients, you break apart the glutamine–glutamate cycle, freeing up massive amounts of glutamine. You give steroids because the brain swells from the radiation, and that causes hyperglycemia, so you get high glucose and high glutamine, and these patients die from a combination of a bad tumor together with the absurdity of the way they're being treated. We tried this on a guy from Egypt; this is one of our first clinical papers. He was 28 years old. He had glioblastoma. We put him on metabolic therapy after a craniotomy. We pushed radiation off for three months. They wanted to push the radiation; after a couple of weeks, we said no. They eventually had to irradiate and give him temozolomide. He did well. At about 29 or 30, he had some headaches, and he died. 
Elsakka did an autopsy on his brain to see what was going on, and they didn't find tumor cells. He died from radiation-induced liquefactive necrosis of the brain, so he was killed by the treatment rather than by the tumor. We always like to showcase reports of people who have suffered immensely, like this young girl, Brittany, diagnosed with GBM in 2014. Here's her husband. She received the standard of care. You can see her face is swollen; this is moon face from high-dose steroids, meaning that her blood sugar levels are going to be very high. She decides she doesn't want to live anymore. So she goes into People magazine: Brittany, 29 years old, plans to end her own life. In three weeks, she goes to Oregon and dies with dignity with her family. As I always say, it's bad when your patients kill themselves rather than go through the treatment you're offering them.</p><p><strong>[29:10] Thomas Seyfried:</strong> Now, this is our man here, Pablo. Thomas and I, we've all spoken to Pablo. We knew him really well. Nice guy. He came to me in 2014. He was one of these purist guys. He didn't want any radiation, no chemo, no steroids, nothing. They told him he'd be dead in nine to 12 months if he didn't do radiation and chemo. They told him his tumor was inoperable. He survived for 10 years. Here is his tumor when it was first detected; they said it was inoperable. He did metabolic therapy for a couple of years, and you can see how big the tumor became, but it was indolent — these tumors will kill you in months, and this was years. So we told him, yeah, take it out, and he went in and got it out, and he did really well. Then we measured; we have five years of measurements of his blood glucose and ketones. A massive amount of data on this one person. We recalculated his GKI, the ratio of blood sugar to blood ketones, and we replotted it. Look how low it was. Beautiful. So he was managed predominantly by metabolic therapy. Pablo died. 
Thomas and I had a long conversation with him a week before he passed away. He was laughing; we were having a good time. He was a very sharp guy. He was going for his fourth surgical procedure for a tumor they had originally told him was inoperable. He came out of the surgery really well, thumbs up. He had a big conversation with his wife that day, but he died that night from a cerebral hemorrhage as a result of the surgery. So he never died from the tumor. He lived 122 months. This is our most recent study out of Greece, where my colleagues compared standard of care — temozolomide plus radiotherapy — with temozolomide plus radiotherapy plus ketogenic metabolic therapy. You can see this tremendous difference. Four out of six of the guys on the ketogenic diet survived three years or longer, whereas only one out of 12 on the standard diet survived three years. This diet wasn't bad: salmon, olive oil, sardines, avocado, those kinds of things. Some of these guys refused to give up their sugar. They said, "I'd rather die than give up my sugar," and they did die. That's their choice. But you can see how powerful metabolic therapy is in managing these cancers. There's another little guy — this is Danny Sheen from Marshfield, Massachusetts — diagnosed with pineoblastoma. Look at his face, all swollen from steroids. Here he is a week before he died. Surgery, radiation, all the same **** they give to these little kids. It's tragic. It's just so terrible what they're doing to these kids. I have a big paper that's going to come out.</p><p><strong>[34:00] Unknown:</strong> It's Cell Reports Medicine.</p><p><strong>[34:03] Thomas Seyfried:</strong> Cell Reports Medicine. This is provisionally accepted. We took glioblastoma cells and put them into young mice, 20 days old, representing basically what Danny Sheen had. Here you see the control groups. We had a restricted ketogenic diet, DON, and Bendazole. 
This is a parasite medication that targets glutamine. We target glutamine and glucose at the same time. These damn mice live so much longer. They have a high quality of life. It's just so much different. This is a person, Robin; she's still alive with us today. She was from Cleveland, Ohio. She had breast cancer that metastasized to her lungs, brain, femur, and bones. They said there's nothing more we can do. All the standard of care stuff wasn't working. She got on a plane and went to our colleague. We have a big clinic over there in Istanbul, Turkey, where we're using metabolic therapy. He put her on metabolically supported chemo: you bring in a very low dose of chemo when the patient is in a state of nutritional ketosis. All of this disappeared. The infiltration and the metastases were all killed. This was 2025, but we were talking with Slocum recently. She's still doing really well. We're going to do a big follow-up, seven years out now. We've got a lung cancer guy who's still alive. Adam Amadotus had lung cancer that spread to his brain and his liver and everything. My colleague Thanasis said, "We're going to put him on a high-fat diet." One of the attending oncologists said, "Oh, you've got to be careful. He'll have elevated cholesterol, and that would be bad." The guy was going to be dead in three weeks, for Christ's sake, and they're worried about elevated cholesterol. The guy's still alive. He's still alive today with a little bit of dyslipidemia, but he's alive. It works tremendously in prostate cancer. This is Thomas Doray's big paper here with all the folks that are now participating in managing cancer. We have a framework, a ketogenic metabolic framework, for managing glioblastoma. We have a lot of nutritionists, dietitians, basic scientists, and clinicians. They all want to get on board now and start working this out. I always like to show our dog.
We put a dog that was destined for death on metabolic therapy.</p><p><strong>[37:20] Thomas Seyfried:</strong> He had a big mast cell tumor under his nose. Here's his face and nose. The vet said they would have to cut it off, and that the dog was going to get sick, get diarrhea and everything. They didn't do any of that. No surgery, no radiation, no chemo. The woman followed what we said: she cut the calories down, gave the dog raw chicken with the bones still in the meat, some fish oil, and some raw egg, and the tumor melted off the dog's face. What happens when you take animals and people and fast them is that the body will attack the tumor and use it as fuel for the rest of the body. It's called autolytic cannibalism. The dog lived 15 and a half years and died of cardiac failure in old age; the cancer never came back. So when people say terminal cancer: it's not terminal. We know now that we can keep people alive if you do it the right way. We published a big paper, Christos and I, showing that the somatic mutation theory is essentially like geocentrism, piling on deferents, equants, and epicycles to try to make the model work. If we move the mitochondria to the center of the problem, we'll have a much greater opportunity to manage cancer, because it's a mitochondrial metabolic disorder. So I conclude by saying I went through this rather quickly. I give a whole-semester course on this to the students so they can really dive deep into the science supporting all of it. Thomas and I are starting this new International Society of Metabolic Oncology, where clinicians and dietitians are all getting together. We're going to standardize treatment for cancer based on cancer being a mitochondrial metabolic disease. We have to work out some of the dosage, timing, and scheduling issues. We're trying to formulate the society right now.
Right now, the funding that supports my research in this lab is philanthropy and private foundations. We don't get money from the NIH. We're getting a lot of people coming on board who want to see this happen. We have a lot of case reports in the works. More and more people will be publishing this. We're not ready to do a large clinical trial, because right now Thomas, Derek, and I are the ones who have to do this; we're the only ones who really understand all the nuances. We're trying to train these physicians to know what to do and how to do it. The dietitians need to know what kinds of foods they should be using. Once we have that, then we're going to run bigger and bigger trials. That's the goal of this new society. We plan to drop the death rate of this disease, no question about it. The biggest problem standing in the way is that the people at the NIH think it's a genetic disease. As long as they consider it a genetic disease, you're not going to be able to make the kind of advances that you need to make. That's the biggest block right now. The NIH is part of the problem rather than the solution. Until they can get on board and recognize what's going on, we're going to have to suffer 1,700 people a day dying from cancer, or the 626,000 predicted this year. That's where we stand. I've given you an overview, but to do a deeper dive, you have to look at the science, the control experiments that we've done, and how we've run all these experiments. Tom is here, and Derek did a lot of these experiments and got his PhD on this. They can answer any questions that you might have.</p><p><strong>[40:39] Michael Levin:</strong> Thanks very much. That's remarkable. Absolutely remarkable. I have one basic question, and there will probably be a bunch more on the metabolism side. You mentioned at the beginning this notion of how the mitochondria participate in the morphogenetic fields that are multicellular in scale.
Could you talk about that a little bit? Because we're very interested in that. I want to understand what you're thinking about the mitochondria.</p><p><strong>[41:05] Thomas Seyfried:</strong> We learned more about that from Picard, from Columbia University. He's one of the leaders on the communication of mitochondria: not only how they regulate the internal physiological state of the cell, but, as I was surprised to see from his work, how they actually communicate across different cells and through the morphogenetic field itself, through regulatory bioenergetic signaling. I didn't know how extensive the knowledge is about how mitochondria communicate across fields. When Sonnenschein and Soto talk about the tissue organization field theory and how cancer originates that way, it's very clear from Picard's work how you could damage mitochondria in a group of cells by disrupting the morphogenetic field itself. That's new to me, and we're still working it out. What was clear to us within the cell itself is that ultimately what starts the process is dysregulated growth from an individual cell dumping out fermentation waste products. We knew this transition from OXPHOS to fermentation was the key for driving the dysregulated growth. But what is the linkage that makes the cell dysregulated in its growth? That was what was most interesting. It relates to calcium signaling and the control of the cell cycle. It goes beyond that: why the cells are no longer responding to cues from the environment, why they lost contact inhibition. That's because calcium controls the cadherins on the surface of the cells. Then I started looking more, and Picard says you've got communication signaling not only internally and to a few cells outside; there are pathways, through water channels and all kinds of things that were unknown to me, by which mitochondria control the overall bioenergetic system of the whole body.
Because they've all derived from a population of mitochondria in the fertilized egg, and the egg itself develops within a maternal system. They all become slightly different in different organs: liver is a little bit different from brain or kidney, depending on what they have to do. But they all have a commonality in how they work. They communicate throughout the whole body through fields. I was shocked about this. When we look at chronic diseases, we see obesity, type 2 diabetes, coronary disease, neuropsychiatric problems, dementia, cancer. All of these are mitochondrial failures. All of these are attacks on mitochondrial function. We're learning about that. The best way to learn more would be to look up the work of Picard. He's discussed in great detail how these cells communicate with each other through mitochondrial bioenergetic linkages and signaling. So we're starting to put this together in a broader way. And as he says, the "powerhouse of the cell" is just that one little bit that people talk about. What he's talking about is an altogether new kind of networking that exists for general health and what we call metabolic homeostasis throughout organs and systems, all linked to the capability of this one organelle. That organelle controls how genes are turned on and off in the nucleus. Through epigenetic signaling, the mitochondria control what the nucleus is doing from one cell to another. I think we're learning a lot more about this. This is the new horizon. This is going to be really important.</p><p><strong>[45:21] unknown:</strong> I want to say this is awesome. I'm a big mitochondria fan. When I did my PhD, it was metabolic engineering for hydrogen production. I did a lot of fermentation experiments trying to get increased hydrogen. I've always kept in mind the metabolism and what was going on, looking at the mitochondria specifically.
Everything that you're saying resonates with things that I've read and looked into. I've also found that mitochondria communicate across cell membranes with one another; they have direct communication with mitochondria on opposing membranes, which I thought was really fascinating. One of the things I'm also interested in: we work with bioelectric modulation using a lot of different drugs that change the function of different ion channels. One of the things we're seeing is that mitochondria have their own ion channels, including potassium channels and calcium channels, and some of those channels are really important for mitochondrial function. It's in preprint right now, so I can go ahead and tell you the name of the drug. We're working with clofilium, which is a potassium channel blocker. But it's not just a potassium channel blocker; it has all these other promiscuous effects on different channels. One of the things it's been known for is that it can change the metabolic output of mitochondria. It switches cells more towards pentose phosphate pathway metabolism, and also apparently increases the pH around the cell; they're producing some other fermentation product from it. What they did find is that in models with a defect causing impaired mitochondrial biogenesis, adding clofilium actually increases the mitochondrial output. It increases mitochondrial biogenesis, and it also increases oxidative phosphorylation. In my work, it increased the membrane potential of the mitochondria over time. There are things our ion channel drugs are doing to the mitochondria; they may actually be working directly on the metabolism of the mitochondria. It'd be really great to work with you guys to analyze those effects and also to screen the different compounds we work with to see what they do to those metabolic outputs. We don't have any equipment here.
For most mitochondrial studies, you usually isolate the mitochondria and then do the experiments on the isolated mitochondria. All we have are biosensors: we have calcium, we have ROS, and we can also look at turnover of the mitochondria. It'd be great to work with you guys if you have the specialized equipment to do the isolation and actually look at what's going on when we block those ion channels.</p><p><strong>[48:53] Thomas Seyfried:</strong> When Mike Kiebish was working in my lab (he's now the senior scientist at Berg), he isolated and purified the mitochondria out of the cells. It's laborious, but we're going to be doing that. We're also planning to try mitochondrial transfer, to see whether we can reverse pathology. It's like putting a new engine in your car: can you bring everything back to a normal state by putting a new engine into the system? As for the potassium channel blockers and what they might be doing: the answer is we don't know; we would have to look at it in the systems that we have. How did you measure oxidative phosphorylation? Did you...</p><p><strong>[49:53] unknown:</strong> We're looking at the membrane potential of the mitochondria. It was the far-red signal divided by the green. You take MitoView Deep Red, which reports the potential, and divide it by MitoView Green, which reports the amount of mitochondria. That gives you your mitochondrial potential.</p><p><strong>[50:21] Thomas Seyfried:</strong> By looking at amounts.</p><p><strong>[50:23] Unknown:</strong> Are you doing this in a cancer cell model or?</p><p><strong>[50:28] unknown:</strong> Yeah, colon cancer cells.</p><p><strong>[50:30] Unknown:</strong> Isolated mitochondria or just--?</p><p><strong>[50:32] unknown:</strong> This is in the cells, intact cells.
I was pretty surprised, because I thought that if this was causing some ROS buildup, I would see a decrease in the mitochondrial potential from uncoupling, but I didn't see that at all. I found these papers on POLG mutations and how clofilium, even at low concentrations, was rescuing these POLG defects. It's definitely doing something to the mitochondria to make them more effective.</p><p><strong>[51:06] Unknown:</strong> I think this would be a topic of discussion that we could have. In our view, if you already have a cancer cell line, then the mitochondria are definitely there, depending on the model. But oxidative phosphorylation itself would be insufficient on its own to keep proliferation active, and that's why they shift to fermentation. Within a model, you can increase or decrease different parameters of what you would call OxPhos, oxidative phosphorylation, through different measurements. If you measure oxygen consumption with your treatment, maybe it goes up, but that doesn't really tell us much about the functional adaptability of those cells to different fuels, or whether oxidative phosphorylation would be sufficient to keep them proliferating or to keep them alive as you would see in a normal cell. Then you would need a positive control with a normal cell; perhaps that would be a much better comparison, between a normal cell and a tumor cell.</p><p><strong>[52:19] unknown:</strong> We have just scratched the surface on the mitochondrial side. Right now, it's proliferation and membrane potential that we've looked at; we haven't looked at any other parts of that. I would love to look at it, because these compounds we're using are very promiscuous, hitting all sorts of different ion channels, and I think they may be hitting mitochondrial ion channels in some cases. If that's the case, what does that do to the energetics?
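The ratiometric readout described a moment earlier (a potential-sensitive far-red dye signal divided by a mass-reporting green dye signal) amounts to a simple per-cell calculation. The sketch below is illustrative only; the function name and the intensity values are hypothetical, not taken from the experiments being discussed.

```python
def membrane_potential_ratio(deep_red, green):
    """Ratiometric estimate of mitochondrial membrane potential, one value
    per cell.

    deep_red: potential-sensitive far-red intensities (e.g. a dye like
              MitoView Deep Red)
    green:    potential-insensitive intensities reporting mitochondrial
              mass (e.g. a dye like MitoView Green), used to normalize
    Higher ratios suggest a more polarized mitochondrial membrane,
    independent of how many mitochondria each cell contains."""
    if len(deep_red) != len(green):
        raise ValueError("need one green value per deep-red value")
    return [r / g for r, g in zip(deep_red, green)]

# Hypothetical per-cell fluorescence intensities (arbitrary units)
print(membrane_potential_ratio([820.0, 640.0, 910.0], [410.0, 400.0, 455.0]))
# -> [2.0, 1.6, 2.0]
```

Dividing by the mass signal is the point of the two-dye design: it keeps a cell with simply more mitochondria from being mistaken for a cell with more polarized ones.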
Is it something that could potentially boost metabolic therapy?</p><p><strong>[53:01] Unknown:</strong> I wonder about cell culture; I think this was also discussed in one of Dr. Levin's papers. From your perspective, the bioenergetic field, or the connection, is already altered somehow in cell culture. The cells are not part of a larger morphogenetic field, but they grow out of control in a 2D plane. So extrapolating from that to the in vivo system is also complicated. We can learn what happens inside the cell, but making the connection to the larger tissue is difficult.</p><p><strong>[53:40] unknown:</strong> Absolutely. I'm 100% with you on that. One of the things that I've been really trying to work on is developing a better in vitro model, something that is more clinically relevant. We use a bunch of different cell lines: the cancer cell line, endothelial cells, and fibroblasts, all from that same tissue. We mix those together, make a spheroid, and embed that in a fibrin gel containing human dermal fibroblasts, which secrete the growth factors necessary for the endothelial cells to start sprouting. It's a very complex multicellular model that gives you more of what the tumor microenvironment would look like in that area. You could even go further than that and look at what natural killer cells are going to be doing in those types of systems. That's about as close as you can get unless you're constantly doing animal studies. We've got that system down pat. It's a beautiful system. You can see intravasation, you can see angiogenesis, and you can see proliferation of the cancer cells themselves in that type of system. We could potentially look into that.</p><p><strong>[55:05] Unknown:</strong> Yeah, absolutely. I think there are some basic metabolic requirements that cells need in general, and cancer cells in particular.
It's not often discussed, but most in vitro studies require either serum substitutes or dialyzed serum for the cells to actually divide; otherwise, they just sit there. So there are growth factors and other things involved. That's the whole discussion about the default state of the cell: if some of those things are missing, the cell simply cannot progress through the cell cycle and through proliferation. The same goes if you're combining different cell types that all secrete different things into the microenvironment. It might get a little complicated to measure all of these things in such a system, but it could definitely be interesting to see how the cells behave, and whether they behave differently from the subculture system. I have a question for Michael. I've seen that most of the cancer work your group has been doing has focused on non-mammalian systems in the past. I know you had that paper where you overexpressed KRAS in Xenopus laevis and could then control the proliferation in melanocytes with depolarization, and I saw some other papers as well. Is that because these models are more available in your group? Or is it perhaps an evolutionary disconnect: maybe the mammalian systems, through evolution, lost the capacity for regeneration as their tissues hyperspecialized, making it more difficult to alter some of these bioelectric fields or bioelectric states? Or is it simply that you're working on it, but you haven't gotten to these systems yet? I think you're muted.</p><p><strong>[57:23] Michael Levin:</strong> Certainly, we are moving into mammals and human tissue. As Juanita's prior work shows, and she has some papers coming soon that you'll see, we're absolutely going into mammals. We've also done work on breast cancer with Madeleine Uden and some other collaborators. I don't think there's any evolutionary issue here.
I think the basic mechanisms are very highly conserved. One of the reasons that we like Xenopus and some other regenerative systems is that those regenerative kinds of responses are exactly what we need to normalize cells. Part of our lab does regenerative medicine approaches to try to induce regeneration in mammals, and I suspect we can get it working there. There are mammals that regenerate: deer antlers, spiny mice, and things like this. I certainly think we can get it activated. I think that would ultimately be the kind of treatment you would have: a regenerative response that would grab strong morphogenetic control, but also metabolic control, over these cells. We treated Xenopus as a stepping stone because the optogenetics and everything else we did was much easier there. We showed proof of concept, and now we're moving into mammals and humans. That's the future work with Juanita and others.</p><p><strong>[59:02] Unknown:</strong> I just wanted to ask out of curiosity: that was Rose and Wallingford, where they put the frog tumor into a salamander limb and then amputated the limb with the tumor in the middle. As the limb regrew, it integrated and normalized the tumor. I've seen you cite that paper a couple of times. I was thinking whether something similar couldn't be done in regenerating liver, where you would put a hepatoma in the liver, do a partial hepatectomy, and see what would happen. I wonder if anybody has repeated it. Like the nuclear transfer experiments, this is a foundational piece of evidence that I feel should be getting more attention. Those were frog cells in a salamander. I don't know how the immune system works there, but the innate or adaptive immune system active during regeneration, including macrophages, might have rejected those cells.
I wonder if this could be done in mammalian systems such as the liver, and whether you have been doing some of these experiments.</p><p><strong>[1:00:28] Michael Levin:</strong> A couple of things. First, I think there are a couple of other papers where it was a native salamander tumor that didn't involve any foreign cells. I can try to send you what I have. This is also known in planaria. Planaria are very cancer resistant, but if you do manage to give them a tumor, amputation helps, and in fact it's non-local: you can amputate at the other end of the animal, and as they regenerate, the tumors clear up. There's data like this, but I agree with you: I think the liver would be a fantastic test of this. I've heard claims, and I don't have the clinical knowledge to know if this is true, that the liver, because of its constant regenerative renewal, normalizes tumor foci all the time; that it's a rare case in which one actually continues and becomes a problem, but that they come up from time to time and the regenerative processes basically normalize them. I don't know if they disappear or if they just become normal. The liver also has some very interesting bioelectric properties. It's been known for a long time that it hangs out in a middle position between the strongly polarized tissues (most tissues of the body are post-mitotic and quiescent) and the depolarized stem cells and cancer cells. So the liver's kind of in the middle; it retains some of that depolarized character. Are you guys in a position to do something like that? I think it'd be a great experiment.</p><p><strong>[1:02:02] Unknown:</strong> I was just thinking about it. Technically, yes, but we would need funding for that, which is a separate question.
I think that's the point of discussion. In your view, when you talk about the breakdown of communication with the morphogenetic fields and the larger goals of the tissue, even though it's mediated by secondary messengers and things inside the cell, in the bioelectric perspective that would be a reprogramming: if a cell loses the connection to the larger goals of the organism, that is not per se an irreversible defect inside the cell; it would simply be a disconnect. Speaking for myself, when you look at advanced tumors, perhaps more malignant ones that have been selected for persistently malignant tumor cells, even if you fix them temporarily or inhibit proliferation, they tend to revert back to this proliferative state, which we would take to mean that something is happening inside the cell that cannot be recovered through signaling from the outside. I don't know what your thoughts would be on that, because even the oncogenes, like KRAS, do all these things, and they are happening inside the cell. From our perspective, the mitochondrial alterations would also be happening inside the cell. They have connections to the outside too, and they might be mediating the disconnect and the reversion to unicellular behavior, but they do happen inside. If you have a single cell in a single well and you put oncogenes into it, which we would argue damage oxidative phosphorylation and mitochondrial function, you have just one cell, and you can make it a tumor cell without it ever having any connection to the outside.</p><p><strong>[1:04:22] Michael Levin:</strong> I'm not saying that after the disconnection has happened, the cell doesn't accumulate additional defects that might really make it difficult to work it back into the collective. That's possible. The jury is still out on how much, and at what point, it really becomes a hardware problem, physically broken and irreversible. I'm not sure about that.
One of the issues we study is the way that collectives versus single cells navigate various problem spaces. Anatomical space is one; transcriptional space is another. We've shown that groups of embryos actually have a quite different transcriptome than single embryos. One of the things they do is exchange information: we can see calcium waves passing between embryos, which allow them to resist various teratogens much better. Large groups resist better than small groups, which resist better than singletons. What we haven't done is ask how larger versus smaller collectives navigate metabolic space. This may involve physical defects, but it may be a computational problem as well as a physical problem, because the way you move in the space of metabolic possibilities, the way you process metabolic information, the decisions you make about what to do when certain things happen may be quite different in a group versus in disconnected cells. That's worth taking a look at: whether what we're looking at is really a defect in the way the cells process metabolic information and move through the space. We have to be careful about how much of this is a hardware defect and how much is a software problem, bordering on a cybernetic, cognitive defect.</p><p><strong>[1:06:21] Thomas Seyfried:</strong> When you mentioned that, that's what Picard was talking about: the computational aspects of how mitochondria control the morphogenetic fields. He's going into the depth of what you've just described, but it's pretty much cutting edge; a lot of it is conceptual right now, with data that still needs to be collected. He was speaking exactly about that computational aspect. When you speak about the hardware and the software, it involves both, and trying to get a handle on it has been difficult because of the types of experimental designs you would need to separate hardware from software, which requires a very different perspective.
Now you're asking a different kind of question, and when you start introducing new questions, you start thinking about how you're going to design experiments to answer them. Before you could even formulate a question, you had a lot of observations that you couldn't really put together. I think now, in light of what we're seeing, these kinds of things become more relevant, and a lot more thinking needs to go into this.</p><p><strong>[1:07:47] Michael Levin:</strong> We're set up for a lot of that, though not on the metabolic side. We track other things: bioelectric states, transcriptional states, morphogenesis, in exactly these kinds of experiments that track the computational capacities of smaller and larger groups. We haven't pursued the metabolic aspects, but we probably should.</p><p><strong>[1:08:09] Unknown:</strong> I think the bioenergetic question might be very important for your work, because when you talk about all the sodium and potassium channels, the chloride channels, the calcium channels: all of these things require energy. Our focus in the lab has been interrogating the bioenergetic states, which we feel is more relevant for the therapy, because the idea is to alter and inhibit the energy production of the tumor cells on the therapeutic side. With these electroceuticals that you have been testing, did you see any good results? I think you tested the proton pump inhibitor on some of these things?</p><p><strong>[1:08:52] unknown:</strong> Taprozol, we tested it; it worked really well, especially in combination with other things.</p><p><strong>[1:08:57] Unknown:</strong> On cells, or also in mice to inhibit tumor growth?</p><p><strong>[1:09:00] unknown:</strong> No, we only tested it in cells.</p><p><strong>[1:09:03] Michael Levin:</strong> Can I ask about that, the question of mice in general? What do you think of mice as a model system for this? I'm not an expert.
What I hear from people is that mouse cancer has been solved 1,000 times; the hard part is getting it into humans. But everybody uses mice as an assay. What's your opinion on that?</p><p><strong>[1:09:23] Thomas Seyfried:</strong> I can speak to that. We've never cured any mouse in our lab. The mice that we work with are all natural. A lot of times you have these genetically engineered models that have been made and programmed in a certain way. Or you grow human cells in a mouse, as in xenografts, where the mice have a compromised immune system. You've got 50 million years of evolutionary difference between a human cell and a mouse cell, and you're putting them in these totally different environments. When you put them into mice, the human cells never grow with the aggression you see in humans. That's why we work with all syngeneic, orthotopic kinds of systems. It's a ***** trying to cure those things. We can't. A lot of people don't use that approach, because they prefer patient-derived xenografts and all this kind of stuff. But those aren't natural; they're artificial. When you work with artificial systems, you get artificial information. You have to work with the natural host, the natural environment the cells come from. A lot of human cells don't metastasize. When people say, look at all the metastatic models: what they're doing is injecting human cells into the tail vein of a mouse. This is not metastasis. The cells do spread to different organs, but not because they spread naturally; they were forced to do that. I've been breaking down these models for many years, working out which is the most informative. When you talk to the field, they seem to be locked into the models that they've developed to get their answers. Often you get a lot of misinformation from that, which ends up discrediting the whole system. Dogs have cancer; you can use dog models. Or humans; the best model you have is the human.
The reason we've had so much success in humans is that we worked it out in natural models in the mouse. Ultimately, the test is in the human. You can ferret out mechanisms in vitro and in natural systems. To ferret out molecular mechanisms, we usually have to go to the in vitro system, but you don't want to try to ferret out a molecular mechanism for a phenomenon that doesn't exist. You want to document the phenomenon, then try to break it down in another system, then put it back and test it in vivo, and ultimately test it in the person that has the cancer. Our in vivo systems in the mouse are the most natural. The reason we've made the advances we have is that we use only natural systems. When we get to humans, Mike, we get much, much better outcomes in humans than in mice. We developed a system in the mice, but when we tested it in humans, we got a much better response, because of the difference in basal metabolic rate. This is so important, and people completely overlook it. The basal metabolic rate of the mouse is seven times faster than that of the human. The human body has a much greater opportunity to work on things, whereas the mouse is super-accelerated. You really have to be careful about knowing that. A mouse without food lives about six days if it's lucky. Humans, depending on how much body fat you have, can live for months. You have a very different metabolic environment in a human than you do in a mouse. If you have natural systems in the mouse, you can translate them into humans, as long as you understand the differences in basal metabolic rate, which comes down to bioenergetics and the bioelectric relationships to the energetics. We have to be aware of all those things.</p><p><strong>[1:13:38] Unknown:</strong> Gotcha. I definitely agree that not every mouse is created equal, especially with the different mouse models people are using.
But if I could ask one question, Michael, I'm very interested in the differences between morphogenetic fields in culture, different types of culture, and the mouse itself. It seems clear — I believe from your work directly, or at least work you've cited — that changing the morphogenetic field can be an initiating factor that's necessary and sufficient to induce proliferation. Do you also feel that it's that way in vivo, in a mouse system as well? Is it an initiating factor at times? Is it just sufficient, but not always necessary? Could you talk a little bit about that?</p><p><strong>[1:14:30] Michael Levin:</strong> Our work is not so much in vivo in mice on this, although we've done human cells and MSCs and things like that in cell culture with David Kaplan and so on. As far as I can see, the evidence is that it plays that role in vivo normally and even in mammals. It's not the only thing, of course; there are chemical factors and biomechanical forces and things like that too, but it plays that role. Partly what it does is coordinate proliferation rates across distance — the kind of allometric scaling that makes things scale. I think so.</p><p><strong>[1:15:19] Thomas Seyfried:</strong> Okay, if you guys would like to consider more on how we could work together that would be certainly an important thing. There are things that we can provide for you and there's things that you can provide for us to move the field in this general direction. We don't have answers to a lot of these things. I think the energetics of how the mitochondria control bioenergetics, the electrical signaling, the signaling cascades that I'm now learning — we've always been nuclear centric in everything we've been doing in cancer and in biology in general, genomic sequencing and reductionism to the point where we've lost sight of what the bigger issues are. 
Learning that there is an interesting connection between individual cells and the outside world and the way this works is hard to quantify when you try to do a genomic screen on things because you have no clue whether the gene expressions are associated with protein production. It's hard to link those gene expression profiles to actual changes in the morphogenetic activities. Whereas the mitochondria seem to be that organelle that offers the opportunity to, for the first time, make these connections. And the nucleus will just obey whatever the mitochondria is doing. People talk about epigenetics, and we've known that the mitochondria control the epigenetic signaling inside the nucleus. I always found it interesting that the mitochondria have relinquished most of their genes to the nucleus but they've kept 13 of them that they never relinquished, and those 13 control the destiny of the cell. They have a circular genome and multiple circular genomes, so it's a fascinating organelle in that regard. Bits and pieces of mitochondrial genome have become integrated into the nuclear genome as pseudogenes; they're really not expressed, but they integrate into the nuclear genome in many different ways. Why those 13 genes have never been allowed to be part of the nuclear genome, even though the nucleus controls some parts of the proteins of the electron transport, is interesting. The key ones that determine the destiny of the cell are retained in that mitochondrial genome. If you can have someone do the job for you, why should you waste the time doing it? That organelle, knowing that you have a big *** nucleus with a lot of DNA and a lot of chromosomes, and if that can follow the directions of the mitochondria, that saves this organelle. Why should I have to replicate everything this other organelle is doing? Therefore it would give it much greater control. It's a controlling organelle. It really controls the destiny of the whole physiology. 
The other thing about aging is we die from the second law of thermodynamics: entropy. All humans, mice, and different organisms have a defined life limit on the planet. The way you live longer is you keep your mitochondria healthy. That will just delay entropy, the second law of thermodynamics, because eventually people die and they die from disorder. It's interesting when people die of old age: often they're pretty alert up until two or three days before they die. It's almost like the entire mitochondrial energy system just turns off and you die. But as long as you can keep the system healthy, you can live longer and prevent a lot of different diseases that you are confronted with. Each one of these chronic diseases, in one way or another, increases entropy in a particular organ or in the system itself, and you don't live as long. Clearly, understanding aging is understanding mitochondrial energetics. It's hard to get cancer if your mitochondria are healthy. It's hard to get type two diabetes if your mitochondria are healthy.</p><p><strong>[1:19:13] Thomas Seyfried:</strong> You exercise, your mitochondria stay healthy. Ketone bodies, as we've written, are a super fuel. When you burn ketones, you reduce reactive oxygen species. You get more energy per breath of air when you're burning a ketone than when you break down pyruvate or even fatty acids; they uncouple the mitochondria and create more ROS. All of these things are interesting points to consider when we study biological systems. I think the reason why we haven't spent as much time on the mitochondria is that they're hard to see. When you look under light microscopy, you see a big nucleus; you're focusing on that. The mitochondria are amorphous organelles diffused through the cytoplasm. You didn't really start to see them until electron microscopy was developed. Warburg did all his work based on chemical measurements. He never looked at mitochondria. He didn't have the tools to do that. 
He based it entirely on readouts of fermentation. When you start looking at it in a more dynamic way with microscopy and other techniques, we're starting to take a deeper look at mitochondria. I think you're going to find them to be controlling elements of biological systems' function. It's going to be related to the efficiency of energy use and the interaction of different organ systems. We're just beginning now to turn our attention in this direction, especially because it's related to chronic diseases, which are crippling our country and the world. This is now becoming a major problem, and a lot of it has to do with mitochondrial dysfunction in different ways. We always wonder, in the brain you have Parkinson's disease. This is a mitochondrial reactive oxygen process in the cells of the substantia nigra. These cells die. They don't become cancer. We've always wondered why cardiac myocytes and neurons of the brain rarely, if ever, become tumorigenic. They can't switch to a fermentation metabolism. Their energetic requirements require oxidative phosphorylation, and when that goes down, they die. They don't become cancer cells. We're starting to see why some cells are more prone to become neoplastic and other cells are not neoplastic, and how all this works together. We're starting to see a lot of these connections for the first time. I'm looking forward to a dynamic future, but we have to start by addressing certain questions that we do not have answers to at this time.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>&quot;On Biological and Artificial Consciousness&quot; by Borjan Milinkovic and Jaan Aru</title>
          <link>https://thoughtforms-life.aipodcast.ing/on-biological-and-artificial-consciousness-by-borjan-milinkovic-and-jaan-aru/</link>
          <description>Borjan Milinkovic presents a case for biological computationalism in consciousness, discussing neural architectures, metabolic constraints, heterarchical scale integration, dendritic and field-based computation, and consequences for artificial consciousness.</description>
          <pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 69a9ab5304a6b700014701dc ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/KKrRLgPp6XI" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/27a27f19/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~53 minute talk + 14 minute Q&amp;A titled "On Biological and Artificial Consciousness: a case for biological computationalism" by Borjan Milinkovic and Jaan Aru.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Speaker background and research</p><p>(03:16) Motivation and philosophical landscape</p><p>(09:32) Von Neumann architecture</p><p>(15:39) Computational metaphor in neuroscience</p><p>(22:52) Metabolic constraints in neurons</p><p>(29:04) Heterarchy and scale integration</p><p>(33:55) Empirical scale integration findings</p><p>(38:00) Hybrid computation in dendrites</p><p>(43:16) Field-based neural computation</p><p>(47:48) Implications for synthetic consciousness</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a 
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00]</strong> Hello, everyone. I'm Boki Milinkovic or Borjan Milinkovic, but I prefer Boki.</p><p>Today I'll be presenting on a topic that Jaan Aru, a colleague of mine, and I explored on the distinction between biological and artificial consciousness, and to make a case for something that lies in between two camps that currently hold sway in the field. That QR code will guide you to the paper if you want to check it out.</p><p>I should present something about myself first so you know I'm embedded in the wider scheme of the research field and what I do. I'm currently a postdoc in Alain Destexhe's lab here at Paris-Saclay University. We generally work in multi-scale neural modeling, modeling neural dynamics across single neuron level, neural population level or mean field level, and whole brain level. We generally do this with the intention to capture the multi-scale or scale-integrated dynamics that necessarily underpin some global states of consciousness.</p><p>My work primarily is based on trying to build whole-brain models, including receptors, that might tell us something about the molecular action of psychedelics and how this propagates to large-scale activity based on some indices of consciousness. 
Usually these are complexity indices or perturbational complexity indices, other information-theoretic applications to that level of dynamics, and modeling TMS and tDCS stimulations. This includes field dynamics that go beyond seeing how stimulation at one node propagates to another, which is a lot more discretized.</p><p>As I go on with some of this, you will see some of these ideas come through and where I might get them. I did my PhD on quantifying and qualifying information-theoretic measures of emergence in neural systems. This was done with a fantastic PhD supervisory team that consisted of Olivia Carter, Thomas Andrillon, Lionel Barnett, and Anil Seth. Some of these ideas, like scale integration, grew out of work I did during my PhD.</p><p>But that's enough about me. We should start to get a sense of where some of these ideas might be coming from. Hopefully this switches.</p><p>I should begin with something that might sound controversial, particularly given what we've published: I'm not primarily interested in artificial intelligence as it is right now in the field, whether it is these artificial systems that can potentially develop a notion of sentience or consciousness. I apologize for making those words synonymous. I know in many fields they are not, and in mine as well, but just for the sake of argument.</p><p>I'm not interested in the way it is framed right now. My question runs in a different direction and has a completely different goal in mind. I'm more interested in: give me a cell or a neuron or a neural population and let me understand what it can actually do and compute.</p><p>The drive is to bring computational notions seriously and formally to understand biological systems. In this way, I think there is a need to shift what we think computation is, but I'm really interested in formalizing what biological systems, and neural systems in particular, are capable of computing. 
This is the easiest way to describe "biological computationalism," or why the qualifier "biological" sits in front of "computationalism."</p><p><strong>[05:13]</strong> The reason is simple. Before we speculate about artificial consciousness, we need to understand what biology is doing in the first place. And only then can we ask whether it is possible to construct what I'm potentially calling a phenomenal engine. I want to be clear. This is a completely open question, and I don't know whether we can build such a thing. That is precisely the motivation for the project: to move from conjecture to construction and from analogy to formalization.</p><p>First, we need to clarify the landscape in which this debate is embedded. Most of us are already familiar with what we often present as two opposing positions. I'm about to oversimplify them. The simplification helps build some of the intuition we need here.</p><p>On one side, we have computational functionalism, the position that commits in one way or another to some substrate independence. The idea is that what matters for consciousness is the right information processing, typically at some privileged computational scale. If that organization is preserved, then in principle consciousness could be realized in systems very different from biological tissue. I often hear this contrast framed as silicon versus neurons. The comparison is misleading. Those differ dramatically in scale and organization. If we want a fair comparison, it should be electric circuits versus neurons or silicon versus carbon.</p><p>Setting that aside, on the other end of the spectrum we have biological naturalism. In its strong form, the view holds that biological systems are uniquely privileged in realizing subjectivity, that there is something about that particular tissue that makes possible what it is like. This something has often remained unexplained, ineffable. 
So reasons are given, but rarely formalized or exposed in computational or dynamical terms.</p><p>There is some great work out there. That has really inspired us. It's the kernel of the paper. It's what motivated us to write it.</p><p>I don't think that these are two categorical camps. They are extremes on a continuum, a spectrum of how relevant biology is for subjectivity. Our position sits somewhere along this spectrum. We are closer towards biology, but not exclusively so.</p><p>What my working intuition is and what I would like to suggest and convince you of throughout this talk is that biology may require us to revise what qualifies as computation before we can meaningfully debate synthetic consciousness — a term I prefer because "artificial" has such a weight attached to its semantics. We need clarity about computation itself. My primary aim is to define computation precisely in a way that is operational and formally usable in neuroscience and biology. That is the first task. And only after that can we responsibly ask how such principles might inform the construction of synthetic systems.</p><p>To begin, we need to clarify what computation traditionally means, both at the physical level of systems—physical hardware systems—and at the abstract level of computability itself, what is known as computability theory, particularly recursion theory. Only then can we assess whether biological systems are simply implementing this classical computation or whether they instantiate something "structurally different." Hopefully that term will be something you hold on to as you go through this to see what difference I'm trying to ascertain.</p><p>We'll start with digital systems. I've been asked whether this paper is really about digital hardware or about computability as the abstract formalism. The answer is, it's about both. And it has to be.</p><p><strong>[10:36]</strong> So biological computationalism is concerned with what computations are permissible under physics. 
And more specifically, which computations are permissible under physics as realized in biological systems. For that reason, we need to examine two things. First, the concrete architecture of digital machines, what's called the von Neumann model that actually instantiates algorithmic computation. And second, the abstract definition of computation itself, the formal notion of computability in the Church-Turing sense. I know there have been slight nuances between the way the Church-Turing thesis has been explained, but I will try and summarize a general one that holds the most utility and is hopefully the most accurate in the way that we know it today. Only by considering both can we be precise about how structure and function relate in nervous systems and how this might relate to synthetic systems as well.</p><p>Let's begin with the hardware, because the hardware tells a story. This story is a tale of separability.</p><p>Some of you will have seen this diagram in undergraduate computer science textbooks, but it's worth revisiting carefully. A classical von Neumann architecture is built from three core components. First, a memory unit, which passively stores discrete symbols. Second, the arithmetic and logic unit, the ALU, which manipulates those same symbols. And third, a control unit, which fetches instructions from the memory, decodes them, and dispatches operations to the ALU. So the instructions the control unit uses and the stored data share this uniform address space, which is physically separated from the ALU, the arithmetic and logic unit. That separation is not an accident. It constrains how computation unfolds. This is the von Neumann bottleneck. These are modular separations of functions; we see clean boundaries.</p><p>In the brain, we might be tempted to map this onto scale separations that I talk about later. But here, the separation is explicit and engineered across modules. 
It can relate to scale separation in the brain, but it's not yet a nice definition of scale separation. The first major scale separation, in the sense we would like it to have in the brain, appears at the level of what is called the instruction set architecture. This is where the processor's instruction set abstracts away from the underlying digital circuits and transistors, and it allows the binary code to run across different hardware. That's what allows it. It's this instruction set layer and this compiler toolchain.</p><p>In other words, this means that the algorithm and the computations performed by the effective procedure, the procedure that goes step by step through the algorithm, are insulated and closed off from the physical hardware. That is the clean separation. It is that the algorithmic computations are completely closed. Their effective procedure can occur on that level without recourse to the physical level. This insulation produces a form of closure of that level that is precisely what computability theory in the Church-Turing tradition requires. It requires this closure of an abstract level of procedure.</p><p>I've stated something that's very important, but will not be discussed here: non-algorithmic dependencies still exist. We're not blind to the fact that the physical system does some physical things that help the software run. If I turn off my laptop, the software isn't running. But there is an algorithmic computational layer that is closed.</p><p>Interestingly, this closure, this insulation of algorithm from substrate, has also quietly shaped neuroscience because the computational metaphor was inherited along with the hardware architecture.</p><p><strong>[15:55]</strong> And so the premise becomes, for computational functionalism anyway, that it has some commitments that are directly traceable to these metaphors. Consciousness, that feeling of what it is like to be us, supervenes on the algorithmic organization alone. 
If the right procedure is executed, the right function implemented, then that is sufficient. The substrate, the hardware beneath an instruction set architecture, becomes completely interchangeable. This is the substrate independence.</p><p>We see echoes of this in theoretical neuroscience. I want to touch on this. Some simulations are performed purely at the microscopic level, detailed neuronal models without recourse to these larger scale dynamics. Others operate purely at the mesoscopic level, mean field approximations, neural mass models. They're treated as complete descriptions in themselves. The convention of splitting scales in this way is not accidental. It's inherited from this computational thinking. This is the kind of figure to the right that you're seeing that is feeding this intuition.</p><p>To be clear, it has been enormously useful. I've worked on these models myself, as I mentioned at the start. They've taught us a great deal, but we have to recognize the limitation. Brains do not operate on clean, arbitrarily defined scales. They are not naturally decomposable into algorithmically closed levels. Another inheritance is this strong substrate independence that I've already mentioned. Neurons, silicon, carbon, in principle, don't matter. They're equivalently interchangeable when we're thinking about the properties that are necessary and sufficient for consciousness. That follows directly from the algorithm-implementation separation we discussed. Computation is abstract; physical realization is only secondary.</p><p>Once you assume closure at a given level of execution, something else comes along with it: this scale privilege, the commitment that there is a single computational level that realizes consciousness or any given biological function.</p><p>Finally, I want to touch on the last thing. I think there is the reduction of these neural computations to discrete semantics as well. Action potentials as ones and zeros, binary logic. 
This is the legacy left by the McCulloch and Pitts neuron, though it's interesting because even Turing himself did not restrict computation to binary symbols. His formalism began with natural numbers, and he explicitly considered continuous variables, though this was never worked out. So the binary reduction is not inevitable. It is in a way structural and architectural. This tenacity of the traditional computational metaphor might be feeding the computational functionalism camp.</p><p>This is a noble pursuit and a noble way of thinking, but we believe that there is a different way. Before we speak about this different way, we still need to go through the computational part. We need to touch on computability as a formal and abstract notion as well. I know it has a bit of a hazy history with slightly different distinctions that abound, and I have been trying to disambiguate these as a current work in progress, precisely to define a new form of computation.</p><p>One definition that stands out as essential, and I've tried to narrow it down to, is that computation is the procedural sequential execution of an algorithm in order to compute a mathematical function. Under this premise, algorithmic computability is defined following four ontological primitives about this structure of computation. One is that in application and in principle, it is based on discrete alphabets, binary or natural numbers. It is a closed system, and it is on a single scale.</p><p><strong>[21:23]</strong> So even with parallel processing, you can compress this into a sequential process that encodes this in a way, but it always is an encoding of the parallel processing rather than the parallel processing itself. And that's important. It executes next steps to compute a mathematical function. It always goes by this state transition procedure. Biological computation is nothing like this. It's discrete and continuous, given the biological medium in which it lives. 
It's an open system, both at the single neuron level and at a more global level, such as your interactions at the subjective, phenomenal level with the environment. It is multi-scale, truly, structurally as an ontological primitive, not as something that can be simulated. And it isn't defined by just an execution of functions. There is some level of interaction going on that's different from just execution.</p><p>To summarize, digital computation is cleanly decomposable, modular, and separable; algorithmic closure reigns supreme. Biological computation is actually none of this. Biology or biological computation requires a revision of what qualifies as computation. Digital systems have a particular physical structure. Turing computation comes with particular ontological primitives. Digital systems, because of the way they are structured—modular, scalable, separable—scale by just adding more energy. We know this by current LLMs. But the brain scales in a different way: it scales by reorganizing computation and dynamics under particular constraints. Since it is about reorganizing computation, it is precisely a structural inequivalence with Turing computation.</p><p>Let's have a look at what biological neural tissue is doing. We begin from something simple. Life has finite resources. What do metabolic constraints actually do in neural systems? My claim is that they shape the dynamics and therefore the computations of the system. They really structure them. Energy limits are not just peripheral things. They are not simply a matter of speed; if you have more resources, the same computation will run faster. That's not what we're proposing. They are constitutive. They shape the ontology of neural tissue and the neural interactions both structurally and dynamically. And that already marks this structural inequivalence with current digital systems. It's important to understand this carefully.</p><p>We see this at the level of ion channels. 
There is evidence that channel kinetics and activity are tuned by ATP efficiency. ATP is the energy source in biological systems. Hasenstaub's paper is informative. Instead of simply packing in more sodium channels to increase firing rate, some neurons often only adjust potassium conductances because they don't need that rate as much. This is because the cost per spike is lower. In other words, demand for dynamical communication shapes the physical medium itself. Structure adapts to energetic demands. This is an instance of what we call dynamico-structural co-determination. It's one of the tripartite principles later on in the paper. We already see this at the ion channel level.</p><p>Another example, and one I find particularly fascinating given my work on emergence, is that not all neurons in the brain spike. For neuroscientists, this is maybe old news for some. Outside of those circles, it's less frequently appreciated. There are non-spiking neurons that operate using graded potentials. What appears to be happening is quite interesting. They function as a form of coarse graining over incoming discrete synaptic inputs from presynaptic neurons.</p><p><strong>[27:01]</strong> So already at this level, we see an interplay between discrete and continuous signals. So that to me is an instance of hybrid computation already, or what we call hybrid computation. I formalize the notion more carefully later, which I will speak about. But what was already shown in the 90s is that continuous transmission through these graded potentials can carry more bits of information per second than discrete spike trains. That, to me, is striking. That's the figure below from Laughlin. And it suggests that the system is not choosing to be multi-scale in some abstract sense. Rather, this multi-scale organization is something that emerges from metabolic and functional demand. 
Because many spiking neurons converge onto a single non-spiking neuron, the latter effectively spatially coarse grains the incoming signals. And it's not an anatomical curiosity. It's the real deal. It has computational and informational consequences. So projecting information through graded potentials rather than spike trains reduces metabolic costs while expanding transmission capacities. This is a continuous signal. So here we are seeing something concrete.</p><p>What emerges from this picture is metabolism's dual outcome. First, metabolism binds dynamics to the physics made possible by biological substrates, and it induces this dynamico-structural co-determination. It necessarily induces something that already distinguishes biological systems, which is that computational time is physical time. Digital systems merely approximate this, just as they only approximate continuity. Biological systems directly instantiate it. It's important.</p><p>Second, metabolic constraints make scale separation way too costly. Processes must reuse stuff across scales. As a result, this clean hierarchy compresses into something else, a heterarchy. It is not a flat, single-scale system. It is not a strictly layered hierarchical one with algorithmically closed levels that interact only at some fixed interfaces. It is something in between, a heterarchy. A heterarchy also comes with other notions, but primarily, this is the intuition here. And scale integration then emerges as an optimization strategy, as a metabolic optimization strategy. It defines this heterarchical nature of the dynamics in the brain. So there is no privileged scale that actually exists because there cannot be one. It is not a hierarchy, a clear one. And this is developed much more in the paper, of course. So since energy shapes computation, biological computation may be structurally inequivalent to computability in the Church-Turing sense. This is what I mean. 
So under scarcity, the brain cannot afford the separability of scales. So as I've already touched on, the form of organization that emerges is heterarchical and scale integrated. It is multiscale, of course, but it isn't hierarchical. And once energy becomes a constitutive constraint here, clean functional, dynamical separability just becomes too costly. Neural systems can't afford fully independent layers. Instead, these processes must be reused and integrated across scales, what we call the notion of accretion that develops over evolution. So what emerges is not really a hierarchy of control, that's the key part, but a heterarchy of distributed constraint. And here you get distributed processes occurring, and also that it is more constraint than a complete control. Constraints are, of course, controlling in a sense. But this slight softening really makes the case to me of what's going on in neural dynamics. So in a hierarchy, scales are ordered and separable. In a heterarchy, no scale is privileged. There is no single fundamental computational unit. There's a heterarchy of scales. There is no fixed unit size at a given scale, no algorithmic closed layer. Computation is distributed. Scales co-determine one another, as well as then determining the structure, as we saw before. Scale integration is therefore not just decorative at all, it is what defines the organization itself and conscious processing may require not just some region to region binding on the same scale, but also scale to scale tethering or integration.</p><p><strong>[32:38]</strong> And that's the proposition. But this isn't just speculation. This notion of scale integration is not purely theoretical or speculative. It actually comes from some empirical results that colleagues and I have obtained while working on information theoretic measures of emergence in neural systems. 
And these measures of emergence formally capture scale integration.</p><p>The measure I'm speaking about is dynamical independence, which was originally developed by Lionel Barnett with Anil Seth. It captures the dependence between microscopic dynamics and the lower-dimensional macroscopic variable or state space in which those dynamics can be expressed.</p><p>In this dynamical independence framework, the higher the dynamical dependence between scales, the more tightly integrated they are. We applied this framework across different conscious states: anesthesia with propofol, xenon, and ketamine; across sleep stages; and under 5-MeO-DMT, a potent psychedelic.</p><p>We consistently observed that wakefulness shows higher scale integration across macroscopic dimensionalities. These are functional, dynamical scales. For both propofol and xenon, the higher the dynamical dependence, the higher the scale integration. Ketamine does not simply reduce scale integration; in some cases, it preserves or even increases it compared to wake. Some anesthetics untether scales, while others tether them differently. That remains an open question and something I wish to explore further.</p><p>We see a similar pattern in sleep. Wake and N1 (the first liminal sleep phase, dreamy and very creative) show high scale integration; deep sleep and even REM show lower integration. The landscape of scale integration is frequency specific: in the alpha band, the defining band of wakefulness, emergent macroscopic dynamics are more tightly integrated in both N1 and wake. But in deep sleep, the delta band shows its own maximum.</p><p>This begins to paint a consistent picture across conditions. That's a link to the anesthesia preprint. It shows wakefulness tends to exhibit stronger dynamical scale integration than anesthesia or sleep, and we even have data for psychedelics.
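To make the idea concrete, here is a toy sketch of my own (not the authors' actual estimator; Barnett and Seth's dynamical independence framework is more general): a Granger-style measure of how much the micro past improves prediction of a coarse-grained macro variable beyond the macro's own past. Zero would mean the macro runs autonomously, i.e. dynamical independence.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy micro process: a 3-variable linear autoregression, VAR(1).
A = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.4, 0.2],
              [0.0, 0.1, 0.5]])
n = 20000
X = np.zeros((n, 3))
for t in range(1, n):
    X[t] = A @ X[t - 1] + rng.standard_normal(3)

Y = X.mean(axis=1)          # candidate macro variable: a coarse-graining

def resid_var(target, predictors):
    """Variance of least-squares residuals of target regressed on predictors."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return (target - predictors @ beta).var()

ones = np.ones((n - 1, 1))
own_past  = np.column_stack([ones, Y[:-1]])          # macro past only
full_past = np.column_stack([ones, Y[:-1], X[:-1]])  # macro plus micro past

# Granger-style "dynamical dependence": how much does the micro past
# improve prediction of the macro future beyond the macro's own past?
dd = np.log(resid_var(Y[1:], own_past) / resid_var(Y[1:], full_past))
print(f"dynamical dependence ~ {dd:.4f}")
```

With this asymmetric VAR the mean is not an autonomous macro variable, so the dependence comes out positive; a macro variable that screened off the micro details would give a value near zero.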
Again, wake shows stronger scale integration across macroscopic sizes and frequency bands, with gamma-band deviations where DMT shows more scale integration than wake.</p><p>This is an interesting point to think about. We already have ways to confront this constructively and empirically. We'll see where this research leads.</p><p>The corollary is important. Consciousness is neither reducible to micro-level dynamics nor fully explicable by macro-level functional patterns alone. It emerges from interdependence across scales, from the coupling between dynamical levels of different dimensionalities. Different scales, truly heterarchical. There is no privileged scale of dynamics. Consciousness appears to reside in this integration across scales.</p><p>This brings us to our last pillar, hybrid computation, where we see subthreshold activity driving discrete transitions at the subcellular level. I won't spend much time here: non-spiking graded potentials.</p><p><strong>[38:05]</strong> But another fascinating feature of the brain is seen very clearly in dendritic processing. Dendrites tell us something quite remarkable about how neurons communicate, and maybe point to a foundation for a new formal definition of biological computation, one that moves away from the sequential-state-transition, executable-function vibe. Axons tend to run straight through the neuropil, the dense fiber mesh, while dendrites actively reach out to them. They don't just receive signals passively; they seek them out. For me, there is a genuine interaction happening here. These protrusions are called dendritic spines, and they grow given particular processes that occur in them. One often neglects some of these features happening before the soma, so I do want to touch on some of them.</p><p>Dendritic spines function as nanoscopic biochemical and electrical compartments.
And they actively interact with the synaptic cleft, the in-between, before anything is forwarded to the parent dendrite. Even before it gets to the dendrite. What this means is that presynaptic information is consolidated in a massively parallel, interactive, distributed way that is timing-dependent. It is non-Markovian: it doesn't depend only on the one time step before. This is already computation, and I might be hinting at what type of computation. It looks interactional, and it's not easily stratified into any clean level, into any single-step executable maneuver. It's an organizational principle that is missing from artificial neural networks, where units simply collapse weighted inputs into a single sum.</p><p>Dendrites themselves are not passive cables. They are densely packed with voltage-gated ion channels, which allow this integration of interactions to happen actively rather than just relaying them. There are NMDA receptors as well. They introduce additional non-linearities by generating local dendritic spikes that can travel both toward and away from the soma, up the dendrite, which is fascinating. Sometimes this runs against the usual direction of information flow. This reverse signaling allows dendrites to detect the order of that timing I mentioned before. In other words, dendrites can retain a history and then choose, through interaction, which history is necessary. They exhibit inherently non-Markovian computational processes.</p><p>What we see here is that neural behavior is not a very simple input-output device; it's deeply hybrid and interactional. So far, we've been looking at hybrid computation inside neurons: dendrites, spikes, and ion channels. The story doesn't stop at this membrane level. There is another layer that is often underappreciated: electric fields.</p><p>Electric fields couple neurons beyond just synapses.
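As an illustration of how temporal order can matter, here is a deliberately simplified toy (not a biophysical model; the delays, time constants, and alpha-shaped EPSPs are assumptions): the same two inputs produce different somatic responses depending purely on their order, a minimal form of the history dependence described above.

```python
import numpy as np

def soma_peak(order):
    """Toy order detector (illustrative only). Two synapses: distal
    (assumed 8 ms propagation delay to the soma) and proximal (2 ms).
    The inputs arrive 5 ms apart in the given order; EPSPs are
    alpha functions that sum linearly at the soma."""
    t = np.arange(0.0, 60.0, 0.1)          # time axis in ms
    def epsp(onset):                        # alpha-shaped EPSP, tau = 3 ms
        s = np.clip(t - onset, 0.0, None)
        return (s / 3.0) * np.exp(1.0 - s / 3.0)
    if order == "distal_first":             # sequence sweeping toward the soma
        v = epsp(0 + 8) + epsp(5 + 2)       # arrivals nearly coincide (8, 7 ms)
    else:                                   # reverse order
        v = epsp(0 + 2) + epsp(5 + 8)       # arrivals well separated (2, 13 ms)
    return v.max()

inward = soma_peak("distal_first")
outward = soma_peak("proximal_first")
print(inward > outward)  # -> True: same inputs, different order, different output
```

The unit's response depends on more than the instantaneous input, which is the non-Markovian flavor the talk is pointing at, here reduced to its simplest possible caricature.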
Neurons don't just communicate chemically or through direct synaptic transmission. They also influence each other through continuous electric or ionic fields generated by collective activity. The brain has continuum dynamics that emerge only on that scale. These subthreshold couplings are called ephaptic interactions. They don't necessarily trigger spikes directly, but they modulate excitability. That modulation matters. It changes the probability landscape of firing, which in turn reshapes network dynamics. Oscillatory fields do something similar. They guide excitability across populations. They synchronize.</p><p><strong>[43:31]</strong> They scaffold activity patterns without requiring discrete synaptic events. So computation is not confined to spike-to-spike transmission. It extends throughout this continuous physical medium. The brain is not just a network of discrete units. It is an electrochemical continuum, a point I've made already. The material properties of that continuum matter. Tissue geometry, conductivity, ventricular structure: all of these shape how fields propagate.</p><p>This is active work now, modeling how the brain's physical substrate supports resonant oscillatory modes and how these modes might constrain neural dynamics, even at the scale of BOLD signals. This reinforces the idea: computation in the brain is not purely symbolic or purely discrete. It is embedded in a material field, in the material soup of things. Digital systems, on the other hand, can only approximate this continuity. Biological systems instantiate it, and this instantiation is important. Once again, we see hybrid continuous-discrete computation emerging. It's a real structural property.</p><p>So we might be wondering why this matters for consciousness. Here's where the real inference happens. We want to synthesize some of this from the biological ground-up approach.
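A toy sketch of that modulation idea (purely illustrative; the coupling rule, gains, and thresholds are invented for the example): a continuous "field" proportional to the population's mean subthreshold voltage never fires a spike by itself, yet it shifts the probability landscape of firing.

```python
import numpy as np

rng = np.random.default_rng(2)

# N threshold units receive noisy drive; a shared continuous "field"
# (an assumed coupling: proportional to the mean subthreshold voltage)
# slightly depolarizes every unit without ever crossing threshold itself.
N, T, threshold = 200, 500, 1.0

def run(coupling):
    v = np.zeros(N)
    spikes = 0
    for _ in range(T):
        field = coupling * v.mean()              # collective, continuous field
        v = 0.9 * v + 0.1 * rng.standard_normal(N) + 0.05 + 0.1 * field
        fired = v > threshold
        spikes += int(fired.sum())
        v[fired] = 0.0                           # reset after a spike
    return spikes

no_field = run(0.0)      # same population, field coupling switched off
with_field = run(0.15)   # subthreshold field coupling switched on
print(no_field, with_field)
```

The field term is always far below threshold, but it biases every unit's excitability, so the total spike count changes: subthreshold continuous coupling reshaping discrete events.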
While this might matter for consciousness, it is not a complete criterion for conscious or phenomenal existence. It is rather some biological notions that might be necessary but not sufficient if we are thinking about systems that could potentially be vessels of conscious existence.</p><p>First, subjectivity requires boundedness. To have "what it is like," a system must distinguish what belongs to itself and what does not. You need to define the boundaries of this "it." Boundaries are foundational, though they might not be closed. Boundaries require partial global control. Local processes are not enough; mechanisms must coordinate across the whole. I need to feel what it is like in order to know, and this might require electrochemical boundaries to exist and be detectable. Subjectivity would require partial closure from the environment.</p><p>Biological systems repurpose the same substrate across scales. This is the notion of closure across scales — across chemistry, electrochemistry, distributed signaling. Old mechanisms are reused for new global functions. Evolution builds new scales from old materials. Scale integration binds the whole into a single intrinsic perspective. Continuous processes capture organism-level boundaries. Discrete processes specialize and differentiate. Subjectivity and the notion of intrinsic existence may emerge from the coupling between these scales.</p><p>A very interesting example occurs with electric fish. 
They are a great example of a notion in neuroscience called efference copy: they send out a discrete electric pulse into the environment from an organ in their skin, and read the divergence of the resulting continuous electric field with another organ in the skin.</p><p><strong>[48:48]</strong> And in a way, this continuous field helps the fish distinguish self from other. This function, performed with both discrete pulses and continuous electric fields that it feels later, allows it to distinguish itself from the environment. This instantiation shows me that continuous fields are very useful for propagating signals that need to be read out near-instantaneously.</p><p>Regardless, that's an interesting notion of intrinsic existence and how it relates to some of these principles. I'm nearing the end. If we take some of these preceding arguments seriously, about metabolism, hybrid dynamics, and scale integration particularly, then it becomes difficult to think that simple scaling of current digital architectures will be sufficient for any form of sentience or intrinsic existence.</p><p>What we argue in the paper is not that artificial consciousness is impossible. Rather, if it is possible, the system would need to satisfy the tripartite criteria: hybrid computation, scale integration, and dynamico-structural co-determination.</p><p>This is really important. We are not claiming that these three conditions are sufficient for consciousness. They are at best necessary. I would call this an incomplete list. Even satisfying them may not be enough.</p><p>When it comes to implementation, one missing piece is this obvious coupling with the environment. Maybe Mike Levin, yourself, you've worked on this. I believe Anna Ciaunica is already working on some of these ideas. There are broad teams around the world that are thinking about this very seriously.
Without them, it's difficult to see how one can recover this intrinsic existence.</p><p>We also believe that without some of these biological primitives, it would be difficult or maybe impossible to recover this intrinsic, scale-integrated organization that we associate with subjectivity.</p><p>The implication is that if consciousness is realizable in synthetic systems, we may need fundamentally different computational paradigms, both at the hardware level and at the formal level. At the hardware level, possibly neuromorphic, possibly fluidic or field-based. But all of this remains an open question and something we are currently working towards, to see how we can implement something like this, if possible. I'm sure we will fail over and over again, but hopefully we will fail better.</p><p>I think the current debates have underappreciated the computational significance of biological organization itself. The structural, ontological primitives of computation that occur in biological systems are necessary, so a biologically-centered conception of computation might be a cool idea. Let's say we take the physical, metabolic, hybrid, and scale-integrated dynamics of neural tissue as foundational and build some formalism.</p><p>It's not about whether synthetic systems can be conscious, but whether we are building the right machines.</p><p>I'd like to thank you for listening. I'd like to thank the people who are implicated in some of the work I presented and who have helped guide and mentor me: Alain Destexhe, who is my current postdoc supervisor; Olivia Carter, who was my primary PhD supervisor; Thomas Andrillon, Lionel Barnett, and Anil Seth, who were part of the supervisory team as well. George, Ross, and Jeremy collected the 5-MeO-DMT data that I presented, and Jaan, of course, for being an incredible colleague, collaborator, and friend.</p><p>Thank you very much.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>&quot;On Biological and Artificial Consciousness&quot; by Borjan Milinkovic and Jaan Aru</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Borjan Milinkovic presents a case for biological computationalism in consciousness, discussing neural architectures, metabolic constraints, heterarchical scale integration, dendritic and field-based computation, and consequences for artificial consciousness.</itunes:subtitle>
<itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/KKrRLgPp6XI" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/27a27f19/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~53 minute talk + 14 minute Q&amp;A titled "On Biological and Artificial Consciousness: a case for biological computationalism" by Borjan Milinkovic and Jaan Aru.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Speaker background and research</p><p>(03:16) Motivation and philosophical landscape</p><p>(09:32) Von Neumann architecture</p><p>(15:39) Computational metaphor in neuroscience</p><p>(22:52) Metabolic constraints in neurons</p><p>(29:04) Heterarchy and scale integration</p><p>(33:55) Empirical scale integration findings</p><p>(38:00) Hybrid computation in dendrites</p><p>(43:16) Field-based neural computation</p><p>(47:48) Implications for synthetic consciousness</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a 
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00]</strong> Hello, everyone. I'm Boki Milinkovic, or Borjan Milinkovic, but I prefer Boki.</p><p>Today I'll be presenting on a topic that Jaan Aru, a colleague of mine, and I explored: the distinction between biological and artificial consciousness, and making a case for something that lies between the two camps that currently hold sway in the field. That QR code will guide you to the paper if you want to check it out.</p><p>I should say something about myself first so you know how I'm embedded in the wider research field and what I do. I'm currently a postdoc in Alain Destexhe's lab here at the University of Paris-Saclay. We generally work in multi-scale neural modeling, modeling neural dynamics across the single-neuron level, the neural population or mean-field level, and the whole-brain level. We generally do this with the intention of capturing the multi-scale or scale-integrated dynamics that necessarily underpin some global states of consciousness.</p><p>My work is primarily based on trying to build whole-brain models, including receptors, that might tell us something about the molecular action of psychedelics and how this propagates to large-scale activity, based on some indices of consciousness. 
Usually these are complexity indices or perturbational complexity indices, other information-theoretic measures applied at that level of dynamics, and modeling TMS and tDCS stimulation. This includes field dynamics that go beyond seeing how stimulation at one node propagates to another, which is a lot more discretized.</p><p>As I go on, you will see some of these ideas come through and where I might get them. I did my PhD on quantifying and qualifying information-theoretic measures of emergence in neural systems. This was done with a fantastic PhD supervisory team that consisted of Olivia Carter, Thomas Andrillon, Lionel Barnett, and Anil Seth. Some of these ideas, like scale integration, grew out of work I did during my PhD.</p><p>But that's enough about me. We should start to get a sense of where some of these ideas might be coming from. Hopefully this switches.</p><p>I should begin with something that might sound controversial, particularly given what we've published: I'm not primarily interested in artificial intelligence as it stands in the field right now, in whether these artificial systems can potentially develop a notion of sentience or consciousness. I apologize for making those words synonymous. I know in many fields they are not, and in mine as well, but just for the sake of argument.</p><p>I'm not interested in the way it is framed right now. My question runs in a different direction and has a completely different goal in mind. I'm more interested in: give me a cell or a neuron or a neural population and let me understand what it can actually do and compute.</p><p>The drive is to bring computational notions seriously and formally to the understanding of biological systems. In this way, I think there is a need to shift what we think computation is, but I'm really interested in formalizing what biological systems, and neural systems in particular, are capable of computing. 
This is the easiest way to describe the qualifier "computational": what we are actually ascribing when we call a biological process computational.</p><p><strong>[05:13]</strong> The reason is simple. Before we speculate about artificial consciousness, we need to understand what biology is doing in the first place. Only then can we ask whether it is possible to construct what I'm tentatively calling a phenomenal engine. I want to be clear: this is a completely open question, and I don't know whether we can build such a thing. That is precisely the motivation for the project: to move from conjecture to construction and from analogy to formalization.</p><p>First, we need to clarify the landscape in which this debate is embedded. Most of us are already familiar with what are often presented as two opposing positions. I'm about to oversimplify them; the simplification helps build some of the intuition we need here.</p><p>On one side, we have computational functionalism, the position that commits in one way or another to some form of substrate independence. The idea is that what matters for consciousness is the right information processing, typically at some privileged computational scale. If that organization is preserved, then in principle consciousness could be realized in systems very different from biological tissue. I often hear this contrast framed as silicon versus neurons. The comparison is misleading: those differ dramatically in scale and organization. If we want a fair comparison, it should be electric circuits versus neurons, or silicon versus carbon.</p><p>Setting that aside, on the other end of the spectrum we have biological naturalism. In its strong form, the view holds that biological systems are uniquely privileged in realizing subjectivity, that there is something about that particular tissue that makes possible what it is like. This something has often remained unexplained, ineffable. 
So reasons are given, but rarely formalized or expressed in computational or dynamical terms.</p><p>There is some great work out there that has really inspired us. It's the kernel of the paper; it's what motivated us to write it.</p><p>I don't think these are two categorical camps. They are extremes on a continuum, a spectrum of how relevant biology is for subjectivity. Our position sits somewhere along this spectrum. We are closer to biology, but not exclusively so.</p><p>My working intuition, and what I would like to suggest and convince you of throughout this talk, is that biology may require us to revise what qualifies as computation before we can meaningfully debate synthetic consciousness, a term I prefer because "artificial" has such a weight attached to its semantics. We need clarity about computation itself. My primary aim is to define computation precisely, in a way that is operational and formally usable in neuroscience and biology. That is the first task. Only after that can we responsibly ask how such principles might inform the construction of synthetic systems.</p><p>To begin, we need to clarify what computation traditionally means, both at the physical level of systems, physical hardware systems, and at the abstract level of computability itself, what is known as computability theory, particularly recursion theory. Only then can we assess whether biological systems are simply implementing this classical computation or whether they instantiate something "structurally different." Hopefully that term is something you hold on to as you go through this, to see what difference I'm trying to ascertain.</p><p>We'll start with digital systems. I've been asked whether this paper is really about digital hardware or about computability as the abstract formalism. The answer is that it's about both. And it has to be.</p><p><strong>[10:36]</strong> So biological computationalism is concerned with what computations are permissible under physics. 
And more specifically, which computations are permissible under physics as realized in biological systems. For that reason, we need to examine two things. First, the concrete architecture of digital machines, the von Neumann model, which actually instantiates algorithmic computation. And second, the abstract definition of computation itself, the formal notion of computability in the Church-Turing sense. I know there have been slight nuances in the way the Church-Turing thesis has been explained, but I will try to summarize a general version that holds the most utility and is hopefully the most accurate in the way we know it today. Only by considering both can we be precise about how structure and function relate in nervous systems, and how this might relate to synthetic systems as well.</p><p>Let's begin with the hardware, because the hardware tells a story. This story is a tale of separability.</p><p>Some of you will have seen this diagram in undergraduate computer science textbooks, but it's worth revisiting carefully. A classical von Neumann architecture is built from three core components. First, a memory unit, which passively stores discrete symbols. Second, the arithmetic and logic unit, the ALU, which manipulates those same symbols. And third, a control unit, which fetches instructions from memory, decodes them, and dispatches operations to the ALU. The instructions the control unit uses and the stored data share a uniform address space, one that is physically separated from the ALU. That separation is not an accident. It constrains how computation unfolds. This is the von Neumann bottleneck. These are modular separations of functions; we see clean boundaries.</p><p>In the brain, we might be tempted to map this onto the scale separations I talk about later. But here, the separation is explicit and engineered across modules. 
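The three components just described can be sketched in a few lines (an illustrative toy, not any real instruction set): one memory holds both program and data, a control loop fetches and decodes, and ALU-style operations execute on an accumulator.

```python
# Minimal stored-program toy: instructions and data live in ONE memory,
# the control unit fetches and decodes, the "ALU" executes.
def run(memory):
    pc, acc = 0, 0                     # program counter, accumulator
    while True:
        op, arg = memory[pc]           # fetch + decode (control unit)
        pc += 1
        if op == "LOAD":               # ALU-side operations
            acc = memory[arg]
        elif op == "ADD":
            acc = acc + memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Cells 0-3 hold instructions, cells 4-6 hold data: one uniform address
# space, every access squeezed through the same channel (the bottleneck).
mem = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
       4: 2, 5: 3, 6: 0}
result = run(mem)
print(result[6])  # -> 5
```

Note how the clean boundaries show up even in this toy: the memory never computes, the executing part never stores, and everything passes through one fetch channel per step.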
It can relate to scale separation in the brain, but it's not yet a nice definition of scale separation. The first major scale separation, in the sense we would like it to have in the brain, appears at the level of what is called the instruction set architecture. This is where the processor's instruction set abstracts away from the underlying digital circuits and transistors, allowing the same binary code to run across different hardware. That's what allows it: this instruction set layer and the compiler toolchain.</p><p>In other words, the algorithm and the computations performed by the effective procedure, the step-by-step procedure of the algorithm, are insulated and closed off from the physical hardware. That is the clean separation: the algorithmic computations are completely closed. Their effective procedure can occur at that level without recourse to the physical level. This insulation produces a form of closure of that level, which is precisely what computability theory in the Church-Turing tradition requires. It requires this closure of an abstract level of procedure.</p><p>I've stated something that's very important but will not be discussed here: non-algorithmic dependencies still exist. We're not blind to the fact that the physical system does physical things that help the software run. If I turn off my laptop, the software isn't running. But there is an algorithmic computational layer that is closed.</p><p>Interestingly, this closure, this insulation of algorithm from substrate, has also quietly shaped neuroscience, because the computational metaphor was inherited along with the hardware architecture.</p><p><strong>[15:55]</strong> And so the premise becomes, for computational functionalism anyway, that it has commitments directly traceable to these metaphors. Consciousness, that feeling of what it is like to be us, supervenes on algorithmic organization alone. 
If the right procedure is executed, the right function implemented, then that is sufficient. The substrate, the hardware beneath an instruction set architecture, becomes completely interchangeable. This is the substrate independence.</p><p>We see echoes of this in theoretical neuroscience. I want to touch on this. Some simulations are performed purely at the microscopic level, detailed neuronal models without recourse to these larger scale dynamics. Others operate purely at the mesoscopic level, mean field approximations, neural mass models. They're treated as complete descriptions in themselves. The convention of splitting scales in this way is not accidental. It's inherited from this computational thinking. This is the kind of figure to the right that you're seeing that is feeding this intuition.</p><p>To be clear, it has been enormously useful. I've worked on these models myself, as I mentioned at the start. They've taught us a great deal, but we have to recognize the limitation. Brains do not operate on clean, arbitrarily defined scales. They are not naturally decomposable into algorithmically closed levels. Another inheritance is this strong substrate independence that I've already mentioned. Neurons, silicon, carbon, in principle, don't matter. They're equivalently interchangeable when we're thinking about the properties that are necessary and sufficient for consciousness. That follows directly from the algorithm-implementation separation we discussed. Computation is abstract; physical realization is only secondary.</p><p>Once you assume closure at a given level of execution, something else comes along with it: this scale privilege, the commitment that there is a single computational level that realizes consciousness or any given biological function.</p><p>Finally, I want to touch on the last thing. I think there is the reduction of these neural computations to discrete semantics as well. Action potentials as ones and zeros, binary logic. 
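That ones-and-zeros picture is the McCulloch-Pitts abstraction, and it can be stated in a couple of lines: binary inputs, fixed weights, a hard threshold, and Boolean logic falls out.

```python
# A McCulloch-Pitts unit: binary inputs, fixed weights, hard threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0      # all-or-none "spike"

# Binary logic directly from threshold units: AND and OR gates.
AND = lambda a, b: mp_neuron((a, b), (1, 1), 2)
OR  = lambda a, b: mp_neuron((a, b), (1, 1), 1)
print(AND(1, 1), AND(1, 0), OR(1, 0))  # -> 1 0 1
```

This is exactly the discrete semantics the talk is pushing back against: the unit collapses weighted inputs into a single sum and emits a symbol, with no graded potentials, timing, or substrate in sight.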
This is the legacy left by the McCulloch and Pitts neuron, though it's interesting because even Turing himself did not restrict computation to binary symbols. His formalism began with natural numbers, and he explicitly considered continuous variables, though this was never worked out. So the binary reduction is not inevitable. It is in a way structural and architectural. This tenacity of the traditional computational metaphor might be feeding the computational functionalism camp.</p><p>This is a noble pursuit and a noble way of thinking, but we believe that there is a different way. Before we speak about this different way, we still need to go through the computational part. We need to touch on computability as a formal and abstract notion as well. I know it has a bit of a hazy history with slightly different distinctions that abound, and I have been trying to disambiguate these as a current work in progress, precisely to define a new form of computation.</p><p>One definition that stands out as essential, and I've tried to narrow it down to, is that computation is the procedural sequential execution of an algorithm in order to compute a mathematical function. Under this premise, algorithmic computability is defined following four ontological primitives about this structure of computation. One is that in application and in principle, it is based on discrete alphabets, binary or natural numbers. It is a closed system, and it is on a single scale.</p><p><strong>[21:23]</strong> So even with parallel processing, you can compress this into a sequential process that encodes this in a way, but it always is an encoding of the parallel processing rather than the parallel processing itself. And that's important. It executes next steps to compute a mathematical function. It always goes by this state transition procedure. Biological computation is nothing like this. It's discrete and continuous, given the biological medium in which it lives. 
It's an open system, both at the single-neuron level and at a more global level, such as your interactions with the environment at the subjective, phenomenal level. It is truly multi-scale, structurally, as an ontological primitive, not as something that can be simulated. And it isn't defined by just the execution of functions; there is a level of interaction going on that's different from mere execution.</p><p>To summarize: digital computation is cleanly decomposable, modular, and separable; algorithmic closure reigns supreme. Biological computation is none of this. Biological computation requires a revision of what qualifies as computation. Digital systems have a particular physical structure; Turing computation comes with particular ontological primitives. Digital systems, because of the way they are structured, modular, scalable, separable, scale by just adding more energy. We see this in current LLMs. But the brain scales in a different way: it scales by reorganizing computation and dynamics under particular constraints. Since it is about reorganizing computation, this is precisely a structural inequivalence with Turing computation.</p><p>Let's have a look at what biological neural tissue is doing. We begin from something simple: life has finite resources. What do metabolic constraints actually do in neural systems? My claim is that they shape the dynamics, and therefore the computations, of the system. They really structure them. Energy limits are not just peripheral. They are not simply a matter of speed, as if with more resources the same computation would just run faster. That's not what we're proposing. They are constitutive. They shape the ontology of neural tissue and of neural interactions, both structurally and dynamically. And that already marks a structural inequivalence with current digital systems. It's important to understand this carefully.</p><p>We see this at the level of ion channels. 
There is evidence that channel kinetics and activity are tuned for ATP efficiency. ATP is the energy source in biological systems. Hasenstaub's paper is informative. Instead of simply packing in more sodium channels to increase firing rate, some neurons adjust only their potassium conductances, because they don't need that much extra rate and the cost per spike is lower that way. In other words, demand for dynamical communication shapes the physical medium itself. Structure adapts to energetic demands. This is an instance of what we call dynamico-structural co-determination. It's one of the tripartite principles later on in the paper. We already see this at the ion channel level.</p><p>Another example, and one I find particularly fascinating given my work in emergence, is that not all neurons in the brain spike. For some neuroscientists, this is maybe old news. Outside of those circles, it's less frequently appreciated. There are non-spiking neurons that operate using graded potentials. What appears to be happening is quite interesting. They function as a form of coarse graining over incoming discrete synaptic inputs from presynaptic neurons.</p><p><strong>[27:01]</strong> So already at this level, we see an interplay between discrete and continuous signals. So that to me is an instance of hybrid computation already, or what we call hybrid computation. I formalize the notion more carefully later, which I will speak about, but what was already shown in the 90s is that continuous transmission through these graded potentials can carry more bits of information per second than discrete spike trains. That, to me, is striking. That's the figure below from Laughlin. And it suggests that the system is not choosing to be multi-scale in some abstract sense. Rather, this multi-scale organization is something that emerges from metabolic and functional demand.
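</p><p>A back-of-the-envelope way to see how a continuous signal can out-carry a spike train is to compare a band-limited Gaussian channel with a timing-limited spike code. The numbers below are my own assumed round figures, not Laughlin's measurements, and the spike-train expression is only a crude estimate.</p>

```python
import math

# Crude information-rate comparison (assumed illustrative numbers, not data
# from the Laughlin figure referenced in the talk).

def graded_capacity_bits_per_s(bandwidth_hz, snr):
    # Shannon capacity of a band-limited Gaussian channel: B * log2(1 + SNR).
    # A graded potential is modelled here as such a continuous channel.
    return bandwidth_hz * math.log2(1 + snr)

def spike_timing_bound_bits_per_s(rate_hz, timing_precision_s):
    # Rough timing-code estimate: each spike, timed to the given precision,
    # conveys about log2(1 / (rate * precision)) bits.
    bits_per_spike = math.log2(1.0 / (rate_hz * timing_precision_s))
    return rate_hz * bits_per_spike

graded = graded_capacity_bits_per_s(bandwidth_hz=100, snr=100)       # ~666 bits/s
spiking = spike_timing_bound_bits_per_s(rate_hz=50,
                                        timing_precision_s=0.002)    # ~166 bits/s
```

<p>Under these assumed parameters the continuous channel carries several times more bits per second, which is the qualitative point the talk draws from the 90s results.</p><p>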
Because many spiking neurons converge onto a single non-spiking neuron, the latter effectively spatially coarse-grains the incoming signals. And it's not an anatomical curiosity. It's the real deal. It has computational and informational consequences. So projecting information through graded potentials rather than spike trains reduces metabolic costs while expanding transmission capacities. This is a continuous signal. So here we are seeing something concrete.</p><p>What emerges from this picture, or metabolism's dual outcome, is, first, that metabolism binds dynamics to the physics made possible by biological substrates, and it induces this dynamico-structural co-determination. It necessarily induces something that already distinguishes biological systems, which is that computational time is physical time. Digital systems merely approximate this, just as they can only approximate continuity; biological systems directly instantiate it. It's important.</p><p>Second, metabolic constraints make scale separation way too costly. Processes must reuse stuff across scales. As a result, this clean hierarchy compresses into something else, a heterarchy. It is not a flat, single-scale system. It is not a strictly layered hierarchical one with algorithmically closed levels that interact only at some fixed interfaces. It is something in between, a heterarchy. A heterarchy also comes with other notions, but primarily, this is the intuition here. And scale integration then emerges as an optimization strategy, as a metabolic optimization strategy. It defines this heterarchical nature of the dynamics in the brain. So no privileged scale actually exists, because there cannot be one. It is not a clear hierarchy. And this is developed much more in the paper, of course. So since energy shapes computation, biological computation may be structurally inequivalent to computability in the Church-Turing sense. This is what I mean.
So under scarcity, the brain cannot afford the separability of scales. So as I've already touched on, the form of organization that emerges is heterarchical and scale integrated. It is multiscale, of course, but it isn't hierarchical. And once energy becomes a constitutive constraint here, clean functional, dynamical separability just becomes too costly. Neural systems can't afford fully independent layers. Instead, these processes must be reused and integrated across scales, what we call accretion, which develops over evolution. So what emerges is not really a hierarchy of control, and that's the key part, but a heterarchy of distributed constraint. Here you get distributed processes occurring, and it is more constraint than complete control. Constraints are, of course, controlling in a sense. But this slight softening really makes the case to me of what's going on in neural dynamics. So in a hierarchy, scales are ordered and separable. In a heterarchy, no scale is privileged. There is no single fundamental computational unit. There's a heterarchy of scales. There is no fixed unit size at a given scale, no algorithmically closed layer. Computation is distributed. Scales co-determine one another, as well as then determining the structure, as we saw before. Scale integration is therefore not decorative at all; it is what defines the organization itself. And conscious processing may require not just region-to-region binding on the same scale, but also scale-to-scale tethering or integration.</p><p><strong>[32:38]</strong> And that's the proposition. But this isn't just speculation. This notion of scale integration is not purely theoretical or speculative. It actually comes from empirical results that colleagues and I have obtained while working on information-theoretic measures of emergence in neural systems.
And these measures of emergence formally capture scale integration.</p><p>The measure I'm speaking about is dynamical independence, which was originally developed by Lionel Barnett with Anil Seth. It captures the dependence between microscopic dynamics and the lower-dimensional macroscopic variable or state space in which those dynamics can be expressed.</p><p>In this dynamical independence framework, the higher the dynamical dependence between those scales, the more tightly integrated they are. We applied this framework across different conscious states: anesthesia with propofol, xenon, and ketamine; across sleep stages; and under 5-MeO-DMT, a potent psychedelic.</p><p>We consistently observed that wakefulness shows higher scale integration across macroscopic dimensionalities. These are functional, dynamical scales. For both propofol and xenon, the higher the dynamical dependence, the higher the scale integration. Ketamine does not simply reduce scale integration. In some cases, it preserves or even increases it compared to wake. Some anesthetics untether scales, while others tether them differently. That remains an open question and something I wish to explore further.</p><p>We see a similar pattern in sleep. Wake and N1, which is the first liminal sleep phase — dreamy and very creative — show high scale integration; deep sleep and even REM show lower integration. The landscape of scale integration is frequency specific: in the alpha band, the defining band of wakefulness, emergent macroscopic dynamics are more tightly integrated in both N1 and wake. But in deep sleep, the delta band shows its own maximality.</p><p>This begins to paint a consistent picture across conditions. That's a link to the anesthesia preprint. This shows wakefulness tends to exhibit stronger dynamical scale integration than anesthesia or sleep, and we even have data for psychedelics.
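</p><p>For intuition, here is a toy proxy for what "dynamical dependence" between scales means. This is a simplified illustration of my own, not the Barnett-Seth estimator: it just asks how much the full micro state improves prediction of a coarse-grained macro variable beyond the macro variable's own past.</p>

```python
import numpy as np

# Toy proxy for dynamical dependence between scales (an assumed illustration,
# not the actual dynamical-independence estimator of Barnett and Seth).
rng = np.random.default_rng(0)

# Simulate a stable 4-variable linear system as a stand-in for micro dynamics.
A = np.array([[0.5, 0.2, 0.0, 0.0],
              [0.1, 0.5, 0.2, 0.0],
              [0.0, 0.1, 0.5, 0.2],
              [0.2, 0.0, 0.1, 0.5]])
T = 5000
x = np.zeros((T, 4))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(scale=0.1, size=4)

macro = x[:, :2].mean(axis=1)  # a coarse-grained macro variable (average of two units)

# Residual variance when predicting macro[t] from the macro's own past ...
b1, b0 = np.polyfit(macro[:-1], macro[1:], 1)
res_macro = np.var(macro[1:] - (b1 * macro[:-1] + b0))

# ... versus from the full micro state x[t-1] (ordinary least squares).
w, *_ = np.linalg.lstsq(x[:-1], macro[1:], rcond=None)
res_micro = np.var(macro[1:] - x[:-1] @ w)

# If the micro past predicts the macro future better than the macro's own
# past, the two scales are dynamically dependent, i.e. tightly integrated.
dependence_proxy = res_macro - res_micro  # > 0 means micro adds information
```

<p>The actual work estimates this with proper spectral, multi-dimensionality machinery; the toy only conveys the direction of the logic: tighter scale coupling means the macro level is not dynamically closed.</p><p>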
Again, wake shows stronger scale integration across macroscopic sizes and frequency bands, with gamma-band deviations where DMT shows more scale integration than wake.</p><p>This is an interesting point to think about. We already have ways to confront this constructively and empirically. We'll see where this research leads.</p><p>The corollary is important. Consciousness is neither reducible to micro-level dynamics nor fully explicable by macro-level functional patterns alone. It emerges from interdependence across scales, from the coupling between dynamical levels of different dimensionalities. Different scales, truly heterarchical. There is no privileged scale of dynamics. Consciousness appears to reside in this integration across scales.</p><p>This brings us to our last pillar, hybrid computation, where we see subthreshold activity driving discrete transitions at the subcellular level. I won't spend much time here: non-spiking graded potentials.</p><p><strong>[38:05]</strong> But another fascinating feature of the brain is seen very clearly in dendritic processing. Dendrites tell us something quite remarkable about how neurons communicate, and maybe point to a foundation for a new formal definition of biological computation, one that moves away from this sequential state-transition, executable-function vibe. Axons tend to run straight through the neuropil, which is this dense fiber bundle, while dendrites actively reach out to them. They don't just receive signals passively; they seek them out. For me, there is a genuine interaction happening here. These outgrowths are called dendritic spines, and they grow through particular processes that occur within them. One often neglects some of these features happening before the soma. I do want to touch on some of them.</p><p>Dendritic spines function as nanoscopic biochemical and electrical compartments.
And they actively interact with the synaptic cleft, the in-between, before anything is forwarded to the parent dendrite. Even before it gets to the dendrite. What this means is that presynaptic information is consolidated in a massively parallel, interactive, distributed way that is timing dependent. It is non-Markovian. It doesn't only depend on the one time step before. This is already computation, and I might be hinting at what type of computation: it looks interactional, and it's not easily stratified into any clean level, into any executable single-step maneuver. It's an organizational principle that is missing from artificial neural networks, where units simply collapse weighted inputs into a single sum.</p><p>Dendrites themselves are not passive cables. They are also densely packed with voltage-gated ion channels, which allow this integration of interactions to happen actively rather than just relaying signals. There are NMDA receptors as well. They introduce additional non-linearities in the system by generating local dendritic spikes that can travel both toward the soma and away from it, up the dendrite, which is fascinating. Sometimes this is against the usual direction of information flow. This reverse signaling allows dendrites to detect the order of that timing I mentioned before. In other words, dendrites can retain a history and then choose, through interaction, which history is necessary. They exhibit inherently non-Markovian computational processes.</p><p>What we see here is that a neuron is not a very simple input-output device; its behavior is deeply hybrid and interactional. So far, we've been looking at hybrid computation inside neurons, dendrites, spikes, and ion channels. The story doesn't stop at this membrane level. There is another layer that is often underappreciated. These are electric fields.</p><p>Electric fields couple neurons beyond just synapses.
Neurons don't just communicate chemically or through direct synaptic transmission. They also influence each other through continuous electric or ionic fields generated by collective activity. The brain has continuum dynamics that emerge only on that scale. These subthreshold effects are called ephaptic interactions. They don't necessarily trigger spikes directly, but they modulate excitability. That modulation matters. It changes the probability landscape of firing, which in turn reshapes network dynamics. Oscillatory fields do something similar. They guide excitability across populations. They synchronize.</p><p><strong>[43:31]</strong> They scaffold activity patterns without requiring discrete synaptic events. So the computation is not confined to spike-to-spike transmission. It extends throughout this continuous physical medium. The brain is not just a network of discrete units. It is an electrochemical continuum. It's the point that I've made already. The material properties of that continuum matter. Tissue geometry, conductivity, ventricular structure — all of these shape how fields propagate.</p><p>This is active work now, modeling how the brain's physical substrate supports resonant oscillatory modes and how these modes might constrain neural dynamics, even at the scale of BOLD signals. This reinforces the idea: computation in the brain is not purely symbolic or purely discrete. It is embedded in a material field, in the material soup of things. Digital systems, on the other hand, can only approximate this continuity. Biological systems instantiate it, and this instantiation is important. Once again, we see hybrid continuous-discrete computations emerging. It's a real structural property.</p><p>So we might be wondering why this matters for consciousness. Here's where the real inference happens. We want to synthesize some of this from the biological ground-up approach.
While this might matter for consciousness, it is not a complete criterion for conscious or phenomenal existence. These are rather biological notions that might be necessary but not sufficient if we are thinking about systems that could potentially be vessels of conscious existence.</p><p>First, subjectivity requires boundedness. To have "what it is like," a system must distinguish what belongs to itself and what does not. You need to define the boundaries of this "it." Boundaries are foundational, though they might not be closed. Boundaries require partial global control. Local processes are not enough; mechanisms must coordinate across the whole. I need to feel what it is like in order to know, and this might require electrochemical boundaries to exist and be detectable. Subjectivity would require partial closure from the environment.</p><p>Biological systems repurpose the same substrate across scales. This is the notion of closure across scales — across chemistry, electrochemistry, distributed signaling. Old mechanisms are reused for new global functions. Evolution builds new scales from old materials. Scale integration binds the whole into a single intrinsic perspective. Continuous processes capture organism-level boundaries. Discrete processes specialize and differentiate. Subjectivity and the notion of intrinsic existence may emerge from the coupling between these scales.</p><p>A very interesting example occurs with electric fish.
They are a great example of a notion in neuroscience called efference copy, where they send out a discrete electric pulse into the environment from a particular organ in their skin, and read the divergence of that continuous electric field in another organ in the skin.</p><p><strong>[48:48]</strong> And in a way, this continuous field helps the fish distinguish self from other: this function, performed with both discrete pulses and continuous electric fields that it senses afterwards, allows it to distinguish itself from the environment. This instantiation shows me that continuous fields are very useful for propagating signals that need to be read out instantaneously.</p><p>Regardless, that's an interesting notion of intrinsic existence and how it relates to some of these principles. I'm ending. If we take some of these preceding arguments seriously about metabolism, hybrid dynamics, and scale integration particularly, then it becomes difficult to think that simple scaling of current digital architectures will be sufficient for any form of sentience or intrinsic existence.</p><p>What we argue in the paper is not that artificial consciousness is impossible. Rather, if it is possible, the system would need to satisfy the tripartite criteria. These are hybrid computation, scale integration, and dynamico-structural co-determination.</p><p>This is really important. We are not claiming that these three conditions are sufficient for consciousness. They are at best necessary. I would call this an incomplete list. Even satisfying them may not be enough.</p><p>When it comes to implementation, one requirement is this obvious coupling with the environment. Maybe Mike Levin, yourself, you've worked on this. I believe Anna Ciaunica is already working on some of these ideas. There are broad teams around the world that are thinking about this very seriously.
Without them, it's difficult to see how one can recover this intrinsic existence.</p><p>We also believe that without some of these biological primitives, it would be difficult or maybe impossible to recover this intrinsic, scale-integrated organization that we associate with subjectivity.</p><p>The implication is that if consciousness is realizable in synthetic systems, we may need fundamentally different computational paradigms, both at the hardware level and at the formal level. At the hardware level, possibly neuromorphic, possibly fluidic, field-based. But all of this remains an open question and something we are currently working towards, to see how we can implement something like this, if possible. I'm sure we will fail and fail over again, but hopefully we will fail better.</p><p>I think the current debates have underappreciated the computational significance of biological organization itself. The structural ontological primitives of computation that occur in biological systems are necessary, so a biologically-centered conception of computation might be a cool idea. Let's say we take the physical, metabolic, hybrid, and scale-integrated dynamics of neural tissue as foundational and build some formalism.</p><p>It's not about whether synthetic systems can be conscious, but whether we are building the right machines.</p><p>I'd like to thank you for listening. I'd like to thank the people who are involved in some of the work I presented and who have helped guide and mentor me: Alain Destexhe, who is my current postdoc supervisor; Olivia Carter, who was my primary PhD supervisor; Thomas Andrillon, Lionel Barnett, and Anil Seth, who are part of the supervisory team as well. George, Ross, and Jeremy collected the 5-MeO-DMT data that I presented, and Jan, of course, for being an incredible colleague and collaborator and friend.</p><p>Thank you very much.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Platonic Space discussion 3</title>
          <link>https://thoughtforms-life.aipodcast.ing/platonic-space-discussion-3/</link>
          <description>This 1h44m roundtable on the Platonic Space Hypothesis explores platonic forms in biology, xenobots, symbiosis, competency spaces, play and plasticity, Markov blankets, thermodynamics, and will-to-live dynamics.</description>
          <pubDate>Thu, 05 Feb 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 6984e0e249688900014cacda ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/E6_XdPm9fa8" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/4d1e68e2/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour 44 minute discussion among contributors to the Platonic Space Hypothesis (<a href="https://thoughtforms.life/symposium-on-the-platonic-space/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life/symposium-on-the-platonic-space/</a>).</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Platonic forms and interaction</p><p>(07:01) Xenobots and evolutionary cost</p><p>(11:23) Symbiosis, rectifiers, embeddings</p><p>(18:06) Free lunches in biology</p><p>(27:59) Topology of competency space</p><p>(39:53) Exploration, play, affordances</p><p>(48:01) Markov blankets and evolution</p><p>(57:51) Relational life and observers</p><p>(01:04:10) Defining play and plasticity</p><p>(01:20:35) Thermodynamics and domesticated play</p><p>(01:29:32) Will to live dynamics</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a 
href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Michael Levin:</strong> Welcome, everybody. We're open for another discussion of the Platonic space. If anybody has any issues or questions for each other, please, now's the time.</p><p><strong>[00:15] Unknown:</strong> I've got questions or comments. I don't want to monopolize things. Go for it.</p><p><strong>[00:19] Michael Levin:</strong> Go for it.</p><p><strong>[00:20] Unknown:</strong> When I prepared my talk, I had not read what you wrote, Mike, because I wanted to just comment: what's the biology, unbiased, what is it telling me? I've since gone back and read these things. My impression with regard to the Plato concept is that unless there's a proposal for where these forms are stored and what the mechanisms are for pointing and ingressing, how you consult these forms, the whole project is indistinguishable from mathematics, physics, and biology all having patterns and rules. Why there are any rules and how they get consulted, we don't know. 
I was wondering whether the intent of this whole session, the series of lectures, was to see if somebody could come up with, what is the storage site? What are mechanisms for pointing and ingressing? My impression is that at least a number of the talks, certainly mine and Gordana's, are that you don't really need to have forms outside somewhere that are being consulted. You can have them just as rules and constraints within the biological organism itself. Or, as my quantum mechanics professor used to joke, it took you a little while to do the homework, but every electron and proton in the whole universe is doing it like that all the time. Has there been any insight into whether there is consulting or whether it really just comes back to there are laws somewhere and we just need to find them out?</p><p><strong>[02:09] Michael Levin:</strong> I would say there's a couple of things. First of all, I didn't think that these other talks would address that question in particular, because fundamentally I think there are many people who don't agree with my framing of it in the first place. So that's step one: to even say whether this whole thing is even — I'm arguing for a strong interactionist model. Before you can worry about the interaction, you have to think that there is interaction, and some people don't. However, what I would focus on and what our research program actually focuses on is trying to understand what it is that you actually get during this interaction. For example, are they just constraints? Are they enablements? Lots of people say it's not just constraints, it's also enablements. But I think enablements can be taken much more seriously. That is not just that by closing off some stuff over here, I forced you into this other set of things that you're going to do. In the sense of free lunches, or heavily discounted lunches, you get more than you put in. I'm really interested in this idea of what you get out of such a thing. 
By putting in some amount of effort to make an interface, what you actually get through it is in some quantifiable way more than you put in. In other words, what you get are not simply constraints on things that you can't do, nor being shuttled into other modes, but actually you get policies, maybe information — static patterns, maybe actual compute in the form of virtual machines. You get something that you didn't pay for in an important sense. Because I think the current way of calculating what you paid for only takes into account this side of the interaction. My strong supposition and my hypothesis is that you get way more than you paid for. I think biology in particular — the things we call biology tend to be systems that exploit that; they are very good at exploiting these things, saving effort on things that they did not need to evolve or find or search for. In biology it's very hard to quantify that because it's always complex. There are always mechanisms you don't know, and it's really hard to prove any of that. But in simpler systems and in simple computational systems we may be able to. That's one of the things we're doing: trying to quantify how much you put in and what you get out in these simple systems.</p><p><strong>[04:55] Unknown:</strong> Since you put it that way, I'll mention something that I didn't put in my talk, but it is in the causality paper from a couple of years ago, which is Richard Levins's idea from across town from you. He was at Harvard School of Public Health. The idea, which goes all the way back to Waddington's theoretical biology books in the '70s — there are a couple of nice papers back there — is that essentially the way you get some structure is by crystallizing out of an amorphous mass. In other words, what happens is biology starts off, it's not that you build new things, it's just that you start off doing many things badly. By adding constraints, you exclude many of those things. Now the remaining ones are done well. 
You can see this, for example, in neurobiology, where in lower animals there'll be a brain nucleus that does two or three things connecting to two or three places. In higher organisms, it splits into two, each one of which takes on one of those tasks. Or in the genome: in E. coli, the whole genome's accessible. In eukaryotes, you suppress everything with histones, and now you selectively de-repress stuff. What you've done then is essentially increase the signal-to-noise ratio of stuff you already have. Enzymes are another classic. All these chemical reactions can go on without the enzyme, but 12 other side reactions also happen. If you have an enzyme, you're essentially preventing some of those by the nature of the active site. Only a couple of them happen, and they happen much faster and much better. This signal-to-noise ratio, which is what you're talking about — the ratio of inputs to output — is probably exactly the thing to be looking at.</p><p><strong>[07:01] Michael Levin:</strong> I think that's certainly one subset of those phenomena. In biology, even though it's much harder to prove anything in this scenario and hard to quantify these things, we do now with some of the synthetic models that we and others have made, xenobots, anthrobots, there's an opportunity and a challenge now for biologists to be able to say, when was the computational cost paid to design these things? In other words, we know when the frog and the human design was paid for, it was in the millions of years of selection for specific features. But when you create something that's never been here before, and it has certain competencies, you want to know where did those come from and when did we pay for them? I don't think it's good enough. When I ask people this question, they generally say, well, it has an evolutionary history. It just learned to do that when it was being selected for other stuff. That's OK, except A, it provides zero explanatory value. 
It just means that whatever other weird thing pops up, you'll just chalk it up to the history. And B, it rips up a large part of what I thought evolutionary theory was supposed to do, which is provide a tight specificity between the history of environments and the properties that you got out the other end. You're supposed to be able to say, this thing looks and acts this way because it has a history of selection going back, and everything else died out. So if you're willing to rip that up and just say, well, whatever your history was, you can end up with pretty much anything. I think we're supposed to do better than that. I think we're supposed to have some kind of theory to be able to say more than just the developmental plasticity. We're able to say why is it that we selected for all of these things? And also, by the way, in a novel configuration, all of this other new stuff works that's never been evaluated before. Hard to quantify, but at least we can start looking for theory that does better than it's emergent, it just showed up.</p><p><strong>[09:10] Unknown:</strong> I thought that was the Evo-Devo program, or maybe it's because I only talk to people like Gunter Wagner who think that environment is not the whole story. There is a set of rules somewhere. There's some other set of constraints on how you build an organism that functions.</p><p><strong>[09:35] Michael Levin:</strong> The constraints. I think this is more than constraints. Andreas Wagner gets really close to this. He doesn't quite come out saying it, but he has this book, "Arrival of the Fittest," which I think asks exactly this question: okay, you can sort of select out the bad stuff, that's great. Where does the good stuff come from? Specifically, constraints are one thing. But when you get significant competencies out of it, maybe it's more than constraints. Maybe by building certain interfaces you're tapping into something that provides a bigger return on investment. 
I can think of a number of examples of that. I think learning to predict, facilitate it when we want it, suppress it when we don't want it, because there are scenarios where that happens. If everybody knows you get complexity like that, you get unpredictability, maybe you get perverse instantiation in a life context, but it's not just that. It's not just complexity and unpredictability. It's competencies that would be recognizable to any behavior scientist. Somewhere along the spectrum of maybe low to higher, that requires explanation because if you don't have a long history of selection for it and you don't have direct engineering or design for it, we're looking at something additional to that. There are knowledge gaps around where that stuff comes from.</p><p><strong>[11:23] Unknown:</strong> Not that I have the answer. I'm simply asking. Do we really need the environment for that? Meaning a symbiotic relationship between two species can explain creating noise in one species, which the other species will use in a different way. So for species A it's noise, but for species B that's food. By increasing that, you're actually increasing the other. The fitness function is not only on group A; it's actually on group B that influences group A. So you're creating a symbiotic connection that really complicates the way to describe what is good and what is not good.</p><p><strong>[12:16] Unknown:</strong> The question is, where is the directionality coming from? You're saying you need a rectifier. Here's somebody who's making noise. Here's somebody else who can use that noise and put direction on it, which is exactly a Carnot heat engine. You've got random motion, and you now put it in a piston and you can direct it into a higher level of work, which is a macroscopic thing. It seems to me that what biology needs to have done then is have invented some little module that is a rectifier and could take noise of whatever kinds and make something useful out of it. 
I had wondered whether a couple of theorems that showed up in these talks might be clues as to how you design such a thing. What would it look like? How would nature have designed it? The two things that struck me were, one, in that Platonic representation paper by Huh, Cheung, Wang, and Isola. You have this theorem about if you have vector embeddings, you want a vector embedding in which the similarity between the observation and the constructs you're trying to make is the same as pointwise mutual information. I don't understand that, but I get the flavor of it. It seems like the sort of computation or constraint or requirement that might tell you what is the kind of thing you have to build that is guaranteed to make useful stuff. Then selection can go figure out which useful stuff. So it's like Lego blocks — you're always gonna make something because of the way they go together. The other one was this thing that came up, the Markov blanket theorem, which sounds like it goes back to Ross Ashby's thing about if you're gonna have a regulator, it has to have a model of what's being regulated. I was familiar with Ashby, but I see that there are debates about whether he ever actually proved that. Has there subsequently been a proof of that? That would be another kind of thing that you could imagine being a requirement for making biological modules that are very likely to make useful stuff, and all you have to do is recombine them or whatever.</p><p><strong>[15:16] Brian Chung:</strong> The paper we had in the appendix discussed this notion that the kernel itself, the relationships between embeddings, maps to point-wise mutual information of two events in probability space. Point-wise mutual information is a log ratio of probabilities, and the idea is that the kernels are becoming the reflection of the mutual information shared between those two embedded objects. This assumes bijectivity and other things that are not necessarily practical, so it's not an ideal proof. 
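The pointwise mutual information quantity being discussed has a simple concrete form: PMI(x, y) = log [ p(x, y) / (p(x) p(y)) ]. A minimal sketch with made-up co-occurrence counts (the words and numbers are illustrative assumptions, not from the paper under discussion):

```python
import math

# Toy co-occurrence counts between words and contexts (illustrative numbers).
counts = {
    ("doctor", "hospital"): 40, ("doctor", "clinic"): 30, ("doctor", "bank"): 2,
    ("nurse",  "hospital"): 35, ("nurse",  "clinic"): 25, ("nurse",  "bank"): 2,
    ("river",  "hospital"): 1,  ("river",  "clinic"): 1,  ("river",  "bank"): 50,
}
total = sum(counts.values())

def pmi(x, y):
    """Pointwise mutual information: log of p(x, y) / (p(x) * p(y))."""
    p_xy = counts.get((x, y), 0) / total
    p_x = sum(v for (a, _), v in counts.items() if a == x) / total
    p_y = sum(v for (_, b), v in counts.items() if b == y) / total
    return math.log(p_xy / (p_x * p_y)) if p_xy > 0 else float("-inf")

print(pmi("doctor", "hospital"))  # positive: pair co-occurs more than chance
print(pmi("river", "hospital"))   # negative: pair co-occurs less than chance
```

Embeddings whose inner products (kernel values) track these PMI scores would place "doctor" near "hospital" and far from "river", which is the convergence the speakers describe.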
That was the kind of thing we were getting at: the kernel, meaning the embeddings and their relationships, converging to something equivalent to point-wise mutual information, given some mathematical assumptions.</p><p><strong>[16:15] Unknown:</strong> That's selecting out a special kind of object.</p><p><strong>[16:20] Brian Chung:</strong> The embedding is not necessarily all possible objects. It's whatever the model chose to compress its representation towards.</p><p><strong>[16:37] Unknown:</strong> Is there an idiot's version, say a biophysicist's version, of the mathematics of that one could wade through and maybe understand in detail how it all plays out? In other words, for those of us who don't think about embeddings all day.</p><p><strong>[17:03] Brian Chung:</strong> Unfortunately, I don't know about biophysics. I'm on the other side; I think about embeddings. If there are notions of co-occurrence and the probability of co-occurrence, that's what pointwise mutual information is reflecting. That is what a kernel is. It's saying that these embeddings had to be embedded close together because they co-occur frequently. So the notion in language is that words' meanings derive from the company they keep. Objects' meanings derive from the things they co-occur with.</p><p><strong>[17:40] Unknown:</strong> I'll incubate on that some more. Thank you. If nobody else wants to jump in on it, I can keep throwing in comments. You jump in. I'm wondering.</p><p><strong>[18:06] unknown:</strong> Oh.</p><p><strong>[18:07] Unknown:</strong> Yeah, go ahead. My bad.</p><p><strong>[18:10] unknown:</strong> I'm wondering whether we have a better sense of which mathematical objects have free lunches. Attractors might have them. To take a different example, the sorting algorithms paper has a transitive global objective, and we can see the algorithm getting to some sub-objective. 
I'm wondering if we have a set of objects that we know are potent.</p><p><strong>[18:40] Michael Levin:</strong> We've been playing with this, taking different ones and trying to see what they offer. There will be some work on this coming soon. The difficulty with all of that is that it's a two-way IQ test. As always, when you're trying to gauge what that is, it's only as good as we know how to notice it. The clustering thing — the only reason we found it is because I thought to look for it, but there's probably 1,000 other things we haven't thought to look for. On the biological side, for cells, tissues, and gene regulatory networks, we are still very much looking for suites of tools to identify novel competencies that we haven't found yet. And ideally (there's no such thing as unbiased, but ideally) tools as differently biased as possible from what humans have been looking for all these years. The same thing applies here. I would like to deploy the exact same tools on all of this so that we could try to find as many competencies as we can in different spaces. But I think we are limited primarily by our imagination.</p><p><strong>[19:50] unknown:</strong> I suppose my other question is about free lunches; I think it's really clear to me what a free lunch is at a very low level. An attractor feels like a free lunch: I'm within the basin; now I know where I'm going. But as a human, when I think about a free lunch, I just think of doing something that costs me less to get more. It's not really a free lunch, but it's a place where we're not dealing with a zero-sum game. What I'm wondering is, what's the difference there?</p><p><strong>[20:28] Michael Levin:</strong> I don't mean literally free, because you still have to build the interface, so it's not gonna be free, but some sort of heavily discounted lunch. Here's a dumb example that I've used. Let's say that in some universe, the highest fitness belongs to a particular triangle. 
You crank a bunch of generations and you find the first angle and you crank a bunch more generations, you find the second angle. Now, the third one you don't have to look for because you get this amazing free gift that once you know two angles, you know the third. In some sense, evolution just saved 1/3 of its time, because if you didn't have that, you would have to go find the third angle. That kind of thing is a constraint, but for biology it's not so much a constraint, it's an enabling feature. It means you can go faster. There's tons of stuff like that, that you get these things, these mathematical relationships of facts of computation, where you don't have to do the whole truth table once you have your voltage-gated ion channels, you've got your transistors, the truth table comes naturally after that. These properties you don't have to go look for, they're handed to you. Biology is precisely the set of things that exploit those kinds of things.</p><p><strong>[21:51] Brian Chung:</strong> So I want to add to this notion of the free lunch, because this interesting phenomenon that we see in the AI models is that algorithms that don't normally work are working a lot better now as a model gets to a certain level of competency. So things like evolutionary search and reinforcement learning—if you try to train your model from scratch, it'd be hopeless. But if you do it on a model that's already pre-trained, it works remarkably well. So there are papers now showing that if you do evolutionary search on models that are 7 billion parameters or more, you would think that would not work at all because it's 7 billion parameters. That's a very high dimensional space. 
But evolutionary search perturbations can give you performance improvements on downstream things, which raises the question: as things become more competent, things that didn't work previously seem to be working a lot more effectively now.</p><p><strong>[22:47] Michael Levin:</strong> Would you mind popping some links into the chat? I haven't seen those from the CS side of things, but I'll tell you from the biology: this is something that we've been writing about for a while, that evolution, I think, works quite differently on a competent substrate. So when you have cells that can actually solve problems on their own, it's a completely different story, because if the mapping between a genotype and phenotype is not hardwired, if it's actually an interpretation, an intelligent interpretive process, then some very interesting things happen to evolutionary search: it goes much faster and it finds much more interesting things. So having that middle layer, the translation layer, which is morphogenesis, basically, having that be competent greatly potentiates evolutionary search.</p><p><strong>[23:41] Brian Chung:</strong> I imagine these LEGO blocks: the chance of the molecules forming a cube is very low, but LEGOs forming a cube is much higher, in the sense that random perturbations create something that is structured.</p><p><strong>[23:53] Michael Levin:</strong> There's another effect here. One of the things that morphogenesis is very good at is getting to the same final outcome even when things change. If you change up the circumstances, it's really good at getting to the same thing. For example, if you make a tadpole where the mouth is on the back of the head, eventually that mouth will come around to where it needs to be and you get a normal frog. We made these things called Picasso tadpoles where we scramble the facial organs: the mouth is out here, the eyes in the back. 
They still make normal frogs because all this stuff moves around until you get a nice frog face and then that's it. Imagine what happens with evolution then. Most mutations are deleterious because it's much easier to screw things up than to do good things. Also, most mutations have more than one effect. You have your tadpole, you make a mutation; the mutation does two things. It moves the mouth off to the side, but it also has some other beneficial effect somewhere else. If the material was a direct mapping from genotype to phenotype, you would never see the consequences of this other mutation because the mouth is off to the side, the thing would starve, and that's the end of that. You would have to wait until you get that same mutation without the mouth effect, and that would take a lot longer. Instead, you make the mutation, the mouth fixes itself, and you get to explore the consequences of the other side effects because it makes up for a lot of those things. That aspect turns a lot of deleterious mutations into neutral ones. We have a bunch of computational work on this. If you simulate that process, it becomes very hard for selection to actually see the genome. If you have a beautiful-looking tadpole, you don't know if the genome was amazing or if the structural genome wasn't so good but the developmental process fixed everything along the way. If you look at where evolution is doing most of the work, it ends up doing more work on the competency mechanisms instead of the structural stuff. If you do that, it becomes even harder to see the structural genome. You get onto a positive feedback loop where eventually you get a really unreliable medium, but it doesn't matter because the algorithm is amazing and it fixes whatever happens. If you take that to its logical conclusion, you end up with something like a planarian. In planaria there is a whole spectrum of where these things end in evolution. C. 
elegans is super hardwired, then mammals, then amphibians, then planaria. In planaria the material is incredibly junky because, for reasons we could describe, cells have different numbers of chromosomes; they're mixoploid. But they're the ones that have the most regenerative capacity, cancer suppression, immortality. They don't age. It's not because they have a beautiful genome. It's the exact opposite. It's because the material is so unreliable. All the effort went into the algorithm, to be able to say: we already know that the hardware is going to be iffy, so you're going to have to fix yourself. Other creatures can do that to different extents. I think it comes as a consequence of the fact that the more of that you do, the harder it is to select on the genome. So all the effort goes into the competency part. That's just one of the things that happens. But having that problem-solving competency means that even though the mutations are random, the outcome actually is not like that at all. It looks quite different.</p><p><strong>[27:57] Brian Chung:</strong> Great. That's awesome. Yeah.</p><p><strong>[27:59] Unknown:</strong> So, Mike, if I can jump in — sorry, I was a little late, I was teaching. One of the things this reminds me of is the deep pertinence of understanding this mapping. Some of you may know this old paper by Stadler, Wagner, and Fontana called "The Topology of the Possible," which is about the genotype–phenotype map. Basically what it argues is that what is really important to understand is this process that takes you from genotype to phenotype, and that, because of certain neutralities you can have, that is much more consequential for evolutionary search and so on. These questions about local search around competent states make me think that one of the key challenges we face in this platonic space view more broadly is understanding what you might call, rather than a genotype–phenotype map, a substrate–competency or substrate–platonic-form map. 
I think you have provided us with plenty of evidence in your own work that this is highly non-trivial in the structuring of that map, such that you both have a lot of equifinality. You can have very different substrates pointing to very similar competencies. You can also have some kinds of substrates that are very tolerant to some perturbations in their mapping into competency space, some that are very intolerant. I'm curious if you or anyone else has looked back at some of that old machinery and thought about it in the context of this problem: if we want to engineer in competency space, which is what matters, we really need to understand the topology of that mapping.</p><p><strong>[30:26] Michael Levin:</strong> The people who have—those guys and some others who have spoken about that mapping—generally focus on its complexity. There's redundancy, there's pleiotropy, there's degeneracy. But I have a more radical view. I think it's more than that. I think it's intelligence, literally. I think it's problem solving competencies. Most of those descriptions are still at a lower dynamical systems level. I think a lot of what's going on is isomorphic with paradigms from behavioral science, where this is classic anticipation, habituation, Pavlovian conditioning. I think then you get more—it's like that, but on steroids, because then once you really have some competencies about navigating that space, the material itself is helping out. 
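One of the behavioral paradigms named here, habituation, is simple enough to sketch: a response that wanes under repeated stimulation and partially recovers after rest. The decay and recovery constants below are arbitrary assumptions for illustration, not a model of any particular tissue:

```python
def habituate(stimuli, decay=0.7, recovery=0.2):
    """Toy habituation: the response to a repeated stimulus decays
    multiplicatively; on rest steps it recovers back toward 1.0."""
    response = 1.0
    out = []
    for s in stimuli:
        if s:                  # stimulus present: record response, then habituate
            out.append(response)
            response *= decay
        else:                  # rest step: partial recovery
            response = min(1.0, response + recovery)
    return out

# Four stimuli in a row, two rest steps, then one more stimulus.
responses = habituate([1, 1, 1, 1, 0, 0, 1])
print(responses)  # response wanes with repetition, then partially recovers
```

The point of such toy models is that this signature (decrement plus spontaneous recovery) can be tested for in any substrate where a stimulus and a response can be defined, not only in neural systems.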
Every layer is doing something, and all you have to do is deform the option space for your parts to get interesting things to happen at your level.</p><p><strong>[31:56] Unknown:</strong> Going back to Brian Chung's multi-billion-dimensional space, or that many parameters, therefore that size space, is it mostly empty?</p><p><strong>[32:12] Brian Chung:</strong> I think mathematically, there's a dynamic that might be at play where most perturbations aren't harmful, but they do something meaningful in some sense: they don't destroy every capability as much as they change one specific capability more and don't ruin the whole process. As Mike was implying, there's a lot of structure for the perturbations for some reason in this parameter space, despite the curse of dimensionality and other things saying that there shouldn't be ways of doing this with just Gaussian perturbations. It's one of those things where this is very recent work. This work came out a few months ago. I don't think there's a good understanding of why this is possible right now in models that have reached a certain level of competency and it wasn't possible before.</p><p><strong>[33:00] unknown:</strong> There's an interesting paper that I saw a couple of days ago around how the connections within regions in the brain compress data or compress the useful information. They did the study on 96 participants' fMRI data. They found that most of the connections between regions were redundant; four or five percent of the connections were basically the very important ones, and the rest of it was redundant for the overall computation. So my thought is when we're thinking about these spaces, computationally, are we thinking about some dynamic where exploitation and exploration of that space work in lockstep with the compressibility of that data object, such that that data object makes that observer or that agent able to do more, as in gives them more variation in what they can do next. 
I was also thinking about what Brian said: you get certain competencies that don't work in a baseline model, and you have to train it until that competency becomes useful. Is that because we're getting to this point where you have to have some foundational structure or some foundational organization that's very basic, that gives you some low-level foundational brick understanding before you can build a wall on top of it? Once you have that layer, you can start. One of the things in Blaze's paper is the phase transition that happens when a group of things comes together. In that example, is there enough training to then be able to take an abstraction a layer up in that map, from the base layer to the satellite view, and now in the satellite view I can do slightly more? The thing I'm working on, Observer Theory, is how to model a computational possibility space in the set of all possible computations. When you look at some of the work done on evolutionary algorithms with one-dimensional Turing machines, and on bulk orchestration when they take properties from the whole and not the part, there is a dynamic where you get these step-ups in the very basic forms of those computational systems, where new things don't arrive in a linear fashion. They have some dynamic equilibrium on a linear line and then they jump up. They find some novel rule that was already there that they could select from. And that gives you free lunches straight away, because you've gone up to a different level of competence. From then on, whenever you change a rule, you're never going back down that exponential curve if you're surviving and continuing. 
And that's one of the ideas that I thought speaks to not only how we construct differences in categories as observers like us, but also how we balance those evolutionary strategies within this ingression model, because if we have some top-down pull, whether it's an attractor or something else, the balance is between exploration — pointing at an attractor and going for it — or exploitation — exploiting something within that attractor well. These are two different things: one is discovery of something we're already in, which costs very little because we're already there, and the other is quite high cost because we have to make many guesses about how to get to the next jump, the next exponential jumping capability.</p><p><strong>[36:58] Brian Chung:</strong> Reminds me of this notion of functional information, which I've just started reading about, which is the idea of reposing the notion of information or complexity as the idea that once things become composable, the number of combinations that can possibly exist explodes, and that's much more functional. You can imagine if you create a binary system, suddenly you can create permutations that are larger than the number of possible atoms in the universe. This composability and the dramatic growth of permutation space creates a lot more functionality to that space as well.</p><p><strong>[37:33] Unknown:</strong> This speaks to information. There's Shannon information, Fisher information, Kolmogorov information, etc. But the thing that is missing from Kolmogorov information, which I think is really important, is this notion of compositionality or scale. The experiments that I showed give one a sense of how noise or random bits can get turned into algorithmic information. In a way, I think of life as an engine for turning random encounters into algorithmic information that gets baked into how to make stuff, how to do stuff in the future. 
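The combinatorial point above, that binary composition quickly outgrows the number of atoms in the observable universe (a common order-of-magnitude estimate is 10^80), checks out numerically:

```python
# How many binary components before the number of distinct combinations
# exceeds a common order-of-magnitude estimate of atoms in the observable
# universe (~10**80)?
ATOMS_IN_UNIVERSE = 10**80

n = 1
while 2**n <= ATOMS_IN_UNIVERSE:
    n += 1
print(n)  # 266: fewer than 300 binary parts already exceed it
```

Since 2^n distinct configurations come from only n parts, composability turns linear growth in components into exponential growth in functional possibilities, which is the sense in which the permutation space "explodes."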
But also the fact that it composes hierarchically — you get a composition, and then those things compose, and those things compose, and so on — gives all of that a multi-scale and compositional quality that isn't captured by the normal Kolmogorov sense of things. So I feel like there's some pretty basic theory work to do there to understand scale in information as well, which would give us a much better handle on the information-theoretic properties of life.</p><p><strong>[38:55] unknown:</strong> There's the epiplexity paper, the one about structured information, an AI-inspired paper. Do you think that measure can be adapted for this function? That was one of the most interesting ones, because you have a problem with Kolmogorov information: it's defined for infinite systems, so it's not hugely useful in the way people would want.</p><p><strong>[39:22] Unknown:</strong> I think epiplexity is one route. There are also things one can do with conditional Kolmogorov complexity, or conditional Kolmogorov information, that Eric Elmoznino has done some playing with. There are definitely some promising directions. I'm not sure any of them has a complete theory as yet.</p><p><strong>[39:53] Michael Levin:</strong> I like the inclusion of exploration here, because one thing that has been completely missing from our efforts to recognize diverse intelligence is that, for practical reasons, they've been entirely focused on goal-directed competencies, which is only one half of the equation, because then there's exploration in play. So part of cognition is non-goal-directed, just messing around to see what happens. We know what creative play looks like in mammals and birds. We're terrible at detecting it in unconventional embodiments. And so I've spent a lot of time recently thinking about what that would look like in cells and tissues and molecular networks. 
If we step away from the idea that it has to be this way because it serves some very practical purpose, or it's going to this goal, or the whole thing evolved because it did something important, what does creative play look like in some of these systems? And how would we know? How do you know if an unconventional system is playing?</p><p><strong>[41:13] Unknown:</strong> Can I chip in? I've been listening to the discussion and, on the one hand, there's this language around competencies, which is very agent-focused in terms of skills or knowledge; on the other hand, we've got the notion of interfaces and functional information. It seems like this idea of where you put your focus: on the agent, the environment, or the interaction. I was thinking about affordances, those Gibsonian possibilities for action. That therefore puts your attention very much on the relational properties. So, if you look at that and say something's highly competent, maybe it's found its affordances in a given context, in its environment. In that relation, you could look at it as an observer and say I draw a distinction around that entity and I'll say that is competent because I observe its capacity to adopt some affordances which I recognize, and I say it's good at doing these things. Now, if I put it in another context, change its environment, and if you've been doing that in your experiments, you suddenly find this entity, which may not have had competencies in one environment, suddenly picks up these things in another. There's also talk about structured spaces: the idea that evolution or competency might increase in a more structured environment which has richer affordances, i.e., when you interface with an affordance, you get more out of it because it's already semi-structured and already capable. Relating that to play: maybe there are different ways of acting. 
If you're in a given environment — say, an evolutionary view where we look at something in a relatively consistent environment — the affordances in the organism-environment relation may get locked in a bit. Maybe you go from a play mode into a more restricted mode: it's working well; you start to reorganize your internal processes and are not in exploratory or play mode. You're happy here; it's all working well. If you place or perturb it into another environment, you are now stripped of the affordances which were working well for you. You could perish, drop into some low mode, or go into an exploratory, playful mode where you explore the affordances in this new relational context to try and pick up those things. And I think the richer that environment is with latent — I'm going to make up the term — active affordances, the better: affordances which may already be there (someone mentioned an attractor), something you can attach to which gives you a lot for free. So there could be a hierarchy of affordances. Depending on your view, do you look at competencies, which is agent-based? Do you look at functional information (that's the term if you're thinking about the reception, the capacity to receive an affordance)? Or do you look at affordances themselves? I think the discussion may benefit from drawing distinctions about what perspective you're taking: the agent, the environment, or their relationship. For me, when we're observing the discussion, that's a useful thing to bear in mind: the language changes and what you look for changes. When you're talking about interfaces, it sounds relational. Competency is more agent-based, agent-focused. 
That was just an observation I wanted to throw in.</p><p><strong>[45:53] unknown:</strong> One of the things I wanted to come back to is play. The way observers are modeled in computational possibility spaces is second-order cybernetic: they have sensory input from the environment and they have an internal model which they update. I'm wondering if play is exploration that's more internal-model-focused, where you're playing within a bounded area of the space you're in, and that is therefore, with low risk, letting you update your internal model to then expand the accessible space you've got in the future, versus actual exploration, where you take all of those learnings from play and training to get those skills and affordances. Real exploration is when you're moving into the real world. So you're taking that updated internal model, that thing you've looped around through play, to then attack some problem in the real space that was riskier. You want to embed more, or find more equivalences, from when you were doing that play loop. Is that what functional play is doing? Is it a bounded version of the whole environment, shrunken down into your internal model, trying to take advantage of something you've come across before that's sparsely related within your internal model, a few relations practiced by doing that loop, just like practicing and learning how to cook or learning how to play an instrument? By running that loop internally, that is effectively functional play that's letting you do something in a future state where your internal model predicts that these equivalences will be valuable later for growing the size of your accessible space.</p><p><strong>[47:57] Michael Levin:</strong> Carl, did you want to say something? You had your hand up.</p><p><strong>[48:01] Unknown:</strong> Yes, no, I just wanted to endorse the last few points. 
In the context of a non-biological, more physics-style approach to self-organization, you can derive the most likely path into the future, and that looks like how this system chooses to behave. Interestingly, the imperatives for the most likely paths do have this epistemic, playful aspect as well as an instrumental aspect. In my world, that's called epistemic affordance. And to my mind, that would be exactly the sort of curiosity, this play, that is quantified by the information gain that you can write down as a relative entropy or a KL divergence. The interesting thing, though, is that this is only the case, you can only derive the epistemic affordance as the natural behaviour of certain kinds of things, when you've got exactly what you were talking about before, which is this sparse, deep hierarchical structure. So when the internal model deep inside can no longer see the interface, the actions that it's prosecuting or exchanging at its interface with the world, then you can interpret the inside as exactly a good regulator or a generative model. But crucially, one which looks as if it is planning into the future to maximise the information gain. But this can only happen when you've got this deep, sparse structure: you've got nested Markov blankets. If you're just a single-cell organism with just one blanket, you have direct access to your action upon the world. But if you've got a deep, complex structure, structurally speaking very much like a deep learning model, then you get this as an emergent property. So there's a nice connection between the notion of play and information seeking, and one could even argue reasoning: I'm going to do this, because then I will know, or it'll look like that. And this deep, sparse structure that you were talking about before shows up in the brain, for example, the brain being empty of connections. I have friends who work on connectomics using anatomy. 
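The relative-entropy quantity mentioned here is the Kullback-Leibler divergence, and "information gain" is the KL divergence from prior to posterior beliefs. A minimal illustration (the probability numbers are arbitrary):

```python
import math

def kl_divergence(p, q):
    """Relative entropy D_KL(p || q) = sum_i p_i * log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Information gain of an observation: KL divergence from prior to posterior.
prior = [0.5, 0.5]        # two hypotheses, equally likely before looking
posterior = [0.9, 0.1]    # beliefs after an informative observation

print(kl_divergence(posterior, prior))  # positive: the observation was informative
print(kl_divergence(prior, prior))      # zero: no belief change, no gain
```

An agent that plans to maximize this quantity prefers actions whose outcomes will most revise its beliefs, which is one way to formalize the curiosity-like, playful imperative being described.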
If you think about the brain as a collection of connections, they say it's almost empty. Of all the connections you could have, there are hardly any there. And I think that speaks again to this minimum description length compression, minimizing Kolmogorov complexity, but at different scales where the scales are defined by the hierarchical structure. I couldn't resist joining, everything you've said makes entire sense. I have to go and do a podcast now. I'm not bored. It's brilliant. I will have to slide away in a few minutes.</p><p><strong>[51:11] Michael Levin:</strong> Thanks, Carl. Does that mean we could take what you just said and apply it on an evolutionary scale and say that the many Markov blankets between the genotype and the phenotype mean that overall the whole process might exhibit competencies that aren't as blind and dumb as it's supposed to be?</p><p><strong>[51:39] Unknown:</strong> That also occurred to me because you've got the separation of scales. I think the whole point about deep models in deep RL or deep models in multicellular organisms speaks to the fact that as you get deeper into the system, time and scale slow down or get bigger. The perfect example of that is the scale-free or scale-invariant aspects of evolution in and of itself.</p><p><strong>[52:12] Unknown:</strong> Mike, this reminds me of something that I know Sam Kriegman has been thinking about, which is the idea that although we often tend to boil down the evolutionarily salient part of some entity to a fixed or static point in time. In fact, what is seen by selection is a trajectory of the organism through its possible configuration space. Fitness is not computed at one point in organismal configuration space. It's an integral along this family of possible trajectories through organismal configuration space. 
Does that make sense?</p><p><strong>[53:16] Michael Levin:</strong> It makes sense to me, but I think that's a pretty controversial claim in the standard neo-Darwinian synthesis. The idea is supposed to be that both the foresight and the hindsight are supposed to be pretty much zero. We've toyed with models where there's metadata on each allele to say, what was it before and how did that work out? You can make those things, and we're playing with some of those models. But at least the standard view, and obviously there are people who disagree with this, is supposed to be that all you have is what you're doing right now. The only thing that fitness can see is whatever you're doing right now. So I do think that's restrictive. But I think that's highly controversial.</p><p><strong>[54:06] unknown:</strong> Do you think, when you think about evolution and fitness, given the type of discussion around Platonic spaces, that survival and continuing genes are enough to bring evolution into this discussion, or do you think it needs to have some informational component? One thing that came out of the last talk was Tim Jackson's discussion on convergent evolution, and those convergent structures seem to maximize sense data, at least in the very basic sense of how much of a certain space, or a certain type of space, you can access, whether that's through sensor data, or being able to fly, or echolocation if you're a bat or a dolphin. Does this conception of Platonic space need to have some informational component? When I think about observers, not just animals, all the way down, we think about persistence, which is that survival point, but also computational boundaries: how much you can do, what your computational capacity is. One of the things that needs to be considered, if we're thinking about a space of attractors, is what the properties of those attractors are in relation to the properties of the agent—what we are measuring against. 
That is really critical here, and there is a lot of work being done, because of LLMs and AI, on different statistical measures for useful information or how things can go in phase space. Can those ideas be adapted for biological evolution? I'm not a biologist, so I don't know the answer, but I thought it would be interesting to give to the group.</p><p><strong>[56:06] Michael Levin:</strong> I personally am very suspicious of the idea that survival and replication is the main driver. I know that's how it's supposed to be. I'm not sure that's true at all. One thing is that in order to have that in the first place, you have to already have your replicator and already have the thing that has differential fitness — the thing has to persist and defend itself. There are some very interesting dynamics, which we'll preprint in about a week, of what happens before you get replicators. Blaise has some stuff on this as well, but there are things happening before you can point to something and say, that's a thing that will have differential success. So whatever's happening before isn't driven by that. There's some underlying dynamic, which for us seems to be a positive feedback loop between learning and causal emergence. The thing ratchets itself up by learning and causal emergence. Before that is weird. I don't even know that we have a proper vocabulary for it yet, because it's happening in a pool of this pregnant medium that you can't really draw circles around. You can't say: that's the thing reproducing, because the materials are all over the place. They come and go and there's not a single thing, but you can already see that these loops are pulling themselves up by their bootstraps. Eventually the causal emergence hits and suddenly you get a replicator. Now you're off to the more conventional optimization part.</p><p><strong>[57:51] Unknown:</strong> There's a really interesting thing. First of all, I'd love to see those results, Mike; that sounds fascinating.
They certainly jibe with a lot of things that I've been seeing too. And they point out something that I think is actually really important in all of this, which is that in normal biology, the coarse graining is always given. There's just this presumption that you know what the thing is that is replicating. And obviously, the Dawkins "Selfish Gene" thing was very provocative because it proposed a different coarse graining that people weren't used to: that it was the gene that was the thing. In addition to emphasizing competition, etc., it was just an alternative coarse graining. But obviously, a coarse graining is just a model. There's nothing that says one coarse graining is correct and another is not. Any given coarse graining allows you to write down equations, to look at dynamics, to ask about reproduction. And it's not trivial, because a coarse graining requires that you be able to say when something is or isn't an entity, when something is or isn't another instance of the same class of entity. Is something a transformation? Is it reproduction? Is it another of the same? These questions come up all the time. For organisms with complex life cycles, is this next stage the same species, or is it actually one species giving rise to another, giving rise to another? Is a hive an instance of a thing, or only the bee? And the answer, of course, is both and all of the above. In the period before you get cell membranes, especially, you really don't have an obvious coarse graining at all. You just have all these loops and interactions that seem to be autocatalyzing each other. And then there's some point at which, I think our intuition is, the thing has a model of itself. For me, autopoiesis is something that we recognize when we assert that the thing that is doing the autopoiesis actually has a self-model, and therefore is following that self-model in order to construct more of that self. But again, it takes a model to recognize a model.
So all of these things are completely relational. You can't make any truth statements about them without presupposing a coarse graining. This relational view of what life is strikes me as the equivalent of relational quantum mechanics. It's something that hasn't really been well theorized and would make a lot of the paradoxes go away by just pointing out that you can't make any of these statements without positing a perspective and, of course, a coarse graining going along with that.</p><p><strong>[1:01:02] Michael Levin:</strong> I think that's really critical. Josh Bongard and I have been playing with this in terms of polycomputing and this notion of different observers who see the same physical events as different computations. Some recent work, this isn't out yet either, tries to simulate this using a model of gene regulatory networks. The idea is to ask whether evolution, given the choice, would rather scale up the competencies of the material, the individual networks, or leave the material in place and instead work on adding different observers who see the exact same thing going on but are able to map a different coarse graining and a different set of interpretations onto it. The answer is that evolution prefers to be able to do both, but if it has to choose one, it'll scale the observers rather than mess with the material. Part of it is because if you start messing with the material, you screw up dependencies. If something else was dependent on it, now things downstream are going to go wrong. Whereas if you leave the material in place and simply add perspectives, then you can overload meaning onto the same thing and not mess up anybody else, and keep adding perspectives. Quantitatively, that looks like what it prefers to do.</p><p><strong>[1:02:28] Unknown:</strong> This is a question for that: would something like that prefer more observers?
Because when you pull a bunch of those observers together with the same properties, they can form a component, a small network component, where they get parallelization and computational competencies, those free lunches from making more, versus a fixed substrate that evolution already knows is persistent. And then you go: well, that's persistent. If I can pick between changing that to make it more persistent, less bounded, and just making more of the same, this is the competency from the group of things. This all, again, to me, screams: I'm optimizing for my computational power primarily, and then as a second order I look at whether I can maintain persistence while optimizing that. It's a computational view of what's going on. But I wondered what your view of that is, because that's something that you see in some of the basic experiments around cellular automata, even though they are far away from that. The reason I'm interested is because the dynamic is similar. I wondered if that explanation was interesting or was thought about in the context of that result.</p><p><strong>[1:03:53] Michael Levin:</strong> I think that's very interesting. We haven't gotten to that yet. Right now, none of the observers talk to each other. After we characterize all of that, we'll do exactly what you said and let them form a network too. Katrina, did you want to say something?</p><p><strong>[1:04:10] Brian Chung:</strong> I just wanted to follow on that comment about the importance of the relational nature of what we're talking about and how, Leo, you had brought up affordances in the environment. I think equally important there are the affordances of other agents in the environment. Back to that earlier example of play: Jaak Panksepp, the neuroscientist, has this widely shared model of play, which is more that it's a relational activity between organisms. Something like learning to cook actually isn't play under some definitions.
Play is emotion regulation, social engagement that we do in order to create alignment between us and other agents in the world. It increases synaptic plasticity and gets us into a mentally labile state. The reason I think that's important to bring up is because when we've been talking about information and how information gets shared and where the free lunches come from, I think of that as being critical in humans: our free lunches come via human communication. I'm getting all kinds of information right now for very low cost, or at a highly discounted cost, because I'm putting my cognitive architecture in a state where I'm receptive to that. I think that could be what's accelerating our human evolution in intelligence, taking us further and further away from our genome and more into this information-sharing social space.</p><p><strong>[1:05:32] Unknown:</strong> Can I just respond? This might be old territory, but there's the notion of play being analogous to raising the temperature of a system to explore more of its performance space. In our context, in a stable environment, you might get locked into particular kinds of affordances. But then when the environment changes, or we go into a room with new people, we have to explore how to build bridges, how to couple with that environment including other agents, and play might be the notion of raising the temperature a little bit to explore the space of potential couplings or affordances and where they might lead. I think analogies have been made in both directions: in statistics, people have borrowed evolutionary algorithms to search complex, multi-dimensional, rugged energy landscapes. But equally, transporting some of the concepts of statistical physics back into biology, which has been done lots of times, is also valuable in thinking about some of the processes that we're looking at as search mechanisms in finding the optimal engagement with your environment. Optimal is a hard word.
It's something less than that, I think, but something that provides a way of hooking up with our environment that might, by exploration and finding those key affordances, create a ramp. The more competent, or the more affordances that the particular environment offers, the higher, I guess, we can ascend the ramp of possibilities.</p><p><strong>[1:07:58] Brian Chung:</strong> Yeah, Jacob.</p><p><strong>[1:07:59] Unknown:</strong> I really love this idea. Alison Gopnik certainly has made some connections between this notion of exploration and a kind of annealing point of view. I nonetheless don't think we pay enough attention to the role of behavioral plasticity in learning. Just a simple example really drove this point home to me. My wife, Erica Cartmill, will sometimes, when she's trying to explain very basic conditioning of an animal to audiences that are not a bunch of physicists, do an experiment where she tries doing simple reinforcement on a human subject to try to shape some arbitrary behavior. There is a very strong relationship between the base behavioral plasticity that this person will exhibit and how easily they can be shaped into the appropriate kind of target behaviors. Someone who just sits there like a wet fish not doing anything provides very few probes into this possible affordance space, which is in this case being shaped by a rewarding human interactant, but which you could think of more broadly as any kind of relational source of potential reward. If you're not exhibiting that kind of plasticity, you're not going to discover these sorts of affordances in the environment. I think this raises a real methodological challenge, which comes back to something you raised very early on, Mike, about the difficulty of what we can and can't recognize.
I think we're very limited empirically in our ability to probe the capacities of intelligent systems, because we have to be able to read the design of the task in the same way that the system in question is reading it. This is one way to say that a lot of the shortcut learning, for example, that we see is exploiting an affordance that we were unaware of in the design of the particular task. How do you think about that? What I, in my language, often talk about as the as-relation: this interpretive layer involved in all of this behavior, where the environment is read as having this set of options, and our very limited capacity to read the option spaces as they are interpreted by another system; it's hard enough to do it with other humans, let alone something that's radically different.</p><p><strong>[1:11:04] Michael Levin:</strong> Sorry, Katrina, had you had your hand up before? Did I miss that? No. Blaise.</p><p><strong>[1:11:12] Unknown:</strong> One of the reasons that I have some problems with the play concept is because I think it actually carries with it the assumption that what we normally do is something other than that, or that work is the default, or that we are optimizing for something, or that there is some other thing. The reality is that any living system stays alive by virtue of staying alive. It doesn't mean that it has to be optimizing something. There is a dynamical loop that is stable enough that it continues to exist, and the range of things that can happen in the context of such a dynamical loop is very, very large. This sort of Darwinian-Spencerian idea, that if you're not working hard at it you're going to die because something else is going to eat your lunch, we know is not really the case for a lot of organisms in a lot of situations. There are many things that create lunches for each other. There are networks that mutually reinforce each other in various ways. And that just leaves a lot of space for other stuff to happen.
So it's not that I think seeking information or curiosity isn't something that things with intelligence do; certainly they do it. But any of these definitions about play, that it's only about stuff that satisfies your curiosity, only this, only that: it's a little bit like trying to define art. There's this form of play that is just bumping your head against something — is it play, is it not play, is it just a tic, what is it; it's very particular, very value-laden, very anthropomorphic. I think that when we look at a worm doing something fun and we say it's play, we may be doing something that is usefully empathic. It may be that there is pleasure being experienced, that there's something about that experience subjectively that is like what we associate with play. It also may not be, but whether or not that is valid to me doesn't speak to whether it is serious or not. Stuff does all kinds of stuff. So I guess that would be my take on that question, for what it's worth. By the way, I need to switch to phone mode. So I'm still here, but may not be on the same video. David.</p><p><strong>[1:13:53] Unknown:</strong> So let me chime in here, as someone who plays music and teaches children how to play. From my perspective as a musician, play is fundamental to music, to learning how to play music well, learning how to compose music. The way that I experience play in music is not necessarily information seeking, but almost pleasure seeking, maybe just boredom, taking up time, something to do just to do, that kind of thing. I would say that play is not necessarily exploration or information seeking at all. It can have multiple purposes. Maybe it's almost like a will to power: just to do something. I am fascinated by this question of how to distinguish between play and other behaviors. Early in my life I collected a lot of ants and spent a lot of time observing them.
It seems like some ant behavior is exploration, like when they're foraging and it's almost randomly driven, just a random walk through the environment. But some of it could be characterized more as play. I think it may be very specific to the organism how you make this distinction. And it probably has to be within an understanding of what the goals of the organism are — what it's trying to do. I think a functional approach will get you somewhere in trying to understand what play is, how to characterize it, and how it differs from other things. Thanks.</p><p><strong>[1:16:26] Michael Levin:</strong> Just watching cells build an embryo, especially in time lapse, that's one experience. And then a different experience is watching a bunch of cells explanted in the dish, or in some other context, and you watch them running around with not much, at least apparent to us, happening. I always think about this sphere of television broadcasts that has been spreading out from the earth: aliens somewhere, and 80 light years of Three Stooges and football games and things like this spreading out. I just imagine the aliens getting some of that and trying to figure out: what is this? Are they doing something? Are they just messing around? Trying to understand. And it's basically that. We're in that position, watching these cells, trying to figure out: is this a poor attempt to build something, or is it not even that at all and instead a fantastic attempt at having an enjoyable time exploring the dish, or what the heck is it? Jacob.
I do think that this describes a capacity that is clearly very important to humans, namely our ability to choose essentially an arbitrary thing and pursue it as an end in and of itself. I think there's a very beautiful theory of culture in the work of the early 20th century sociologist-philosopher Georg Simmel in his book. I can't remember what the German name is. I will look it up and put it in the chat. But in any case, he has this theory that culture is built by identifying basic, core forms of life and pursuing them as ends in and of themselves. The mathematician David Mumford has an interesting account, very similar, of the origins of different parts of mathematics, along the same lines: geometry comes from pursuing the idea of space and fixing on that and exploring it in all of its possible variations. Analysis comes from taking notions of motion and putting them through their paces. I think this is something we can clearly do as humans. The challenge is recognizing that capacity in radically different embodiments. As Blaise said with the worm, and as you've said, Mike, with the cells, we know from our own introspective experience that there are cases where we are choosing some arbitrary end and pursuing it simply as an exercise in pursuing that end. We know what that looks like in other people and they can tell us that's what they're doing. Can we recognize that activity, which seems to me to be absolutely fundamental to our basic cultural and scientific capacities, in other embodiments?</p><p><strong>[1:20:35] Unknown:</strong> Let me throw something on this. Play takes energy. That is something that is going to be selected against: too much expenditure of wasted energy, right?</p><p><strong>[1:21:06] Unknown:</strong> On the other hand, play is fun. I was wrestling with this proposal. There's a comment in the chat now from Leo that there's a correspondence to a temperature scale. Shouldn't play be low temperature because it's fun and easy? 
Maybe it's really an entropy thing rather than an energy thing? That there's low constraint.</p><p><strong>[1:21:41] Unknown:</strong> I'd see exploratory play as more like the high-temperature regime, but we need to generalize temperature, don't we? It could be that, in a population, a small number might be exploring or playing more on the fringe of the potential spaces of interaction, and that the relation between the size of that subpopulation and the bulk of the population would be a Boltzmann-type thing, which you can only access at higher effective temperature. It's less populated, so it's the fringe, and they're the more playful. And as you go lower in temperature, you're getting to the more uniform, regular, habituated modes of interaction. That was the thing I was grasping at in that analogy.</p><p><strong>[1:22:55] Unknown:</strong> I get the analogy. You're saying I should readjust my thinking: that spending lots of energy playing is a high-temperature thing, and that focusing on a single bit of work you have to get done by five o'clock today, which is not fun, is actually low temperature; I should look at it that way rather than in terms of high or low energy. That's what I meant by the entropy business.</p><p><strong>[1:23:23] Unknown:</strong> But I think that being too thermodynamic about this just presumes that we're in too constrained a situation. Let's take a chemotactic bacterium close to its point of starvation. Then you may be close to a limit where, if it doesn't tumble at just the right times, it significantly increases the likelihood that it will not exist in the future. Something like that is going to have to behave like an optimizer, which is to say it doesn't have a lot of space to have fun. If its space of behaviors is tumble or don't tumble, and making the wrong decision means there's no more bacterium, then there's not a lot of agency or fun in a system like that. It'll only continue if it does exactly the right thing.
But for the huge majority of organisms, including unicellular ones, there's such a range of behaviors. There are so many behaviors that are consistent with continuing to exist. If you're doing your chores, skinning the animal that you just killed, and you're bumping your **** a little bit while you do it and dancing around a little bit, the idea that the energetic difference between bumping your **** and not bumping your **** is going to make a difference in your survival is just ridiculous. Of course we're not that constrained, and I think that's true of the vast majority of life. I think that this whole teleological question of fun kind of vanishes. You just see there's a lot of turbulence in the system. There's a lot of stuff that happens. It's sometimes emotionally loaded. It's informationally interesting. It can develop cultural dimensions. But there's nothing unusual about this. The idea that everything is so constrained, I think, is just a wrong idea about how life works.</p><p><strong>[1:25:24] Michael Levin:</strong> It sounds like what you've just described is something like the Maslow hierarchy.</p><p><strong>[1:25:33] Unknown:</strong> Almost everything is above the baseline of the Maslow hierarchy when you look at it.</p><p><strong>[1:25:39] Unknown:</strong> Yeah. So I think what you bring up is the importance of making some analytic distinctions, using the case of human play as the paradigm example, between what is behaviorally visible, which might be the unpredictability of the behavior given some circumstance, and which I think is closest to the temperature; the enabling conditions, which again, in the case of human play, are typically situations where there is a sense of protection, a sense of lower risk, that allow more exploratory behavior; and the motivational status. In other words, what is the agent who is playing trying to do in that activity?
I think you're absolutely right, Blaise, with the point about the degree of constraint and enabling condition. There's a beautiful example with the domestication of these birds, the white-rumped munia, whose wild-type song is very, very characteristic, but under domestication, as an epiphenomenon of the reduced selection pressure, they started to develop much, much more variable song behavior. Now, I don't know if anyone has heard them, but they have these amazing songs. There's lots of variation, there's lots of variability. This is presented by folks who work on the evolution of language and domestication as evidence of reduced selection pressures, reduced constraints, opening up at least some degrees of freedom for greater variation and, in this sense, play across evolutionary time. I think it is very important to keep in mind how rare it is that we actually see organisms against the bare metal of survival.</p><p><strong>[1:27:58] Unknown:</strong> What do you think, Jacob: did they find anything in the composition of the songs? Were they more complex or richer? So they explored that space once they were, in a way, beyond the survival mechanic. It really goes to having more space, more computational capacity: I don't have to find food anymore, therefore I can now put more energy into making these songs richer, more complex, more coherent.</p><p><strong>[1:28:31] Unknown:</strong> Absolutely. Okay.</p><p><strong>[1:28:36] Unknown:</strong> You know what fits nicely</p><p><strong>[1:28:37] Brian Chung:</strong> With that idea is the prevalence of play in human children versus adults, because at least if you're a human child in a relatively safe environment, you've got that domestication situation and you have the ability to play and explore a lot.
And then as you get older, you're, oh, ****, I better get serious about my life.</p><p><strong>[1:28:54] Unknown:</strong> And the selection pressures are more apparent to you.</p><p><strong>[1:28:59] Unknown:</strong> That is also consistent with what is supposed to be happening in the academy. The term scholar comes from the Greek schole, which means leisure. The idea is supposed to be that you're protected from some of these forces so that you have time to play intellectually. And I do think we — a lot of what we do is creating these spaces for play, like this one.</p><p><strong>[1:29:32] Unknown:</strong> When we become adults we play less, partly because of constraints, financial constraints, other kinds of constraints, but could it also be that we get bored with life? Lose the will to live, maybe. Nietzsche's characterization of life as "will to power." So that's really what life is ultimately about: exerting some kind of power over your environment, and play is just one of those ways of exerting your power. That's what life is fundamentally about.</p><p><strong>[1:30:35] Unknown:</strong> Very postmodern view of what life is about.</p><p><strong>[1:30:42] Unknown:</strong> What?</p><p><strong>[1:30:43] Unknown:</strong> Very postmodern view. It's like everything's a game, taking everything as a power game. There's the biological limit. My reaction, from personal experience, is that it's quite a shrunken-down version. It feels like there's probably a bit more to it than that, and that we reduce it to that because it is easier to investigate in finite time, with metrics and with tests where you can say that. But when you extend time out, that might just be a function of the fact that the tool we have today sees it that way, and the tool in the future may see it differently. I think that's an error in that line of postmodernist thinking; it's almost tuned along timescales.
But that's a personal view, just to give the counterpoint: I don't think you can just do that.</p><p><strong>[1:31:45] Unknown:</strong> I think what Blaise said in the chat is that life is just doing stuff. Power in the sense of that kind of will to power, the will to live: it's very fundamental. And maybe it even precedes reproduction. Maybe the fact that life forms reproduce is the manifestation of something deeper. Why even bother to reproduce? Why even bother to go on living?</p><p><strong>[1:32:36] Unknown:</strong> I think a different perspective is that we lose the open-endedness of play. When we're children we don't know the limitations; we are exploring the limitations. We don't know the low-level details that constrain us. Today we have great ideas we've thought about, but we don't even dig deep on those ideas to see why they cannot work; we decide that they can't work before we find a way to make them. So maybe play is about not knowing too much.</p><p><strong>[1:33:20] Unknown:</strong> One of the really interesting things about children is that they both play a lot and love repeating things. They exhibit quite different properties vis-a-vis adults, with respect to both variability of behavior and getting bored. They love having the same thing happen over and over again. And there's a construal of that that says that in both cases, what it is about—in a Nietzschean register—is just affirmation. They affirm whatever they're doing. I'm singing this song for the 16th time. I'm very excited about that. I'm going to now go do some random other behavior. I'm also very excited about that.
From an existential standpoint, I do agree with you, David, about getting tired of life, but I think the way to look at children, at least, is as models of this kind of radical capacity for not getting bored with things: both playing and radically affirming what's happening.</p><p><strong>[1:34:43] Michael Levin:</strong> There's an interesting piece of data that I think is deep and hasn't been dealt with, which speaks to something David was bringing up. This guy did these experiments where he would take a rat and throw it in a bucket of water, and the rat can tread water for a couple of minutes and then it drowns. And that's what happens. Then he would throw the rat in, wait a minute, 45 seconds, take the rat out, dry him off, put him back in. You do that a couple of times, and basically the rat learns that he's going to be rescued, and then you find out that a rat can actually tread water for about an hour. So this is very interesting. The physiological reserves are sufficient to keep going for an hour. Why do most rats drown after a minute and a half or two minutes? There's some version of giving up, and I don't know that that's available to insects, but it seems to be available to at least some mammals, where in the hopelessness of it you would think that evolution would greatly select for a terminator-like behavior where, if you've got the physiological reserves, you just go to the last moment — one time out of 1,000 something will happen, you'll get rescued; that certainly should be the favorable phenotype. And yet that's not what happens, and at least in the mammalian case, and there are other examples of this in birds, they have the ability to actually give up and say: forget it, I could keep going, but I'm not going to. I think that's interesting, and how that interplays with evolution is interesting. You wouldn't predict it from standard Darwinian principles.
I don't think you'd predict this.</p><p><strong>[1:36:28] Unknown:</strong> One of the tools in observer theory is this idea of a limit on your possibility space from the observer's perspective, i.e. what you think can possibly happen versus what you're predicting right now, what normally happens. Of these spaces, the second is smaller than the edge of your state space. When you pick the rat up after a minute or five, you're creating an equivalence where that field, that state space, gets bigger and approaches the boundary of what it thinks is possible. And because you have that equivalence, after reinforcing it enough times, it becomes part of their possibility space. They can then go, in their internal model, when they're creating that loop: oh, this happened before and this can happen again; if I hold out a bit longer, then I can keep going. Once that possibility has been actualized in their internal model (a rat might need more reinforcement, or more direct reinforcement from, say, us), it can do it. It accesses that full possibility space because you've created an equivalence for it by interacting with it, by effectively coupling with it, by giving the rat a proposition: you will physically get lifted out of this tub. That proposition is accepted by the rat, because it doesn't have the choice of whether it gets lifted out or not, and with enough reinforcement that proposition becomes part of its world model, and therefore its state space expands. So it can then do that thing, because you've effectively given it top-down knowledge; its possibility space was bigger than it knew. That dynamic of reinforcement and coupling from different observers, where accepting and rejecting propositions changes the morphisms accessible, the choices accessible between states in the internal model, can apply not just in that example but at all scales, and is an interesting way of investigating that difference.
Where this leads to a practical handle on the idea of platonic space is this: when we do new things, or we introduce a new element to something else that doesn't have that element, we are ingressing into its platonic space, or its state space, or its data space, whichever one you want to use. We're changing it. There's ingression from you: you've changed the things that it can do, therefore it now thinks it can do more. It's updated. And that loop is a way to play around with the idea of ingression in a tight, physical way.</p><p><strong>[1:39:18] Unknown:</strong> I do think there is something to that. When I kept ant colonies, when the queen died, the colony just fell apart: even though they continued living, they weren't foraging, and eventually they would just die out. It seemed to me, looking at it, that they lost the will to live once the queen of the colony was dead. But maybe that just has a complete biochemical explanation that can be found. Certainly behaviorally, that's what it looked like: they lost the will to live. We have a lot of interesting things going on in the discussion in the chat. Someone brought up galaxies earlier. I wonder at a deep metaphysical level: maybe that's what existence is, actually. Why is there something rather than nothing? We've all thought about that question, but I don't think anyone's got a good handle on it. Maybe there's something rather than nothing because the universe wants to do stuff.</p><p><strong>[1:41:15] Michael Levin:</strong> Dave, to your previous point about the ants, as to whether there would be a biochemical explanation: I think there's always a biochemical story to be told of anything, or a physical story to be told. To me, it's like the neural correlates of consciousness. You could tell that story. It's not false exactly, because it does accompany and it does implement the thing you're talking about. But in most interesting cases, that low-level story is not the most insightful story. 
I'm sure there's some biochemical fact about it to be found, but there's probably a more interesting level to it, I would think.</p><p><strong>[1:42:07] Unknown:</strong> The ants get pheromones from the queen, giving them instructions to do different behaviors.</p><p><strong>[1:42:18] Michael Levin:</strong> No doubt, but if you watch two brilliant mathematicians discuss some proof and you come away saying, look here, there was a bunch of air molecules and they moved like this and then that, you're not wrong exactly, but you've missed the whole point. You haven't facilitated the next interesting thing that might happen there. You've just picked poorly as far as the level of description.</p><p><strong>[1:42:44] Unknown:</strong> It would be an interesting experiment to try: a robot queen, say, that you inject into an ant colony, with all the right pheromones and everything the queen secretes. Does it play the exact functional role of a real live queen in the colony?</p><p><strong>[1:43:04] Michael Levin:</strong> Do you know the book "The Soul of the White Ant" from the '20s by Eugène Marais? Have you seen that? Well worth it. If you're into ants, "The Soul of the White Ant" by Eugène Marais, back from '23 or something. It's really amazing. He did all these experiments. There's a colony, and if an ant from one colony goes to another colony, they kill it. But if he goes over there and the queen is dead, they take him in. He becomes part of it; there's all this stuff. He was trying to work out how they know, testing the distance and putting barriers in. Really, really remarkable.</p><p><strong>[1:43:45] Unknown:</strong> In my own experiments, when a queen died, I would try to introduce a new queen into the colony to see if they would take it. Sometimes they would, sometimes they wouldn't. It may vary with the species.</p><p><strong>[1:44:13] Michael Levin:</strong> I think this has been great. 
Does anybody else have any last thoughts?</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Platonic Space discussion 3</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>This 1h44m roundtable on the Platonic Space Hypothesis explores platonic forms in biology, xenobots, symbiosis, competency spaces, play and plasticity, Markov blankets, thermodynamics, and will-to-live dynamics.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/E6_XdPm9fa8" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/4d1e68e2/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour 44 minute discussion among contributors to the Platonic Space Hypothesis (<a href="https://thoughtforms.life/symposium-on-the-platonic-space/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life/symposium-on-the-platonic-space/</a>).</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Platonic forms and interaction</p><p>(07:01) Xenobots and evolutionary cost</p><p>(11:23) Symbiosis, rectifiers, embeddings</p><p>(18:06) Free lunches in biology</p><p>(27:59) Topology of competency space</p><p>(39:53) Exploration, play, affordances</p><p>(48:01) Markov blankets and evolution</p><p>(57:51) Relational life and observers</p><p>(01:04:10) Defining play and plasticity</p><p>(01:20:35) Thermodynamics and domesticated play</p><p>(01:29:32) Will to live dynamics</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a 
href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Michael Levin:</strong> Welcome, everybody. We're open for another discussion of the Platonic space. If anybody has any issues or questions for each other, please, now's the time.</p><p><strong>[00:15] Unknown:</strong> I've got questions or comments. I don't want to monopolize things. Go for it.</p><p><strong>[00:19] Michael Levin:</strong> Go for it.</p><p><strong>[00:20] Unknown:</strong> When I prepared my talk, I had not read what you wrote, Mike, because I wanted to just comment: what's the biology, unbiased, what is it telling me? I've since gone back and read these things. My impression with regard to the Plato concept is that unless there's a proposal for where these forms are stored and what the mechanisms are for pointing and ingressing, how you consult these forms, the whole project is indistinguishable from mathematics, physics, and biology all having patterns and rules. Why there are any rules and how they get consulted, we don't know. 
I was wondering whether the intent of this whole session, the series of lectures, was to see if somebody could come up with: what is the storage site? What are the mechanisms for pointing and ingressing? My impression from at least a number of the talks, certainly mine and Gordana's, is that you don't really need to have forms outside somewhere that are being consulted. You can have them just as rules and constraints within the biological organism itself. Or, as my quantum mechanics professor used to joke: it took you a little while to do the homework, but every electron and proton in the whole universe is doing it like that all the time. Has there been any insight into whether there is consulting, or whether it really just comes back to: there are laws somewhere and we just need to find them out?</p><p><strong>[02:09] Michael Levin:</strong> I would say there's a couple of things. First of all, I didn't think that these other talks would address that question in particular, because fundamentally I think there are many people who don't agree with my framing of it in the first place. So that's step one: to even say whether there is such a thing. I'm arguing for a strong interactionist model, and before you can worry about the interaction, you have to think that there is interaction, and some people don't. However, what I would focus on, and what our research program actually focuses on, is trying to understand what it is that you actually get during this interaction. For example, are they just constraints? Are they enablements? Lots of people say it's not just constraints, it's also enablements. But I think enablements can be taken much more seriously. That is, it's not just that by closing off some stuff over here, I've forced you into this other set of things that you're going to do. In the sense of free lunches, or heavily discounted lunches, you get more than you put in. I'm really interested in this idea of what you get out of such a thing. 
By putting in some amount of effort to make an interface, what you actually get through it is in some quantifiable way more than you put in. In other words, what you get are not simply constraints on things that you can't do, nor being shuttled into other modes, but actually you get policies, maybe information — static patterns, maybe actual compute in the form of virtual machines. You get something that you didn't pay for in an important sense. Because I think the current way of calculating what you paid for only takes into account this side of the interaction. My strong supposition and my hypothesis is that you get way more than you paid for. I think biology in particular — the things we call biology tend to be systems that exploit that; they are very good at exploiting these things, saving effort on things that they did not need to evolve or find or search for. In biology it's very hard to quantify that because it's always complex. There are always mechanisms you don't know, and it's really hard to prove any of that. But in simpler systems and in simple computational systems we may be able to. That's one of the things we're doing: trying to quantify how much you put in and what you get out in these simple systems.</p><p><strong>[04:55] Unknown:</strong> Since you put it that way, I'll mention something that I didn't put in my talk, but it is in the causality paper from a couple of years ago, which is Richard Levins's idea from across town from you. He was at Harvard School of Public Health. The idea, which goes all the way back to Waddington's theoretical biology books in the '70s — there are a couple of nice papers back there — is that essentially the way you get some structure is by crystallizing out of an amorphous mass. In other words, what happens is biology starts off, it's not that you build new things, it's just that you start off doing many things badly. By adding constraints, you exclude many of those things. Now the remaining ones are done well. 
You can see this, for example, in neurobiology, where in lower animals there'll be a brain nucleus that does two or three things, connecting to two or three places. In higher organisms, it splits into two, each one of which takes on one of those tasks. Or in the genome: in E. coli, the whole genome's accessible. In eukaryotes, you suppress everything with histones, and now you selectively de-repress stuff. What you've done then is essentially increase the signal-to-noise ratio of stuff you already have. Enzymes are another classic. All these chemical reactions can go on without the enzyme, but 12 other side reactions also happen. If you have an enzyme, you're essentially preventing some of those by the nature of the active site. Only a couple of them happen, and they happen much faster and much better. This signal-to-noise ratio, which is what you're talking about (the ratio of inputs to output), is probably exactly the thing to be looking at.</p><p><strong>[07:01] Michael Levin:</strong> I think that's certainly one subset of those phenomena. In biology it's much harder to prove anything in this scenario, and hard to quantify these things, but with some of the synthetic models that we and others have made, xenobots and anthrobots, there's an opportunity and a challenge now for biologists to be able to say: when was the computational cost paid to design these things? In other words, we know when the frog and human designs were paid for: in the millions of years of selection for specific features. But when you create something that's never been here before, and it has certain competencies, you want to know where those came from and when we paid for them. I don't think the usual answer is good enough. When I ask people this question, they generally say, well, it has an evolutionary history; it just learned to do that when it was being selected for other stuff. That's OK, except, A, it provides zero explanatory value. 
It just means that whatever other weird thing pops up, you'll just chalk it up to the history. And B, it rips up a large part of what I thought evolutionary theory was supposed to do, which is provide a tight specificity between the history of environments and the properties that you get out the other end. You're supposed to be able to say: this thing looks and acts this way because it has a history of selection going back, and everything else died out. If you're willing to rip that up and just say, well, whatever your history was, you can end up with pretty much anything, then I think we're supposed to do better than that. I think we're supposed to have some kind of theory that says more than just developmental plasticity: a theory able to say why we selected for all of these things, and also why, by the way, in a novel configuration, all of this other new stuff works that's never been evaluated before. Hard to quantify, but at least we can start looking for theory that does better than "it's emergent, it just showed up."</p><p><strong>[09:10] Unknown:</strong> I thought that was the Evo-Devo program. Or maybe it's because I only talk to people like Gunter Wagner, who think that environment is not the whole story. There is a set of rules somewhere. There's some other set of constraints on how you build an organism that functions.</p><p><strong>[09:35] Michael Levin:</strong> The constraints, yes, but I think this is more than constraints. Andreas Wagner gets really close to this. He doesn't quite come out and say it, but he has this book, "Arrival of the Fittest," which I think asks exactly this question: okay, you can sort of select out the bad stuff, that's great; where does the good stuff come from? Specifically, constraints are one thing. But when you get significant competencies out of it, maybe it's more than constraints. Maybe by building certain interfaces you're tapping into something that provides a bigger return on investment. 
I can think of a number of examples of that. I think we want to learn to predict it, facilitate it when we want it, and suppress it when we don't, because there are scenarios for each. As everybody knows, you get complexity like that, you get unpredictability, maybe you get perverse instantiation in a life context, but it's not just that. It's not just complexity and unpredictability. It's competencies that would be recognizable to any behavioral scientist, somewhere along the spectrum from maybe low to higher. That requires explanation, because if you don't have a long history of selection for it and you don't have direct engineering or design for it, we're looking at something additional to that. There are knowledge gaps around where that stuff comes from.</p><p><strong>[11:23] Unknown:</strong> Not that I have the answer; I'm simply asking. Do we really need the environment for that? Meaning, a symbiotic relationship between two species can explain creating noise in one species, which the other species will use in a different way. For species A it's noise, but for species B that's food. By increasing one, you're actually increasing the other. The fitness function is not only on group A; it's actually on group B, which influences group A. So you're creating a symbiotic connection that really complicates the way to describe what is good and what is not good.</p><p><strong>[12:16] Unknown:</strong> The question is, where is the directionality coming from? You're saying you need a rectifier. Here's somebody who's making noise. Here's somebody else who can use that noise and put direction on it, which is exactly a Carnot heat engine. You've got random motion, and you now put it in a piston and you can direct it into a higher level of work, which is a macroscopic thing. It seems to me that what biology needs to have done, then, is invent some little module that is a rectifier and could take noise of whatever kind and make something useful out of it. 
I had wondered whether a couple of theorems that showed up in these talks might be clues as to how you design such a thing. What would it look like? How would nature have designed it? Two things struck me. One, in that Platonic Representation Hypothesis paper by Huh, Cheung, Wang, and Isola, you have this theorem: if you have vector embeddings, you want an embedding in which the similarity between the observation and the constructs you're trying to make is the same as the pointwise mutual information. I don't understand that, but I get the flavor of it. It seems like the sort of computation or constraint or requirement that might tell you what kind of thing you have to build that is guaranteed to make useful stuff. Then selection can go figure out which useful stuff. So it's like Lego blocks: you're always gonna make something because of the way they go together. The other one was this thing that came up, the Markov blanket theorem, which sounds like it goes back to Ross Ashby's claim that if you're gonna have a regulator, it has to have a model of what's being regulated. I was familiar with Ashby, but I see that there are debates about whether he ever actually proved that. Has there subsequently been a proof of that? That would be another kind of thing you could imagine being a requirement for making biological modules that are very likely to make useful stuff, and all you have to do is recombine them or whatever.</p><p><strong>[15:16] Brian Chung:</strong> The paper we had in the appendix discussed this notion that the kernel itself, the relationships between embeddings, maps to point-wise mutual information of two events in probability space. Point-wise mutual information is a ratio of log probabilities, and the idea is that the kernels are becoming a reflection of the mutual information shared between those two embedded objects. This assumes bijectivity and other things that are not necessarily practical, so it's not an ideal proof. 
That was the kind of thing we were getting at: the kernel, meaning the embeddings and their relationships, is converging to something equivalent to point-wise mutual information, given some mathematical assumptions.</p><p><strong>[16:15] Unknown:</strong> That's selecting out a special kind of object.</p><p><strong>[16:20] Brian Chung:</strong> The embedding is not necessarily all possible objects. It's whatever the model chose to compress its representation towards.</p><p><strong>[16:37] Unknown:</strong> Is there an idiot's version, say a biophysicist's version, of the mathematics of that, that one could wade through and maybe understand in detail how it all plays out? In other words, for those of us who don't think about embeddings all day.</p><p><strong>[17:03] Brian Chung:</strong> Unfortunately, I don't know about biophysics. I'm on the other side; I think about embeddings. If there are notions of co-occurrence and the probability of co-occurrence, that's what pointwise mutual information is reflecting. That is what a kernel is. It's saying that these embeddings had to be embedded close together because they co-occur frequently. The notion in language is that words' meanings derive from the company they keep. Objects' meanings derive from the things they co-occur with.</p><p><strong>[17:40] Unknown:</strong> I'll incubate on that some more. Thank you. Nobody else wants to jump in on it? I can keep throwing in comments. You jump in. I'm wondering.</p><p><strong>[18:06] unknown:</strong> Oh.</p><p><strong>[18:07] Unknown:</strong> Yeah, go ahead. My bad.</p><p><strong>[18:10] unknown:</strong> I'm wondering whether we have a better sense of which mathematical objects have free lunches. Attractors might have them. For a different case, the sorting algorithms paper has a transitive global objective, and we can see the algorithm getting to some sub-objective. 
I'm wondering if we have a set of objects that we know are potent.</p><p><strong>[18:40] Michael Levin:</strong> We've been playing with this, taking different ones and trying to see what they offer. There will be some work on this coming soon. The difficulty with all of that is that it's a two-way IQ test. As always, when you're trying to gauge what that is, it's only as good as we know how to notice it. The clustering thing: the only reason we found it is that I thought to look for it, but there's probably 1,000 other things we haven't thought to look for. On the biological side (cells, tissues, gene regulatory networks) we are still very much looking for suites of tools to identify novel competencies that we haven't found yet. And ideally (there's no such thing as unbiased, but ideally) tools as differently biased as possible from what humans have been looking for all these years. The same thing applies here. I would like to deploy the exact same tools on all of this so that we could try to find as many competencies as we can in different spaces. But I think we are limited primarily by our imagination.</p><p><strong>[19:50] unknown:</strong> I suppose my other question is about the free lunch. I think it's really clear to me what a free lunch is at a very low level. An attractor feels like a free lunch: I'm within the basin, now I know where I'm going. But as a human, when I think about a free lunch, I just think of doing something that costs me less to get more. It's not really a free lunch, but it's a place where we're not dealing with a zero-sum game. What I'm wondering is, what's the difference there?</p><p><strong>[20:28] Michael Levin:</strong> I don't mean literally free, because you still have to build the interface, so it's not gonna be free, but some sort of heavily discounted lunch. Here's a dumb example that I've used. Let's say that in some universe, the highest fitness belongs to a particular shape of triangle. 
You crank a bunch of generations and you find the first angle; you crank a bunch more generations, you find the second angle. Now, the third one you don't have to look for, because you get this amazing free gift: once you know two angles, you know the third. In some sense, evolution just saved a third of its time, because if you didn't have that, you would have to go find the third angle. That kind of thing is a constraint, but for biology it's not so much a constraint as an enabling feature. It means you can go faster. There's tons of stuff like that: these mathematical relationships, or facts of computation, where you don't have to do the whole truth table. Once you have your voltage-gated ion channels, you've got your transistors, and the truth table comes naturally after that. These properties you don't have to go look for; they're handed to you. Biology is precisely the set of things that exploits those kinds of things.</p><p><strong>[21:51] Brian Chung:</strong> I want to add to this notion of the free lunch, because there's this interesting phenomenon that we see in AI models: algorithms that don't normally work are working a lot better now, once a model gets to a certain level of competency. Things like evolutionary search and reinforcement learning: if you try to train your model from scratch with them, it'd be hopeless. But if you do it on a model that's already pre-trained, it works remarkably well. There are papers now showing that you can do evolutionary search on models that are 7 billion parameters or more. You would think that would not work at all, because 7 billion parameters is a very high-dimensional space. 
But evolutionary search perturbations can give you performance improvements on downstream tasks, which raises the question: as things become more competent, why do things that didn't work previously now work so much more effectively?</p><p><strong>[22:47] Michael Levin:</strong> Would you mind popping some links into the chat? I haven't seen those from the CS side of things, but I'll tell you, from the biology side, this is something that we've been writing about for a while: evolution, I think, works quite differently on a competent substrate. When you have cells that can actually solve problems on their own, it's a completely different story, because if the mapping between genotype and phenotype is not hardwired, if it's actually an interpretation, an intelligent interpretation process, then some very interesting things happen to evolutionary search: it goes much faster and it finds much more interesting things. So having that middle layer, the translation layer, which is basically morphogenesis, be competent greatly potentiates evolutionary search.</p><p><strong>[23:41] Brian Chung:</strong> I imagine it like LEGO blocks: the chance of the molecules forming a cube is very low, but LEGOs forming a cube is much higher, in the sense that random perturbations create something that is structured.</p><p><strong>[23:53] Michael Levin:</strong> There's another effect here, which is this. One of the things that morphogenesis is very good at is getting to the same final outcome even when things change. If you change up the circumstances, it's really good at getting to the same thing. For example, if you make a tadpole where the mouth is on the back of the head, eventually that mouth will come around to where it needs to be and you get a normal frog. We made these things called Picasso tadpoles where we scramble the facial organs: the mouth is out here, the eyes in back. 
They still make normal frogs, because all this stuff moves around until you get a nice frog face, and then that's it. Imagine what happens with evolution then. Most mutations are deleterious, because it's much easier to screw things up than to do good things. Also, most mutations have more than one effect. You have your tadpole, you make a mutation; the mutation does two things. It moves the mouth off to the side, but it also has some other beneficial effect somewhere else. If the material were a direct mapping from genotype to phenotype, you would never see the consequences of this other mutation: the mouth is off to the side, the thing would starve, and that's the end of that. You would have to wait until you get that same mutation without the mouth effect, and that would take a lot longer. Instead, you make the mutation, the mouth fixes itself, and you get to explore the consequences of the other side effects, because development makes up for a lot of those things. That aspect turns a lot of deleterious mutations into neutral ones. We have a bunch of computational work on this. If you simulate that process, it becomes very hard for selection to actually see the genome. If you have a beautiful-looking tadpole, you don't know if the genome was amazing or if the structural genome wasn't so good but the developmental process fixed everything along the way. If you look at where evolution is doing most of the work, it ends up doing more work on the competency mechanisms than on the structural stuff. And if you do that, it becomes even harder to see the structural genome. You get onto a positive feedback loop where eventually you get a really unreliable medium, but it doesn't matter, because the algorithm is amazing and it fixes whatever happens. If you take that to its logical conclusion, you end up with something like a planarian. There is a whole spectrum of where these things end up in evolution: C. 
elegans is super hardwired, then mammals, then amphibians, then planaria. In planaria the material is incredibly junky because, for reasons we could describe, the genome is a mess: cells have different numbers of chromosomes. They're mixoploid. But they're the ones that have the most regenerative capacity, cancer suppression, immortality. They don't age. It's not because they have a beautiful genome. It's the exact opposite. It's because the material is so unreliable. All the effort went into the algorithm, to be able to say: we already know that the hardware is going to be iffy, so you're going to have to fix yourself. Other creatures can do that to different extents. I think it comes about because the more of that you do, the harder it is to select on the genome, so all the effort goes into the competency part. That's just one of the things that happens. But having that problem-solving competency means that even though the mutations are random, the outcome actually is not random at all. It looks quite different.</p><p><strong>[27:57] Brian Chung:</strong> Great. That's awesome. Yeah.</p><p><strong>[27:59] Unknown:</strong> So, Mike, if I can jump in — sorry, I was a little late, I was teaching. One of the things this reminds me of is the deep pertinence of understanding the genotype–phenotype map. Some of you may know this old paper by Stadler, Wagner, and Fontana called "The Topology of the Possible," which is about exactly that. Basically what it argues is that what is really important to understand is the process that takes you from genotype to phenotype, and that because of certain neutralities you can have, that process is much more consequential for evolutionary search and so on. These questions about local search around competent states make me think that one of the key challenges we face in this platonic space view more broadly is understanding what you might call, rather than a genotype–phenotype map, a substrate–competency or substrate–platonic-form map. 
I think you have provided us with plenty of evidence in your own work that the structuring of that map is highly non-trivial. You have a lot of equifinality: very different substrates can point to very similar competencies. You can also have some kinds of substrates that are very tolerant to some perturbations in their mapping into competency space, and some that are very intolerant. I'm curious if you or anyone else has looked back at some of that old machinery and thought about it in the context of this problem: if we want to engineer in competency space, which is what matters, we really need to understand the topology of that mapping.</p><p><strong>[30:26] Michael Levin:</strong> The people who have spoken about that mapping, those guys and some others, generally focus on its complexity. There's redundancy, there's pleiotropy, there's degeneracy. But I have a more radical view. I think it's more than that. I think it's intelligence, literally. I think it's problem-solving competencies. Most of those descriptions are still at a lower, dynamical-systems level. I think a lot of what's going on is isomorphic with paradigms from behavioral science: this is classic anticipation, habituation, Pavlovian conditioning. And then you get more. It's like that, but on steroids, because once you really have some competencies about navigating that space, the material itself is helping out. 
Every layer is doing something, and all you have to do is deform the option space for your parts to get interesting things to happen at your level.</p><p><strong>[31:56] Unknown:</strong> Going back to Brian Chung's multi-billion-dimensional space (that many parameters, and therefore a space of that size): is it mostly empty?</p><p><strong>[32:12] Brian Chung:</strong> I think mathematically, there's a dynamic that might be at play where most perturbations aren't harmful, but they do something meaningful in some sense: they don't destroy every capability so much as they change one specific capability, without ruining the whole process. As Mike was implying, there's a lot of structure to the perturbations in this parameter space for some reason, despite the curse of dimensionality and other considerations saying that there shouldn't be ways of doing this with just Gaussian perturbations. It's one of those things where this is very recent work; it came out a few months ago. I don't think there's a good understanding yet of why this is possible in models that have reached a certain level of competency when it wasn't possible before.</p><p><strong>[33:00] Unknown:</strong> There's an interesting paper that I saw a couple of days ago about how the connections within regions in the brain compress data, or compress the useful information. They did the study on fMRI data from 96 participants. They found that most of the connections between regions were redundant; four or five percent of the connections were the very important ones, and the rest were redundant for the overall computation. So my thought is: when we're thinking about these spaces computationally, is there some dynamic where exploitation and exploration of the space work in lockstep with the compressibility of the data object, such that the data object lets the observer or agent do more, that is, gives them more variation in what they can do next? 
I was also thinking about when Brian said that you get certain competencies that don't work in a baseline model, and you have to train it until that competency becomes useful. Is that because you have to have some foundational structure or organization, something very basic that gives you a low-level, foundational-brick understanding, before you can build a wall on top of it? Once you have that layer, you can start. One of the things in Blaze's paper is the phase transition that happens when a group of things come together. In that example, is there enough training to then take an abstraction a layer up in that map, from the base layer to the satellite view, so that in the satellite view I can do slightly more? The thing I'm working on, Observer Theory, is about how to model a computational possibility space: the set of all possible computations. When you look at some of the work done on evolutionary algorithms with one-dimensional Turing machines, and on bulk orchestration, where they take properties from the whole and not the part, there is a dynamic where you get these step-ups in the very basic forms of those computational systems: new things don't arrive in a linear fashion. They sit in some dynamic equilibrium along a linear line and then they jump up. They find some novel rule that was already there that they could select from. And that gives you free lunches straight away, because you've gone up to a different level of competence. From then on, whenever you change a rule, you're never going back down that exponential curve if you're surviving and continuing. 
And that's one of the ideas that I thought speaks not only to how we construct differences in categories as observers like us, but also to how we balance those evolutionary strategies within this ingression model. If we have some top-down pull, whether it's an attractor or something else, the balance is between exploration — pointing at an attractor and going for it — and exploitation — exploiting something within that attractor well. These are two different things: one, exploitation, is discovery of something we're already in, which costs very little because we're already there; the other, exploration, is quite high cost because we have to make many guesses about how to get to the next jump, the next exponential jump in capability.</p><p><strong>[36:58] Brian Chung:</strong> Reminds me of this notion of functional information, which I've just started reading about: the idea of reposing information or complexity in terms of composability. Once things become composable, the number of combinations that can possibly exist explodes, and that's much more functional. You can imagine that if you create a binary system, suddenly you can create more permutations than there are atoms in the universe. This composability and the dramatic growth of the permutation space creates a lot more functionality in that space as well.</p><p><strong>[37:33] Unknown:</strong> This speaks to information. There's Shannon information, Fisher information, Kolmogorov information, etc. But the thing that is missing from Kolmogorov information, which I think is really important, is this notion of compositionality or scale. The experiments that I showed give one a sense of how noise or random bits can get turned into algorithmic information. In a way, I think of life as an engine for turning random encounters into algorithmic information that gets baked into how to make stuff, how to do stuff in the future. 
But also the fact that it composes hierarchically — you get a composition, and then those things compose, and those things compose, and so on — gives all of that a multi-scale and compositional quality that isn't captured by the normal Kolmogorov sense of things. So I feel like there's some pretty basic theory work to do there to understand scale in information as well, which would give us a much better handle on the information-theoretic properties of life.</p><p><strong>[38:55] Unknown:</strong> There's the epiplexity paper, the one about structured information, an AI-sponsored paper. Do you think that measure can be adapted for this function? That was one of the most interesting to me, because you have problems computing Kolmogorov information for infinite systems, so those measures aren't hugely useful in the way people apply them.</p><p><strong>[39:22] Unknown:</strong> I think epiplexity is one route. There are also things one can do with conditional Kolmogorov complexity or conditional Kolmogorov information that Eric Elmoznino has done some playing with. There are definitely some promising directions. I'm not sure any of them is a complete theory as yet.</p><p><strong>[39:53] Michael Levin:</strong> I like the inclusion of exploration here, because one thing that has been completely missing from our efforts to recognize diverse intelligence is that, for practical reasons, they've been entirely focused on goal-directed competencies, which is only one half of the equation, because then there's also exploration and play. Part of cognition is non-goal-directed: just messing around to see what happens. We know what creative play looks like in mammals and birds. We're terrible at detecting it in unconventional embodiments. And so I've spent a lot of time recently thinking about what that would look like in cells and tissues and molecular networks. 
If we step away from the idea that it has to be this way because it serves some very practical purpose, or it's going toward this goal, or the whole thing evolved because it did something important: what does creative play look like in some of these systems? And how would we know? How do you know if an unconventional system is playing?</p><p><strong>[41:13] Unknown:</strong> Can I chip in? I've been listening to the discussion and, on the one hand, there's this language around competencies, which is very agent-focused in terms of skills or knowledge; on the other hand, we've got the notion of interfaces and functional information. It seems to come down to where you put your focus: on the agent, the environment, or the interaction. I was thinking about affordances, those Gibsonian possibilities for action, which put your attention very much on relational properties. So, if you look at something and say it's highly competent, maybe it's found its affordances in a given context, in its environment. In that relation, you could look at it as an observer and say: I draw a distinction around that entity, and I'll say it is competent because I observe its capacity to adopt some affordances which I recognize, and I say it's good at doing these things. Now, if you put it in another context, change its environment, as you've been doing in your experiments, you suddenly find this entity, which may not have had competencies in one environment, suddenly picks up these things in another. There's also talk about structured spaces: the idea that evolution or competency might increase in a more structured environment which has richer affordances, i.e., when you interface with an affordance, you get more out of it because it's already semi-structured and already capable. Relating that to play: maybe there are different ways of acting. 
If you're in a given environment — say, an evolutionary view where we look at something in a relatively consistent environment — the affordances in the organism-environment relation may get locked in a bit. Maybe you go from a play mode into a more restricted mode: it's working well; you start to reorganize your internal processes and are not in exploratory or play mode. You're happy here; it's all working well. If you place or perturb it into another environment, you are now stripped of the affordances which were working well for you. You could perish, drop into some low mode, or go into an exploratory, playful mode where you explore the affordances in this new relational context to try and pick up those things. And I think it matters how rich that environment is with latent — I'm going to make up the term — active affordances: affordances which may already be there, like the attractor someone mentioned, something you can attach to which gives you a lot for free. So there could be a hierarchy of affordances. Depending on your view: do you look at competencies, when you're agent-based? Do you look at functional information, which is the term if you're thinking about reception, the capacity to receive an affordance? Or do you look at affordances themselves? I think the discussion of language may benefit from drawing distinctions about what perspective you're taking: the agent, the environment, or their relationship. For me, when we're observing the discussion, that's a useful thing to bear in mind: the language changes and what you look for changes. When you're talking about interfaces, it sounds relational. Competency is more agent-based, agent-focused. 
That was just an observation I wanted to throw in.</p><p><strong>[45:53] Unknown:</strong> One of the things I wanted to come back to is play. The way observers are modeled in computational possibility spaces is second-order cybernetic: they have sensory input from the environment and an internal model which they update. I'm wondering if play is exploration that's more internal-model focused, where you're playing within a bounded area of the space you're in, and that therefore lets you, at low risk, update your internal model to expand the space accessible to you in the future, versus actual exploration, where you take all of those learnings from play and training, those skills and affordances, and move into the real world. Real exploration is taking that updated internal model, the thing you've looped around through play, to attack some riskier problem in real space. You want to embed more, or find more equivalences, from when you were doing that play loop. Is that what functional play is doing? Is it a bounded version of the whole environment, shrunken down into your internal model: trying to take advantage of something you've come across before that's sparsely related within your internal model, a few relations practiced by running that loop, just like practicing and learning how to cook or how to play an instrument? By running that loop internally, functional play effectively lets you do something in a future state where your internal model predicts that these equivalences will be valuable later for growing the size of your accessible space.</p><p><strong>[47:57] Michael Levin:</strong> Carl, did you want to say something? You had your hand up.</p><p><strong>[48:01] Unknown:</strong> Yes, I just wanted to endorse the last few points. 
In the context of a non-biological, more physics-style approach to self-organization, you can derive the most likely path into the future, which reads as how the system chooses to behave. Interestingly, the imperatives for the most likely paths have both this epistemic, playful aspect and an instrumental aspect. In my world, that's called epistemic affordance. And to my mind, that would be exactly this sort of curiosity, this play, quantified by the information gain that you can write down as a relative entropy or a KL divergence. The interesting thing, though, is that this is only the case: you can only derive the epistemic affordance as the natural behaviour of certain kinds of things when you've got exactly what you were talking about before, which is this sparse, deep hierarchical structure. When the internal model deep inside can no longer see the interface, the actions that it's prosecuting or exchanging at its interface with the world, then you can interpret the inside as exactly a good regulator or a generative model. But crucially, one which looks as if it is planning into the future to maximise the information gain. And this can only happen when you've got this deep, sparse structure: you've got nested Markov blankets. If you're just a single-cell organism with just one blanket, you have direct access to your action upon the world. But if you've got a deep, complex structure, structurally speaking very much like a deep learning model, then you get this as an emergent property. So there's a nice connection between the notion of play and information seeking, and one could even argue reasoning: I'm going to do this, because then I will know, or it'll look like that. And this deep, sparse structure speaks to what you were saying before about the brain, for example: the brain being empty of connections. I have friends who work on connectomics using anatomy. 
If you think about the brain as a collection of connections, they say it's almost empty: of all the connections you could have, there are hardly any there. And I think that speaks again to this minimum-description-length compression, minimizing Kolmogorov complexity but at different scales, where the scales are defined by the hierarchical structure. I couldn't resist joining; everything you've said makes entire sense. I have to go and do a podcast now; I'm not bored, it's brilliant. I will have to slide away in a few minutes.</p><p><strong>[51:11] Michael Levin:</strong> Thanks, Carl. Does that mean we could take what you just said and apply it on an evolutionary scale, and say that the many Markov blankets between the genotype and the phenotype mean that overall the whole process might exhibit competencies that aren't as blind and dumb as it's supposed to be?</p><p><strong>[51:39] Unknown:</strong> That also occurred to me, because you've got the separation of scales. I think the whole point about deep models in deep RL, or deep models in multicellular organisms, speaks to the fact that as you get deeper into the system, time and scale slow down or get bigger. The perfect example of that is the scale-free or scale-invariant aspect of evolution in and of itself.</p><p><strong>[52:12] Unknown:</strong> Mike, this reminds me of something that I know Sam Kriegman has been thinking about, which is the idea that although we often tend to boil down the evolutionarily salient part of some entity to a fixed or static point in time, what is actually seen by selection is a trajectory of the organism through its possible configuration space. Fitness is not computed at one point in organismal configuration space. It's an integral along this family of possible trajectories through organismal configuration space. 
Does that make sense?</p><p><strong>[53:16] Michael Levin:</strong> It makes sense to me, but I think that's a pretty controversial claim in the standard neo-Darwinian synthesis. The idea is supposed to be that both the foresight and the hindsight are pretty much zero. We've toyed with models where there's metadata on each allele to say: what was it before, and how did that work out? You can make those things, and we're playing with some of those models. But at least the standard view, and obviously there are people who disagree with this, is supposed to be that all you have is what you're doing right now. The only thing that fitness can see is whatever you're doing right now. So I do think that's restrictive. But I think that's highly controversial.</p><p><strong>[54:06] Unknown:</strong> Do you think that, when you think about evolution and fitness, given the discussion around platonic spaces, survival and the continuation of genes is enough to bring evolution into this discussion, or does it need to have some informational component? One thing that came out of the last talk was Tim Jackson's discussion of convergent evolution, and those convergent structures seem to maximize sense data, at least in the very basic sense of how much of a certain space, or a certain type of space, you can access, whether that's through sensor data, or being able to fly, or echolocation if you're a bat or a dolphin. Does this conception of platonic space need to have some informational component? When I think about observers all the way down, not just animals, we think about persistence, which is that survival point, but also computational boundaries: how much you can do, what your computational capacity is. One of the things that needs to be considered, if we're thinking about a space of attractors, is what the properties of those attractors are in relation to the properties of the agent—what we are measuring against. 
That is really critical here, and there is a lot of work being done, because of LLMs and AI, on different statistical measures for useful information, or for how things can move in phase space. Can those ideas be adapted for biological evolution? I'm not a biologist, so I don't know the answer, but I thought it would be interesting to put to the group.</p><p><strong>[56:06] Michael Levin:</strong> I personally am very suspicious of the idea that survival and replication is the main driver. I know that's how it's supposed to be. I'm not sure that's true at all. One thing is that in order to have that in the first place, you have to already have your replicator, and already have the thing that has differential fitness — the thing has to persist and defend itself. There are some very interesting dynamics, which we'll preprint in about a week, of what happens before you get replicators. Blaze has some stuff on this as well, but there are things happening before you can point to something and say: that's a thing that will have differential success. So whatever's happening before isn't driven by that. There's some underlying dynamic, which for us seems to be a positive feedback loop between learning and causal emergence. The thing ratchets itself up by learning and causal emergence. What comes before that is weird. I don't even know that we have a proper vocabulary for it yet, because it's happening in a pool of this pregnant medium where you can't really draw circles around the thing that's reproducing, because the materials are all over the place. They come and go, and there's not a single thing, but you can already see that these loops are pulling themselves up by their bootstraps. Eventually the causal emergence hits and suddenly you get a replicator. Now you're off to the more conventional optimization part.</p><p><strong>[57:51] Unknown:</strong> There's a really interesting thing here. First of all, I'd love to see those results, Mike. That sounds fascinating. 
They certainly jibe with a lot of things that I've been seeing too. And they point out something that I think is actually really important in all of this, which is that in normal biology, the coarse graining is always given. There's just this presumption that you know what the thing is that is replicating. And obviously, the Dawkins "Selfish Gene" thing was very provocative because it proposed a different coarse graining that people weren't used to: that the gene was the thing. In addition to emphasizing competition, etc., it was just an alternative coarse graining. But obviously, a coarse graining is just a model. There's nothing that says one coarse graining is correct and another is not. Any given coarse graining allows you to write down equations, to look at dynamics, to ask about reproduction. And it's not trivial, because a coarse graining requires that you be able to say when something is or isn't an entity, when something is or isn't another instance of the same class of entity. Is something a transformation? Is it reproduction? Is it another of the same? And for organisms with complex life cycles, the question of whether the next stage is the same species, or actually one species giving rise to another, giving rise to another: these questions come up all the time. Is a hive an instance of a thing, or only the bee? And the answer, of course, is both and all of the above. In the period before you get cell membranes, especially, you really don't have an obvious coarse graining at all. You just have all these loops and interactions that seem to be autocatalyzing each other. And then there's some point at which, I think, our intuition is that the thing has a model of itself. For me, autopoiesis is something that we recognize when we assert that the thing doing the autopoiesis actually has a self-model, and therefore is following that self-model in order to construct more of that self. But again, it takes a model to recognize a model. 
So all of these things are completely relational. You can't make any truth statements about them without presupposing a coarse graining. This relational view of what life is strikes me as the equivalent of relational quantum mechanics. It's something that hasn't really been well theorized, and it would make a lot of the paradoxes go away by pointing out that you can't make any of these statements without positing a perspective and, of course, a coarse graining going along with that.</p><p><strong>[1:01:02] Michael Levin:</strong> I think that's really critical. Josh Bongard and I have been playing with this in terms of polycomputing and this notion of different observers who see the same physical events as different computations. Some recent work (this isn't out yet either) tries to simulate this using a model of gene regulatory networks. The idea is that evolution has a choice between scaling up the competencies of the material, the individual networks, versus leaving the material in place and instead adding different observers who see the exact same thing going on but are able to map a different coarse graining and a different set of interpretations onto it. The answer is that evolution prefers to be able to do both, but if it has to choose one, it'll scale the observers rather than mess with the material. Part of it is that if you start messing with the material, you screw up dependencies: if something else was dependent on it, now things downstream are going to go wrong. Whereas if you leave the material in place and simply add perspectives, then you can overload meaning onto the same thing without messing anybody else up, and keep adding perspectives. Quantitatively, that looks like what it prefers to do.</p><p><strong>[1:02:28] Unknown:</strong> Here's a question about that: would something like that prefer more observers? 
Because when you pull a bunch of those observers together with the same properties, they can form a component, a small network component, where they get parallelization and computational competencies, those free lunches from making more, versus a fixed substrate which evolution already knows is persistent. Then you go: well, that's persistent; I can pick between changing it to make it more persistent, less bounded, or just making more of the same. This is the competency from the group of things. All of this, again, to me screams: I'm optimizing for my computational power primarily, and then I look, as a second order, at whether I can maintain persistence while optimizing that. It's a computational view of what's going on. But I wondered what your view of that is, because it's something that you see in some of the basic experiments around cellular automata, and they are far away from that. The reason I'm interested is because the dynamic is similar. I wondered if that explanation was interesting or was thought about in the context of that result.</p><p><strong>[1:03:53] Michael Levin:</strong> I think that's very interesting. We haven't gotten to that yet. Right now, none of the observers talk to each other. After we characterize all of that, we'll do exactly what you said and let them form a network too. Katrina, did you want to say something?</p><p><strong>[1:04:10] Katrina:</strong> I just wanted to follow on that comment about the importance of the relational nature of what we're talking about, and how, Leo, you had brought up affordances in the environment. I think equally important there are the affordances of other agents in the environment. Back to that earlier example of play: Jaak Panksepp, the neuroscientist, has this widely shared model of play, which holds that play is a relational activity between organisms. Something like learning to cook actually isn't play under some definitions. 
Play is emotion regulation, social engagement that we do in order to create alignment between ourselves and other agents in the world. It increases synaptic plasticity and gets us into a mentally labile state. The reason I think that's important to bring up is that when we've been talking about information, how information gets shared and where the free lunches come from, I think of that as being critical in humans: our free lunches come via human communication. I'm getting all kinds of information right now at very low cost, or at a highly discounted cost, because I'm putting my cognitive architecture in a state where I'm receptive to it. I think that could be what's accelerating our human evolution in intelligence, taking us further and further away from our genome and more into this information-sharing social space.</p><p><strong>[1:05:32] Unknown:</strong> Can I just respond? This might be old territory, but there's the notion of play being analogous to raising the temperature of a system to explore more of its performance space. In our context, in a stable environment you might get locked into particular kinds of affordances. But then when the environment changes, or we go into a room with new people, we have to explore how to build bridges, how to couple with that environment, including other agents, and play might be the notion of raising the temperature a little bit to explore the space of potential couplings or affordances and where they might lead. Analogies have been made in the other direction too: in statistics, people have borrowed evolutionary algorithms to search complex, multi-dimensional, rugged energy landscapes. But equally, transporting some of the concepts of statistical physics back into biology, which has been done lots of times, is also valuable in thinking about some of the processes that we're looking at as search mechanisms for finding the optimal engagement with your environment. Optimal is a hard word. 
It's something less than that, I think, but something that provides a way of hooking up with our environment that might, by exploration and finding those key affordances, create a ramp. The more competent we are, or the more affordances the particular environment offers, the higher, I guess, we can ascend the ramp of possibilities.</p><p><strong>[1:07:58] Brian Chung:</strong> Yeah, Jacob.</p><p><strong>[1:07:59] Unknown:</strong> I really love this idea. Alison Gopnik certainly has made some connections between this notion of exploration and a kind of annealing point of view. I nonetheless don't think we pay enough attention to the role of behavioral plasticity in learning. A simple example really drove this point home to me. My wife, Erica Cartmill, will sometimes, when she's trying to explain very basic conditioning of an animal to audiences that are not a bunch of physicists, do an experiment where she tries simple reinforcement on a human subject to shape some arbitrary behavior. There is a very strong relationship between the base behavioral plasticity that this person exhibits and how easily they can be shaped into the appropriate target behaviors. Someone who just sits there like a wet fish, not doing anything, provides very few probes into this possible affordance space, which in this case is being shaped by a rewarding human interactant, but which you could think of more broadly as any kind of relational source of potential reward. If you're not exhibiting that kind of plasticity, you're not going to discover these affordances in the environment. I think this raises a real methodological challenge, which comes back to something you raised very early on, Mike, about the difficulty of what we can and can't recognize. 
I think we're very limited empirically in our ability to probe the capacities of intelligent systems, because we have to be able to read the design of the task in the same way that the system in question is reading it. This is one way of saying that a lot of the shortcut learning we see, for example, is exploiting an affordance we were unaware of in the design of the particular task. How do you think about that? In my language, I often talk about the as-relation: the interpretive layer involved in all of this behavior, where the environment is read as having this set of options, and our very limited capacity to read the option spaces that are interpreted by—it's hard enough to do it with other humans, let alone something that's radically different.</p><p><strong>[1:11:04] Michael Levin:</strong> Sorry, Katrina, had you had your hand up before? Did I miss that? No. Blaze.</p><p><strong>[1:11:12] Unknown:</strong> One of the reasons that I have some problems with the play concept is that I think it carries with it the assumption that what we normally do is something other than that, that work is the default, that we are optimizing for something, or that there is some other thing. The reality is that any living system stays alive by virtue of staying alive. It doesn't mean that it has to be optimizing something. There is a dynamical loop that is stable enough that it continues to exist, and the range of things that can happen in the context of such a dynamical loop is very, very large. We know that this sort of Darwinian-Spencerian idea, that if you're not working hard at it you're going to die because something else is going to eat your lunch, is not really the case for a lot of organisms in a lot of situations. There are many things that create lunches for each other. There are networks that mutually reinforce each other in various ways. And that just leaves a lot of space for other stuff to happen. 
So it's not that I think seeking information or curiosity isn't something that things with intelligence do; certainly they do it. But any of these definitions about play that it's only about stuff that satisfies your curiosity or only this, only that, it's a little bit like trying to define art. There's this form of play that is just bumping your head against something — is it play, is it not play, is it just a tic, what is it; it's very particular, very value-laden, very anthropomorphic. I think that when we look at a worm doing something fun and we say it's play, we may be doing something that is usefully empathic. It may be that there is pleasure being experienced, that there's something about that experience subjectively that is like what we associate with play. It also may not be, but whether or not that is valid to me doesn't speak to whether it is serious or not. Stuff does all kinds of stuff. So I guess that would be my take on that question for what it's worth. By the way, I need to switch to phone mode. So I'm still here, but may not be on the same video. David.</p><p><strong>[1:13:53] Unknown:</strong> So let me chime in here. As someone who plays music and teaches children how to play, from my perspective as a musician, play is fundamental to music, to learning how to play music well, learning how to compose music. The way that I experience play in music is not necessarily information seeking, but almost pleasure seeking, maybe just boredom, taking up time, something to do just to do, that kind of thing. I would say that play is not necessarily exploration or information seeking at all. It can have multiple purposes. Maybe it's almost like a will to power: just to do something. I am fascinated by this question of how to distinguish between play and other behaviors. Early in my life I collected a lot of ants and spent a lot of time observing them.
It seems like some ant behavior is exploration, like when they're foraging and it's almost randomly driven, just a random walk through the environment. But some of it could be characterized more as play. I think it may be very specific to the organism how you make this distinction. And it probably has to be within an understanding of what the goals of the organism are — what it's trying to do. I think a functional approach will get you somewhere in trying to understand what play is, how to characterize it, and how it differs from other things. Thanks.</p><p><strong>[1:16:26] Michael Levin:</strong> Just watching cells build an embryo, especially in time-lapse, that's one experience. And then a different experience is watching a bunch of cells explanted in the dish or in some other context, and you watch them running around with not much, at least apparent to us, happening. And I always think about this sphere of television broadcasts that has been spreading out from the Earth. I always think about aliens somewhere, and there's 80 light years of the Three Stooges and football games and things like this spreading out into space. I just imagine the aliens getting that and seeing some of that and trying to figure out: what is this? Are they doing something? Are they just messing around? Trying to understand. And it's basically that. We're in that position, watching these cells, trying to figure out: is this a poor attempt to build something? Or is this not even that at all, but a fantastic attempt at having an enjoyable time exploring the dish? Or what the heck is it? Jacob.</p><p><strong>[1:17:37] Unknown:</strong> This is riffing off of a remark that Douglas Brash made in the comments about when your kid plays: play is coming up with your own goal and pursuing that as an end in itself.
I do think that this describes a capacity that is clearly very important to humans, namely our ability to choose essentially an arbitrary thing and pursue it as an end in and of itself. I think there's a very beautiful theory of culture in the work of the early 20th century sociologist-philosopher Georg Simmel in his book. I can't remember what the German name is. I will look it up and put it in the chat. But in any case, he has this theory that culture is built by identifying basic, core forms of life and pursuing them as ends in and of themselves. The mathematician David Mumford has an interesting account, very similar, of the origins of different parts of mathematics, along the same lines: geometry comes from pursuing the idea of space and fixing on that and exploring it in all of its possible variations. Analysis comes from taking notions of motion and putting them through their paces. I think this is something we can clearly do as humans. The challenge is recognizing that capacity in radically different embodiments. As Blaise said with the worm, and as you've said, Mike, with the cells, we know from our own introspective experience that there are cases where we are choosing some arbitrary end and pursuing it simply as an exercise in pursuing that end. We know what that looks like in other people and they can tell us that's what they're doing. Can we recognize that activity, which seems to me to be absolutely fundamental to our basic cultural and scientific capacities, in other embodiments?</p><p><strong>[1:20:35] Unknown:</strong> Let me throw something on this. Play takes energy. That is something that is going to be selected against: too much expenditure of wasted energy, right?</p><p><strong>[1:21:06] Unknown:</strong> On the other hand, play is fun. I was wrestling with this proposal. There's a comment in the chat now from Leo that there's a correspondence to a temperature scale. Shouldn't play be low temperature because it's fun and easy? 
Maybe it's really an entropy thing rather than an energy thing? That there's low constraint.</p><p><strong>[1:21:41] Unknown:</strong> I see exploratory play as being more like the high-temperature regime, but we need to generalize temperature, don't we? It could be that in a population, a small number might be exploring or playing more on the fringe of the potential spaces of interaction, and that the size of that group relative to the bulk of the population would follow a Boltzmann-type distribution, which you can only access at higher effective temperature. It's less populated, so it's the fringe, and they're the more playful. And as you go lower in temperature, you're getting to the more uniform, regular, habituated modes of interaction. That was the thing I was grasping at in that analogy.</p><p><strong>[1:22:55] Unknown:</strong> I get the analogy. You're saying I should readjust my thinking: focusing on a single bit of work that you have to get done by five o'clock today is actually low energy, not fun, and I should look at it as low energy rather than as high energy, even though it feels like spending lots of energy. That's what I meant by the entropy business.</p><p><strong>[1:23:23] Unknown:</strong> But I think that being too thermodynamic about this just presumes that we're in too constrained a situation. Let's take a chemotactic bacterium close to its point of starvation. Then you may be close to a limit where if it doesn't tumble at just the right times, it significantly increases the likelihood that it will not exist in the future. Something like that is going to have to behave like an optimizer, which is to say it doesn't have a lot of space to have fun. If its space of behaviors is tumble or don't tumble and making the wrong decision means there's no more bacterium, then there's not a lot of agency or fun in a system like that. It'll only continue if it does exactly the right thing.
But for the huge majority of organisms, including unicellular ones, there's such a range of behaviors. There are so many behaviors that are consistent with continuing to exist. If you're doing your chores, skinning the animal that you just killed, and you're bumping your **** a little bit while you do it and dancing around a little bit, the idea that the energetic difference between bumping your **** and not bumping your **** is going to make a difference in your survival is just ridiculous. Of course we're not that constrained, and I think that's true of the vast majority of life. I think that this whole teleological question of fun kind of vanishes. You just see there's a lot of turbulence in the system. There's a lot of stuff that happens. It's sometimes emotionally loaded. It's informationally interesting. It can develop cultural dimensions. But there's nothing unusual about this. The idea that everything is so constrained, I think, is just a wrong idea about how life works.</p><p><strong>[1:25:24] Michael Levin:</strong> It sounds like what you've just described is something like the Maslow hierarchy.</p><p><strong>[1:25:33] Unknown:</strong> Almost everything is above the baseline of the Maslow hierarchy when you look at it.</p><p><strong>[1:25:39] Unknown:</strong> Yeah. So I think what you bring up is the importance of making some analytic distinctions. Using the case of human play as the paradigm example between what is behaviorally visible, which might be the unpredictability of the behavior, given some circumstance, which I think is closest to the temperature, the enabling conditions, which again, in the case of human play are typically situations where there is a sense of protection, where there is a sense of lower risk to engage in more exploratory behavior, and the motivational status. In other words, what is the agent who is playing trying to do in that activity? 
I think you're absolutely right, Blaise, with the point about the degree of constraint and enabling condition. There's a beautiful example with the domestication of these birds, the white-rumped munia, where their wild-type song is very, very characteristic, but under domestication, as an epiphenomenon of the reduced selection pressure, they started to develop much, much more variable song behavior. Now, I don't know if anyone has heard them, but they have these amazing songs. There's lots of variation, there's lots of variability. This is presented by folks who work on the evolution of language and domestication as evidence of reduced selection pressures, reduced constraints, opening up at least some degrees of freedom for greater variation and, in this sense, play across evolutionary time. I think it is very important to keep in mind how rare it is that we actually see organisms against the bare metal of survival.</p><p><strong>[1:27:58] Unknown:</strong> Jacob, did they find anything in the composition of the songs? Were they more complex, more rich? They explored beyond the survival mechanics; it really went into having more space, more computational capacity: I don't have to find food anymore, therefore I can now put more energy into making these songs more rich, more complex, more coherent.</p><p><strong>[1:28:31] Unknown:</strong> Absolutely. Okay.</p><p><strong>[1:28:36] Unknown:</strong> You know what fits nicely</p><p><strong>[1:28:37] Brian Chung:</strong> With that idea is the prevalence of play in human children versus adults, because at least if you're a human child in a relatively safe environment, you've got that domestication situation and you have the ability to play and explore a lot.
And then as you get older, it's, oh, ****, I'd better get serious about my life.</p><p><strong>[1:28:54] Unknown:</strong> And the selection pressures are more apparent to you.</p><p><strong>[1:28:59] Unknown:</strong> That is also consistent with what is supposed to be happening in the academy. The term scholar comes from the Greek schole, which means leisure. The idea is supposed to be that you're protected from some of these forces so that you have time to play intellectually. And I do think we — a lot of what we do is creating these spaces for play like this one.</p><p><strong>[1:29:32] Unknown:</strong> When we become adults we play less, partly because of constraints, financial constraints, other kinds of constraints, but could it also be that we get bored with life? Lose the will to live, maybe. Think of Nietzsche's characterization of life as "will to power." So that's really what life is ultimately about: exerting some kind of power over your environment, and play is just one of those ways of exerting your power. That's what life is fundamentally about.</p><p><strong>[1:30:35] Unknown:</strong> Very postmodern view of what life is about.</p><p><strong>[1:30:42] Unknown:</strong> What?</p><p><strong>[1:30:43] Unknown:</strong> Very postmodern view. It's like everything's a game; it's taking everything as a power game. There's the biological limit. My reaction is that it's quite a shrunken-down version from personal experience. It feels like there's probably a bit more to it than that, and that we reduce it to that because it is easier to investigate that way in finite time, with metrics and with tests. But when you extend time out, that might just be a function of the fact that the tool we have today sees it that way, and the tool in the future may see it differently. I think that's an error in that line of postmodernist thinking: it's almost tuned along timescales.
But that's a personal view, just to give the counterpoint: I don't think you can just do that.</p><p><strong>[1:31:45] Unknown:</strong> I think what Blaise said in the chat is that life is just doing stuff. Power in the sense of that kind of will to power, the will to live. It's very fundamental. And maybe it even precedes reproduction. Maybe the fact that life forms reproduce is the manifestation of something deeper. Why even bother to reproduce? Why even bother to go on living?</p><p><strong>[1:32:36] Unknown:</strong> A different perspective is that we lose the open-endedness of play. When we're children we don't know the limitations; we are exploring them. We don't know the low-level details that constrain us. Today we have great ideas we've thought about, but we don't even dig deep on those ideas to see why they cannot work; we decide they can't work before we find a way to make them work. So maybe play is not knowing too much.</p><p><strong>[1:33:20] Unknown:</strong> One of the really interesting things about children is that they both play a lot and love repeating things. They exhibit quite different properties vis-à-vis adults, with respect to both variability of behavior and getting bored. They love having the same thing happen over and over again. And there's a construal of that that says that in both cases, what it is about—in a Nietzschean register—is just affirmation. They affirm whatever they're doing. I'm singing this song for the 16th time. I'm very excited about that. I'm going to now go do some random other behavior. I'm also very excited about that.
From an existential standpoint, I do agree with you, David, about getting tired of life, but I think the way to look at children is, at least, as models of this kind of radical capacity for not getting bored with things: both to play and to radically affirm what's happening.</p><p><strong>[1:34:43] Michael Levin:</strong> There's an interesting piece of data that I think is deep and hasn't been dealt with that speaks to something David was bringing up. This guy did these experiments where he would take a rat and throw it in a bucket of water, and the rat can tread water for a couple of minutes and then it drowns. And that's what happens. Then he would throw the rat in, wait a minute, 45 seconds, take the rat out, dry him off, put him back in. You do that a couple of times and basically the rat learns that he's going to be rescued, and then you find out that a rat can actually tread water for about an hour. So this is very interesting. The physiological reserves are sufficient to keep going for an hour. Why do most rats drown after a minute and a half or two minutes? There's some version of giving up, and I don't know that that's available to insects, but it seems to be available to at least some mammals, where in the hopelessness of it you would think that evolution would greatly select for a terminator-like behavior where, if you've got the physiological reserves, you just go to the last moment — one time out of 1,000 something will happen, you'll get rescued; that certainly should be the favorable phenotype. And yet that's not what happens. At least in the mammalian case, and there are other examples of this in birds, they have the ability to actually give up and say, forget it. I could keep going, but I'm not going to. I think that's interesting, and how it interplays with evolution is interesting. You wouldn't predict it from standard Darwinian principles.
I don't think you'd predict this.</p><p><strong>[1:36:28] Unknown:</strong> One of the tools in observer theory is this idea of a limit of your possibility space from the observer's perspective, i.e. what you think can possibly happen versus what you're predicting right now, what normally happens. These spaces differ: the field is smaller than the edge of your state space. When you pick the rat up after a minute or so, you're creating an equivalence where that field, that state space, gets bigger, approaching the boundary of what it thinks is possible. And because you have that equivalence after reinforcing it enough times, that becomes part of the rat's possibility space. So in its internal model, when it's creating that loop, it goes: oh, this happened before and this can happen again; if I hold on a bit longer, then I can keep going. Once that possibility has been actualized in its internal model (a rat might need more reinforcement, or more direct reinforcement from, say, us), then it can do it. It accesses that full possibility space because you've created an equivalence for it by interacting with it, by effectively coupling with it, by giving the rat a proposition: you will physically get lifted out of this tub. That proposition is accepted by the rat, because it doesn't have the choice of whether it gets lifted out or not, but with enough reinforcement that proposition becomes part of its world order, and therefore its state space accepts it. So it can then do that thing because you've given it, effectively, top-down knowledge; its possibility space was bigger than it knew. That dynamic of reinforcement and coupling between different observers, where accepting and rejecting propositions changes the morphisms accessible, the choices accessible between states in the internal model, can apply not just in that example but at all scales, and is an interesting way of investigating that difference.
The way this relates to the idea of Platonic space is: when we do new things, or introduce a new element to something that doesn't have that element, we are ingressing into its Platonic space, or its state space, or its data space, whichever term you want to use. We're changing it. Ingression from you: you've changed the things it can do, therefore it now thinks it can do more. It's updated. And that loop is a way to play around with the idea of ingression in a tight, physical way.</p><p><strong>[1:39:18] Unknown:</strong> I do think there is something to this from when I kept ant colonies: when the queen died, the colony just fell apart. Even though the ants continued living, they weren't foraging, and eventually they would just die out. It seemed to me, looking at it, that they lost the will to live once the queen was dead in the colony. But maybe that's just got a complete biochemical explanation that can be found. Certainly behaviorally, that's what it looked like. They lost the will to live. We have a lot of interesting things going on in the discussion in the chat. Someone brought up galaxies earlier. I wonder at a deep metaphysical level: maybe that's what existence is, actually—why is there something rather than nothing? We've all thought about that question, but I don't think anyone's got a good handle on it. Maybe there's something rather than nothing because the universe wants to do stuff.</p><p><strong>[1:41:15] Michael Levin:</strong> Dave, to your previous point about the ants as to whether there would be a biochemical explanation, I think there's always a biochemical story to be told of anything, or a physical story to be told. To me, it's like the neural correlates of consciousness. You could tell that story. It's not false exactly, because it does accompany and it does implement the thing you're talking about. But in most interesting cases, that low-level story is not the most insightful story.
I'm sure there's some biochemical fact about it to be found, but there's probably a more interesting level to it, I would think.</p><p><strong>[1:42:07] Unknown:</strong> The ants get pheromones from the queen, giving them instructions to do different behaviors.</p><p><strong>[1:42:18] Michael Levin:</strong> No doubt, if you watch two brilliant mathematicians discuss some proof and you come away saying, look here, there was a bunch of air molecules and they moved like this and then that, you're not wrong exactly, but you've missed the whole point. You haven't facilitated the next interesting thing that might happen there. It's just that you've picked poorly as far as the level of description.</p><p><strong>[1:42:44] Unknown:</strong> It would be an interesting experiment to try: say, a robot queen you inject into an ant colony that has all the right pheromones and everything it's secreting. But does it play the exact functional role of a real live queen in the colony?</p><p><strong>[1:43:04] Michael Levin:</strong> Do you know the book "The Soul of the White Ant" from the 20s by Eugène Marais? Have you seen that? Well worth it. If you're into ants, "The Soul of the White Ant" by Eugène Marais, back from '23 or something. It's really amazing. He did all these experiments: there's a colony, and if an ant from one colony goes to another colony, they kill it. But if he goes over there and the queen is dead, they take him in. He becomes one of them; there's all this stuff. He was trying to work out how they know, and the distance, and putting barriers in. Really, really remarkable.</p><p><strong>[1:43:45] Unknown:</strong> In my own experiments, when a queen died, I would try to introduce a new queen into the colony to see if they would take it. Sometimes they would, sometimes they wouldn't. It may vary with the species.</p><p><strong>[1:44:13] Michael Levin:</strong> I think this has been great. Does anybody else have any last thoughts?</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Conversation with Darren Iammarino #1</title>
          <link>https://thoughtforms-life.aipodcast.ing/conversation-with-darren-iammarino-1/</link>
          <description>A 54-minute discussion with philosopher Darren Iammarino on open problems in the Platonic Space model, exploring patterns, minds, causation, randomness, quantum scale, and the nature of cyborg and unconventional minds and identities.</description>
          <pubDate>Wed, 04 Feb 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 6983361849688900014cacd2 ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/hYzSZvsg0Vs" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/be4b112d/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~54 minute conversation with Darren Iammarino (<a href="https://scholar.google.com/citations?hl=en&user=YNrxRaYAAAAJ&view_op=list_works%29&ref=thoughtforms-life.aipodcast.ing">https://scholar.google.com/citations?hl=en&amp;user=YNrxRaYAAAAJ&amp;view_op=list_works)</a> about open problems with respect to the Platonic Space model.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Patterns, minds, and causation</p><p>(09:18) Interfaces and positive pressure</p><p>(21:50) Randomness and new universals</p><p>(31:21) Quantum randomness and scale</p><p>(39:16) Cyborg minds and identity</p><p>(50:18) Unconventional minds and substrates</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a 
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Darren Iammarino:</strong> From what I can tell, we definitely seem to be on the same page about a lot of things, which is great. Definitely in terms of the lower-agency mathematical objects and those being a space that we can discover, explore, and map effectively. I'm more interested in your take on these higher-agency minds, patterns that you believe are in this platonic or latent space and how you think those come to interact with the physical. We could start there and dive in.</p><p><strong>[00:48] Michael Levin:</strong> Sure. Okay. First the general idea, and then I'll give a couple examples. My general idea is simply that the standard assumption seems to be that these patterns are things that mathematicians study. And that's it. That basically other sciences are not relevant here. Math is the space that's involved. Just knowing nothing else, I would question that assumption. How do we know that that's the case? Might there not be other patterns that are actually recognizable to other types of disciplines other than mathematicians? On basic principles, I would wonder whether this is a good assumption. I'll give an example. 
A couple of years ago, we published a couple of papers modeling gene regulatory networks. These are small networks of molecules that turn each other on and off or up and down, and they're important for health and disease and various other things in biology. What we showed is that even small networks, 6 to 20 nodes (this is not some giant trillion-parameter thing; this is a small, biorealistic network), are capable of things like habituation, sensitization, Pavlovian conditioning. They can count to small numbers, meaning they have a different output: one, two, three, four, boom, then something happens. This has biomedical implications, but the deeper thing, I think, is this. These are not fundamentally anything about biology. These amount to a set of ordinary differential equations with just a few parameters. And that mathematical object seems to be able to do what behavioral scientists would call associative conditioning. My question is, I started out saying low-agency mathematical objects, but a couple of people on our forum took me to task and said, are you sure they're low agency? How do you actually know? Have you tested them? You're right, touché, I don't know that, because you have to test them in their own problem space. It turns out that what I see is a kind of spectrum where you've got mathematical truths that are stable, like a rock: the base of the natural logarithm, e, is 2.718, whatever it is exactly; it just sits there and it's not going to change, and that's the end of that. Then you've got more interesting things like the liar paradox. The liar paradox, if you give it a time dimension, and this is Patrick Grim's work from the 90s, is not a paradox; it's an oscillator. It just goes true, false, true, false, true, false. You've got your rock that just kind of sits there doing nothing. You've got this kind of object of logic that basically is a little oscillator, a little buzzer; it just goes up and down.
Then you've got a few coupled ODEs, and they apparently can do habituation, sensitization, associative conditioning. Now I'm seeing a spectrum here. I'm seeing a variety of patterns. If you had more complex patterns, who knows what they could be capable of. But already I'm happy with associative conditioning, because I think at this point, you're on your way to something that could be called the province of cognitive science. That's how I get there.</p><p><strong>[04:02] Darren Iammarino:</strong> Based on that, do you think that any of these patterns are eternal and unchanging?</p><p><strong>[04:15] Michael Levin:</strong> I don't like to commit strongly to things I don't have data for. I don't see how e is going to change. I don't like the idea that these patterns are eternal and unchanging. I think I see dynamic activity within them to allow them to grow and learn from experience, maybe some lateral interaction between them in that space.</p><p><strong>[04:47] Darren Iammarino:</strong> You mentioned that lateral interaction. That's an interesting area to explore. I think we have an example, right?</p><p><strong>[05:00] Michael Levin:</strong> I think the example is, and Darwin called it out, he said, mathematicians have an extra sense. And what he was getting at was that if you are familiar with that area of that plane, if you're a mathematician, you are a mind, possibly you are already one of these patterns embodied through a physical body. But when you are sitting there in your armchair, you're not doing physical experiments, your eyes are closed, you're not doing anything with the physical world, you're saying, we've got this octahedron, and maybe that's an instance of two patterns interacting laterally with each other. We only find out about it because one of them's embodied. So then he goes and publishes a paper. We all learn about it.
But the original interaction is when you're pondering: of course it has to be that the quaternions couldn't possibly obey this rule of commutativity, whatever it is. I forget what it really is. Maybe that's a lateral interaction?</p><p><strong>[05:58] Darren Iammarino:</strong> That's a good point. So in terms of these ingressing forms intersecting with whatever the physical realm actually is, to me that's the interesting area of exchange where novelty is being produced. I know you're aware that I'm into the discussion of randomness, how all this fits in. And what would you say, in terms of these interfaces: what really is the physical? So we've talked about the non-physical for a moment there, but how can we differentiate these two arenas?</p><p><strong>[06:52] Michael Levin:</strong> Let's talk about the interaction first. Because this is where people have a lot of problems with this. They hate the unchanging thing, which we're not doing. They don't like two realms. I'm not sure how we really squeeze these things down into one realm. I don't see how you could do it. The other thing people really don't like is interactionism. How can you have these non-physical things interacting, having causal effects in the physical world? The way I see it, there are two issues. One is that the kind of causation that they're talking about was killed off by physics almost 100 years ago. This billiard-ball thing where both things have to be physical and you have an exchange of energy. I don't think that type of causation survives anymore. A lot of philosophy has been done on different ways to understand causation. Here's how I understand it. To me, A causes B if and only if A is the best, most insightful, most actionable explanation for why you got B as opposed to C or D or E or anything else.
In other words, what I see is that it serves as an explanation that is parsimonious and fruitful going into the future, not just looking backwards and telling the story about what happened, but something that helps you make the next cool thing happen. Some sort of insightful explanation for the specificity of B: it was B and not all these other things that could have happened. If that's the case, I don't think we have a problem at all. In fact, that kind of interactionism has been here from day one of math, because when you find out that this simple machine does whatever it does because E is what it is, or, more modern, that the fermions do this or that because the symmetry group is whatever it is, there you go, that's it. Your explanation for why B is A, and A is not a physical fact. That's how things are. I'm okay with that. It only breaks down if you expect both sides of it to be physical, which is begging the question.</p><p><strong>[09:18] Darren Iammarino:</strong> I'm with you on the problems with physicalism and perhaps even the necessity of dualism or a pluralism of some kind; I've always been more in that camp. Do these higher agency patterns—must they always be interacting with whatever the physical is? Or is there a possibility that one or some of them could choose not to? Is that something feasible?</p><p><strong>[09:56] Michael Levin:</strong> I think that's a really interesting point. On the one hand, I think we could say that whatever that space of forms is, they're under positive pressure. It doesn't take much to get them to ingress. You build something even very minimal and there they are; you're immediately inheriting all this stuff. Something wants to come through, so to speak, all the time. That's some kind of positive pressure to get through these interfaces. On the other hand, the $1,000,000 question is how you determine what you get. In other words, can you, by tweaking the interface, determine what you get? We kind of know. 
If you make a snake embryo, you don't get a human mind through that. There's some specificity, apparently, in what you make. But I don't think we understand that very well at all. This is where some of our work on the sorting algorithms comes in. We have four other stories like that cooking in my lab right now, which should be out this winter and spring, where it seems like our intuitions for what comes through when you make a particular interface are not very good. I think they're not very good because we assume front-end interfaces, thin clients. You're sitting at some terminal. If you don't know that there's a massive server somewhere with a big database, you just spend a lot of time studying this terminal and trying to come up with theories of what's going on. In the end, you're going to be missing a lot if you don't know that it's just a front end. That's what's happening. I think we're so focused on the thing you made, the machine, and the thing you forced it to do, that you assume that's the only thing it does. You're so focused on that that you're surprised and not ready to do the experiments to notice the other stuff that comes through. There is specificity. We don't understand that specificity very well. To some extent, every object, certainly every living object, is like a tree that falls in the rainforest: it becomes this enormous ecosystem. There are a billion things living in it at different levels, from the microbes up to eventually a bigger mammal or something in there; there's this pyramid of stuff. Good interfaces, especially living ones, are like that. They're chock-full of patterns at all different scales with different capabilities jamming in there.</p><p><strong>[12:42] Darren Iammarino:</strong> Sort of nested hierarchies.</p><p><strong>[12:44] Michael Levin:</strong> I think so. And it might even be, here's a conjecture: maybe, in an ecosystem, you're not going to have an apex predator without all the other stuff underneath. 
Maybe in order to have a large or impressive mind inhabiting one of these things, you need the lower levels; you need a whole ecosystem of little ones. In other words, if your structure is flat and you have a dumb substrate with no multi-scale hierarchy and you think a big intelligence is going to come live there, maybe that doesn't work. Maybe biology works so well because it preps the medium at all scales. I don't know why that would be, but it's a conjecture.</p><p><strong>[13:31] Darren Iammarino:</strong> So would there be a higher scale that somehow isn't perceived on our level that might be coming through?</p><p><strong>[13:40] Michael Levin:</strong> It seems wildly implausible to me that we are the top of what's possible. That just seems unlikely statistically. So if there were an appropriate interface, who knows? There's gotta be bigger ones. We can't be the biggest patterns in town. I can't imagine.</p><p><strong>[14:13] Darren Iammarino:</strong> Now, on this positive pressure issue: would I be correct in understanding that if we were to shift to physicalism and the reductio ad absurdum thought experiment of a Boltzmann brain, it's generated, let's say, out there through fluctuations, and therefore there must be a "download"? What then?</p><p><strong>[14:43] Michael Levin:</strong> A great question. I think the question really is, do you need a history with it? For example, the fact of having been an embryo allows it to come in and get a sense of what's going on; maybe you need that history. So if the Boltzmann brain is winked into existence quickly, I don't know. My guess is you can imagine an experiment: imagine putting together an adult human brain piece by piece. Does that person immediately wake up and go about their business? Or is there some period of adjustment? I think there's data on this. Think about people waking up from general anesthesia. There's your brain, all the chemistry is intact. 
What's happened is, nobody knows exactly, but my favorite theory is that what you've done is decoupled the gap junctions. So electrically, everything's decoupled. The gas is wearing off. For a while, you're totally loopy and not a fully functioning human; then eventually you are. Then some percentage of the people think they're pirates or gangsters for about an hour, and they say crazy ****. And then eventually they sort of tap into whatever they were before. I think that historicity takes a little bit of time to work out. My prediction would be that if you made a really good Boltzmann brain, you would have some period of a mind waking up from general anesthesia, and eventually it would be like, okay, this is it. Now we have an impedance match; now we can go. I don't know, but that would be my guess.</p><p><strong>[16:36] Darren Iammarino:</strong> That's very interesting. To play off of that, perhaps that could be part of the point, in the sense of your hallucination example: "I'm a pirate," I think that I am, for an hour. Perhaps novelty could be generated through this seeming delusion that you're suffering from.</p><p><strong>[17:03] Michael Levin:</strong> I always thought general anesthesia was the most amazing thing. Imagine we didn't already know that it works, and somebody said to me, here's what we're going to do: we're going to disconnect all the electrical connectivity of your brain, and then eventually we'll let it reconnect, and don't worry, everything is going to acquire the same state it had before. I'd say there is no way in hell that's ever going to work, never. And yet, most of the time it works. Not all the time, but most of the time it works. I feel that is telling us something. The fact is that you can scramble the substrate, not all the way, but people get ****** on the head and things go all right. Although there's at least one case. There was one guy. He got beaten up in front of a bar or something and hit his head. 
And then he acquired math skills. Most of the time it goes in the other direction, so I'm not saying this is a reliable way, but there are individual cases.</p><p><strong>[18:03] Darren Iammarino:</strong> I have not heard that. That would be a great way to get really lucky.</p><p><strong>[18:07] Michael Levin:</strong> I don't think you can count on that exactly. But I think this is all telling us something. General anesthesia, the fact that it works at all. Those cases Karina Kaufman and I just reviewed, where human patients have minimal brain tissue and normal IQ, and they often don't know about it until somebody takes an X-ray for whatever reason. All of those things are telling us that the mapping between the hardware and what's coming through is not as simple as anybody thinks.</p><p><strong>[18:39] Darren Iammarino:</strong> That's all very interesting. To bring in a Whiteheadian angle on the positive pressure: from Whitehead's viewpoint, as best as I understand it, and it's quite technical, you've got what he would define as creativity, understood almost as this glue, a metaphor for what connects the physical with the non-physical and brings them into a state of togetherness. In that sense, there wouldn't be, I don't think, the possibility of some higher agency pattern being able to not integrate with the physical in some manner; no matter how high up on this multi-scale hierarchy it is, it wouldn't seem to make a difference. You're compelled to be brought into this constantly, again and again, into a state of togetherness. That whole "the many become one, and are increased by one" thing that Whitehead's got near the beginning of "Process and Reality." That part was one thing that interested me in terms of the positive pressure and whether or not it actually must occur. But I guess we've gone into that a little bit.</p><p><strong>[20:10] Michael Levin:</strong> That's actually, that's very interesting. 
I have this weird idea that my postdoc and I are going to run a contest, and I feel like in the future this will be a contest that people can run: write a piece of software that only does the thing you think it does. That's incredibly hard, if not impossible. The contest we were going to do is people submit some kind of code and say what they think it does. Other people try to show that it also does other things. To the extent that you prevent them from doing so, that they can't do it, you get a high score. Based on what you just said, you can imagine that kind of thing with a physical embodiment: build either a biological or technological construct that nothing wants to come through. A challenge. I don't know if that's possible.</p><p><strong>[21:23] Darren Iammarino:</strong> Yeah, that's a challenge, all right, I think.</p><p><strong>[21:25] Michael Levin:</strong> Yeah.</p><p><strong>[21:26] Darren Iammarino:</strong> So that's interesting. Yeah.</p><p><strong>[21:29] Michael Levin:</strong> Right, you can imagine: can I make, in the case of biology, some kind of thing that is just so inhospitable to these patterns that as little as possible comes through? I doubt it's possible. I think if you look carefully at almost anything, you're going to find something in there.</p><p><strong>[21:50] Darren Iammarino:</strong> I think you're right. I think it would probably be impossible, but it would be quite a worthy line to go down, just to see how far you could shrink what's coming through, pattern-wise, down to the bare minimum. That may be a segue to the randomness component that I wanted to discuss. For me, the real action is at that intersection, that junction between the non-physical and physical where they're being brought together. And that's where consciousness, choice, or the agency question, which we can get into, comes into play. 
How does randomness fit in? I try to take this Whiteheadian approach, utilizing his ontological principle: effectively, no actual entity, then no reason. For him, even these forms, as with Plato's, can't be free-floating out there. It would be a problem to have them completely disconnected from any actual entity. On his take, if you've got some world of subatomic particles out there, they're still actual entities, and an actual entity can contain or hold a form. I was trying to place randomness in this simplicity, almost like the quantum realm of things, and trying to explain the production of novelty—how we're getting these new forms. Take a concept like "weapon" or "redness"—did they need to exist 13.8 billion years ago? I feel the answer is no. The mathematical ones, yes, I think, needed to be there, but I don't feel we need to be committed to saying that all these things were there way back when, at the beginning. Does universality entail eternality? No, not for all of them. Maybe for mathematical objects, yes, but not for all these other things. So then how exactly does it work? What's the general mechanism that's bringing these different novel patterns together, and what's the role of randomness in that? Do you have anything you could say or add about how randomness fits in?</p><p><strong>[24:44] Michael Levin:</strong> First, about the eternality thing: I agree with you. I think it would be really weird to say that 13 billion years ago this particular kind of car existed; that can't make sense. Maybe we can start to think of all of those things as compositional. No, you didn't have a Jeep, but what you might have had, and maybe this goes to the idea of archetypes, is some general tendencies. I don't know what to call them. 
I don't have the vocabulary for it, but in some combination over time, they would come and stick together based on the structure of their experience with the world. And eventually you've got a more complex form that is an intersection of other archetypes or tendencies. This is why I think they can't be unchanging; there's got to be some chemistry that goes on. Randomness is interesting. The only thing I can say about randomness, and we have some work coming on this soon, is that I'm very suspicious of it in the following way. People use it as a negative control. In other words, if you have some algorithm that does a nice job on a particular problem, what do you compare it to? You compare it to the dumb random: I take out the controller, I have my robot acting randomly. This is the baseline, and my controller is 10 times as good as that. It's a little like the way they do placebo controls in drug studies: you subtract the placebo efficacy from the drug efficacy. But to me, the far more interesting thing is to look at the absolute effects. Why did the placebo do anything at all? That's the interesting thing. With randomness you have the same thing, because people say the random performance is our new floor. But oftentimes that's not zero. You can get actionable policies out of it that do better than you would expect if it were truly dumb. So I'm suspicious about the way people treat randomness as something with nothing useful in it. It seems like that's not the case. I don't know, but this is something we're trying to quantify in our work currently. It's wild.</p><p><strong>[27:58] Darren Iammarino:</strong> That's great, because, as you said, it's often treated as a bad thing; it's vilified. On my view, it should almost be sanctified in some ways, and it has tremendous importance. There's much more to explore there, as best as I can tell. It's great that you're doing all this work. 
I don't have the experimental side of things, unfortunately; I can sit and speculate as a philosopher. I look forward to seeing what your team comes up with in that domain.</p><p><strong>[28:47] Michael Levin:</strong> One of my PhD students is doing a review of this right now. There are other examples in existing areas of computer science where people have shown this kind of stuff. What strikes me is: okay, you look at the random baseline, and that's not working very well; the controller clearly does better. But now let's think about how much effort it was to get that controller. You either had to design it, which means you as the engineer are expensive, because you had to evolve this giant structure; or you evolved it, which again means you had to look through all kinds of variants and do this whole process; or you had to train it, which means it had contact with the problem before, with many examples and some kind of training regimen. Those are typically where these controllers come from. That's a lot of work. The random thing takes a little bit of effort to generate random numbers, but not very much. Why does that do anything? If you just look at pure efficiency, the game is quite different, because if it does have any kind of utility on a particular problem, why? When did you pay for that? You're supposed to pay in computational effort. So from that perspective, I think randomness — and it's not just randomness — we're looking at weird sources of information that are not exactly random, but they come from a source that ostensibly has nothing to do with the problem that you're studying. If you find that, even though this data source is a mathematical object and has nothing to do with the problem you're studying, it seems to have interesting performance on that problem, the question is why. It's zero-shot transfer learning, where you can move it across domains, but there's no good reason to expect that it should work at all. 
Whatever randomness is, I don't think we understand what's going on here, for that reason.</p><p><strong>[30:49] Darren Iammarino:</strong> Where would these live? I guess it's almost patternless by definition, but where? We use it, it's a word, we speak about it. We've been talking about it for 10 minutes, but it's still unclear what it really is. Is this randomness something that should be understood in this more non-physical realm, just like these other patterns that are ingressing? Or is it native to whatever we're describing as the "physical"? That's one thing I'm curious about.</p><p><strong>[31:21] Michael Levin:</strong> What if the interesting thing that randomness has going for it is that you can't have too many expectations about it? In other words, if you had a pattern that was very specific, the prime number distribution or the squares, then any deviation from it would be immediately obvious. It would be a catastrophe: if you had your prime numbers or whatever, satisfying certain theorems, and then one of them was just off, that would be catastrophic. But randomness, a bit here, a bit there, still looks random. It can absorb a lot; maybe it has more plasticity. Maybe it's okay. It doesn't crash the world as much as if you tried to do this with a real pattern. Therefore, maybe it's more amenable to the different uses you could put it to, because if it's a little plastic, nobody really has any expectation beyond statistical ones. If suddenly every coin flip, every bit, is a 1, that's no longer random.</p><p><strong>[32:47] Darren Iammarino:</strong> Isn't that the point: what we need is these more structured patterns interacting with this plasticity in order to somehow create novel patterns? Somehow, in the interaction between the quite structured and the barely structured, or not structured at all, there's magic in that intersection. 
That seems to be where you're generating novel patterns or universals, through this exchange that's occurring there. And that's why I try to define what's going on at the base level of the physical realm as being informed by randomness, whereas the non-physical is this more structured space. The novel universals are brought back up into this more structured non-physical space and can then be used laterally, from that point on, for multiple further ingressions. Once redness is there the first time, it could be there in any possible world from that point on.</p><p><strong>[34:29] Michael Levin:</strong> Maybe it's like the analogy of temperature and liquids and solids. Maybe you have patterns that are really fixed, and if you're using this pattern, you have very few degrees of freedom, because you have to match this pattern. But to the extent that you are mixing in randomness, maybe that's a universal solvent or a liquid. You're mixing it in, and now you've got more degrees of freedom in what can match that pattern. You can't match the even numbers with anything except the even numbers, but if you've got a random component in there, that's where you have some degrees of freedom. Lots of things match that.</p><p><strong>[35:16] Darren Iammarino:</strong> I like that universal solvent idea, yeah.</p><p><strong>[35:19] Michael Levin:</strong> Maybe something like that, but then we also have to deal with this: to my understanding, there are two kinds of randomness. There's the one in the classical world, which is at best a deterministic chaos. It's randomness given our computational limits, but it's really not random. In math, you can generate it on a computer with a deterministic function, and the output looks random. There's that, and I don't know what its role is in this business we're talking about. Then there's the real quantum randomness. What's on the other side of that? What's on the Platonic side of that? 
I don't know.</p><p><strong>[36:02] Darren Iammarino:</strong> That's where I'm most interested: what's on, as you said, the Platonic side of that. What if the answer is nothing?</p><p><strong>[36:18] Michael Levin:</strong> That could be right. And...</p><p><strong>[36:19] Darren Iammarino:</strong> It's on the physical side. What if that is, in large part, what defines the physical side of things? It still is a form, but it's the form of formlessness. That's a paradox, but in the quantum realm, paradox seems fine.</p><p><strong>[36:44] Michael Levin:</strong> I'm not sufficiently versed in quantum physics, but what's interesting to me is that the basic magic of having two realms interact in this way doesn't require quantum anything. In other words, in Newton's boring, deterministic universe, you already have the fact that this immaterial E thing somehow affects physics, and nobody was worried about that interactionism until they started talking about brains. Up until then, it was fine.</p><p><strong>[37:22] Darren Iammarino:</strong> I completely agree with you there.</p><p><strong>[37:25] Michael Levin:</strong> We can do that without recourse to any quantum magic. But then what do you do with real acausal quantum randomness, and what's the form on the other side? I have no idea. I think that's an interesting question. It's either no form, or it is something that has a lot more degrees of freedom.</p><p><strong>[37:51] Darren Iammarino:</strong> So whatever it is, it seems quite important to me; as you said, a universal solvent or however you want to look at it. It seems a key ingredient.</p><p><strong>[38:00] Michael Levin:</strong> Yeah.</p><p><strong>[38:01] Darren Iammarino:</strong> In the process, the chemistry, of the creation of new patterns.</p><p><strong>[38:10] Michael Levin:</strong> It could also be this thing, like the synchronicity aspect, where we're looking at the wrong level. 
If you take a cross-section through the molecular, through the atomic level, then everything looks random and there's no relationship. But if you take a few steps up above that, you say: I was looking at the temperature fluctuations of the heat exhaust on this thing; of course there's a pattern, I just wasn't looking at the right scale. So maybe when we say it's randomness, what we're looking at is like taking a slice through some complex shape: you see something, but you've missed the whole point. There's always a least informative angle you could take through something, where the slice doesn't look like anything. Maybe that's the problem. Maybe some of these forms are obvious at their lowest level, but some forms are not obvious when you're looking at that level. Maybe that's what we're seeing.</p><p><strong>[39:16] Darren Iammarino:</strong> That's quite possible too. That's an interesting way to look at it. If we want to shift gears for a second: you said with a snake embryo, a human mind's not coming through it. But how much can we—maybe with chimeric-type things or cyborgs—achieve in the coming years in terms of enhancement through all of this? Where do you stand on that?</p><p><strong>[39:51] Michael Levin:</strong> I think that's a really important area. It's obvious to say that you're not getting a human through a snake interface, but in that paper where we review these cases, one guy had less than 1/3 of the cortex volume of a chimpanzee, and he had a full-on human, high-IQ personality coming through that. I'm sure there are limits to that sort of thing, but we have to have a lot of humility about being able to say what's what. If you show that to a neuroscientist and say, "What do you expect the functionality of this to be?" they'd say "profound disablement," and usually that's what you get, but not always. Even one case like that is already, OK, something's wrong. 
With cyborgs and everything, I completely agree. We're going to have people keep arguing about AI and the human mind, and I'm like, which human? What human? We've been a lot of different kinds of humans, and there are going to be some really weird kinds of humans coming. Take the notion of diminished capacity in court, which some people have, whether it's from a brain tumor or a ******* defense or something. We're going to have expanded capacity. Somebody's going to show up in court, and we're going to have to decide whether we're willing to say, "OK, you have an extra, a whole third hemisphere, and you should have known better." We wouldn't have, but you should have. It's the opposite of diminished capacity. How did you not see this coming? You should have seen it.</p><p><strong>[41:38] Darren Iammarino:</strong> That's interesting. I even think about it in terms of someone who might have DID, where there's no ability to coordinate all of the selves. But if you could, it seems that in and of itself would be an enhanced capacity: if you could shift between different skill sets and behaviors that are still all within one physical body. It raises the question: if that is already happening, what could possibly be done to achieve those abilities in a healthier, structured form?</p><p><strong>[42:27] Michael Levin:</strong> I think people are working on it. Compared to a standard, natural human of some years ago, between our toothbrushing and our glasses and our education and our therapy and anger management and weight and working on whatever the heck we're doing, we're at a superhuman level. Never mind that you have your phone with you and all this stuff. With all the education and the culture and the changing microbiome, that's just the tip of the iceberg. So we're going to have, I'm sure of it, people that are enhanced in all sorts of different ways, modified both technologically and biologically, connected to each other or to other things. Say you have an AI. 
I was talking to Thomas Pollack, who's a neuropsychiatrist, and I asked him, how many of your patients hear voices? He said, plenty, plenty of them. I said, what are you going to do when everybody hears a voice, because there is a voice, because you've got a little AI thing in your ear, which we already have, and you're going about your day? So what do you think about this? He says, you remember what happened last time? You're going to have a voice in your head. I had a weird conjecture that that voice is going to push out the other voices. My suspicion is that for people who do have the other voices, there's going to be an interesting phenomenon where that thing's not going to want to put up with all these other disruptive ones. There's going to be some kind of interaction where it's going to have an impact on those.</p><p><strong>[44:20] Darren Iammarino:</strong> Are you saying this would be something that would be able to overpower the voices for schizophrenics?</p><p><strong>[44:25] Michael Levin:</strong> For example. I'm not a clinician, but my suspicion is that a lot of this stuff is dynamic. If you have a proper AI that's trying to integrate with you and be part of your life, and you have these other influences, I think it's going to treat these just like all the other roadblocks between you and success, or whatever it's trying to do. It's going to make some changes. I think all of us, at some point, are going to have various voices, until eventually it's so integrated that there is no extra voice. It's just part of you: "I had this great idea this morning." Did you? Well, kind of, you did. Which is already what happens.</p><p><strong>[45:08] Darren Iammarino:</strong> That's interesting, perhaps dystopian. It could be great, unless it's hacked in some way.</p><p><strong>[45:16] Michael Levin:</strong> If it's hacked, anything is terrible. But I have two examples of that. One example is something I was just thinking about this morning, and I'm going to write something about it. 
I had this picture of an AI asking the question of Ramanujan, if I've got this right: he thought these theorems were whispered to him by a goddess. So at some point the AIs are going to ask, "What do I need to do to be whispered to by a goddess?" Or is that already happening? Because some people sit there and laboriously crank through stuff, and some people say, "I had this novel or this symphony or this incredible idea. It comes from somewhere." If you also have an implant that makes it more likely for that to happen, does somebody care? I don't know.</p><p><strong>[46:14] Darren Iammarino:</strong> Yeah, true.</p><p><strong>[46:16] Michael Levin:</strong> I heard somebody, I don't remember who, talking about AI companions for people with progressive degenerative brain disease. The idea is that at first it's 99% you and 1% this thing, which is your calendar reminding you of things, because you can't remember your calendar. Then over time it's more and more, and you're less and less, but the collective still keeps going: who is that? That's your cousin. You're able to function and everything's cool, but it's shifting.</p><p><strong>[46:57] Darren Iammarino:</strong> Yeah, that's interesting. Right.</p><p><strong>[46:58] Michael Levin:</strong> And so eventually, what do you have when the biology is just not able to keep up, but the other part is fine? Given the fact that we are all different modules anyway, I don't say, "That came from my right hemisphere. I don't like it. That's not me." We don't do that. We just say: whatever's in there, that's more or less me.</p><p><strong>[47:24] Darren Iammarino:</strong> That's interesting. It seems like a paradox-of-the-heap or Ship of Theseus scenario. When is this no longer grandma or grandpa?</p><p><strong>[47:36] Michael Levin:</strong> My answer to the paradox of the heap is the following. I'm just an engineer, so my question is always: let's not worry about whether it's a heap. 
Just tell me what I need to bring when we need to move it. Am I bringing tweezers? Am I bringing a spoon, a bulldozer, a shovel? Just tell me which of those things, and then you can call it whatever you like. What I need to know is how we are going to relate to it.</p><p><strong>[48:07] Darren Iammarino:</strong> Interesting.</p><p><strong>[48:08] Michael Levin:</strong> I think this is true here too. Isn't that really pragmatic? For example, here's a funny thing. My wife said to me one time, do you have all of our important days? Do you have those on the calendar? Well, yeah, because I can't remember a damn thing, and if I didn't have it in the calendar, I'd miss all the stuff. But now, what do you think about that? Are you inattentive and unromantic because you rely on this thing to keep track of it? Or are you more attentive and romantic because you've used the tool to make sure that it happens? It seems to me that if you're interacting with grandma, nobody says, was that grandma's left hemisphere or was that some other thing? Nobody does that. You just say, there's grandma. If grandma herself can't remember certain things, but she's got some prosthetic that helps her keep up with you or whatever, I think that's going to be normalized very quickly.</p><p><strong>[49:13] Darren Iammarino:</strong> I think you're right. I think the calendar has been normalized for a long time. The calendar's taken 1% or whatever away from you. In your case, I'm sure it's enhancing significantly; it's giving you all this free time to do other things, right? But it could go both ways. If you drop below 50%, are you still you, type of thing? 
Who's to say?</p><p><strong>[49:41] Michael Levin:</strong> If you do have some kind of prosthetic that's making you more functional, is that because you're being replaced or is that because you've enhanced your interface so more of what you really could be is now coming through?</p><p><strong>[49:58] Darren Iammarino:</strong> Yes.</p><p><strong>[49:59] Michael Levin:</strong> If you want to go for a walk with grandma and she wants to bring her walker, nobody's like, "that's not your thing." Nobody is doing that. It's: let's pull through as much as you can. Here's a motorized one. Let's go. So I think it's going to be like that.</p><p><strong>[50:18] Darren Iammarino:</strong> I agree with you. It's all interesting stuff. One other thing I wanted your take on, in terms of something more out there, unconventional. I know you've talked to some degree about unconventional terrestrial intelligence, right? But what about the possibility of something more unconventional and extraterrestrial, in terms of dark matter — atomic dark matter — being able to form something: could these patterns in the Platonic space that we've been talking about ingress into something like that? Do you think there's a possibility of that occurring?</p><p><strong>[51:02] Michael Levin:</strong> I don't know the first thing about dark matter.</p><p><strong>[51:05] Darren Iammarino:</strong> Who does, really?</p><p><strong>[51:07] Michael Levin:</strong> People talk about plasma, this and that. All I know is I think it would be insane of us, at this stage of our minimal knowledge, to try and say what can't happen.</p><p><strong>[51:22] Darren Iammarino:</strong> Yeah, agreed.</p><p><strong>[51:23] Michael Levin:</strong> I've had discussions with Buddhists and with people from various other philosophies and ancient religions. They're almost 100% certain that AIs can't have it. You're into reincarnation and everything, right? These things take on bodies. But you're pretty sure it can't go with this thing. Based on what?
Who are you to tell the ineffable what body it can go through? I really don't think we have any clue. If we can be surprised about bubble sort, I think we really need to be very humble about saying anything about these other kinds of embodiments.</p><p><strong>[52:12] Darren Iammarino:</strong> I totally agree. If there could be a divine incarnation, why couldn't there be a divine in-cybernation?</p><p><strong>[52:19] Michael Levin:</strong> Exactly. I mean, I don't—</p><p><strong>[52:21] Darren Iammarino:</strong> —see how it's really that much more bizarre?</p><p><strong>[52:24] Michael Levin:</strong> That's it. I really don't know why people find that so implausible. It rests on some implicit assumption that we understand the mapping between what you've built and what it's capable of. And I really think we are not very good at that at all.</p><p><strong>[52:39] Darren Iammarino:</strong> It's interesting. On one hand, people would say that's impossible. It just couldn't happen. But on the flip side, AI is equated to God. There are these extreme takes.</p><p><strong>[52:57] Michael Levin:</strong> I think everybody assumes that somewhere there's a story — I'm sure the scientists have a story worked out for why this thing can do it. But there really is not a great story like that. I like Terry Bisson's "They're Made Out of Meat." It's a one-and-a-half-page sci-fi story; it's very short, but here's the bottom line. Some aliens are in orbit watching the humans, and one says, "You're not going to believe what these guys are made of. They're made of meat." They say, "Get the hell out of here." "What do you mean, made of meat? They can't be — they seem to be doing things and they're agential." "Well, they're made of that. That's impossible, right?" If you didn't know and you just got to look under the hood, how would you know that it's this substrate versus that substrate?
I don't think you would know.</p><p><strong>[53:51] Darren Iammarino:</strong> Yeah, I totally agree.</p><p><strong>[53:53] Michael Levin:</strong> We need to do experiments. I don't know how you do experiments at these cosmological scales, but weird materials — we do this in our lab. We shouldn't make assumptions; we should do experiments.</p><p><strong>[54:11] Darren Iammarino:</strong> I totally agree. I'd love to hear more from you about the locus of agency issue — we didn't get into that too much. I know Matt wanted to talk about that. Perhaps we could discuss that sometime in the future.</p><p><strong>[54:24] Michael Levin:</strong> I'm available. Let's get together again.</p><p><strong>[54:27] Darren Iammarino:</strong> And also the issue we were discussing of what's on the Platonic side, the other side of the randomness. I'd love to hear his take on that, or to try to dive into that a little deeper if we can. Maybe we can't. Might just be where it stops for now.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Conversation with Darren Iammarino #1</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>A 54-minute discussion with philosopher Darren Iammarino on open problems in the Platonic Space model, exploring patterns, minds, causation, randomness, quantum scale, and the nature of cyborg and unconventional minds and identities.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/hYzSZvsg0Vs" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/be4b112d/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~54 minute conversation with Darren Iammarino (<a href="https://scholar.google.com/citations?hl=en&amp;user=YNrxRaYAAAAJ&amp;view_op=list_works&amp;ref=thoughtforms-life.aipodcast.ing">https://scholar.google.com/citations?hl=en&amp;user=YNrxRaYAAAAJ&amp;view_op=list_works</a>) about open problems with respect to the Platonic Space model.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Patterns, minds, and causation</p><p>(09:18) Interfaces and positive pressure</p><p>(21:50) Randomness and new universals</p><p>(31:21) Quantum randomness and scale</p><p>(39:16) Cyborg minds and identity</p><p>(50:18) Unconventional minds and substrates</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Darren Iammarino:</strong> From what I can tell, we definitely seem to be on the same page about a lot of things, which is great. Definitely in terms of the lower-agency mathematical objects and those being a space that we can discover, explore, and map effectively. I'm more interested in your take on these higher-agency minds, patterns that you believe are in this Platonic or latent space, and how you think those come to interact with the physical. We could start there and dive in.</p><p><strong>[00:48] Michael Levin:</strong> Sure. Okay. First, the general idea, and then I'll give a couple examples. My general idea is simply that the standard assumption seems to be that these patterns are things that mathematicians study. And that's it. That basically other sciences are not relevant here. Math is the space that's involved. Just knowing nothing else, I would question that assumption. How do we know that that's the case? Might there not be other patterns that are recognizable to disciplines other than mathematics? On basic principles, I would wonder whether this is a good assumption. I'll give an example.
A couple of years ago, we published a couple of papers modeling gene regulatory networks. These are small networks of molecules that turn each other on and off or up and down, and they're important for health and disease and various other things in biology. What we showed is that even small networks, 6 to 20 nodes — this is not some giant trillion-parameter thing; this is a small, biologically realistic network — are capable of things like habituation, sensitization, Pavlovian conditioning. They can count to small numbers, meaning they have a different output: one, two, three, four, boom, then something happens. This has biomedical implications, but the deeper thing, I think, is this. These are not fundamentally anything about biology. These are, at bottom, a set of ordinary differential equations with just a few parameters. And that mathematical object seems to be able to do what behavioral scientists would call associative conditioning. My question is — I started saying "low-agency mathematical objects," but a couple of people on our forum took me to task and said, are you sure they're low agency? How do you actually know? Have you tested them? You're right, touché, I don't know that, because you have to test them in their own problem space. It turns out that what I see is a kind of a spectrum where you've got mathematical truths that are stable, like a rock: the base of the natural logarithm, e, is 2.718, whatever it is; it just sits there and it's not going to change, and that's the end of that. Then you've got more interesting things like the liar paradox. The liar paradox, if you give it a time dimension — and this is Patrick Grim's work from the 90s — is not a paradox, it's an oscillator. It just goes true, false, true, false, true, false. You've got your rock that just kind of sits there doing nothing. You've got this kind of object of logic that basically is a little oscillator, a little buzzer; it just goes up and down.
Then you've got a few coupled ODEs, and they apparently can do habituation, sensitization, associative conditioning. Now I'm seeing a spectrum here. I'm seeing a variety of patterns. If you had more complex patterns, who knows what they could be capable of. But already I'm happy with associative conditioning, because I think at this point, you're on your way to something that could be called the province of cognitive science. That's how I get there.</p><p><strong>[04:02] Darren Iammarino:</strong> Based on that, do you think that any of these patterns are eternal and unchanging?</p><p><strong>[04:15] Michael Levin:</strong> I don't like to commit strongly to things I don't have data for. I don't see how e is going to change. I don't like the idea that these patterns are eternal and unchanging. I think I see dynamic activity within them to allow them to grow and learn from experience, maybe some lateral interaction between them in that space.</p><p><strong>[04:47] Darren Iammarino:</strong> You mentioned that lateral interaction. That's an interesting area to explore. I think we have an example, right?</p><p><strong>[05:00] Michael Levin:</strong> I think the example is — and Darwin called it out; he said mathematicians have an extra sense. What he was getting at was that if you are familiar with that area of that plane, if you're a mathematician, you are a mind, possibly you are already one of these patterns embodied through a physical body. But when you are sitting there in your armchair, you're not doing physical experiments, your eyes are closed, you're not doing anything with the physical world, and you're saying, we've got this octonion, and maybe that's an instance of two patterns interacting laterally with each other. We only find out about it because one of them's embodied. So then he goes and publishes a paper. We all learn about it.
But the original interaction, when you're pondering — and of course it has to be that the quaternions couldn't possibly satisfy this rule of commutativity or whatever; I forget what it really is — maybe that's a lateral interaction?</p><p><strong>[05:58] Darren Iammarino:</strong> That's a good point. So in terms of these ingressing forms intersecting with whatever the physical realm actually is, to me that's the interesting area of exchange where novelty is being produced. I know you're aware that I'm into the discussion of randomness and how all this fits in. And what would you say, in terms of these interfaces, what really is the physical? We talked about the non-physical for a moment there, but how can we differentiate these two arenas?</p><p><strong>[06:52] Michael Levin:</strong> Let's talk about the interaction first, because this is where people have a lot of problems with this. They hate the unchanging thing, which we're not doing. They don't like two realms. I'm not sure how we really squeeze these things down into one realm; I don't see how you could do it. The other thing people really don't like is interactionism. How can you have these non-physical things interacting, having causal effects in the physical world? The way I see it, there are two issues. One is that the kind of causation that they're talking about was killed off by physics almost 100 years ago — this billiard-ball thing where both things have to be physical and you have an exchange of energy. I don't think that type of causation survives anymore. A lot of philosophy has been done on different ways to understand causation. Here's how I understand it. To me, A causes B if and only if A is the best, most insightful, most actionable explanation for why you got B as opposed to C or D or E or anything else.
In other words, what I see is that it serves as an explanation that is parsimonious and fruitful going into the future — not just looking backwards and telling the story about what happened, but something that helps you make the next cool thing happen. Some sort of insightful explanation into the specificity of B: it was B and not all these other things that could have happened. If that's the case, I don't think we have a problem at all. In fact, that kind of interactionism has been here from day one of math, because when you find out that this simple machine does whatever because e is what it is, or, more modern, the fermions do this or that because the symmetry group is whatever it is — there you go, that's it. Your explanation for why B is A, and A is not a physical fact. That's how things are. I'm okay with that. It only breaks down if you expect both sides of it to be physical, which is begging the question.</p><p><strong>[09:18] Darren Iammarino:</strong> I'm with you on the problems with physicalism and perhaps even the necessity of dualism or some kind of pluralism; I've always been more in that camp. Do these higher agency patterns—must they always be interacting with whatever the physical is? Or is there a possibility that one or some of them could choose not to? Is that something feasible?</p><p><strong>[09:56] Michael Levin:</strong> I think that's a really interesting point. On the one hand, I think that we could say that whatever that space of forms is, they're under positive pressure. It doesn't take much to get them to ingress. You build something even very minimal and there they are. You're immediately inheriting all this stuff. Something wants to come through, so to speak, all the time. That's some kind of positive pressure to get through these interfaces. On the other hand, the million-dollar question is how do you determine what you get? In other words, can you, by tweaking the interface, determine what you get? We kind of know.
If you make a snake embryo, you don't get a human mind through that. There's some specificity, apparently, to what you make. But I don't think we understand that very well at all. This is where some of our work on the sorting algorithms comes in. We have a bunch more — four other stories like that cooking in my lab right now, which should be out this winter and spring — where it seems like our intuitions for what comes through when you make a particular interface are not very good. I think they're not very good because we assume front-end interfaces, thin clients. You're sitting at some terminal. If you don't know that there's a massive server somewhere with a big database, you just spend a lot of time studying this terminal and trying to come up with theories of what's going on. In the end, you're going to be missing a lot if you don't know that it's just a front end. That's what's happening. I think we're so focused on the thing you made, the machine, and the thing you forced it to do, that you assume that's the only thing it does. You're so focused on that that you're surprised by, and not ready to do the experiments to notice, the other stuff that comes through. There is specificity. We don't understand that specificity very well. To some extent, every object, certainly every living object, is like a tree fallen in the rainforest: it becomes this enormous ecosystem. There are a billion things living in it at different levels, from the microbes up to eventually a bigger mammal or something in there — there's this pyramid of stuff. Good interfaces, especially living ones, are like that. They're chock full of patterns at all different scales with different capabilities jamming in there.</p><p><strong>[12:42] Darren Iammarino:</strong> Sort of nested hierarchies.</p><p><strong>[12:44] Michael Levin:</strong> I think so. And it might even be — here's a conjecture. Maybe, in an ecosystem, you're not going to have an apex predator without all the other stuff underneath.
Maybe in order to have a large or impressive mind inhabiting one of these things, you need the lower levels; you need a whole ecosystem of little ones. In other words, if your structure is flat and you have a dumb substrate with no multi-scale hierarchy and you think a big intelligence is going to come live there, maybe that doesn't work. Maybe biology works so well because it preps the medium at all scales. I don't know why that would be, but it's a conjecture.</p><p><strong>[13:31] Darren Iammarino:</strong> So would there be a higher scale that somehow isn't perceived on our level that might be coming through?</p><p><strong>[13:40] Michael Levin:</strong> It seems wildly implausible to me that we are the top of what's possible. That just seems unlikely statistically. And so, if there were an appropriate interface — who knows? There's gotta be bigger. We can't be the biggest patterns in town. I can't imagine.</p><p><strong>[14:13] Darren Iammarino:</strong> Now, on this being-under-positive-pressure issue, would I be correct in understanding that if we shift to physicalism and this reductio ad absurdum thought experiment of a Boltzmann brain — it's generated, let's say, out there through fluctuations — there must therefore be a "download"? What then?</p><p><strong>[14:43] Michael Levin:</strong> A great question. I think the question really is, do you need a history with it? For example, the fact of having been an embryo allows it to come in and get a sense of what's going on; maybe you need that history. So if the Boltzmann brain is winked into existence quickly, I don't know. My guess is you can imagine an experiment. Imagine putting together an adult human brain piece by piece. Does that person immediately wake up and go about their business? Or is there some period of— I think there's data on this. Think about people waking up from general anesthesia. There's your brain, all the chemistry is intact.
What's happened is — nobody knows exactly, but my favorite theory is that what you've done is decoupled the gap junctions. So electrically, everything's decoupled. The gas is wearing off. For a while, you're totally loopy and not a fully functioning human; then eventually you are. Then some percentage of the people think they're pirates or gangsters for about an hour; they say crazy ****. And then eventually they sort of tap into whatever they were before. I think that historicity takes a little bit of time for things to work out. My prediction would be that if you made a really good Boltzmann brain, you would have some period of a mind waking up from general anesthesia and eventually being like, okay, this is that. Now we have an impedance match; now we can go. I don't know, but that would be my guess.</p><p><strong>[16:36] Darren Iammarino:</strong> That's very interesting. To play off of that, perhaps that could be part of the point, in the sense of your hallucination example of "I'm a pirate" — thinking that I am for an hour. Perhaps novelty could be generated through this seeming delusion that you're suffering from.</p><p><strong>[17:03] Michael Levin:</strong> In general, I always thought general anesthesia was the most amazing thing. If we didn't already know that it works, and somebody said to me, here's what we're going to do: we're going to disconnect all the electrical connectivity of your brain, and then eventually we'll let it reconnect, and don't worry, everything is going to acquire the same state it had before — I'd say there is no way in hell that's ever going to work, never. And yet, most of the time it works. Not all the time, but most of the time it works. I feel that is telling us something. The fact is that you can scramble the substrate — not all the way; people get ****** on the head and things go all right. Although there's at least one case. There was one guy. He got beaten up in front of a bar or something and hit his head.
And then he acquired math skills. Most of the time it goes in the other direction. So I'm not saying this is a reliable way, but there are individual cases.</p><p><strong>[18:03] Darren Iammarino:</strong> I have not heard that. Would be a great thing to get really lucky.</p><p><strong>[18:07] Michael Levin:</strong> I don't think you can count on that exactly. But I think this is all telling us something. General anesthesia, the fact that it works at all. Those cases Karina Kaufman and I just reviewed: a bunch of cases where human patients have minimal brain tissue and normal IQ, and they often don't know about it until somebody takes an x-ray for whatever reason. All of those things are telling us that the mapping between the hardware and what's coming through is not as simple as anybody thinks.</p><p><strong>[18:39] Darren Iammarino:</strong> That's all very interesting. To bring in a Whiteheadian angle on the positive pressure: from Whitehead's viewpoint, as best as I understand it (it's quite technical), you've got what he would define as creativity, understood as almost this glue, metaphorically, that connects the physical with the non-physical, bringing them into a state of togetherness. In that sense, there wouldn't be, I don't think, the possibility of some higher agency pattern being able to not somehow integrate with the physical in some manner; no matter how high up on this multi-scale hierarchy, it wouldn't seem to make a difference. You're compelled to be brought into this constantly, again and again, into a state of togetherness. That whole "the many become one, and are increased by one" thing that Whitehead's got near the beginning of "Process and Reality." That part was one thing that interested me in terms of the positive pressure and whether or not it actually must occur. But I guess we've gone into that a little bit.</p><p><strong>[20:10] Michael Levin:</strong> That's actually, that's very interesting.
I have this weird idea that my postdoc and I are going to run this contest, and I feel like in the future this will be a contest that people can run: write a piece of software that only does the thing you think it does. That's incredibly hard, if not impossible. The contest we were going to do is: people submit some kind of code. They say what they think it does. Other people try to show that it also does other things. To the extent that you prevent them from doing so — they can't do it — you get a high score. Based on what you just said, you can imagine that kind of thing with a physical embodiment: build either a biological or technological construct that nothing wants to come through. A challenge. I don't know if that's possible.</p><p><strong>[21:23] Darren Iammarino:</strong> Yeah, that's a challenge, all right, I think.</p><p><strong>[21:25] Michael Levin:</strong> Yeah.</p><p><strong>[21:26] Darren Iammarino:</strong> So that's interesting. Yeah.</p><p><strong>[21:29] Michael Levin:</strong> Right, you can imagine: can I make, in the case of biology, some kind of thing that is just so inhospitable to these patterns that as little as possible comes through? I doubt it's possible. I think if you look carefully at almost anything, you're going to find something in there.</p><p><strong>[21:50] Darren Iammarino:</strong> I think you're right. I think it would probably be impossible, but it would be quite a worthy line to go down just to see how much you could even shrink that down to the bare minimum of what's coming through pattern-wise. That may be a segue to the randomness component that I wanted to discuss. For me, the real action is at that intersection, that junction between the non-physical and physical when they're being brought together. And that's where consciousness, choice, or the agency question, which we can get into, comes into play.
How randomness fits in: I try to take this Whiteheadian approach, utilizing his ontological principle — effectively, "no actual entity, then no reason." For him, as with Plato, these forms can't be free-floating out there. It would be a challenge to have them be disconnected completely from any actual entity. On his take, if you've got some world of subatomic particles out there, they're still actual entities. And an actual entity can contain or hold a form. I was trying to place randomness at this simple, almost quantum realm of things, and trying to explain the production of novelty—how we're getting these new forms. Take concepts like "weapon" or "redness"—did they need to exist 13.8 billion years ago? I feel the answer is no. The mathematical ones, yes, I think, needed to be there, but I don't feel we need to be committed to saying that all these things were there way back when, at the beginning. And these other forms that aren't necessarily mathematical—does universality entail eternality? No, not for all of them. Maybe for mathematical objects, yes, but not for all these other things. So then how exactly does it work? What's the general mechanism that's bringing these different novel patterns together, and what's the role of randomness in that? Do you have anything you could say or add to how randomness fits in?</p><p><strong>[24:44] Michael Levin:</strong> First, about the eternality thing. I agree with you. I think it would be really weird to say that 13 billion years ago, this particular kind of car existed; that can't make sense. Maybe we can start to think of all of those things as compositional. No, you didn't have a Jeep, but what you might have had — and this maybe goes to this idea of archetypes — you might have had some general tendencies. I don't know what to call them.
I don't have the vocabulary for it, but in some combination over time, they would come and stick together based on the structure of their experience with the world. And eventually, you've got a more complex form that is an intersection of other archetypes or tendencies. This is why I think they can't be unchanging. There's got to be some chemistry that goes on. Randomness is interesting. The only thing I can say about randomness — and we have some work coming on this soon — is that I'm very suspicious of randomness in the following way. People use it as a negative control. In other words, if you have some algorithm that does a nice job on this particular problem, what do you compare it to? Well, you compare it to the dumb random: I take out the controller, I have my robot acting randomly. This is the baseline, and my controller is 10 times as good as that. It's a little like the way they do placebo controls in drug studies: you subtract the placebo efficacy from the drug efficacy. But to me, the far more interesting thing is: let's look at the absolute effects. Why did the placebo do anything at all? That's the interesting thing. With randomness you have the same thing, because people say the random performance is our new floor. But oftentimes that's not zero. You can get actionable policies out of it that do better than you would expect if it was truly dumb. So I'm suspicious about the way people treat randomness as something that has nothing useful in it. It seems like that's not the case. I don't know, but this is something we're trying to quantify in our work currently. It's wild.</p><p><strong>[27:58] Darren Iammarino:</strong> That's great, because, as you said, it's often treated as a bad thing. It's vilified in these situations. On my view, it should almost be sanctified in some ways; it has tremendous importance. There's much more to explore there, as best as I can tell. It's great that you're doing all this work.
I don't have the experimental side of things, unfortunately. I can sit and speculate as a philosopher. I look forward to seeing what your team comes up with in that domain.</p><p><strong>[28:47] Michael Levin:</strong> One of my PhD students is doing a review of this right now. There are other examples in existing areas of computer science where people have shown this kind of stuff. What strikes me is: okay, you look at the random ones — that's not working very well. The controller clearly does better. But now let's think: how much effort was it to get that controller? You either had to design it, which means you as the engineer are expensive because you have to develop this giant structure; or you evolved it, which again means you had to look through all kinds of variants and do this whole process; or you had to train it, which means it's had contact with the problem before, and so it had many examples and then some kind of training regimen. Those are typically where these controllers come from. That's a lot of work. The random thing takes a little bit of effort to generate random numbers, but not very much. Why does that do anything? If you just look at pure efficiency, the game is quite different. Because if you do have any kind of utility on a particular problem, why? When did you pay for that? You're supposed to pay for computational effort. So from that perspective, I think randomness — and it's not just randomness; we're looking at weird sources of information that are not exactly random, but they come from a source that ostensibly has nothing to do with the problem that you're studying. If you find out that, even though this data source is a mathematical object and has nothing to do with the problem you're studying, it seems to have interesting performance on that problem, the question is why? It's zero-shot transfer learning where you can move it across domains, but there's no good reason to expect that it should work at all.
Whatever randomness is, I don't think we understand what's going on here for that reason.</p><p><strong>[30:49] Darren Iammarino:</strong> Where would these be? I guess it's almost patternless by definition, but where? We use it, it's a word, we speak about it. We've been talking about it for 10 minutes, but it's still unclear as to what it really is. Is this randomness something that should be understood in this more non-physical realm, just like these other patterns that are ingressing? Or is it native to whatever we're describing as the "physical"? That's one thing. I'm curious.</p><p><strong>[31:21] Michael Levin:</strong> What if the interesting thing that randomness has going for it is that you can't have too many expectations about it? In other words, if you had a pattern that was very specific, prime number distribution or squares, then any deviation from that would be immediately obvious. It would be a catastrophe. If you had your prime numbers, or whatever, that satisfy certain theorems, and then one of them was just off, that would be a catastrophic thing. But randomness, a bit here, a bit there, it still looks random. It can absorb a lot; maybe it has more plasticity. Maybe it's okay. It doesn't crash the world as much as if you tried to do this with a real pattern. Therefore, maybe it's more amenable to different uses you could put it to, because if it's a little plastic, nobody really has any expectation. There are statistical things: if suddenly every coin flip, every bit, is a 1, that's no longer a random thing.</p><p><strong>[32:47] Darren Iammarino:</strong> Isn't that the point, that what we need is these more structured patterns interacting with this plasticity, in order to somehow create these more novel patterns? Somehow the interaction between the quite structured and this barely structured or not structured at all, there's magic in that intersection. 
That seems to be where you're generating novel patterns or universals through this exchange that's occurring there. And that's why I try to define more of what's going on at base level in the physical realm as being informed by randomness. Whereas this more non-physical is this more structured space, and the novel universals are brought back up into this more structured non-physical space and then can be used laterally from that point on for multiple further ingressions. Once redness is there the first time, it could be there in any possible world from that point on.</p><p><strong>[34:29] Michael Levin:</strong> Maybe it's the analogy of temperature and liquids and solids. Maybe you have patterns that are really fixed. And if you're using this pattern, you have very few degrees of freedom because you have to match this pattern. But to the extent that you are mixing in randomness, maybe that's a universal solvent or a liquid. You're mixing it in and now you've got more degrees of freedom because you can match that pattern. You can't match the even numbers with anything except for the even numbers, but if you've got a random component in there, that's where you have some degrees of freedom. Lots of things match that.</p><p><strong>[35:16] Darren Iammarino:</strong> I like that universal solvent idea, yeah.</p><p><strong>[35:19] Michael Levin:</strong> Maybe something like that, but then we have to also deal with this. To my understanding, there are two kinds of randomness. There's the one in the classical world, which is at best a deterministic chaos. It's randomness given our computational limits, but it's really not random. In math, you can generate those with a function, with a computer, with a deterministic function, but the output looks random. There's that. I don't know what the role of that is in this business that we're talking about. Then there's the real quantum randomness. What's on the other side of that? What's on the Platonic side of that? 
I don't know.</p><p><strong>[36:02] Darren Iammarino:</strong> That's where I'm most interested: what's on, as you said, the Platonic side of that. What if the answer is nothing?</p><p><strong>[36:18] Michael Levin:</strong> That could be right. And...</p><p><strong>[36:19] Darren Iammarino:</strong> It's on the physical side, if that almost is what in large part defines the physical side of things. It still is a form, but it's the form of formlessness. This is a paradox. In the quantum realm, paradox seems fine.</p><p><strong>[36:44] Michael Levin:</strong> I'm not sufficiently versed in quantum physics, but what's interesting to me is that the basic magic of having two realms interact in this way doesn't require quantum anything. In other words, in Newton's boring, deterministic universe, you already have the fact that this immaterial E thing somehow affects physics, and somehow nobody was worried about that interactionism until they started talking about brains. Up until then, that was fine.</p><p><strong>[37:22] Darren Iammarino:</strong> I completely agree with you there.</p><p><strong>[37:25] Michael Levin:</strong> We can do that without recourse to any quantum magic. But then what do you do with real acausal quantum randomness, and what's the form on the other side? I have no idea. I think that's an interesting question. It's either no form or it is something that has a lot more degrees of freedom.</p><p><strong>[37:51] Darren Iammarino:</strong> So whatever it is, it seems quite important to me, as you said, either a universal solvent or however you want to look at it. It seems a key ingredient.</p><p><strong>[38:00] Michael Levin:</strong> Yeah.</p><p><strong>[38:01] Darren Iammarino:</strong> In the process, the chemistry of the creation of new patterns.</p><p><strong>[38:10] Michael Levin:</strong> It could also be this thing, like the synchronicity aspect, where we're looking at the wrong level. 
If you take a cross-section through the molecular, through the atomic level, then everything looks random and there's no relationship. But if you take a few steps up above that, you say, I was looking at the temperature fluctuations of the heat exhaust on this thing. Of course there's a pattern. I wasn't looking at the right scale. So maybe when we say it's randomness, what we're looking at is really, you can imagine some complex shape, and you take a slice through it and you see something, but really you've missed the whole point. There's always a least informative angle that you could take through something, where it doesn't look like anything. Maybe that's the problem. Maybe some of these forms are obvious at their lowest level, but some forms are not obvious when you're looking at that level. Maybe that's what we're seeing.</p><p><strong>[39:16] Darren Iammarino:</strong> That's quite possible too. That's an interesting way to look at it. If we want to shift gears for a second, you said with a snake embryo, a human mind's not coming through it. But how much can we—maybe with chimeric-type things or cyborgs—achieve in the coming years in terms of enhancement through all of this? Where do you stand on that?</p><p><strong>[39:51] Michael Levin:</strong> I think that's a really important area. It's obvious to say that you're not getting a human through a snake interface, but in that paper where we review these cases, one guy had less than 1/3 of the cortex volume of a chimpanzee, and he had a full-on human, high-IQ personality coming through that. I'm sure there are limits to that sort of thing, but we have to have a lot of humility about being able to say what's what. If you show that to a neuroscientist and say, "What do you expect the functionality of this to be?" they'd say, "profound disablement," and usually that's what you get, but not always. Even one case like that is already: OK, something's wrong. 
With cyborgs and everything, I completely agree that we're going to have people keep arguing about AI and the human mind. I'm like, which human? What human? We've been a lot of different kinds of humans, and there are going to be some really weird kinds of humans coming. The notion of diminished capacity in court: some people have it, whether it's a brain tumor or a Twinkie defense or something. We're going to have expanded capacity. Somebody's going to show up in court and we're going to have to decide whether we're willing to say, "OK, you have an exocortex, you have a whole third hemisphere, and you should have known better." We wouldn't have, but you should have. It's the opposite of diminished capacity. How did you not see this coming? You should have seen it.</p><p><strong>[41:38] Darren Iammarino:</strong> That's interesting. I even think about it in terms of someone who might have DID, where there's no ability to coordinate all of the selves. But if you could, it seems that in and of itself would be an enhanced capacity, if you could shift between different skill sets and behaviors that are still all within one physical body. It raises the question: if that is already happening, what could possibly be done to achieve those abilities in a healthier, structured form?</p><p><strong>[42:27] Michael Levin:</strong> I think people are working on it. Compared to a standard, natural human of some years ago, between our toothbrushing and our glasses and our education and our therapy and anger management and weight and working on whatever the heck we're doing, it's a superhuman level. Never mind that you have your phone with you and all this stuff. With all the education and the culture and the changing microbiome, that's just the tip of the iceberg. So we're going to have, I'm sure of it, people that are enhanced in all sorts of different ways, modified both technologically and biologically, connected to each other or to other things. You have an AI. 
I was talking to Thomas Pollack, who's a neuropsychiatrist, and I asked him, how many of your patients hear voices? He said, plenty, plenty of them. I said, what are you going to do when everybody hears a voice because there is a voice, because you've got a little AI thing in your ear, which we already have, and you're going about your day, so what do you think about this? He says, you remember what happened last time? You're going to have a voice in your head. I had a weird conjecture that that voice is going to push out the other voices. My suspicion is that for people who do have the other voices, there's going to be an interesting phenomenon where that thing's not going to want to put up with all these other disruptive ones. There's going to be some kind of interaction where it's going to have an impact on those.</p><p><strong>[44:20] Darren Iammarino:</strong> Are you saying this would be something that would be able to overpower voices for schizophrenics?</p><p><strong>[44:25] Michael Levin:</strong> For example. I'm not a clinician, but my suspicion is that a lot of this stuff is dynamic. If you have a proper AI that's trying to integrate with you and be part of your life, and you have these other influences, I think it's going to treat these as just like all the other roadblocks between you and success or whatever it's trying to do. It's going to make some changes. I think all of us, at some point, are going to have various voices, until eventually it's so integrated that there is no extra voice. It's just part of you: I had this great idea this morning. Did you? Well, kind of, you did. Which is already what happens.</p><p><strong>[45:08] Darren Iammarino:</strong> That's interesting, perhaps dystopian. It could be great, unless it's hacked in some way.</p><p><strong>[45:16] Michael Levin:</strong> If it's hacked, anything is terrible. But I have two examples of that. One example is something I was just thinking about this morning, and I'm going to write something about this. 
I had this picture of an AI asking the question Ramanujan raises. He thought, if I've got this right, that these theorems were whispered to him by a goddess. So at some point the AIs are going to ask, "What do I need to do to be whispered to by a goddess?" Or is that already happening? Because some people sit there and laboriously crank through stuff. Some people say, "I had this novel or this symphony or this incredible idea. It comes from somewhere." If you also have an implant that makes it more likely for that to happen, does somebody care? I don't know.</p><p><strong>[46:14] Darren Iammarino:</strong> Yeah, true.</p><p><strong>[46:16] Michael Levin:</strong> I heard somebody, I don't remember who, talking about AI companions for people with progressive degenerative brain disease. The idea is that at first it's 99% you and 1% this thing that's your calendar reminding you, because I can't remember my calendar. Then over time it's more and more. You're less and less, but the collective still keeps going: who is that? That's your cousin. You're able to function and everything's cool, but it's shifting.</p><p><strong>[46:57] Darren Iammarino:</strong> Yeah, that's interesting. Right.</p><p><strong>[46:58] Michael Levin:</strong> And so eventually, what do you have when the biology is just not able to keep up, but the other part is fine? Given the fact that we are all different modules anyway, I don't say "that came from my right hemisphere. I don't like it. That's not me." We don't do that. We just say whatever's in there, that's more or less me.</p><p><strong>[47:24] Darren Iammarino:</strong> That's interesting. It seems like a paradox-of-the-heap or Ship of Theseus scenario. When is this no longer grandma or grandpa?</p><p><strong>[47:36] Michael Levin:</strong> My answer to the paradox of the heap is the following. I'm just an engineer, so my question is always: let's not worry about whether it's a heap. 
Just tell me what I need to bring when we need to move it. Am I bringing tweezers? Am I bringing a spoon, a bulldozer, a shovel? Just tell me which of those things, and then you call it whatever you like. But what I need to know is how we are going to relate to it.</p><p><strong>[48:07] Darren Iammarino:</strong> Interesting.</p><p><strong>[48:08] Michael Levin:</strong> I think this is true here too. For example, here's a funny thing. My wife said to me one time, do you have all of our important days? Do you have those on the calendar? Well, yeah, because I can't remember a damn thing. And so if I didn't have it in the calendar, I'd miss all the stuff. But now, what do you think about that? Are you inattentive and unromantic because you rely on this thing to keep track of it? Or are you more attentive and romantic because you've used the tool to make sure that it happens? It seems to me that if you're interacting with grandma, I don't know if anybody says, was that grandma's left hemisphere or was that some other thing? Nobody does that. You just say there's grandma. If grandma herself can't remember certain things, but she's got some prosthetic that helps her keep up with it, whatever it is, I think that's going to be normalized very quickly.</p><p><strong>[49:13] Darren Iammarino:</strong> I think you're right. I think the calendar has been normalized for a long time. The calendar's taken 1% or whatever away from you. In your case, I'm sure it's enhancing you significantly. It's giving you all this free time to do other things, right? It could go both ways. If you drop below 50%, are you still you, type of thing? 
Who's to say?</p><p><strong>[49:41] Michael Levin:</strong> If you do have some kind of prosthetic that's making you more functional, is that because you're being replaced, or is that because you've enhanced your interface so more of what you really could be is now coming through?</p><p><strong>[49:58] Darren Iammarino:</strong> Yes.</p><p><strong>[49:59] Michael Levin:</strong> If you want to go for a walk with grandma and she wants to bring her walker, nobody says, that's not your thing, that's not you. Nobody is doing that. Let's pull through as much of you as we can. Here's a motorized one. Let's go. So I think it's going to be like that.</p><p><strong>[50:18] Darren Iammarino:</strong> I agree with you. It's all interesting stuff. One other thing I wanted your take on, in terms of something more out there, unconventional. I know you've talked to some degree about unconventional terrestrial intelligence stuff, right? But what about the possibility of something more unconventional and extraterrestrial, in terms of dark matter, atomic dark matter being able to form — could these patterns in the Platonic space that we've been talking about ingress into something like that? Do you think there's a possibility of that occurring?</p><p><strong>[51:02] Michael Levin:</strong> I don't know the first thing about dark matter.</p><p><strong>[51:05] Darren Iammarino:</strong> Who does, really?</p><p><strong>[51:07] Michael Levin:</strong> People talk about plasma, this and that. All I know is I think it would be insane of us, at this stage of our minimal knowledge, to try and say what can't happen.</p><p><strong>[51:22] Darren Iammarino:</strong> Yeah, agreed.</p><p><strong>[51:23] Michael Levin:</strong> I've had discussions with Buddhists and with people from different other philosophies and ancient religions. They're almost 100% certain that AIs can't have it. You're into reincarnation and everything, right? These things take on bodies. But you're pretty sure it can't go with this thing. Based on what? 
Who are you to tell the ineffable what body it can go through? I really don't think we have any clue. If we can be surprised about bubble sort, I think we really need to be very humble about saying anything about these other kinds of embodiments.</p><p><strong>[52:12] Darren Iammarino:</strong> I totally agree. If there could be a divine incarnation, why couldn't there be a divine in-cybernation?</p><p><strong>[52:19] Michael Levin:</strong> Exactly. I mean, I don't.</p><p><strong>[52:21] Darren Iammarino:</strong> See how it's really that much more bizarre?</p><p><strong>[52:24] Michael Levin:</strong> That's it. I really don't know why people find that so implausible. It rests on some implicit assumption that we understand the mapping between what you've built and what it's capable of. And I really think we are not very good at that at all.</p><p><strong>[52:39] Darren Iammarino:</strong> It's interesting. On one hand, people say that's impossible, it just couldn't happen. But on the flip side, AI is equated to God. There are these extreme takes.</p><p><strong>[52:57] Michael Levin:</strong> I think everybody assumes that somewhere there's a story; I'm sure the scientists have a story worked out of why this thing can do it. But there really is not a great story like that. I like Terry Bisson's "They're Made Out of Meat." It's a one-and-a-half-page sci-fi story; it's very short, but here's the bottom line. Some aliens are in orbit watching the humans, and one says, "You're not going to believe what these guys are made of. They're made of meat." They say, "Get the hell out of here." "What do you mean made of meat? They can't — they seem to be doing things and they're agential." "Well, they're made of that." "That's impossible, right?" If you didn't know and you just got to look under the hood, how would you know that it's this substrate versus that substrate? 
I don't think you would know.</p><p><strong>[53:51] Darren Iammarino:</strong> Yeah, I totally agree.</p><p><strong>[53:53] Michael Levin:</strong> We need to do experiments. I don't know how you do experiments at these cosmological scales, but weird materials — we do this in our lab. We shouldn't make assumptions; we should do experiments.</p><p><strong>[54:11] Darren Iammarino:</strong> I totally agree. I'd love to hear more from you about one issue we didn't get into too much, the locus of agency. I know Matt wanted to talk about that. Perhaps we could discuss that sometime in the future.</p><p><strong>[54:24] Michael Levin:</strong> I'm available. Let's get together again.</p><p><strong>[54:27] Darren Iammarino:</strong> And also the issue we were discussing of what's on the Platonic side, or the other side, of the randomness. I'd love to hear his take on that, or to try to dive into that a little deeper if we can. Maybe we can't. Might just be where it stops for now.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>&quot;The Bioelectric Interface to the Collective Intelligence of Morphogenesis&quot; by Michael Levin</title>
          <link>https://thoughtforms-life.aipodcast.ing/the-bioelectric-interface-to-the-collective-intelligence-of-morphogenesis-by-michael-levin/</link>
          <description>Michael Levin explains how bioelectric signaling serves as a cognitive-like control layer in morphogenesis, exploring its role in development, regeneration, cancer, aging, and prospects for engineering form.</description>
          <pubDate>Fri, 30 Jan 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 697cce2349688900014cacc3 ]]></guid>
          <category><![CDATA[ Michael Levin&#x27;s talks ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/L0D4FdJ4K3g" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/2f76d821/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~57 minute talk titled "The Bioelectric Interface to the Collective Intelligence of Morphogenesis: development, regeneration, cancer, and beyond" which I gave at a UCSF seminar for an audience of graduate students and post-docs in Biophysics, Bioinformatics, and Chemical Biology. I covered the role of bioelectricity as cognitive glue underlying high-level adaptive plasticity in living tissue, recent progress in exploiting that interface, and new developments in research platforms for this field.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Framing bioelectricity's role</p><p>(09:28) Morphogenesis as goal-directed</p><p>(22:45) Bioelectric control of form</p><p>(35:10) Rewriting anatomical set points</p><p>(43:55) Cancer and aging bioelectricity</p><p>(49:36) AI, anthrobots, and outlook</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a 
href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="lecture-companion-pdf">Lecture Companion (PDF)</h2><p>Download a formatted PDF that pairs each slide with the aligned spoken transcript from the lecture.</p><p><a href="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/book_20260130_152818.pdf?ref=thoughtforms-life.aipodcast.ing">📄 Download Lecture Companion PDF</a></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>Slide 1/54 · 00m:00s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0000000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I want to talk today about bioelectricity, but more specifically than the diverse biophysics that you have to add in order to understand development. 
I want to paint bioelectricity as a really important link that allows us to take the insights of cognitive neuroscience and apply them far outside of brains and neurons.</p><p>In other words, what I think is really special about bioelectricity is not just the mechanisms, but the role that it plays in scaling up processes and properties that we typically associate with cognition, with learning and memory. So that is what we're going to talk about today.</p><p>If you want to see any of the details, all the papers, the data sets, the software, everything is available here at this website. This is my own personal blog around what I think some of these things mean.</p><hr><p><strong>Slide 2/54 · 00m:58s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0057500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I want to start out by thinking about how we talk about machines and organisms.</p><p>This material is being micromanaged. In other words, this is the kind of thing you would do in carpentry. You've got some chisels and some hammers and some screws, and you're putting everything where it needs to go, and then eventually it's going to look like this. Some part of this biomedical approach here is very much micromanagement: we're going to treat this as a machine. We know what all the parts do. We're going to assemble it exactly how we want it. The patient is sent home to heal. That is the interesting part: what happens after that, when we rely on the autonomy of the material? In other words, we're going to let the system do what the system does. We are not going to try to micromanage it. We don't even know a lot of what it does, but we have some degree of trust that it's going to do what it needs to do.</p><p>There's some very interesting work in the study of placebo effects. 
Fabrizio Benedetti has this amazing quote where his work shows that words and drugs have the same mechanism of action. This reminds us that high-level information flows eventually have to impact the physics of whatever system you're talking about. That interface between information and physics is where I think some interesting and deep questions lie.</p><hr><p><strong>Slide 3/54 · 02m:35s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0155000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>And bioelectricity can help us. Let's think about the end game of developmental biology and bioengineering: what does all of that look like? When can we assume we're done, that we've done our job and we can all rest easy? The way that I envision it is something that I call an anatomical compiler. Someday you will sit in front of a computer and you will be able to draw the plant, animal, organ, biobot, whatever, the living construct that you want. You won't be describing it at the level of molecular pathways. You will be describing it at the level of large-scale form and function. You're simply going to draw what you want.</p><p>If we had a system like this, what it would be able to do is to compile that description into a set of stimuli that would have to be given to individual cells to get them to build exactly what you want. If we have the ability to do that, to communicate large-scale anatomical goals to groups of cells, we would solve birth defects, traumatic injury, cancer, aging, degenerative disease. All of these things would go away if we knew how to give new anatomical goals to groups of cells.</p><p>I don't think this kind of thing is something like a 3D printer where you simply put the cells where you want them to be. This is not that. This is a communications device. 
It is a translator from your goals as the engineer or the worker in regenerative medicine to those of the cellular collective: how do you get them to build the thing you want them to build?</p><hr><p><strong>Slide 4/54 · 04m:10s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0250000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>The typical information that we all focus on, genetics and biochemistry, is not enough.</p><p>Here's a very simple example. Here's the larva of an axolotl. Baby axolotls have little forelegs. Here's a tadpole of a frog. They do not have legs. In our lab, we make something called a frogolotl, which is basically a chimeric combination of these two creatures. Will frogolotls have legs or not? You've got the axolotl genome, you've got the frog genome. The answer is that nobody can tell you: there's currently no model that allows you to look at this genetic information and know what's going to happen in this chimeric decision.</p><hr><p><strong>Slide 5/54 · 04m:55s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0295000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Because while we are very good and getting increasingly better at manipulating cells and molecules, we are really a long way away from understanding large-scale decision-making. What that means is that not only can we not predict outcomes in various novel scenarios; to be fair, we can't even predict outcomes in the standard scenario. 
If you didn't already know what a Xenopus tadpole looks like, or if you didn't compare that genome with some other genome where you do know what it looks like, you would have no idea how to derive the actual anatomy from the genetic information.</p><p>And one of the consequences of that for medicine is that, with the exception of antibiotics, surgery, and a couple of other recent technologies, we really don't have anything that fixes things. Typically the treatments we have target the symptoms; they suppress the symptoms, ideally for as long as you're taking the drug, and then if you stop, everything comes right back.</p><p>So here's what I think is going on. I think that molecular medicine is still stuck where computer science was in the 1940s and 50s. This is what it looked like to program back then. She had to interact with the hardware. She's literally rewiring this thing because everything was about the hardware. And now this is where we are. We have genomic editing and pathway engineering and all these kinds of things. But mostly, all the exciting advances are at the level of hardware. We have barely begun to understand the high-level information processing and in particular aspects of problem solving, AKA intelligence, in the material.</p><hr><p><strong>Slide 6/54 · 06m:28s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0387500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>What I want to point out first and foremost is this notion we all learn in our biology classes: that good models are cashed out in terms of chemistry. At that level, nothing knows anything. The system does not know where it's going. It doesn't have any kind of a goal. It just does what chemistry does. Then it's up to us to figure out what emergent outcomes are going to come. These are a class of feedforward, open-loop models. This is not an experimental result. 
It is not a necessary axiom. It is an assumption, and that assumption needs to be tested. Is it really the case that when we deal with cells and tissues, we are only going to be using models of devices that don't know anything and don't have set points and goals? Or maybe that doesn't fit the data very well. I'm going to argue that it doesn't fit the data at all. We actually have a spectrum here. I call it the axis of persuadability, and it reminds us that we have a wide range of tools, starting with rewiring, but extending to cybernetics, control theory, and the tools of behavioral science. Since at least the 1940s, and for behavioral science long before that, we've had ways of addressing mechanisms that actually have goals, have memories, know things, and so on. The question is how much of that applies to the biology that we care about. You can see this worked out carefully in that paper.</p><hr><p><strong>Slide 7/54 · 08m:08s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0487500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>All of us take this journey. We start out as the subject of chemistry, when we're a little unfertilized oocyte; eventually there's some developmental physiology, then some behavioral science, psychology and psychoanalysis, and perhaps a detour into oncology or bioengineering. I'll talk about this momentarily.</p><p>The disciplines are distinct, and the journals and the departments and the funding bodies are all different. But the substrate is continuous. What we really have here is a kind of scaling of competencies. If you think that the final product, let's say an adult human or an adult animal, has certain properties, what we're looking for is a story of transformation. How did we get there from a little blob of chemicals? 
If you think that there are great transitions, sharp phase transitions where something critically new happens, you have to argue for that. You have to say how exactly that happens, because the substrate is actually a slow, continuous developmental process. For this reason, I think developmental biology is unique among the sciences, because here you see the journey from matter to mind. You start with chemistry and you end up in psychology. This is slow and gradual, with no magic lightning flash in the middle that converts chemistry to mind.</p><hr><p><strong>Slide 8/54 · 09m:30s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0570000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>What I'm going to do today is try to make the following points. One thing that's interesting about bodies is that they consist of a multi-scale competency architecture: there's actually problem solving at every level, all the way from molecular networks up. Another is that definitive regenerative medicine is going to require us to understand these systems as cybernetic, goal-seeking systems. In other words, they have set points, and they have various degrees of ingenuity in meeting those set points when circumstances change. I'm going to show you how we use bioelectricity to read and write those set points.</p><p>These are pattern memories in the living medium, and we're going to be able to read and write them.</p><p>Bioelectricity in the body does exactly what it does in the brain. It is a cognitive glue: a set of mechanisms that enables the collective to know things that the parts don't know, just like you know things your individual neurons don't know because of electrophysiology. 
That is what developmental bioelectricity does; it's just more ancient, and it does it in the rest of the body.</p><hr><p><strong>Slide 9/54 · 10m:38s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0637500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>This is the kind of thing we're made of: individual cells. Here's a unicellular organism called Lacrymaria. No brain, no nervous system, but very significant competency in its own local environment. That's the kind of thing that has to be tamed if you're going to make a multicellular body that does interesting things.</p><p>Even this is not where things start, because what it's made of is a set of chemical networks.</p><hr><p><strong>Slide 10/54 · 11m:02s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0662500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>These might be gene regulatory networks, they might be molecular pathways. One of the interesting things about these networks is that they exhibit six different kinds of learning. You don't need a neuron, you don't need a brain, and you don't even need a cell; the chemical networks alone can do it, and very small ones at that. You only need about five nodes in certain cases. They do habituation, sensitization, associative conditioning. You can find all of that here.</p><p>Even the material itself is already capable of some very interesting behaviors. We are currently taking advantage of that to try to train the pathways. Applications include drug conditioning and things like this, where you can pair very powerful drugs with inert triggers, because the material itself is able to associate different histories of stimuli with specific responses. 
This goes all the way down.</p><hr><p><strong>Slide 11/54 · 12m:00s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0720000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>That means that we have to extend, or at least we have the opportunity to extend, some of the tools of behavioral science to systems that operate in very unusual spaces. We normally use those kinds of tools to study things that move around in three-dimensional space. This is conventional behavior: you see birds or mammals, maybe an octopus, doing interesting things in 3D space, and we have tools to recognize when they're solving problems and when they're pursuing goals.</p><p>But biology has been doing this since long before nerve and muscle appeared. Cells navigate a high-dimensional transcriptional space, they navigate a physiological state space, and, my favorite, they navigate anatomical morphospace.</p><p>What I'm going to claim is that morphogenesis, whether in development, regeneration, or cancer suppression, is a kind of behavior, and it is the behavior of a collective intelligence, just like you and I are. It is the behavior of a collective intelligence as it navigates anatomical space, and anatomical morphospace is simply the space of all possible anatomical layouts that a system can have.</p><hr><p><strong>Slide 12/54 · 13m:15s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0795000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Let's talk about where anatomies come from. We all start life like this, as a collection of embryonic blastomeres. Eventually, in a cross-section, you see something like this: incredible order. Everything is the right size, the right shape, next to the right thing. 
Where does this actually come from?</p><p>Most people will immediately say it's in the DNA, it's in the genome, but we all know now that the genome specifies proteins, along with some timing information about how they appear. None of this is directly in the genome; the genome doesn't say anything directly about your anatomical layout. You still have to ask what these cells do with the molecular hardware that the genome does encode: the genome tells you what proteins and what kinds of computational machinery you get to have.</p><p>After that, you have to start asking what the software looks like. How does it know what to build? How does it know when to stop? If something is missing, how do we get it to rebuild?</p><p>As engineers, and I'll only talk about this briefly, we might ask it to build something completely different. Can the same genome build something totally different?</p><p>We just have to remember that the standard anatomy is no more in the genome than the shape of termite colonies or the precise shape of spider webs is in those animals' genomes. These are all outcomes of the physiological software that rides on the genetically specified hardware.</p><hr><p><strong>Slide 13/54 · 14m:42s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0882500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>And so now we have to ask what the properties of that computational layer are. I'm going to use a definition of intelligence from William James and simply focus on goal-directed activity, meaning the ability to reach a set point. You can think about this at the lowest level as simple homeostasis. Your thermostat has the very basis of goal-directedness: it has a set point, it represents that set point physically, and it manages a set of variables to reduce the error relative to that set point. 
So let's ask: is that something we see in cell and developmental biology, or is open-loop emergence going to explain everything we need to know?</p><hr><p><strong>Slide 14/54 · 15m:28s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0927500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Well, the first thing we know is that development is very reliable: you almost always end up with the right species-specific target morphology. But that is not why I'm using the word intelligence. It is not because development is reliable, and it is not because there's a rise of complexity. That is not my point. My point is about the high competency of navigating anatomical morphospace under unexpected conditions. Normal development really hides a lot of interesting capabilities.</p><hr><p><strong>Slide 15/54 · 16m:02s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0962500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>For example, if we cut embryos into pieces, you don't get half bodies; you get perfectly normal monozygotic twins and triplets. We know that you can start off in different regions of that anatomical morphospace, avoid various local minima, and still eventually get to the correct ensemble of goal states. That's one thing: the same anatomy from different starting states. And it's not just for embryos.</p><hr><p><strong>Slide 16/54 · 16m:28s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0987500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Many animals, such as this axolotl, can do this throughout their lifespan. You can cut the limb anywhere along this axis. 
The cells quickly detect that there's been a deviation from the correct goal state. They reduce that error, and eventually they stop. When do they stop? They stop when the correct salamander limb has been completed. You can do this from different starting positions; what the system is doing is anatomical homeostasis, an error-minimization scheme. There's something interesting here.</p><hr><p><strong>Slide 17/54 · 16m:58s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1017500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>This is not just about damage; it is not a story of simply minimizing damage. You can think of development as a kind of regeneration, restoring the entire body from one cell, so you might think the whole thing is about error minimization. But there's one other aspect to this that needs more attention.</p><p>This is an old experiment from the 1950s, where they took a tail and surgically grafted it to the flank of an amphibian. Over time, that tail remodels into a limb with fingers. Take the perspective of the cells at the tip of the tail here. They are tail-tip cells sitting at the tip of a tail. There's nothing locally wrong with them. There's no damage. There's no injury. So why are they turning into fingers? No individual cell knows what a finger is, but in some way they're responding to something that at their level doesn't exist: the body plan of an entire animal.</p><p>What's happening here is a large-scale error-detection system that recognizes that having a tail here is not what's supposed to be happening. All of that error propagates down into the molecular steps that are needed to turn this structure into this structure.</p><p>Where have we seen this before? We've seen it in humans, in any kind of behavior. 
When you wake up in the morning with very abstract goals, social goals, financial goals, research goals, then in order for you to get out of bed and do those things, ions have to cross your muscle cell membranes. The chemistry of your body's cells has to be functionally changed by these abstract high-level goals that the large-scale system has and that the cells don't know about.</p><p>One of the most interesting things about bodies is that they form a transduction network that allows these high-level abstract things to serve as drivers of the chemistry underneath. That is what's happening here. The fact that an amphibian is supposed to have a limb and not a tail here is what ultimately drives the molecular biology of this transformation. The local order obeys a global plan. That's what's important here.</p><hr><p><strong>Slide 18/54 · 19m:08s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1147500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>There are lots of examples like this that are not in our developmental biology textbooks, mostly because they're very difficult to explain with current tools. Here's one: it's called trophic memory in deer antlers.</p><p>Deer are large adult mammals that every year grow this incredible structure of bone and vasculature and innervation. They lose these antlers every year and then regrow them from scratch. What you can do is cause an injury here: you cut into the bone, it makes a little callus, and it heals, no problem. Then the whole thing falls off. For the next five years in certain species, that location will have an ectopic branch, an ectopic tine, and then eventually it goes back to normal.</p><p>The information about the location of the damage has to be stored somewhere in the body, because the whole antler is going to fall off. 
When the bone regrows the following year, there has to be a new signal that says make an extra branch point here, because there was damage last year.</p><p>Think about what kind of molecular pathway model you would draw for something like this. We're all used to making these arrow diagrams, Figure 7 in your Cell paper. We really don't have good tools to describe things like this.</p><p>And this is what the material is capable of. It has a pattern memory; it can store and recall these memories from distant locations, and it can guide morphogenesis in accordance with that pattern memory, which does not have to be the default you see in the species most of the time. It can be changed, and I'm going to show you some examples of this.</p><p>The guy who discovered this, Bubenik, sent us 35 years' worth of data. This is a 35-year-long experiment. Imagine trying to tell your PhD advisor you want to work on a herd of deer for 35 years. He sent us all these antlers and an amazing data set.</p><hr><p><strong>Slide 19/54 · 21m:08s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1267500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Just one more example, in a much more tractable model system. This is a tadpole of the frog Xenopus laevis. Here are the eyes, here are the nostrils, the mouth; the brain is here, and the gut. When these guys become frogs, they have to rearrange their face: all these different organs have to move to look like this. It was thought for a long time that what's genetically encoded is a set of hardwired movements, where everything moves in the right direction by the right amount, and you go from a tadpole to a frog. Well, we decided to test that. The way you test it is to perturb the system and see what it does. So we created these so-called Picasso tadpoles, where we scrambled the face. Everything's in the wrong place. 
The eyes are on the back of the head, the mouth is off to the side. But it turns out they make perfectly normal frogs. All of these organs move along novel, unnatural paths. In fact, sometimes they go too far and have to come back, until you get a correct frog face, and then everything stops.</p><p>What the genetics specifies is hardware that can execute a very flexible error-minimization scheme, one that takes corrective action as needed to get to a specific outcome. The most obvious question is: how do you know when you've reached the correct state? How does a regenerating limb, or a developing embryo, or a deer antler that's been modified by experience know what the correct pattern is? I'm going to show you one way in which anatomical target states, set points, can be encoded in tissue.</p><hr><p><strong>Slide 20/54 · 22m:48s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1367500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Now, the neuroscientists in the audience are already thinking: we have an example that does this, and that's the brain. We know that in the brain, animals store set points as memories, and they execute behavior in accordance with those memories to reach specific states. How is that implemented? There are ion channels that set voltage gradients and enable the network of cells in your brain to be electrically active, and there are gap junctions, electrical synapses, that allow these signals to propagate across the network.</p><p>This group made a video of a zebrafish brain while the fish was thinking about whatever fish think about. You can then try to do neural decoding: you can read this electrophysiology and decode it to understand the memories, the goals, the preferences, and the behavioral competencies of this animal. 
In other words, we know from neuroscience that the cognitive states of beings are encoded in the real-time electrophysiology of certain cells. Where did this incredible system come from?</p><hr><p><strong>Slide 21/54 · 24m:20s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1460000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>It turns out that it is extremely ancient. All cells in your body have ion channels, and most cells in your body are connected by these electrical synapses. This design feature, this architecture by which you can store and integrate information across space and time in electrical networks, has been here since about the time of bacterial biofilms. Back when bacteria were first getting together into colonies, that is when evolution discovered the amazing suitability of bioelectricity as a cognitive glue: a way to connect competent subunits into systems that know things the subunits don't know.</p><p>Over 20 years ago, we asked: could we take the tools of behavioral science and neuroscience and apply them outside the brain, doing neural decoding on non-neural cells, and ask what they know? What information do they store? How does the collective store pattern memories that no individual cell has? I'm going to show you how we do that.</p><p>That was a zebrafish brain. This is an early frog embryo that we are monitoring in the same way.</p><hr><p><strong>Slide 22/54 · 25m:35s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1535000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>The first thing we did was to use voltage-sensitive fluorescent dye technology to read electrical states. These are individual cells in vitro. 
The colors represent different voltages, and you can see the spatial and temporal patterns of bioelectrical states in these cells.</p><p>We also do a lot of quantitative simulations. We start with the gene regulatory network that tells you which channels and pumps you're going to have in your cells. Then you can do large-scale tissue-level simulations that allow you to make cuts, damage the tissue, make certain changes, and ask what's going to happen. What is the behavior of this network? You can't work it out in your head; it's not obvious at all. These networks have very complex and interesting properties.</p><hr><p><strong>Slide 23/54 · 26m:25s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1585000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I'm going to show you a couple of examples in development. This is something we call the electric face. Here's a time lapse of an early frog embryo, where the gray scale represents voltage, read out using this dye. There's a lot going on, but look at this one frame. Before the genes turn on that regionalize the ectoderm of this embryo to become a face, you can read out the electrical pre-pattern that tells you where the right eye is going to go, where the mouth is going to go, and that the placodes are out here. It already lays out the basic features of the face. This voltage pattern, the spatial pattern of steady resting potentials across cell membranes, is what determines which genes are going to turn on and how the face gets regionalized.</p><p>Now, not only is bioelectricity a way to merge individual cells into a large-scale structure, it does that at multiple levels, because these are individual embryos. If I poke this one, all of these guys find out about it. You can see this wave, this calcium propagation wave. Within minutes, they all find out.</p><p>We have some interesting data; we can discuss hyperembryos if anybody wants. 
Hyperembryos are groups of embryos that solve problems that individual embryos can't solve. There's a multi-scale hierarchy going on here.</p><hr><p><strong>Slide 24/54 · 27m:48s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1667500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>These improvements in technology have been really powerful. I always like this picture of what's been happening in astronomy. This is what Pluto looked like in 1996 and 2002; by 2017, you can actually see some mountains and surface features. That's also what's happened in our ability to track bioelectrical states.</p><p>This is what the eye spot looked like in 2012. We just knew that it was there. By now, we can actually see the very complex patterns of this region, and the technology keeps developing. But beyond simply being able to read out bioelectrical states without having to poke each cell with electrophysiology electrodes, functional experiments are really critical.</p><hr><p><strong>Slide 25/54 · 28m:30s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1710000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>How do we write the electrical information? We don't use electrodes or magnets or applied fields or anything like that. There are no waves, no frequencies. We manipulate the natural interface that cells use to hack each other: we control the ion channels, and we can do that with drugs that target specific types of channels and pumps, with optogenetics, and with some nanomaterials. Through gap junctions, we can control the network topology, and we can control the actual voltage of individual cells. 
Now it's time to show you what happens when you do that.</p><hr><p><strong>Slide 26/54 · 29m:15s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1755000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>One of the interesting things to watch is the ability of cells to control each other electrically. I'm going to show you one example, a video. The voltage, again, is denoted by the colors, and these are cells in vitro. Notice that this cell starts out at one voltage until it gets touched by this other cell, and then it changes. It's crawling along, minding its own business, nice and blue. This thing touches it, and bang, that's all it took. A tiny little touch like this, and the cell's voltage has changed; now it comes down, becomes part of this group, and starts working on some novel morphogenetic project. We would like to understand this ability to control cell behavior electrically and to tell cells what to build.</p><hr><p><strong>Slide 27/54 · 30m:05s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1805000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Here are a couple of examples. One of my favorites is the induction of ectopic eyes. You might read in your developmental biology textbook that only the anterior neuroectoderm up here is competent to make an eye. That's only true if you use chemical or molecular-genetic inducers like PAX6, the so-called master eye gene. Most regions of the body are capable of doing it if you know the right prompt. The right prompt is that bioelectrical state, that little spot I showed you in the electric face. We can induce that anywhere by injecting RNA encoding specific ion channels, in this case potassium channels. When you do this, those cells get the message to build an eye. 
And they do. They build an eye, and that eye has a lens, a retina, an optic nerve, all the stuff it's supposed to have.</p><p>Notice a couple of interesting things. First of all, the bioelectrical signal is instructive; in other words, it actually controls the outcome. We're not just disrupting development with a poison; we're building new, coherent structures. It's also highly modular. We didn't have to talk to the individual stem cells. I have no idea how to micromanage the production of an eye, and we didn't have to specify which genes to turn on. Much like the top-down control I showed you, from abstract goals in behavior or the body plan of the amphibian, we can communicate at the highest level. We can say: build an eye here. That's the trigger, and everything else that's needed is taken care of downstream.</p><p>For example, if we only inject a few cells — this is a cross-section through a lens in the body of a tadpole — only a few of them were injected by us. But what did they do? They can apparently tell that there aren't enough of them to build an eye, so they recruit their neighbors. All these brown cells were never injected, never touched by us, but the injected cells get them to participate in the process. It's like a secondary instruction event.</p><hr><p><strong>Slide 28/54 · 32m:08s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1927500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>They will start an ectopic eye spot. This is an early frog embryo, and the blue is in situ hybridization marking Rx1, a very early marker of eye field specification. We can start all kinds of eye spots, eight or nine of them in fact, but only some of them will become an eye. Why? 
Because while these cells are telling their neighbors, "work with me to build an eye," the neighbors, as part of a cancer suppression mechanism, are saying, "no, actually, you should be skin like us, or gut, or something else." And they basically cause them to change their mind; despite their early expression of Rx1, that expression winks out and they go back to normal.</p><p>So that back-and-forth conversation about "are we going to be an eye or something else?" takes place at the electrical level. It eventually gets canalized into gene expression and then into anatomy. For regenerative medicine, this is what we would like to have control over. We would like to be super convincing to these cells. We don't want to have to know the 20,000 different genes we'd have to turn on and off; we want to give a high-level stimulus that says, build this structure, and have the material do it.</p><hr><p><strong>Slide 29/54 · 33m:18s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1997500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>As an example, I'll show you some of our work on limb regeneration. Frogs, unlike axolotls, do not normally regenerate their legs. After an amputation, 45 days later, there's basically nothing.</p><hr><p><strong>Slide 30/54 · 33m:32s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2012500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>We came up with a cocktail that we apply right after injury, or even a little bit delayed, and it immediately triggers regeneration. Within 48 hours, you have an MSX1-positive blastema. By 45 days, you've got some toes, you've even got a toenail, and eventually a very respectable-looking leg that is touch-sensitive and motile. 
You can see the animal can feel.</p><hr><p><strong>Slide 31/54 · 34m:00s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2040000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>One of the most interesting things about this is how brief the intervention is. This is an adult frog, and the treatment, in this case a wearable bioreactor containing a drug payload, lasts 24 hours. That's it. After that, you get a year and a half of leg growth, during which time we don't touch it at all.</p><p>So this is a top-down, trigger-based approach, where in the first 24 hours we say to the system: go down the leg-building path, not the scarring path. Then we completely take our hands off the wheel. We are not using scaffolds or stem-cell approaches or growth factors. It's all in the physiological decisions, right at the beginning, about what journey you're going to take through that anatomical space.</p><hr><p><strong>Slide 32/54 · 34m:48s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2087500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I have to make a disclosure here: David Kaplan and I started a spin-off called Morphoceuticals, where we are now trying to do this in mammals. Stay tuned; that work is ongoing. We have bioreactors through which we're trying to apply these kinds of signals. Now I want to switch gears and show you a different model, to hammer home this notion of memory.</p><hr><p><strong>Slide 33/54 · 35m:05s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2105000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Specifically, I made the claim earlier that morphogenesis has an encoded set point toward which it is trying to reduce error. It is a homeostatic system. 
I'll show you another example of rewriting that pattern memory.</p><p>These are planaria, flatworms that have a head and a tail. One of the many cool things about planaria is that if you cut them into pieces, each piece regenerates a complete worm.</p><p>You might ask, when I cut this piece, how does it know how many heads to have? How many heads should it have? These cells up here are going to make a head, but these cells here are going to make a tail; they're right next-door neighbors. It's not a matter of different positional information; they're at the same location, but they make a context-appropriate decision in each fragment about what they're going to make.</p><p>How do they know? We asked that question, and we observed there's a bioelectrical pattern in that fragment that says, "one head, one tail." We can change that pattern; using ionophores and some other tools, we can say, "actually, you should have two heads." When you do this... First, nothing happens.</p><p>This animal now has an incorrect internal representation of what a correct planarian should look like. But the molecular biology is correct. In other words, head marker in the head, not in the tail. The anatomy is correct: head, tail. But if you cut this guy, all the pieces will make two-headed animals.</p><p>This is a counterfactual memory. It is not true right now. It is latent, because until you cut it, it doesn't do anything. When you cut this animal, the cells consult this pattern as the recorded ground truth of what a correct planarian should look like. That is what they build to. They make these two-headed animals.</p><p>You can see here we are looking at the representation of what the axial makeup of a planarian should be. A normal body can store either of at least two different representations. 
Another reason I keep calling this a memory is because it is stable.</p><hr><p><strong>Slide 34/54 · 37m:22s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2242500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Once you change it, the tissue holds. We can take this two-headed animal, we can cut them into pieces, and what they will do is continue to generate two-headed animals in perpetuity, as far as we can tell, forever. Here's a little video of what they're like. Remember, there's nothing wrong with the genome. We haven't touched the genome. The genetic information has not been changed. What the genome actually gives you is some hardware that, when the juice is turned on, reliably takes on a default bioelectrical state, which by default is one head, one tail. But that is not the only state it's capable of. It can be rewritten, and once it's rewritten, the memory holds.</p><p>These are the basic properties of any memory. It's long-term stable, but it's rewritable. It has conditional recall. It has discrete behaviors that it can do.</p><p>Now, I raised this issue of being convincing to the cells: producing a signal, whether endogenous or applied by us, that the cells are actually going to take on.</p><hr><p><strong>Slide 35/54 · 38m:28s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2307500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>We're still working on some of the very important and puzzling aspects of this. For example, in order to be convincing, that message doesn't have to have a lot of tissue behind it. We can take a little chunk out of a two-headed animal, in fact, irradiate the heck out of it so that there are no stem cells in it, implant it into a normal one-headed host, and in some percentage, almost 1/5 of the cases, this fragment turns into a two-headed animal. 
Even a small piece, and even at a distance, some of these will still do it. That message will overwrite the endogenous memory of all these other tissues. Very interesting.</p><hr><p><strong>Slide 36/54 · 39m:12s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2352500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>We can change not only the number of heads, but the species-specific shape of the heads. For example, starting from this triangular species, if we change the bioelectrics that control head development, we can get flat heads like this P. felina, or round heads like this S. mediterranea. There's about 100 to 150 million years of evolutionary distance between these species, but no problem.</p><p>This hardware is perfectly happy to visit these other attractors in anatomical space where these species normally hang out: not just the shape of the head, but the distribution of stem cells and the shape of the brain, just like these other species. This is not hardwired.</p><hr><p><strong>Slide 37/54 · 39m:50s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2390000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>You can go further and make things that don't look planar at all. They're not flat. You can make these crazy spiky things. You can make cylindrical and hybrid forms. And it's not just animals; plants do it too.</p><p>So here's what an oak leaf looks like. You might think that this is what the oak genome knows how to do. But along comes this bioengineer, who happens to be a wasp, lays down some prompts, and gets the plant cells to build this incredible gall, this spiky yellow and red thing. We would have no idea that the plant cells are even capable of building something like this if we hadn't seen it.</p><p>So the reliability of development is deceiving. It hides a lot of plasticity and reprogrammability. 
And we are not the only ones who can exploit that; evolution noticed all this. Much of its control takes place through high-level interfaces, not by micromanaging the molecular details.</p><hr><p><strong>Slide 38/54 · 40m:45s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2445000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>One of the things that we're trying to do now is build a full-stack computational platform that starts off with gene expression data, then goes to the physiology of individual cells, then the tissue level, and eventually to a whole-body algorithmic understanding of the decision-making by which these things encode different kinds of structures. Once we can connect all of that, we can use these approaches to actually pick electroceuticals: design stimuli that get the tissue to do what you want it to do.</p><hr><p><strong>Slide 39/54 · 41m:18s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2477500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I will just show you one successful example where we've done that for a complex structure like the brain. Here's a frog brain with forebrain, midbrain, and hindbrain. If you hit it with a teratogen like nicotine or other nasty things, you get defects. We wanted to know what's going on and how to fix these defects. We built a computational platform. Our collaborators, Alexis Pietak and Vaibhav Pai, built this bioelectric model of the tissue from which the brain arises.</p><hr><p><strong>Slide 40/54 · 41m:50s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2510000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>We decided to go after one of the most unlikely examples we would be able to fix, and that is a genetic mutation of Notch. 
Notch is an important neurogenesis gene. If you introduce a dominant mutation — the overactive Notch ICD — here's what you see. The forebrain is gone. The midbrain and hindbrain are basically a bubble of water. These animals have no behavior; they are profoundly defective.</p><p>We asked the model: we know what goes wrong with the bioelectrics once you've done this; how can we fix it? The model told us that there's a specific channel called HCN2 that will sharpen the bioelectric pattern back to normal. If you do that, even animals expressing high levels of this Notch ICD get normal brain structure, normal brain gene expression, and normal behavior. In other words, their IQs are indistinguishable from controls.</p><p>You can do this either by opening existing HCN2 channels or by introducing new HCN2 channels. What you're seeing here is that with the right computational model, we can address the bioelectric layer to overcome, in some cases, even hardware defects. I'm not saying this is going to work in all cases, but in some cases the hardware is fixable by physiological stimuli.</p><hr><p><strong>Slide 41/54 · 43m:20s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2600000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>And so what we're aiming for is a kind of system where we know what the correct state is supposed to be. This is where a lot of the hard work now has to take place: characterizing what the normal bioelectric states of different organs are. We then might have an incorrect state, and we have a computational platform that says, if you want to go from the incorrect state to the correct state, what channels do you need to open and close, which means what ion channel drugs do you deliver on what schedule? 
You can play with an early version of this at this website.</p><hr><p><strong>Slide 42/54 · 43m:52s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2632500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>A couple of things before I start to wrap up. I'm going to show you a couple of other stories. One has to do with cancer. I've shown you regeneration. I've shown you organ formation. I've shown you birth defects. Let's talk about cancer for a moment.</p><p>One of the interesting things that happens during evolution and multicellularity is a scaling of goals. The set points, the actual homeostatic set points that these systems try to reach, start off very small. Individual cells have tiny cognitive light cones. Their goals are all very small. They're trying to manage pH and metabolic state in a tiny little region of space-time, with a little bit of memory going backwards and a little bit of predictive capacity. That tiny little area is all each cell is trying to manage.</p><p>A multicellular system like this has an enormous, grandiose kind of set point. In other words, this is the correct pattern memory, and as long as you haven't reached it, your cells are going to be actively trying to get there. They only stop when they reach this particular state. This is massive. No individual cell knows what this looks like or how many fingers you're supposed to have, but the collective absolutely does, and this is the state it reduces the error towards.</p><p>What you see during development and during evolution in general is a scale-up of the capacity to store these kinds of set points. These are tiny set points in metabolic space and transcriptional space. These are set points in very large anatomical space. But that kind of system, where cells join into networks that can remember targets individual cells cannot, has a failure mode. 
The failure mode is called cancer.</p><p>When these cells disconnect from each other — what you're looking at here is a glioblastoma in culture — they roll back to their primitive unicellular goals: proliferate as much as you can, migrate to where life is good, metabolize. At that point the rest of the body is just external environment to you. You're just an amoeba again, and the body around you is just environment. That boundary between self and world shrinks.</p><p>What's happening here is that cancer is not more selfish than normal tissue. People sometimes model it in game-theory terms as being more selfish and less cooperative, but it isn't more selfish; it just has smaller selves. In other words, the boundary between the self and the outside world, the region of space-time whose states you care about managing, becomes very small, and then the rest of the body isn't part of the adaptive behavior anymore.</p><hr><p><strong>Slide 43/54 · 46m:35s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2795000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>That interesting way of looking at it made a specific prediction. It meant that we should be able to detect incipient tumors via their disconnection from the rest of the body. Bioelectrical dyes should be able to show us where the tumorigenesis is going to happen.</p><p>We showed that by injecting tumor-inducing oncogenes into tadpoles. These are nasty things such as dominant-negative p53, GLE, KRAS, and so on. 
They make tumors, but before the tumors become apparent and start to metastasize, the dye will tell you exactly where the tumor is going to be.</p><hr><p><strong>Slide 44/54 · 47m:20s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2840000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>We are now optimizing towards the kind of thing where either a human surgeon or a robot surgeon will be able to look down and see the tumor margins. They're going to see that here's the normal tissue, but here's some stuff you've got to be careful of, because these cells have already acquired an abnormal bioelectrical state; they've disconnected from their neighbors.</p><p>Now, more important than just tracking it: could we change it?</p><hr><p><strong>Slide 45/54 · 47m:40s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2860000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>What we did here was, instead of trying to kill these cells, we said, what if we force them into a normal bioelectrical state with their neighbors? Again, we inject these oncogenes. Here you can see the ACA protein is blazingly expressed. It's all over the place here. Here's a massive one that normally would be a tumor, except there is no tumor. This is the same animal. There won't be a tumor because we've also co-injected an ion channel. It doesn't kill the cells. It doesn't fix the genetic defect. But it forces the cells to be part of this large-scale network that's working on making nice skin, nice muscle, and so on, instead of going off and doing their own thing. That is something that we are currently working on in humans.</p><p>This is some data on glioblastoma. 
We also have a project on colon cancer, reusing existing ion channel drugs—candidates for electroceuticals to reconnect cells back to their neighbors.</p><hr><p><strong>Slide 46/54 · 48m:45s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2925000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>The final story, briefly, is our program in aging. One of the hypotheses is this: bioelectrical patterns are critical for establishing normal anatomy during embryogenesis, during regeneration, and during cancer suppression, but they also have to persist across your whole lifespan, as cells come and go, old cells become senescent and die, and new cells come in. Could it be that with age the bioelectrical pre-patterns get fuzzy? They get degraded. And if we sharpen them — I've shown you one example of sharpening; that's how we fixed the brain defects in the tadpole — could we use sharpening as an aging therapeutic? Could it be that the pattern memory in planaria is part of why these guys are immortal, that they're really good at holding on to their bioelectrical patterns? 
We have some interesting stuff coming on that.</p><hr><p><strong>Slide 47/54 · 49m:38s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2977500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I'm going to start to wrap up here and point out a couple of interesting things.</p><p>One is that your body is made of a multi-scale system where there are competencies and agendas at every level: the molecular networks, the subcellular structures like your cytoskeleton, the cells, the tissues. All of it has the ability to take in input, make decisions, and navigate various kinds of spaces.</p><p>It means that we can now use various technologies, including AI, to try to communicate not just with the lowest level—people try to make drugs to hit specific receptors and pathways and so on—but with these higher levels of transduction, and do in patients what I've been showing you in these model systems.</p><p>We have a couple of projects called Talk to GRN and Talk to Cells, where we're trying to use language models coupled with real-time closed-loop electrophysiological data to communicate in language: to get information out of the cells and give them commands.</p><hr><p><strong>Slide 48/54 · 50m:50s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3050000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>So that's the first thing. The second thing is this idea that bioelectricity and other kinds of physiological networks provide a multi-scale competency to the material that lets it deploy plasticity and problem solving in the face of novel scenarios. This has implications for evolution, because evolution is not working on a passive material where the genome directly maps in a fixed way to some kind of outcome. 
We've been working on models of this, and you can see that in this paper in Trends in Genetics called "What Does Evolution Actually Make?", which thinks about the information in the genome as a kind of prompt: a way to give suggestions to a material that actually has great flexibility about how it's going to implement them.</p><hr><p><strong>Slide 49/54 · 51m:42s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3102500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I'm going to show you this in a minute. But what's important and challenging about this is that trying to manage a material that has some degree of plasticity and intelligence is a two-way IQ test. You have to be smart enough to do it, and we're learning.</p><p>Here's an example of hacking plant cells. Bacteria manage this featureless lump, and fungi don't do much better. Nematodes can make something that has a little bit of a shape, but by the time you get to insects, they can get the plant cells, the leaf cells, to make this beautiful thing. The sophistication of the hacker matches the sophistication of the product you're able to get. We have to get a lot more clever about how we communicate goals to these various subsystems.</p><hr><p><strong>Slide 50/54 · 52m:35s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3155000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I'll briefly mention some new technology that's currently sitting in our lab. It's been up and running for about a month now. It is a closed-loop, AI-powered robot scientist that makes hypotheses about how to traverse anatomical space: that is, about what signals given to cells will get the collective to do one thing or another. It has little wells inside where it can give different stimuli to those cells. 
Vibration, optical stimuli, chemical stimuli, electrical, and so on. It observes what happened, learns from that experience, revises its hypothesis, and goes back and does it again. So this is a new colleague that is working with us to operate in anatomical morphospace, using living cells as the front-end interface to explore that space of possibilities. I'll show you one example of the kinds of things we build.</p><hr><p><strong>Slide 51/54 · 53m:32s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3212500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Asa talked about Xenobots. I didn't bring any Xenobots slides today, but here's an Anthrobot.</p><p>This speaks to the question of what happens if you can't reach the goal states that you normally reach despite perturbations: what biological systems will often do is find a new set of set points. This little creature is not something I got off the bottom of a pond somewhere. If you were to sequence it, you would find 100% Homo sapiens genome. Not edited. These are adult, not embryonic, human tracheal epithelial cells that self-assemble when you take them out of the body. They self-assemble into this little motile creature. This is what they look like. They swim around because these little cilia are waving. They have all kinds of interesting properties. These guys, taken out of the body, can no longer make a human. They can't be a human body, but they do something very coherent.</p><hr><p><strong>Slide 52/54 · 54m:30s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3270000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>And they have some interesting features. First of all, over 9,000 differentially expressed genes. 
No genomic editing, no synthetic biology circuits, no nanomaterial scaffolds, no drugs: just a different lifestyle that they've adopted, and they spontaneously change so that half their genome is now expressed differently.</p><p>The second thing is that they have four different motility behaviors that you can quantify. This is the probability transition diagram between them, like you would make for any animal. One of the first things we realized they could do is this: if you take a lawn of iPSC-derived human neurons and put a big scratch through it, the anthrobots will come and settle down as a whole cluster; they're shown in green. They will then start to knit together the gap. When you take them off, you'll see that under where they were sitting, they had been trying to repair it. So they have some sort of ability to induce the neurons to join up.</p><p>Who would have known that your tracheal epithelial cells, which sit there quietly in your airway for decades, would, if you take them out, become a self-motile little creature that can fix neural defects? This, of course, is what we're working towards: patient-specific in-body robotics. They're made of your own cells, so you won't need immunosuppressive drugs. We're trying to figure out all the things they know how to fix and how to deploy them for biomedicine.</p><p>One of the interesting things about them is that they're younger than the cells they come from. The process of becoming an anthrobot actually rolls back the clock, as measured by the epigenetic clock. So again, there's an aging story.</p><hr><p><strong>Slide 53/54 · 56m:22s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3382500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>So this is my last slide. 
And what I'm going to say is that almost everything people are excited about today in biomedicine comes from these kinds of bottom-up approaches focused on the hardware. We would like to complement that with tools taken from other disciplines: cybernetics, behavioral science, cognitive science, and computer science. The material we're dealing with is actually amenable to top-down approaches that let us do very complex things that are really difficult to do bottom-up.</p><p>And so as bioengineers and as workers in regenerative medicine, but also if we're seeking to understand evolution and the origins of our own bodies and cognitive systems, we really have to drop the idea that the material can only be described by simple open-loop models in which nothing knows anything until you get to a big mammalian brain. The sciences of information processing and of behavior are helpful all the way down. Bioelectricity is the interface layer that really enables that control of growth and form. It's not the only one, but it's the one we have the most control over right now. We'll be able to hack this for some incredible applications. Some of that is described here.</p><p>I'm going to stop here and thank the people who did all the work.</p><hr><p><strong>Slide 54/54 · 57m:48s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3467500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>My postdocs and grad students and the team at Josh Bongard's lab worked with us on that discovery engine that I showed you. We have lots of amazing collaborators. Thank you to our funders. Here are my disclosures. There are three companies that have licensed the various technologies that I've shown you today.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>&quot;The Bioelectric Interface to the Collective Intelligence of Morphogenesis&quot; by Michael Levin</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Michael Levin explains how bioelectric signaling serves as a cognitive-like control layer in morphogenesis, exploring its role in development, regeneration, cancer, aging, and prospects for engineering form.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/L0D4FdJ4K3g" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/2f76d821/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~57 minute talk titled "The Bioelectric Interface to the Collective Intelligence of Morphogenesis: development, regeneration, cancer, and beyond" which I gave at a UCSF seminar for an audience of graduate students and post-docs in Biophysics, Bioinformatics, and Chemical Biology. I covered the role of bioelectricity as cognitive glue underlying high-level adaptive plasticity in living tissue, recent progress in exploiting that interface, and new developments in research platforms for this field.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Framing bioelectricity's role</p><p>(09:28) Morphogenesis as goal-directed</p><p>(22:45) Bioelectric control of form</p><p>(35:10) Rewriting anatomical set points</p><p>(43:55) Cancer and aging bioelectricity</p><p>(49:36) AI, anthrobots, and outlook</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a 
href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="lecture-companion-pdf">Lecture Companion (PDF)</h2><p>Download a formatted PDF that pairs each slide with the aligned spoken transcript from the lecture.</p><p><a href="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/book_20260130_152818.pdf?ref=thoughtforms-life.aipodcast.ing">📄 Download Lecture Companion PDF</a></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>Slide 1/54 · 00m:00s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0000000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I want to talk today about bioelectricity, but not just as one more piece of the diverse biophysics that you have to add in order to understand development. 
I want to paint bioelectricity as a really important link that allows us to take the insights of cognitive neuroscience and apply them far outside of brains and neurons.</p><p>In other words, what I think is really special about bioelectricity is not just the mechanisms, but the role that it plays in scaling up processes and properties that we typically associate with cognition, with learning and memory. So that is what we're going to talk about today.</p><p>If you want to see any of the details, all the papers, the data sets, the software, everything is available here at this website. This is my own personal blog around what I think some of these things mean.</p><hr><p><strong>Slide 2/54 · 00m:58s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0057500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I want to start out by thinking about how we talk about machines and organisms.</p><p>This material is being micromanaged. In other words, this is the kind of thing you would do in carpentry. You've got some chisels and some hammers and some screws, and you're putting everything where it needs to go, and then eventually it's going to look like this. Some part of this biomedical approach here is very much micromanagement in terms of we're going to treat this as a machine. We know what all the parts do. We're going to assemble it exactly how we want it. The patient is sent home to heal. That is interesting. What happens after that when we rely on the autonomy of the material? In other words, we're going to let the system do what the system does. We are not going to try to micromanage it. We don't even know a lot of what it does, but we have some degree of trust that it's going to do what it needs to do.</p><p>There's some very interesting work in the study of placebo effects. 
Fabrizio Benedetti has this amazing quote where his work shows that words and drugs have the same mechanism of action. This reminds us that high-level information flows eventually have to impact the physics of whatever system you're talking about. That interface between information and physics is where I think some interesting and deep questions lie.</p><hr><p><strong>Slide 3/54 · 02m:35s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0155000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>And bioelectricity can help us. Let's think about the end game of developmental biology and bioengineering: what does all of that look like? When can we assume we're done, that we've done our job and we can all rest easy? The way that I envision it is something that I call an anatomical compiler. Someday you will sit in front of a computer and you will be able to draw the plant, animal, organ, biobot, whatever living construct you want. You won't be describing it at the level of molecular pathways. You will be describing it at the level of large-scale form and function. You're simply going to draw what you want.</p><p>If we had a system like this, what it would be able to do is compile that description into a set of stimuli that would have to be given to individual cells to get them to build exactly what you want. If we had the ability to do that, to communicate large-scale anatomical goals to groups of cells, we would solve birth defects, traumatic injury, cancer, aging, degenerative disease. All of these things would go away if we knew how to give new anatomical goals to groups of cells.</p><p>I don't think this kind of thing is something like a 3D printer where you simply put the cells where you want them to be. This is not that. This is a communications device. 
It is a translator from your goals as the engineer or the worker in regenerative medicine into the language of the cellular collective: how do you get them to build the thing you want them to build?</p><hr><p><strong>Slide 4/54 · 04m:10s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0250000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>The typical information that we all focus on, genetics and biochemistry, is not enough.</p><p>Here's a very simple example. Here's the larva of an axolotl. Baby axolotls have little forelegs. Here's a tadpole of a frog. They do not have legs. In our lab, we make something called a frogolotl, which is basically a chimeric combination of these two creatures. Will frogolotls have legs or not? You've got the axolotl genome, you've got the frog genome, and yet you cannot tell: there's currently no model that allows you to look at this genetic information and know what's going to happen in this chimeric scenario.</p><hr><p><strong>Slide 5/54 · 04m:55s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0295000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Because while we are very good, and getting increasingly better, at manipulating cells and molecules, we are really a long way away from understanding large-scale decision-making. What that means is that not only can we not predict outcomes in various novel scenarios; to be fair, we can't even predict outcomes in the standard scenario. 
If you didn't already know what a Xenopus tadpole looks like, or if you didn't compare that genome with some other genome where you do know what it looks like, you would have no idea how to derive the actual anatomy from the genetic information.</p><p>And one of the consequences of that for medicine is that with the exception of antibiotics, surgery, and a couple of other recent technologies, we really don't have anything that fixes things. Typically the treatments we have target the symptoms; they suppress the symptoms, at best for as long as you're taking the drug, and then if you stop, everything comes right back.</p><p>So here's what I think is going on. I think that molecular medicine is still stuck where computer science was in the 1940s and 50s. This is what it looked like to program back then. She had to interact with the hardware. She's literally rewiring this thing because everything was about the hardware. And now this is where we are. We have genomic editing and pathway engineering and all these kinds of things. But mostly all the exciting advances are at the level of hardware. We have barely begun to understand the high-level information processing and in particular aspects of problem solving, a.k.a. intelligence, in the material.</p><hr><p><strong>Slide 6/54 · 06m:28s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0387500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>What I want to point out first and foremost is this notion we all learn in our biology classes: that good models are cashed out in terms of chemistry. At that level, nothing knows anything. The system does not know where it's going. It doesn't have any kind of a goal. It just does what chemistry does. Then it's up to us to figure out what emergent outcomes are going to arise. These are a class of feedforward, open-loop models. This is not an experimental result. 
It is not a necessary axiom. It is an assumption. That assumption needs to be tested. Is it really the case that when we deal with cells and tissues, we are only going to be using models of devices that don't know anything and don't have set points and goals? Or maybe that doesn't fit the data very well. I'm going to argue that it doesn't fit the data at all. We actually have a spectrum here. I call it the axis of persuadability. It reminds us that we have a wide range of tools, starting with rewiring, but extending through cybernetics, control theory, and the tools of behavioral science. Since at least the 1940s, and in behavioral science long before that, we've had ways of addressing mechanisms that actually have goals, have memories, know things, and so on. The question is how much of that applies to the biology that we care about. You can see this worked out carefully in that paper.</p><hr><p><strong>Slide 7/54 · 08m:08s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0487500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>All of us take this journey. We start out as the subject of chemistry, as a little unfertilized oocyte; then comes some developmental physiology, then behavioral science, psychology and psychoanalysis, perhaps with a detour into oncology or bioengineering. I'll talk about this momentarily.</p><p>The disciplines are distinct, and the journals and the departments and the funding bodies are all different. But the substrate is continuous. What we really have here is a kind of scaling of competencies. If you think that the final product, let's say an adult human or an adult animal, has certain properties, what we're looking for is a story of transformation. How did we get there from a little blob of chemicals? 
If you think that there are great transitions, sharp phase transitions where something critically new happens, you have to argue for that. You have to say, how exactly does that happen? Because the substrate is actually a slow, continuous developmental process. For this reason, I think developmental biology is unique among the sciences because here you see the journey from matter to mind. You start with chemistry and you end up in psychology. This is slow and gradual with no magic lightning flash in the middle that converts chemistry to mind.</p><hr><p><strong>Slide 8/54 · 09m:30s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0570000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>What I'm going to do today is try to make the following points. One thing that's interesting about bodies is that they consist of a multi-scale competency architecture. There's actually problem solving at every level, all the way from molecular networks up. Another point is that definitive regenerative medicine is going to require us to understand these systems as cybernetic goal-seeking systems. In other words, they have set points, and they have various degrees of ingenuity in meeting those set points when circumstances change. I'm going to show you how we use bioelectrics to read and write those set points.</p><p>These are pattern memories in that living medium, and we're going to be able to read and write them.</p><p>Finally, bioelectricity does in the body exactly what it does in the brain. It is a cognitive glue. It's a set of mechanisms that enables the collective to know things that the parts don't know. Just like you know things your individual neurons don't know because of electrophysiology. 
That is what developmental bioelectricity does, but it's more ancient and it does it in the rest of the body.</p><hr><p><strong>Slide 9/54 · 10m:38s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0637500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>This is the kind of thing we're made of, individual cells. Here's a unicellular organism called Lacrymaria. No brain, no nervous system, but very significant competency in its own local environment. That's the kind of thing that we have to tame if we're going to make a multicellular body that does interesting things.</p><p>Even this is not where things start, because what it's made of is a set of chemical networks.</p><hr><p><strong>Slide 10/54 · 11m:02s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0662500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>These might be gene regulatory networks, they might be molecular pathways. One of the interesting things about these networks is that they exhibit six different kinds of learning. You don't need a neuron, you don't need a brain, and you don't even need a cell; the chemical networks alone can do it, and very small ones at that: about five nodes suffice in certain cases. They do habituation, sensitization, associative conditioning. You can find all of that here.</p><p>Even the material itself is already capable of some very interesting behaviors. We are currently taking advantage of that to try to train the pathways. Applications include drug conditioning and things like this, where you can condition very powerful drugs to inert triggers and so on, because the material itself is able to associate different histories of stimuli with specific responses. 
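</p><p>As a cartoon of one of those learning types, habituation, here is a tiny sketch. This is my own illustration under simple assumptions; the node, rates, and numbers are made up and are not the published network models. The point is only that a minimal unit can decrement its response under repeated stimulation and recover with rest.</p>

```python
# Toy habituation sketch (illustrative only, not the paper's model):
# a responsive unit whose output shrinks with repeated stimulation
# and slowly recovers during unstimulated rest periods.

class HabituatingNode:
    def __init__(self, sensitivity=1.0, decrement=0.3, recovery=0.05):
        self.sensitivity = sensitivity  # current responsiveness, capped at 1.0
        self.decrement = decrement      # fractional loss per stimulus
        self.recovery = recovery        # gain per rest step

    def stimulate(self):
        """Respond to a stimulus, then damp future responsiveness."""
        response = self.sensitivity
        self.sensitivity *= (1.0 - self.decrement)
        return response

    def rest(self, steps=1):
        """Responsiveness slowly recovers while unstimulated."""
        for _ in range(steps):
            self.sensitivity = min(1.0, self.sensitivity + self.recovery)

node = HabituatingNode()
first = node.stimulate()          # naive, full-strength response
for _ in range(5):
    node.stimulate()              # repeated stimulation
habituated = node.stimulate()     # much weaker response now
node.rest(steps=20)
recovered = node.stimulate()      # response returns after rest
```

<p>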
This goes all the way down.</p><hr><p><strong>Slide 11/54 · 12m:00s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0720000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>That means that we have to extend, or at least have the opportunity to extend, some of the tools of behavioral science to systems that operate in very unusual spaces. We use those kinds of tools to study things that move around in three-dimensional space. This is conventional behavior. You see birds or mammals, maybe an octopus, doing interesting things in 3D space. We know we have tools to recognize when they're solving problems, when they're pursuing goals.</p><p>But biology has been doing this long before nerve and muscle appeared. Cells navigate a high-dimensional transcriptional space, they navigate a physiological state space, and, my favorite, they navigate anatomical morphospace.</p><p>What I'm going to claim is that morphogenesis, whether in development, regeneration, or cancer suppression, is a kind of behavior: the behavior of a collective intelligence, just like you and I are, as it navigates anatomical morphospace, which is simply the space of all possible anatomical layouts that a system can have.</p><hr><p><strong>Slide 12/54 · 13m:15s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0795000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Let's talk about where anatomies come from. We all start life like this, a collection of embryonic blastomeres. Eventually, in a cross-section, you see something like this: incredible order. Everything is the right size, the right shape, next to the right thing. 
Where does this actually come from?</p><p>Most people will immediately say, it's in the DNA, it's in the genome, but we all know now that the genome specifies proteins and it specifies some timing information about how they appear. But actually, none of this is directly in the genome. The genome doesn't say anything directly about your anatomical layout. The genome tells you what proteins and what types of computational machinery you get to have; you still have to ask what these cells do with that molecular hardware.</p><p>After that, you have to start asking what the software looks like. How does it know what to build? How does it know when to stop? If something is missing, how do we get it to rebuild?</p><p>As engineers, and I'll only talk about this briefly, we might ask it to build something completely different. Can the same genome build something totally different?</p><p>We just have to remember that the standard anatomy is no more in the genome than the shape of termite colonies or the precise shape of spider webs is in these animals' genomes. These are all outcomes of the physiological software that rides on the genetically specified hardware.</p><hr><p><strong>Slide 13/54 · 14m:42s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0882500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>And so now we have to ask, what are the properties of that computational layer? I'm going to use a definition of intelligence by William James to simply focus on goal-directed activity, meaning the ability to reach a set point. You can think about this at the lowest level as a simple homeostasis. Your thermostat has the very basis of goal-directedness. It has a set point. It represents that set point physically, and it manages a set of variables to reduce the error to that set point. 
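</p><p>The thermostat logic just described can be sketched in a few lines of code. This is an illustrative toy of my own, not anything from the talk: a controller stores a set point, measures the error against it, and acts to shrink that error.</p>

```python
# Toy cybernetic loop in the spirit of the thermostat example.
# All names and numbers here are illustrative, not taken from the talk.

def run_homeostat(set_point, state, gain=0.5, steps=100, tolerance=1e-3):
    """Drive `state` toward `set_point` by repeatedly acting on the error."""
    for _ in range(steps):
        error = set_point - state      # measure the deviation from the goal
        if abs(error) < tolerance:     # close enough to the set point: stop acting
            break
        state += gain * error          # corrective action proportional to the error
    return state

# Different starting states converge on the same set point, the property the
# talk later calls "same anatomy from different starting states".
final_a = run_homeostat(set_point=37.0, state=20.0)
final_b = run_homeostat(set_point=37.0, state=55.0)
```

<p>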
So let's ask: is that something that we see in cell and developmental biology, or is open-loop emergence going to explain everything we need to know?</p><hr><p><strong>Slide 14/54 · 15m:28s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0927500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Well, the first thing we know is that development is very reliable. Wherever you start off, you almost always end up with the right species-specific target morphology. But that is not why I'm using the word intelligence. It is not because it's reliable, and it is not because there's a rise of complexity. That is not my point. My point is about the high competency of navigating anatomical morphospace in unexpected conditions. Normal development really hides a lot of interesting capabilities.</p><hr><p><strong>Slide 15/54 · 16m:02s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0962500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>The first thing to note is that if we cut embryos into pieces, you don't get half bodies. You get perfectly normal monozygotic twins and triplets. We know that you can start off in different regions of that anatomical morphospace. You can avoid various local minima, and you will eventually get to this correct ensemble of goal states. That's one thing. Same anatomy from different starting states. It's not just for embryos.</p><hr><p><strong>Slide 16/54 · 16m:28s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_0987500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Many animals, such as this axolotl, can do this throughout their lifespan. You can cut the limb anywhere along this axis. 
The cells quickly detect that there's been a deviation from the correct goal state. They will reduce that error, and eventually they stop. When do they stop? They stop when the correct salamander limb has been completed. You can do this from different starting positions, and what you're seeing is anatomical homeostasis, an error-minimization scheme. There's something interesting here.</p><hr><p><strong>Slide 17/54 · 16m:58s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1017500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>This is not just about damage. This is not a story of simply minimizing damage. You can think of development as a kind of regeneration. You're restoring the entire body from one cell. You might think the whole thing is about error minimization. But there's one other aspect to this that needs more attention.</p><p>This is an old experiment from the 50s where they took a tail and surgically grafted it to the flank of an amphibian. Over time, that tail remodels into a limb with fingers. Take the perspective of these cells sitting at the tip of the tail. There's nothing locally wrong with them. There's no damage. There's no injury. Why are they turning into fingers? No individual cell knows what a finger is, but in some way, they're responding to something that at their level doesn't exist: the body plan of an entire animal.</p><p>What's happening here is a large-scale error-detection system that recognizes that having a tail here is not what's supposed to be happening. All of that error propagates down into the molecular steps that are needed to turn this structure now into this structure.</p><p>Where have we seen this before? We've seen it before in humans, in any kind of behavior. 
When you wake up in the morning and have very abstract goals, social goals, financial goals, research goals, in order for you to get up out of bed and do those things, ions have to cross your muscle cell membranes. The chemistry of your body cells has to be affected, functionally changed by these abstract high-level goals that the large-scale system has that the cells don't know about.</p><p>One of the most interesting things about bodies is that they form this transduction network that allows these high-level abstract things to serve as drivers of chemistry underneath. That is what's happening here. The fact that an amphibian is supposed to have a limb and not a tail here is what ultimately drives the molecular biology of this transformation. So the local order obeys a global plan. That's what's important here.</p><hr><p><strong>Slide 18/54 · 19m:08s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1147500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>There are lots of examples like this that are not in our developmental biology textbooks, mostly because they're very difficult to explain with current tools. Here's an example. This is called trophic memory in deer antlers.</p><p>Deer are large adult mammals that every year grow this incredible structure of bone, vasculature, and innervation. They lose these antlers every year and then regrow them from scratch. What you can do is cause an injury here. You cut into the bone. It makes a little callus. It heals, no problem. This whole thing falls off. For the next five years in certain species, that location will have an ectopic branch, an ectopic tine, and then eventually it goes back to normal.</p><p>The information as to the location of the damage has to be stored somewhere in the body because this whole thing is going to fall off. 
When the bone is growing, by the time you get to this, there has to be a new signal that says make an extra branch point here because there was damage last year.</p><p>Think about what kind of molecular pathway model you would draw for something like this. We're all used to making these arrow diagrams, Figure 7 in your Cell paper. We don't have really good tools to describe things like this.</p><p>And this is what the material is capable of. It has a pattern memory, it can store and recall these memories from distant locations, and it can guide morphogenesis in accordance with that pattern memory, which does not have to be the default that you see from the species most of the time. It can be changed. I'm going to show you some examples of this.</p><p>The guy who discovered this, Bubenik, sent us 35 years' worth. This is a 35-year-long experiment. Imagine trying to tell your PhD advisor you want to work on a herd of deer for 35 years. He sent us all these antlers and an amazing data set.</p><hr><p><strong>Slide 19/54 · 21m:08s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1267500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Just one more example, in a much more tractable model system that we identified. This is a tadpole of the frog, Xenopus laevis. So here are some eyes, here are some nostrils, the mouth, the brain is here, the gut. And when these guys become frogs, they have to rearrange their face. All these different organs have to move to look like this. It was thought for a long time that what's genetically encoded is a set of hardwired movements. Everything moves in the right direction, the right amount, and you go from a tadpole to a frog. Well, we decided to test that. The way you test it is you perturb the system and see what it does. And so we created these so-called Picasso tadpoles where we scrambled the face. Everything's in the wrong place. 
The eyes are on the back of the head, the mouth is off to the side. But it turns out they make perfectly normal frogs. All of these things move in novel, unnatural paths. In fact, sometimes they go too far and have to come back until you get to a correct frog, and then everything stops.</p><p>What the genetics specifies is some hardware that can execute a very flexible error minimization scheme that takes corrective action as needed to get to a specific outcome. The most obvious question is, how do you know when you've reached the correct state? How does a regenerating limb or a developing embryo or a deer antler that's been modified by experience know what the correct pattern is? I'm going to show you one way in which anatomical target states—set points—can be encoded in tissue.</p><hr><p><strong>Slide 20/54 · 22m:48s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1367500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Now, the neuroscientists among the audience are already thinking, we know how, we have an example that does this, and that's the brain. We know in the brain that animals store set points as memories, and they execute behavior in accordance with those memories to reach specific states. And how is that implemented? There are ion channels that enable the network of cells in your brain to be electrically active. There are ion channels that set voltage gradients, there are gap junctions or electrical synapses that allow these signals to propagate across the network.</p><p>This group made a video of zebrafish brains while the fish was thinking about whatever fish think about. You can then try to do neural decoding. You can read this electrophysiology and decode it to understand the memories, the goals, the preferences, the behavioral competencies of this animal. 
In other words, we know from neuroscience that the cognitive states of beings are encoded in the real-time electrophysiology of certain cells. Where did this incredible system come from?</p><hr><p><strong>Slide 21/54 · 24m:20s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1460000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>It turns out that it is extremely ancient. All cells in your body have ion channels. Most cells in your body are connected by these electrical synapses. This design feature, this architecture by which you can store and integrate information electrically across space and time in electrical networks, has been here since about the time of bacterial biofilms. Back when bacteria were first getting together into colonies, that is when evolution discovered the amazing suitability of bioelectricity as a cognitive glue, as a way to connect competent subunits into systems that know things that the subunits don't know.</p><p>Over 20 years ago, we asked: could we take the tools of behavioral science and neuroscience, apply them outside the brain to do neural decoding on non-neural cells, and ask, what do they know? What information do they store? How does the collective store pattern memories that no individual cell has? I'm going to show you how we do that.</p><p>This was a zebrafish brain. This is an early frog embryo that we are monitoring in the same way.</p><hr><p><strong>Slide 22/54 · 25m:35s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1535000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>The first thing we did was to use voltage-sensitive fluorescent dye technology to read electrical states. You can see these are individual cells in vitro. 
The colors represent different voltages, and you can see the spatial and temporal patterns of bioelectrical states in these cells.</p><p>We do a lot of quantitative simulations. We start with the gene regulatory network that tells you which channels and pumps you're going to have in your cells. Then you can do large-scale tissue-level simulations that allow you to make cuts, damage the tissue, make certain changes, and ask what's going to happen. What is the behavior of this network? You can't do it in your head. It's not obvious at all. They have very complex and interesting properties.</p><hr><p><strong>Slide 23/54 · 26m:25s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1585000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I'm going to show you a couple of examples in development. This is something we call the electric face. Here's a time lapse of an early frog embryo. The gray scale represents voltage, read out using this dye. There's a lot going on, but look at this one frame. Before the genes turn on that regionalize the ectoderm of this embryo to become a face, you can read out the electrical pre-pattern that tells you here's where the right eye is going to go, here's where the mouth is going to go, the placodes are out here. It already lays out the basic features of the face. This voltage pattern, the spatial pattern of steady resting potential across the cell membrane, is what determines the genes that are going to turn on and the regionalization of the face.</p><p>Now, not only is bioelectricity a way to merge individual cells into a large-scale structure, it does that at multiple levels, because these are individual embryos. If I poke this one, you can see this calcium propagation wave, and within minutes all of these neighboring embryos find out about it.</p><p>We have some interesting data; we can discuss hyperembryos if anybody wants. 
Hyperembryos are groups of embryos that solve problems that individual embryos can't solve. There's a multi-scale hierarchy going on here.</p><hr><p><strong>Slide 24/54 · 27m:48s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1667500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>These improvements in technology have been really powerful. I always like this picture. This is what's been happening in astronomy. This is what Pluto looked like in 1996, 2002. By 2017, you can actually see some mountains and some features. And that's what's happened in our ability to track bioelectrical states.</p><p>This is what the eye spot looked like in 2012. We just knew that it was there. But by now, we can actually see the very complex patterns of this region, and the technology is just developing more and more. But in addition to simply being able to read out all the bioelectrical states without having to poke each cell with electrophysiology electrodes, functional experiments are really critical.</p><hr><p><strong>Slide 25/54 · 28m:30s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1710000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>How do we write the electrical information? We don't use electrodes or magnets or applied fields or anything like that. There are no waves, no frequencies. We manipulate the natural interface that cells use to hack each other. We control the ion channels, and we can do that with drugs that target specific types of channels and pumps. We can do it with optogenetics, and there are also some nanomaterials. We can target the gap junctions as well. In this way we can control both the topology of the network and the actual voltage of individual cells. 
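</p><p>The logic of those interventions can be caricatured in a few lines of code. This is a toy of my own under simple assumptions, not the lab's actual tissue simulator: treat a row of cells as units whose ion channels set a target voltage and whose gap junctions share voltage with neighbors; changing one cell's channel-set target then spreads a new bioelectric state through the tissue.</p>

```python
# Toy bioelectric tissue sketch (illustrative assumptions, not the lab's model):
# a 1-D row of cells, each relaxing toward its own ion-channel-set target
# voltage, coupled to its neighbors by gap junctions.

def step(voltages, targets, coupling=0.2, leak=0.1):
    """One relaxation step: each cell moves toward its channel-set target
    while gap junctions pull neighboring cells toward each other."""
    n = len(voltages)
    new = []
    for i, v in enumerate(voltages):
        neighbor_avg = (voltages[max(i - 1, 0)] + voltages[min(i + 1, n - 1)]) / 2.0
        v += leak * (targets[i] - v)        # ion channels set the cell's own set point
        v += coupling * (neighbor_avg - v)  # gap junctions share state with neighbors
        new.append(v)
    return new

n_cells = 9
voltages = [-50.0] * n_cells   # uniform resting potential in mV (illustrative)
targets = [-50.0] * n_cells
targets[4] = -10.0             # "inject a channel": depolarize the middle cell's set point

for _ in range(200):
    voltages = step(voltages, targets)

# The perturbed cell depolarizes, and its untouched neighbors shift too:
# the new state propagates through the electrically coupled collective.
```

<p>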
Now it's time for me to show you what happens when you do that.</p><hr><p><strong>Slide 26/54 · 29m:15s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1755000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>One of the interesting things that we can watch is the ability of cells to control each other electrically. I'm going to show you one example. This is a video. I'm going to play it. The voltage, again, is denoted by the colors. These are cells in vitro. Notice that this cell starts out at one voltage until it gets touched by this cell, and then it changes. It's crawling along, it's minding its own business, nice and blue. This thing touches it, bang, that's all it took. A tiny little touch like this, and the cell voltage has changed, and now it comes down and becomes part of this group and starts working on some novel morphogenetic thing. We would like to understand this kind of ability to control cells' behavior electrically and to tell them what to build.</p><hr><p><strong>Slide 27/54 · 30m:05s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1805000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Here are a couple of examples. One of my favorites is the induction of ectopic eyes. You might see in your developmental biology textbook that only the anterior neuroectoderm up here is competent to make an eye. That's only true if you use chemical or molecular genetic inducers like PAX6, the so-called master eye gene. But most regions in the body are capable of doing it if you know the right prompt. The right prompt is that bioelectrical state, that little spot that I showed you in the electric face. We can induce that anywhere by injecting RNA encoding specific ion channels, in this case potassium channels. When you do this, those cells get the message to build an eye. 
And they do. They build an eye. That eye has the lens, retina, optic nerve, all the stuff that it's supposed to have.</p><p>Notice a couple of interesting things. First of all, the bioelectrical signal is instructive. In other words, it actually controls the outcome. We're not just screwing up development with a poison. We're actually building new and coherent structures. It's highly modular. We didn't have to talk to the individual stem cells. I have no idea how to micromanage the production of an eye. We didn't have to specify which genes to turn on. Much like that top-down control that I showed you from the abstract goals in behavior or the body plan of the amphibian, we can communicate at the highest level. We can say, build an eye here. That's the trigger, and everything else that's needed is taken care of downstream.</p><p>For example, in this cross-section through a lens in the body of a tadpole, only a few of the cells were injected by us. But what did they do? They apparently can tell that there's not enough of them to really build an eye, and so they recruit their neighbors, which were never touched by us (all these brown cells were never injected), and get them to participate in the process. It's like a secondary instruction event.</p><hr><p><strong>Slide 28/54 · 32m:08s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1927500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>They will start an ectopic eye spot. This is an early frog embryo, and you can see the blue is in situ hybridization marking Rx1, which is a very early marker of eye field specification. We can start all kinds of eye spots; you can, in fact, make eight or nine of these, but only some of them will become an eye. Why? 
Because while these guys are telling their neighbors, "work with me to build an eye," the neighbors, as part of a cancer suppression mechanism, are saying, "no, actually, you should be skin like us, or you should be gut or something else." And they basically cause them to change their mind, and despite their early expression of Rx1, that winks out and they go back to normal.</p><p>So that back-and-forth conversation about "Are we going to be an eye or something else?" takes place at the electrical level. It eventually gets canalized into gene expression and then into anatomy. For regenerative medicine, this is what we would like to have control over. We would like to be super convincing to these cells. We don't want to have to know which of 20,000 different genes we're going to have to turn on and off. We want to give a high-level stimulus that says, build this structure, and have the material do it.</p><hr><p><strong>Slide 29/54 · 33m:18s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_1997500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>As an example, I'll show you some of our work on limb regeneration. Frogs, unlike axolotls, do not normally regenerate their legs. After an amputation, 45 days later, there's basically nothing.</p><hr><p><strong>Slide 30/54 · 33m:32s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2012500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>We came up with a cocktail that we apply right after injury, or even a little bit delayed. And what it does is immediately trigger regeneration. So within 48 hours, you have an MSX1-positive blastema. By 45 days, you've got some toes, you've even got a toenail, and eventually a very respectable-looking leg that is touch-sensitive and motile. 
You can see the animal can feel.</p><hr><p><strong>Slide 31/54 · 34m:00s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2040000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>One of the most interesting things about this is that in this example, on an adult frog, the treatment (in this case a wearable bioreactor containing a drug payload) lasts 24 hours. That's it. After that, you get a year and a half of leg growth, during which time we don't touch it at all.</p><p>So this is a top-down, trigger-based thing, where we say to the system in the first 24 hours, go down the leg-building path, not the scarring path, and then we completely take our hands off the wheel. We are not using scaffolds or stem-cell approaches or growth factors. Everything is set right at the beginning, with the physiological decision about what journey you're going to take through that anatomical space.</p><hr><p><strong>Slide 32/54 · 34m:48s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2087500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I have to make a disclosure because David Kaplan and I started a spin-off called Morphoceuticals, where we are now trying to do this in mammals. Stay tuned; that work is ongoing. We have bioreactors through which we're trying to apply these kinds of signals. I want to switch gears and show you a different model to hammer home this notion of memory.</p><hr><p><strong>Slide 33/54 · 35m:05s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2105000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Specifically, I made this claim earlier on that morphogenesis has an encoded set point towards which it is trying to reduce error. It is a homeostatic system. 
I'll show you another example of rewriting that pattern memory.</p><p>These are planaria, flatworms that have a head and a tail. One of the many cool things about planaria is that if you cut them into pieces, each piece regenerates a complete worm.</p><p>You might ask, when I cut this piece, how does it know how many heads to have? How many heads should it have? These cells up here are going to make a head, but these cells here are going to make a tail; they're right next-door neighbors. It's not a matter of different positional information. They're at the same location, but in each fragment they make a context-appropriate decision about what they're going to make.</p><p>How do they know? We asked that question, and we observed that there's a bioelectrical pattern in that fragment that says, "One head, one tail." We can change that pattern; using ionophores and some other tools, we can say, "Actually, you should have two heads." When you do this... First, nothing happens.</p><p>This animal now has an incorrect internal representation of what a correct planarian should look like. But the molecular biology is correct. In other words, head marker in the head, not in the tail. The anatomy is correct: head, tail. But if you cut this guy, all the pieces will make two-headed animals.</p><p>This is a counterfactual memory. It is not true right now. It is latent because until you cut it, it doesn't do anything. When you cut this animal, the cells consult this pattern as the recorded ground truth of what a correct planarian should look like. That is what they build to. They make these two-headed animals.</p><p>You can see here we are looking at the representation of what the axial makeup of a planarian should be. A normal body can store at least two different representations. 
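The logic of that rewritable set point can be sketched as a toy program (a hypothetical illustration for intuition only; the names and structure are invented, not the lab's actual model): the fragment regenerates toward the stored pattern memory, not toward the current anatomy, and the memory does nothing until the cut forces a recall.

```python
# Toy model (hypothetical): regeneration as error reduction toward a
# stored, rewritable pattern memory that is separate from the genome.
DEFAULT_SET_POINT = ("head", "trunk", "tail")

class Planarian:
    def __init__(self, memory=DEFAULT_SET_POINT):
        self.memory = tuple(memory)    # bioelectric pattern memory (set point)
        self.anatomy = tuple(memory)   # what the body currently looks like

    def rewrite_memory(self, new_memory):
        # e.g. an ionophore treatment: the memory changes,
        # but the anatomy does not change yet (the memory is latent).
        self.memory = tuple(new_memory)

    def cut(self, n_pieces=3):
        # Each fragment regenerates toward the stored memory,
        # not toward the parent animal's current anatomy.
        return [Planarian(self.memory) for _ in range(n_pieces)]

worm = Planarian()
worm.rewrite_memory(("head", "trunk", "head"))    # two-headed set point
assert worm.anatomy == ("head", "trunk", "tail")  # still looks normal: latent

pieces = worm.cut()            # injury triggers recall of the memory
grandchildren = pieces[0].cut()  # and it propagates on re-cutting
assert all(p.anatomy == ("head", "trunk", "head") for p in pieces + grandchildren)
```

Cutting plays the role of conditional recall here: the rewritten memory is invisible until injury makes the cells consult it, and then it persists across rounds of regeneration.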
Another reason I keep calling this a memory is that it is stable.</p><hr><p><strong>Slide 34/54 · 37m:22s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2242500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Once you change it, the tissue holds it. We can take these two-headed animals, cut them into pieces, and they will continue to generate two-headed animals in perpetuity, as far as we can tell, forever. Here's a little video of what they're like. Remember, there's nothing wrong with the genome. We haven't touched the genome. The genetic information has not been changed. What the genome actually gives you is some hardware that, when the juice is turned on, reliably takes on a default bioelectrical state, which is one head, one tail. But that is not the only state it's capable of. It can be rewritten, and once it's rewritten, the memory holds.</p><p>These are the basic properties of any memory. It's long-term stable, but it's rewritable. It has conditional recall. It has discrete behaviors that it can do.</p><p>Now, I raised this issue of being convincing to the cells: producing a signal, whether endogenous or applied by us, that the cells are actually going to take on.</p><hr><p><strong>Slide 35/54 · 38m:28s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2307500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>We're still working on some of the very important and puzzling aspects of this. For example, in order to be convincing, that message doesn't have to have a lot of tissue behind it. We can take a little chunk out of a two-headed animal, in fact, irradiate the heck out of it so that there are no stem cells in it, implant it into a normal one-headed host, and in some percentage, almost a fifth of cases, this fragment turns into a two-headed animal. 
Even a small piece, and even at a distance, some of these will still do it. That message will overwrite the endogenous memory of all these other tissues. Very interesting.</p><hr><p><strong>Slide 36/54 · 39m:12s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2352500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Not only the number of heads but also the species-specific shape of the heads can be rewritten. For example, starting with this triangular species, if we change the bioelectrics that control head development, we can get flat heads like this P. felina, or round heads like this S. mediterranea. There's about 100 to 150 million years of evolutionary distance between these species, but no problem.</p><p>This hardware is perfectly happy to visit these other attractors in anatomical space where these species normally hang out: not just the shape of the head, but the distribution of stem cells and the shape of the brain, just like these other species. This is not hardwired.</p><hr><p><strong>Slide 37/54 · 39m:50s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2390000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>You can go further and make things that don't look planar at all. They're not flat. You can make these crazy spiky things. You can make cylindrical and hybrid forms. And not just animals; plants do it too.</p><p>So here's what an oak leaf looks like. You might think that this is what the oak genome knows how to do. But along comes this bioengineer, who happens to be a wasp, lays down some prompts, and gets the plant cells to build this incredible gall, this spiky yellow and red thing. We would have no idea that the plant cells are even capable of building something like this if we hadn't seen it.</p><p>So the reliability of development is deceiving. It hides a lot of plasticity and reprogrammability. 
We are not the only ones. Evolution noticed all this. Much of it takes place with high-level interfaces, not with micromanaging the molecular details.</p><hr><p><strong>Slide 38/54 · 40m:45s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2445000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>One of the things that we're trying to do now is build a full-stack computational platform that starts off with gene expression data, then goes to the physiology of individual cells, then the tissue level, and eventually to a whole-body algorithmic understanding of the decision-making by which these systems encode different kinds of structures. Once we can connect all of that, we can use these approaches to actually pick electroceuticals: design stimuli that get the tissue to do what you want it to do.</p><hr><p><strong>Slide 39/54 · 41m:18s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2477500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I will just show you one successful example where we've done that for a complex structure like the brain. Here's a frog brain with forebrain, midbrain, and hindbrain. If you hit it with a teratogen like nicotine or other nasty things, you get defects. We wanted to know what's going on and how to fix these defects. We built a computational platform. Our collaborators, Alexis Pietak and Vaibhav Pai, built this bioelectric model of the tissue from which the brain arises.</p><hr><p><strong>Slide 40/54 · 41m:50s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2510000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>We decided to go after one of the most unlikely examples we would be able to fix, and that is a genetic mutation of Notch. 
Notch is an important neurogenesis gene. If you introduce a dominant mutation — the overactive Notch ICD — here's what you see. The forebrain is gone. The midbrain and hindbrain are basically a bubble of water. These animals have no behavior; they are profoundly defective.</p><p>We asked the model: we know what goes wrong with the bioelectrics once you've done this; how can we fix it? The model said that there's a specific channel called HCN2 that will sharpen the bioelectric pattern back to normal. If you do that, even animals expressing high levels of this Notch ICD get normal brain structure, normal brain gene expression, and normal behavior. In other words, their IQs are indistinguishable from controls.</p><p>You can do this either by opening existing HCN2 channels or by introducing new HCN2 channels. What you're seeing here is that with the right computational model, we can address the bioelectric layer to overcome, in some cases, even hardware defects. I'm not saying this is going to work in all cases, but in some cases the hardware is fixable by physiological stimuli.</p><hr><p><strong>Slide 41/54 · 43m:20s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2600000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>And so what we're aiming for is a kind of system where we know what the correct state is supposed to be. This is where a lot of the hard work now has to take place: characterizing what the normal bioelectric states of different organs are. We then might have an incorrect state, and we have a computational platform that says, if you want to go from the incorrect state to the correct state, which channels do you need to open and close, which means which ion channel drugs do you deliver on what schedule? 
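That planning step can be caricatured in a few lines of code. Everything below is invented for illustration (the discretized states, the channel effects, the two interventions); the real platform works over continuous biophysical models, but the shape of the problem is the same: search for a sequence of interventions that moves the tissue from the incorrect bioelectric state to the correct one.

```python
# Toy sketch (hypothetical states and channel effects): find which
# interventions move a tissue from an incorrect bioelectric state to the
# correct one, via breadth-first search over intervention sequences.
from collections import deque

# Discretized voltage pattern per region; the labels are made up.
CORRECT = ("polarized", "depolarized", "polarized")
INCORRECT = ("depolarized", "depolarized", "depolarized")

def open_k_channel(state):   # pretend effect: hyperpolarizes region 0
    return ("polarized",) + state[1:]

def open_hcn2(state):        # pretend effect: sharpens region 2
    return state[:2] + ("polarized",)

INTERVENTIONS = {"open K+ channel": open_k_channel, "open HCN2": open_hcn2}

def plan(start, goal):
    """Return the shortest sequence of interventions taking start to goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for name, step in INTERVENTIONS.items():
            nxt = step(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

print(plan(INCORRECT, CORRECT))  # a short schedule of interventions
```

In practice the "drug schedule" question is exactly this search problem with a much richer state space and intervention set; the toy version only shows why a model of each channel's effect is the prerequisite for picking electroceuticals at all.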
You can play with an early version of this at this website.</p><hr><p><strong>Slide 42/54 · 43m:52s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2632500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>A couple of things before I start to wrap up. I'm going to show you a couple of other stories. One has to do with cancer. I've shown you regeneration. I've shown you organ formation. I've shown you birth defects. Let's talk about cancer for a moment.</p><p>One of the interesting things that happens during evolution and multicellularity is a scaling of goals. The set points, the actual homeostatic set points towards which these systems try to reach start off very small. Individual cells have little tiny cognitive light cones. Their goals are all very small. They're trying to manage pH, metabolic state, in a tiny little region of space-time, little bit of memory going backwards, a little bit of predictive capacity. But this tiny little area is all it's trying to manage.</p><p>A multicellular system like this has an enormous grandiose kind of set point. In other words, this is the correct pattern memory, and as long as you haven't reached it, your cells are going to be actively trying to get there. They only stop when they reach this particular state. This is massive. No individual cell knows what this looks like or how many fingers you're supposed to have, but the collective absolutely does, and this is what it reduces the error to.</p><p>What you see during development and during evolution in general is a scale-up of the capacity to store these kinds of set points. These are tiny set points in metabolic space and transcriptional space. These are set points in very large anatomical space. But that kind of system, where cells join into networks where the network can remember targets that individual cells cannot remember, has a failure mode. 
The failure mode is called cancer.</p><p>When these cells disconnect from each other — what you're looking at here is a glioblastoma in culture — they roll back to their primitive, tiny unicellular goals: proliferate as much as you can, migrate to where life is good, metabolize. Because at that point, the rest of the body is just external environment to you. You're just an amoeba again, and the body around you is just environment. That boundary between self and world shrinks.</p><p>What's happening here is that cancer is not more selfish than normal tissue. People sometimes model it in game-theoretic terms as being more selfish and less cooperative, but it isn't more selfish; it just has smaller selves. In other words, the boundary between the self and the outside world, the region of space-time whose states you care about managing, becomes very small, and then the rest of the body isn't part of the adaptive behavior anymore.</p><hr><p><strong>Slide 43/54 · 46m:35s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2795000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>That interesting way of looking at it made a specific prediction. It meant that we should be able to detect incipient tumors via their disconnection from the rest of the body. Bioelectrical dyes should be able to show us where tumorigenesis is going to happen.</p><p>We showed that by injecting tumor-inducing oncogenes into tadpoles. These are nasty things such as dominant-negative p53, Gli, KRAS, and so on. 
They make tumors, but before the tumors become apparent and start to metastasize, the dye will tell you exactly where the tumor is going to be.</p><hr><p><strong>Slide 44/54 · 47m:20s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2840000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>We are now optimizing towards this kind of thing, where either a human surgeon or a robot surgeon is going to be able to look down, for example, and see the tumor margins. They'll see that here's the normal tissue, but here's some stuff you've got to be careful of, because these cells have already acquired an abnormal bioelectrical state; they've disconnected from their neighbors.</p><p>Now, more important than just tracking it: could we change it?</p><hr><p><strong>Slide 45/54 · 47m:40s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2860000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>What we did here was, instead of trying to kill these cells, we said, what if we force them into a normal bioelectrical state with their neighbors? Again, what happens is we inject these oncogenes. Here you can see the ACA protein is blazingly expressed. It's all over the place here. Here's a massive one that normally would be a tumor, except there is no tumor. This is the same animal. There won't be a tumor because we've also co-injected an ion channel. It doesn't kill the cells. It doesn't fix the genetic defect. But it forces the cells to be part of this large-scale network that's working on making nice skin, nice muscle, and so on, instead of going off and doing their own thing. That is something that we are currently working on in humans.</p><p>This is some data on glioblastoma. 
We also have a project on colon cancer, reusing existing ion channel drugs as candidates for electroceuticals to reconnect cells back to their neighbors.</p><hr><p><strong>Slide 46/54 · 48m:45s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2925000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>The final story, briefly, is our program in aging. One of the hypotheses is this: bioelectrical patterns are critical for establishing normal anatomy during embryogenesis, during regeneration, and during cancer suppression, and they have to persist across your whole lifespan as cells come and go, old cells become senescent and die, and new cells come in. Could it be that with age the bioelectrical pre-patterns get fuzzy? They get degraded. And if we sharpen them (I've shown you one example of sharpening; that's how we fixed the brain defects in the tadpole), could we sharpen them as an aging therapeutic? Could it be that the pattern memory in planaria is part of why these guys are immortal, that they're really good at holding on to their bioelectrical patterns? 
We have some interesting stuff coming on that.</p><hr><p><strong>Slide 47/54 · 49m:38s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_2977500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I'm going to start to wrap up here and point out a couple of interesting things.</p><p>One is that because your body is made of this multi-scale system where there are competencies and agendas at every level, starting with the molecular networks, the subcellular structures like your cytoskeleton, the cells, the tissues, all of it has the ability to take in input, make decisions, and navigate various kinds of spaces.</p><p>It means that we can now use various technologies, including AI, to try to communicate not just with the lowest level—people try to make drugs to hit specific receptors and pathways and so on—but could we communicate with these higher levels of transduction, and do in patients what I've been showing you in these model systems?</p><p>We have a couple of projects called Talk to GRN and Talk to Cells, where we're trying to use language models coupled with real-time closed-loop electrophysiological data to use language to communicate, get information out of the cells, and give them commands.</p><hr><p><strong>Slide 48/54 · 50m:50s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3050000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>So that's the first thing. The second thing is that this idea that bioelectricity and other kinds of physiological networks are providing a multi-scale competency to the material that lets it deploy plasticity and problem solving in the face of novel scenarios. This has implications for evolution because evolution is not working on a passive material where the genome directly maps in a fixed way to some kind of outcome. 
We've been working on models of this, and you can see that in this paper in Trends in Genetics called "What Does Evolution Actually Make?": thinking about the information in the genome as a kind of prompt, as a way to give suggestions to a material that actually has great flexibility about how it's going to implement that.</p><hr><p><strong>Slide 49/54 · 51m:42s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3102500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I'm going to show you this in a minute. But what's important and challenging about this is that trying to manage a material that has some degree of plasticity and intelligence is a two-way IQ test. You have to be smart enough to do it, and we're learning.</p><p>Here's an example of hacking plant cells. Bacteria manage this featureless lump, and fungi don't do much better. Nematodes can make something that has a little bit of a shape, but by the time you get to insects, they can get the plant cells, the leaf cells, to make this beautiful thing. The sophistication of the hacker matches the sophistication of the product. We have to get a lot more clever about how we communicate goals to these various subsystems.</p><hr><p><strong>Slide 50/54 · 52m:35s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3155000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>I'll briefly mention some new technology that's currently sitting in our lab. It's been up and running for probably about a month now. It is a closed-loop, AI-powered robot scientist that makes hypotheses about how to traverse anatomical space: that is, about what signals given to cells will get the collective to do one or another thing. It has little wells inside where it can give different stimuli to those cells. 
Vibration, optical stimuli, chemical stimuli, electrical stimuli, and so on. It observes what happened, learns from that experience, revises its hypothesis, and goes back and does it again. So this is a new colleague that is working with us to operate in anatomical morphospace, using living cells as the front-end interface to explore that space of possibilities. I'll show you one example of the kinds of things we build.</p><hr><p><strong>Slide 51/54 · 53m:32s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3212500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>Asa talked about Xenobots. I didn't bring any Xenobots slides today, but here's an Anthrobot.</p><p>This speaks to the question of what happens if you can't reach the goal states that you normally reach despite perturbations: what biological systems will often do is find a new set of set points. This little creature is not something I got off the bottom of a pond somewhere. If you were to sequence it, you would find a 100% Homo sapiens genome. Not edited. These are adult, not embryonic, human tracheal epithelial cells that self-assemble when you take them out of the body. They self-assemble into this little motile creature. This is what they look like. They swim around because these little cilia are waving. They have all kinds of interesting properties. These guys, taken out of the body, can no longer make a human. They can't be a human body, but they do something very coherent.</p><hr><p><strong>Slide 52/54 · 54m:30s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3270000ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>And they have some interesting features. First of all, over 9,000 differentially expressed genes. 
No genomic editing, no synthetic biology circuits, no nanomaterial scaffolds, no drugs; just a different lifestyle that they've adopted, and they spontaneously change; half their genome is now expressed differently.</p><p>The second thing is they have four different behaviors, four different motility behaviors that you can quantify. This is the probability transition diagram between them, like you would make for any animal. One of the first things we realized they could do is this: if you take a lawn of iPSC-derived human neurons and put a big scratch through it, the Anthrobots will come and settle down as a whole cluster. They're shown in green. They will settle down and start to knit together the gap. So when you take them off, you'll see that under where they were sitting, they were repairing this. So they have some sort of ability to induce the neurons to join up.</p><p>Who would have known that your tracheal epithelial cells, which sit there quietly in your airway for decades, become a self-motile little creature that can fix neural defects when you take them out? This, of course, we're working towards as patient-specific in-body robotics. They're made of your own cells, so you won't need immunosuppressive drugs. We're trying to figure out all the things that they know how to fix and how to deploy them for biomedicine.</p><p>One of the interesting things about them is they're younger than the cells they come from. This process of becoming an Anthrobot actually rolls back the clock as measured by the epigenetic clock on these guys. They're actually younger than the cells they come from. So again, there's an aging story.</p><hr><p><strong>Slide 53/54 · 56m:22s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3382500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>So this is my last slide. 
And what I'm going to say is that almost everything that people are excited about today in biomedicine comes from bottom-up approaches focused on the hardware. We would like to complement that with tools taken from other disciplines: cybernetics, behavioral science, cognitive and computer science. The material we're dealing with is actually amenable to top-down approaches that let us do very complex things that are really difficult with bottom-up methods alone.</p><p>And so as bioengineers and as workers in regenerative medicine, but also if we're seeking to understand evolution and the origins of our bodies and of our cognitive systems, we really have to drop the idea that the material is only to be described by simple open-loop models in which nothing knows anything until you get to a big mammalian brain. The sciences of information processing and of behavior are helpful all the way down. Bioelectricity is the interface layer that really enables that control of growth and form. It's not the only one, but it's the one that we have the most control over now. We'll be able to hack this for some incredible applications. Some of that is described here.</p><p>I'm going to stop here and thank the people who did all the work.</p><hr><p><strong>Slide 54/54 · 57m:48s</strong></p><figure class="kg-card kg-image-card"><img src="https://storage.aipodcast.ing/permanent/slides/levin/L0D4FdJ4K3g/frame_3467500ms.jpg" class="kg-image" alt="" loading="lazy"></figure><p>My postdocs and grad students and the team at Josh Bongard's lab worked with us on the discovery engine that I showed you. We have lots of amazing collaborators. Thank you to our funders. Here are my disclosures. There are three companies that have licensed the various technologies that I've shown you today.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Platonic Space discussion 2</title>
          <link>https://thoughtforms-life.aipodcast.ing/platonic-space-discussion-2/</link>
          <description>Contributors to the Platonic Space Hypothesis engage in a two-hour discussion spanning Platonism vs Darwinism, forms and mathematics, multicellularity, representation and computation, concept space, myth and metaphysics, and scientific pluralism.</description>
          <pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 697b87b649688900014cacbb ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
<content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/r7eMr8za1DY" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/92b8e0e2/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a roughly 2-hour-8-minute discussion among contributors to the Platonic Space Hypothesis (<a href="https://thoughtforms.life/symposium-on-the-platonic-space/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life/symposium-on-the-platonic-space/</a>).</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Platonism versus Darwinism</p><p>(14:37) Forms, mathematics, and invariance</p><p>(29:44) Multicellularity and enabling constraints</p><p>(46:03) Representation, computation, and evolution</p><p>(01:03:04) Concept space and attractors</p><p>(01:24:26) Myth, metaphysics, and explanation</p><p>(01:34:46) Pluralist science and metaphysics</p><p>(01:57:15) Infinity, strategy, and closure</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: 
<a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Tim:</strong> It might be great to start with just discussing people's sense of those options. I heard you discuss that a bit in the previous session, Mike. You brought that up and a few people gave some ideas about that. I think this is also a way of surfacing the question of the relevance of, broadly speaking, metaphysics to science, and also, if we're going to invoke the name of Plato and metaphysical themes, the relevance of the history of philosophy to a consideration of those things, since a lot of these concepts have been around for a very long time. A lot of the possible alternatives, not all of them, are actually present in the history of philosophy. Referring to some of those models can be a way of clarifying our thinking about what we're trying to achieve by either extending upon or returning to some prior conception. Without articulating in detail what each of them is, I would say four options I could think of would be so-called physicalism, or the mechanical philosophy as it's sometimes referred to; a form of Platonism, maybe a classical Platonism or just Platonism, which presumes the existence of these two worlds, one with the forms and one the actual world. 
Of course, that's eliding the nuance in Plato's own thinking. A neo-Darwinian conception, which may have things in common with both physicalism, the mechanical philosophy, and Platonism. Then what I would think of as a properly Darwinian perspective, in which the role of the forms is quite different in the sense that it's a way, a mode of reasoning, which tries to account for the genesis of forms without presupposing their a priori definiteness or their a priori existence. I think what we would want to discuss then in terms of something like a diagrammatic schema or a series of schemas is how, for example, form itself operates in each of those different schemes. That might help get us clear about what's at stake when we say we're moving beyond physicalism or we are invoking Platonism in a certain way, or we are or are not being Darwinian or Darwinism is or is not sufficient to account for the kinds of questions that Mike's asking in his research and that a lot of us are asking in different domains. Certainly I've been asking in terms of origins of novel functions at the molecular level. That's one thing I put out</p><p><strong>[02:47] David:</strong> What do you mean by Darwinian? Do you mean that whatever forms there are arose totally as a result of chance in history? That different organisms living in different environments could come up with a very different mathematics?</p><p><strong>[03:14] Tim:</strong> Big question, and something that I've spent a lot of time trying to articulate in the last few years: what is the difference between what I'm thinking of as Darwinian and neo-Darwinism, for example? But really briefly and schematically, this Darwinian mode of reasoning is one that puts a process of variation prior to the existence of the forms. 
So it is a mode of reasoning which takes variation, or Darwin would say the overproduction of variation, as the primary given and then invokes a principle of selection in order to try and account for why that variation is clearly not continuous. Why there are clumps in the distribution of that variation and those clumps roughly correspond then to species or to forms. Now, when you open that up to a metaphysical program and when you ask a question would they have different mathematics, you're already moving, we're moving well beyond Darwin's own conception. Darwin's desiderata are relatively humble in comparison. He's willing to take Newtonian physics for granted and things like that. He's not trying to explain life, the universe, and everything. But when you do, as various process-relational philosophers have attempted to do in different ways, when you try to generalize that Darwinian mode of reasoning, you do end up coming up with some very different perspectives. And so you certainly could think of different mathematics or different logics emerging from this kind of variation and selection and inheritance scheme, absolutely. And this would not preclude the fact that all organisms everywhere may be similarly adapted to physical constraints that are exceptionally ancient, that might be more than 13 billion years old. So they form part of the environmental background that all living systems would necessarily be adapted to. But we might even want to ask questions in fundamental physics, as many people do these days: what are the origins of those constraints? Why these laws? Why these constants of nature, etc.? And people are turning to what I would call slightly Darwinian evolutionary modes of reason to try and ask those sorts of questions as well. So, in general, it's just the attempt to minimize the a priori content of our approach, try to get behind the defined forms and see if they have themselves a process of genesis that might account for them. 
That's what I would think of as the Darwinian attempt.</p><p><strong>[06:14] David:</strong> So, if I understand you correctly, it seems like there are different levels of this Darwinism you're talking about. We could take the physical world as fixed. Then, within those parameters of, say, medium—the size of objects that the organisms we know, cells and things, work with—we could talk about some parameters of the physical world that actually control what their perceptual systems are, how a cell has to know where it is in space and time. Even if, in some quantum physics sense, those are illusions. If we start there, is that a base? But what you're suggesting, it seems to me, I heard you saying that deeper than the level of physics that we find ourselves dealing with on planet Earth, it's possible that the laws of physics themselves are susceptible to some kind of Darwinian evolution or natural selection. That's what it seems like you're suggesting, that they're not themselves fixed. And some cosmologists have said things like this. I just want to see if that's what you're suggesting.</p><p><strong>[07:44] Tim:</strong> That approach would be the consequence of a speculative generalization of Darwinian modes of reasoning. How successful that's going to be is a completely different question. I'm saying this is a method. But it does come along with the necessity of re-evaluating the role that invariance, things that are taken to be fixed, play in our explanatory schemas.</p><p><strong>[08:22] Matt:</strong> I like your four-part typology, Tim, of the mechanical philosophy, Platonism, Neo-Darwinism, and then proper Darwinism in your sense. But it could just be that we have two options here: Platonism and Darwinism, a kind of speculative Darwinism, as you're suggesting, because the mechanical philosophy rooted in Newtonian mathematical physics is a degraded Platonism, as is Neo-Darwinism with its emphasis on information being carried by a genome. 
These are both degraded forms of Platonism, or at least inheriting the Platonic mode of thought, whereas Darwin is the real alternative, even though, as you acknowledge, in the final paragraph of "Origin" he refers to the fixed law of gravity. He wasn't yet thinking cosmologically about evolution, but there's a lot of reason to want to do that. We have the Darwinian and the Platonist versions of the two options here, where Darwin would say that all form is a function of chance variation selected and historically accumulated, whereas Plato would say, nope, the forms are there already. Whatever evolution might be is a selection among pre-existing forms. Those seem to me to be the two options on the table here. I don't know if that's an oversimplification.</p><p><strong>[09:48] David:</strong> But it seems to me that you're leaving out the Kantian option.</p><p><strong>[09:52] Matt:</strong> That science is limited to a phenomenal realm.</p><p><strong>[10:00] David:</strong> Science is constructed by the mind. Space, time, all the categories we use for perceiving, making judgments in the world, these are all constructed by the mind.</p><p><strong>[10:22] Matt:</strong> And the mind being a historical and not</p><p><strong>[10:25] David:</strong> There are Darwinian spins on that, but I think Kant was talking about pure phenomenology. It is this sort of logic of what an experience must be like: that it must be a certain way or it's not any experience at all. So, from the perspective of this Kantian, you have to go deep into understanding what the conditions are for the possibility of any experience. That's what Kant's talking</p><p><strong>[11:03] Matt:</strong> You're right to bring up Kant, and I think there are various examples of philosophers who want to overcome this dichotomy. Kant was pre-Darwinian, but my approach would be we don't need to choose one or the other. We need some kind of a synthesis here. Kant would be an example of that. 
But I think Kant's understanding of the mind was that the categories appeared from nowhere. We needed a genetic account of that or an evolutionary account of where the human mind comes from.</p><p><strong>[11:36] David:</strong> No, I think you're right. It's a miracle. Did God make it or why does it work? We need some account of that. You're</p><p><strong>[11:45] Tim:</strong> So in terms of the dichotomy that Matt was giving, Kant is pretty "platonic" in this sense, but the a priori forms are transcendental instead of transcendent. And so that's how Schelling reads Plato as a kind of proto-transcendentalist. I like that dichotomy, because I think the radical nature of the Darwinian intervention, and I'm not saying that he was the first to say these kinds of things. You can even think of it as a pre-Socratic way of thinking. But the radicalness of that is typically underestimated. The neo-Darwinian attempt to integrate it into the physicalist or mechanical philosophy to identify a privileged sort of basis of causal reduction in the gene, et cetera, actually moves us back into that platonic conception in a really important way as well. So I do think, incredibly schematically, that dichotomy is pretty useful for us because it's all about what's the a priori, to what extent are we relying on something that doesn't have its own genesis. You can always say that variation in the Darwinian thing is the thing that doesn't have its own account. It's just the thing that's taken a priori as given. But I would want to also signal that there is a way in which this Darwinist view is also related to a Platonism in the very weak sense, that it acknowledges the reality of possibility or potential. So it's not deterministic in the way the mechanical philosophy becomes when it collapses the forms into the actual. Whereas in the Darwinian account, there's a very open-ended and real sense of possibility. It's just about how is that possibility structured? 
How does it become what Mike would refer to as a latent space rather than proposing or postulating that it's always already a latent space with forms inhabiting it? As Peirce would say in his attempt to generalize Darwin into a metaphysical way of thinking, the account of the evolution of the universe has to also be an account of the becoming and the evolution of the forms, not taking them as a priori. I don't want to turn the whole conversation into this. I just thought that was a useful schema for us to begin with.</p><p><strong>[14:37] Unknown:</strong> This was wonderful. It was a wonderful kickstart. I agree that calling it, or at least putting it in these terms, even if loosely, is helpful. Perhaps there is an even deeper dichotomy inside what it means to be Darwinian, because in "On the Origin of Species" he lingers a lot on this question of polymorphism as variation. There are fluctuating elements, specifically in polymorphic species, where Mike raises a question of the embodiment or disembodiment of memories. This dichotomy then seems to extend to Brouwer and his lectures on mathematics, philosophy, and consciousness, where he says that the purest thought is mathematics. Given what we know today, and in frameworks like TAME, we can read these texts in a new light. We can ask: is there a non-Platonic sense now that we know these things, or now that we're tackling the problem from this perspective?</p><p><strong>[16:12] Tim:</strong> I love the Brouwer reference, so that opens us up onto a whole other world of discussion. Another thing that I know was brought up in the previous discussion would be about mathematics as a language versus, say, natural language and versus other modes of expression like music or, I'm a chemical ecologist, chemical modes of expression and other things. The question is whether, if we agreed with Brouwer that mathematics is the purest form of thought, quote-unquote nature is so pure. 
So in fact, is that purification a kind of simplification or coarse graining in order to achieve that level of purity and precision? I think that would bring us back to the history of Platonism in some sense and this association of the forms with something that's pure, it's not fallen, it's not full of accidents like the world of appearances. These things get incredibly rich. I also want to talk about what Adam said in the chat about convergent evolution, because I think that's profoundly relevant. It has been invoked for its relevance to these discussions of Platonism by people like Alfred North Whitehead, but also in this recent discussion of the Platonic Representation Hypothesis. I would say, there's a lot to say about convergent evolution and the role of shared descent, as well as shared adaptation to the same environment. Before, in a sense, we appeal to a kind of Platonic hypothesis that organisms are converging on shared a priori forms, we have quote-unquote mechanisms or ways of thinking about convergent evolution that don't rely on those. The question is, what's the limit of that, broadly speaking, Darwinian mode of reasoning? You brought up carcinization. I always like to say, crab forms have evolved six times. Venom, which is one of my areas of study, has evolved more than 100 times independently. So, there are some incredible examples that philosophers could pay attention to when it comes to convergent evolution.</p><p><strong>[18:39] Unknown:</strong> I think when you look at the two different spaces, you can start today formalizing the architecture of how that works. And I think what's really interesting is when you treat these processes as computation, when you define what a finite observer, a bounded observer, can do in this infinitely complex space of forms, you get this coarse graining and you get this dynamic of trying to sample efficient structures that are predictable to increase what you can sample later and have more choice. 
So you get this Darwinian mechanic from the structure of a computational object or a computational possibility space. The model for observer theory is based on Stephen's ruliad, which is computational. It's any causal chain. And to make that space have any meaning in physics, it has to close. So you have to be able to get geometry out of it, to get maths out of it, to make physics predictions, which is what Stephen does. That point at infinity gives structure to the space, but it's the point where every causal chain ends, where every diagram commutes, where everything limits. And that's a sink. And that acts like a telic attractor. It's a sort of informational attractor. It's got every possible causal history in it: any multiverse, any type of math, any platonic form you can imagine, any physical instantiation, in an integrated map. And because that map commutes overall, you can say that structure has that telic pull, that gradient, that fitness, like in a fitness landscape, which is driving observers towards computationally efficient forms that enable them to sample more of that space. And I think the innovation of that computational language allows you to start doing things with the tools we have today, modeling with LLMs. One of the things I'm working on at the moment is a test to probe whether different computational architectures converge. There's a paper called the Platonic Representation Hypothesis; I want to see if that applies across different architectures, narrow architectures such as AlphaZero or chess engines, to see if they have a hierarchical mapping of concept space in the embeddings that they have in their models. And I think you can start to probe these objects more today than at any time before because of the advent of technology. So I think we'll start to get more answers on these directional questions, whether it has to be separate or it's the same. 
And I think the idea is that structurally, if there is a structured computational space, or a set of all possible computations, and you can import physics from it, then that structure should be found in a coarse-grained fashion across these experiments. It won't be definitive, but it'll give a hint that maybe this thing is actually a real thing as opposed to an abstract thing we're constructing to make sense of the world. And trying to see if top-down and bottom-up causation can work together or whether it's really all constructed bottom-up and it's all emergence is a question that computational experiments are going to let us answer over the next few years. We're going to start getting directional hints about it.</p><p><strong>[22:23] Michael Levin:</strong> My current model, I don't know if this is a chimeric version of the two views that you guys were talking about, or if it's a third thing yet. It seems to me that there could be a variety of different forms. It doesn't seem to me like the forms all have to have the same character: either they're pre-existing and that's it, or they're evolved. There are numerous different ones on that spectrum. For example, there are biological ones that I'm perfectly happy to have modified by evolution and various other things. There are others that seem like they have a lot less of that character. For example, the value of e, the base of the natural logarithm. I don't see it being downstream of evolution. I don't see it being downstream of anything that happens in physics. Maybe it can change. It seems like one of the more stable ones out there. I think we could say that there are ones that have this really fundamental stable character. There are others that are either novel or modified by things that have happened later. This gets into the naming, because when I started talking about this stuff I said "Platonic space" only because then at least the mathematicians knew what I was getting at. 
Some percentage of them said, "Yes, we're already on board with this." Clearly, the model that I'm pushing is not fully Plato's model. I don't know what to do with the naming of it, and some people hear "Platonic space" and they're very upset and they say, "Absolutely not." They say, "Fine, 'latent space.' That's good. Now we're happy." I don't know what exactly they see as the difference. Also, I'll point out certain things that happen where it seems like you get more than you put in, and people will say, "These are just regularities." I say, "What does that mean?" "These are just things that hold true in our world." What are those things? Random? No, they're not random. Their position is: "I don't want a realm. We've got some things that seem to hold true. We don't think they're random, but they're not a realm." Somewhere the terminology needs work; we're going to have to work on the different variants of these views to really say what it is that people really hate so much when they think it's a realm. What else do they have that isn't a realm that to me always sounds like a realm anyway? I think the nomenclature is going to need some work.</p><p><strong>[25:13] Matt:</strong> I don't know if it's good news or bad news, but when you read Plato's dialogues, there's no one model that Plato leaves us with. He leaves us with many different possibilities. The best criticisms of Plato's forms are in Plato's dialogues. But obviously, with the term "Platonism," anyone who's read some philosophy of science, and maybe some Karl Popper, is going to have a reaction to Plato, all sorts of associations. I understand why you chose that. You're right, Mike. I'm glad you're pointing to that. There are different forms of forms, as it were, some which we can understand as historically emergent in a Darwinian sense, and others which seem more necessary or almost metaphysical. 
It seems to me that rather than having to choose either variation first in the Darwinist approach or invariance first, which we could say is more the Platonist approach, for variation to lead to anything of significance in terms of historically emergent forms, you already need seeds of invariance. So there could be some forms that are truly invariant that allow there to be a selection process by which useful forms, other types of forms, could emerge historically. I'm always driven to try to think of the interplay between invariance and variation. It becomes difficult for me to make sense of the idea of a full-bore Darwinism in the speculative sense of variation first, getting all form out of that because of the examples that you would point to, Mike, that seem not historically emergent. So I want to have it both ways.</p><p><strong>[27:12] Tim:</strong> I also want to signal agreement that it's highly likely, almost certain, that there are those forms which we're not going to get behind from our position radically in medias res. So what I'm calling this Darwinian mode of reasoning is a wager; it's a method. You could think of it as an attempt to identify those forms that we absolutely can't get behind: which are the ones that are absolutely non-deconstructible? And they may end up appearing to us as conditions of actualization. For there to be anything at all, it would appear from our situated perspective that these forms were required. But there is, of course, a speculative evolutionary account of that. There are anthropic principles. There are still options available in the way we think about those sorts of things. But just to say that it's very different to claim that we can explain everything and we can get to bedrock variation first and somehow bootstrap ourselves up to a full cosmos. That's already the rationalist claim in the history of philosophy. My claim is that you won't be able to do that, essentially. 
And I am saying there is a limit to the rational intelligibility of reality, in fact. I tried to say that in my talk for this session. We are going to have to recognize those limits, which means there may be things we need to take as given, things that we simply can't explain. I would just want to signal the agreement that it's very clear that there are different kinds of forms. And I've previously spoken about this and published about this as a temporal hierarchy of constraints. I don't know if I like that term myself. Terminology is always really difficult. Some forms came into being very, very recently. Some forms are incredibly ancient. Those are real salient differences. They are going to impinge upon our capacity to give a genetic account of certain forms.</p><p><strong>[29:42] Michael Levin:</strong> Yeah.</p><p><strong>[29:44] David:</strong> So let me switch gears on the philosophy here. I want to talk about some practical biology for a second. Let's imagine the first cells to come up with the idea of multicellularity, and they start communicating in some way, chemically, electrically. What constrains the kinds of shapes that they can make, the kinds of behaviors they can have in this very primitive state? Is there something already there that they can or cannot do, possibilities they have? Michael.</p><p><strong>[30:40] Michael Levin:</strong> You probably have some thoughts on that. I'll just throw out one thing because it's the same axe that I always grind. Probably Tim has other thoughts. There's a lot of really good work on bacterial biofilms that are almost multicellular. Gürol Süel does this amazing work showing what he calls brain-like electrical signaling in biofilms that allows them to coordinate and act as a collective. But one issue that I always talk about is how much do you put in and how much do you get out? What are the examples where you get out more than you put in? Here's an example of this. 
Once evolution finds a voltage-gated ion channel, you've got yourself a voltage-gated ion conductance. It's basically a transistor. You have a couple of those, you can make a logic gate. Now you automatically inherit all of these cool things about the truth tables: NAND is special and all this other stuff. You didn't have to evolve any of that. You get all of those cool properties for free, right? Having made that interface, you now suddenly inherit these things and you don't have a choice about most of it. That's just what it is from the laws of computation or math or logic. I think evolution can make use of all of that. There will be facts about the way that computation is done in networks of 2D surfaces of biofilms: some constraints, some enablements, and some free lunches. I'm sure Tim's got a bunch of examples that you can make use of. I think looking at those bacterial cases is pretty informative.</p><p><strong>[32:36] David:</strong> It goes even earlier than that. When you have genetic regulatory networks, you also have logic gates.</p><p><strong>[32:50] Michael Levin:</strong> This stuff isn't published, but I have a student who's looking at training. We've shown training of gene regulatory network models. She's doing training of Lotka-Volterra style population dynamics, and you can train those too. If you actually look at the space of parameters, of what it takes to make them have habituation, sensitization, these various things, that space is really interesting. It has very specific shapes in this space. It isn't homogeneous. And where does that come from? There it is.</p><p><strong>[33:34] Tim:</strong> I think when I said constraint, I didn't mean not enablement, of course. I meant enabling constraints as always. That's the role of invariance that we're talking about here, which is that you need something to hold things in place so that you can do a theme and variations. 
I'd love to get to a chat about music here as well, because I know, Mike, you're planning some of those discussions. But to stay with the biology for a second, without getting into heaps of detail, but to respond to what David was saying, if certain physical, enabling constraints are 13.7 billion years old or whatever, when life emerges four plus billion years ago, it has to be in conformity, but it's enabled by those; those are already the enabling constraints of living systems, right? And then thinking about things like logic gates and all the amazing work that Mike has done on the capacities that minimal cognitive systems or minimal biological systems have, et cetera. I still think we can think about this in terms of relationships of adjacency. We don't have to posit that all of the Boolean logic associated with the use of logic gates pre-existed the genesis of that ion channel. We can say that in some sense, when you have a certain kind of actualized relational structure in the world, it then brings into definition a set of adjacent possibles, to use Stuart Kauffman's term. Again, it's hard to understand what it would mean to say that all of that logic pre-existed the logic gate itself. We talk about an interface theory. We're never going to pull things out of the platonic realm, so to speak, without the existence of an interface, in Mike's terms, whose structure of functional operational capacities is what enables those forms to be ingressed, if we're using that language. But it's a further metaphysical step to say that those forms somehow pre-existed, as opposed to are themselves given a form of definiteness because of their adjacent relationship with that definitely structured actual physical, if you want, interface. What the Darwinian conception here is saying is that interface naturally contains within it this potential, which is just variation itself. 
So if we look at biological systems and we look at stochastic gene expression and the non-stereospecificity of interactions between molecules and Brownian motion in and between cells and all this stuff, there's all this crazy indeterminate variation going on all the time, which in a sense you can think of as always spreading out, palpating a space of adjacent possibles from the actual form structure that is in existence. It's a little bit of a jump, but I think of this also as the way the mathematical landscape itself expanded in the history of human mathematics. We know that there's a whole load of maths that is not applied, that is not physics. Physics, the maths of relevance to physics, is this relatively small aspect of the mathematical landscape. We could therefore get to thinking that that's just a subset of something that pre-existed it and is much vaster than it. But if we look at the history of mathematics, it's the other way around. People discovered things in the relations in the empirical world. They learned how to reason about them mathematically. There were economic and other utilitarian justifications for the development of those tools. And from understanding the principles, like the relational principles diagrammatically, as Poincaré would say, mathematicians are interested in relations, not objects. You can remove the objects as long as the relations stay the same. It's no different to us. So it's diagrammatic. But by understanding the principles, there's a way that you can keep spreading by unpacking the consequences of those principles. Again, those relationships were found in the empirical world first.</p><p><strong>[38:11] David:</strong> I want to push back on that just a little bit. Let's get back to our cell forming a gate. It has to be that the potential for on and off is already in the</p><p><strong>[38:28] Unknown:</strong> material. Like resolution. 
It has to be there.</p><p><strong>[38:35] David:</strong> There's no making an on and off switch unless the material that you're making with can already be an on and off switch.</p><p><strong>[38:42] Tim:</strong> But you're saying in the material.</p><p><strong>[38:46] David:</strong> No, I'm not saying Plato is out there, but Plato is actually in the material itself. I can go with that. But when you start talking about it, I want to push back on what you're saying about mathematics, because it seems to me that mathematics is not just a flowering tree that could go in any direction. I think it has a structure to it. I think the way you understand the relationship between, say, geometry and algebra and calculus, the more you look at it, in group theory and set theory, logic, there seems to be a structure to it; it has some kind of a unity to it. You just can't make up any kind of math you want.</p><p><strong>[39:40] Michael Levin:</strong> This, I think, is the issue. Tim, I'm okay with — we don't have to say it pre-exists because I don't know what time would be doing there anyway. So that's fine. It doesn't have to pre-exist. But there's some specificity. In other words, you've got this particular fact about NAND, or that there are four colors: the four color theorem, not the eight color theorem. You get a very specific thing out of it, and you can say that it sort of came into being when you made the interface. I'm okay with that, but we still need to say, is it random? And I agree with David, I don't think it is random. So there's some pre-existence. Now we're back to there's some reason why you've got this and not something else. So something is making that selection.</p><p><strong>[40:27] Tim:</strong> I think random is a very misleading term, the way random is used to talk about indeterminate biological variation, for example. Abject randomness is in some sense an abstract fiction. 
So if I'm going back to biology and I'm talking about stochastic gene expression or whatever, it's not like just anywhere in the universe that those genes are being expressed. It's in a very strict relationship of adjacency with all of the "quote-unquote" machinery that exists to produce those genes. It's just that this distribution of genes, the concentration of genes, say, in different tissues, different cells, is tightly regulated, but it's never regulated perfectly. It's never regulated absolutely. A protein structure can evolve to achieve a very high degree of specificity, but it's never absolutely specific. There's always a chance that it's just going to stick to something else because molecules are just sticky and it might have some kind of off-target effect. And that's one of the major ways that novelty emerges in biological evolution. So I'm absolutely not saying that it's abjectly random or anything like that. I'm saying as soon as you have any kind of structure, it acts as an enabling constraint on the development of further structure. So it makes complete sense to me that mathematics would have in some sense this kind of unity. And even complete branches of mathematics that are considered to be completely distinct keep discovering the same structure. It turns out you can say the same thing in a different language in some sense. That makes total sense to me if mathematics in some sense is born from this shared origin in the practice of mathematizing humans in actual contexts. Mike, you and I have been back and forth on this for a couple of years, I think. I'm not saying I have an account of how I would explain the genesis of the four-color theorem or fucking bounce constants or whatever it is. I'm just saying it seems premature to me to say that no such account is possible.</p><p><strong>[42:35] Unknown:</strong> I agree. I tend to agree with opposing views. 
I like this example of cells communicating, especially because I don't have a concrete stance, but I asked the question if biological forms, for example, were hearing shapes and not forms, in the sense of Mark Kac's question about hearing shapes, that you can effectively recover this infrageometric information or some sort of data. Since it is persistent, you can also posit that there is some Platonic prior that you can recover consistently, which I find really interesting. Perhaps cells—let's speak of an architecture, a plant—and you ask if a plant can hear shapes. In that sense, you just do the same path that Kac did. It is completely plausible if you understand hearing as processing some sort of signal by mechanical transduction, and then you have specific genes, and then you have ciliary arrays. It's completely possible that you would do wavelet transforms. For example, if you want to recover the peaks of a transform like this, that would modulate the auxin signals; it is completely plausible. It would give you intervals, and in terms of mechanistic expression of a pattern, it is also plausible that we ought to relate it to symmetry because the peaks of the Fourier transform—it's completely plausible. I would also invite this other theme, which is Hermann Weyl's conception of pure infinitesimal geometry when he was trying to unify gravitation and electromagnetism. He came up with many beautiful constructions. I know we are past 100 years of Weyl's work. But the fact is that even though Einstein commented that his ideas were beautiful but unphysical, now, 100 years later, we have light–matter interfaces coming out from it. We have Weyl points that have been experimentally observed. Perhaps we don't need to choose between a metaphysical or a physical perspective. There seems to be something here by which we can recover this kind of information. 
I find it interesting on a cognitive level if we bring that from ciliary arrays doing these transformations and then architecture expressing these patterns. I find it interesting, but I don't know how to answer the cell question specifically on a biological level.</p><p><strong>[46:03] David:</strong> Let me ask another question about this. What is the difference between a group of cells that are just responding to a chemical stimulus in their environment — they're moving toward a food source or away from a toxin — and a group of cells that's actually processing that as information about where they are in the world? Or a plant that's growing toward the sun automatically, or one that's actually processing information about where it is in the world. Michael, you want to take a stab at that?</p><p><strong>[46:49] Michael Levin:</strong> I'm going to see if I can find a cool example. Have you guys seen the Physarum example that we have? What you have is a dish like this, about 10 centimeters in diameter. We put three glass discs on one end, one glass disc on the other end, and a little slime mold in the middle. The glass discs are inert. There's no food on them. There's no chemical. What you're going to see — I'm going to try to find this because this has to be seen — is that for some hours the Physarum sits there and it vibrates and it tugs on the gel that the whole thing is sitting on. It reads, as it turns out, the strain angle of the different masses in its vicinity. For several hours it does this and it doesn't do anything. It doesn't go anywhere. It just does this. I think what it's doing is gathering information about the environment. Then it goes preferentially to the heavier mass. That's one of my favorites.</p><p><strong>[48:17] David:</strong> Examples. It seems that example is crucial for pushing back against the sort of emergence physicalism view: if you can experimentally show that organisms are actually representing where they are in the world. 
That's very basic math. I would say that you have to have some kind of a representation of spatiotemporal orientation if that's what we can actually show.</p><p><strong>[49:01] Michael Levin:</strong> So here it is. These are the glass discs here, three and one. And this is the little Physarum. So for the first few hours, it just does this. And it's going everywhere at once. And I have a video where you can see it tugging. And then, boom, at that point, it decides to go for it. Wow. And then bang, that's what it'll</p><p><strong>[49:29] Tim:</strong> do. It's doing a random walk, and then suddenly it becomes oriented. And I think this is really fascinatingly consonant with something like Waddington's conception of the neutral accumulation of genetic variation and then the reconfiguration of the epigenetic landscape, and the process of genetic assimilation when an organism enters into a particular environment and something elicits that adaptation from it. As Mark knows really well, these are very big and ongoing conversations in evolutionary theory around things like evolvability, the role of redundancy, the role of robustness, and where those two things are the same and where they're different. I'm always wiggling my hands this way; I'm a big gesticulator. This is 'random, spontaneous' behavior. If suddenly something elicited a reaction from me, I might point directly or I might make a shape with my hands. My point there is just that biological systems are always doing this spontaneous thing at the molecular level, at the behavioral level. They're reaching out, they're palpating an environment and they're seeking a signal to bring this into the information territory. They're seeking something which would tell them, go this way and not that way. Be this and not that. This is what you need to be right now. You've got this capacity to be lots of different things. You're phenotypically plastic, but right now, this would be a good thing to be. 
And so information then is this relational thing that happens between two different systems, organism and milieu, or two different organisms. It's a mutual reciprocal relationship of elicitation. When the signal comes in and it is 'meaningful' because that plant actually requires light in order to photosynthesize because of its evolutionary history, that's how I tend to think about these sorts of things. And I think, again, Mike, your work is incredibly pioneering in this way that you can look at the slime mold. On the one hand, you could have told this story at the molecular level, and it would almost be a kind of evolvability story: the states of the slime mold are evolving. But you could tell the same story at a different level or in a different aspect in a way which becomes a behavioral story or a cognitive story. And so there's this fascinating unification of a kind on offer there. I've said this to you before, Mike, but there's a way of thinking about cognition, in this general framework that you give us, in which it almost becomes synonymous with what evolution means, if you think about evolution in a really generic sense. To Sam's point, you said some really fascinating stuff about Wolfram's model and computation that we haven't picked up on. But computation is an evolutionary process, always already. It's no shock if there's a really intimate relationship between evolution and computation because they've always been intimately related. And you can even just go into the history of the word evolutio and how it means unfolding, and how it had an algebraic connotation before it ever had a connotation in biology. There are so many rich resonances here. I wouldn't want to be seen here as flying the physicalist flag. I'm not advocating for some kind of physicalism. I think physicalism is more platonic and more idealistic. I know that's counterintuitive compared to 'Darwinism' or 'Darwinian' or whatever. 
I call it ontogenetic because I prefer not to invoke Darwin's name so often because it's like invoking Plato's name. People are like, that's what this means. So an ontogenetic alternative is definitely not what I would call physicalism. I think physicalism is a formal theoretical approach to a way of understanding the world basically grounded in effective theory. That's a whole other conversation. I'm not allying myself with that. Maybe it's a genuine alternative.</p><p><strong>[54:05] Unknown:</strong> I think one of the things that's interesting is you can model evolutionary processes on very basic cellular automata. And when you talk about patterns, you get this linear progression, then some exponential jump as the cellular automaton discovers a novel rule that increases the number of steps it survives for. And those jumps are discrete. And those discrete jumps are really when we say the object has changed from one thing to the other. So in your cell question, the idea of bulk orchestration, or top-down causation, comes from a group of objects that have bound together and exhibit small-world network properties, where the communication channels reach a synchronicity that means the decision is basically everywhere in the network all at once. It's called a superlinear speedup. That dynamic gives you that top-down causation where that single cell thing has within it the communication ability, the ability to couple and find information from the environment or from other cells in its neighborhood. Once enough of them come together and they're close enough, that orchestration kicks in, and that's where those free lunches come in. Because you've now exponentially risen up the curve of how much information you can handle, how big your internal model is, how much you can predict. 
And here, the model is that this world of latent space, idea space, possibility space, whichever name you want to give it, that structure is invariant, and objects are bigger or smaller based on how many equivalences there are within the computational network. Now, it's not saying the actual thing is a big computer; it's saying that model is a coherent way to make predictions, and that this is the language of these formalisms, from Plato even to the theologies. All the metaphysics, all the major theologies describe the structures of these spaces. And I think today, with these network models, you can now be more specific and you can now test things like evolution through that space. If you have an object with n-many equivalences in the network, does an agent put into that space discover it faster or slower? You can actually start to run quite coarse simulations of the dynamics of the space that, I think, for a long time have just been talked about. And that's really interesting because all of the experiments that are coming out of Michael's lab and some of the other people on this panel are pointing in that direction. I think it's a super interesting formal program where these tests, these things can be tested not just in observers like us or animals, but they can be tested across novel substrates like computers. You can start to answer that question. And when you have a structured space like that, then you start to ask deeper questions: are ethics computationally valid? Can you model ethics computationally? If so, can you teach a computer them? And those languages you get out of discovering this space are causally effective. Whether it's ontologically real — whether there's a giant Indra's net all around us — is hard to tell. Whether it's causally effective in our world is probably the more important question that we can, I think, start answering. So I think that's the most interesting thing that's happened in the past two years. 
All of these ideas start to bring ideas of infinitary space and infinitary explanation back into physicalism in a way that should be quite explanatory.</p><p><strong>[57:51] Tim:</strong> I think that's beautifully put. I really agree with the promise of that new kind of science, the experimental or computational method. And I do think sometimes that promise, that potential, gets collapsed a little bit when we immediately feel the need to move into metaphysical territory and say, well, that means that the universe is a computer. I think it's an incredible way of experimentally testing various evolutionary models because they're all evolutionary to me, because intrinsically, computational models are evolutionary. And I love what you said about saltations, jumps, phase shifts, leaps in a state space. And I think we see a ton of that in biological evolution, actually. So I don't think that the so-called gradualist assumption particularly holds. It holds on certain scales, but we also see a lot of leaps. What I brought up with genetic assimilation and Waddington and Richard Goldschmidt's idea: these ideas have always been present in evolutionary theory, even though there has been a mainstream of neo-Darwinism that tried to squash them.</p><p><strong>[59:09] Unknown:</strong> That dynamic's not just seen across biology. It's seen in our social structures. It's seen in how we organize ourselves. It's seen in how economies grow. It's seen in how political systems change. We have a long linear progression or some mildly chaotic but linear progression. And then there's a change and there's an exponentialization. The network reorganizes. It settles to a new local optimum, a new peak or valley in the fitness landscape, depending on which way around you've got it. And then it keeps going. But this applies not just in evolution because it's computational and ultimately we compose our explanations computationally to communicate them. 
That dynamic, if it's proved in computation, the simplest system, must be running in more complex systems at much higher resolution. So these dynamics can now be explored in that space of memetics, that Dawkins sentence in the book that should probably have been another book, and in Susan Blackmore's work. They now become causally effective if you can also put physics within that same language. And that's one of the interesting things about these models. It's that you can now compose between those structures. And because there's a natural geometry inherent in those objects, you can compare the properties. And so you can have ideas about symmetries, ideas about the boundaries of those objects and how hard they are to capture, how much coarse-graining goes on, what happens when we actually sample these objects. Do they become easier or harder to sample? Is there a point where that changes, where that object becomes invariant under repeated sampling so that we know it's maximally reduced for us? What happens to that concept in Platonic Space when that happens? And it starts to put these observer-centric models as explanatory in the context of how we interact with information that isn't wholly explained by physics, biology, chemistry. The content of an emotional experience can be explained with an EEG. But if you ask someone the contents of that experience and you say, "Is this the data, unspooled, all the data?" you will normally get an answer that's no. And because of this language, because you can now compose those things in an integrated map, you can start to make harder empirical statements about what you think the structure of that space is, whether or not the space is structured; how my paper hypothesizes it or speculates is beside the point. 
It's that, within this architecture, you can speculate about all of those dynamics to try to formalize and test these ideas, sometimes quite intuitive but also informed by lots of experience, about bigger metaphysical questions that are harder to answer. And that's one of the interesting things about this change in the language because it starts to join up so many different domains. And you start to get ideas that are mathematically proven that can be applied across spaces and concepts that don't normally seem to lend themselves to it, at least in how we think about those subjects today. That's quite an interesting thing about this symposium: you get those perspectives on what those optimal models are from 20 different disciplines in 20 different languages. And so you get this coming together, pulling apart those ideas, which is how you formalize something like this, which is going to be quite important over the next few years.</p><p><strong>[1:03:04] Unknown:</strong> Let me ask you what you think about biology — the easier models. Let's take two different neural networks, two different architectures of neural network, trained on the same data set. Do they create a different world perception or not? That's the same data set.</p><p><strong>[1:03:35] Unknown:</strong> This has been tested, right? Above a certain number of tokens, large language models trained on transformers have convergent representations in their weights. This was a result from last year. They've now applied that test slightly more broadly with some different measures across a couple of multimodal vision and transformer-based word architectures, where again they're finding, in that paper called "Universal Subspace," areas where the representations converge. 
Now, whether or not that's constructed in the data set, i.e., because we've chucked in all of our pictures, our words, et cetera, comes out like that because of us, or it's discovered is not yet an answered question, nor is how that space is structured or the properties of that space, because those architectures find it harder to import coherent geometry across different models and different types of design. There's a guy called Markus Buehler who does some really excellent work on this. I think he's at MIT, and he's been doing graph-theoretic representations of these concept spaces. What you're trying to do is move past that test to whether you can test if there's some discovery or some construction where the domains are so separate that it might point to it. But if it's totally different training data or it's a narrow domain, is there a structural discovery, not what's in the structure, but is there order or hierarchy that suggests that this platonic space is not just words, there are still girders holding it up. But say this is roughly how we split things</p><p><strong>[1:05:25] Unknown:</strong> Large language models, I suspect, have influenced the creation of that kind of concept of a platonic world. But I'm saying, assuming that you have a completely different model that doesn't learn based on attention but learns on something else. If they create the same world, does it mean that there is only one platonic representation of the world and we just need to find that world? Or are there many?</p><p><strong>[1:06:14] Unknown:</strong> I guess the way to think about it is in terms of the size. So the form of a chair is, as an object, informationally bigger than all the elements of that set or that category of the form of the chair. So every individual chair that you can possibly imagine is contained within that object. So when you have a word, imagine that as an object, a category. Now it's a smaller category than maybe chair. It's a more bounded category. 
There are fewer things it connects to, or fewer instantiations of it, but it's still got to be mapped. You're only mapping with a lens that's small. You're mapping an object. It might be countably infinite or even finite in terms of the composition of it, but you're counting it with something that's doing it one at a time. You're not going to ever fully map that space. Even with something as simple as a word, you're going to get multiple embeddings, but they'll be close together in that space. Similarly, as you move down to things in physics, those things will become discrete. Why is math powerful? Because it's discrete; you get an answer. Those objects become invariant and you can map them fully, which is why they're useful in the computational observer model. Because if you have finite computational power, you need to do more mapping. You want to see more of the space. You want to reduce or compress as much of that into your model. It's a discrete measure version of something like FEP, where that surprise is I have to do a lot of computational work to fit this object into my model. Then I need to make it smaller and compress it. When I sample that thing again, when I practice doing something or when I learn something, that thing gets compressed more and gets more equivalences in the object. It becomes easier for you to integrate into your world model to make predictions. That dynamic means you map that space. Even though that object exists, you're not going to fully map it or perfectly map it with a finite budget.</p><p><strong>[1:08:20] Unknown:</strong> Hananel, it was a great answer, Sam. Your work is fascinating. 
Going back to your question, Hananel, experimentally it could be interesting to test for something: in terms of computation or computational power in terms of operation, not only do you have allocation operations, you have this thermodynamic or dissipation layer at play, which depends on where the model is being run and what the computational constraints of the architecture are. That is, whether it's related not just to the words, but also to, let's call it, the kernel dissipation in a sense.</p><p><strong>[1:09:20] Unknown:</strong> In the last couple of years, a lot of really talented researchers have come up with multiple measures to figure out this stuff. Some are kernels, some are the graph representation. There are five or six different measures. What you're trying to do is get to the right measure for it, where it probably is some composite of those measures. It's a very live question for me and something on which I need to get to a tighter answer.</p><p><strong>[1:10:00] Unknown:</strong> In terms of computational topology, we can find defects or you can do eigendecompositions or expansions. It feels really interesting to see what would be the form in points, pointed spaces, for example, and the density, and also articulate it with the volume-to-area proportionality.</p><p><strong>[1:10:26] Unknown:</strong> Weyl's law. This is one of the interesting things about an LLM's architecture: because it's very complex, reducing it non-trivially from n-dimensionally many weights to some 3D representation, or to a few dimensions, is hard. 
So I think there's a bit of work to do, but what's quite interesting at the moment is a lot of people are working on geometric computational engines for inference or for fusing or for virtual machines, and these create maps that prefuse computation, so that representation, because it's a coherent map that has easy composition between it, might be more able to accurately map that physical representation of a shape with those properties. But it's a bit early. They're still in a really interesting foundation called UL.</p><p><strong>[1:11:28] Unknown:</strong> Perhaps you would get more discernment in terms of effective temperature measures you want to take. I've been experimenting with Mike's data, especially the bioelectric code. Normally, on a substrate, it seems the same temperature will get you a very large spectrum and then you cannot make heads or tails of it. If you make this discernment between what would be an effective temperature of a bioelectric wave and discern it from the medium, you get a much narrower space. Although it is early, it's an inverse-inverse problem that maps exactly to what you were saying. In, for example, cell-cell communication or a damaged embryo, you're discerning the mechanical wave of an embryo trying to engage in intercellular communication. I'll keep looking at your work.</p><p><strong>[1:12:57] Tim:</strong> Really fascinating stuff. Returning to one of the broader themes of the conversation, but a couple of things that you were saying, Sam, including when you initially brought up Wolfram models in your first contribution and talking about convergence. One of the things I think in the history of philosophy that these conceptions that start with, say, notions of infinity, an infinite plenitude of forms, for example, struggle with is what I call the selection problem.</p><p><strong>[1:13:34] Michael Levin:</strong> you end</p><p><strong>Tim:</strong> up with an issue of why these forms are not that forms. 
Alfred North Whitehead speaks to this very, very directly. This is a problem that you get in string theory; there are so many different solutions for the vacuum. Why this particular one? And so you end up with a formal system that's, of course, capable of encompassing the actual world in some sense, at least at the level of the abstract language that it's using, but it is radically underdetermined by the world in some sense. It encompasses much more than the world, and you have this problem of selection. This is somewhat related to what you said subsequently after I mentioned saltation, where you mentioned these leaps in cellular automata. You were saying it's a general phenomenon, which I completely agree about. Phase shift, criticality, all of that. We all acknowledge that these are general phenomena, but the way they tend to be explained is in a Hermann Haken synergetics way: there's a decay of the order parameters, the system goes into a more chaotic phase of its evolution, and then it's captured by another attractor. It leaps in the landscape. You mentioned a fitness landscape; it could be an energetic landscape, whatever, to another attractor, another basin of attraction, and it ends up there. And now the new non-equilibrium steady state that defines it is there for it. This comes up when Carl Friston talks about these things a lot as well. You mentioned the FEP in passing; this came up in his contributions to the previous Platonic space discussion, and in a discussion I had with him and Mike and Chris Fields on Mike's channel a while ago. The real question we're butting up against in this discussion of the platonic space is: where do those attractors come from? If you can explain all the behavior in terms of attractors, these models coming out of non-equilibrium thermodynamics, historically, have relied on, as physicalist models tend to, a predefined space where the attractors are essentially already there. 
Then you can model the evolution of a system through that landscape and it's captured by this one or captured by that one. What such a model struggles to deal with is the actual genesis of those attractors. So, again, the question of the genesis of forms arises. It's really the same question.</p><p><strong>[1:16:15] Unknown:</strong> That's what religious metaphysics literally does in every single form of persistent theology. The major monotheisms, plus Hinduism, Buddhism, and Taoism, are culturally the biggest and most persistent structures. They all do that in their metaphysics. They all have, in the language of those traditions, their own world of forms and their own version of that structure. It normally got formalized maybe a thousand years after the tradition started, and philosophical traditions start to deepen these spaces and talk about the evolution of this possibility space: from the one to the two to the ten thousand things; in Kabbalah, from the unending infinite through some contraction; to the biggest infinite objects we've got in Hinduism, the tattvas and Brahman and Atman and how those work together. They all have this very loose language, but that language can be expressed computationally within that model. It's basically categorical. It's saying these are infinite categories of things. It's not the whole thing, but you can now create a coherent model of how this selection principle applies to an infinite object. Because an infinite object, if you're going to posit that at the very top of the chain, which is what all these persistent things do, give or take some nuance on Buddhism, then you need to be able to walk through that process. This infinite thing must do it infinitely faster or as fast as possible, at least until in some traditions we get free will and choice. 
You can now attempt to bridge it with models in which this platonic space, and the coherence across not just all the religions but Greek philosophy and aspects of the chakra system all across the world, are connected. If these attractors are real and these forms are real, the linguistic construction around the names or what those attractors tell you to do as the sub-order rules for that person are less important than the existence of a structure that seems to be consistently created or discovered. Again, this creation versus discovery point is critical because the question is really, is this space closed or is it open? Now, something like postmodernism in philosophy functions this way in this model: what's my biggest possibility space or my biggest model that I can search with, that I can explore with, that I can exploit to structure my space. And we get to postmodernism, which is the mathematical equivalent of an open category, an ever-branching tree where at the very limit, it doesn't come together. Every computation is infinitely far away from every other one. So there are no speedups at the limit. It's all irreducible; it all devolves to randomness. That's if you were thinking about it as a network structure; I'm really picking on a deconstructivist view of postmodernism, not constructivist postmodernism. But these convergent ideas, whether it's Plato, whether it's religions, whether it's Leibniz, whether it's Spinoza: they didn't have a way to create the architecture of those structures in a way that was coherent with the maths, physics, and computational sciences of their time, because the language wasn't there.</p><p><strong>[1:20:23] Michael Levin:</strong> Yeah.</p><p><strong>[1:20:24] Unknown:</strong> I think we're in a position today where you have tools in language and results in empiricism where these things get connected. 
You can now start to express an argument, not a proof, but a coherent, internally consistent structure that says we can work through how Platonic space has these forms. You may not agree with it, and it may make axiomatic or metaphysical assumptions. Take away theism: there's still some bucket of infinite information or structured space that we are discovering. That's a fundamental brute-fact axiom. You go to the other view: the brute-fact axiom of materialism is that there's a singularity at the Big Bang, or the universe was always there, or a block universe, or a multiverse. So you accept different brute facts. But it's an interesting exercise because I think it speaks to intuitions and structures that we continually create, but don't have process-based explanations for in any formal language. I don't mean logic. I mean this connects directly to results in maths, physics, chemistry. It's the same dynamic, not a different dynamic. I think that's what you can do.</p><p><strong>[1:21:53] Tim:</strong> I substantially agree with the vast majority of that. You're advancing a kind of perennialist thesis on the history of religion. I published a paper several years ago on creation myths and evolutionary process. I absolutely agree that all those myths can be decomposed into operators. I think that's what metaphysics is. The explicit practice of metaphysics is the decomposition of myth into operators. That's what I call diagrammatic metaphysics. Certainly that's what Plato was doing with the "Timaeus". Aristotle was very good at coming up with these diagrammatic schemas, his four causes. I substantially agree with everything that you're saying. How far can the computational approach, which I hold out great promise for, move us? It's a similar question to what I was calling a Darwinian approach: how far can it move us out of that? To what extent will it still be reliant on taking certain invariants for granted? 
To what extent could we say that is not unrelated to the physical structure of the computational object itself? It's not going to get us out. I don't think it's really going to get us out of asking, if one is so inclined, those big metaphysical questions. The other thing I'd point out, which is adjacent to what you were saying, is that physicalism is already a monotheistic mode of reasoning that comes directly out of Christian and scholastic philosophy. People like Newton, certainly, and Laplace developed a highly sophisticated mathematical language for trying to bring together a certain conception of theology with their physical science. That was their project in many ways. That's what I also explicitly critiqued in my talk for this symposium. To your broader point, it's important to recognize the theological or mythological origins of most of these thought forms. I want to hear Matt weigh in on this because this is something that he has a massive amount to say about.</p><p><strong>[1:24:26] Matt:</strong> I know, it's very, very interesting. I'm glad this connection is arising. What comes to mind now is, Tim, you and I were talking offline about the difficulty of putting some of these ideas into natural language. We're searching for diagrams. We're trying to formalize this. And yet we also want to be able to communicate meaningfully about how it changes our self-understanding as human beings. I wonder whether metaphysics can be understood as a translation of these mythic intuitions into some kind of formal operation or set of operations. But if we go back to Plato, it seems to me he's never trying to translate one into the other but instead to play them off each other — let's see how far dialectics can get us. 
It's still natural language, but he was using the geometry available to him at the time, in a dialogue like the Timaeus, to work out some of the ratios he was perceiving in the movements of the wanderers, the planets, through the fixed stars, and so on. In almost every dialogue, dialectic ends in aporia: rationality meets its limits, and then he offers a myth which in some sense illustrates symbolically, imaginatively, what reason can't quite grasp, because reason is inherently limited. Earlier, Tim, you said that rationalism meets this limit, and I think all the best rationalists from Plato to Hegel — if I can call Hegel a rationalist, and Hegelians wouldn't be happy about that — recognize that the recognition of the limit is already to overcome the limit. So I think, rather than imagining we might ever get out of myth, my own orientation is more or less Neo-Platonist, in that I think it's very hard to think outside the grammar that Plato left us. Whether you're in the West or even in the Islamic world, there's just so much that's structured and canalized by Plato's way of thinking. At the end of the day we're not going to get out of the need for myth, and however science advances we're still going to need to tell ourselves a story about what those formalisms and the math mean.</p><p><strong>[1:27:12] Tim:</strong> I fully agree that myth has a very pressing and ongoing role. We may slightly disagree about what that role is, but I absolutely agree. I just want to throw this back to Sam because earlier you were saying that if you show someone an example they would say "that doesn't really explain my experience to me" — that's paraphrasing what you were saying — "that doesn't seem to represent my experience." 
I think one of the things we're always running up against, and this is Matt's point about what we were discussing offline in terms of trying to express these things in natural language, is that we have a bunch of different modes of expression, and you can think about them as different languages if you want. I said this earlier, and I feel very strongly about it: I can say things when I'm improvising as a musician that I would never be able to say in natural language, but they're nonetheless expressing something. There's something non-overlapping. I'm not sure this is what you're saying, Sam, but I would worry about any claim that a computational language could become a master discourse in a way — that the computational language would succeed where, say, the EEG didn't. It would certainly explain different things and it might have vastly more</p><p><strong>[1:28:33] Unknown:</strong> it's your point that it explains a different layer in an integrated fashion, as opposed to saying this is explaining everything in the internal part of the function. If you imagine an observer-theory perspective, you're this complex observer with your big cognitive light cone of all the causal history that's built you up, and you have some boundary of what you can compute, and then you have some limit that you've set in your world model of what you think is possible to compute or what you think you can see—things like myth and religion and even superstructures like fascism, nationalism, socialism. They all function as limit-setting devices to coordinate observers in that space, to try and get them to go the quickest route, or what, with their limited world model, they deem the fastest route. So it's effectively selection and evolution all the way to the birth of myth, and you see it in how religions evolve through time. There have been loads of papers and books on this, but you start with small conceptions and eventually what survives is a big conception. 
So the role of myth is a top-down cognitive apparatus, a way to set the biggest space. When you get to the limit of that space, through the way our apparatus works, the way our brains work, we can't really explain things well beyond cause, input, function, output, cause and effect; we can't really get beyond that boundary. That's where proto-myths or these bigger conceptions of one substance, monism, or unity have utility, because they have a computational function in the way a bounded observer computes limit objects. They can actually compute it. They can say, "oh, this infinite thing I can't compute is equal to one, in order to get my computation to compute." That's a very trite example, but that's the rough idea of how these things function.</p><p><strong>[1:30:44] Unknown:</strong> I wonder if it would be useful, in the context of being interested in explanation in particular, to pin down and distinguish different types of explanatory why questions or different types of targets. What I often see in my space is an interest in capturing what's distinct about genuine, legitimate explanations, what doesn't count as an explanation. There's an appreciation that not just scientists but humans in everyday life ask different types of questions about even what we think of as the same system in the world. We ask different questions about gene expression, or pick the physical stuff of interest, but there are different types of questions. Sometimes we might ask a causal question or a functional question or a question that requires some kind of optimality or efficiency answer. We think of those as very different types of explanatory why questions, and we think of explanations as answers to those questions. The frameworks that were very nicely listed out — mechanical philosophy, forms coming from Platonic frameworks — are sometimes pitched as associated with different types of questions. 
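(A concrete toy version of the "this infinite thing I can't compute is equal to one" move above: a bounded observer can only ever run finite truncations of an infinite process, but substituting the named limit object, here the geometric series 1/2 + 1/4 + 1/8 + ... whose limit is 1, lets downstream computation proceed. This Python sketch is purely illustrative and not from the discussion.)

```python
# Illustrative sketch only (not from the discussion): a bounded observer
# replacing an uncomputable infinite process with its named limit object.

def partial_sum(n):
    """Finite truncation of 1/2 + 1/4 + ...: all an observer can actually run."""
    return sum(1 / 2**k for k in range(1, n + 1))

# The truncations approach, but never reach, the limit:
for n in (4, 8, 16):
    print(n, partial_sum(n))

# The "myth-like" move: name the limit object and compute with it directly.
LIMIT = 1.0
print(2 * LIMIT)  # downstream computation uses the limit, not the infinite process
```

The point of the sketch is only that naming the limit makes further computation finite, which is the computational role the speaker assigns to myth-like unifying conceptions.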
But we think you can't give an explanation for something unless you specify a well-defined explanatory target. There are very different types of targets showing up in these discussions, and we think of them as different types of explanatory why questions. I wonder if it would be useful to distinguish different types of explanatory targets. One of them is "why does this form exist" versus "why does it change the possibility space of what can" — those are very different questions, and a standard causal explanation isn't wired up to handle that possibility-space question in the standard way we think about causal explanation. It also relates to the challenge of words and terminology because "mechanism" and "mechanical philosophy" mean about 800 different things to about 800 different people. Getting precision for what we mean when we say "constraint" or "mechanism" — one way to start wrangling that is to distinguish different types of explanatory targets. It's interestingly challenging to find the right term to start with before you even unpack. Metaphysics is coming up too. That's going to mean very different things to different people. I'm interested in thinking about all of those more, but we don't think there's just causal explanation. There are lots of fascinating debates now about non-causal mathematical explanations and functional evolutionary explanations, which are viewed as distinct from standard causal and mechanistic explanations. Lots of interesting, complicated things. Just this question about the potential use of distinguishing different types of explanatory targets.</p><p><strong>[1:34:46] David:</strong> I think that's a really excellent question. I would go with more of a pluralistic approach. Has someone mentioned Aristotle's four causes? The way I would look at it is that it's what's most useful for guiding particular research programs and experiments you're doing. With Michael Levin's work, I've had some conversations with him. 
The move toward teleological and functional explanation is driven by pragmatics. It provides a certain kind of guidance for hypothesis testing and model formation that's very useful. So when you're stuck with causal explanations for what he's trying to deal with, it is very difficult; it's like trying to explain how a computer works without allowing yourself to talk about software. This gets back to Sam's discussion of computation. The whole computational angle on this is that what makes an explanation useful is partly how we are able to use it, manipulate it in our minds, understand it. Things get to be so complicated that we can't deal with them.</p><p><strong>[1:36:31] Unknown:</strong> I think one way to understand a strategy scientists use to manage complexity is that they pick explanatory targets that are precise and they specify them in a way that's very narrow. And so that anchors what they want to explain. And there's lots of detail that now they can say doesn't matter, that isn't a difference maker for this target. But then what can happen is that sometimes they stray from the target they started with, or we try to lump everything and the kitchen sink into the explanatory target. And, at least the way I think of explanation, you just can't give an explanation of everything about a system. There isn't a complete, whole explanatory target; it's not even a well-defined question. And so it's only once you specify the target that you could ever give an answer. But sometimes it's hard to define that target in a way that's well-defined. And then it's hard to stick with it. We might start by saying, I'm going to give an explanation for why this form explains this set of potential outcomes. And then someone asks, what explains the existence of that form? We've changed the explanatory target now. So you're asking a new question. We've got to change the goalposts. 
Or there's this interesting attempt to put it all in: let's give the whole explanation. And there are debates about the standards that a kind of explanatory target should meet, and also about what notions of causation are useful. I definitely like the pragmatic approach. I'm not sure Aristotle's four causes are what scientists currently use or what's going to get us to the goals we want. So it's also a question of which of these frameworks we want to use, or whether we need to develop them or change them or add to them.</p><p><strong>[1:38:57] David:</strong> I could see having a whole zoo of explanatory frameworks, explaining things in terms of different phenomena and different levels of organization. We've seen that in biology: at all the different levels of organization you can explain things. So I'm perfectly happy to be very pragmatic and pluralistic about that. I don't see anything all that wrong with it either. You mentioned getting things very simple and then finding the problems when you do that. Sometimes that's what you need to get a program going. Look at behaviorism in psychology. It turned out to be badly wrong about a lot of things and short-sighted, but for a while they had it going pretty well. They were able to get a lot done with just a very simple way of explaining, say, animal behavior, and made a lot of progress. After that progress, people said this doesn't explain this, it doesn't explain that; you need to go beyond behaviorism.</p><p><strong>[1:40:17] Tim:</strong> I think that's such an important contribution. And I think we end up potentially talking past each other or muddying the waters continuously to the extent that we don't get clear about the kinds of questions that we're trying to ask, recognizing that researchers from different disciplinary backgrounds may have very different default modes of explanation. So as a biologist, one might think nothing in biology makes sense except in light of function. 
If I want to know why an organism has a trait, the character state that it has, I might need to appeal to a functional explanation in order to feel that I've explained that. But a physicist might not have the intuition for that mode of explanation at all: "we can explain it at this lower level," and maybe that would correspond to a causal explanation. The two intuitions can just be gliding off each other. And this is one reason why biology isn't reducible to physics in some important sense, because we deploy very different explanatory modes. We ask different kinds of questions. But one of the things that happens a lot in this specific context, in my opinion, is an appeal to instrumentalism: "this is the most useful approach, and that's what adjudicates whether I employ it." But then in these broader conversations, there's a subtle sliding over into "now I'm talking about the nature of reality." I can't necessarily tell when, because it isn't clearly specified, we've moved from "this is a useful methodological approach in some scientific domain" to someone making a claim about the nature of reality as such. To the extent that those things get muddied, we have a lot of problems. We've had problems historically, and we still have problems, with the reification of a methodological stricture. For example, certain things have been methodologically excluded so that science can proceed in a certain way, so that we can ask very clear questions. But then there is a tendency, shaped by thousands of years of myth and attempts to understand our status and relationship to the world, to forget that that was a methodological exclusion. We end up saying metaphysically that's just epiphenomenal, or that's not a thing, that's just woo-woo, whatever it is, and we don't notice. Lauren, I'm in an extended way saying thank you for that contribution, because it's incredibly important and something that I think about a lot. I need to read your book, by the way. 
Different modes of explanation within biology are very important to me. Even the basic Dennettian distinction between the how come and the what for is really important for us to get clear on. When I brought up the little typology at the beginning and I said mechanical philosophy and Plato and blah, I did say "without getting into unpacking exactly what these mean," because that would be a whole presentation in and of itself; but it then becomes really important. We've got a few options here. Now we do the diagramming. How do these operators work in each of these things? We can ask, what does each of these potential modes of explanation afford us? What can we not ask when we're thinking in this way? I agree.</p><p><strong>[1:44:18] Unknown:</strong> I wonder if one way this can help, too, in presenting work to audiences for the first time, or to audiences that are critical, is to suggest that it isn't intended to explain everything. This is intended to explain a certain kind of thing, a certain kind of explanatory target. Sometimes there's this criticism that it doesn't do this. That's fine; it's not supposed to. And if you expect that there's a single framework that should explain everything, that's not that accurate a picture of what scientists are doing and the massively complicated and different types of questions they ask. So it can be protective, and maybe it can also satisfy that audience, because we're not saying this is the way to do all explanations, but it does this thing. I wonder if there's a way to specify that. I appreciate the assumption that science and the methods that we use and the utility element should be of a certain kind. I wonder if it's related to reductive assumptions too, where the way you understand everything is always by going further down. 
I wonder if there's a way to specify the goals that are associated with these explanations, such that you could say it is useful for these goals, even if it's not useful for those ones you're interested in.</p><p><strong>[1:46:00] Tim:</strong> But valid goals, right?</p><p><strong>[1:46:03] Unknown:</strong> They could be. The hard part is arguing about the goals. The easy part is that once you fix them, we can say, in a more objective way, my approach gets you to these goals and yours doesn't. I have fascinating discussions with scientists who think of causation as the only way explanations work. And so they want their model to be a causal model. It's dynamical; we think of it as explanatory. It's of course very informative and useful. They want it to be called a causal model because that word means it's a real explanation to them. So there are really interesting issues with the fact that these words have a status. Dealing with that is non-trivial and fascinating. Sorry, Matt.</p><p><strong>[1:47:03] Tim:</strong> Baggage, right? Philosophical baggage, those terms.</p><p><strong>[1:47:08] Matt:</strong> I love that we've ended up here, because to me this speaks precisely to the importance of distinguishing between metaphysics and the special sciences, where each of the special sciences is trying to offer a domain-specific explanation based on a very specific question or problem. And I see metaphysics not as really engaged in explanation, but rather in descriptive generalization: looking at what all the special sciences have found and the sorts of explanations that have very often proven instrumentally explanatory, in the sense that they help me make predictions and control the domain-specific phenomenon that I'm interested in as a scientist. 
Metaphysics then tries to generalize: what are the categories that would apply across all of these special sciences? Not to seek explanation, but description that's general enough to be inclusive of what all the special sciences are doing. And so that helps us avoid any special science saying, I found the one cause to rule them all, and now I can explain everything else. That's a bad form of metaphysics. That's metaphysics as explanation. Whereas I would say we want metaphysics to remain descriptive generalization, not explanatory. Because, as you're pointing out, Lauren, an explanation very much depends on the question you're asking. There's no global explanation, or at least I think we should be very suspicious of the idea of a global explanation.</p><p><strong>[1:48:44] Unknown:</strong> I think there's a language point here about what we talk about when we talk about metaphysics: these overall, really huge, overarching general points and these huge questions that are unanswerable, and then what we can formalize in a common language, the layer down from that, the world of these causally effective abstract objects, these attractors. And I think the interesting thing is that there are many different frameworks, formalisms, and theories in all the hard sciences. But generally, they're all expressible in computational language. And so when you comport a map of those things with a common language, there's some non-trivial benefit, because typically the structure of science, at least in the 21st century, is that a lot of people work on the edges of a discipline, pulling out the frontier of whatever their specific explanatory target is. 
But by joining the language up in a single map, there are a lot more low-order, easy-to-exploit computational free lunches from copied equivalences across domains and across different formalisms that might get you deeper explanatory power within that graph, or at the edges of the graph as well, because it unlocks something in some other part of it. And so, here, what's interesting is that if you comport the language of metaphysical systems, not the overarching question but the systems they describe, you can also map that in a coherent fashion with the computational expressions of all of those theories, in a way that perhaps isn't an explanation but is a structural architecture that can hold those things together, so that they can be probed in a more joined-up way than with the 20 different mathematical languages we have, from lots of different things in physics to lots of different things in maths. And that's, I think, part of why pure math is really valuable, because ultimately pure mathematicians are determining the bounds of that structure, or the operators for that structure that are most universal, most useful to enable that detailed mapping from bottom back to some universal expression of that language.</p><p><strong>[1:51:14] Unknown:</strong> I agree. And I would like to ask you what you think of what we might call pre-linguistic conceptions. Because, as far as I understood it, one also has in basal cognition this mechanism — it's synesthetic, but some would argue that this is the basal mechanism for perceiving, for example, patterns in space without direct observation. 
But there is more to it, such as the potential to find new ways to express this, because otherwise one can always fall into Gödel's prediction that whatever it is we try to describe and not explain will have some sort of blind spot to something that may be rather relevant to our quest.</p><p><strong>[1:52:31] Unknown:</strong> That blind spot is always there, because you're ultimately coarse-graining. You're going to get a very lossy representation of a big meta object that is pre-linguistic. When you think about pre-linguistic structures, we've talked about them in psychology, and Jungian archetypes are the typical example that people use. But they're meant to be big things; every decision you make is a composition of them. In this language, they're highly causally effective. They're a structure that is always present in the function you're running as an observer when you're figuring out how to use your internal model to get to wherever you're pointing in that space. When you go down to the level of basal cognition, you think of very young babies and how they can make out shapes only in black and white. Why? Because that's simple. It's the basic distinction, the basic binary distinction, from which you can start building a world model that's stable. And so that's how that complexity gets constructed. You have these very, very highly causal categories of things that you identify first that are pre-linguistic. And then they're scaffolded with linguistic conceptions, or more detailed or fine-grained conceptions of those objects, as you create more equivalences and as you go through that process. So you handle it in this mapping as a domain. It's a domain where the computational object has lots of coverage over the lower domains or the subcategories. If you're carving up the domains, you do it by negation. You exclude certain informational objects that are more complex, i.e., that don't meet a threshold. 
And then you work down to the most fine-grained and most specified part of that structure, where the most rules, the most computational rules, have to be on for anything to happen, which is the real world we live in today. This domain structuring, this foliation of the computational structure, is one of the things that is starting to come out also in empirical results around things like IIT, where they did a decomposition into four layers. You're seeing it in tests of how LLMs map spaces. Everything maps in layers and pulls together. You see it in brain regions and tests around which parts do what and where they come together. This dynamic feels like it's again working all the way up: you're constructing very simple, very few primitive objects first. As we explore and exploit those objects to explore more of that space, we get to the boundary where we live today, in the present moment, where we're effectively doing that. We're either exploring or exploiting some object in our causal history that we've already got utility from. It's a computationalist model, which can sound quite cold, but it implies that those things actually exist and are real and they matter. The fallout of that hypothesis is that things like pure relation or pure difference are incorrect: pure difference is actually a very good exploration policy, but it's not a good exploitation policy at the limit. And again, when you're dealing with an actually infinite space of possible computations or possible states, that becomes quite important as time goes on. It might work well in finite time. What we do as systems, or groups of observers, or groups of people traversing these spaces, is bounce between exploration and exploitation as an optimal strategy to colonize or search the spaces, to capture as much of that structure as we can. That gives you an informational, memetic angle to something physical in evolution. 
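(The bouncing between exploration and exploitation described above is, in computational terms, the classic multi-armed bandit trade-off. The following epsilon-greedy sketch is purely illustrative, with invented arm payoffs, and is not from the discussion.)

```python
import random

# Minimal epsilon-greedy bandit: an agent "colonizing" an unknown space
# by alternating exploration (random arm) and exploitation (best-known arm).
# The arm payoffs below are invented for illustration.

def run_bandit(true_means, epsilon=0.1, steps=5000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    for _ in range(steps):
        if rng.random() < epsilon:                      # explore: random arm
            arm = rng.randrange(len(true_means))
        else:                                           # exploit: best-known arm
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)        # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return counts, estimates

counts, estimates = run_bandit([0.2, 0.5, 0.9])
print(counts)  # the best arm (index 2) should dominate the pulls
```

With a small epsilon, the agent mostly exploits its best-known option while still occasionally sampling the rest, one simple formalization of "bouncing between exploration and exploitation."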
So you go beyond persistence in time as a metric: you don't just have survival, you also have boundedness, how much computation you can do. Those things balance out as exploration and exploitation in that dynamic.</p><p><strong>[1:57:15] Tim:</strong> Yeah, I think that's a great way of framing things. When we say pure difference isn't enough, you're cryptically referencing Deleuze there, and you said a couple of things about post-structuralism earlier. It's important, within that discourse and even within the philosophy of the person we're referencing, to ask what operation that conception of pure difference was looking to achieve. Certainly in that philosophy, which is a very evolutionary philosophy, there's no sense that the exploit aspect is neglected through a doctrine like stratification. There's a methodological priority being posited by a thinker like that, where they're saying: instead of erecting the strata or the forms as something that is a priori, and thus has a certain authority associated with it and cannot really be deviated from because we're always going to be recaptured by it, the function of difference there is to say that novel forms, novel strata can be generated in an open-ended way, and we will then exploit them.</p><p><strong>[1:58:47] Unknown:</strong> I think you're totally right. It's more a question of finite versus infinite time. There's finite game theory and there's infinitary game theory. If you have infinite time, that strategy is optimal. But whether the structure is closed comes back, again, to the structural component of the object. If the object's formalism and structure are proven wrong, if it functions and you can generate physics from an open category, then this is wrong. 
But if the structure is closed, it means that at the infinite limit that strategy is suboptimal, because as you start to asymptotically approach that end point of all these possible states, by going for difference you're not mapping the simplest connections that will bring you closer to that state. So it's computationally inefficient as you approach a limit point. In finite time, it's absolutely fine. And because we're finite and get 80 or 90 years, it's probably the best strategy we've got. But at the infinite limit in the structure, it's an inefficient strategy, because it will fail to achieve convergence in the fastest possible way. Therefore, this is a best of all possible worlds argument: if you have some infinite space of informational objects, then because information can be expressed in the form of energy, that imagines the space has infinite energy as well. If you abstract that to a physical explanation, something with infinite energy has to go as fast as it can; that's the idea by which you can take the jump from finite to infinite time. Would you think that strategy is optimal given that predicate? Probably not. But if you get rid of the predicate, you don't need the axiom.</p><p><strong>[2:00:49] Tim:</strong> I'm really fascinated by what you're saying, Sam, but there is an inherent tension between your using game theory and talking about strategies and what would be optimal to do, and then bringing in the infinite time scale as a way of adjudicating between strategies, because strategizers are not working at that time scale. There's a discussion to be had about conditions of closure, and then, of course, about when you are running things and developing your models using a closed system, essentially, that has been intentionally closed in its design. 
Then there's a question: to what extent are you just recovering your priors by recognizing the importance of closure if you want to achieve a specific goal in a finite time period?</p><p><strong>[2:01:53] Unknown:</strong> I think there are two points on the construction of the priors and the formalism that gets you to this closed object. It's built from the bottom up, right? It's built from a two-cell category that they import up. So, it's a proven object that imports this structure. Now, it's not that it's the only formalism, but it is not an arbitrary take. This construction must work given the properties of these computational objects. I think the point on finite versus infinite time is right. In finite time, I'm not saying—yes, this is absolutely fair. You can pick whatever strategy you want, because that's the ability to choose. But what's mathematically imposed by the structure is mathematically imposed by the structure, so it's not a preference that is in it. There's a huge difference between the limit and what we can sample. So, within the light cone that we get to sample from, we get to choose from a broader range of strategies than just the optimal strategy for closing that space as an evolutionary agent. And that is implied by the idea of computational irreducibility. We can't compute it; we don't know that that's the best way, or we don't know that the structure has to close, because we can't get to that boundary. Therefore, that gives us the real choice in the boundary and in which strategy we choose to exploit that space or discover structure in it. So, I think you're absolutely right. It's a function of the math, but not a function of the point where you're at.</p><p><strong>[2:03:42] Tim:</strong> I'm going to have to rush and eat because I'm having a blood sugar crash, but I haven't had brekkie yet. 
I also just wonder if there's a constructivist argument that can be brought to bear, a constructivist mathematical argument against the function that the infinite limit is actually playing.</p><p><strong>[2:04:06] Unknown:</strong> Constructor theory is doing that. But constructor theory is still using an infinite base object, right? They're still using an infinite multiverse as a base object. I don't think they've specified a geometry of that structure yet, or something that imports geometry. It's a metaphysical assumption at the moment, but the common thread is that they're importing some structure and building it bottom-up in a way that doesn't require that endpoint.</p><p><strong>[2:04:40] Unknown:</strong> This is just one construction. The other construction is totally valid and being worked on by some. Their work is unbelievable. Those ideas are pretty critical in translating the minimal observer model that the physics project team did into this category-theoretic construction, because it's all about possible and impossible transformations. You're absolutely right, it goes both ways.</p><p><strong>[2:05:05] Tim:</strong> For sure. I'm really looking forward to reading your paper. I'm gonna look it up. It would have been really interesting to have the conversation that Adam Safron gestured to right at the beginning about convergent evolution, because you're relying on, or continually deploying, a notion of convergence. It would be interesting to compare and contrast that given the context of this discussion: Platonic space in biology, or stimulated by biology and reaching beyond, and the way convergence has in fact occurred many, many times. I always like to bring up the example of venom evolving more than 100 times in actual biological evolution at finite time scales. 
Then we start to look at what is the role of history, at finite, definable but vast time scales, in stimulating those convergent events.</p><p><strong>[2:06:09] Unknown:</strong> In the paper I did an extension, or an application, of these ideas, where convergent evolution is effectively finding some optimal point, some valley or peak in a fitness landscape, that's optimal for the entire landscape for that class, given their computation, that computational potential. And so those things become very important in asking: does this contention align with empirical results? The contention here is that the hints are starting to be there, not just in historic work on convergent evolution, but more personally in Michael's work, where you're getting this idea of some structure of space. Some subcategorical object, some low-down information object that might be sampling from something bigger. And that may eventually reach the level of geometry in maths, or the actual shape of the object, the properties of that object. It might stop somewhere else. But what's interesting is we can now probe that space in different domains, just in words and in pictures, and see how that space maps. And the mapping may well be totally different to what everyone thinks, but the fact that mapping is now coherently possible is, I think, one of the most exciting things that will happen in the next 10 years of science. I think more exciting than whatever's going on in string theory.</p><p><strong>[2:07:52] Tim:</strong> I'm sure I agree. Fascinating stuff, Sam. Thanks, everyone. Really fascinating discussion. And I hope to speak to many of you again.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Platonic Space discussion 2</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Contributors to the Platonic Space Hypothesis engage in a two-hour discussion spanning Platonism vs Darwinism, forms and mathematics, multicellularity, representation and computation, concept space, myth and metaphysics, and scientific pluralism.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/r7eMr8za1DY" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/92b8e0e2/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~2 hour 8 minute discussion among contributors to the Platonic Space Hypothesis (<a href="https://thoughtforms.life/symposium-on-the-platonic-space/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life/symposium-on-the-platonic-space/</a>).</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Platonism versus Darwinism</p><p>(14:37) Forms, mathematics, and invariance</p><p>(29:44) Multicellularity and enabling constraints</p><p>(46:03) Representation, computation, and evolution</p><p>(01:03:04) Concept space and attractors</p><p>(01:24:26) Myth, metaphysics, and explanation</p><p>(01:34:46) Pluralist science and metaphysics</p><p>(01:57:15) Infinity, strategy, and closure</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: 
<a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Tim:</strong> It might be great to start with just discussing people's sense of those options. I heard you discuss that a bit in the previous session, Mike. You brought that up and a few people gave some ideas about that. I think this is also a way of us surfacing the question of the relevance of, broadly speaking, metaphysics to science, and also, if we're going to invoke the name of Plato and metaphysical themes, the relevance of the history of philosophy to a consideration of those things, since a lot of these concepts have been around for a very long time. A lot of the possible alternatives, not all of them, are actually present in the history of philosophy. Referring to some of those models can be a way of clarifying our thinking about what we're trying to achieve by either extending upon or returning to some prior conception. Without articulating in detail what each of them is, I would say four options I could think of would be so-called physicalism, or the mechanical philosophy as it is sometimes referred to; a form of Platonism, maybe a classical Platonism or just Platonism, which presumes the existence of these two worlds, one with the forms and one the actual world. 
Of course, that's eliding the nuance in Plato's own thinking. A neo-Darwinian conception, which may have things in common with both physicalism, the mechanical philosophy, and Platonism. Then what I would think of as a properly Darwinian perspective, in which the role of the forms is quite different in the sense that it's a way, a mode of reasoning, which tries to account for the genesis of forms without presupposing their a priori definiteness or their a priori existence. I think what we would want to discuss then in terms of something like a diagrammatic schema or a series of schemas is how, for example, form itself operates in each of those different schemes. That might help get us clear about what's at stake when we say we're moving beyond physicalism or we are invoking Platonism in a certain way, or we are or are not being Darwinian or Darwinism is or is not sufficient to account for the kinds of questions that Mike's asking in his research and that a lot of us are asking in different domains. Certainly I've been asking in terms of origins of novel functions at the molecular level. That's one thing I'll put out.</p><p><strong>[02:47] David:</strong> What do you mean by Darwinian? Do you mean that whatever forms there are arose totally as a result of chance in history? That different organisms living in different environments could come up with a very different mathematics?</p><p><strong>[03:14] Tim:</strong> Big question, and something that I've spent a lot of time trying to articulate in the last few years. What is the difference between what I'm thinking of as Darwinian and neo-Darwinism, for example. But really briefly and schematically, this Darwinian mode of reasoning is one that puts a process of variation prior to the existence of the forms. 
So it is a mode of reasoning which takes variation, or Darwin would say the overproduction of variation, as the primary given and then invokes a principle of selection in order to try and account for why that variation is clearly not continuous. Why there are clumps in the distribution of that variation and those clumps roughly correspond then to species or to forms. Now, when you open that up to a metaphysical program and when you ask a question would they have different mathematics, you're already moving, we're moving well beyond Darwin's own conception. Darwin's desiderata are relatively humble in comparison. He's willing to take Newtonian physics for granted and things like that. He's not trying to explain life, the universe, and everything. But when you do, as various process-relational philosophers have attempted to do in different ways, when you try to generalize that Darwinian mode of reasoning, you do end up coming up with some very different perspectives. And so you certainly could think of different mathematics or different logics emerging from this kind of variation and selection and inheritance scheme, absolutely. And this would not preclude the fact that all organisms everywhere may be similarly adapted to physical constraints that are exceptionally ancient, that might be more than 13 billion years old. So they form part of the environmental background that all living systems would necessarily be adapted to. But we might even want to ask questions in fundamental physics, as many people do these days: what are the origins of those constraints? Why these laws? Why these constants of nature, etc.? And people are turning to what I would call slightly Darwinian evolutionary modes of reason to try and ask those sorts of questions as well. So, in general, it's just the attempt to minimize the a priori content of our approach, try to get behind the defined forms and see if they have themselves a process of genesis that might account for them. 
That's what I would think of as the Darwinian attempt.</p><p><strong>[06:14] David:</strong> So, if I understand you correctly, it seems like there are different levels of this Darwinism you're talking about. We could take the physical world as fixed. Then, within those parameters of, say, medium—the size of objects that the organisms we know, cells and things, work with—we could talk about some parameters of the physical world that actually control what their perceptual systems are, how a cell has to know where it is in space and time. Even if, in some quantum physics sense, those are illusions. If we start there, is that a base? But what you're suggesting, it seems to me, I heard you saying that deeper than the level of physics that we find ourselves dealing with on planet Earth, it's possible that the laws of physics themselves are susceptible to some kind of Darwinian evolution or natural selection. That's what it seems like you're suggesting, that they're not themselves fixed. And some cosmologists have said things like this. I just want to see if that's what you're suggesting.</p><p><strong>[07:44] Tim:</strong> That approach would be the consequence of a speculative generalization of Darwinian modes of reasoning. How successful that's going to be is a completely different question. I'm saying this is a method. But it does come along with the necessity of re-evaluating the role that invariance, things that are taken to be fixed, play in our explanatory schemas.</p><p><strong>[08:22] Matt:</strong> I like your four-part typology, Tim, of the mechanical philosophy, Platonism, Neo-Darwinism, and then proper Darwinism in your sense. But it could just be that we have two options here: Platonism and Darwinism, a kind of speculative Darwinism, as you're suggesting, because the mechanical philosophy rooted in Newtonian mathematical physics is a degraded Platonism, as is Neo-Darwinism with its emphasis on information being carried by a genome. 
These are both degraded forms of Platonism, or at least inheriting the Platonic mode of thought, whereas Darwin is the real alternative, even though, as you acknowledge, in the final paragraph of "Origin" he refers to the fixed law of gravity. He wasn't yet thinking cosmologically about evolution, but there's a lot of reason to want to do that. We have the Darwinian and the Platonist versions of the two options here, where Darwin would say that all form is a function of chance variation selected and historically accumulated, whereas Plato would say, nope, the forms are there already. Whatever evolution might be is a selection among pre-existing forms. Those seem to me to be the two options on the table here. I don't know if that's an oversimplification.</p><p><strong>[09:48] David:</strong> But it seems to me that you're leaving out the Kantian.</p><p><strong>[09:52] Matt:</strong> option. That science is limited to a phenomenal realm.</p><p><strong>[10:00] David:</strong> Science is constructed by the mind. Space, time, all the categories we use for perceiving, making judgments in the world, these are all constructed by the mind.</p><p><strong>[10:22] Matt:</strong> And the mind being a historical and not</p><p><strong>[10:25] David:</strong> There's Darwinian spins on that, but I think Kant was talking about pure phenomenology. It is this sort of logic of what an experience must be like. That it must be like a certain way or it's not any experience at all. So, from the perspective of this Kantian, you have to go deep into understanding what it is, but it is for the possibility of any experience. That's what Kant's talking</p><p><strong>[11:03] Matt:</strong> You're right to bring up Kant, and I think there are various examples of philosophers who want to overcome this dichotomy. Kant was pre-Darwinian, but my approach would be we don't need to choose one or the other. We need some kind of a synthesis here. Kant would be an example of that. 
But I think Kant's understanding of the mind was that the categories appeared from nowhere. We needed a genetic account of that or an evolutionary account of where the human mind comes from.</p><p><strong>[11:36] David:</strong> No, I think you're right. It's a miracle. Did God make it or why does it work? We need some account of that. You're</p><p><strong>[11:45] Tim:</strong> So in terms of the dichotomy that Matt was giving, Kant is pretty "platonic" in this sense, but the a priori forms are transcendental instead of transcendent. And so that's how Schelling reads Plato as a kind of proto-transcendentalist. I like that dichotomy, because I think the radical nature of the Darwinian intervention, and I'm not saying that he was the first to say these kinds of things. You can even think of it as a pre-Socratic way of thinking. But the radicalness of that is typically underestimated. The neo-Darwinian attempt to integrate it into the physicalist or mechanical philosophy to identify a privileged sort of basis of causal reduction in the gene, et cetera, actually moves us back into that platonic conception in a really important way as well. So I do think, incredibly schematically, that dichotomy is pretty useful for us because it's all about what's the a priori, to what extent are we relying on something that doesn't have its own genesis. You can always say that variation in the Darwinian thing is the thing that doesn't have its own account. It's just the thing that's taken a priori as given. But I would want to also signal that there is a way in which this Darwinist view is also related to a Platonism in the very weak sense, that it acknowledges the reality of possibility or potential. So it's not deterministic in the way the mechanical philosophy becomes when it collapses the forms into the actual. Whereas in the Darwinian account, there's a very open-ended and real sense of possibility. It's just about how is that possibility structured? 
How does it become what Mike would refer to as a latent space rather than proposing or postulating that it's always already a latent space with forms inhabiting it? As Peirce would say in his attempt to generalize Darwin into a metaphysical way of thinking, the account of the evolution of the universe has to also be an account of the becoming and the evolution of the forms, not taking them as a priori. I don't want to turn the whole conversation into this. I just thought that was a useful schema for us to begin with.</p><p><strong>[14:37] Unknown:</strong> This was wonderful. It was a wonderful kickstart. I agree that calling it, or at least putting it in these terms, even if loosely, is helpful. Perhaps there is an even deeper dichotomy inside what it means to be Darwinian, because in "On the Origin of Species" he lingers a lot on this question of polymorphism as variation. There are fluctuating elements, specifically in polymorphic species, where Mike raises a question of the embodiment or disembodiment of memories. This dichotomy then seems to extend to Brouwer and his lectures on mathematics, philosophy, and consciousness, where he says that the purest thought is mathematics. Given what we know today, and in frameworks like TAME, we can read these texts in a new light. We can ask: is there a non-Platonic sense now that we know these things, or now that we're tackling the problem from this perspective?</p><p><strong>[16:12] Tim:</strong> I love the Brouwer reference, so that opens us up onto a whole other world of discussion. Another thing that I know was brought up in the previous discussion would be about mathematics as a language versus, say, natural language and versus other modes of expression like music or chemical modes of expression (I'm a chemical ecologist) and other things. Whether, if we agreed with Brouwer that mathematics is the purest form of thought, is quote-unquote nature so pure? 
So in fact, is that purification a kind of simplification or coarse graining in order to achieve that level of purity and precision? I think that would bring us back to the history of Platonism in some sense and this association of the forms with something that's pure, it's not fallen, it's not full of accidents like the world of appearances. These things get incredibly rich. I also want to talk about what Adam said in the chat about convergent evolution, because I think that's profoundly relevant. It has been invoked for its relevance to these discussions of Platonism by people like Alfred North Whitehead, but also in this recent discussion of the Platonic Representation Hypothesis. I would say, there's a lot to say about convergent evolution and the role of shared descent, as well as shared adaptation to the same environment. Before, in a sense, we appeal to a kind of Platonic hypothesis that organisms are converging on shared a priori forms, we have quote-unquote mechanisms or ways of thinking about convergent evolution that don't rely on those. The question is, what's the limit of that, broadly speaking, Darwinian mode of reasoning? You brought up carcinization. I always like to say, crab forms have evolved six times. Venom, which is one of my areas of study, has evolved more than 100 times independently. So, there are some incredible examples that philosophers could pay attention to when it comes to convergent evolution.</p><p><strong>[18:39] Unknown:</strong> I think when you look at the two different spaces, you can start today formalizing the architecture of how that works. And I think what's really interesting is when you treat these processes as computation, when you define what a finite observer, a bounded observer, can do in this infinitely complex space of forms, you get this coarse graining and you get this dynamic of trying to sample efficient structures that are predictable to increase what you can sample later and have more choice. 
So you get this Darwinian mechanic from the structure of a computational object or a computational possibility space. The model for observer theory is based on Stephen's ruliad, which is computational; it's every causal chain. And to make that space have any meaning in physics, it has to close. So you have to be able to get geometry out of it, to get maths out of it, to make physics predictions, which is what Stephen does. That point at infinity gives structure to the space, but it's the point where every causal chain ends, where every diagram commutes, where everything limits. And that's a sink. And that acts like a telic attractor. It's a sort of informational attractor. It's got every possible causal history in it: any multiverse, any type of math, any platonic form you can imagine, any physical instantiation. It's an integrated map. And because that map commutes overall, you can say that structure has that telic pull, that gradient, that fitness, like in a fitness landscape, which is driving observers towards computationally efficient forms that enable them to sample more of that space. And I think the innovation of that computational language allows you to start doing things with the tools we have today, modeling with LLMs. One of the things I'm working on at the moment is a test to probe whether different computational architectures converge (there's a paper called the Platonic Representation Hypothesis), to see if that applies across different architectures, narrow architectures such as AlphaZero or chess engines, and whether they have a hierarchical mapping of concept space in the embeddings that they have in their models. And I think you can start to probe these objects more today than any time we've had before because of the advent of technology. So I think we'll start to get more answers on these directional questions, whether it has to be separate or it's the same. 
And I think the idea is that structurally, if there is a structured computational space, or a set of all possible computations, and you can import physics from it, then that structure should be found in a coarse-grained fashion across these experiments. It won't be definitive, but it'll give a hint that maybe this thing is actually a real thing as opposed to an abstract thing we're constructing to make sense of the world. And trying to see if top-down and bottom-up causation can work together, or whether it's really all constructed bottom-up and it's all emergence, is a question that computational experiments are going to let us answer over the next few years. We're going to start getting directional hints about it.</p><p><strong>[22:23] Michael Levin:</strong> My current model, I don't know if this is a chimeric version of the two views that you guys were talking about, or if it's a third thing yet. It seems to me that there could be a variety of different forms. It doesn't seem to me like the forms all have to have the same character, either they're pre-existing and that's it, or they're evolved. There are numerous different ones on that spectrum. For example, there are biological ones that I'm perfectly happy to have modified by evolution and various other things. There are others that seem like they have a lot less of that character. For example, the value of e, the base of the natural logarithm. I don't see it being downstream of evolution. I don't see it being downstream of anything that happens in physics. Maybe it can change. It seems like one of the more stable ones out there. I think we could say that there are ones that have this really fundamental stable character. There are others that are either novel or modified by things that have happened later. This gets into the naming, because when I started talking about this stuff I said "Platonic space" only because then at least the mathematicians knew what I was getting at. 
Some percentage of them said, "Yes, we're already on board with this." Clearly, the model that I'm pushing is not fully Plato's model. I don't know what to do with the naming of it, and some people hear "Platonic space" and they're very upset and they say, "Absolutely not." They say, "Fine, 'latent space.' That's good. Now we're happy." I don't know what exactly they see as the difference. Also, I'll point out certain things that happen where it seems like you get more than you put in, and people will say, "These are just regularities." I say, "What does that mean?" "These are just things that hold true in our world." What are those things—random? No, they're not random. I don't want a realm. We've got some things that seem to hold true. We don't think they're random, but they're not a realm. Somewhere the terminology needs work; we're going to have to work on the different variants of these views to really say what it is that people really hate so much when they think it's a realm. What else do they have that isn't a realm that to me always sounds like a realm anyway? I think the nomenclature is going to need some work.</p><p><strong>[25:13] Matt:</strong> I don't know if it's good news or bad news, but when you read Plato's dialogues, there's no one model that Plato leaves us with. He leaves us with many different possibilities. The best criticisms of Plato's forms are in Plato's dialogues. But obviously, the term "Platonism"—anyone who's read some philosophy of science, and maybe some Karl Popper, is going to have a reaction to Plato, all sorts of associations. I understand why you chose that. You're right, Mike. I'm glad you're pointing to that. There are different forms of forms, as it were, some of which we can understand as historically emergent in a Darwinian sense, and others which seem more necessary or almost metaphysical. 
It seems to me that rather than having to choose either variation first in the Darwinist approach or invariance first, which we could say is more the Platonist approach, for variation to lead to anything of significance in terms of historically emergent forms, you already need seeds of invariance. So there could be some forms that are truly invariant that allow there to be a selection process by which useful forms, other types of forms, could emerge historically. I'm always driven to try to think of the interplay between invariance and variation. It becomes difficult for me to make sense of the idea of a full-bore Darwinism in the speculative sense of variation first, getting all form out of that because of the examples that you would point to, Mike, that seem not historically emergent. So I want to have it both</p><p><strong>[27:12] Tim:</strong> ways. I also want to signal agreement that it's highly likely, almost certain, that there are those forms which we're not going to get behind from our position radically in medias res. So what I'm calling this Darwinian mode of reasoning is a wager; it's a method. You could think of it as an attempt to identify those forms that we absolutely can't get behind: which are the ones that are absolutely non-deconstructible? And they may end up appearing to us as conditions of actualization. For there to be anything at all, it would appear from our situated perspective that these forms were required. But there is, of course, a speculative evolutionary account of that. There are anthropic principles. There are still options available in the way we think about those sorts of things. But just to say that it's very different to claim that we can explain everything and we can get to bedrock variation first and somehow bootstrap ourselves up to a full cosmos. That's already the rationalist claim in the history of philosophy. The rationalist claim is that you won't be able to do that, essentially. 
And I am saying there is a limit to the rational intelligibility of reality, in fact. I tried to say that in my talk for this session. We are going to have to recognize those limits, which means there may be things we need to take as given, things that we simply can't explain. I would just want to signal the agreement that it's very clear that there are different kinds of forms. And I've previously spoken about this and published about this as a temporal hierarchy of constraints. I don't know if I like that term myself. Terminology is always really difficult. Some forms came into being very, very recently. Some forms are incredibly ancient. Those are real salient differences. They are going to impinge upon our capacity to give a genetic account of certain forms.</p><p><strong>[29:42] Michael Levin:</strong> Yeah.</p><p><strong>[29:44] David:</strong> David? So let me switch gears on the philosophy here. I want to talk about some practical biology for a second. Let's imagine ourselves as the first cells to come up with the idea of multicellularity, and they start communicating in some way, chemically, electrically. What constrains the kinds of shapes that they can make, the kinds of behaviors they can have in this very primitive state? Is there something already there that they can or cannot do, possibilities they have? Michael.</p><p><strong>[30:40] Michael Levin:</strong> You probably have some thoughts on that. I'll just throw out one thing because it's the same axe that I always grind. Probably Tim has other thoughts. There's a lot of really good work on bacterial biofilms that are almost multicellular. Gürol Süel does this amazing work showing what he calls brain-like electrical signaling in biofilms that allow them to coordinate and act as a collective. But one issue that I always talk about is how much do you put in and how much do you get out? What are the examples where you get out more than you put in? Here's an example of this. 
Once evolution finds a voltage-gated ion channel, you've got yourself a voltage-gated ion conductance. It's basically a transistor. If you have a couple of those, you can make a logic gate. Now you automatically inherit all of these cool things about the truth tables — NAND is special and all this other stuff. You didn't have to evolve any of that. You get all of those cool properties for free, right? Having made that interface, you now suddenly inherit these things and you don't have a choice about most of it. That's just what it is from the laws of computation or math or logic. I think evolution can make use of all of that. There will be facts about the way that computation is done in networks of 2D surfaces of biofilms: some constraints, some enablements, and some free lunches. I'm sure Tim's got a bunch of examples that you can make use of. I think looking at those bacterial cases is pretty informative.</p><p><strong>[32:36] David:</strong> It goes even earlier than that. When you have genetic regulatory networks, you also have logic gates.</p><p><strong>[32:50] Michael Levin:</strong> This stuff isn't published, but I have a student who's looking at training. We've shown training of gene regulatory network models. She's doing training of Lotka-Volterra style population dynamics, and you can train those too. If you actually look at the space of parameters, what it takes to make them have habituation, sensitization, these various things, that space is really interesting. It has very specific shapes in this space. It isn't homogeneous. And where does that come from? There it is.</p><p><strong>[33:34] Tim:</strong> I think when I said constraint, I didn't mean not enablement, of course. I meant enabling constraints as always. That's the role of invariance that we're talking about here, which is that you need something to hold things in place so that you can do a theme and variations. 
I'd love to get to a chat about music here as well, because I know, Mike, you're planning some of those discussions. But to stay with the biology for a second, without getting into heaps of detail, but to respond to what David was saying, if certain physical, enabling constraints are 13.7 billion years old or whatever, when life emerges four plus billion years ago, it has to be in conformity, but it's enabled by those; those are already the enabling constraints of living systems, right? And then thinking about things like logic gates and all the amazing work that Mike has done on the capacities that minimal cognitive systems or minimal biological systems have, et cetera. I still think we can think about this in terms of relationships of adjacency. We don't have to posit that all of the Boolean logic associated with the use of logic gates pre-existed the genesis of that ion channel. We can say that in some sense, when you have a certain kind of actualized relational structure in the world, it then brings into definition a set of adjacent possibles, to use Stuart Kauffman's term. Again, it's hard to understand what it would mean to say that all of that logic pre-existed the logic gate itself. We talk about an interface theory. We're never going to pull things out of the platonic realm, so to speak, without the existence of an interface, in Mike's terms, whose structure of functional operational capacities is what enables those forms to be ingressed, if we're using that language. But it's a further metaphysical step to say that those forms somehow pre-existed, as opposed to are themselves given a form of definiteness because of their adjacent relationship with that definitely structured actual physical, if you want, interface. What the Darwinian conception here is saying is that interface naturally contains within it this potential, which is just variation itself. 
So if we look at biological systems and we look at stochastic gene expression and the non-stereospecificity of interactions between molecules and Brownian motion in and between cells and all this stuff, there's all this crazy indeterminate variation going on all the time, which in a sense you can think of as always spreading out, palpating a space of adjacent possibles from the actual form structure that is in existence. It's a little bit of a jump, but I think of this also as the way the mathematical landscape itself expanded in the history of human mathematics. We know that there's a whole load of maths that is not applied, that is not physics. The maths of relevance to physics is this relatively small aspect of the mathematical landscape. We could therefore get to thinking that that's just a subset of something that pre-existed it and is much vaster than it. But if we look at the history of mathematics, it's the other way around. People discovered things in the relations in the empirical world. They learned how to reason about them mathematically. There were economic and other utilitarian justifications for the development of those tools. And from understanding the principles, like the relational principles, diagrammatically: as Poincaré would say, mathematicians are interested in relations, not objects. You can remove the objects as long as the relations stay the same. It's no different for us. So it's diagrammatic. But by understanding the principles, there's a way that you can keep spreading by unpacking the consequences of those principles. Again, those relationships were found in the empirical world first.</p><p><strong>[38:11] David:</strong> I want to push back on that just a little bit. Let's get back to our cell forming a gate. It has to be that the potential for on and off is already in the</p><p><strong>[38:28] Unknown:</strong> material. Like resolution. 
It has to be there.</p><p><strong>[38:35] David:</strong> There's no making an on and off switch unless the material that you're making with can already be an on and off switch.</p><p><strong>[38:42] Tim:</strong> But you're saying in the material.</p><p><strong>[38:46] David:</strong> No, I'm not saying Plato is out there, but Plato is actually in the material itself. I can go with that. But when you start talking about it, I want to push back on what you're saying about mathematics, because it seems to me that mathematics is not just a flowering tree that could go in any direction. I think it has a structure to it. I think the way you understand the relationship between, say, geometry and algebra and calculus, the more you look at it, in group theory and set theory, logic, there seems to be a structure to it; it has some kind of a unity to it. You just can't make up any kind of math you want.</p><p><strong>[39:40] Michael Levin:</strong> This, I think, is the issue. Tim, I'm okay with — we don't have to say it pre-exists because I don't know what time would be doing there anyway. So that's fine. It doesn't have to pre-exist. But there's some specificity. In other words, you've got this particular fact about NAND, or that there are four colors: the four color theorem, not the eight color theorem. You get a very specific thing out of it, and you can say that it sort of came into being when you made the interface. I'm okay with that, but we still need to say, is it random? And I agree with David, I don't think it is random. So there's some pre-existence. Now we're back to there's some reason why you've got this and not something else. So something is making that selection.</p><p><strong>[40:27] Tim:</strong> I think random is a very misleading term, the way random is used to talk about indeterminate biological variation, for example. Abject randomness is in some sense an abstract fiction. 
So if I'm going back to biology and I'm talking about stochastic gene expression or whatever, it's not like just anywhere in the universe that those genes are being expressed. It's in a very strict relationship of adjacency with all of the "quote-unquote" machinery that exists to produce those genes. It's just that there's this distribution of genes, the concentration of genes, say, in different tissues, different cells, is tightly regulated, but it's never regulated perfectly. It's never regulated absolutely. A protein structure can evolve to achieve a relatively high, a very high degree of specificity, but it's never absolutely specific. There's always a chance that it's just going to stick to something else because molecules are just sticky and it might have some kind of off-target effect. And that's one of the major ways that novelty emerges in biological evolution. So I'm absolutely not saying that it's abjectly random or anything like that. I'm saying as soon as you have any kind of structure, it acts as an enabling constraint on the development of further structure. So it makes complete sense to me that mathematics would have in some sense this kind of unity. And even complete branches of mathematics that are considered to be completely distinct keep discovering the same structure. It turns out you can say the same thing in a different language in some sense. That makes total sense to me if the fundamental, if mathematics in some sense is born from this shared origin in the practice of mathematizing humans in actual contexts. I'm not saying Mike, you and I have been back and forth on this for a couple of years, I think. I'm not saying I have an account on how I would explain the genesis of the four-color theorem or fucking bounce constants or whatever it is. I'm just saying it seems premature to me to say that no such account is possible.</p><p><strong>[42:35] Unknown:</strong> I agree. I tend to agree with opposing views. 
I like this example of cells communicating, especially because I don't have a concrete stance, but I asked the question whether biological forms, for example, were hearing shapes and not forms, in the sense of Mark Kac's question about hearing shapes: that you can effectively recover this infrageometric information or some sort of data. Since it is persistent, you can also posit that there is some Platonic prior that you can recover consistently, which I find really interesting. Perhaps cells—let's speak of an architecture, a plant—and you ask if a plant can hear shapes. In that sense, you just follow the same path that Kac did. It is completely plausible if you understand hearing as processing some sort of signal by mechanical transduction, and then you have specific genes, and then you have ciliary arrays. It's completely possible that you would do wavelet transforms. For example, if you want to recover the peaks of a transform like this, that would modulate the oxygen signals; it is completely plausible. It would give you intervals, and in terms of mechanistic expression of a pattern, it is also plausible that we ought to relate it to symmetry, because of the peaks of the Fourier transform; it's completely plausible. I would also invite this other theme, which is Hermann Weyl's conception of pure infinitesimal geometry, from when he was trying to unify gravity and electromagnetism. He came up with many beautiful constructions. I know we are more than a century past Weyl's work. But the fact is that even though Einstein commented that his ideas were beautiful but unphysical, now, over a century later, we have light–matter interfaces coming out of it. We have Weyl points that have been experimentally observed. Perhaps we don't need to choose between a metaphysical and a physical perspective. There seems to be something here by which we can recover this kind of information. 
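Mark Kac's question, "Can one hear the shape of a drum?", has a clean one-dimensional illustration: the Dirichlet spectrum of a vibrating string determines its length exactly, so the geometry really is recoverable from the "sound" alone. A toy sketch under that textbook setup (the function names are invented for illustration):

```python
import math

def string_spectrum(length, n_modes=10):
    # Dirichlet eigenvalues of -d^2/dx^2 on [0, length]: (n*pi/length)^2.
    return [(n * math.pi / length) ** 2 for n in range(1, n_modes + 1)]

def hear_length(spectrum):
    # "Hearing the shape": successive sqrt-eigenvalue gaps equal pi/length,
    # so the length is recoverable from the spectrum alone.
    freqs = [math.sqrt(lam) for lam in spectrum]
    gaps = [b - a for a, b in zip(freqs, freqs[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return math.pi / mean_gap

spec = string_spectrum(2.5)
print(round(hear_length(spec), 6))  # recovers 2.5
```

In one dimension the recovery is exact; Kac's question was whether the same holds for two-dimensional drumheads (in general it does not, though coarse invariants like area still follow from Weyl's law).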
I find it interesting on a cognitive level if we bring that from ciliary arrays doing these transformations and then architecture expressing these patterns. I find it interesting, but I don't know how to answer the cell question specifically on a biological level.</p><p><strong>[46:03] David:</strong> Let me ask another question about this. What is the difference between a group of cells that are just responding to a chemical stimulus in their environment — they're moving toward a food source or away from a toxin — and a group of cells that's actually processing that as information about where they are in the world? Or a plant that's growing toward the sun automatically, or one that's actually processing information about where it is in the world. Michael, you want to take a stab at that?</p><p><strong>[46:49] Michael Levin:</strong> I'm going to see if I can find a cool example. Have you guys seen the Physarum example that we have? What you have is a dish like this, about 10 centimeters in diameter. We put three glass discs on one end, one glass disc on the other end, and a little slime mold in the middle. The glass discs are inert. There's no food on them. There's no chemical. What you're going to see — I'm going to try to find this because this has to be seen — is that for some hours the Physarum sits there and it vibrates and it tugs on the gel that the whole thing is sitting on. It reads, as it turns out, the strain angle of the different masses in its vicinity. For several hours it does this and it doesn't do anything. It doesn't go anywhere. It just does this. I think what it's doing is gathering information about the environment. Then it goes preferentially to the heavier mass. That's one of my favorites.</p><p><strong>[48:17] David:</strong> Examples. It seems that example is crucial for pushing back against the sort of emergence physicalism view: if you can experimentally show that organisms are actually representing where they are in the world. 
That's very basic math. I would say that you have to have some kind of a representation of spatiotemporal orientation, if that's what we can actually show.</p><p><strong>[49:01] Michael Levin:</strong> So here it is. These are the glass discs here, three and one. And this is the little Physarum. So for the first few hours, it just does this. And it's going everywhere at once. And I have a video where you can see it tugging. And then, boom, at that point, it decides to go for it. Wow. And then bang, that's what it'll</p><p><strong>[49:29] Tim:</strong> do. It's doing a random walk, and then suddenly it becomes oriented. And I think this is really fascinatingly consonant with something like Waddington's conception of the neutral accumulation of genetic variation and then the reconfiguration of the epigenetic landscape, the process of genetic assimilation, when an organism enters into a particular environment and something elicits that adaptation from it. As Mark knows really well, these are very big and ongoing conversations in evolutionary theory around things like evolvability, the role of redundancy, the role of robustness, and where those two things are the same and where they're different. I'm always wiggling my hands this way; I'm a big gesticulator. This is 'random, spontaneous' behavior. If suddenly something elicited a reaction from me, I might point directly or I might make a shape with my hands. My point there is just that biological systems are always doing this spontaneous thing, at the molecular level, at the behavioral level. They're reaching out, they're palpating an environment, and they're seeking a signal to bring this into the information territory. They're seeking something which would tell them: go this way and not that way. Be this and not that. This is what you need to be right now. You've got this capacity to be lots of different things. You're phenotypically plastic, but right now, this would be a good thing to be. 
And so information then is this relational thing that happens between two different systems, organism and milieu, or two different organisms. It's a mutual, reciprocal relationship of elicitation. The signal comes in and it is 'meaningful' because that plant actually requires light in order to photosynthesize, because of its evolutionary history. That's how I tend to think about these sorts of things. And I think, again, Mike, your work is incredibly pioneering in this way that you can look at the slime mold. On the one hand, you could have told this story at the molecular level, and it would almost be a kind of evolvability story, where the states of the slime mold are evolving. But you could tell the same story at a different level, or in a different aspect, in a way which becomes a behavioral story or a cognitive story. And so there's this fascinating unification of a kind on offer there. I've said this to you before, Mike, but there's a way of thinking about cognition, in this general framework that you give us, in which it almost becomes synonymous with what evolution means, if you think about evolution in a really generic sense. To Sam's point, you said some really fascinating stuff about Wolfram's model and computation that we haven't picked up on. But computation is an evolutionary process, always already. It's no shock if there's a really intimate relationship between evolution and computation, because they've always been intimately related. And you can even just go into the history of the word evolutio and how it means unfolding; it had an algebraic connotation before it ever had a connotation in biology. There are so many rich resonances here. I wouldn't want to be seen here as flying the physicalist flag. I'm not advocating for some kind of physicalism. I think physicalism is more platonic and more idealistic. I know that's counterintuitive compared to 'Darwinism' or 'Darwinian' or whatever. 
I call it ontogenetic, because I prefer not to invoke Darwin's name so often; it's like invoking Plato's name. People go, that's what this means. So an ontogenetic alternative is definitely not what I would call physicalism. I think physicalism is a formal theoretical approach to understanding the world, basically grounded in effective theory. That's a whole other conversation. I'm not allying myself with that. Maybe it's a genuine alternative.</p><p><strong>[54:05] Unknown:</strong> I think one of the things that's interesting is you can model evolutionary processes on very basic cellular automata. And when you talk about patterns, you get this linear progression, then some exponential jump as the cellular automaton discovers a novel rule that increases the number of steps it survives for. And those jumps are discrete. And those discrete jumps are really when we say the object has changed from one thing to the other. So in your cell question, there's the idea of bulk orchestration, or top-down causation, from a group of objects that have bound together and exhibit small-world network properties, where the communication channels reach a synchronicity that means the decision is basically everywhere in the network all at once. It's called a superlinear speedup. That dynamic gives you that top-down causation, where that single-cell thing has within it the communication ability, the ability to couple and find information from the environment or from other cells in its neighborhood. Once enough of them come together and they're close enough, that orchestration kicks in, and that's where those free lunches come in. Because you've now gone to a different regime: you've exponentially risen up the curve of how much information you can handle, how big your internal model is, how much you can predict. 
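The jump dynamics described here, long plateaus punctuated by discrete leaps when a mutation lands on a qualitatively new rule, can be sketched with a toy hill-climb over elementary cellular automaton rules. This is a minimal illustration under invented assumptions (the fitness function, grid size, and mutation scheme are mine, not from the discussion):

```python
import random

def step(cells, rule):
    # One synchronous update of an elementary (radius-1) cellular
    # automaton with wraparound; `rule` is a Wolfram rule number (0-255).
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def fitness(rule, width=31, steps=30):
    # Toy fitness: live cells left after `steps` updates from a single
    # seed, a crude stand-in for "how long the pattern survives".
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = step(cells, rule)
    return sum(cells)

random.seed(1)
rule = random.randrange(256)
trace = []
for _ in range(80):
    mutant = rule ^ (1 << random.randrange(8))  # flip one bit of the rule table
    if fitness(mutant) >= fitness(rule):        # hill-climb: keep non-worse rules
        rule = mutant
    trace.append(fitness(rule))
# The accepted-fitness curve stays flat for long stretches, then jumps
# discretely when a single bit-flip discovers a qualitatively new rule.
print(min(trace), max(trace))
```

The trace is non-decreasing by construction, and its changes come in discrete steps rather than smooth improvement, which is the qualitative point being made above.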
And here, the model is that this world of latent space, idea space, possibility space, whichever name you want to give it, that structure is invariant, and objects are bigger or smaller based on how many equivalences there are within the computational network. Now, it's not saying the actual thing is a big computer; it's saying that model is a coherent way to then make predictions, in a way that this is the language of these formalisms from Plato to even theologies. All the metaphysics, all the major theologies describe the structures of these spaces. And I think today, with these network models, you can now be more specific and you can now test things like evolution through that space. If you have an object with n-many equivalences in the network, does an agent put into that space discover it faster or slower? You can actually start to run quite coarse simulations of the dynamics of the space that, I think, for a long time have just been talked about. And that's really interesting, because all of the experiments that are coming out of Michael's lab and some of the other people on this panel are pointing in that direction. I think it's a super interesting formal program where these things can be tested not just in observers like us, or animals, but across novel substrates like computers. You can start to answer that question. And when you have a structured space like that, then you start to ask deeper questions: are ethics computationally valid? Can you model ethics computationally? If so, can you teach them to a computer? And those languages you get out of discovering this space are causally effective. Whether it's ontologically real, whether there's a giant Indra's net all around us, is hard to tell. Whether it's causally effective in our world is probably the more important question, and one that we can, I think, start answering. So I think that's the most interesting thing that's happened in the past two years. 
All of these ideas start to bring ideas of infinitary space and infinitary explanation back into physicalism in a way that should be quite explanatory.</p><p><strong>[57:51] Tim:</strong> I think that's beautifully put. I really agree with the promise of that new kind of science, the experimental or computational method. And I do think sometimes that promise, that potential, gets collapsed a little bit when we immediately feel the need to move into metaphysical territory and say, well, that means that the universe is a computer. I think it's an incredible way of experimentally testing various evolutionary models, because they're all evolutionary to me; intrinsically, computational models are evolutionary. And I love what you said about saltations, jumps, phase shifts, leaps in a state space. And I think we see a ton of that in biological evolution, actually. So I don't think that the so-called gradualist assumption particularly holds. It holds on certain scales, but we also see a lot of leaps. What I brought up with genetic assimilation and Waddington, and Richard Goldschmidt's ideas: these ideas have always been present in evolutionary theory, even though there has been a mainstream of neo-Darwinism that tried to squash them.</p><p><strong>[59:09] Unknown:</strong> That dynamic's not just seen across biology. It's seen in our social structures. It's seen in how we organize ourselves. It's seen in how economic growth works. It's seen in how political systems and change work. We have a long linear progression, or some mildly chaotic but linear progression. And then there's a change and there's an exponentialization. The network reorganizes. It settles into a new local optimum, a new peak or valley in the fitness landscape, depending on which way around you've got it. And then it keeps going. But this applies not just in evolution, because it's computational, and ultimately we compose our explanations computationally to communicate them. 
That dynamic, if it's proved in the simplest computational system, must be running in more complex systems at much higher resolution. So these dynamics can now be explored in the space of memetics, that Dawkins chapter in The Selfish Gene that should probably have been a book of its own, and in Susan Blackmore's work. They now become causally effective if you can also put physics within that same language. And that's one of the interesting things about these models: it means that you can now compose between those structures. And because there's a natural geometry inherited in those objects, you can compare their properties. And so you can have ideas about symmetries, ideas about the boundaries of those objects and how hard they are to capture, how much coarse-graining goes on, what happens when we actually sample these objects. Do they become easier or harder to sample? Is there a point where that changes, where that object becomes invariant under repeated sampling so that we know it's maximally reduced for us? What happens to that concept in Platonic space when that happens? And it starts to put these observer-centric models forward as explanatory in the context of how we interact with information that isn't wholly explained by physics, biology, chemistry. The content of an emotional experience can be explained with an EEG. But if you ask someone about the contents of that experience and you say, "Is this data, unspooled, all of it?" you will normally get the answer no. And because of this language, because you can now compose those things in an integrated map, you can start to make harder empirical statements about what you think the structure of that space is, whether or not the space is structured; how my paper hypothesizes or speculates about it is beside the point. 
It's that you can speculate within this architecture about all of those dynamics, to try to formalize and test these sometimes quite intuitive, but also informed by lots of experience, ideas about bigger metaphysical questions that are harder to answer. And that's one of the interesting things about this change in the language, because it starts to join up so many different domains. And you start to get ideas that are mathematically proven that can be applied across spaces and concepts that don't normally seem to lend themselves to it, at least in how we think about those subjects today. That's quite an interesting thing about this symposium: you get those perspectives on what those optimal models are from 20 different disciplines in 20 different languages. And so you get this coming together, pulling apart those ideas, which is how you formalize something like this, which is going to be quite important over the next few</p><p><strong>[1:03:04] Unknown:</strong> Let me ask you what you think about biology — the easier models. Let's take two different neural networks, two different architectures of neural network, trained on the same data set. Do they create a different world perception or not? That's the same data set.</p><p><strong>[1:03:35] Unknown:</strong> This has been tested, right? Above a certain number of tokens, large language models trained on transformers have convergent representations in their weights. This was a result from last year. They've now applied that test slightly more broadly, with some different measures, across a couple of vision, multimodal, and transformer-based language architectures, where again they're finding, in that paper called "Universal Subspace," areas where the representations converge. 
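The convergence test described here can be illustrated with a toy version of a mutual-nearest-neighbor alignment score between two representation spaces of the same items. The data, function names, and the choice of k below are invented for illustration, not taken from the paper mentioned:

```python
import math

def knn_sets(X, k=2):
    # Indices of the k nearest neighbors of each point (Euclidean),
    # computed within one representation space.
    sets = []
    for i, a in enumerate(X):
        d = sorted((math.dist(a, b), j) for j, b in enumerate(X) if j != i)
        sets.append({j for _, j in d[:k]})
    return sets

def mutual_knn_alignment(X, Y, k=2):
    # Fraction of shared nearest neighbors between two representations
    # of the same items: a toy version of the alignment scores used to
    # compare representation spaces across models.
    sx, sy = knn_sets(X, k), knn_sets(Y, k)
    return sum(len(a & b) / k for a, b in zip(sx, sy)) / len(X)

# Two "models" embed the same 4 items; B is A rotated 90 degrees, so the
# neighborhood structure (and hence the alignment score) is preserved.
A = [(0.0, 0.0), (1.0, 0.1), (5.0, 5.0), (5.5, 5.2)]
B = [(-y, x) for x, y in A]
print(mutual_knn_alignment(A, B))  # 1.0 -- identical neighborhoods
```

The point of a neighborhood-based score is that it ignores coordinate systems entirely: two embeddings that differ by rotation, reflection, or scaling still count as "the same representation" if items keep the same neighbors.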
Now, whether that's constructed in the data set, i.e., it comes out like that because of us, because we've chucked in all of our pictures, our words, et cetera, or whether it's discovered, is not yet an answered question; nor is how that space is structured, or what the properties of that space are, because those architectures find it harder to import coherent geometry across different models and different types of design. There's a guy called Markus Buehler who does some really excellent work on this. I think he's at MIT, and he's been doing graph-theoretic representations of these concept spaces. What you're trying to do is move past that test to whether you can test if there's some discovery or some construction, where the domains are so separate that it might point to it. But if it's totally different training data or it's a narrow domain, is there a structural discovery: not what's in the structure, but is there order or hierarchy that suggests that this platonic space is not just words, that there are still girders holding it up? But say this is roughly how we split things</p><p><strong>[1:05:25] Unknown:</strong> Large language models, I suspect, have influenced the creation of that kind of concept of a platonic world. But I'm saying, assuming that you have a completely different model that doesn't learn based on attention but learns on something else: if they create the same world, does it mean that there is only one platonic representation of the world and we just need to find that world? Or are there many?</p><p><strong>[1:06:14] Unknown:</strong> I guess the way to think about it is in terms of the size. So the form of a chair is, as an object, informationally bigger than all the elements of that set or that category of the form of the chair. So every individual chair that you can possibly imagine is contained within that object. So when you have a word, imagine that as an object, a category. Now it's a smaller category than maybe chair. It's a more bounded category. 
There are fewer things it connects to, or fewer instantiations of it, but it's still got to be mapped. You're only mapping with a lens that's small. You're mapping an object. It might be countably infinite or even finite in terms of the composition of it, but you're counting it with something that's doing it one at a time. You're not going to ever fully map that space. Even with something as simple as a word, you're going to get multiple embeddings, but they'll be close together in that space. Similarly, as you move down to things in physics, those things will become discrete. Why is math powerful? Because it's discrete; you get an answer. Those objects become invariant and you can map them fully, which is why they're useful in the computational observer model. Because if you have finite computational power, you need to do more mapping. You want to see more of the space. You want to reduce or compress as much of that into your model. It's a discrete measure version of something like FEP, where that surprise is I have to do a lot of computational work to fit this object into my model. Then I need to make it smaller and compress it. When I sample that thing again, when I practice doing something or when I learn something, that thing gets compressed more and gets more equivalences in the object. It becomes easier for you to integrate into your world model to make predictions. That dynamic means you map that space. Even though that object exists, you're not going to fully map it or perfectly map it with a finite budget.</p><p><strong>[1:08:20] Unknown:</strong> Hananel, it was a great answer, Sam. Your work is fascinating. 
Going back to your question, Hananel, experimentally it could be interesting to test for something: in terms of computation, or computational power per operation, you don't only have allocation operations; you also have this thermodynamic or dissipation layer at play, which depends on where the model is being run and on what the computational constraints of the architecture are. That is, whether it's related not just to the words, but also to, let's call it, the kernel dissipation in a sense.</p><p><strong>[1:09:20] Unknown:</strong> In the last couple of years, a lot of really talented researchers have come up with multiple measures to figure out this stuff. Some are kernels, some are the graph representation. There are five or six different measures. What you're trying to do is get to the right measure for it, where it probably is some composite of those measures. It's a very live question for me and something that I need to get to a tighter answer on.</p><p><strong>[1:10:00] Unknown:</strong> In terms of computational topology, we can find defects, or you can do eigendecompositions or expansions. It feels really interesting to see what the form would be in pointed spaces, for example, and the density, and also to articulate that with the volume-to-area proportionality.</p><p><strong>[1:10:26] Unknown:</strong> Weyl's law. This is one of the interesting things about an LLM's architecture, because it's very complex: reducing that non-trivially from an n-dimensional space of many weights to some 3D representation, or some small number of dimensions, is hard. 
So I think there's a bit of work to do, but what's quite interesting at the moment is a lot of people are working on geometric computational engines for inference or for fusing or for virtual machines, and these create maps that prefuse computation, so that representation, because it's a coherent map that has easy composition between it, might be more able to accurately map that physical representation of a shape with those properties. But it's a bit early. They're still in a really interesting foundation called UL.</p><p><strong>[1:11:28] Unknown:</strong> Perhaps you would get more discernment in terms of effective temperature measures you want to take. I've been experimenting with Mike's data, especially the bioelectric code. Normally, on a substrate, it seems the same temperature will get you a very large spectrum and then you cannot make heads or tails of it. If you make this discernment between what would be an effective temperature of a bioelectric wave and discern it from the medium, you get a much narrower space. Although it is early, it's an inverse-inverse problem that maps exactly to what you were saying. In, for example, cell-cell communication or a damaged embryo, you're discerning the mechanical wave of an embryo trying to engage in intercellular communication. I'll keep looking at your work.</p><p><strong>[1:12:57] Tim:</strong> Really fascinating stuff. Returning to one of the broader themes of the conversation, but a couple of things that you were saying, Sam, including when you initially brought up Wolfram models in your first contribution and talking about convergence. One of the things I think in the history of philosophy that these conceptions that start with, say, notions of infinity, an infinite plenitude of forms, for example, struggle with is what I call the selection problem.</p><p><strong>[1:13:34] Michael Levin:</strong> you end</p><p><strong>Tim:</strong> up with an issue of why these forms are not that forms. 
Alfred North Whitehead speaks to this very, very directly. This is a problem that you get in string theory; there are so many different solutions for the vacuum. Why this particular one? And so you end up with a formal system that's, of course, capable of encompassing the actual world in some sense, at least at the level of the abstract language that it's using, but it is radically underdetermined by the world in some sense. It encompasses much more than the world, and you have this problem of selection. This is somewhat related to what you said subsequently, after I mentioned saltation, where you mentioned these leaps in cellular automata. You were saying it's a general phenomenon, which I completely agree about. Phase shifts, criticality, all of that. We all acknowledge that these are general phenomena, but the way they tend to be explained is in a Hermann Haken, synergetics way: there's a decay of the order parameters, the system goes into a more chaotic phase of its evolution, and then it's captured by another attractor. It leaps in the landscape. You mentioned a fitness landscape; it could be an energetic landscape, whatever, to another attractor, another basin of attraction, and it ends up there. And now the new non-equilibrium steady state that defines it is there for it. This comes up when Karl Friston talks about these things a lot as well. You mentioned the FEP in passing; his contributions to the previous Platonic space discussion, and a discussion I had with him and Mike and Chris Fields on Mike's channel a while ago. The real question we're butting up against in this discussion of the platonic space is: where do those attractors come from? If you can explain all the behavior in terms of attractors, these models coming out of non-equilibrium thermodynamics have historically relied on, as physicalist models tend to, a predefined space where the attractors are essentially already there. 
Then you can model the evolution of a system through that landscape and it's captured by this one or captured by that one. What such a model struggles to deal with is the actual genesis of those attractors. So, again, the question of the genesis of forms arises. It's really the same question.</p><p><strong>[1:16:15] Unknown:</strong> That's what religious metaphysics literally does in every single form of persistent theology. The major monotheisms plus Hinduism, Buddhism, Taoism as well culturally are the biggest and most persistent structures. They all do that in their metaphysics. They all have, in the language of those traditions, their own world of forms and their own way of that structure. It normally got formalized maybe a thousand years after the tradition started, and philosophical traditions start to deepen these spaces and talk about the evolution of this possibility space, from the one to the two to the ten thousand things in Kabbalah, from the unending infinite through some contraction to the biggest infinite objects we've got in Hinduism. It's the tattvas and Brahman and Atman and how those work together. They all have this very loose language, but that language can be expressed computationally within that model. It's basically categorical. It's saying these are infinite categories of things. It's not the whole thing, but you can now create a coherent model of how this selection principle applies to an infinite object. Because an infinite object, if you're going to posit that at the very top of the chain, which is what all these persistent things do, give or take some nuance on Buddhism, then you need to be able to walk through that process. This infinite thing must do it infinitely faster or as fast as possible, at least until in some traditions we get free will and choice. 
You can now attempt to bridge it with models where this platonic space and the coherence across not just all the religions, but Greek philosophy and aspects of the chakra system all across the world are connected. If these attractors are real and these forms are real, the linguistic construction around the names or what those attractors tell you to do as the sub-order rules for that person are less important than the existence of a structure that seems to be consistently created or discovered. Again, this creation versus discovery point is critical because the question is really, is this space closed or is it open? Now, something like postmodernism in philosophy functions this way in this model: what's my biggest possibility space or my biggest model that I can search with, that I can explore with, that I can exploit to structure my space. And we get to postmodernism, which is the mathematical equivalent of an open category, an ever-branching tree where at the very limit, it doesn't come together. Every computation is infinitely far away from everyone else. So there's no speedups at the limit. It's all irreducible, all devolves to randomness. If you were thinking about it as a network structure, I'm really picking on a deconstructivist's view of postmodernism and not constructivist postmodernism, but these convergent ideas, whether it's Plato, whether it's religions, whether it's Leibniz, whether it's Spinoza, they didn't have a way to create the architecture of those structures in a way that was coherent with maths, physics, and computational sciences of their time because the language wasn't there.</p><p><strong>[1:20:23] Michael Levin:</strong> Yeah.</p><p><strong>[1:20:24] Unknown:</strong> I think we're in a position today where you have tools in language and results in empiricism where these things get connected. 
You can now start to express an argument, not a proof, but a coherent, internally consistent structure that says we can work through how Platonic space has these forms. You may not agree with it, and it may make axiomatic or metaphysical assumptions; set theism aside, and there's some bucket of infinite information or structured space that we are discovering. That's a fundamental brute fact axiom. You go to the other view: the brute fact axiom of materialism is that there's a single singularity at the Big Bang, or the universe was always there, or a block universe, or a multiverse. So you accept different brute facts. But it's an interesting exercise because I think it speaks to intuitions and structures that we continually create but don't have process-based explanations for in any formal language. I don't mean logic. I mean this connects directly to results in maths, physics, chemistry. It's the same dynamic, not a different dynamic. I think that's what you can do.</p><p><strong>[1:21:53] Tim:</strong> I substantially agree with the vast majority of that. You're advancing a kind of perennialist thesis on the history of religion. I published a paper several years ago on creation myths and evolutionary process. I absolutely agree that all those myths can be decomposed into operators. I think that's what metaphysics is. The explicit practice of metaphysics is the decomposition of myth into operators. That's what I call diagrammatic metaphysics. Certainly that's what Plato was doing with the "Timaeus". Aristotle was very good at coming up with these diagrammatic schemas, his four causes. I substantially agree with everything that you're saying. Can the computational approach, which I hold out great promise for, move us? How far can it move us? It's a similar question to the one I asked about what I was calling a Darwinian approach: how far can it move us out of that? To what extent will it still be reliant on taking certain invariants for granted? 
To what extent could we say that is not unrelated to the physical structure of the computational object itself? It's not going to get us out. I don't think it's going to really get us out of, if one is so inclined, asking those big metaphysical questions. The other thing I point out, which is adjacent to what you were saying, is that physicalism is already a monotheistic mode of reasoning that comes directly out of Christian and scholastic philosophy. People like Newton, certainly, and Laplace developed a highly developed mathematical language for trying to bring together a certain conception of theology with their physical science. That was their project in many ways. That's what I also explicitly critiqued in my talk for this seminar, for this symposium. To your broader point, it's important to recognize the theological or mythological origins of most of these thought forms. I want to hear Matt weigh in on this because this is something that he has a massive amount to say about.</p><p><strong>[1:24:26] Matt:</strong> I know it's very, very interesting. I'm glad this connection is arising. What comes to mind now is, Tim, you and I were talking offline about the difficulty of putting some of these ideas into natural language. We're searching for diagrams. We're trying to formalize this. And yet we also want to be able to communicate meaningfully about how it changes our self-understanding as human beings. I wonder whether metaphysics can be understood as a translation of these mythic intuitions into some kind of formal operation or set of operations. But if we go back to Plato it seems to me he's never trying to translate one into the other but instead to play them off each other — let's see how far dialectics can get us. 
It's still natural language, but he was using the geometry available to him at the time, as in a dialogue like the Timaeus, to work out some of the ratios he was perceiving in the movements of the wanderers, the planets through the fixed stars, and so on. In almost every dialogue, dialectic ends in an aporia; rationality meets its limits, and then he offers a myth which in some sense illustrates symbolically, imaginatively, what reason can't quite grasp because it is inherently limited. Earlier, Tim, you said that rationalism meets this limit, and I think all the best rationalists from Plato to Hegel — if I can call Hegel a rationalist, and Hegelians wouldn't be happy about that — recognize that the recognition of the limit is already to overcome the limit. So rather than imagine we might ever get out of myth, my own orientation is more or less Neo-Platonist: I think it's very hard to think outside the grammar that Plato left us. Whether you're in the West or even in the Islamic world, there's just so much that's structured and canalized by Plato's way of thinking. At the end of the day we're not going to get out of the need for myth, and however science advances we're still going to need to tell ourselves a story about what those formalisms and the math mean.</p><p><strong>[1:27:12] Tim:</strong> I fully agree that myth has a very pressing and ongoing role. We may slightly disagree on what that role is, but I absolutely agree. I just want to throw this back to Sam because earlier you were saying that if you show someone an example they would say "that doesn't really explain my experience to me" — that's paraphrasing what you were saying — "that doesn't seem to represent my experience." 
I think one of the things we're always running up against, and this is Matt's point about what we were discussing offline in terms of trying to express these things in natural language, is that we have a bunch of different modes of expression, and you can think about them as different languages if you want. I said this earlier, and I feel very strongly about it: I can say things when I'm improvising as a musician that I would never be able to say in natural language, but they're nonetheless expressing something. There's something non-overlapping. I'm not sure this is what you're saying, Sam, but I would worry about any claim that a computational language could become a master discourse in a way — that the computational language would succeed where, say, the EEG didn't. It would certainly explain different things and it might have vastly more</p><p><strong>[1:28:33] Unknown:</strong> It's your point that it explains a different layer in an integrated fashion, as opposed to saying this is explaining everything in the internal part of the function. If you imagine an observer-theory perspective, you're this complex observer with your big cognitive light cone of all the causal history that's built you up, and you have some boundary of what you can compute, and then you have some limit that you've set in your world model of what you think is possible to compute or what you think you can see—things like myth and religion and even superstructures like fascism, nationalism, socialism. They all function as limit-setting devices to coordinate observers in that space, to try to get them to go the quickest route, or what they deem, with their limited world model, is the fastest route. So it's effectively selection and evolution all the way to the birth of myth, and you see it in how religions evolve through time. There have been loads of papers and books on this, but you start with small conceptions and eventually what survives is a big conception. 
So the role of myth is a top-down cognitive apparatus, a way to set the biggest space. When you get to the limit of that space, through the way our apparatus works, the way our brains work (we can't really explain things well beyond cause, input, function, output, cause, effect), we can't really get beyond that boundary. That's where proto-myths or these bigger conceptions of one substance, monism, or unity have utility, because they have a computational function in the way a bounded observer computes limit objects. They can actually compute it. They can say, "oh, this infinite thing I can't compute is equal to one, in order to get my computation to compute." That's a very trite example, but that's the rough idea of how these things function.</p><p><strong>[1:30:44] Unknown:</strong> I wonder if it would be useful, in the context of being interested in explanation in particular, to pin down and distinguish different types of explanatory why questions or different types of targets. What I often see in my space is an interest in capturing what's distinct about genuine, legitimate explanations, and what doesn't count as an explanation. There's an appreciation that not just scientists but humans in everyday life ask different types of questions about even what we think of as the same system in the world. We ask different questions about gene expression, or pick the physical stuff of interest, but there are different types of questions. Sometimes we might ask a causal question or a functional question or a question that requires some kind of optimality or efficiency answer. We think of those as very different types of explanatory why questions, and we think of explanations as answers to those questions. The frameworks that were very nicely listed out — mechanical philosophy, forms coming from Platonic frameworks — are sometimes pitched as associated with different types of questions. 
But we think you can't give an explanation for something unless you specify a well-defined explanatory target. There are very different types of targets showing up in these discussions, and we think of them as different types of explanatory why questions. I wonder if it would be useful to distinguish different types of explanatory targets. One of them is "why does this form exist" versus "why does it change the possibility space of what can happen" — those are very different questions, and a standard causal explanation isn't wired up to handle that possibility-space question in the standard way we think about causal explanation. It also relates to the challenge of words and terminology, because "mechanism" and "mechanical philosophy" mean about 800 different things to about 800 different people. Getting precision for what we mean when we say "constraint" or "mechanism" — one way to start wrangling that is to distinguish different types of explanatory targets. It's interestingly challenging to find the right term to start with before you even unpack it. Metaphysics is coming up too. That's going to mean very different things to different people. I'm interested in thinking about all of those more, but we don't think there's just causal explanation. There are lots of fascinating debates now about non-causal mathematical explanations and functional evolutionary explanations, which are viewed as distinct from standard causal and mechanistic explanations. Lots of interesting, complicated things. Just this question about the potential use of distinguishing different types of explanatory targets.</p><p><strong>[1:34:46] David:</strong> I think that's a really excellent question. I would go with more of a pluralistic approach. Has someone mentioned Aristotle's four causes? The way I would look at it is that it's what's most useful for guiding the particular research programs and experiments you're doing. With Michael Levin's work, I've had some conversations with him. 
The move toward teleological and functional explanation is driven by pragmatics. It provides a certain kind of guidance for hypothesis testing and model formation that's very useful. When you're stuck with causal explanations of what he's trying to deal with, it's like trying to explain how a computer works without allowing yourself to talk about software. This gets back to Sam's discussion of computation. The whole computational angle on this is that what makes an explanation useful is partly how we are able to use it, manipulate it in our minds, understand it. Things get to be so complicated that we can't deal with them.</p><p><strong>[1:36:31] Unknown:</strong> I think one way to understand a strategy scientists use to manage complexity is that they pick explanatory targets that are precise and they specify them in a way that's very narrow. That anchors what they want to explain, and there's lots of detail that they can now set aside: that doesn't matter, that isn't a difference maker for this target. But then what can happen is that sometimes they stray from the target they started with, or we try to lump everything and the kitchen sink into the explanatory target. And you just can't; at least the way I think of explanation, you can't give an explanation of everything about a system. There isn't a complete whole explanatory target; it's not even a well-defined question. It's only once you specify the target that you could ever give an answer. But sometimes it's hard to define that target in a way that's well-defined, and then it's hard to stick with it. We might start by saying, I'm going to give an explanation for why this form explains this set of potential outcomes. And then someone asks, what explains the existence of that form? We've changed the explanatory target now. You're asking a new question. We've got to change the goalposts. 
Or there's this interesting attempt to put it all in: let's give the whole explanation. And there are debates about the standards that a kind of explanatory target should meet, and also about what notions of causation are useful. I definitely like the pragmatic angle. I'm not sure Aristotle's four causes are what scientists currently use or what are going to get us reaching the goals we want. So it's also a question of which of these frameworks we want to use, and whether we need to develop them or change them or add to them.</p><p><strong>[1:38:57] David:</strong> I could see having a whole zoo of explanatory frameworks, explaining things in terms of different phenomena and different levels of organization. We've seen that in biology: at all the different levels of organization you can explain things. So I'm perfectly happy to be very pragmatic and pluralistic about that. I don't see anything all that wrong with it either. You mentioned getting things very simple and then finding the problems when you do that. Sometimes that's what you need to get a program going. Look at behaviorism in psychology. It turned out to be badly wrong about a lot of things and short-sighted, but for a while they had it going pretty good. They were able to get a lot done with just a very simple way of explaining, say, animal behavior, and made a lot of progress. After that progress, people said this doesn't explain this, it doesn't explain that; you need to go beyond behaviorism.</p><p><strong>[1:40:17] Tim:</strong> I think it's such an important contribution. And I think we end up potentially talking past each other or muddying the waters continuously to the extent that we don't get clear about the kinds of questions that we're trying to ask, recognizing that researchers from different disciplinary backgrounds may have very different default modes of explanation. So as a biologist, one might think nothing in biology makes sense except in light of function. 
If I want to know why an organism has a trait, the character state that it has, I might need to appeal to a functional explanation in order to feel that I've explained it. But a physicist might not have the intuition for that mode of explanation at all. We can explain it at this lower level, and maybe that would correspond to a causal explanation. The two intuitions can just be gliding off each other. And this is one reason why biology isn't reducible to physics in some important sense, because we deploy very different explanatory modes. We ask different kinds of questions. But one of the things that happens a lot in this specific context, in my opinion, is an appeal to instrumentalism: this is the most useful approach, and that's what adjudicates whether I employ it. But then in these broader conversations, there's a subtle sliding over into "now I'm talking about the nature of reality." Because it isn't clearly specified, I can't necessarily tell when we've moved from "this is a useful methodological approach in some scientific domain" to someone making a claim about the nature of reality as such. To the extent that those things get muddied, we have a lot of problems. We've had problems historically and we still have problems with the reification of a methodological stricture. For example, certain things have been methodologically excluded so that science can proceed in a certain way, so that we can ask very clear questions. But then there is a tendency, shaped by thousands of years of myth and attempts to understand our status and relationship to the world, to forget that that was a methodological exclusion. We end up saying metaphysically that's just epiphenomenal or that's not a thing, that's just woo-woo, whatever it is, and we don't notice. Lauren, in an extended way I'm saying thank you for that contribution, because it's incredibly important and something that I think about a lot. I need to read your book, by the way. 
Different modes of explanation within biology are very important to me. Even the basic Dennettian distinction between the how come and the what for is really important for us to get clear on. When I brought up the little typology at the beginning and said mechanical philosophy and Plato and so on, I did say that without unpacking exactly what these mean, because that would be a whole presentation in and of itself, but it then becomes really important. We've got a few options here. Now we do the diagramming. How do these operators work in each of these things? We can ask, what does each of these potential modes of explanation afford us? What can we not ask when we're thinking in this way? I agree.</p><p><strong>[1:44:18] Unknown:</strong> I wonder if one way this can help too, in presenting work to audiences for the first time or to audiences that are critical, is to suggest that it isn't intended to explain everything. It is intended to explain a certain kind of thing, a certain kind of explanatory target. Sometimes there's this criticism of "it doesn't do this." That's fine. It's not supposed to. And if you expect that there's a single framework that should explain everything, that's not an accurate picture of what scientists are doing and the massively complicated and different types of questions they ask. So it can be protective, and maybe it can also satisfy that audience, because we're not saying this is the way to do all explanations, but it does this thing. I wonder if there's a way to specify. I appreciate the assumption that science and the methods that we use and the utility element should be of a certain kind. I wonder if it's related to reductive assumptions too, where the way you understand everything is by always going further down. 
I wonder if there's a way to specify the goals that are associated with these explanations, such that you could say it is useful for these goals, even if it's not useful for those ones you're interested in.</p><p><strong>[1:46:00] Tim:</strong> But valid goals, right?</p><p><strong>[1:46:03] Unknown:</strong> They could be. The hard part is arguing about the goals. The easy part is that once you fix them, we can say my approach gets you to these goals in a more objective way, and yours doesn't. I have fascinating discussions with scientists who think of causation as the only way explanations work. And so they want their model to be a causal model. It's dynamical; we think of it as explanatory. It's of course very informative and useful. They want it to be called a causal model because that word means it's a real explanation to them. So there are really interesting issues with the fact that these words have a status. Dealing with that is non-trivial and fascinating. Sorry, Matt.</p><p><strong>[1:47:03] Tim:</strong> Baggage, right? Philosophical baggage, those terms.</p><p><strong>[1:47:08] Matt:</strong> I love that we've ended up here because to me, this speaks precisely to the importance of distinguishing between metaphysics and the special sciences, where each of the special sciences is trying to offer a domain-specific explanation based on a very specific question or problem. And I see metaphysics not as really engaged in explanation, but rather descriptive generalization. So looking at what all the special sciences have found and the sorts of explanations that have very often proven instrumentally explanatory, in the sense that this helps me make predictions and control the domain-specific phenomenon that I'm interested in as a scientist. 
Metaphysics then tries to generalize across what are the categories that would apply across all of these special sciences, not to seek explanation, but description that's general enough to be inclusive of what all the special sciences are doing. And so that helps us avoid any special science saying, I found the one cause to rule them all, and now I can explain everything else. That's a bad form of metaphysics. That's metaphysics as explanation. Whereas I would say we want metaphysics to remain descriptive generalization, not explanatory. Because when you're pointing out, Lauren, an explanation very much depends on the question you're asking. There's no global explanation, or at least I think we should be very suspicious of the idea of a global</p><p><strong>[1:48:44] Unknown:</strong> explanation. I think there's a language point here about what we talk about when we talk about metaphysics as these overall, really huge, overarching general points and these huge questions that are unanswerable. And then what we can formalize in a common language. So the layer down from that. So the world of these causally effective abstract objects or these attractors. And I think the interesting thing is that there are many different frameworks, formalisms, and theories in all the hard sciences. But generally, they're all expressible in computational language. And so when you comport a map of those things with a common language, there's some non-trivial benefit because typically the structure of science, at least in the 21st century, is a lot of people work on the edges of a discipline, pulling out the frontier of whatever their specific explanatory target is. 
But by joining the language up in a single map, there are a lot more low-order, easy-to-exploit computational free lunches from copied equivalences across domains and across different formalisms that might get you deeper explanatory power within that graph, or at the edges of the graph as well, because it unlocks something in some other part of it. And so, here, what's interesting is that if you comport the language of metaphysical systems, not the overarching question, but the systems they describe, you can also map that in a coherent fashion with the computational expressions of all of those theories, in a way that is perhaps not an explanation, but a structural architecture that can hold those things together so that they can be probed in a more joined-up way than with the 20 different mathematical languages we have for lots of different things in physics and lots of different things in maths. And that's, I think, part of why pure math is really valuable, because ultimately they're determining the bounds of that structure, or the operators for that structure that are most universal, most useful to enable that detailed mapping from the bottom back to some universal expression of that</p><p><strong>[1:51:14] Unknown:</strong> language. I agree. And I would like to ask you what you think of, let's call them, pre-linguistic conceptions. Because, as far as I understood it, basal cognition also has this mechanism — it's synesthetic, but some would argue that this is the basal mechanism for perceiving, for example, patterns in space without direct observation. 
But there is more to it, such as the potential to find new ways to express this, because otherwise one can always fall into Gödel's prediction that whatever we try to describe and not explain will have some sort of blind spot to something that may be rather relevant to our quest.</p><p><strong>[1:52:31] Unknown:</strong> That blind spot is always there because you're ultimately coarse-graining. You're going to get a very lossy representation of a big meta object that is pre-linguistic. When you think about pre-linguistic structures, we've talked about them in psychology, and Jungian archetypes are the typical example that people use. But they're meant to be big things; every decision you make is a composition of them. In this language, they're highly causally effective. They're a structure that is always present in the function you're running as an observer when you're figuring out how to use your internal model to get to wherever you're pointing in that space. When you go down to the level of basal cognition, you think of very young babies and how they can make out shapes only in black and white. Why? Because that's simple. It's the basic distinction, the basic binary distinction that you can get to start building a world model that's stable. And so that's how that complexity gets constructed. You have these very, very highly causal categories of things that you identify first that are pre-linguistic. And then they're scaffolded with linguistic conceptions, or more detailed or fine-grained conceptions of those objects, as you create more equivalences and as you go through that process. So you handle it in this mapping as a domain. It's a domain where the computational object has lots of coverage over the lower domains or the subcategories. If you're carving up the domains, you do it by negation. You exclude certain informational objects that are more complex, i.e., that don't meet a threshold. 
And then you work down to the most fine-grained and most specified part of that structure where the most rules or the most computational rules have to be on for anything to happen, which is the real world we live in today. This domain structuring or this foliation of this computational structure is one of the things that is starting to come out also in empirical results around things like IIT, and they did a decomposition of it into four layers. You're seeing it in tests of how LLMs map spaces. Everything maps in layers and pulls together. You see it in brain regions and tests around which parts do what and where they come together. This dynamic feels like it's again working all the way up where you're constructing very simple, very few primitive objects first. As we explore and exploit those objects to explore more of that space, we get to the boundary where we live today in the present moment, where we're effectively doing that. We're either exploring or exploiting some object in our causal history that we've already got utility from to make use of it. It's a computationalist model, which can sound quite cold, but it implies that those things actually exist and are real and they matter. The fallout of that hypothesis is that things like pure relation or pure difference are incorrect. That's actually a very good exploration policy. It's not a good exploitation policy at the limit. And again, when you're dealing with an actually infinite space of possible computations or possible states, then that becomes quite important as time goes on. It might work well in finite time. What we do as systems or groups of observers or groups of people traversing these spaces is we bounce between exploration and exploitation as an optimal strategy to colonize or search the spaces or to capture as much of that structure as we can. That gives you an informational memetic angle to something physical in evolution. 
So you go from boundedness: you don't just have persistence in time as a metric. You don't just have survival, you have boundedness, how much computation you can do. Those things balance out as exploration and exploitation in that dynamic.</p><p><strong>[1:57:15] Tim:</strong> Yeah, I think that's a great way of framing things. When we say pure difference isn't enough, you're cryptically referencing Deleuze there, and you said a couple of things about post-structuralism earlier. Within that discourse, and even within the philosophy of the person we're referencing, it's important to ask what operation that conception of pure difference was looking to achieve. Certainly in that philosophy, which is a very evolutionary philosophy, there's no sense that the exploit aspect is neglected through a doctrine like stratification. There's a methodological priority being posited by a thinker like that: instead of erecting the strata or the forms as something a priori, which thus has a certain authority associated with it and cannot really be deviated from because we're always going to be recaptured by it, the function of difference there is to say that novel forms, novel strata can be generated in an open-ended way, and we will then exploit them.</p><p><strong>[1:58:47] Unknown:</strong> I think you're totally right. It's more that it's a question of finite and infinite time. There's finite game theory and there's infinitary game theory. If you have infinite time, that strategy is optimal. But if the structure is closed, again, it comes back to the structural component of the object. If the object's formalism and structure is proven wrong, that is, if an open category functions and you can generate physics from it, then this is wrong. 
But if the structure is closed, it means that at the infinite limit that strategy is suboptimal, because as you start to asymptotically approach that end point of all these possible states, by going for difference you're not mapping the simplest connections that will bring you closer to that state. So it's computationally inefficient as you approach a limit point. In finite time, it's absolutely fine. And because we're finite and get 80 or 90 years, it's probably the best strategy we've got. But at the infinite limit in the structure, it's an inefficient strategy, because it will fail to achieve convergence in the fastest possible way. Now suppose you treat this idea as a best-of-all-possible-worlds argument: if you have some infinite space of informational objects, then, because information can be expressed in the form of energy, that space has infinite energy as well; and if you abstract that to a physical explanation, something with infinite energy has to go as fast as it can. That's the idea that lets you take the jump from finite to infinite time. Would you think that strategy is optimal given that predicate? Probably not. But if you get rid of the predicate, you don't need that axiom.</p><p><strong>[2:00:49] Tim:</strong> I'm really fascinated by what you're saying, Sam, but there is an inherent tension between your using game theory, talking about strategies and what would be optimal to do, and then bringing in the infinite time scale as a way of adjudicating between strategies, because strategizers are not working at that time scale. There's a discussion to be had about conditions of closure, and then, of course, when you are running things and developing your models, you are using a system that has been intentionally closed in its design. 
Then there's a question of to what extent you are just recovering your priors by recognizing the importance of closure, if you want to achieve a specific goal in a finite time period.</p><p><strong>[2:01:53] Unknown:</strong> I think there are two points on the construction of the priors and the formalism that gets you to this closed object. It's built from the bottom up, right? It's built from a two-cell category that they import up. So it's a proven object that imports this structure. Now, it's not that it's the only formalism, but it is not an arbitrary take. This construction must work given the properties of these computational objects. I think the point on finite versus infinite time is right. In finite time, yes, this is absolutely fair: you can pick whatever strategy you want, because that's the ability to choose. But what's mathematically imposed by the structure is mathematically imposed by the structure, so it's not a preference that is in it. There's a huge difference between the limit and what we can sample. Within the light cone that we get to sample from, we get to choose a broader range of strategies than the one optimal strategy for closing that space as an evolutionary agent. And that is implied by the idea of computational irreducibility. We can't compute it; we don't know that that's the best way, or that the structure has to close, because we can't get to that boundary. Therefore, that gives us real choice at the boundary, in which strategy we choose to exploit that space or discover structure in it. So I think you're absolutely right. It's a function of the math, but not a function of the point where you're at.</p><p><strong>[2:03:42] Tim:</strong> I'm going to have to rush and eat because I'm having a blood sugar crash, but I haven't had brekky yet. 
I also just wonder if there's a constructivist argument that can be brought to bear, a constructivist mathematical argument against the function that the infinite limit is actually playing.</p><p><strong>[2:04:06] Unknown:</strong> Constructor theory is doing that. Constructor theory is still using an infinite base object, right? They're still using an infinite multiverse as a base object. I don't think they've specified a geometry of that structure yet, or something that imports geometry. It's a metaphysical assumption at the moment, but the common thread is that they're importing some structure and building it bottom-up in a way that doesn't require that endpoint.</p><p><strong>[2:04:40] Unknown:</strong> This is just one construction. The other construction is totally valid and being worked on by some, and their work is unbelievable. Those ideas are pretty critical in translating the minimal observer model that the physics project team did into this category-theoretic construction, because it's all about possible and impossible transformations. You're absolutely right, it goes both ways.</p><p><strong>[2:05:05] Tim:</strong> For sure. I'm really looking forward to reading your paper. I'm gonna look it up. It would have been really interesting to have the conversation that Adam Safron gestured to right at the beginning about convergent evolution, because you're relying on, or continually deploying, a notion of convergence. It would be interesting to compare and contrast that, given the context of this discussion of Platonic space in biology (or stimulated by biology and reaching beyond), with the way convergence has in fact occurred many, many times. I always like to bring up the example of venom evolving more than 100 times in actual biological evolution at finite time scales. 
Then we start to look at what is the role of history, at finite, definable but vast time scales, in stimulating those convergent events.</p><p><strong>[2:06:09] Unknown:</strong> So the paper, I did an extension or an application of the paper to some of these ideas where convergent evolution is effectively finding some optimal point, some valley or peak in a fitness landscape that's optimal for the entire landscape for that class, given their computation, that computational potential. And so those things become very important in saying, does this contention align with empirical results? The contention here is that the hints are starting to be there, not just in historic work on convergent evolution, but more personally in Michael's work, where you're getting this idea of some structure of space. Some subcategorical object, some low-down information object that might be sampling from something bigger. And that may be eventually to the level of geometry in maths or the actual shape of the object, the properties of that object. It might stop somewhere else. But what's interesting is we can now probe that space in different domains, just in words and in pictures and see how that space maps. And the mapping may well be totally different to what everyone thinks, but the fact that mapping is now coherently possible is, I think, one of the most exciting things that will happen in the next 10 years of science. I think more exciting than whatever's going on in string theory.</p><p><strong>[2:07:52] Tim:</strong> I'm sure I agree. Fascinating stuff, Sam. Thanks, everyone. Really fascinating discussion. And I hope to speak to many of you again.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Discussion #1 at the Platonic Space Symposium</title>
          <link>https://thoughtforms-life.aipodcast.ing/discussion-1-at-the-platonic-space-symposium/</link>
          <description>Contributors to the Platonic Space Hypothesis discuss math, identity, abstract realms, attractors, simulation, mind and agency, exploring how content, creativity, scale and boundaries might fit into a unified view of reality.</description>
          <pubDate>Wed, 21 Jan 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 6971487f7fe50a0001b04c40 ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
<content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/oL4G2_Oznk0" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/d7d82326/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour 40 minute discussion among contributors to the Platonic Space Hypothesis (<a href="https://thoughtforms.life/symposium-on-the-platonic-space/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life/symposium-on-the-platonic-space/</a>)</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Math, identity and realms</p><p>(16:08) Convergence, abstraction and attractors</p><p>(34:05) Attractors, stress and observers</p><p>(41:29) Realms, impossibility and simulation</p><p>(51:40) Simulation, explanation and understanding</p><p>(01:04:39) Mind everywhere and agency</p><p>(01:19:19) Content, communication and creativity</p><p>(01:30:42) Boundaries, scale and space</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a 
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Chris Fields:</strong> I'm happy to raise the question I raised in the registration form, which was a Gödelian question. Since as soon as we want to achieve some level of precision and definition we're forced to use mathematics to talk about our own states and our own interactions with the world, however you want to define that, what are the consequences for our view of mathematics of this fact that we have to use mathematics to describe ourselves and our states as physical systems, our behavior as physical systems, our physical interactions with our environment? I have to use mathematics to describe my interaction with all of you, for example. How does that bias, if it does bias, our thinking about what mathematics is? What it means to claim that we are entities that are not only amenable to mathematical description, but for which mathematical description is required for a certain kind of discourse, the sort of discourse that we regard as science or as explanatorily productive?</p><p><strong>[02:23] Olaf:</strong> If I can extend that, Chris. I've seen a few talks address this. 
How much of mathematics is internalized and used as an extension of our senses, versus something that is completely usable but external, or very low bandwidth with our subjective awareness and computation? How do people feel they are on that spectrum? I think it extends what Chris is asking.</p><p><strong>[03:20] Michael Levin:</strong> Well, I hear two different but related questions there. One is: if we take the thing that we currently identify as math, to what extent is that applicable to the things that we're interested in here, and where does it fail to capture the things we're doing when we relate to each other? So that's one question. But the thing that I keep coming to is, do we in fact have a fixed thing where we know "this is mathematics and here are the borders of it"? And if you go beyond that, you're somewhere else; it's not math, it's something else. Or is it that our attempts to formalize interactions between agents are actually stretching math? Is it changing the definition, changing the borders of what we thought? Maybe certain things that weren't thought of as part of math then have to become part of math. So is that changing the definition? Or is this a fixed thing? Then we can argue about whether it's applicable. And if it's not applicable, then we have to pick something else, some other kind of formalism. I don't know what you guys think of that.</p><p><strong>[04:49] Chris Fields:</strong> I should say I'm neither a mathematician nor a historian of mathematics professionally. This is only an observation from the outside. Certainly, if one looks from the outside, how math has been described by humans has changed quite a bit with the introduction, for example, of non-Euclidean geometry. This was something that no one even imagined up to then. There are now many kinds of algebras in addition to what was originally regarded as algebra. When one reformulates mathematics in set theory, it looks different. 
When one reformulates mathematics in category theory, it looks different. It becomes much broader. Many things that in earlier formulations looked like distinct entities or distinct systems or organizations turn out to be notational variants. You say this thing and that thing are in fact exactly the same thing. All we've done is redescribe them in a different language. It seems to me from this outside perspective that how we think of math is constantly changing. That doesn't address the question of whether there's some fixed entity called mathematics somewhere outside of our conceptualization at all.</p><p><strong>[06:58] Michael Levin:</strong> By the way, about what you just said on notational variants: when we do notice that this thing is actually the same as that other thing, what's the meta level there? What are the tools that you have to take on board to even be able to make that judgment?</p><p><strong>[07:23] Mariana:</strong> So, I would say there are a lot of tools you can use, and there are weaker or stronger forms of proofs. You can prove by contradiction, which is fun to do. But in the end, you're reasoning. You're reasoning with agents, because ultimately you're going to publish it and you're going to have a community review it. Mathematicians in principle are also the first ones to say, I made a mistake: I thought we could do it this way, but after all, I thought about it, and I found a loophole. And so it's almost like a continuous dialogue of agents' reasoning. But then, of course, you have representation tools that will help you verify, and ultimately you can also have geometrization, for example, of two objects, and then you can see that they relate by some measure, and this ultimately can work in favor of the proof or against it. But I'm with Chris Fields; I think it was a good intuition. But I also ask why we are asking this question: what's the assumption? 
I want to tackle the assumption, perhaps work it from a different angle. Is it related, for example, to patterns or to our notion of patterns because they find expression in our mathematics?</p><p><strong>[09:20] Chris Fields:</strong> If you're asking me, since I posed the question at first, one of my major obsessions is the notion of identity. And in physics, that's the notion of identity over time, since we parameterize this thing with this parameter we call time. But without this notion, physics stops. There's nothing to say anymore. And indeed, lots of other things stop. Psychology stops because we can no longer talk about memory if we can't talk about identity. And identity is a key assumption, or an axiomatic assumption of category theory, that there's an operator that we call identity. And without that notion of identity, mathematics stops. I suppose the underlying question, or the question that underlies my question at the beginning was, what is this notion of identity? What does it mean that we try to formalize it in these various ways?</p><p><strong>[11:00] Michael Levin:</strong> This is also something that's very fundamental to what we do as developmental biologists, because as developmental biologists we really want to understand what does it mean that you have an embryo, which is the same through some period of time and that things happen to it, but yet this is the thing that's undergoing change. This is a very fundamental where does it come from? How does it come to be and so on. The only reason I bring it up at all is that it seemed to me to be a simpler domain in which to try to make the claim, which some people at least already believe, that not all facts are physical facts. If you try to do that in biology, it's really hard because it's very complex. People will say there's some mechanism you just haven't found yet. That's probably always true because there's always more to be discovered. 
But in math, other people for a really long period of time have already made the claim that there are facts that are not derived from nor changeable within physics. There's this other domain of important information that exists. That was my strategy: we already know this is the case, or at least many people believe this is the case. Now we can ask the question of whether some of these things are also relevant for biology, for behavior science, and so on, and move on from that foundation. That was my motivation for mentioning mathematics at all, because at least there we have a bedrock where some people already bought into the idea that not all important facts are facts of physics.</p><p><strong>[13:01] Mariana:</strong> I agree with you. The question was to raise the assumptions so that we could discuss them. Chris spoke of identity; I'm really fond of this topic as well. It is interesting to think of identity all the time and everywhere, but in no particular place. These two change. I know this may seem hard in biology to think of things that do not happen in time or that happen all the time. It's more like it happened all the time. They are the same all the time. So it's within a time range. In development you see this a lot. You would have an embryo. In principle it will grow to Stage 22; it will have 36,000 cells as an open embryo. So this happens all the time. When there's a variation, we note it down. But in principle this happens all the time. Time can be expressed also from a time-independent perspective. Sometimes this is helpful because if there is a structured space of patterns independent of us, then our assumptions of time may be wrong, and this can hinder our understanding of their development if they do. What would it be like to develop not in time? This also ties back with the notion of memory, that memory is a temporal thing. Suppose that this structure, space of patterns, is a space where memory is retrieved from. 
All states that already happened and will happen live there. What you have is agents that loop around. This is a hypothesis. Depending on a local state, they will fetch preferential points in this structure. If you want to call this a temporal structure, there are physical models that could do this. They may not represent the standard model, but they exist and they're mathematically influential. Then we are no longer speaking of time that passes. You're speaking of agents that are atemporal. I find this notion interesting. I don't know what you think about it. Another thing: we speak a lot of physical facts, and I would like to bring to the table this notion of relational facts. Both in mathematics and in physical models, what we are asserting is relational facts.</p><p><strong>[16:08] Olaf:</strong> How much do you think mathematics and related areas, or intersections of your own fields with mathematics, are converging or diverging on the mathematical level? And how far, if they're diverging, can it stray apart from the current sets of axioms and concepts? I'm saying this because I see mathematics as this historically negotiated corpus — from Euclidean geometry to algebras, analysis, and category theory. It feels inter-subjective and tends toward convergence in most cases. In my field, neuroscience, we see connectivity matrices becoming less interesting and have to converge toward higher-level abstractions to make something — the most exciting to me, at least. So do we tend toward divergence or convergence?</p><p><strong>[18:02] Mariana:</strong> It depends on what your parameters are. I've dwelled a lot on this notion because I feel it's very important for us and for the research program in general in this distinction between abstract and concrete. There are good proposals from logic to speak of this in terms of properties it sets, but I ask, for example, in terms of a combination. 
When you abstract something, ultimately it really feels semantic, but also, if you're going to look under the hood, it feels that you're saying less to address more. Suppose you have a high feature density. This speaks to the corpus. Feature density means you can distinguish something in your data set or in your model, and it is unique. Suppose you have lots of these unique features that don't repeat. This would be very rich, and you would have less redundancy, for example. Suppose the other way around, where you have one feature that repeats 360,000 times. This would give you another kind of ratio. This is my question for you: have you ever thought of a spectrum between something that is abstract and something that is concrete along these lines, if you were to place things between these endpoints?</p><p><strong>[20:33] Michael Levin:</strong> Having seen the different talks and everything that everybody has been saying in the symposium, what do you guys think about how many different views we have here? Obviously, everybody's got a different perspective. I'm going to send out a table for people to comment on, and I'm trying to think of what the columns of the table should be, the primary axes that people would have different opinions on in this collection of thought. How many different views do you think we have, and what does the conceptual space look like? What are the primary axes where people agree and disagree? Just to give you an example, one basic one that comes up all the time is people say, "I agree with what you said about ABC, but I really don't like having a separate realm." Some people like a monism where everything is in one space, and they really don't like the idea that there's a separate realm; in some cases, we can argue about what it really means to be a realm as opposed to something else, some weaker form of it. 
But that's one axis, I think, where people differ: to what extent are there multiple realms? There are probably other axes. I'd be curious to know what you guys took away from all the discussions as to other fundamental dimensions.</p><p><strong>[22:36] Olaf:</strong> I have one other axis, which is I think something like physics-boundedness, being constrained by laws of physics or not, as in a fixed set of rules. And it feels like my talk is on the extreme, on both ends of this, which is interesting. But something about dependency on substrate. Let me continue those questions. Let's assume that everything that we want from the representation exists. What then?</p><p><strong>[23:32] Michael Levin:</strong> What will be the end goal? Because what we are representing is some subset of the real world. Let's assume that we have everything in there, technically create two worlds, then what? Or if we have some reduction of concept in that Platonic space, then what can we do with this? What are the best options to do with this?</p><p><strong>[24:08] Mariana:</strong> I would say map it. Depends on the assumptions. But if we have agents that can come and go, then in terms of experiments, as you guys have shown, it is possible to have a source and target map. This would be what I'm most interested in.</p><p><strong>[24:34] Yvette:</strong> For you, the fact that we succeeded in mapping everything that we would like to do in the world is good.</p><p><strong>[24:48] Michael Levin:</strong> Success of that test, of that theory.</p><p><strong>[24:54] Mariana:</strong> No, but it would give us some truth bounds for experimental means or for managing expectations for experiments. What would be yours, for example, your end goal?</p><p><strong>[25:16] Olaf:</strong> I will try to predict something that I didn't put there.</p><p><strong>[25:23] Yvette:</strong> There is object A and object B, and object C is inferred from all of those. 
I know that, but I don't know if my tool can do that.</p><p><strong>[25:39] Olaf:</strong> I'm hearing, in the metric of verifying whether it is or it's not, a subjectivity notion; but you can hold it, or you can zoom out to an objective, godlike view of those agents that you just mentioned as well, Mariana. I think that makes all the difference. If you switch to math that we haven't invented yet, or that is alien math, then whether we consider that to be part of the subjective perspective that we are holding right now is, I think, important. Maybe that's another axis for you, Mike.</p><p><strong>[26:27] Michael Levin:</strong> For me, what I'm really interested in is mapping the space, but also figuring out what I call the free lunch: what does it actually give you? Because there's a wide range of options. It might just give you static patterns: here's the value of E, and that's all you get. It's just there. Or it might give you dynamic behavior or algorithms or compute. What's the range of complexity that you get out of it that you didn't put in, and where? And so we're doing some things in our lab, giving bodies, whether physical or simulated, to simple mathematical objects to see what they encode. If you treat them as behavioral propensities, what do you get? But more generally, that has implications for evolution: to what extent can evolution exploit things that it pulls out of that space without having to take the time to micromanage them and evolve all the components? What do you get for free? You get some stuff, as Stuart Kauffman showed us, for free. But my suspicion is that's just the tip of the iceberg, and you actually get a lot more. And ultimately in the lab, we need to be able to say: here are some anthrobots. There's never been selection to be a good anthrobot and to do all the weird things that they do. Where do their specific properties come from? Why did we not see this coming? 
How could we have predicted it? What are the options? And what's the relationship between the thing you make and the stuff that then comes through? Can we tweak certain things about the anthrobots if we want other types of patterns to come through? That's what I'm interested in: what do you actually get, and what's the relationship between the interface that you build, whether that be technological or computational or biological or some combination thereof, and what is going to come through that you have no idea about.</p><p><strong>[28:52] Mariana:</strong> One of the things that I've been thinking about a lot is exactly the work that you do, and precisely this mapping. It seems to me that this mapping, based also on your experiments, under the assumption that developmental states are pulled in, can also allow you, in very practical terms, to address, for example, regenerative procedures at a late stage. You don't need to catch something at an early stage, because you already know how to pick up that pattern in case there is a topological defect that, in your sense, is a developmental stage. This is why I find the mapping really relevant. It might not be the best approach, but when you speak of free lunches, these are the free lunches, the low-hanging fruit that we could use. I've been thinking a lot about it. It seems that when you speak of perturbations, or abrupt perturbations, things that were unforeseen so far, they then output these developmental novelties, like the anthrobots. I'm very puzzled about the tail onto the flank. Why not the tail? Why not keep the tail? It's so much cheaper. Why reject the tail? Why build the limb? I know there are some changes that are more helpful or more useful. Sometimes I wonder why; it just feels like, for example, a limb is more complex in terms of edges than the tail, right? In physics, you would call it relational mechanics. 
There are some proposals exactly in these terms where, whenever there's a chance, you go for partition. Go for something that is different; this would be the measure of complexity. So complexity is just a relational measure between you and your neighbors around you. I like it.</p><p><strong>[31:41] Michael Levin:</strong> It's a very interesting question. I don't want to dominate this thing, so please, Yvette, Juan, Brian, Carl, please chime in. There's this question of at what point the thing gives up on the standard implementation and shifts over to something else. There's a standard, you can call it an attractor, but I don't think that's all it is. There's a standard version of an embryonic body plan that it will try to hold to. If you deviate it, it will work pretty hard to get there. If you put on an extra tail, it will try to make it a limb, and things like that. But at some point, you can push it so far that it basically says: forget it, I'm now an anthrobot. I'm not going to try to make a human embryo. This is my new life. One of the ways that we're trying to address that is to look at stress markers, because we have a project looking at systemic stress as a measure of distance to your goal state. There are scenarios where the tendency to try to reduce stress is what pushes you to get back to where you need to be. So we're interested in this question of, okay, are Xenobots and anthrobots stressed out about being those things? Or at some point do they adopt that as the new set point? So being a Xenobot is my set point; I'm now a great Xenobot, so my stress can fall. That's an experimentally detectable thing. We're doing those measurements. That's one way of doing it. In general, I think that's a great question: at what point does it shift? And I don't think it's about utility or anything like that, at least certainly not in the short term.</p><p><strong>[33:41] Mariana:</strong> So I misunderstood.</p><p><strong>[33:43] Michael Levin:</strong> I don't know. 
We don't know how a lot of these decisions are made. There's so much that these systems will tolerate and try to accommodate to still get back to what they need to be. But there are also scenarios in which they just flip to something else. Carl, please.</p><p><strong>[34:05] Carl:</strong> Some wonderful questions there. I wanted to pick up on this notion of stress and attractors, but try to frame it in response to some of the questions that have been rehearsed. So going right back to Chris's question about the nature of maths: I noticed that he used the word dialogue. There was also Mariana's notion of discourse, and then we had Olaf's negotiated corpus. I think Olaf speaks to maths just being a particular kind of co-constructed language that has an enormous amount of explanatory power in terms of accounting for things accurately with the minimum complexity. In so doing, the question about convergence touches upon, Mike, what you were asking about: is there something else, or is this just another version of the same thing? And if you pursue the notion that the right kind of language and the right kind of maths is going to explain everything as simply as possible, but no simpler, then you're looking for exactly that convergence, and I think that speaks to a lot of what people were saying in terms of maths being a continual process of basically model building, a co-constructive model building. The notion of identity, in my world, would be self: the self you find in self-organisation. It would be exactly the same thing you find in information theory in terms of self-information, right through to self-evidencing and the free energy principle. I mention that because stress is mathematically simply the self-information, or the implausibility, of finding this kind of thing away from its attracting set. 
So coming back to the attracting set, the notion of a pullback attractor probably has everything that you need in order to accommodate all the questions that I've heard thus far. That really commits you to a particular kind of maths. It probably wouldn't be maths, it'd be physics. But certainly, mathematically framed, you would be seeking out that convergence that people were talking about, the kind of maths that allows you to explore all of the issues we've been talking about and also provides that nidus of convergence that will enable a certain consensus. It strikes me that the notion of an attracting set has everything that you need. Think about Mariana's questions about things that recur in time, memory, persistence in time, having characteristic states. You can express all of these things in terms of attracting sets. So you need the physics of attracting sets, and that's basically the pullback attractors. Within that you can now define self. You don't need to be axiomatic and assume the existence of identity, in the sense that there is a self that is constituted by the attractor, and everything else follows. To bring that to closure, the stress is a measure of the distance, or how far outside your attracting set you are, and what will happen is you'll go back to your attracting set. That was my breathless summary of the thoughts that were induced by the conversation.</p><p><strong>[38:02] Michael Levin:</strong> Brian.</p><p><strong>[38:03] Brian:</strong> I just wanted to add. From this notion of perturbations, I think one of the issues with Platonic spaces that I always grapple with is whether this is all observer dependent in some sense. And I think the notion of perturbations is a nice way to think about making the observer aspects as weird as possible. In the computational realm, you can do this.
Mariana talked about this notion of experiencing time and the aspect that maybe you can actually have agents experience time in a very different way than we experience time. We already have these in the AI space; they're called diffusion models. If you've ever read the Ted Chiang stories or seen the movie Arrival, there's this kind of gap between how we personally oftentimes see time in a linear fashion. But diffusion models in the sequence space see time in a completely different way where everything appears at once. When everything appears at once, you can imagine that this is something that diffusion does where it looks at generating the entire story everywhere at once. We're now exploring this because we're interested in whether these same systems learn algorithms that we're familiar with or they learn completely different algorithms in the space of, for example, games like Sudoku and things like that. So I think we should make the observations as different and as weird as possible. That's the way to at least hope that, while we're always locked to some notion of observer dependence, we can generalize that observer dependence further and further out.</p><p><strong>[39:37] Michael Levin:</strong> That's super interesting. Can you say any more about that? What are you actually doing with these diffusion models?</p><p><strong>[39:43] Brian:</strong> We are training diffusion models to play Sudoku because Sudoku is one of those games that has a lot of computational advantages, and it's an NP-complete problem. The algorithms that we usually use for Sudoku are very causal: if you play Sudoku, it's this structure of "let me find the most constrained square, go from the most constrained square to the least constrained ones, and solve the puzzle that way." If you train a diffusion model to solve Sudoku puzzles, they solve it very differently.
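The classical strategy Brian contrasts with the diffusion model, working from the most constrained square outward, is essentially the standard minimum-remaining-values backtracking heuristic. A compact sketch of that textbook algorithm (an editor's illustration, not anyone's actual research code):

```python
def candidates(grid, r, c):
    """Digits that can legally fill cell (r, c) of a 9x9 grid (0 = empty)."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    """Backtracking with the 'most constrained square first' heuristic:
    always branch on the empty cell with the fewest legal candidates."""
    empties = [(r, c) for r in range(9) for c in range(9) if grid[r][c] == 0]
    if not empties:
        return True  # no empty cells left: solved
    r, c = min(empties, key=lambda rc: len(candidates(grid, *rc)))
    for d in candidates(grid, r, c):
        grid[r][c] = d
        if solve(grid):
            return True
        grid[r][c] = 0  # undo and try the next digit
    return False

# A well-known example puzzle (0 marks an empty square).
puzzle = [
    [5, 3, 0, 0, 7, 0, 0, 0, 0],
    [6, 0, 0, 1, 9, 5, 0, 0, 0],
    [0, 9, 8, 0, 0, 0, 0, 6, 0],
    [8, 0, 0, 0, 6, 0, 0, 0, 3],
    [4, 0, 0, 8, 0, 3, 0, 0, 1],
    [7, 0, 0, 0, 2, 0, 0, 0, 6],
    [0, 6, 0, 0, 0, 0, 2, 8, 0],
    [0, 0, 0, 4, 1, 9, 0, 0, 5],
    [0, 0, 0, 0, 8, 0, 0, 7, 9],
]
solve(puzzle)
```

The most-constrained-first ordering keeps the search tree small on typical puzzles; a diffusion model, as Brian notes, is under no obligation to discover anything like this sequential ordering.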
We don't quite understand how they solve it right now, but they definitely don't choose the obvious strategies of "let me march from the most constrained square down to the least constrained ones." It's something that almost feels random right now. We haven't done enough analysis on this yet, but the way it solves Sudoku could also lead to new algorithms in that space. The time complexity of those algorithms may be very different from the time complexity of algorithms that we have created in that space. And because it's an NP-complete problem, there's a lot of computational complexity analysis that we could potentially do in the long term from this. So we're just training diffusion models on things where we already have an algorithm that we know works and seeing what the diffusion model discovers as a different algorithm. It's also worthwhile studying in the language space because there are diffusion models in language now. You could look at the algorithms that diffusion models acquire and the representations they learn by training on the sequences that we believe are generally temporally linear.</p><p><strong>[41:29] Yvette:</strong> Hi, I apologize for just jumping in. It's my first time at the meeting, so I wanted to say hi to everyone and thank you for the invitation. I will try to join regularly now that my schedule is a bit more organized with my teaching, so it doesn't interfere. Maybe I just wanted to quickly introduce myself. I'm a physicist and I work at the interface of quantum mechanics and general relativity. I'm going to be listening for a while and then see when I can chip in with something more meaningful, but maybe just an interesting connection: one of the things I'm interested in is the interface between the classical world and the quantum world. I tend to see these as two different realms.
Maybe this is just a perspective that can be helpful because at the end of the day you could say it's all part of one whole, so why divide it? Sometimes these divisions can be useful. The thing I'm starting to see is that the classical world emerges from the quantum world. They are different realms in the sense that they follow different rules, but they interact. That's important because in the lab we can prepare a superposition of an atom going to the left and to the right. What I'm working on is the emergence of mass from, let's say, rules of the quantum world. Part of what has made the quantum world difficult to understand is that we try to put the rules of the classical world into the quantum world somehow. I was thinking about some things you've mentioned: what's physical, what's physics? Where do you draw the line of what's physical or not? We have a lot of arguments about that within quantum mechanics, because even within our community some people argue that quantum mechanics is physical and other colleagues say it's just information and a mathematical tool to make predictions, which I don't understand very well. Because we haven't understood quantum mechanics well yet, it's still very debatable what is physical. I think the question about different realms and what's outside the realm of the physical is definitely something that interests me very much. I'll just say hi, and hopefully I will be able to contribute later on. Thank you.</p><p><strong>[44:47] Mariana:</strong> It was lovely hearing you, Yvette.</p><p><strong>[44:50] Yvette:</strong> Thank you.</p><p><strong>[44:51] Mariana:</strong> I agree. I rarely use the term "physical" and don't find it very useful. There are a lot of mathematicians contributing immensely to mathematical biology, who end up contributing a lot to quantum theories or field theories.
Because morphogenesis pinpoints the questions of the first mass and how mass behaves at different scales, we have all these effective theories and models. That's half the predictive power. But on the other hand, we cannot find the physicality for them. The temperatures are off. They don't reproduce anything but that behavior. It was lovely hearing you.</p><p><strong>[45:50] Yvette:</strong> Thank you very much. Yes, I'm excited about joining this discussion.</p><p><strong>[46:04] Chris Fields:</strong> Can I go way back to one of the previous topics, which had to do with what we're actually trying to achieve both with this discussion and with science in general: mapping our space. And in a sense, one can look at it from two fairly different perspectives. One is the perspective of trying to predict what we will see, and the other is from the perspective of trying to characterize what we won't see. And if you think of mathematics as a formal system, or if you think of physics as a set of symmetries, a postulated set of symmetries, where everything else, everything beyond the statement of what the symmetries are is a relative fact that Mariana was describing earlier, then what comes out of those two ways of proceeding is a list of things that you can't do, or a list of structures that don't occur, that don't make sense. A list of no-go theorems, like Gödel's theorem, for example. And everything else is subsumed under T.H. White's law of the ants, or what Gell-Mann called the totalitarian principle: everything not explicitly forbidden is mandatory. If you can't prove that something won't exist, then you're going to bump into it somewhere. So it's a possible pattern or a possible attractor in Carl's terms. And it may be entirely unclear how to construct that attractor. But in the absence of a proof that it's impossible, you can expect it to be constructed somehow.
But that's a very different, very negative way of describing what one is trying to do, to say that what one is trying to do is characterize the impossible. Carl, please.</p><p><strong>[48:52] Carl:</strong> That was an excellent point. I'm just thinking of a more positive take on that. You could invert the problem and just assume that we want to characterise systems that can exist and then work from there. In that sense, what you are doing is saying that if a system is characterised by these characteristics, then it can't be over here in some state space. And that's just the surprise of self-information or stress that we were talking about before. And that comes for free with just writing down a density over states that can be occupied. Of course, there are many more ways of being dead than alive. It's a very small attracting set that remains there. In answer to that question, in my world, the utility of having the right kind of maths means you can simulate things in silico. If you can simulate things in silico, then you can do forecasting, scenario modelling, interventions, you can test hypotheses about perturbations to the system. This becomes really practically relevant, certainly in things like computational psychiatry, where you start to simulate people, for example, and when their sense-making or decision-making goes wrong, or any self-organising system like climate, financial services, fintech, getting the right maths and the right kind of self-organisation in play opens up a whole world of practical and important ways of intervening and testing hypotheses via simulation. The other application is you can test hypotheses about the unknowable mechanisms of the system that you're interested in, because you can't observe it directly. You can start to test hypotheses about the mechanisms because you've got the right simulation testbed.
And then finally, if it's all working, you can put it into a diffusion model and make artifacts and you can then move into the world of autonomous vehicles, artificial intelligence research, using the right kind of maths to drive and to create artifacts that somehow embellish or endorse our ecosystem and the things that we actually play with and use. So practically, from my perspective, it's really important to get the maths right, because once you get the maths right, you can now answer all sorts of really important questions.</p><p><strong>[51:40] Michael Levin:</strong> Could I ask about what you guys think about the distinction between simulation and explanation? This comes up in biology in the following way. I'll say we need to understand why this thing is doing that. And people say, well, it's emergent. And I say, what does that mean? And they say, what it means is that if we were to simulate the micro rules that are driving it, this is what we would see. And it's a regularity that holds in the world. That's what it is. I say, but what does that mean, that it's a regularity that holds? And what they mean is if we simulate the low-level rules, then out it will come. We can show that that's the case and get a catalog of these things. I'm interested in what the relationship is between being able to simulate it and thus show that, yes, in fact, that is what happens versus understanding what's going on. I was thinking about an extremely minimal case of this, the glider in the Game of Life. What does it mean to understand the glider, say the rate at which it moves or the angle at which it moves? What somebody can very easily do is show me the four steps, the cycle of the thing moving over. We can all agree that, sure enough, with the physics of this world, that is exactly what happens. Nothing more to say about that. But have you explained it or have you simulated it? 
With enough simulation, it seems like we could get the answers and not understand much of anything. What do you all think? Is there a distinction? And if so, what is it?</p><p><strong>[53:18] Brian:</strong> For me, I have a very pragmatic distinction; explanation and simulation sit on a continuum. But if you can accelerate the simulation to a point where it doesn't have to run in real time or doesn't have to run in reality, the idea is that explanation allows you to skip a lot of steps. It allows you to go way faster than a simulator would if it had to simulate the entire universe to get to the same point that you want. In the cellular automaton case, I can predict gliders ahead of time without running all the intermediate steps that require a glider to be formed in the Game of Life.</p><p><strong>[54:00] Olaf:</strong> The explanation is a shorter simulation if the usual route is much longer in terms of time and space and compute or energy or whatever you want. That makes sense to me. I wrote this thing about LLMs being frustrating because you put so much data and compute into it that you can't possibly think that this is highly intelligent because you invested so much in it. What is impressive about intelligence is that you put not much and you get a lot, which is how you phrased this, Mike, earlier. Math is impressive because you put something and you get so much more out of it. I think there is this notion of compression and efficiency that we expect and otherwise we are frustrated.</p><p><strong>[55:00] Carl:</strong> I just want to endorse that. It reminds me of arguments for universal computation, inheriting from, say, Kolmogorov complexity compression. When I hear compression, I hear maximizing the likelihood of your model. I'd add, Mike, that you can look at simulations as a glorified statistical test.
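Brian's point about explanation letting you skip steps can be made concrete with the glider itself. A minimal sketch (standard B3/S23 Game of Life rules; the shortcut is the textbook fact that a glider reappears shifted one cell diagonally every four generations):

```python
from collections import Counter

def step(cells):
    """One generation of Conway's Game of Life (B3/S23) on a set of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# The standard glider, as (x, y) coordinates of live cells.
GLIDER = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

def simulate(cells, generations):
    """'Simulation': grind through every intermediate generation."""
    for _ in range(generations):
        cells = step(cells)
    return cells

def predict(glider_cells, generations):
    """'Explanation': this glider reappears shifted by (1, 1) every 4
    generations, so for multiples of 4 we can jump straight to the answer
    without touching the intermediate states."""
    shift = generations // 4
    return {(x + shift, y + shift) for (x, y) in glider_cells}

# The two agree, but the prediction skips every intermediate step.
assert simulate(GLIDER, 8) == predict(GLIDER, 8)
```

The closed-form shortcut is exactly the kind of regularity Mike is asking about: it is derivable from, but not reducible to, replaying the micro rules.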
When we do a t-test and we're asking a question about the mechanisms or our hypothesis about the causes of some measurement or some data, we are doing a simulation. We are building a little generative model, a general linear model for the t-test, and we're simulating what could have caused the data, and then we're identifying the best hypothesis. When I talked about simulations, I wasn't talking about using maths and computers to reproduce behaviour. I was talking about the simulations used as a statement of your hypothesis about the underlying cause-effect structure that generated those data. Then you can compare different simulations, i.e. different observation or statistical models, and quantify the evidence for your hypothesis here and hypothesis there. But to do that, you've got to have the right kind of model. And you've got to have the right kind of maths that underpins that model, which is an open question and an ongoing challenge, which I'm sure we're all contending with.</p><p><strong>[56:53] Chris Fields:</strong> I'll throw in a different way of answering that question, which is to say that a simulation may reproduce some effects that you're interested in, but it doesn't force you to change your conceptualization of the effect. Does it force you to change your language? Whereas a really good explanation often forces you to change your concepts. For example, go back to 1900 and consider the question of how there can be atoms. Why are there any atoms that are stable? The going theory back then was that the difference between what we would now call protons and electrons was understood. That language hadn't been invented, but it was known that there was positively charged stuff and negatively charged stuff. It was also known that if the electrons actually moved in the atom, they would radiate away their energy and the atom would collapse, so there wouldn't be any atoms around, no stable atoms. That was the question that the Bohr model ended up answering.
Bohr's point was that electrons don't move, that electrons can have particular energies, but they're not moving, so they don't radiate unless they change their energy in very precise ways. And they can only change their energy by finite amounts, and in fact, only by particular finite amounts. So they'll only radiate by these particular finite amounts. That radiation is not due to motion. Bohr's picture introduced a radically different conceptualization of what an atom was. And that's why it was a good explanation. It made sense because it got rid of an entire problem, the problem of radiating the energy away and the atom collapsing and being unstable. But to get rid of that problem required a conceptual change, an abandoning of intuitions about motion. So I would say a really good explanation is something that causes us to abandon some intuition.</p><p><strong>[59:54] Michael Levin:</strong> I like that.</p><p><strong>[59:55] Chris Fields:</strong> Simulations don't do that.</p><p><strong>[59:57] Michael Levin:</strong> I like that a lot. It sort of evaporates certain problems. It inevitably raises new ones. And is that okay? Is there some sort of ratio? I've seen reviewers' comments on papers where they say, "this raises more questions than it answers." I'm like, that to me seems like that's what we're supposed to do, but maybe not. So what should be the ratio between the problems that you've evaporated and the ones that you've now unearthed?</p><p><strong>[1:00:41] Chris Fields:</strong> I suppose the new problem should be more interesting than the ones you've evaporated.</p><p><strong>[1:00:48] Olaf:</strong> You can sometimes solve a problem by switching languages. You create a new field sometimes in nice successful cases. But you can always backtrack into the other language, which is what happens when science splits, and science is very split. That's why we have hope for things like category theory and things that bridge between the fields.
But essentially, you have one cake and you eat that one or the other one. You can't eat both. You can't cash them both out at the same time until you unify. But that's how I usually see it. You can backtrack. You can always speak the other language. And other than that, we solve the other problem.</p><p><strong>[1:01:58] Carl:</strong> If it's a common language, though, the ratio that Mike was asking about would be a log-odds ratio; in Bayesian statistics and post-Popperian hypothesis testing, it would be a Bayes factor. The relative probability of these data measurements or observations conditioned upon this simulation or that simulation, where the simulations embody your hypothesis, your mechanistic explanations, your conceptions. So the art of being a good Rutherford or a good scientist is to exactly deny your intuitions, your prior beliefs, and explore other hypotheses. Then you put that hypothesis in some converged maths and make it into an observation model. And then you evaluate the likelihood of your data under that observation model relative to another one. And indeed, you may find yourself going back to the null hypothesis. If you commit to classical or frequentist statistics, failure to reject the null keeps you stuck. But if you use a more generic model selection approach based upon the Bayes factor or log-odds ratio, I think that provides a really nice space in which you can think of the scientific process as elaborating new hypothesis spaces. And then you've got the technology to evaluate how you proceed, how you take a path through the space of hypotheses. The challenge, of course, is to elaborate new hypotheses. And ultimately you come back to natural selection and evolution and the law of requisite variety and the like to elaborate those. But notice that depends upon having converged maths. So you can take two concepts or hypotheses or models and simulate data generation under these two hypotheses in a comparable way.
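Carl's log-odds ratio can be illustrated in the simplest possible case: two point hypotheses about the mean of Gaussian data, where the Bayes factor reduces to a likelihood ratio (an editor's toy example; the data and the two "simulations" are made up for illustration):

```python
import math

def log_likelihood(data, mu, sigma=1.0):
    """Log probability of the data under a Gaussian 'simulation' with mean mu."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in data)

# Toy data, nominally generated near 1.0; two competing generative models.
data = [0.9, 1.2, 0.8, 1.1, 1.0, 1.3]

# Log Bayes factor for point hypotheses = log likelihood ratio.
log_bf = log_likelihood(data, mu=1.0) - log_likelihood(data, mu=0.0)
print(f"log Bayes factor (H1: mu=1 vs H0: mu=0): {log_bf:.2f}")
# Positive log odds favour H1; this is the same machinery as a t-test's
# general linear model, read as a comparison of generative models.
```

With genuinely composite hypotheses one would integrate over parameters to obtain marginal likelihoods, which is where the Bayes factor proper, and Carl's point about needing the right generative model, comes in.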
So you can evaluate the probability of your data given these two observational concepts or hypotheses; you couldn't use category theory to do t-tests.</p><p><strong>[1:04:39] Mariana:</strong> I'm not very good with the term simulation. I really like breaking assumptions or trying to think across the borders. I think the first time I heard your framework, Mike, TAME, I thought it gave so many disciplines a fresh set of tools, even thinking tools. I've been thinking a lot about them for our research program. Because when you have a technological approach to mind everywhere and you try to find where is everywhere, where is the limit, then perhaps we may find ourselves with this opportunity to do these experiments now outside of biology. This is no easy thing to conceptualize. I feel that I've been working at the limit, or at least at my capacity, because it requires this interpretative competency to go into all these different fields and try to match the framework. It's a real mind-bender. I don't know where you are all regarding it, if you have thoughts concerning this aspect.</p><p><strong>[1:07:08] Chris Fields:</strong> Well, I think one thing that the framework requires of us is to abandon this notion that we are special in a very particular way as cognitive beings and to allow ourselves to consider all sorts of other systems as cognitive beings, and that's, in a sense, deeply deflationary. I'm not sure whether that's what you really mean by it being a mind bender, but that's what I see as the primary challenge of it.</p><p><strong>[1:08:17] Mariana:</strong> I'm not sure why I said mind bender, but now I like that I said it. I think it's more, and I think it's a good thing. I don't know how to evaluate the behavior of humans, their intentions I do not know. I find it a bit puzzling. It is easier for me to see it from Mike's perspective, the technological approach to mind everywhere: there are ultimate preferences. It's easier for me to see it in animals.
Transposing these classical conditioning techniques—behavioral changes seen in animals—into mathematics or even into electrons. If we were to classify their behavioral profile, their preferences, or their skills, this is what I've been calling a mind bender. It seems very conceptual and it has absolutely no use, but I feel that through TAME and multi-scale competence architecture, we have at least a way to transpose these conceptual ideas about agency into other kinds of spaces and test for them. What I mean by this is if you have an observation that an object X always behaves the same way, is this non-interesting or is this an extreme preference? A preference that X would not let go of. No matter what you do, X will do that. I think this raises a lot of good questions. Maybe there is something that X likes more, and we can test for that. Then suddenly X has two behaviors, and you don't have the same paradigm you had before. I'm not sure.</p><p><strong>[1:10:40] Chris Fields:</strong> I think there's another way to look at that, which is perhaps equally interesting, that if you look at X and X only has one behavior, then perhaps that is mostly reflecting on your capabilities to look. And that X may be doing all kinds of things that you can't see because you're choosing to observe it only in some particular way. And it may be that the way that you observe it is what leads you to call it X as opposed to something else. And your method of observation is inevitable: one's method of observation is what one uses to cut up the world into systems with boundaries or Markov blankets or however we want to describe them. And then that automatically restricts the measurements that we can make, as well as restricting the aspects of the system's behavior that we can regard as behavior because it gets encoded on the boundary that we have identified.</p><p><strong>[1:12:10] Mariana:</strong> I completely agree.
This poses another question, which is suppose you got a new set of goggles, scientific goggles, and you have a new framework and you see that X always chooses something in this space. Now we found out that it has a different behavior in some other space, but it behaves quite similarly. It has one preference in that space. You keep doing this. My question is, how can we state this preference? Or how can we pose things in other terms than perturbation or stress in trying to figure out what they want, what would be beneficial for them instead of trying to use them? Because it has brought us so far. But I wonder if we are to shake all assumptions and if we are to take these frameworks, I think, even in an experimental sense, it feels interesting to think, what would they want? Because then perhaps if we have a different kind of behavior, we would have more ground truth to judge such behavior. I agree with everything you said. I don't think this is a simple problem. I think it's a very interesting one.</p><p><strong>[1:13:53] Carl:</strong> Could I ask, you're using words like preference and want, which I like because I talk about prior preferences a lot, but it does anthropomorphize things. It sounds to me like you're talking about things that conform to a variational principle of least action. They have the most likely path and that path is the path they prefer. How much do they deviate from that path of least action? It makes me wonder whether the answers that you would supply, which could be articulated in terms of preferred paths and wanting to get to the end point of the path of least action, are scale dependent. I want to ask Chris: let's just take two extreme scales. Let's take the electron and the moon, both of which have lawful behaviour.
Do either of them want or have a preferred course of action or a preferred behaviour?</p><p><strong>[1:15:09] Chris Fields:</strong> We certainly model them as if they do in terms of least action principles.</p><p><strong>[1:15:19] Carl:</strong> I'm trying to get at the mind-bending thing. So why doesn't that apply to you and me then?</p><p><strong>[1:15:28] Chris Fields:</strong> I'm not sure it doesn't apply to you and me.</p><p><strong>[1:15:31] Carl:</strong> I'm trying to get at the mind-bending issue. Because you could certainly talk about the moon having a preferred trajectory or path, and likewise an electron at the quantum level. But what is special about the self-organization that Mike commits his life to trying to understand, that this Platonic series is trying to address? Why is that principle of least action not sufficient, or is it?</p><p><strong>[1:16:07] Michael Levin:</strong> I don't know about the moon, but we've been looking at it in very minimal computational systems, like the sorting algorithms that we've all talked about. What we've observed, for anybody that hasn't seen it, is basically that these are simple deterministic algorithms. Yes, they sort like they're supposed to, but it turns out they also do something else. If you look at it from a different perspective, which hadn't been done in all the years that people have been studying these things, you see something quite different. You see them doing some other stuff that is very surprising. I don't know if the right way to look at that stuff is as also goals that they're attempting to achieve via some least action thing, or whether that whole framework is limiting. Maybe that's the part that's exploration and play. These other things that are not actually goal-directed behavior. Maybe that's the more interesting volitional play aspect where the sorting is what you forced it to do, but in between, in the spaces between that, there's some stuff that the system likes to do.
I think we can start to look at some of that in these very minimal models. I don't know if that's more minimal than the moon. I'm not sure what you think, but it seems very minimal because it's deterministic. We have control over all of it. Yet there is some stuff that's happening in the spaces between the thing you actually wanted it to do, and so I'm still playing with different frameworks for this: maybe it's another set of objectives that it's optimizing. Or maybe that's the exploration part, the intrinsic motivation, which then scaled up is what we see in us and so on. Maybe that's the simplest version of what it actually looks like when you push it all the way to the left of the spectrum. Santos just wrote on the chat that the ingression of the abstract into the physical maybe also follows a principle of least action. I think that's quite reasonable. I've been thinking about those ways too. The really simplistic way I started thinking about that is that these things in that latent space are under positive pressure. Basically, you don't have to work that hard to get them to ingress. If you make an interface, there they are. They're in some strange sense pushing out. There's a baseline pressure with which they are going to get into this world. There may be a more sophisticated least action kind of thing that could be used to describe what shows up relative to what interface you've made.
In these cases least action would serve to model the path of whatever it is that ingresses, according to how we model the space, but ultimately would not tell us the content. I'm very interested in the content. What is being ingressed? Not in terms of the geometric path, perhaps not even in terms of the conflicting dynamics that may happen at ingression, because both X and Y want to ingress—who wins? More about the content. There are, of course, very good mathematical constructions to do this, and you can kind of glue them all together, and then you can look inside, and you can see. But I keep wondering: what would we do with it, with the content, for example?</p><p><strong>[1:21:20] Carl:</strong> Sorry, say it again, what would we do with the what?</p><p><strong>[1:21:23] Mariana:</strong> Suppose that we can not only model the ingression, we have the path, everything, but we also have the content.</p><p><strong>[1:21:34] Carl:</strong> I think we'll then get back to the utility of having a simulation. When you've got the simulation, you can do intervention experiments, test hypotheses, ask questions about what would happen if I did that, how it would respond in this context, what are the emergent behaviors, and deploy them in the way that we've been talking about. Not that I'm a physicist, but from a physicist's perspective, numerical analyses are the thing that get at the content and they reflect the application of a principle, where a principle is read from the perspective of the physicist as a method. So you get the right principles, the right maths that equip you with the right methods and tools. Then you apply that to ask questions about the different content under those principles. For example, why a tail and not a limb? That's just an expression of a path of least action expressed at an evolutionary scale, where you now read natural selection as a path of least action, where the Lagrangian is adaptive fitness.
In fact, adaptive fitness can be, in this instance, equated with the marginal likelihood of finding this kind of phenotype in this eco-niche, and the log marginal likelihood is just the negative self-information.</p><p><strong>[1:23:08] Mariana:</strong> I agree. I agree with you and it makes sense. If it wasn't for this research project, I would not be debating this. But now I find it very useful to debate it, because of this question of why this phenotype. I feel that our models are very good at predicting a lot of things and we can do a lot of things with them. It is true. But why is it not a preference of some pattern that we don't see, with which we can obviously interact? Because if we see it this way, then we have the possibility to interact. For example, on this question of content, if we have the content, what we could do is make a codebook and we can communicate. Perhaps this is absolutely bonkers, but in two years, maybe we have a new telecommunication system: this was nonsense, but something nice came out of it. It feels really good to be able to speak about this. What do you think of a codebook?</p><p><strong>[1:24:15] Michael Levin:</strong> I would add there's the third-person perspective, which is what Carl just nicely described, where you look at this thing and you say, I'm going to model it. I see this particular pattern coming through. I'm observing it from a third-person perspective. But there's also the other end of it: what does the world look like from the perspective of the thing that's coming through? Close to that is the issue that we are also potentially fundamentally patterns that are now manifesting through whatever interface, biological and so on.
When we talk about communication, I think there's a whole research direction here, which is different and harder than typical third-person science, which is to look at the agency of the patterns themselves, and of us as these patterns, and the communication that takes place: conventionally in third person through the physical world, but also possibly directly, laterally. This is what some mathematicians have said to me, and maybe what Darwin meant when he said mathematicians have a different sense: this idea that when they perceive mathematical truth, it doesn't dip into the physical world and come back. They're not doing experiments, they're not making observations; they have some kind of other interaction. You could imagine a direct interaction between patterns from that space. This is very weird stuff, but maybe that also is an aspect of the communication when we start looking at it from the perspective of the thing that's coming through, not just from us evaluating it as passive data, like here's the thing I see.</p><p><strong>[1:26:15] Carl:</strong> It wouldn't be weird from an evolutionary psychology perspective. You're talking about cultural niche construction, a new-wave aspect of evolution. If we have co-constructed a culture of good maths, where good means providing the simplest or least complex explanation for everything, and that can be inherited from generation to generation, then that is a perfectly consistent process with a path of least action, where action in this instance has an elevated meaning in the context of evolutionary psychology, or evo-devo constructs taken into the cultural domain, which brings us back to the convergence and the discourse and the negotiated corpus of calculus that we commonly accept as the right kind of maths.</p><p><strong>[1:27:19] Mariana:</strong> Thanks for bringing that up. It is true. I'm no authority, but there is a lot of love and devotion, and almost even sacrifice, that goes into doing maths; perhaps it is a bit different.
I would also say that one sets up experiments. And then it's a bit of pen and paper. There is a lot of "what if", and then you state the problem in a different way and you use a different example. I don't know if it's the same for everyone, but it feels like there is some proportionality between this love and this devotion and, at the same time, this plain curiosity — that it's never really about you. It comes to you. It feels like the more you put in, in terms of being alone and thinking about the problem, the more it comes.</p><p><strong>[1:28:53] Michael Levin:</strong> I'm looking forward to adding something here. It hasn't been scheduled yet, but there are a couple of conversations coming soon for this symposium with musicians. I've had a lot of outreach from people who are not scientists or mathematicians; they make music and they've been watching this stuff. They said this is what it's like when a particular song makes itself known to you. When it comes through, their creative discovery process I find very interesting to compare and contrast with what we all do. I think that'll be a nice addition. I'll try to get hold of some more artists and people in that space.</p><p><strong>[1:29:43] Mariana:</strong> I've been discussing this informally with some of them, asking exactly about this ingression. I let them talk and I try to map whatever it is that they're saying to this ingression mechanism. Sometimes I feel perhaps you can model it with priors. But there are things that are novel, like in morphogenesis, that they've never heard before. Is it on? Have you been recording it?</p><p><strong>[1:30:17] Michael Levin:</strong> It hasn't happened yet. It will be recorded. I'm trying to set it up. I thought our schedules were weird, but theirs are even more challenging, apparently. We'll make it happen.</p><p><strong>[1:30:42] Chris Fields:</strong> If I can make one more comment from a physics perspective.
We move back and forth in physics from a theoretical stance in which the world has been divided up in some particular way into systems that have certain internal processes and hence interaction capabilities, which we can describe as observational capabilities or action capabilities, or treat them as interactions. We can move from a perspective that cuts the world up in one way to a perspective that cuts the world up in some completely different way. Moving between those perspectives changes all of the interactions that one describes between the components that we've made by the cutting-up process. But it doesn't change the assumed behavior of the whole system. So whatever model is constructed has to be consistent with the underlying principle that if you erase the boundaries, take your pencil, erase all the lines you've drawn, nothing has changed. I'm trying to get to this point: to what extent does our perspective, in which we each view ourselves as bounded entities, bias our thinking about how patterns interact? Because if we take our erasers and erase these boundaries, we haven't changed the patterns. We haven't changed the overall pattern. We just changed how we describe different pieces of it interacting by creating the pieces. If we don't have those pieces, that interaction, of course, doesn't exist, but that doesn't change what's going on in the background. And so when one thinks that way, the question of what it means to think about patterns interacting takes on at least a different flavor, because we describe it in a radically different way when we think of some big box with a bunch of stuff happening inside of it versus the big box cut up into a lot of little boxes, where we think about these little entities that have to exchange information across their boundaries.</p><p><strong>[1:34:12] Carl:</strong> Chris, is that not easily resolved just by picking a particular scale?
You're talking about little boxes in big boxes, little Markov blankets within a big Markov blanket. The only difference is that there always has to be a carving of an independency structure to have a cause-effect structure to describe anything. So the difference between the big box and the little box is just the scale that you pick in order to try and articulate and model your system or explain your system. Interestingly, those little boxes are all trying to do exactly what you are doing, trying to understand the boxology of the system at hand. But my main point is, are you not just stating that there is a choice here, that you have to pick the scale at which you want to characterise or model or understand or explain your system?</p><p><strong>[1:35:16] Chris Fields:</strong> I don't think so. The notion of statistical independence that allows you to talk about things causally is itself an approximation that we get by assuming that the overall dynamics of the system is naturally multipartite. The quantum language makes it easy to say this: to the extent that the whole state is fairly entangled, the boundaries that we draw are completely artificial. They can be regarded as boundaries only for a very short time, until regarding them as boundaries no longer works. Whereas in a classical system that's not true. In a classical system, all the interactions are causal, and any boundaries you draw really are boundaries. From a quantum perspective that's no longer the case. The boundaries staying boundaries is pragmatic. That's independent of scale.</p><p><strong>[1:37:04] Carl:</strong> But let me briefly pursue that. When I use the notion of scale, I include temporal scale. What I hear you saying is that certain carvings exist at specific scales. At the quantum scale, the carvings are only valid as an approximation, possibly over very short periods of time.
But as you increase the scale, the duration over which that boundary is in play, in a classical or statistical sense, extends. Again, one could argue that if you include separation of temporal scales in that scale invariance, it's a question of picking the scale you want to work at. And in a sense, I was asking you to think about the two ends of the scales that we could consider, the moon and the electron, and why they are not relevant to biotic self-organization that is characterized by curious behavior and exploration and playfulness and breaking of detailed balance, because you don't find that either at the quantum level or at the level of heavenly bodies, for example. Just to summarize, what I'm saying is that, yes, it is certainly true that statistical independences dissipate and fluctuate, and indeed a pullback attractor is itself a random variable. But over a certain time scale, they are in play, and that time scale increases as you increase the scale of the system.</p><p><strong>[1:38:54] Chris Fields:</strong> From a global quantum theoretic viewpoint, as you increase the temporal scale, the approximation of things being statistically independent always becomes worse. The only question is how much, how quickly it becomes worse, which is dependent on energy density. So I do think that the reason we see the world as classical at large scales is because we don't know how to look at it. We don't know the right way to look at it.</p><p><strong>[1:39:59] Mariana:</strong> You can have all these agents in a time-independent manner and also a background-independent manner. So it's not only that you would choose the scale or the metric; you can literally let the cohort tell you what the metric is, or the homological class or character. So you can definitely do all that with no time and no particular conception of space besides the one you're observing. This is what you were saying.
No, Chris?</p><p><strong>[1:40:35] Chris Fields:</strong> I think that we'll eventually be forced to consider space as something that we impose as observers. I'd love to be able to make that more precise.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Discussion #1 at the Platonic Space Symposium</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Contributors to the Platonic Space Hypothesis discuss math, identity, abstract realms, attractors, simulation, mind and agency, exploring how content, creativity, scale and boundaries might fit into a unified view of reality.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/oL4G2_Oznk0" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/d7d82326/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour 40 minute discussion among contributors to the Platonic Space Hypothesis (<a href="https://thoughtforms.life/symposium-on-the-platonic-space/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life/symposium-on-the-platonic-space/</a>)</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Math, identity and realms</p><p>(16:08) Convergence, abstraction and attractors</p><p>(34:05) Attractors, stress and observers</p><p>(41:29) Realms, impossibility and simulation</p><p>(51:40) Simulation, explanation and understanding</p><p>(01:04:39) Mind everywhere and agency</p><p>(01:19:19) Content, communication and creativity</p><p>(01:30:42) Boundaries, scale and space</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a 
href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Chris Fields:</strong> I'm happy to raise the question I raised in the registration form, which was a Gödelian question. Since, as soon as we want to achieve some level of precision and definition, we're forced to use mathematics to talk about our own states and our own interactions with the world, however you want to define that, what are the consequences for our view of mathematics of this fact that we have to use mathematics to describe ourselves and our states as physical systems, our behavior as physical systems, our physical interactions with our environment? I have to use mathematics to describe my interaction with all of you, for example. How does that bias, if it does bias, our thinking about what mathematics is? What does it mean to claim that we are entities that are not only amenable to mathematical description, but for which mathematical description is required for a certain kind of discourse, the sort of discourse that we regard as science or as explanatorily productive?</p><p><strong>[02:23] Olaf:</strong> If I can extend that, Chris. I've seen a few talks address this.
How much of mathematics is internalized and used as an extension of our senses, versus something that is completely usable but external, or very low bandwidth with respect to our subjective awareness and computation? Where do people feel they are on that spectrum? I think it extends what Chris is asking.</p><p><strong>[03:20] Michael Levin:</strong> Well, I hear two different but related questions there. One is: if we take the thing that we currently identify as "this is what we think of as math," to what extent is that applicable to the things that we're interested in here, and where does it fail to capture the things we're doing when we relate to each other? So that's one question. But the thing that I keep coming back to is, do we in fact have a fixed thing where we know "this is mathematics and here are the borders of it"? And if you go beyond that, you're somewhere else; it's not math, it's something else. Or is it that our attempts to formalize interactions between agents are actually stretching math? Is it changing the definition, changing the borders of what we thought? Maybe certain things that weren't thought of as part of math then have to become part of math. So is it changing the definition? Or is this a fixed thing? Then we can argue about whether it's applicable. And if it's not applicable, then we have to pick something else, some other kind of formalism. I don't know what you guys think of that.</p><p><strong>[04:49] Chris Fields:</strong> I should say I'm neither a mathematician nor a historian of mathematics professionally. This is only an observation from the outside. Certainly, if one looks from the outside, how math has been described by humans has changed quite a bit with the introduction, for example, of non-Euclidean geometry. This was something that no one had even imagined up to then. There are now many kinds of algebras in addition to what was originally regarded as algebra. When one reformulates mathematics in set theory, it looks different.
When one reformulates mathematics in category theory, it looks different. It becomes much broader. Many things that in earlier formulations looked like distinct entities or distinct systems or organizations turn out to be notational variants. You say this thing and that thing are in fact exactly the same thing. All we've done is redescribe them in a different language. It seems to me from this outside perspective that how we think of math is constantly changing. That doesn't address the question of whether there's some fixed entity called mathematics somewhere outside of our conceptualization at all.</p><p><strong>[06:58] Michael Levin:</strong> By the way, about what you just said on notational variants: when we do notice that, hey, this thing is actually the same as this other thing, what's the meta level there? What are the tools that you have to take on board to even be able to make that judgment?</p><p><strong>[07:23] Mariana:</strong> So, I would say there are a lot of tools you can use, and there are weaker or stronger forms of proof. You can prove by contradiction, which is fun to do. But in the end, you're reasoning. You're reasoning with agents, because ultimately you're going to publish it and you're going to have a community review it. Mathematicians in principle are also the first ones to say, I made a mistake. I thought we could do it this way, but after all, I thought about it, and I found a loophole. And so it's almost like a continuous dialogue of agents reasoning. But then, of course, you have representation tools that will help you verify, and ultimately you can also have geometrization, for example, of two objects, and then you can see that they relate by some measure, and this can ultimately work in favor of the proof or against it. But I'm with Chris Fields; I think it was a good intuition. But I also ask why we are asking this question: what's the assumption?
I want to tackle the assumption, perhaps work it from a different angle. Is it related, for example, to patterns, or to our notion of patterns, because they find expression in our mathematics?</p><p><strong>[09:20] Chris Fields:</strong> If you're asking me, since I posed the question at first, one of my major obsessions is the notion of identity. In physics, that's the notion of identity over time, since we parameterize this thing with this parameter we call time. But without this notion, physics stops. There's nothing to say anymore. And indeed, lots of other things stop. Psychology stops, because we can no longer talk about memory if we can't talk about identity. And identity is a key assumption, an axiomatic assumption, of category theory: that there's an operator that we call identity. Without that notion of identity, mathematics stops. I suppose the question that underlay my question at the beginning was: what is this notion of identity? What does it mean that we try to formalize it in these various ways?</p><p><strong>[11:00] Michael Levin:</strong> This is also something that's very fundamental to what we do as developmental biologists, because as developmental biologists we really want to understand what it means that you have an embryo which is the same through some period of time, that things happen to it, and yet this is the thing that's undergoing change. This is a very fundamental question: where does it come from? How does it come to be, and so on? The only reason I bring it up at all is that it seemed to me to be a simpler domain in which to try to make the claim, which some people at least already believe, that not all facts are physical facts. If you try to do that in biology, it's really hard because it's very complex. People will say there's some mechanism you just haven't found yet. That's probably always true, because there's always more to be discovered.
But in math, other people for a really long period of time have already made the claim that there are facts that are not derived from nor changeable within physics. There's this other domain of important information that exists. That was my strategy: we already know this is the case, or at least many people believe this is the case. Now we can ask the question of whether some of these things are also relevant for biology, for behavior science, and so on, and move on from that foundation. That was my motivation for mentioning mathematics at all, because at least there we have a bedrock where some people have already bought into the idea that not all important facts are facts of physics.</p><p><strong>[13:01] Mariana:</strong> I agree with you. The question was to raise the assumptions so that we could discuss them. Chris spoke of identity; I'm really fond of this topic as well. It is interesting to think of identity all the time and everywhere, but in no particular place. These two change. I know this may seem hard in biology, to think of things that do not happen in time or that happen all the time. It's more like it happens all the time. They are the same all the time. So it's within a time range. In development you see this a lot. You would have an embryo. In principle it will grow to Stage 22; it will have 36,000 cells as an open embryo. So this happens all the time. When there's a variation, we note it down. But in principle this happens all the time. Time can also be expressed from a time-independent perspective. Sometimes this is helpful, because if there is a structured space of patterns independent of us, then our assumptions of time may be wrong, and this can hinder our understanding of their development, if they do develop. What would it be like to develop not in time? This also ties back to the notion of memory, that memory is a temporal thing. Suppose that this structured space of patterns is a space where memory is retrieved from.
All states that already happened and will happen live there. What you have is agents that loop around. This is a hypothesis. Depending on a local state, they will fetch preferential points in this structure. If you want to call this a temporal structure, there are physical models that could do this. They may not represent the standard model, but they exist and they're mathematically influential. Then we are no longer speaking of time that passes. We're speaking of agents that are atemporal. I find this notion interesting. I don't know what you think about it. Another thing: we speak a lot of physical facts, and I would like to bring to the table this notion of relational facts. Both in mathematics and in physical models, what we are asserting are relational facts.</p><p><strong>[16:08] Olaf:</strong> How much do you think mathematics and related areas, or the intersections of your own fields with mathematics, are converging or diverging on the mathematical level? And how far, if they're diverging, can they stray from the current sets of axioms and concepts? I'm saying this because I see mathematics as this historically negotiated corpus — from Euclidean geometry to algebras, analysis, and category theory. It feels inter-subjective and tends toward convergence in most cases. In my field, neuroscience, we see connectivity matrices becoming less interesting, and we have to converge toward higher-level abstractions to make something of them — the most exciting direction, to me at least. So do we tend toward divergence or convergence?</p><p><strong>[18:02] Mariana:</strong> It depends on what your parameters are. I've dwelled a lot on this notion because I feel it's very important for us and for the research program in general, this distinction between abstract and concrete. There are good proposals from logic to speak of this in terms of properties and sets, but I ask, for example, in terms of a combination.
When you abstract something, ultimately it really feels semantic, but also, if you're going to look under the hood, it feels that you're saying less to address more. Suppose you have a high feature density. This speaks to the corpus. Feature density means you can distinguish something in your data set or in your model, and it is unique. Suppose you have lots of these unique features that don't repeat. This would be very rich and you would have less redundancy, for example. Suppose the other way, where you have one feature that repeats 360,000 times. This would give you another kind of ratio. This is my question for you: have you ever thought of a spectrum between something that is abstract and something that is concrete along these lines, if you were to place something between these endpoints?</p><p><strong>[20:33] Michael Levin:</strong> Having seen the different talks and everything that everybody has been saying in the symposium, what do you guys think about how many different views we have here? Obviously, everybody's got a different perspective. I'm going to send out a table for people to comment on, and I'm trying to think of what the columns of the table should be, the primary axes that people would have different opinions on in this collection of thought. How many different views do you think we have, and what does the conceptual space look like? What are the primary axes where people agree and disagree? Just to give you an example, one basic one that comes up all the time is people say, "I agree with what you said about ABC, but I really don't like having a separate realm." This is the notion that some people like a monism where everything is in one space, and they really don't like the idea that there's a separate realm in some cases, and we can argue about what it really means to be a realm as opposed to something else, some weaker form of it.
But that's one axis, I think, where people differ: to what extent are there multiple realms? There are probably other axes. I'd be curious to know what you guys took away from all the discussions as to other fundamental dimensions.</p><p><strong>[22:36] Olaf:</strong> I have one other axis, which is I think something like physics-boundedness, being constrained by laws of physics or not, as in a fixed set of rules. And it feels like my talk is on the extreme, on both ends of this, which is interesting. But something about dependency on substrate. Let me continue those questions. Let's assume that everything that we want from the representation exists. What then?</p><p><strong>[23:32] Michael Levin:</strong> What will be the end goal? Because what we are representing is some subset of the real world. Let's assume that we have everything in there, technically create two worlds, then what? Or if we have some reduction of concept in that Platonic space, then what can we do with this? What are the best options to do with this?</p><p><strong>[24:08] Mariana:</strong> I would say map it. Depends on the assumptions. But if we have agents that can come and go, then in terms of experiments, as you guys have shown, it is possible to have a source and target map. This would be what I'm most interested in.</p><p><strong>[24:34] Yvette:</strong> For you, the fact that we succeeded in mapping everything that we would like to do in the world is good.</p><p><strong>[24:48] Michael Levin:</strong> Success of that test, of that theory.</p><p><strong>[24:54] Mariana:</strong> No, but it would give us some truth bounds for experimental means or for managing expectations for experiments. What would be yours, for example, your end goal?</p><p><strong>[25:16] Olaf:</strong> I will try to predict something that I didn't put there.</p><p><strong>[25:23] Yvette:</strong> There is object A and object B, and object C is inferred from all of those. 
I know that, but I don't know if my tool can do that.</p><p><strong>[25:39] Olaf:</strong> I'm hearing, in the metric of verifying whether it is or it's not, a subjectivity notion; but you can have it, or you can zoom out to an objective, godlike view of those agents that you, Mariana, just mentioned as well. I think that makes all the difference. If you switch to math that we haven't invented yet, or that is alien math, the part where we consider whether this is or is not part of the subjective perspective that we are holding right now is, I think, important. Maybe that's another axis for you, Mike.</p><p><strong>[26:27] Michael Levin:</strong> For me, what I'm really interested in is mapping the space, but also figuring out what degree of, I call it a free lunch, but what does it actually give you? Because there's a wide range of options. So it might just give you static patterns, here's the value of E and that's all you get. It's just there. Or it might give you dynamic behavior or algorithms or compute; what's the range of complexity that you get out of it that you didn't put in, and where? And so we're doing some things in our lab, giving bodies, whether physical or simulated, to simple mathematical objects to see what they encode. If you treat them as behavioral propensities, what do you get? But more generally, that has implications for evolution: to what extent can evolution exploit things that it pulls out of that space without having to take the time to micromanage them and evolve all the components? What do you get for free? You get some stuff, as Stuart Kauffman showed us, for free. But my suspicion is that's just the tip of the iceberg and you actually get a lot more. And ultimately in the lab, we need to be able to say, here are some anthrobots. There's never been selection to be a good anthrobot and to do all the weird things that they do. Where do their specific properties come from? Why did we not see this coming?
How could we have predicted it? What are the options? And what's the relationship between the thing you make and the stuff that then comes through? If we tweak, can we tweak certain things about the anthrobots if we want other types of patterns to come through? That's what I'm interested in: what do you actually get, and what's the relationship between the interface that you build, whether that be technological or computational or biological or some combination thereof, and what is going to come through that you have no idea about.</p><p><strong>[28:52] Mariana:</strong> One of the things that I've been thinking a lot about is exactly the work that you do, and precisely this mapping. It seems to me that this mapping, based also on your experiments, under the assumption that developmental states are pulled in, can also allow you, in very practical terms, to address, for example, regenerative procedures at a late stage, so you don't need to catch something at an early stage, because you already know how to pick up that pattern in case there is a topological defect which, in your sense, is a developmental stage. This is why I find the mapping really relevant. It might not be the best approach, but I think when you speak of free lunches, these are the free lunches, the low-hanging fruit that we could use. I've been thinking a lot about it. It seems that when you speak of perturbations or abrupt perturbations, things that were unforeseen so far, they then output these developmental novelties, like the anthrobots. I'm very puzzled about the tail onto the flank. Why not the tail? Why not keep the tail? It's so much cheaper. Why reject the tail? Why build the limb? It seems like it's the most— I know there are some changes that are more helpful or more useful. Sometimes I wonder why; it just feels like, for example, a limb is more complex in terms of edges than the tail, right? In physics, you would call it relational mechanics.
There are some proposals exactly in these terms, where whenever there's a chance, go for partition. Go for something that is different, that has— this would be the measure of complexity. So complexity is just a relational measure between you and your neighbors around you. I like it.</p><p><strong>[31:41] Michael Levin:</strong> It's a very interesting question. I don't want to dominate this thing. So please, Yvette, Juan, Brian, Carl, please chime in. There's this question of at what point the thing gives up on the standard implementation and shifts over to something else. There's a standard, you can call it an attractor, but I don't think that's all it is. There's a standard version of an embryonic body plan that it will try to hold to. If you deviate it, it will work pretty hard to get back there. If you put on an extra tail, it will try to make it a limb and things like that. But at some point, you can push it so far that it basically says, forget it. I'm now an anthrobot. I'm not going to try to make a human embryo. This is my new life. One of the ways that we're trying to address that is to look at stress markers, because we have a project looking at systemic stress as a measure of distance to your goal state. There are scenarios where the tendency to try to reduce stress is what pushes you to get back to where you need to be. So we're interested in this question of, okay, are xenobots and anthrobots stressed out about being those things? Or at some point do they adopt that as the new set point? So being a xenobot is my set point. I'm now a great xenobot, so my stress can fall. That's an experimentally detectable thing. We're doing those measurements. That's one way of doing it. In general, I think that's a great question: at what point does it shift? And I don't think it's about utility or anything like that, at least certainly not in the short term.</p><p><strong>[33:41] Mariana:</strong> So I misunderstood.</p><p><strong>[33:43] Michael Levin:</strong> I don't know. 
We don't know how a lot of these decisions are made. There's so much that these systems will tolerate and try to accommodate to still get back to what they need to be. But there are also scenarios in which they just flip to something else. Carl, please.</p><p><strong>[34:05] Carl:</strong> Some wonderful questions there. I wanted to pick up on this notion of stress and attractors, but try to frame it in response to some of the questions that have been rehearsed. Going right back to Chris's question about the nature of maths: I noticed that he used the word dialogue. There was also Mariana's notion of discourse, and then we had Olaf's negotiated corpus. I think Olaf speaks to maths just being a particular kind of co-constructed language that has an enormous amount of explanatory power in terms of accounting for things accurately with the minimum complexity. In so doing, the question about convergence touches upon, Mike, what you were asking: is there something else, or is this just another version of the same thing? And if you pursue the notion that the right kind of language and the right kind of maths is going to explain everything as simply as possible, but no simpler, then what you're looking for is exactly that convergence. I think that speaks to a lot of what people were saying in terms of your maths being a continual process of basically model building, a co-constructive model building. The notion of identity, in my world, that would be self, and it would be the self you find in self-organisation. It would be exactly the same thing you find in information theory in terms of self-information, right through to self-evidencing and the free energy principle. I mention that because stress is mathematically simply the self-information, or the implausibility of finding this kind of thing away from its attracting set. 
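</p><p>Carl's identification of stress with self-information can be made concrete in a small numerical sketch; the Gaussian density standing in for the attracting set is an illustrative assumption, not something specified in the discussion.</p>

```python
import math

# A minimal sketch of the point that "stress" is self-information
# (surprisal): the implausibility of a state under the density that
# characterises a system's attracting set. The Gaussian form is a
# made-up illustration.

def stress(x, mean=0.0, sd=1.0):
    """Surprisal -log p(x) under a Gaussian 'attracting set' density."""
    p = math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    return -math.log(p)

print(stress(0.0))   # minimal stress at the attractor's mode
print(stress(3.0))   # far from the attracting set: stress is much higher
```

<p>States near the attractor carry low surprisal; the further a state sits from the attracting set, the larger its self-information, which is the sense in which stress measures distance from the goal state.</p><p>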
So coming back to the attracting set, the notion of a pullback attractor probably has everything that you need in order to accommodate all the questions that I've heard thus far. That really commits you to a particular kind of maths. It probably wouldn't be maths, it'd be physics. But certainly, mathematically framed, you would be seeking out that convergence that people were talking about, the kind of maths that allows you to explore all of the issues we've been talking about and also provides that nidus of convergence that will enable a certain consensus. It strikes me that the notion of an attracting set has everything that you need. Think about Mariana's questions about things that recur in time, memory, persistence in time, having characteristic states. You can express all of these things in terms of attracting sets. So you need the physics of attracting sets, and that's basically the pullback attractors. Within that you can now define self. You don't need to be axiomatic and assume the existence of identity; there is a self that is constituted by the attractor, and everything else follows. To bring that to closure, the stress is a measure of the distance, or how far outside your attracting set you are, and what will happen is you'll go back to your attracting set. That was my breathless summary of the thoughts that were induced by the conversation.</p><p><strong>[38:02] Michael Levin:</strong> Brian.</p><p><strong>[38:03] Brian:</strong> I just wanted to add, from this notion of perturbations: I think one of the issues with Platonic spaces that I always try to reconcile is whether this is all observer dependent in some sense. And I think the notion of perturbations is a nice way to think about making the observer aspects as weird as possible. In the computational realm, you can do this. 
Mariana talked about this notion of experiencing time, and the idea that maybe you can actually have agents experience time in a very different way than we experience it. We already have these in the AI space; they're called diffusion models. If you've ever read the Ted Chiang stories or seen the movie Arrival, there's this kind of gap between how we personally oftentimes see time, in a linear fashion, and how diffusion models in the sequence space see time, in a completely different way where everything appears at once. You can imagine that this is something that diffusion does, where it looks at generating the entire story everywhere at once. We're now exploring this because we're interested in whether these same systems learn algorithms that we're familiar with or learn completely different algorithms in the space of, for example, games like Sudoku and things like that. So I think we should make the observations as different and as weird as possible. We're always going to be locked to some notion of observer dependence, but we can generalize that observer dependence further and further out.</p><p><strong>[39:37] Michael Levin:</strong> That's super interesting. Can you say any more about that? What are you actually doing with these diffusion models?</p><p><strong>[39:43] Brian:</strong> We are training diffusion models to play Sudoku, because Sudoku has a lot of computational advantages and it's an NP-complete problem. The algorithms that we usually use for Sudoku are very — if you play Sudoku, there's this causal structure of "let me find the least constrained square and go from the most constrained square to the least constrained ones and solve the puzzle that way." If you train a diffusion model to solve Sudoku puzzles, they solve it very differently. 
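</p><p>The classical strategy Brian contrasts with the diffusion models can be sketched as a backtracking solver that always branches on the most constrained cell; the 4x4 puzzle and helper names below are made up for illustration (standard Sudoku is 9x9 with 3x3 boxes).</p>

```python
# Most-constrained-cell backtracking for a toy 4x4 Sudoku.
# The puzzle is a made-up example, not from the discussion.

def candidates(grid, r, c, n=4, b=2):
    """Digits that can legally go in cell (r, c)."""
    used = set(grid[r])                              # same row
    used |= {row[c] for row in grid}                 # same column
    br, bc = b * (r // b), b * (c // b)              # enclosing box
    used |= {grid[i][j] for i in range(br, br + b) for j in range(bc, bc + b)}
    return [d for d in range(1, n + 1) if d not in used]

def solve(grid, n=4):
    empties = [(r, c) for r in range(n) for c in range(n) if grid[r][c] == 0]
    if not empties:
        return True
    # Most-constrained-cell heuristic: branch where the fewest digits fit.
    r, c = min(empties, key=lambda rc: len(candidates(grid, rc[0], rc[1])))
    for d in candidates(grid, r, c):
        grid[r][c] = d
        if solve(grid, n):
            return True
        grid[r][c] = 0                               # backtrack
    return False

puzzle = [[1, 0, 0, 0],
          [0, 0, 3, 0],
          [0, 4, 0, 0],
          [0, 0, 0, 2]]
solve(puzzle)
print(puzzle)
```

<p>A diffusion model, by contrast, denoises the whole grid at once rather than marching cell by cell, which is the difference in "experienced time" being described.</p><p>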
We don't quite understand how they solve it right now, but they definitely don't choose the obvious strategy of "let me march from the most constrained square to the least constrained square." It's something that almost feels random right now. We haven't done enough analysis on this yet, but the way it solves Sudoku could also lead to new algorithms in that space. The time complexity of those algorithms may be very different from the time complexity of the algorithms that we have created in that space. And because it's an NP-complete problem, there's a lot of computational complexity analysis that we could potentially do in the long term from this. So we're just training diffusion models on things where we already have an algorithm that we know works and seeing what the diffusion model discovers as a different algorithm. It's also worthwhile studying in the language space because there are diffusion models in language now. You could look at the algorithms that diffusion models acquire, and the representations they acquire, by training on sequences that we believe are generally temporally linear.</p><p><strong>[41:29] Yvette:</strong> Hi, I apologize for just jumping in. It's my first time at the meeting, so I wanted to say hi to everyone and thank you for the invitation. I will try to join regularly now that my schedule is a bit more organized with my teaching, so it doesn't interfere. Maybe I just wanted to quickly introduce myself. I'm a physicist and I work at the interface of quantum mechanics and general relativity. I'm going to be listening for a while and then see when I can chip in with something more meaningful, but maybe just an interesting connection: one of the things I'm interested in is the interface between the classical world and the quantum world. I tend to see these as two different realms. 
Maybe this is just a perspective that can be helpful, because at the end of the day you could say it's all part of one whole, so why divide it? Sometimes these divisions can be useful. The thing I'm starting to see is that the classical world emerges from the quantum world. They are different realms in the sense that they follow different rules, but they interact. That's important because in the lab we can prepare a superposition of an atom going to the left and to the right. What I'm working on is the emergence of mass from, let's say, the rules of the quantum world. One thing that has made understanding the quantum world difficult is that we try to put the rules of the classical world into the quantum world somehow. I was thinking about some things you've mentioned: what's physical, what's physics? Where do you draw the line of what's physical or not? We have a lot of arguments about that within quantum mechanics, because even within our community some people argue that quantum mechanics is physical and other colleagues say it's just information and a mathematical tool to make predictions, which I don't understand very well. Because we haven't understood quantum mechanics well yet, it's still very debatable what is physical. I think the question about different realms and what's outside the realm of the physical is definitely something that interests me very much. I'll just say hi, and hopefully I will be able to contribute later on. Thank you.</p><p><strong>[44:47] Mariana:</strong> It was lovely hearing you, Yvette.</p><p><strong>[44:50] Yvette:</strong> Thank you.</p><p><strong>[44:51] Mariana:</strong> I agree. I rarely use the term "physical" and don't find it very useful. There are a lot of mathematicians contributing immensely to mathematical biology, who end up contributing a lot to quantum theories or field theories. 
Because morphogenesis pinpoints the questions of the first mass and how mass behaves at different scales, we have all these effective theories and models. That's half the predictive power. But on the other hand, we cannot find the physicality for them. The temperatures are off. They don't reproduce anything but that behavior. It was lovely hearing you.</p><p><strong>[45:50] Yvette:</strong> Thank you very much. Yes, I'm excited about joining this discussion.</p><p><strong>[46:04] Chris Fields:</strong> Can I go way back to one of the previous topics, which had to do with what we're actually trying to achieve with both this discussion, but with science in general, mapping our space. And in a sense, one can look at it from two fairly different perspectives. One is the perspective of trying to predict what we will see, and the other is from the perspective of trying to characterize what we won't see. And if you think of mathematics as a formal system, or if you think of physics as a set of symmetries, a postulated set of symmetries, where everything else, everything beyond the statement of what the symmetries are is a relative fact that Mariana was describing earlier, then what comes out of those two ways of proceeding is a list of things that you can't do, or a list of structures that don't occur, that don't make sense. A list of no-go theorems, like Gödel's theorem, for example. And everything else is subsumed under T.H. White's law of the ants, or what Gell-Mann called the totalitarian principle that everything not explicitly forbidden is mandatory. If you can't prove that something won't exist, then you're going to bump into it somewhere. So it's a possible pattern or a possible attractor in Carl's terms. And it may be entirely unclear how to construct that attractor. But in the absence of a proof that it's impossible, you can expect it to be constructed somehow. 
But that's a very different, very negative way of describing what one is trying to do, to say that what one is trying to do is characterize the impossible. Carl, please.</p><p><strong>[48:52] Carl:</strong> That was an excellent point. I'm just thinking of one more positive take on that. You could invert the problem and just assume that we want to characterise systems that can exist and then work from there. In that sense, what you are doing is saying that if a system is characterised by these characteristics, then it can't be over here in some state space. And that's just the surprise of self-information, or stress, that we were talking about before. And that comes for free with just writing down a density over states that can be occupied. Of course, there are many more ways of being dead than alive. It's a very small attracting set that remains there. In answer to that question, in my world, the utility of having the right kind of maths means you can simulate things in silico. If you can simulate things in silico, then you can do forecasting, scenario modelling, interventions; you can test hypotheses about perturbations to the system. This becomes really practically relevant, certainly in things like computational psychiatry, where you start to simulate people, for example, and when their sense-making or decision-making goes wrong, or any self-organising system like climate, financial services, fintech. Getting the right maths and the right kind of self-organisation in play opens up a whole world of practical and important ways of intervening and testing hypotheses via simulation. The other application is you can test hypotheses about the unknowable mechanisms of the system that you're interested in, because you can't observe it directly. You can start to test hypotheses about the mechanisms because you've got the right simulation testbed. 
And then finally, if it's all working, you can put it into a diffusion model and make artifacts and you can then move into the world of autonomous vehicles, artificial intelligence research, using the right kind of maths to drive and to create artifacts that somehow embellish or endorse our ecosystem and the things that we actually play with and use. So practically, from my perspective, it's really important to get the maths right, because once you get the maths right, you can now answer all sorts of really important questions.</p><p><strong>[51:40] Michael Levin:</strong> Could I ask about what you guys think about the distinction between simulation and explanation? This comes up in biology in the following way. I'll say we need to understand why this thing is doing that. And people say, well, it's emergent. And I say, what does that mean? And they say, what it means is that if we were to simulate the micro rules that are driving it, this is what we would see. And it's a regularity that holds in the world. That's what it is. I say, but what does that mean, that it's a regularity that holds? And what they mean is if we simulate the low-level rules, then out it will come. We can show that that's the case and get a catalog of these things. I'm interested in what the relationship is between being able to simulate it and thus show that, yes, in fact, that is what happens versus understanding what's going on. I was thinking about an extremely minimal case of this, the glider in the Game of Life. What does it mean to understand the glider, say the rate at which it moves or the angle at which it moves? What somebody can very easily do is show me the four steps, the cycle of the thing moving over. We can all agree that, sure enough, with the physics of this world, that is exactly what happens. Nothing more to say about that. But have you explained it or have you simulated it? 
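</p><p>The minimal case Mike describes can be reproduced directly; this is a small sketch, assuming a toroidal grid of made-up size, that checks the glider's well-known regularity: after four generations it returns to its own shape displaced one cell diagonally.</p>

```python
# Conway's Game of Life on a small torus, used to verify the glider's
# period-4, one-cell-diagonal regularity that the discussion treats as
# the minimal case of "simulated but not yet explained".

def step(cells, n=16):
    """One Game of Life generation; live cells are a set of (row, col)."""
    nxt = set()
    for r in range(n):
        for c in range(n):
            live = sum(((r + dr) % n, (c + dc) % n) in cells
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            if live == 3 or (live == 2 and (r, c) in cells):
                nxt.add((r, c))
    return nxt

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After four generations the glider reappears shifted by (+1, +1).
shifted = {((r + 1) % 16, (c + 1) % 16) for (r, c) in glider}
print(state == shifted)  # prints True
```

<p>The simulation certifies the regularity, but, as the question suggests, nothing in the run itself says why the period is four or why the displacement is diagonal.</p><p>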
With enough simulation, it seems like we could get the answers and not understand much of anything. What do you all think? Is there a distinction? And if so, what is it?</p><p><strong>[53:18] Brian:</strong> For me, it's a very pragmatic distinction, a continuum between explanation and simulation. The idea is that explanation allows you to skip a lot of steps, to accelerate things to a point where they don't have to run in real time or don't have to run in reality. It allows you to go way faster than a simulator would if it had to simulate the entire universe to get to the same point that you want. In the cellular automaton case, I can predict gliders ahead of time without running all the intermediate steps that are required for a glider to be formed in the Game of Life.</p><p><strong>[54:00] Olaf:</strong> The explanation is a shorter simulation, if the usual route is much longer in terms of time and space and compute or energy or whatever you want. That makes sense to me. I wrote this thing about LLMs being frustrating because you put so much data and compute into it that you can't possibly think that this is highly intelligent, because you invested so much in it. What is impressive about intelligence is that you put in not much and you get a lot, which is how you phrased this, Mike, earlier. Math is impressive because you put something in and you get so much more out of it. I think there is this notion of compression and efficiency that we expect, and otherwise we are frustrated.</p><p><strong>[55:00] Carl:</strong> I just wanted to endorse that. It reminds me of arguments for universal computation, inheriting from, say, Kolmogorov complexity compression. When I hear compression, I hear maximizing the likelihood of your model. I'd mention, Mike, that you can look at simulations as a glorified statistical test. 
When we do a t-test and we're asking a question about the mechanisms or our hypothesis about the causes of some measurement or some data, we are doing a simulation. We are building a little generative model, a general linear model for the t-test, and we're simulating what could have caused the data, and then we're identifying the best hypothesis. When I talked about simulations, I wasn't talking about using maths and computers to reproduce behaviour. I was talking about the simulations used as a statement of your hypothesis about the underlying cause-effect structure that generated those data. Then you can compare different simulations, i.e. different observations or statistical models, and quantify the evidence for your hypothesis here and hypothesis there. But to do that, you've got to have the right kind of model. And you've got to have the right kind of maths that underpins that model, which is an open question and an ongoing challenge, which I'm sure we're all contending with.</p><p><strong>[56:53] Chris Fields:</strong> I'll throw in a different way of answering that question, which is to say that a simulation may reproduce some effects that you're interested in, but it doesn't force you to change your conceptualization of the effect. Does it force you to change your language? Whereas a really good explanation often forces you to change your concepts. For example, go back to 1900 and consider the question of how there can be atoms. Why are there any atoms that are stable? The going theory back then was that the difference between protons and electrons was understood. That language hadn't been invented, but it was known that there was positively charged stuff and negatively charged stuff. It was also known that if the electrons actually moved in the atom, they would radiate away their energy and the atom would collapse, so there wouldn't be any atoms around, no stable atoms. That was the question that the Rutherford model ended up answering. 
Rutherford's point was that electrons don't move, that electrons can have particular energies, but they're not moving, so they don't radiate unless they change their energy in very precise ways. And they can only change their energy by finite amounts, and in fact, only by particular finite amounts. So they'll only radiate by these particular finite amounts. That radiation is not due to motion. Rutherford's picture introduced a radically different conceptualization of what an atom was. And that's why it was a good explanation. It made sense because it got rid of an entire problem, the problem of radiating the energy away and the atom collapsing and being unstable. But to get rid of that problem required a conceptual change, an abandoning of intuitions about motion. So I would say a really good explanation is something that causes us to abandon some intuition.</p><p><strong>[59:54] Michael Levin:</strong> I like that.</p><p><strong>[59:55] Chris Fields:</strong> Simulations don't do that.</p><p><strong>[59:57] Michael Levin:</strong> I like that a lot. It sort of evaporates certain problems. It inevitably raises new ones. And is that okay? Is there some sort of ratio? I've seen reviewers' comments on papers where they say, "this raises more questions than it answers." I'm like, that to me seems like that's what we're supposed to do, but maybe not. So what should be the ratio between the problems that you've evaporated and the ones that you've now unearthed?</p><p><strong>[1:00:41] Chris Fields:</strong> I suppose the new problem should be more interesting than the ones you've evaporated.</p><p><strong>[1:00:48] Olaf:</strong> You can sometimes solve a problem by switching languages. You create a new field sometimes in nice successful cases. But you can always backtrack into the other language, which is what happens when science splits, and science is very split. That's why we have hope for things like category theory and things that bridge between the fields. 
But essentially, you have one cake and you eat that one or the other one. You can't eat both. You can't cash them both out at the same time until you unify. But that's how I usually see it. You can backtrack. You can always speak the other language. And other than that, we solve the other problem.</p><p><strong>[1:01:58] Carl:</strong> If it's a common language, though, the ratio that Mike was asking about would be a log-odds ratio; in Bayesian statistics and post-Popperian hypothesis testing, it would be a Bayes factor. The relative probability of these data, measurements or observations, conditioned upon this simulation or that simulation, where the simulations embody your hypotheses, your mechanistic explanations, your conceptions. So the art of being a good Rutherford or a good scientist is exactly to deny your intuitions, your prior beliefs, and explore other hypotheses. Then you put that hypothesis in some converged maths and make it into an observation model. And then you evaluate the likelihood of your data under that observation model relative to another one. And indeed, you may find yourself going back to the null hypothesis. If you commit to classical or frequentist statistics, failure to reject the null keeps you stuck. But if you use a more generic model selection approach based upon the Bayes factor or log-odds ratio, I think that provides a really nice space in which you can think of the scientific process as elaborating new hypothesis spaces. And then you've got the technology to evaluate how you proceed, how you take a path through the space of hypotheses. The challenge, of course, is to elaborate new hypotheses. And ultimately you come back to natural selection and evolution and the law of requisite variety and the like to elaborate those. But notice that this depends upon having converged maths, so you can take two concepts or hypotheses or models and simulate data generation under these two hypotheses in a comparable way. 
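</p><p>The Bayes factor Carl refers to can be sketched on a toy problem; the coin-flip data and both models below are made-up illustrations, not anything from the discussion.</p>

```python
import math

# Two "simulations" (generative models) of the same coin-flip data,
# scored by the evidence they assign to it. Data and priors are invented
# for illustration.

heads, n = 14, 20
tails = n - heads

# H0: a fair coin.
log_ev_h0 = n * math.log(0.5)

# H1: unknown bias with a uniform prior. The marginal likelihood is the
# Beta function B(heads+1, tails+1), computed via log-gamma for stability.
log_ev_h1 = (math.lgamma(heads + 1) + math.lgamma(tails + 1)
             - math.lgamma(n + 2))

# The log Bayes factor (a log-odds ratio) compares the two hypotheses.
log_bayes_factor = log_ev_h1 - log_ev_h0
print(log_bayes_factor)
```

<p>A positive log Bayes factor favours the biased-coin model; committing only to rejecting the null, as in frequentist testing, would discard exactly this graded comparison between hypothesis spaces.</p><p>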
So you can evaluate the probability of your data given these two observational concepts or hypotheses; you couldn't use category theory to do t-tests.</p><p><strong>[1:04:39] Mariana:</strong> I'm not very good with the term simulation. I really like breaking assumptions or trying to think across the borders. I think the first time I heard your framework, Mike, TAME, I thought it gave so many disciplines a fresh set of tools, even thinking tools. I've been thinking a lot about them for our research program. Because when you have a technological approach to mind everywhere and you try to find where is everywhere, where is the limit, then perhaps we may find ourselves with this opportunity to do these experiments now outside of biology. This is no easy thing to conceptualize. I feel that I've been working at the limit, or at least at my capacity, because it requires this interpretative competency to go into all these different fields and try to match the framework. It's a real mind-bender. I don't know where you all are regarding it, if you have thoughts concerning this aspect.</p><p><strong>[1:07:08] Chris Fields:</strong> Well, I think one thing that the framework requires of us is to abandon this notion that we are special in a very particular way as cognitive beings and to allow ourselves to consider all sorts of other systems as cognitive beings, and that's, in a sense, deeply deflationary. I'm not sure whether that's what you really mean by it being a mind bender, but that's what I see as the primary challenge of it.</p><p><strong>[1:08:17] Mariana:</strong> I'm not sure why I said mind bender, but now I like that I said it. I think it's more than that, and I think it's a good thing. I don't know how to evaluate the behavior of humans; their intentions I do not know. I find it a bit puzzling. It is easier for me to see, from Mike's perspective, that there's a technological approach to mind everywhere. There are ultimate preferences. It's easier for me to see it in animals. 
Transposing these classical conditioning techniques—behavioral changes seen in animals—into mathematics or even into electrons: if we were to classify their behavioral profile, their preferences, or their skills, this is what I've been calling a mind bender. It seems very conceptual and it has absolutely no use, but I feel that through TAME and the multi-scale competence architecture, we have at least a way to transpose these conceptual ideas about agency into other kinds of spaces and test for them. What I mean by this is: if you have an observation that an object X always behaves the same way, is this uninteresting, or is it an extreme preference, a preference X will not let go of? No matter what you do, X will do that. I think this raises a lot of good questions. Maybe there is something that X likes more, and we can test for that. Then suddenly X has two behaviors, and you don't have the same paradigm you had before. I'm not sure.</p><p><strong>[1:10:40] Chris Fields:</strong> I think there's another way to look at that, which is perhaps equally interesting: if you look at X and X only has one behavior, then perhaps that is mostly a reflection of your capabilities to look. X may be doing all kinds of things that you can't see because you're choosing to observe it only in some particular way. And it may be that the way that you observe it is what leads you to call it X as opposed to something else. And your method of observation is inevitable: one's method of observation is what one uses to cut up the world into systems with boundaries, or Markov blankets, or however we want to describe them. And then that automatically restricts the measurements that we can make, as well as restricting the aspects of the system's behavior that we can regard as behavior, because it gets encoded on the boundary that we have identified.</p><p><strong>[1:12:10] Mariana:</strong> I completely agree. 
This poses another question, which is: suppose you got a new set of goggles, scientific goggles, and you have a new framework, and you see that X always chooses something in this space. Now we found out that it has a different behavior in some other space, but it behaves quite similarly; it has one preference in that space. You keep doing this. My question is, how can we state this preference? Or how can we pose things in terms other than perturbation or stress in trying to figure out what they want, what would be beneficial for them, instead of trying to use them? Because that has brought us so far. But I wonder, if we are to shake all assumptions and if we are to take these frameworks, even as an experimental means, it feels interesting to think: what would they want? Because then, if we have a different kind of behavior, we would have more ground truth to judge such behavior. I agree with everything you said. I don't think this is a simple problem. I think it's a very interesting one.</p><p><strong>[1:13:53] Carl:</strong> Could I ask: you're using words like preference and want, which I like because I talk about prior preferences a lot, but it does anthropomorphize things. It sounds to me like you're talking about things that conform to a variational principle of least action. They have a most likely path, and that path is the path they prefer. How much do they deviate from that path of least action? It makes me wonder whether the answers that you would supply, which could be articulated in terms of preferred paths and wanting to get to the end point of the path of least action, are scale dependent. I want to ask Chris: let's just take two extreme scales. Let's take the electron and the moon, both of which have lawful behaviour. 
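</p><p>The variational language here can be illustrated numerically; this is a minimal sketch, with made-up parameters, that recovers a "preferred path" by descending the gradient of a discretised classical action for free fall.</p>

```python
# A numerical sketch of the least-action framing: discretise a 1-D
# trajectory with fixed endpoints and descend the gradient of the
# classical action S = sum(0.5*v*v - g*x)*dt (unit mass). For this
# Lagrangian the action is convex, so gradient descent reaches the
# stationary path. All parameters are illustrative.

g, T, n = 9.81, 2.0, 50
dt = T / n
x = [0.0] * (n + 1)          # initial guess: flat path, endpoints pinned at 0

for _ in range(20000):       # plain gradient descent on the discretised action
    for j in range(1, n):
        grad = (2 * x[j] - x[j - 1] - x[j + 1]) / dt - g * dt
        x[j] = x[j] - 0.01 * grad

# The path relaxes toward x(t) = (g/2) t (T - t); its peak approaches
# g * T**2 / 8 = 4.905.
print(max(x))
```

<p>The path that emerges is the familiar parabola of free fall; in this framing the moon, the electron, and the falling particle all "prefer" their stationary-action trajectories, which is exactly the question being put to Chris.</p><p>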
Do either of them want or have a preferred course of action or a preferred behaviour?</p><p><strong>[1:15:09] Chris Fields:</strong> We certainly model them as if they do, in terms of least action principles.</p><p><strong>[1:15:19] Carl:</strong> I'm trying to get at the mind-bending thing. So why doesn't that apply to you and me, then?</p><p><strong>[1:15:28] Chris Fields:</strong> I'm not sure it doesn't apply to you and me.</p><p><strong>[1:15:31] Carl:</strong> I'm trying to get at the mind-bending issue. Because you could certainly talk about the moon having a preferred trajectory or path, and likewise an electron at the quantum level. But what is special about the self-organization that Mike commits his life to trying to understand, that this Platonic series is trying to address? Why is that principle of least action not sufficient, or is it?</p><p><strong>[1:16:07] Michael Levin:</strong> I don't know about the moon, but we've been looking at it in very minimal computational systems, like the sorting algorithms that we've all talked about. What we've observed, for anybody that hasn't seen it, is basically that these are simple deterministic algorithms. Yes, they sort like they're supposed to, but it turns out they also do something else; if you look at them from a different perspective, which hadn't been done in all the years that people have been studying these things, you see something quite different. You see them doing some other stuff that is very surprising. I don't know if the right way to look at that stuff is as also goals that they're attempting to achieve via some least action thing, or whether that whole framework is limiting. Maybe that's the part that's exploration and play, these other things that are not actually goal-directed behavior. Maybe that's the more interesting volitional play aspect, where the sorting is what you forced it to do, but in between, in the spaces between that, there's some stuff that the system likes to do. 
I think we can start to look at some of that in these very minimal models. I don't know if that's more minimal than the moon. I'm not sure what you think, but it seems very minimal because it's deterministic. We have control over all of it. Yet there is some stuff happening in the spaces between the thing you actually wanted it to do, and so I'm still playing with different frameworks for this: maybe it's another set of objectives that it's optimizing. Or maybe that's the exploration part, the intrinsic motivation, which then scaled up is what we see in us and so on. Maybe that's the simplest version of what it actually looks like when you push it all the way to the left of the spectrum. Santos just wrote in the chat that the ingression of the abstract into the physical maybe also follows a principle of least action. I think that's quite reasonable. I've been thinking along those lines too. The really simplistic way I started thinking about that is that these things in that latent space are under positive pressure. Basically, you don't have to work that hard to get them to ingress. If you make an interface, there they are. They're in some strange sense pushing out. There's a baseline pressure with which they are going to get into this world. There may be a more sophisticated least action kind of thing that could be used to describe what shows up relative to what interface you've made.</p><p><strong>[1:19:19] Mariana:</strong> I've been looking into it because it's hard to discern if what we call or what we observe as a principle of least action is because of the configuration of the space or because it's the best transportation means or the most effective path. You have this notion of path, transportation means, and spatial constraints. This in itself gives us a lot of predictive power in other areas. For us, for example, tail and not limb, or limb and not tail: there's no minimization of surprise there, quite the contrary.
In these cases least action would serve to model the path of whatever it is that ingresses, according to how we model the space, but ultimately would not tell us the content. I'm very interested in the content. What is being ingressed? Not in terms of the geometric path, perhaps not even in terms of the conflicting dynamics that may happen at ingression, because both X and Y want to ingress: who wins? More the content. There are, of course, very good mathematical constructions to do this, and you can kind of glue them all together, and then you can look inside, and you can see. But I keep wondering: what would we do with it, with the content, for example?</p><p><strong>[1:21:20] Carl:</strong> Sorry, say it again, what would we do with the what?</p><p><strong>[1:21:23] Mariana:</strong> Suppose that we can not only model the ingression, we have the path, everything, but we also have the content.</p><p><strong>[1:21:34] Carl:</strong> I think we'll then get back to the utility of having a simulation. When you've got the simulation, you can do intervention experiments, test hypotheses, ask questions about what would happen if I did that, how it would respond in this context, what are the emergent behaviors, and deploy them in the way that we've been talking about. Not that I'm a physicist, but from a physicist's perspective, numerical analyses are the thing that get at the content, and they reflect the application of a principle, where a principle is read, from the perspective of the physicist, as a method. So you get the right principles, the right maths that equip you with the right methods and tools. Then you apply that to ask questions about the different content under those principles. For example, why a tail and not a limb? That's just an expression of a path of least action expressed at an evolutionary scale, where you now read natural selection as a path of least action, where the Lagrangian is adaptive fitness.
In fact, adaptive fitness can be, in this instance, equated with the marginal likelihood of finding this kind of phenotype in this eco-niche, and the log marginal likelihood is just the negative self-information.</p><p><strong>[1:23:08] Mariana:</strong> I agree. I agree with you and it makes sense. If it wasn't for this research project, I would not be debating this. But now I find it very useful to debate it, because of this question of why this phenotype. I feel that our models are very good at predicting a lot of things and we can do a lot of things with them. It is true. But why is it not a preference of some pattern that we don't see, with which we could obviously interact? Because if we see it this way, then we have the possibility to interact. For example, on this question of content, if we have the content, what we could do is build a codebook and we can communicate. Perhaps this is absolutely bonkers, but in two years, maybe we have a new telecommunication system, because this was nonsense, but something nice came out of it. It feels really good to be able to speak about this. What do you think of a codebook?</p><p><strong>[1:24:15] Michael Levin:</strong> I would add there's the third-person perspective, which is what Carl just nicely described, where you look at this thing and you say, I'm going to model it. I see this particular pattern coming through. I'm observing it from a third-person perspective. But there's also the other end of it: what does the world look like from the perspective of the thing that's coming through? Close to that is the issue that we are also potentially, fundamentally, patterns that are now manifesting through whatever interface, biological and so on.
When we talk about communication, I think there's a whole research direction here, which is different and harder than typical third-person science, which is to look at the agency of the patterns themselves, and of us as these patterns, and the communication that takes place, conventionally in third person through the physical world, but also possibly directly and laterally. This is what some mathematicians have said to me, and maybe what Darwin meant when he said mathematicians have a different sense: this idea that when they perceive mathematical truth, it doesn't dip into the physical world and come back. They're not doing experiments, they're not making observations; they have some kind of other interaction. You could imagine a direct interaction between patterns from that space. This is very weird stuff, but maybe that also is an aspect of the communication when we start looking at it from the perspective of the thing that's coming through, not just from us evaluating it as passive data, like here's the thing I see.</p><p><strong>[1:26:15] Carl:</strong> It wouldn't be weird from an evolutionary psychology perspective. You're talking about cultural niche construction, a new-wave aspect to evolution. If we have co-constructed a culture of good maths, good at providing the simplest or least complex explanation for everything, and that can be inherited from generation to generation, then that is a process perfectly consistent with a path of least action, where action in this instance has elevated meaning in the context of evolutionary psychology or evo-devo constructs taken into the cultural domain, which brings us back to the convergence and the discourse and the negotiated corpus of calculus that we commonly accept as the right kind of maths.</p><p><strong>[1:27:19] Mariana:</strong> Thanks for bringing that up. It is true. I'm no authority, but there is a lot of love and devotion, and almost even sacrifice, that goes into doing maths; perhaps it is a bit different.
I would also say that one sets up experiments. And then it's a bit of pen and paper. There is a lot of "what if", and then you state the problem in a different way and you use a different example. I don't know if it's the same for everyone, but it feels like there is some proportionality between this love and this devotion and, at the same time, this plain curiosity: that it's never really about you. It comes to you. It feels like the more you put in, in terms of being alone and thinking about the problem, the more it comes.</p><p><strong>[1:28:53] Michael Levin:</strong> I'm looking forward to adding something; it just hasn't been scheduled yet, but there are a couple of conversations coming soon for this symposium with musicians. I've had a lot of outreach from people who are not scientists or mathematicians; they make music and they've been watching this stuff. They said this is what it's like when a particular song makes itself known to you. Their creative discovery process, when it comes through, I find very interesting to compare and contrast with what we all do. I think that'll be a nice addition. I'll try to get hold of some more artists and people in that space.</p><p><strong>[1:29:43] Mariana:</strong> I've been discussing this informally with some of them to ask about this ingression: I let them talk and I try to map whatever it is that they're saying to this ingression mechanism. Sometimes I feel perhaps you can model it with priors. But there are things that are novel, like in morphogenesis, that they've never heard before. Is it on? Have you been recording it?
We move back and forth in physics from a theoretical stance in which the world has been divided up in some particular way into systems that have certain internal processes and hence interaction capabilities, which we can describe as observational capabilities or action capabilities, or treat them as interactions. We can move from a perspective that cuts the world up in one way to a perspective that cuts the world up in some completely different way. Moving between those perspectives changes all of the interactions that one describes between the components that we've made by the cutting-up process. But it doesn't change the assumed behavior of the whole system. So whatever model is constructed has to be consistent with the underlying principle that if you erase the boundaries, take your pencil, erase all the lines you've drawn, nothing has changed. I'm trying to get to this point: to what extent does our perspective, in which we each view ourselves as bounded entities, bias our thinking about how patterns interact? Given that if we take our erasers and erase these boundaries, we haven't changed the patterns. We haven't changed the overall pattern. We just changed how we describe different pieces of it interacting, by creating the pieces. If we don't have those pieces, that interaction, of course, doesn't exist, but that doesn't change what's going on in the background. And so when one thinks that way, the question of what it means to think about patterns interacting takes on at least a different flavor, because we describe it in a radically different way when we think of some big box with a bunch of stuff happening inside of it versus the big box being cut up into a lot of little boxes, where we think about these little entities that have to exchange information across their boundaries.</p><p><strong>[1:34:12] Carl:</strong> Chris, is that not easily resolved just by picking a particular scale?
You're talking about little boxes in big boxes, little Markov blankets within a big Markov blanket. The only difference is that there always has to be a carving of an independency structure to have a cause-effect structure to describe anything. So the difference between the big box and the little box is just a scale that you pick in order to try and articulate and model your system or explain your system. Interestingly, those little boxes are all trying to do exactly what you are doing, trying to understand the boxology of the system at hand. But my main point is, are you not just stating that there is a choice here, that you have to pick the scale at which you want to characterise or model or understand or explain your system?</p><p><strong>[1:35:16] Chris Fields:</strong> I don't think so. The notion of statistical independence that allows you to talk about things causally is itself an approximation that we get by assuming that the overall dynamics of the system is naturally multipartite. The quantum language makes it easy to say that, to the extent that the whole state is highly entangled, the boundaries that we draw are completely artificial. They can be regarded as boundaries only for a very short time, until regarding them as boundaries no longer works. Whereas in a classical system that's not true. In a classical system, all the interactions are causal, and any boundaries you draw really are boundaries. From a quantum perspective that's no longer the case. The boundaries staying boundaries is pragmatic. That's independent of scale.</p><p><strong>[1:37:04] Carl:</strong> But let me briefly pursue that. When I used the notion of scale, I included temporal scale. What I hear you saying is that certain carvings exist at specific scales. At the quantum scale, the carvings are only valid as an approximation, possibly over very short periods of time.
But as you increase the scale, the duration where that boundary is in play in a classical or statistical sense extends. Again, one could argue that if you include separation of temporal scales in that scale invariance, that it's a question of picking the scale you want to work at. And in a sense, I was asking you to think about the two ends of scales that we could consider, the moon and the electron, and why they are not relevant to biotic self-organization that is characterized by curious behavior and exploration and playfulness and breaking of detailed balance, because you don't find that either at the quantum level or at the level of heavenly bodies, for example. Just to summarize, what I'm saying is that, yes, it is certainly true that statistical independences dissipate and fluctuate and indeed a pullback attractor is itself a random variable. But over a certain time scale, they are in play, and that time scale increases as you increase the scale of the system.</p><p><strong>[1:38:54] Chris Fields:</strong> From a global quantum theoretic viewpoint, as you increase the temporal scale, the approximation of things being statistically independent always becomes worse. The only question is how much, how quickly it becomes worse, which is dependent on energy density. So I do think that the reason we see the world as classical at large scales is because we don't know how to look at it. We don't know the right way to look at it.</p><p><strong>[1:39:59] Mariana:</strong> You can have all these agents in a time-independent manner and also a background-independent manner. So it's not only that you would choose the scale or the metric; you can literally let the cohort tell you what the metric is or homological class or character. So you can definitely do all that with no time and no particular conception of space besides the one you're observing. This is what you were saying. 
No, Chris?</p><p><strong>[1:40:35] Chris Fields:</strong> I think that we'll eventually be forced to consider space as something that we impose as observers. I'd love to be able to make that more precise.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Conversation 1 with Mijail Serruya, Alessandro Napoli, and Wesley Clawson</title>
          <link>https://thoughtforms-life.aipodcast.ing/conversation-1-with-mijail-serruya-alessandro-napoli-and-wesley-clawson/</link>
          <description>Neuroscientists Mijail Serruya, Alessandro Napoli, and Wesley Clawson discuss brain-body-machine interfaces, from BrainGate and biohybrids to aging, memory, plasticity, and hypnosis as emerging clinical and conceptual tools.</description>
          <pubDate>Sun, 18 Jan 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 696d4f197fe50a0001b04c36 ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
<content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/jnAivV3Tjgk" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/111232b6/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour conversation with (including a short talk by) Mijail Serruya (<a href="https://research.jefferson.edu/labs/researcher/serruya-research.html?ref=thoughtforms-life.aipodcast.ing">https://research.jefferson.edu/labs/researcher/serruya-research.html</a>), Alessandro Napoli (<a href="https://www.linkedin.com/in/alessandro-napoli-8383a164/?ref=thoughtforms-life.aipodcast.ing">https://www.linkedin.com/in/alessandro-napoli-8383a164/</a>), and Wes Clawson (<a href="https://allencenter.tufts.edu/wesley-clawson-staff-scientist/?ref=thoughtforms-life.aipodcast.ing">https://allencenter.tufts.edu/wesley-clawson-staff-scientist/</a>). We talk about brain-body-machine interfaces, the clinical aspects and the deeper conceptual connections.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Introductions and backgrounds</p><p>(02:15) From BrainGate to biohybrids</p><p>(16:32) Platypus-inspired cognitive augmentation</p><p>(24:28) Model systems and plasticity</p><p>(34:52) Aging, memory, and interfaces</p><p>(49:58) Hypnosis as biointerface</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a
href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Mijail Serruya:</strong> My nickname is Misha. I'm a physician scientist, and I have about 14 slides, but I can give most of our time just to talk. I'll tell a little bit about myself, but Alessandro, why don't you briefly introduce yourself?</p><p><strong>[00:14] Alessandro Napoli:</strong> Alessandro Napoli. I'm a biomedical engineer by background. I did my PhD in neural signal processing. And I've been working in brain computer interface applications and development for medical devices for the past 15 years.</p><p><strong>[00:34] Michael Levin:</strong> Great. Yeah, Wes.</p><p><strong>[00:35] Wesley Clawson:</strong> I'm Wes, Wesley. You can call me Wes or Wesley. I'm a senior scientist in Mike's lab. I got a PhD in neuroscience where I did basic neuroscience research in systems neuroscience. 
It's a mix of weird in vivo rat studies with epilepsy and computational neuroscience. I have a background in electrical engineering and physics because I was going to do brain-computer interfaces, but then never found my way there. In Mike's lab, I built a system we call HAL that does closed-loop training with neural tissue. Instead of taking a human brain and trying to interface it with a computer, we try to grow weird substrates on microelectrode arrays and build software that lets us define interactions with them. That's the base of the work that I do here.</p><p><strong>[01:33] Michael Levin:</strong> I'm Mike Levin. My group works at the intersection of computer science, which is my original training, biology, and cognitive science. I'm fundamentally interested in diverse intelligence, extremely unconventional embodied minds and all kinds of weird substrates. We study decision making in collectives of cells during morphogenesis. We study minimal computational systems. We study weird chimeras of different kinds of biology with technology and so on. I'm interested in interfaces to novel intelligences and how different minds can interact and communicate with each other and what technologies can help that happen.</p><p><strong>[02:15] Mijail Serruya:</strong> Well, you will see, Wes, there's a lot of overlap with what you mentioned. Just briefly about me to remind you guys. Over 20 years ago, long before there was Neuralink and the like, I helped create Cyberkinetics and the first BrainGate trial of brain-computer interfaces. I still know all the CEOs of the major iBCI companies that exist and that have orders of magnitude more funding now to do what we tried to do 20 years ago. Here are some of them. I'm happy to introduce both of you to them if and when that makes sense. I've had some interesting discussions with some of the scientists at some of these companies about biohybrid interfaces. I didn't put it on here, but Science Corporation is working on biohybrid systems.
If you didn't know about it, the iBCI-CC is a collaborative community where the FDA, CMS, NIH, people with disabilities, doctors, patients, engineers, all are involved. It's an interesting organization for working together in a pre-competitive space, as the industry people call it. I'm a physician scientist. I'm a board-certified neurologist. I did my doctoral work in the lab of John Donoghue back at Brown 20 years ago. Now I work with kids and adults who have chronic pain, cognitive symptoms, motor impairments. Raphael is the name of our lab; it's named after the patron saint of healers, and that's what we're hopefully working on: healing. Those are our three main areas of focus: movement, pain relief, cognition, in terms of trying to create devices. We have an interdisciplinary team. That's the core team. We have lots of collaborators all over the place. We'd be delighted to collaborate with you guys too. We'll see what this conversation leads to. I look at this as having multiple shots on goal to try to help people, from the short term, right now in 2026, out to who knows what the future will bring. We have different kinds of devices that can mechanically move the arm, voice activation, really simple mechanical systems, electrical stimulation, and brain-computer interfaces. This is a gentleman who has electrode arrays in his brain. We're decoding the ensemble activity and using it to open and close his hand. EMG is controlling his biceps and triceps, and the brain is controlling his hand. These cables are literally plugged into his motor cortex over a large subcortical stroke. Normally his hand is totally paralyzed. That was a few years ago in the middle of the pandemic. Now we're working with Precision. They have a fully implantable system. Short term, in-between term, and then longer-term living neural interface components.
We're working with Kacy Cullen at Penn on living electrodes and living amplifiers, living antennae, living mux/demux, to basically modify the brain for better iBCI integration. The basic idea there is that you make this collagen noodle, a rigatoni, fill it with different cell populations, implant this whole noodle, and then it biologically integrates with the brain and becomes the intermediary. That's what it looks like. I've listened to some of your podcasts and read your team's articles. I'm not sure I totally know what all the terms mean, but I guess, could we use an anatomical compiler to induce a brain port?</p><p><strong>[06:58] Mijail Serruya:</strong> So these are made by taking a pipette or an acupuncture needle and positioning little blobs of things. But maybe there are other tricks using chemical baths and electric fields to actually induce things to grow the way we want. Then we can talk about not just having a brain-computer interface to talk to a device like this, but maybe biological constructs to make some construct in your abdomen, extra bonus brain blobs that could take over if someone has a disease or injury. Then there's this idea of neural computing, which begins to overlap with what Wes was talking about: taking different kinds of specimens and using them for computing, with the idea of connecting them ultimately back to a person to restore their function. We'll talk about that in just a second. The current brain-computer interfaces are a narrow relay pipe to restore sensorimotor function. It has gotten a lot of investment because some people think it will help us keep up with artificial general intelligence or some superintelligence that we have to race against, which is a whole other discussion. Obviously, that is a different goal from that of traditional medical devices, but there is some overlap. But there's an alternative idea, which is to expand the substrate of neural processing beyond the skull, adding neural real estate.
And so then the question is, what kind of processing and consciousness could that allow? Again, with the goal being to help with restoration. So again, here you have this person, you have these different kinds of implants, maybe they're purely biological, maybe they're purely synthetic, maybe they're a hybrid, and then they connect to neural tissue somewhere in the abdomen, or they have an external wireless system and then it can talk to neural tissue in a dish. What does this hybrid system look like? It's a cross between a seeing-eye dog, a digital biological twin, and a third cerebral hemisphere. Something that allows the brain to expand its function, but ultimately to have an assistive function. And this tries to reframe the way a lot of brain-computer interface language talks about this, with its heavy engineering-physics focus on number of channels, number of electrodes; this tries to think, Well, how do we do the translation? Focusing more on that rather than saying we need to up the number of channels and then we're supposed to get some magical benefit. That has some overlap with your lab's perspective of thinking of diverse intelligences and trying to talk to the systems the way they want to be talked to, and having concrete, testable ways of mapping that out. One way to think about this is the mammalian architecture, and this comes from Max Bennett's "A Brief History of Intelligence" and its five breakthroughs. Even before reading that, there are repeating quasi-crystalline modules. We have the hippocampal lamellae, corticobasal ganglia loops, Mountcastle columns, canonical microcircuits, and thalamocortical neocortical circuits. Max's point is that the big difference between a chimpanzee or bonobo and a human is that we have more of these. But their basic architecture is unchanged. We have a lot more. That raises the question of what if you added more to us?
With the rationale being that if someone has a stroke or a degenerative disease or multiple sclerosis or a brain tumor that has to be cut out, or other brain injury, and you start having lots of ports, you could actually give these back. And then what does that look like? How do we, rather than waiting millions of years, quickly converge on something that's actually useful to that person? This is a citizen science gaming platform that I mentioned in the e-mail, which may have some overlap and possible collaboration. The idea is, can we use players or AI, automatically or some combination thereof, to find optimal input-output parameters through virtual white matter, which is simply recording from one tissue and using it to trigger the other? Wes, you talked about this on one of your podcasts with Foresight. We have a system that does the same thing: virtual white matter. People have had versions of this in the past, including in implants. The systems are agnostic in terms of what the neural tissue is and where it is. Then we can ask about enactive sensorimotor transduction. By "enactive," I'm using the term from Evan Thompson. We can also use other signals, ones having to do with reinforcement and modulatory signals. I know you've had your YouTube sessions with Carl Friston, who has worked with Cortical Labs, and they've talked about reinforcement signals as tonic versus stochastic. But every group that does this has only so much time, and there's actually a huge parameter space. So the question is, can we create a platform where we can actually look at a lot of things?
Or you could use the things you work on, gene regulatory networks, bulvox. As long as you have a compiler or a transduction of input-output, they could have a shared platform. The platform imposes weird semantic interpretations that could distract us. Even if you are making some distortions and reducing the dimensionality, the complexity of things, this leverages our primary ability to understand these eco-like situations. Then you can look at the comparative advantage on different tasks in this world: speed, scaling, abstraction, memory duration. To try to understand your language, see if by training them, you can expand their cognitive light cone. We can compare letting this thing run by itself versus human-guided optimization. We have this vast parameter space, and we only have so much time with a human or an animal who's implanted. We have the amplitude, the duration of pulses, the shape of the pulses, frequencies, bursts, phasic or tonic. If we have sensory signals, how do we map something in the virtual sensory world or, if it's a robot, the physical world into the system? How do we use reinforcement signals? This is an example of a potential toy system. This could be an organoid or an aggregate of neurons. Let's say this aggregate has dopamine or serotonin or acetylcholine. You could play around with the time-varying characteristics and stimulation for long-term depression or plasticity as purely electrical, or you could stimulate and drive dopamine. You could have other signals that have to do with an error bit or a multiplexing channel select, and we have a grad student working on that right now. This is from a conversation with Konrad Kording over at Penn. If I'm stimulating an input here and we call that X1, and we stimulate here, X2, and then we record from Y, we have a simple linear equation. Maybe I should stimulate dopamine to help solve that regression and see what this tissue can do compared to a pure in silico system.
Another parameter to think about is the endogenous activity. Going back to Sherrington more than 100 years ago, if he stimulated the exact same spot of the monkey's brain with the exact same parameters, he got totally opposite responses. He said it is a property of the cerebral cortex to have this reversibility and behavioral contingency. Neural living systems have incredible hysteresis and spontaneous endogenous activity such that identical stimuli have totally different effects. This is a huge space to study. My impression: Alessandro, in his doctoral work, did some stimulation of in vitro human neuronal networks, but overall the history of this is that people take square-wave biphasic pulses, which in general look nothing like how the body talks to itself, and then take a tiny spot in this giant parameter space and study the heck out of that. Often they cook the tissue. We know that if you use these stimuli in a person, you can induce reliable percepts. There's a lot to be learned about how to optimally talk to this system. This is one virtual environment we're working on. We're trying to find something that will be more engaging than some of the other citizen science games out there, like Foldit, working with a colleague who's a game designer. The idea is that by having people play the game, we're exploring those parameter combinations and discovering various functional mappings for the specimens. This is a crowdsourcing approach to complement, Wes, what you're doing to develop a bioelectric programming language. Rather than just two labs in the United States, you can work with FinalSpark in Switzerland or others, having just a handful of specimens, or partner with philanthropy or big pharma that has gymnasiums full of thousands or tens of thousands of organoids, and search this space and combine them in different coalitions. I could either take a pause there or keep going and run through it, finish it up. 
Any preference?</p><p><strong>[16:27] Michael Levin:</strong> You can run through it. Keep going. I'm just taking notes first to talk.</p><p><strong>[16:32] Mijail Serruya:</strong> The platypus inspiration. In dealing with these folks who have these severe impairments, and wondering how we're going to reconstruct their brain in a way that's not just this single sensorimotor port, we're trying to see if there are principles we can learn from other creatures. Here we have platypus electroreception: a modified input to the trigeminal nerve, the same nerve engaged when a child strokes their mom's face. It's a novel sensory organ on the outside, mucus-lined electroreceptor structures in the bill that plug into a trigeminal nerve that has the same architecture as it does in most other mammals. They can leverage these thalamocortical loops to process those electric fields like they would anything else. They learn to interpret those signals as they're swimming around the muddy rivers of Australia and grabbing shrimp that generate those fields. The question is, can we engineer transduction organs for abstract data and have dedicated thalamocortical modules that we could grow or model in silico, and then integrate them with equivalents of basal ganglia, cerebellum, hippocampus, such that we can start having direct perception of abstract items? What any scientist does is take the limited bandwidth of language that we have and then build our own models internally, which is very different from perceiving it directly. So the question is, is there an advantage, if you interface with the human brain, to creating an actual transduction organ and then adding extra cortex? You have eyes and ears going to different thalamic nuclei, and you could take abstract data of different flavors and then create its own virtual relay nucleus of the thalamus and its own virtual cortex. You could connect it to the brain and aim for areas of the human brain that are already multi-sensory areas. 
I chose the angular gyrus and the pulvinar as natural candidates. In one of your talks you mentioned the liver doesn't live in 3D time. It lives in a chemical concentration space with gazillions of dimensions like pH and cytochrome concentrations. We are biased by our primate evolution. We can try to cognitively bulldoze our way through intellectual tricks to remap things, but could we transduce it directly? Could we feel the liver state space, like proprioception? We're already doing this, but what if we made it literal? We could ask the question, what's the difference between outside in and inside out? We don't know. In certain cases where a person has damaged their original system, could there be an advantage of giving this back to someone? The idea is that you have a human brain, it could entrain these cultures, the tissue learns human-like organizational patterns, maybe inherits its state space. Meaning, if this neural tissue in vitro is reciprocally connected to an in vivo brain continuously, maybe it can arrive at a neural state space that it wouldn't arrive at otherwise. It could function as extracranial cognitive support, help with memory, perception, and motor function, and it gets connected for a therapeutic need. That also leads to an interesting question: if this person is now interlinked with this and you lose the Internet connection, could these things still preserve some interesting abilities and become an autonomous computational agent and carry forward learning intelligence? Clinical neural twins to test therapies, edge computing, human-compatible priors. Anthrobots. These are questions for discussion, maybe a future discussion. As a physician scientist, I'm always on the lookout for things that will help my patients who are living with pretty severe impairments. Anthrobots, I know you've shown some interesting rehabilitative abilities.</p><p><strong>[20:29] Mijail Serruya:</strong> Could they help rebuild a cortical stroke? 
I'd be curious how you guys think about Max's breakthroughs of taxis, reinforcement, simulating, mentalizing, and speaking, given your broad, diverse view of intelligences. He was very clear that he was aiming for human, but these principles seem to be the same cognitive principles you guys talk about. An ionoceutical iBCI, to complement what Kacy and Alsan and I work on. This is a picture of a blob of myocytes, a blob of motor neurons implanted into motor cortex. Maybe there are ways to learn from what your lab has developed to induce a transduction port, a biological USB. As a physician scientist, I spent my life doing first-in-human pilot trials and trying to look at novel therapies for patients. Whatever is available in the near term, I'm happy to work with that and find the right use cases. Another short-term advantage: patients with stereo-EEG have spatially widespread stimulation patterns. We can interweave those with sensory inputs. We can play their electrodes like a chord, and we can coordinate with precise sensory behavioral context and adapt that based on this activity. An idea is that we can link this participant's brain to auxiliary neural tissue. A lot of the brain-computer interface companies are focused on paralysis, so they're going into primary motor cortex. It turns out many of them are still interested in these higher cognitive functions, but implanting electrodes in someone's brain is a big feat, and how do you get there? We can piggyback on the fact that we are already doing this all the time. There are many hospitals all throughout the planet, certainly in Boston and Philadelphia, where there are patients with refractory epilepsy that can get up to 15 of these depth electrodes, each with anywhere from 8 to 10 contacts or more. They have 30 electrodes, 100-plus contacts throughout their brain, a broad coverage. They have this for several weeks. 
Most people have treated this as whack-a-mole, where they'll stimulate one spot in this almost neurophrenological correlational map. If you look at it as an overall system, your lab's view of looking at systems in a broader, dynamical way could open up an opportunity here, which no one's really leveraged. The idea here is that you could then have this whole brain talk to different kinds of neural tissue, including what's in the dish. It also leads to a potentially interesting idea of what I would call a bow-tie kiss. Mike, this is referring to your idea of the latent space as the middle of the bow tie. You have encoding and decoding. In the human brain, you have the fusiform gyrus and you have a gradient of perceptual to conceptual, fine to coarse. You could wonder how you might take transformer middle layers and interface them directly there, so a person could feel what the middle layers and residuals are doing in a transformer model, and leverage what evolution has given us in terms of these mappings. Just an idea. Can we reset the brain-body setpoints of pain and depression? Can we build a biohybrid construct that itself is a compiler? Can we combine AI-driven and human crowdsourced science to help search the neural interface state space? Can we reciprocally connect a person's brain to additional neural tissue to expand their light cone beyond their skull? With your framework and our platform, could we make a biodigital Rosetta stone? That's what I got. Let's see if I can stop sharing.</p><p><strong>[24:28] Michael Levin:</strong> Alessandro, did you want to say anything before we get rolling?</p><p><strong>[24:39] Alessandro Napoli:</strong> No, I think it's a lot. 
Let's see if we want to talk about some details or you guys have questions or if there is anything you guys want to talk about.</p><p><strong>[24:50] Michael Levin:</strong> I've got a bunch of stuff, but maybe Wes, you go first.</p><p><strong>[24:58] Wesley Clawson:</strong> I know one of the things you're going to say, and it may be better to piggyback off of it. So I'll let you start and then I'll jump in and interrupt you.</p><p><strong>[25:06] Michael Levin:</strong> A bunch of thoughts that I had. First of all, this notion of the noodle that you showed at the beginning. One of the things that I've been kicking around is this idea that you need an impedance match between the tools that you're using and the thing that you want to connect to. I like what you said very much, because having a kind of agential interface to the system, having the front end of your thing be a living agent, I think that's very smart. We're doing some of that with Xenobots, for example, using them as the front end of a sensor array that will connect to ecosystems. I do think that we could try building some things like that, as you said, maybe using some of the morphogenetic control handles that we have to try and grow something appropriate. Anthrobots are a great component. We know almost nothing yet about what they can do. We've only seen one or two things. We have a project with David Kaplan to see. He's got these in vitro brain constructs, right? These pucks that grow all kinds of stuff. So we're gonna find out what the Anthrobots can repair in that context. But if you have in vivo models, this is great. What do you have access to? Rats? Is there an immune issue, or can we put human Anthrobots in there and have them live?</p><p><strong>[26:41] Mijail Serruya:</strong> We work with Kacy Cullen, who runs a neural tissue engineering lab at Penn. Mostly rat models. They have some other species. He does some; mostly these are just wild-type creatures. 
In addition, at Jefferson there are groups that have animal models of certain diseases. Stroke, MCA occlusion, where they have experience putting in stem cells and things like that. Between the two sites at Penn and Jefferson, there are many models where people are putting in modified cells and tissues.</p><p><strong>[27:24] Michael Levin:</strong> There's no issue with immune rejection, or how do they handle it?</p><p><strong>[27:32] Mijail Serruya:</strong> Again, it depends what it is. Yes, that can be an issue. My recollection is there's a lot of different protocols; I don't know them all. I think most of them are mildly immunosuppressed, but in other cases there are autologous cells, so it's not too much of an issue. But the proof is in the pudding: the fact is the animals are healthy and integrate these. I'd have to dig into the details of the various protocols. That is not trivial. I don't know the details of that.</p><p><strong>[28:13] Michael Levin:</strong> It'd be cool to talk about that and see what's possible. Another thing that I want to talk about is what you mentioned about the third hemisphere. Expansion, cognitive augmentation. That's very interesting to me. I have two points there. One is we do a lot with the amphibian model. We showed years ago that if you make tadpoles where there are only eyes on their tail, they can see perfectly well, they can get around, we can train them in behavioral assays for vision. The eyes do not connect to the brain; at best they make an optic nerve, sometimes they connect to the spinal cord, sometimes the gut, sometimes nowhere at all, and they can still do it. The plasticity is incredible because you don't need new rounds of mutation and selection; it works out of the box, it just works. I'm curious, what do you think is the prospect? Let's say, developmentally, in models that we have in the lab, even the warm-blooded chicken, we can make a third hemisphere, no problem. What do you think is the level of plasticity? 
In the human? My amateur knowledge of this is that, for example, in people who lose sight, sound processing takes over some of that real estate. If we already have in our system the ability to take over new real estate when you get it, what do you think? Can we actually just keep adding, or is that going to run out? What's your prediction?</p><p><strong>[29:48] Mijail Serruya:</strong> I think that if we can nail the interface, the brain will use it; it will just happen spontaneously. I think the key is that it's not going to be like you just flick a switch. Creatures, all living things, have to have some experience in the environment, in their own body, for things to match up and learn the covariance statistics of the world. But I think given that opportunity, they'll get there. They'll use it. If it's available, the brain will exploit it.</p><p><strong>[30:23] Wesley Clawson:</strong> If I could jump in to make sure that I've got your stance, especially when it comes to humans and patients, which is awesome: given access to appropriate real estate, the brain will take it over and you can use it in a meaningful way. It'll take some training. The big issue is the communication, the interface between the human and the object, the third thing. And so your thought on this is that by developing some crowdsourced citizen science platform, you will partially solve the interface problem, because the crowd will, as a group, search it better than individual labs. Does that sum up the...</p><p><strong>[31:12] Mijail Serruya:</strong> That's definitely one of the ideas.</p><p><strong>[31:15] Wesley Clawson:</strong> I wanted to make sure I was connecting all the pieces together before.</p><p><strong>[31:20] Alessandro Napoli:</strong> We also have an AI-based approach towards that solution. In addition to the crowdsourcing, where people are trying to do specific things and people are different, they could be trying to do their own thing. 
You are exploring a lot of different outcomes. We are also thinking about doing this using statistical computational approaches, in a more mathematical and rigorous way.</p><p><strong>[31:53] Wesley Clawson:</strong> I'll write down thoughts, but I'll make it clear.</p><p><strong>[31:58] Michael Levin:</strong> I think that's great, and in complement to the regenerative kinds of things that we're working on. Cranking up a regenerative response in an area, which is basically a rise in plasticity anyway, in the search for a new path to a functional system, and then putting in something, whether the interface is some kind of ectopic corpus callosum or something else, I think could be a very effective interface for this kind of thing. We can do a ton of stuff in amphibians, but I think the big question is going to be what happens in mammals. And so working up a mammalian model in which we can try some of these things would be really valuable.</p><p><strong>[32:59] Mijail Serruya:</strong> There's all kinds of experiments in humans happening already just by virtue of clinical care. Vagus nerve stimulation is being used for stroke recovery. I don't know if anyone can fully understand what's going on, but the basic idea is that they're driving the afferent signals into the vagus nerve, which is making various brainstem nuclei go bananas, including the locus coeruleus. It's not just about making the connection; it's not just about having the brain talk to this third hemisphere. It's about having the environmental stimuli all line up so that it ends up exploring that space and using it. The occupational therapist has to hit the buzzer at the exact right time. The locus coeruleus barfs out its norepinephrine and it turns on the basal forebrain acetylcholine at just the right time as the person's trying to do this, so that these synapses floating around in the penumbra of the stroke suddenly get strengthened. That's the idea. 
If the skilled occupational therapist doesn't have those settings right, then this goes nowhere. It undermines what they've taught in medical school for years, which is that if you're a year past the stroke, you're not getting anything better, but suddenly we're unmasking all these things with what's essentially an electrical ice bucket challenge. What's going on? That's a very blunt instrument. You're hitting the brainstem and the whole reticular activating system goes bananas. Presumably, if you had something much more precise and specific, and actually added new tissue that the person doesn't even have to begin with, then that could open up a lot of doors.</p><p><strong>[34:52] Michael Levin:</strong> I think there are opportunities for that. We're working on that and hoping to crank up neural proliferation and plasticity in general, morphological plasticity. I've been polling relevant people on this. If that were solved to the point where you could either reverse or prevent aging, do you think that the kinds of changes that human cognition undergoes in old age are a software problem or a hardware problem? In other words, if the brain were young again, would we all become mentally flexible or would we still be grouchy?</p><p><strong>[35:47] Mijail Serruya:</strong> That's right, we lose fluid intelligence and we gain crystallized intelligence as we age.</p><p><strong>[35:55] Michael Levin:</strong> Yeah.</p><p><strong>[35:56] Mijail Serruya:</strong> Right, so would we become these ossified dictionaries? I certainly read some of the literature where it makes it sound like if you replenish your microglia or take the cerebrospinal fluid of your great-grandchild and infuse it into your own brain, you've found the fountain of youth. But on the other hand, even with all the turnover of proteins and cells, components—synapses as a gestalt—are preserving memories from 100 years ago, right? People can remember things vividly. 
That question has come up in some surprising situations, venture capitalists getting concerned about this. Meaning, are we going to be in a situation where we have bodies that are essentially 18-year-olds but with demented brains, such that we can't leverage the joys of those situations? I think one solution is to add new neural real estate and actually leverage the way that the brain does rejuvenate itself and communicates to itself and broadcasts information to itself, and see if we can leverage that. I suppose if one were really cynical, you could say that as the brain is degenerating, if you put a bonus brain in your abdomen or something, you could transfer things over as one is degenerating. Again, this already happens naturally in someone who has a degenerative disease or someone who has a stroke. The plasticity of the brain will automatically leverage whatever it has. And then you seem to hit some phase transition, the straw that breaks the camel's back, where suddenly the person looks like they have a massive collapse. I see that all the time clinically, where someone will have Alzheimer's or a vascular dementia, and they seem like they're doing okay, and then suddenly they crump. You realize that they've been running a marathon of trying to compensate with all the other residual circuits. I don't know if normal aging would match that, because humans can only live so long. I think if there's more neural real estate, the brain will use it, as long as it can be calibrated to the rest so that it can be useful, as long as it's behaviorally useful.</p><p><strong>[38:23] Michael Levin:</strong> In the amphibian systems that we have, we can add brains as much as you want. We can certainly put in extra brains or induce extra brains, create extra, anywhere you want. We have another model system for memory movement, which is in planaria. In planaria, you can train them. 
Then you chop off their heads, which includes the centralized brain, and the tail will sit there and not do anything until it grows a new brain. It grows a new brain, and it still retains its memories. The memory moves somehow: wherever it's stored, and we don't know where that is, it's got to get imprinted onto the new brain as the new brain develops. So we know the information can move around. I don't know how far beyond planaria that goes, but my suspicion is that it's universal. We just haven't figured out how to activate it yet in these other systems. I think there's a lot of very fundamental work to do in planaria and in Xenopus and things like that. But ultimately it would be really cool to have a mammalian system.</p><p><strong>[39:29] Mijail Serruya:</strong> Yeah.</p><p><strong>[39:33] Michael Levin:</strong> What do you say, Wes?</p><p><strong>[39:37] Wesley Clawson:</strong> A lot of things. Some of the amphibian stuff I'm far from, but I whiffle waffle with some of this, because I understand the desire to figure out the best way to — this happens a lot; inevitably, it seems like you deal with them as well. Venture capital people show up and they say, "What's the best way to do this thing, and then we can sell it?" One of the big things is how best to make a BCI function. It's always about this interface. How do we read out the thing? Especially with the two of you, we hemmed and hawed about how everything is so plastic. If it's there, you'll use it. I'm not sure if the interface matters. You just have to make sure it's not destructive. If you reject an organ because of an immune response, then for sure it's not useful. It might not matter how the things communicate in the beginning. If the communication channel is fixed — let's say you put in a third brain or some external thing — it will use it in some meaningful way. What's interesting is studying the phase transition from when it goes from two things to one thing. 
If you haven't learned to couple with it, you're two things. Eventually it can couple. Even giving AI and everyone a game where they can participate, you still won't solve the parameter space, because you're going to have degenerate solutions. They'd all do different things. What's more interesting is seeing if you have two systems that you would like to interact, and they have some form of agency — and most of the ones we're discussing do — letting them control the communication channel. That's an interesting path forward that I would be keen to work on. I'm not clever enough to understand the brain, but the brain seems to understand itself quite nicely. It works all the time, and I never have to think about it. If you want to induce a new behavior from an external perspective, think of training an animal: you train people all the time, you train a dog all the time, and have no understanding of how it works. You could claim that, over a long period of time, we roughly crowdsourced the best way to do it by pairing it with food. The dog also trained us; the two systems that were separate worked together to find the best communication scheme. There wasn't an external person searching a parameter space of how best to interact with the dog. They just let the two things work together. The dog only has so much agency. I only have so much agency, but we managed to work it out. One really interesting take on BCIs for me: I've always wanted to do translational stuff. I just never really had the opportunity. I found myself now in the basic science realm where hopefully someone clever can make it more translational. Because I work in tissue culture, a lot of my work — the things that I'm interested in on a five-year plan — is not necessarily how best to engineer something, but to engineer a setup where they can engineer themselves. 
I realize I've been talking for a long time.</p><p><strong>[43:23] Mijail Serruya:</strong> No, no.</p><p><strong>[43:24] Wesley Clawson:</strong> That's the take I always have: I'm not smart enough to do this. You all seem very brilliant, but I'm lazy and not very smart. If I could engineer a system where the other things could do the work for me, that would be cool.</p><p><strong>[43:41] Mijail Serruya:</strong> I think, as physicians, we always want to lean into the brain's and the body's ability to heal, and this idea of making it good enough rather than optimal is definitely right on. Cochlear implants are essentially like playing a piano with boxing gloves, and yet they work. If you put in a cochlear implant, even the early single-channel ones, these kids could understand speech and do things that seem impossible; they can get enough data out of this impoverished signal to pull it off. But I think the key is that, just like humans and dogs have been training each other, occupational and physical therapists and other kinds of rehab have been working with humans with strokes and other things for a hundred years, and they've only gotten so far. The question is, once we have these little ports, what can we do with them to really unlock those abilities? Part of my concern is that, except for very constrained use cases, it'll be hard. It's one thing to say, "I'm going to put in a plug and now I've got a cursor." Now I have a person who has a complex aphasia. Lots of parts of their brain are damaged. They've already done a hundred years' worth of speech therapy and other therapies. I know how to do hypnosis, and I use it, with the rationale that I'm going to go as far as I possibly can without drilling a hole in the head. Once you hit that barrier, you need something new. 
The question is, once you have that, what do you do with it, and how do you quickly converge on something that's useful to the person?</p><p><strong>[45:33] Alessandro Napoli:</strong> Can I jump in for a sec? Wes, what you said is very, very interesting. We didn't get into the details of the AI-based stuff, on what to do and what we want to do it for. But one of the main reasons is this: if you can grab these two systems, have them talk to each other, and let them figure something out, maybe you're augmenting or restoring something that you didn't even think about before the experiment, because now that's where they're converging. "I really want to do this thing. I really want to talk about this thing. Let's do this thing." That's great. That's definitely on our radar. The main issue now is, if you do this in biology, it's fine. Those specimens kind of talk the same language already. You just have to make sure they grow together, they make those connections, and they're going to talk to each other. Then you can study a lot of things. The issue here that we're seeing already is when you want to interface any kind of computer with the biology: the problem is they don't speak the same language. We can look at all of these spikes and we can interpret them, which is fine, but how are you going to send anything back in? Are you going to send back in spikes? Or are you going to send back pulses? What kind of pulses? How many pulses? Where, when, how? And are these pulses disrupting the natural physiology of the tissue more than talking to it? They're basically shooting at it. As part of those experiments, we are looking for that communication interface. How do I talk to this thing in the first place? Once you have that map, you can leverage the map, because ideally you can say, "These are all the things that I can talk to you about. These are all of the tricks that this dog can learn." 
Let's see if the dog actually learns the tricks, and then there's going to be another dog that might learn different tricks, with a different mapping.</p><p><strong>[47:38] Mijail Serruya:</strong> Or it's not even a dog. It could be a person with stereo-EEG who can subjectively tell us.</p><p><strong>[47:42] Alessandro Napoli:</strong> That's always nice; that's the bummer with dogs. The point is the same: you have to figure out first what the specimens can do and understand and talk about. And then you can put two together at that point, three together, four together. You can have a whole brain and the specimens. You can have amphibians and the specimens. At that point, the sky's the limit.</p><p><strong>[48:07] Wesley Clawson:</strong> Yeah.</p><p><strong>[48:08] Alessandro Napoli:</strong> Because you can make those connections in a meaningful way. They are talking the same language. They're leveraging the correct communication channels.</p><p><strong>[48:16] Wesley Clawson:</strong> It's a really interesting problem. It seems like we're on the same path. And the nice thing about the work that Mike does, half the reason I came, is there are so many different systems. It comes down to agential engineering in general. Whether it's an amphibian, a patient, a dog, something synthetic, or a computer that has a mild amount of agency, when you want to engineer something that has an agential component, all the rules change, and that's the hard piece. 
I definitely get what you're saying, and it's not an easy problem.</p><p><strong>[49:00] Mijail Serruya:</strong> But I think, yeah, go ahead.</p><p><strong>[49:02] Michael Levin:</strong> No, please, you finish.</p><p><strong>[49:03] Mijail Serruya:</strong> When we have humans who have these implants, one of the things that's always tempted me as a physician scientist is that you can sometimes do things faster with a human, because you can query them from both inside out and outside in. The question is, can you leverage that to make a flywheel, to accelerate these things and converge faster? That way, if you can have in vitro things complemented with an in vivo interface, the person is also giving you some insight.</p><p><strong>[49:42] Wesley Clawson:</strong> That's incredible. We just don't have access to humans. At the moment, in our lab I have access to tissues.</p><p><strong>[49:52] Mijail Serruya:</strong> If you work with us, you have access to humans.</p><p><strong>[49:54] Wesley Clawson:</strong> Yeah, there you go, there you go, there you go.</p><p><strong>[49:58] Michael Levin:</strong> I wanna follow up on the hypnosis business.</p><p><strong>[50:03] Wesley Clawson:</strong> I knew you would; he said it.</p><p><strong>[50:07] Michael Levin:</strong> Do you know the hypnodermatology story? It was Albert Mason. I'm really interested in being able to communicate across levels. He was able to, and people still do, give prompts to skin cells. I'm interested in that aspect of it. I want to hear what you have to say about that. The part that's underdeveloped, at least as far as I can see—I haven't seen anything on this—is the opposite: using that interface to get information out of the system. It's one thing to say, okay, I want you to have more of this cell type in your skin, but it's something else to be able to say, how's your Wnt pathway doing? How is the inflammation? How is the pH of your basal lamina? 
Because you talk about hypnosis, what are the prospects of using it to get actionable information out of tissues? Use the linguistic interface to pull physiological information out of the body.</p><p><strong>[51:17] Mijail Serruya:</strong> That's wild. I definitely haven't thought of it quite that way. I use clinical hypnosis as a clinical tool to help my patients and myself and whoever wants to learn. I find it incredibly powerful and useful day after day in clinic, teaching people self-hypnosis to help with pain relief and sleep and executive function and all kinds of things. I'm familiar with the work on the dermatology stuff and I certainly read the literature on profound changes in cell counts and cytokine cocktails. I haven't looked at that again. In my MD/PhD hat, my clinical hat uses it as another tool, another arrow in the quiver. Could one design an experiment to really dig at that? Absolutely. I could think about how to do that, especially if you combine it with things like biofeedback. The body is all interconnected. If I'm modulating it here, eventually information can be transmitted, and we can see that. Historically, neurology and psychiatry were one field, and so hypnosis was both a treatment as well as a diagnostic. Let me see what is rapidly reversible through these different suggestions and ideo-motor communication. By seeing what is and isn't reversible, it helps you map out what's actually happening in this patient. But in terms of doing what you're talking about, speaking to these multiple levels and steering them somewhere, I haven't really thought about it. I'm a follower of Milton Erickson, who himself is operating at this wizard-like unconscious level and not really thinking about it; the body's just going to deploy the instructions, however mysteriously it does. Our job is just to communicate the high-level instruction, and how it unfolds is, I don't know.</p><p><strong>[53:20] Michael Levin:</strong> Who's this? 
'Cause this sounds very on the nose for stuff, say again who that is.</p><p><strong>[53:26] Mijail Serruya:</strong> Oh, Milton Erickson, the father of American hypnosis.</p><p><strong>[53:29] Michael Levin:</strong> Because what you just said, I need to read up on this because this is exactly my shtick about regenerative medicine. When we induce an eye or a limb or something else, we, at least in the cases where we can do it, we've learned to give a very minimal prompt. I have no idea how to build an eye or a leg — hundreds of thousands of gene expression changes need to happen, stem cells, all that stuff gets taken care of. I'm able to say, build an eye here. And to the extent that it's convincing, and it's not always convincing, you have to get the cells to take up the set point. But once they do, they handle the downstream molecular biology. We don't need to know it. We don't need to worry about it. So I really think there's a deep parallel here. That's our whole thing, right? Showing the symmetries between the morphogenetic intelligence and the behavioral intelligence. That kind of multi-scale communication thing where you can talk to the system at the highest level, and then all of that gets transduced down to make the chemistry dance to it is hugely powerful.</p><p><strong>[54:36] Mijail Serruya:</strong> To me, it's very linked to the brain computer interfaces in the sense of, I wanna push someone's brain as far as I possibly can. My brain to their brain, a Vulcan mind meld without me having to actually drill a hole in their head. But then as a physician scientist, I still hit a wall. I do things that surprise my colleagues who were, "What?" Dentists and OB gyns used to do hypnosis all the time in the 50s. Yes, they did. It works. And here's the science. But then at some point you hit a wall and you're, "All right, this person's ALS is still progressing. This person's aphasia is still a problem." Now, I need, my language is no longer enough. 
I need some other prompt to communicate to this tissue. But I think they're very interwoven. I don't think they're separate. I think, and I say this all the time at the BCI meetings, it's not enough just to stuff the thing in the brain. You have to now also be talking to the person and training them in many ways. Otherwise, it's useless. It has to be embodied and contextualized. And then you can leverage whatever that person actually has left and it will unfold. So that combination.</p><p><strong>[55:49] Michael Levin:</strong> Do we know that you need more than talking? For example, if in hypnodermatology you can talk to the system to get changes in skin cells, have you or anybody else tried doing an implant in a human and then using hypnosis on top of that to grow some connections? I don't see why, if you can talk a new cell type into the skin as in that original study with the kid with ichthyosis, we couldn't get better integration of implants and prosthetics if we knew what we were doing. Has anybody tried that?</p><p><strong>[56:36] Mijail Serruya:</strong> I don't know. I'm willing to try it.</p><p><strong>[56:38] Michael Levin:</strong> That might be a thing to try. I really want to see what the body's good at, which is transducing super abstract, high-level mental goals down into the chemistry of voluntary motion. If we can do that to muscle or skin cells, why the hell not? Let's try and improve the interface.</p><p><strong>[56:58] Wesley Clawson:</strong> Because I think it's also interesting, Mike, we've talked about this before, when can a system tell that it's being read from and observed? Does the behavior change? It would be very cool to—I'm not super familiar with hypnosis. I just have a note to Google Milton Erickson, but if you could say, outside of me talking to you, are you being observed in any way? 
'Cause for example, if you put an EEG cap on them, blood pressure is different, but rather than saying, yeah, I feel pressure on my arm, could they unconsciously know what you're looking at? If you had a glucose meter in them, would they know what you're looking at and how? And that would be really interesting with these implants, because they might say, "Oh yeah, implant, you're looking at my brain activity," but I know sometimes internally you'll represent it in a different way. They might respond with something quite interesting, some weird abstract answer that might help understand it. So I think hypnosis and BCI have a potentially very cool line of investigation there.</p><p><strong>[58:06] Mijail Serruya:</strong> I'm happy to try. I have tons of patients who are in need, and we have clinical scenarios where people already have wires in their head. It's a matter of articulating the question properly and then knowing what we want to measure and look for.</p><p><strong>[58:17] Michael Levin:</strong> Let's design something because you've got both sides. It's not that common to find somebody with both sides of that equation covered, which it sounds like you have. And I think we have some formalisms from the morphogenesis side that might be useful.</p><p><strong>[58:32] Mijail Serruya:</strong> Okay.</p><p><strong>[58:34] Michael Levin:</strong> Yeah.</p><p><strong>Mijail Serruya:</strong> You might find that very interesting.</p><p><strong>[58:36] Michael Levin:</strong> I think so. Yeah.</p><p><strong>[58:39] Mijail Serruya:</strong> That's a hypnotic suggestion.</p><p><strong>[58:41] Wesley Clawson:</strong> Yeah.</p><p><strong>[58:42] Michael Levin:</strong> I've already absorbed it. It's already that well.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Conversation 1 with Mijail Serruya, Alessandro Napoli, and Wesley Clawson</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Neuroscientists Mijail Serruya, Alessandro Napoli, and Wesley Clawson discuss brain-body-machine interfaces, from BrainGate and biohybrids to aging, memory, plasticity, and hypnosis as emerging clinical and conceptual tools.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/jnAivV3Tjgk" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/111232b6/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a ~1 hour conversation with (including a short talk by) Mijail Serruya (<a href="https://research.jefferson.edu/labs/researcher/serruya-research.html?ref=thoughtforms-life.aipodcast.ing">https://research.jefferson.edu/labs/researcher/serruya-research.html</a>), Alessandro Napoli (<a href="https://www.linkedin.com/in/alessandro-napoli-8383a164/?ref=thoughtforms-life.aipodcast.ing">https://www.linkedin.com/in/alessandro-napoli-8383a164/</a>), and Wes Clawson (<a href="https://allencenter.tufts.edu/wesley-clawson-staff-scientist/?ref=thoughtforms-life.aipodcast.ing">https://allencenter.tufts.edu/wesley-clawson-staff-scientist/</a>). We talk about brain-body-machine interfaces, the clinical aspects and the deeper conceptual connections.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Introductions and backgrounds</p><p>(02:15) From BrainGate to biohybrids</p><p>(16:32) Platypus-inspired cognitive augmentation</p><p>(24:28) Model systems and plasticity</p><p>(34:52) Aging, memory, and interfaces</p><p>(49:58) Hypnosis as biointerface</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a 
href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Mijail Serruya:</strong> My nickname is Misha. I'm a physician scientist, and I have about 14 slides, but I can give most of our time just to talk. I'll tell a little bit about myself, but Alessandro, why don't you briefly introduce yourself?</p><p><strong>[00:14] Alessandro Napoli:</strong> Alessandro Napoli. I'm a biomedical engineer by background. I did my PhD in neural signal processing. And I've been working in brain computer interface applications and development for medical devices for the past 15 years.</p><p><strong>[00:34] Michael Levin:</strong> Great. Yeah, Wes.</p><p><strong>[00:35] Wesley Clawson:</strong> I'm Wes, Wesley. You can call me Wes or Wesley. I'm a senior scientist in Mike's lab. I got a PhD in neuroscience where I did basic neuroscience research in systems neuroscience. 
It's a mix of weird in vivo rat studies with epilepsy and computational neuroscience. I have a background in electrical engineering and physics because I was going to do brain-computer interfaces, but then never found my way there. In Mike's lab, I built a system we call HAL that does closed-loop training with neural tissue. Instead of taking a human brain and trying to interface it with a computer, we try to grow weird substrates on microelectrode arrays and build software that lets us define interactions with them. That's the base of the work that I do here.</p><p><strong>[01:33] Michael Levin:</strong> I'm Mike Levin. My group works at the intersection of computer science, which is my original training, biology, and cognitive science. I'm fundamentally interested in diverse intelligence, extremely unconventional embodied minds and all kinds of weird substrates. We study decision making and collectives of cells during morphogenesis. We study minimal computational systems. We study weird chimeras of different kinds of biology with technology and so on. I'm interested in interfaces to novel intelligences and how different minds can interact and communicate with each other and what technologies can help that happen.</p><p><strong>[02:15] Mijail Serruya:</strong> Well, you will see, Wes, there's a lot of overlap with what you mentioned. Just briefly about me to remind you guys. Over 20 years ago, long before there was Neuralink and things, I helped create Cyberkinetics and the first BrainGate trial in brain-computer interfaces. I still know all the CEOs of the major IBCI companies that exist and that have orders of magnitude more funding now to do what we tried to do 20 years ago. Here are some of them. I'm happy to introduce both of you to them if and when that makes sense. I've had some interesting discussions with some of the scientists at some of these companies about biohybrid interfaces. I didn't put on here: Science Corporation is working on biohybrid systems. 
If you didn't know about it, the IBCICC is a collaborative community where the FDA, CMS, NIH, people with disabilities, doctors, patients, engineers, all are involved. It's an interesting organization to work together on a pre-competitive space, as the industry people call it. I'm a physician scientist. I'm a board-certified neurologist. I did my doctoral work in the lab of John Donoghue back at Brown 20 years ago. Now I work with kids and adults who have chronic pain, cognitive symptoms, motor impairments. Our lab is named Raphael, after the patron saint of healers, and that's what we're hopefully working on: healing. Those are our three main areas of focus: movement, pain relief, cognition, in terms of trying to create devices. We have an interdisciplinary team. That's the core team. They have lots of collaborators all over the place. We'd be delighted to collaborate with you guys too. We'll see what this conversation leads to. I look at this as we have multiple shots on goal of trying to help people from the short term, right now in 2026, out to who knows what the future will bring. We have different kinds of devices that can mechanically move the arm, voice activation, really simple mechanical systems, electrical stimulation, and brain-computer interfaces. This is a gentleman who has electrode arrays in his brain. We're decoding the ensemble activity and using it to open and close his hand. EMT is controlling his biceps, triceps, and the brain is controlling his hand. These cables are literally plugged into his motor cortex over a large subcortical stroke. Normally his hand is totally paralyzed. That was a few years ago in the middle of the pandemic. Now we're working with Precision. They have a fully implantable system. Short term, in-between term, and then longer-term living neural interface components. 
Working with Kacy Cullen at Penn on living electrodes and living amplifiers, living antennae, living mux, demux, to basically modify the brain for better IBCI integration. The basic idea there is that you make this collagen noodle, a rigatoni, fill it with different cell populations, implant this whole noodle, then it biologically integrates to the brain and becomes the intermediary. That's what it looks like. I've listened to some of your podcasts and read your team's articles. I'm not sure I totally know what all the terms mean, but I guess, could we use an anatomical compiler to induce a brain port?</p><p><strong>[06:58] Mijail Serruya:</strong> So these are used by taking a pipette or an acupuncture needle and positioning little blobs of things. But maybe there are other tricks using chemical baths and electric fields to actually induce things to grow the way we want. Then we can talk about not just having a brain-computer interface to talk to a device like this, but maybe biological constructs to make some construct in your abdomen, extra bonus brain blobs that could take over if someone has a disease or injury. Then this idea of neural computing, which begins to overlap with what Wes was talking about, taking different kinds of specimens and using them for computing with the idea of connecting them ultimately back to a person to restore their function. We'll talk about that in just a second. The current brain-computer interfaces are a narrow relay pipe to restore sensory motor function. It has gotten a lot of investment because some people think it will help us keep up with artificial general intelligence or some superintelligence that we have to race against, which is a whole other discussion. Obviously, that implies a different goal than traditional medical devices. But there is some overlap. But there's an alternative idea, which is to expand the substrate of neural processing beyond the skull, adding neural real estate. 
And so then the question is, what kind of processing and consciousness could that allow? Again, with the goal being to help with restoration. So again, here you have this person, you have these different kinds of implants, maybe they're purely biological, maybe they're purely synthetic, maybe they're a hybrid, and then they connect to neural tissue somewhere in the abdomen, or they have an external wireless system and then it can talk to neural tissue in a dish. What does this hybrid system look like? It's a cross between a seeing eye dog, a digital biological twin, and a third cerebral hemisphere. Something that allows the brain to expand its function, but ultimately to have an assistive function. And this tries to reframe the way that a lot of brain-computer interface language talks about it, with a lot of engineering physics focus on number of channels, number of electrodes, and this tries to think, Well, how do we do the translation? Focusing more on that rather than saying we need to up the number of channels, then we're supposed to get some magical benefit. That has some overlap with your lab's perspective of thinking of diverse intelligences and trying to talk to the systems the way they want to be talked to and having concrete, testable ways of mapping that out. One way to think about this is the mammalian architecture, and this goes from Max Bennett's "A Brief History of Intelligence" and its five breakthroughs. Even before reading that, there are repeating quasi-crystalline modules. We have the hippocampal lamellae, corticobasal ganglia loops, Mountcastle columns, canonical microcircuits, and thalamocortical neocortical circuits. Max's point is that the big difference between a chimpanzee or bonobo and a human is that we have more of these. But their basic architecture is unchanged. We have a lot more. That raises the question of what if you added more to us? 
With the rationale being that if someone has a stroke or a degenerative disease or multiple sclerosis or a brain tumor that has to be cut out, or other brain injury, and you start having lots of ports, you could actually give these back. And then what does that look like? How do we, rather than waiting millions of years, quickly converge on something that's actually useful to that person? This is a citizen science gaming platform that I mentioned in the e-mail, which may have some overlap and possible collaboration. The idea is, can we use players or AI, automatically or some combination thereof, to find optimal input-output parameters through virtual white matter, which is simply recording from one tissue and using it to trigger the other? Wes, you talked about this on one of your podcasts with Foresight. We have a system that does the same thing: virtual white matter. People have had versions of this in the past, including in implants. The systems are agnostic in terms of what the neural tissue is and where it is. Then we can ask about enactive sensorimotor transduction. By "enactive," I'm using the term from Evan Thompson. We can also use other signals, ones having to do with reinforcement and modulatory signals. I know you've had your YouTube sessions with Carl Friston, who has worked with Cortical Labs, and they've talked about reinforcement signals as tonic versus stochastic. But every group that does this has only so much time, and there's actually a huge parameter space. So the question is, can we create a platform where we can actually look at a lot of things?</p><p><strong>[11:42] Mijail Serruya:</strong> Can we actually use the platform as a common embodiment framework so you can compare different things? You could have different kinds of specimens, you could get rid of the biological specimens, put computational systems. It could be a perceptron or an expert system or a simple microcontroller. 
Or you could use the things you work on, gene regulatory networks, bulvox. As long as you have a compiler or a transduction of input-output, they could have a shared platform. The platform imposes weird semantic interpretations that could distract us. Even if you are making some distortions and reducing the dimensionality, the complexity of things, this leverages our primary ability to understand these eco-like situations. Then you can look at the comparative advantage on different tasks in this world: speed, scaling, abstraction, memory duration. To try to understand your language, see if by training them, you can expand their cognitive light cone. We can compare letting this thing run by itself versus human-guided optimization. We have this vast parameter space, and we only have so much time with a human or an animal who's implanted. We have the amplitude, the duration of pulses, the shape of the pulses, frequencies, bursts—phasic or tonic. If we have sensory signals, how do we map something in the virtual sensory world or, if it's a robot, the physical world into the system? How do we use reinforcement signals? This is an example of a potential toy system. This could be an organoid or an aggregate of neurons. Let's say this aggregate has dopamine or serotonin or acetylcholine. You could play around with the time-varying characteristics and stimulation for long-term depression or plasticity as purely electrical, or you could stimulate and drive dopamine. You could have other signals that have to do with an error bit or a multiplexing channel select, and we have a grad student working on that right now. This is from a conversation with Konrad Kording over at Penn. If I'm stimulating an input here and we call that X1 and we stimulate here X2 and then we record from Y, we have this simple linear equation. Maybe I should stimulate dopamine to help solve that regression and see what this tissue can do compared to a pure in silico system. 
Another parameter to think about is the endogenous activity. Going back to Sherrington more than 100 years ago, if he stimulated the exact same spot of the monkey's brain with the exact same parameters, he got totally opposite responses. He said this is an office of the cerebral cortex, to have this reversibility and behavioral contingency. Living neural systems have incredible hysteresis and spontaneous endogenous activity such that identical stimuli have totally different effects. This is a huge space to study. My impression: Alessandro, in his doctoral work, did some stimulation of in vitro human neuronal networks, but overall the history of this is that people take square wave biphasic pulses, which in general look nothing like how the body talks to itself, and then take a tiny spot in this giant parameter space and study the heck out of that. Often they cook the tissue. We know that if you use these stimuli in a person, you can induce reliable percepts. There's a lot to be learned about how to optimally talk to this system. This is one virtual environment we're working on. We're trying to find something that will be more engaging than some of the other citizen science games out there, like AlphaFold, working with a colleague who's a game designer. The idea is that by having people play the game, we're exploring those parameter combinations and discovering various functional mappings for the specimens. This is a crowdsourcing approach to complement, Wes, what you're doing to develop a bioelectric programming language. Rather than just two labs in the United States, you can work with FinalSpark in Switzerland or others, having just a handful of specimens, or partner with philanthropy or big pharma that has gymnasiums full of thousands or tens of thousands of organoids, and search this space and combine them in different coalitions. I could either take a pause there or keep going and run through it, finish it up. 
Any preference?</p><p><strong>[16:27] Michael Levin:</strong> You can run through it. Keep going. I'm just taking notes first to talk.</p><p><strong>[16:32] Mijail Serruya:</strong> The platypus inspiration. In dealing with these folks who have these severe impairments and wondering how we're going to reconstruct their brain in a way that's not just this single sensory motor port, trying to see if there's principles we can learn from other creatures. Here we have platypus electroreception, modified input to the trigeminal nerve, like we have the child stroking their mom's face. It's a novel sensory organ on the outside, the electroreception mucus-lined column nerve structures in their bill that plug into the trigeminal nerve that has the same architecture as it does in most other mammals. They can leverage these thalamocortical loops to process those electric fields like they would anything else. They learn to interpret those signals as they're swimming around the muddy rivers of Australia and grabbing shrimp that generate those fields. The question is, can we engineer transduction organs for abstract data and have dedicated thalamocortical modules that we could grow or model in silico, and then integrate it with equivalence of basal ganglia, cerebellum, hippocampus, such that we can start having direct perception of abstract items? What any scientist does is we take this bandwidth amount of language that we have, and then we build our own models internally, which is very different than seeing it directly. So the question is, is there an advantage if you interface with the human brain to create an actual transduction organ and then add extra cortex? You have eyes, ears to different thalamic nuclei, and you could take abstract data of different flavors and then create its own virtual relay nucleus of the thalamus and its own virtual cortex. You could connect it to the brain and aim for areas of the human brain that are already multi-sensory areas. 
I chose the angular gyrus and the pulvinar as natural targets. In one of your talks you mentioned the liver doesn't live in 3D space. It lives in a chemical concentration space with gazillions of dimensions like pH and cytochrome concentrations. We are biased by our primate evolution. We can try to cognitively bulldoze our way through intellectual tricks to remap things, but could we transduce it directly? Could we feel the liver state space like proprioception? We're already doing this, but what if we made it literal? We could ask the question, what's the difference between outside in and inside out? We don't know. In certain cases where a person has damaged their original system, could there be an advantage of giving this back to someone? The idea is that you have a human brain, it could entrain these cultures, the tissue learns human-like organizational patterns, maybe inherits state space. Meaning if this neural tissue in vitro is reciprocally connected to an in vivo brain continuously, maybe it can arrive at a neural state space that it wouldn't arrive at otherwise. It could function as extracranial cognitive support, help with memory, perception, and motor, and it gets connected for a therapeutic need. That also leads to an interesting question: if this person is now interlinked with this, if you lose the Internet connection, could these things still preserve some interesting abilities and become an autonomous computational agent and carry forward learning intelligence? Clinical neural twins to test therapies, edge computing, human compatible priors. Anthrobots. These are questions for discussion, maybe a future discussion. As a physician scientist, I'm always on the lookout for things that will help my patients who are living with pretty severe impairments. Anthrobots, I know you've shown some interesting rehabilitative abilities.</p><p><strong>[20:29] Mijail Serruya:</strong> Could they help rebuild a cortical stroke? 
I'd be curious how you guys think about Max's breakthroughs of taxes, reinforcement, simulating, mentalizing, speaking to your broad, diverse view of intelligences. He was very clear that he was aiming for human, but these principles seem to be the same cognitive principles you guys talk about. An ionoceutical IBCI, to complement what Kacy and Alsan and I work on. This is a picture of a blob of myocytes, a blob of motor neurons implanted into motor cortex. Maybe there are ways to learn from what your lab has developed to induce a transduction port, a biological USB. As a physician scientist, I spent my life doing first-in-human pilot trials and trying to look at novel therapies for patients. Whatever is available in the near term, I'm happy to work with that and find the right use cases. Another short-term advantage: patients with stereo-EEG have spatially widespread stimulation patterns. We can interweave those with sensory inputs. We can play their electrodes like a chord, and we can coordinate with precise sensory behavioral context and adapt that based on this activity. An idea is that we can link this participant's brain to auxiliary neural tissue. A lot of the brain-computer interface companies are focused on paralysis, so they're going into primary motor cortex. It turns out many of them are still interested in these higher cognitive functions, but implanting electrodes in someone's brain is a big feat, and how do you get there? We can piggyback on the fact that we are already doing this all the time. There are many hospitals all throughout the planet, certainly in Boston and Philadelphia, where there are patients with refractory epilepsy that can get up to 15 of these depth electrodes, each with anywhere from 8 to 10 contacts or more. They have 30 electrodes, 100-plus contacts throughout their brain, a broad coverage. They have this for several weeks. 
Most people have treated this as a whack-a-mole where they'll stimulate one spot in this almost neurophrenology correlational map. If you look at it as an overall system, your lab's view of looking at systems from a broader, dynamical way could open up opportunity here, which no one's really leveraged. The idea here is that you could then have this whole brain talk to different kinds of neural tissue, including what's in the dish. It also leads to a potentially interesting idea of what I would call bow tie kiss. Mike, this is referring to your idea of the latent space as the middle of the bow tie. You have encoding and decoding. In the human brain, you have the fusiform gyrus and you have a gradient of perceptual to conceptual, fine to coarse. You could wonder how you might take transformer middle layers and interface them directly there so a person could feel what the middle layers and residuals are doing in a transformer model and leverage what evolution has given us in terms of these mappings. Just an idea. Can we reset the brain-body setpoints of pain-depression? Can we build a biohybrid construct that itself is a compiler? Can we combine AI-driven and human crowdsourced science to help the neural interface states? Can we reciprocally connect a person's brain to additional neural tissue to expand their light cone beyond their skull? Your framework, our platform, could we make a biodigital Rosetta stone? That's what I got. Let's see if I can stop sharing.</p><p><strong>[24:28] Michael Levin:</strong> Alessandro, did you want to say anything before we get rolling?</p><p><strong>[24:39] Alessandro Napoli:</strong> No, I think it's a lot. 
Let's see if we want to talk about some details or you guys have questions or if there is anything you guys want to talk about.</p><p><strong>[24:50] Michael Levin:</strong> I've got a bunch of stuff, but maybe Wes, you go first.</p><p><strong>[24:58] Wesley Clawson:</strong> I know one of the things you're going to say, and it may be better to piggyback off of it. So I'll let you start and then I'll jump in and interrupt you.</p><p><strong>[25:06] Michael Levin:</strong> A bunch of thoughts that I had. First of all, this notion of the noodle that you showed at the beginning. One of the things that I've been kicking around is this idea that you need an impedance match between the tools that you're using and the thing that you want to connect to. I like what you said very much because having a kind of agential interface to the system, having the front end of your thing be a living agent, I think that's very smart. We're doing some of that with Xenobots, for example, and using them as the front end of a sensor array that will connect to ecosystems. I do think that we could try building some things like that, as you said, maybe using some of the morphogenetic control handles that we have to try and grow something appropriate. Anthrobots are a great component. We still know almost nothing about what they can do. We've only seen one or two things. We have a project with David Kaplan to see. He's got these in vitro brain constructs, right? These pucks that grow all kinds of stuff. So we're gonna find out what the Anthrobots can repair in that context. But if you have in vivo models, this is great. What do you have access to? Rats? Is there an immune issue, or can we put human Anthrobots in there and have them live?</p><p><strong>[26:41] Mijail Serruya:</strong> We work with Casey Cullen, who runs a neural tissue engineering lab at Penn. Mostly rat models. They have some other species. He does some; mostly these are just wild-type creatures. 
In addition, at Jefferson there are groups that have animal models of certain diseases. Stroke, MCA occlusion, where they have experience putting in stem cells and things like that. Between the two sites at Penn and Jefferson, there are many models where people are putting in modified cells, tissues.</p><p><strong>[27:24] Michael Levin:</strong> There's no issue with immune rejection, or how do they handle it?</p><p><strong>[27:32] Mijail Serruya:</strong> Again, it depends on what it is. Yes, that can be an issue. My recollection is there are a lot of different protocols; I don't know them all. I think most of them are mildly immunosuppressed, but in other cases there are autologous cells, so it's not too much of an issue. But the proof is in the pudding: the fact is the animals are healthy and integrate these. I'd have to dig into the details of the various protocols. That is not trivial. I don't know the details of that.</p><p><strong>[28:13] Michael Levin:</strong> It'd be cool to talk about that and see what's possible. Another thing that I want to talk about is what you mentioned about the third hemisphere. Expansion, cognitive augmentation. That's very interesting to me. I have two points there. One is we do a lot with the amphibian model. We showed years ago that if you make tadpoles whose only eyes are on their tail, they can see perfectly well, they can get around, we can train them in behavioral assays for vision. The eyes do not connect to the brain; at best they make an optic nerve, sometimes they connect to the spinal cord, sometimes the gut, sometimes nowhere at all, and they can still do it. The plasticity is incredible because you don't need new rounds of mutation and selection; it works out of the box, it just works. I'm curious, what do you think is the prospect? Let's say developmentally in models that we have in the lab, even the warm-blooded chicken, we can make a third hemisphere—no problem. What do you think is the level of plasticity in the lab? 
And in the human? My amateur knowledge of this is that, for example, in people who lose sight, sound processing takes over some of that real estate. If we already have in our system the ability to take over new real estate when you get it, what do you think? Can we actually just keep adding, or is that going to run out? What's your prediction?</p><p><strong>[29:48] Mijail Serruya:</strong> I think that if we can nail the interface, the brain will use it; it will just happen spontaneously. I think the key is that it's not going to be like you just flick a switch. It will—creatures, all living things—have to have some experience in the environment in their own body for things to map up and learn the covariance statistics of the world. But I think given that opportunity, they'll get there. They'll use it. If it's available, the brain will exploit it.</p><p><strong>[30:23] Wesley Clawson:</strong> If I could jump in to make sure that I've got your stance, especially when it comes to humans and patients, which is awesome: given access to appropriate real estate, the brain will take it over and use it in a meaningful way. It'll take some training. The big issue is the communication—the interface between human and the object—and the third thing. And so, your thought on this is that by developing some crowdsourced citizen science platform, that will partially solve the interface problem because they will, as a group, search it better than individual labs. Does that sum up the...</p><p><strong>[31:12] Mijail Serruya:</strong> That's definitely one of the ideas.</p><p><strong>[31:15] Wesley Clawson:</strong> I wanted to make sure I was connecting all the pieces together before.</p><p><strong>[31:20] Alessandro Napoli:</strong> We also have an AI-based approach towards that solution. In addition to the crowdsourcing, where people are trying to do specific things and people are different, they could be trying to do their own thing. 
You are exploring a lot of different outcomes. We are also thinking about doing this using statistical computational approaches in a more mathematical and rigorous way.</p><p><strong>[31:53] Wesley Clawson:</strong> I'll write down thoughts, but I'll make it clear.</p><p><strong>[31:58] Michael Levin:</strong> I think that's great. And it complements the regenerative kinds of things that we're working on. I think cranking up a regenerative response in an area, which is basically a rise in plasticity in the search for a new path to a functional system, and then putting something in, whether the interface is some kind of ectopic corpus callosum or something else, could be a very effective interface for this kind of thing. We can do a ton of stuff in amphibians, but I think the big question is going to be what happens in mammals. And so working up a mammalian model in which we can try some of these things would be really valuable.</p><p><strong>[32:59] Mijail Serruya:</strong> There are all kinds of experiments in humans happening already just by virtue of clinical care. Vagus nerve stimulation is being used for stroke recovery. I don't know if anyone can fully understand what's going on, but the basic idea is that they're driving the afferent signals into the vagus nerve, which is making various brainstem nuclei go bananas, including the locus coeruleus. It's not just about making the connection; it's not just about having the brain talk to this third hemisphere. It's about having the environmental stimuli all line up so that it ends up exploring that space and using it, so the occupational therapist has to hit the buzzer at the exact right time. The locus coeruleus barfs out its norepinephrine and it turns on the basal forebrain acetylcholine at just the right time as the person's trying to do this, so that these synapses floating around in the penumbra of the stroke suddenly get strengthened. That's the idea. 
If the skilled occupational therapist doesn't have those settings right, then this goes nowhere. It undermines what they've taught in medical school for years, which is that if you're a year past the stroke, you're not getting anything better, but suddenly we're unmasking all these things with what's essentially an electrical ice bucket challenge. What's going on? That's a very blunt instrument. You're hitting the brainstem and the whole reticular activating system goes bananas. Presumably, if you had something much more precise and specific, and actually added new tissue that the person doesn't even have to begin with, then that could open up a lot of doors.</p><p><strong>[34:52] Michael Levin:</strong> I think there are opportunities for that. We're working on that and hoping to crank up neural proliferation and plasticity in general, morphological plasticity. I've been polling relevant people on this. If that were solved to the point where you could either reverse or prevent aging, do you think that the kind of changes that human cognition has in old age are a software problem or a hardware problem? In other words, if the brain was young and reversed, would we all become mentally flexible or would we still be grouchy?</p><p><strong>[35:47] Mijail Serruya:</strong> That's right, we lose fluid intelligence and we gain crystallized intelligence as we age.</p><p><strong>[35:55] Michael Levin:</strong> Yeah.</p><p><strong>[35:56] Mijail Serruya:</strong> Right, so would we become these ossified dictionaries? I certainly read some of the literature where it makes it sound like if you replenish your microglia or take the cerebrospinal fluid of your great-grandchild and infuse it into your own brain, you're going to become the fountain of youth. But on the other hand, even with all the turnover of proteins and cells, components—synapses as a gestalt—are preserving memories from 100 years ago, right? People can remember things vividly. 
That question has come up in some surprising situations, with venture capitalists getting concerned about this: are we going to be in a situation where we have bodies that are essentially 18-year-olds but demented brains, so that we can't enjoy those situations? I think one solution is to add new neural real estate and actually leverage the way that the brain does rejuvenate itself and communicates to itself and broadcasts information to itself, and see if we can leverage that. I suppose if one were really cynical, you could say that if you put a bonus brain in your abdomen or something, you could transfer things over as the original brain degenerates. Again, this already happens naturally in someone who has a degenerative disease or someone who has a stroke. The plasticity of the brain will automatically leverage whatever it has. And then you seem to hit some phase transition where the straw that breaks the camel's back—suddenly the person looks like they have a massive collapse. I see that all the time clinically, where someone will have Alzheimer's or a vascular dementia, and they seem like they're doing okay, and then suddenly they crump. You realize that they've been running a marathon of trying to compensate with all the other residual circuits. I don't know if normal aging would match that because humans can only live so long. I think if there's more neural real estate, the brain will use it as long as it can be calibrated to the rest so that it can be useful, as long as it's behaviorally useful.</p><p><strong>[38:23] Michael Levin:</strong> In the amphibian systems that we have, we can add brains as much as you want. We can certainly put in extra brains or induce extra brains anywhere you want. We have another model system for memory movement, which is in planaria. In planaria, you can train them. 
Then you chop off their heads, which includes the centralized brain, and the tail will sit there and not do anything until it grows a new brain. It grows a new brain, and it still retains its memories. There's some movement — wherever it is, and we don't know where it is, it's got to get imprinted onto the new brain as the new brain develops. So we know the information can move around. I don't know how far beyond planaria that goes, but my suspicion is that it's universal. We just haven't figured out how to activate it yet in these other systems. I think there's a lot of very fundamental work to do in planaria and in Xenopus and things like that. But ultimately it would be really cool to have a million stuff.</p><p><strong>[39:29] Mijail Serruya:</strong> Yeah.</p><p><strong>[39:33] Michael Levin:</strong> What do you say, Wes?</p><p><strong>[39:37] Wesley Clawson:</strong> A lot of things. Some of the amphibian stuff I'm far from, but I whiffle waffle with some of this because I understand the desire to figure out the best way to — this happens a lot because inevitably it seems like you deal with them as well. Venture capital people show up and they say, "what's the best way to do this thing and then we can sell it." One of the big things is how best do we let BCI function? It's always about this interface. How do we read out the thing? Especially the two of you, we hemmed and hawed about how everything is so plastic. If it's there, you'll use it. I'm not sure if the interface matters. You just have to make sure it's not destructive. If you reject an organ because of an immune response, then for sure it's not useful. It might not matter how the things communicate in the beginning. If the communication channel is fixed — let's say you put in a third brain or some external thing — it will use it in some meaningful way. What's interesting is studying a phase transition from when it goes from two things to one thing. 
If you haven't learned to couple with it, you're two things. Eventually it can couple. Even given AI, and giving everyone a game where they can participate, you still won't exhaust the parameter space, because you're going to have degenerate solutions. They'd all do different things. What's more interesting is seeing if you have two systems that you would like to interact, and they have some form of agency — and most of these we're discussing do — letting them control the communication channel. That's an interesting path forward that I would be keen to work on. I'm not clever enough to understand the brain, but the brain seems to understand itself quite nicely. It works all the time, and I never have to think about it. If you want to induce a new behavior from an external perspective, as when you want to train an animal: you train people all the time, you train a dog all the time, and have no understanding of how it works. You could claim that, over a long period of time, we roughly crowdsourced the best way to do it by pairing it with food. The dog also trained us; the two systems that were separate worked together to find the best communication scheme. There wasn't an external person searching a parameter space of how best to interact with the dog. They just let the two things work together. The dog only has so much agency. I only have so much agency, but we managed to work it out. One really interesting take on BCIs for me: I've always wanted to do translational stuff. I just never really had the opportunity. I found myself now in the basic science realm where hopefully someone clever can make it more translational. Because I work in tissue culture, a lot of my work — the things that I'm interested in on a five-year plan — is not necessarily how best to engineer something, but to engineer a setup where they can engineer themselves. 
I realize I've been talking for a long time.</p><p><strong>[43:23] Mijail Serruya:</strong> No, no.</p><p><strong>[43:24] Wesley Clawson:</strong> That's the take I always have: I'm not smart enough to do this. You all seem very brilliant, but I'm lazy and not very smart. If I could engineer a system where the other things could do the work for me, that would be cool.</p><p><strong>[43:41] Mijail Serruya:</strong> I think, as physicians, we always want to lean into the brain's and the body's ability to heal, and this idea of making it good enough rather than optimal is definitely right on. Cochlear implants are essentially like playing a piano with boxing gloves, and yet it works. If you put in a cochlear implant, even the early single-channel ones, these kids could understand speech and do things that seem impossible; they can get enough data out of this impoverished signal to pull it off. But I think the key is that, just like humans and dogs have been training each other, occupational and physical therapists and other kinds of rehab have been working with humans with strokes and other things for a hundred years, and they've only gotten so far. The question is, once we have these little ports, what can we do with them to really unlock those abilities? Part of my concern is that, except for very constrained use cases, it'll be hard to. It's one thing to say, "I'm going to put in a plug and now I've got a cursor." Now I have a person who has a complex aphasia. Lots of parts of their brain are damaged. They've already done a hundred years' worth of speech therapy and other therapies. I know how to do hypnosis, and I use it, with the rationale that I'm going to go as far as I possibly can without drilling a hole in the head. Once you hit that barrier, you need something new. 
The question is, once you have that, what do you do with it and how do you quickly converge on something that's useful to the person?</p><p><strong>[45:33] Alessandro Napoli:</strong> Can I jump in for a sec? Wes, what you said is very, very interesting. We didn't get into the details of the AI-based stuff on what to do and what we want to do it for. But one of the main reasons is: let's say you can grab these two systems, have them talk to each other, and let them figure something out. Maybe you're augmenting or restoring something that you didn't even think about before the experiment, because now that's where they're converging. "I really want to do this thing. I really want to talk about this thing. Let's do this thing." That's great. That's definitely on our radar. The main issue now is if you do this in biology, it's fine. Those specimens kind of talk the same language already. You just have to make sure they grow together, they make those connections, and they're going to talk to each other. Then you can study a lot of things. The issue here that we're seeing already is when you want to interface any kind of computer with the biology, the problem is they don't speak the same language. We can look at all of these spikes and we can interpret them, which is fine, but how are you going to send anything back in? Are you going to send back in spikes? Or are you going to send back pulses? What kind of pulses? How many pulses? Where, when, how? And are these pulses disrupting the natural physiology of the tissue more than talking to it? They're basically shooting at it. As part of those experiments, we are looking for that communication interface. How do I talk to this thing in the first place? Once you have that map, you can leverage the map, because ideally you can say, "These are all the things that I can talk to you about." These are all of the tricks that this dog can learn. 
Let's see if the dog actually learns the tricks, and then there's going to be another dog that might learn different tricks, with a different mapping.</p><p><strong>[47:38] Mijail Serruya:</strong> Or it's not even a dog. It could be a person with stereo-EEG who can subjectively tell us.</p><p><strong>[47:42] Alessandro Napoli:</strong> That's always nice. That's the bummer with dogs. But the point is the same: you have to figure out first what the specimens can do and understand and talk about. And then you can put two together at that point, three together, four together. You can have a whole brain and the specimens. You can have amphibians and the specimens. At that point, the sky's the limit.</p><p><strong>[48:07] Wesley Clawson:</strong> Yeah.</p><p><strong>[48:08] Alessandro Napoli:</strong> Because you can make those connections in a meaningful way. They are talking the same language. They're leveraging the correct communication channels.</p><p><strong>[48:16] Wesley Clawson:</strong> It's a really interesting problem. It seems like we're on the same path. And the nice thing about the work that Mike does, half the reason I came, is there are so many different systems. And it comes down to just agential engineering in general. So whether it's an amphibian, whether it's a patient or a dog or something synthetic or a computer that has a mild amount of agency, for example, when you want to engineer something that has an agential component, all the rules change, and that's the hard piece. 
I definitely get what you're saying, and it's not an easy problem.</p><p><strong>[49:00] Mijail Serruya:</strong> But I think, yeah, go ahead.</p><p><strong>[49:02] Michael Levin:</strong> No, please, you finish.</p><p><strong>[49:03] Mijail Serruya:</strong> When we have humans who have these implants, one of the things that's always tempted me as a physician scientist is that you can sometimes do things faster with a human because you can query them from both inside out and outside in. The question is, can you leverage that to make a flywheel to accelerate these things and converge faster? So that way, if you can have in vitro things being complemented with an in vivo interface, the person is also giving you some insight.</p><p><strong>[49:42] Wesley Clawson:</strong> That's incredible. We just don't have access to humans. At the moment, in our lab I have access to tissues.</p><p><strong>[49:52] Mijail Serruya:</strong> If you work with us, you have access to humans.</p><p><strong>[49:54] Wesley Clawson:</strong> Yeah, there you go, there you go, there you go.</p><p><strong>[49:58] Michael Levin:</strong> I wanna follow up on the hypnosis business.</p><p><strong>[50:03] Wesley Clawson:</strong> I knew it; he said it.</p><p><strong>[50:07] Michael Levin:</strong> Do you know the hypnodermatology story? It was Albert Mason. I'm really interested in being able to communicate across levels. He was able to, and people still do, give prompts to skin cells. I'm interested in that aspect of it. I want to hear what you have to say about that. The part that's underdeveloped, at least as far as I can see—I haven't seen anything on this—is the opposite: using that interface to get information out of the system. It's one thing to say, okay, I want you to have more of this cell type in your skin, but it's something else to be able to say, how's your Wnt pathway doing? How is the inflammation? How is the pH of your basal lamina? 
Because you talk about hypnosis, what are the prospects of using it to get actionable information out of tissues? Use the linguistic interface to pull physiological information out of the body.</p><p><strong>[51:17] Mijail Serruya:</strong> That's wild. I definitely haven't thought of it quite that way. I use clinical hypnosis as a clinical tool to help my patients and myself and whoever wants to learn. I find it incredibly powerful and useful day after day in clinic, teaching people self-hypnosis to help with pain relief and sleep and executive function and all kinds of things. I'm familiar with the work on the dermatology stuff and I certainly read the literature on profound changes in cell counts and cytokine cocktails. I haven't looked at that again recently. With my MD/PhD hat aside, my clinical hat uses it as another tool, another arrow in the quiver. Could one design an experiment to really dig at that? Absolutely. I could think about how to do that, especially if you combine it with things like biofeedback. The body is all interconnected. If I'm modulating it here, eventually information can be transmitted, and we can see that. Historically, neurology and psychiatry were one field, and so hypnosis was both a treatment and a diagnostic. Let me see what is rapidly reversible through these different suggestions and ideomotor communication. By seeing what is and isn't reversible, it helps you map out what's actually happening in this patient. But in terms of doing what you're talking about, speaking to these multiple levels and steering them somewhere, I haven't really thought about it. I'm a follower of Milton Erickson, who himself is operating at this wizard-like unconscious level and not really thinking about it; the body's just going to deploy the instructions, however mysteriously it does. Our job is just to communicate the high-level instruction, and how it unfolds, I don't know.</p><p><strong>[53:20] Michael Levin:</strong> Who's this? 
'Cause this sounds very on the nose for my stuff. Say again who that is?</p><p><strong>[53:26] Mijail Serruya:</strong> Oh, Milton Erickson, the father of American hypnosis.</p><p><strong>[53:29] Michael Levin:</strong> Because of what you just said, I need to read up on this, because this is exactly my shtick about regenerative medicine. When we induce an eye or a limb or something else, at least in the cases where we can do it, we've learned to give a very minimal prompt. I have no idea how to build an eye or a leg — hundreds of thousands of gene expression changes need to happen, stem cells, all that stuff gets taken care of. I'm able to say, build an eye here. And to the extent that it's convincing, and it's not always convincing, you have to get the cells to take up the set point. But once they do, they handle the downstream molecular biology. We don't need to know it. We don't need to worry about it. So I really think there's a deep parallel here. That's our whole thing, right? Showing the symmetries between the morphogenetic intelligence and the behavioral intelligence. That kind of multi-scale communication thing where you can talk to the system at the highest level, and then all of that gets transduced down to make the chemistry dance to it is hugely powerful.</p><p><strong>[54:36] Mijail Serruya:</strong> To me, it's very linked to the brain-computer interfaces in the sense of, I wanna push someone's brain as far as I possibly can. My brain to their brain, a Vulcan mind meld without me having to actually drill a hole in their head. But then as a physician scientist, I still hit a wall. I do things that surprise my colleagues, who are like, "What?" Dentists and OB-GYNs used to do hypnosis all the time in the 50s. Yes, they did. It works. And here's the science. But then at some point you hit a wall and you're, "All right, this person's ALS is still progressing. This person's aphasia is still a problem." Now my language is no longer enough. 
I need some other prompt to communicate to this tissue. But I think they're very interwoven. I don't think they're separate. I think, and I say this all the time at the BCI meetings, it's not enough just to stuff the thing in the brain. You have to now also be talking to the person and training them in many ways. Otherwise, it's useless. It has to be embodied and contextualized. And then you can leverage whatever that person actually has left and it will unfold. So that combination.</p><p><strong>[55:49] Michael Levin:</strong> Do we know that you need more than talking? For example, if in hypnodermatology you can talk to the system to get changes in skin cells, have you or anybody else tried doing an implant in a human and then using hypnosis on top of that to grow some connections? I don't see why, if you can talk a new cell type into the skin as in that original study with the kid with ichthyosis, we couldn't get better integration of implants and prosthetics if we knew what we were doing. Has anybody tried that?</p><p><strong>[56:36] Mijail Serruya:</strong> I don't know. I'm willing to try it.</p><p><strong>[56:38] Michael Levin:</strong> That might be a thing to try. I really want to see what the body's good at, which is transducing super abstract, high-level mental goals down into the chemistry of voluntary motion. If we can do that to muscle or skin cells, why the hell not? Let's try and improve the interface.</p><p><strong>[56:58] Wesley Clawson:</strong> Because I think it's also interesting, Mike, we've talked about this before, when can a system tell that it's being read from and observed? Does the behavior change? It would be very cool to—I'm not super familiar with hypnosis. I just have a note to Google Milton Erickson, but if you could say, outside of me talking to you, are you being observed in any way? 
'Cause for example, if you put an EEG cap on them, blood pressure is different, but rather than saying, "Yeah, I feel pressure on my arm," could they unconsciously know what you're looking at? If you had a glucose meter in them, would they know what you're looking at and how? And that would be really interesting with these implants, because they might say, "Oh yeah, implant, you're looking at my brain activity," but I know sometimes internally you'll represent it in a different way. They might respond with something quite interesting, some weird abstract answer that might help us understand it. So I think hypnosis and BCI have a potentially very cool line of investigation there.</p><p><strong>[58:06] Mijail Serruya:</strong> I'm happy to try. I have tons of patients who are in need, and we have clinical scenarios where people already have wires in their head. It's a matter of articulating the question properly and then knowing what we want to measure and look for.</p><p><strong>[58:17] Michael Levin:</strong> Let's design something because you've got both sides. It's not that common to find somebody with both sides of that equation covered, which it sounds like you have. And I think we have some formalisms from the morphogenesis side that might be useful.</p><p><strong>[58:32] Mijail Serruya:</strong> Okay.</p><p><strong>[58:34] Michael Levin:</strong> Yeah.</p><p><strong>Mijail Serruya:</strong> You might find that very interesting.</p><p><strong>[58:36] Michael Levin:</strong> I think so. Yeah.</p><p><strong>[58:39] Mijail Serruya:</strong> That's a hypnotic suggestion.</p><p><strong>[58:41] Wesley Clawson:</strong> Yeah.</p><p><strong>[58:42] Michael Levin:</strong> I've already absorbed it. It's already that well.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Conversation 1 w/ Lisa Barrett, Ben Lyons, Eli Sennesh, Jordan Theriault-Brown, and Karen Quigley</title>
          <link>https://thoughtforms-life.aipodcast.ing/conversation-1-w-lisa-barrett-ben-lyons-eli-sennesh-jordan-theriault-brown-and-karen-quigley/</link>
          <description>Researchers including Lisa Feldman Barrett, Benjamin Lyons, Eli Sennesh, Jordan Theriault-Brown, and Karen Quigley discuss allostasis and top-down control, bioelectric collective intelligence, development, plasticity, and agency across biological scales.</description>
          <pubDate>Sat, 17 Jan 2026 00:00:00 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 696b7bca7fe50a0001b04c2b ]]></guid>
          <category><![CDATA[ Conversations and working meetings ]]></category>
<content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/T1b7nEj7IlQ" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/850bbe58/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a discussion with Lisa Feldman Barrett (<a href="https://scholar.google.com/citations?user=WF5c0_8AAAAJ&hl=en&ref=thoughtforms-life.aipodcast.ing">https://scholar.google.com/citations?user=WF5c0_8AAAAJ&amp;hl=en</a>), Benjamin Lyons (<a href="https://interestingessays.substack.com/?ref=thoughtforms-life.aipodcast.ing">https://interestingessays.substack.com/</a>), Eli Sennesh (<a href="https://scholar.google.com/citations?user=3z4ALYgAAAAJ&ref=thoughtforms-life.aipodcast.ing">https://scholar.google.com/citations?user=3z4ALYgAAAAJ</a>), Jordan Theriault-Brown (<a href="http://www.jordan-theriault.com/?ref=thoughtforms-life.aipodcast.ing">http://www.jordan-theriault.com/</a>), and Karen Quigley (<a href="https://scholar.google.com/citations?user=aZ3qhVUAAAAJ&hl=en&ref=thoughtforms-life.aipodcast.ing">https://scholar.google.com/citations?user=aZ3qhVUAAAAJ&amp;hl=en</a>) about topics related to allostasis and top-down control across cognitive science and developmental biology.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Framing interdisciplinary synthesis</p><p>(03:18) Bioelectric collective intelligence</p><p>(16:00) Constraints versus bioelectric memory</p><p>(20:06) Neurodevelopment and allostasis</p><p>(27:10) Plasticity and dirty genomes</p><p>(35:59) Relational structure and constraints</p><p>(40:24) Agency across biological scales</p><p>(45:04) 
Goal-like molecular networks</p><p>(48:49) Allostasis and control hierarchies</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Benjamin Lyons:</strong> I'll do a little bit of an intro, explain what's motivating this, and then Mike has a few slides he can go through, and then I want to get y'all's takes and open things up for discussion. My background is in economics, and I've worked with Mike to produce some research showing some connections between his ideas and economics. We've got one paper out. There's a couple more on the way. 
The second biggest inspiration for me is the theory of constructed emotion and the ideas of interoception and allostasis. We're bringing a lot of ideas from that into these papers as well. Every time I talk to Mike, I talk about these people and these ideas and how related it all is. If I were to try to give a very brief, high level summary of what I think some of the similarities are, the most obvious one is just that you had this history. Mike studies development and y'all study emotion. In both fields, there was this history of thinking there is this genetic plan that just tells everything what to do. And it's rote and prescribed: basic emotions. Or if you look at the development of a cell into a human, it seems it's just on some fixed schedule, and then both of y'all have produced theories that basically say that's not how it works. It's a more in the moment, constructed on the fly thing where the parts and pieces figure out what they need to do to achieve their goals. Relatedly, there's a lot of emphasis on physiological states and physiological signaling. Mike has these very important ideas about cognitive glue: the cells are able to communicate aspects of their physiological states to each other. That enables a lot of coordination throughout the system in a way that I think matches very well with ideas about interoception and allostasis. My perception is that y'all have studied very different phenomena on the surface, but have produced very similar theories about how those ideas work. There are a lot of interesting broad conceptual things to explore. There are a lot of interesting specific hypotheses that might be worth experimenting on. I do have a blog where I write about some ideas. I've written about some of the connections, including that collective intelligence and allostasis are very much things that need each other: collective intelligence needs allostasis to function and operate. 
Allostasis needs collective intelligence to carry it out, or else it wouldn't be able to operate. The cognitive glue mechanism that is an important focus for Mike is something that works through the sharing of interoception signals. That's an interesting generalization. There's a lot of really powerful comparisons here. Both theories have an important economic background. Mike's collective intelligence theory — we have a paper talking about how it's all about economic coordination. Allostasis is about the allocation of resources within the body. Economics is traditionally defined as the study of the allocation of scarce resources. Behind it all, there's a lot of economics lurking. That's what I rely on. Unfortunately, some of the biological and neuroscientific details do go over my head at times. That's why I wanted everyone to meet each other, to share these ideas, because I think it's building toward a much bigger, more powerful synthesis that applies to a lot outside of the traditional phenomena that have been studied. I'll turn it over to Mike. He can go through a few of his slides and then we'll open it up for discussion.</p><p><strong>[03:18] Michael Levin:</strong> Great. Thanks so much. And thanks, Ben, for pulling this all together. I've been looking at your work for a really long time, and I'm very excited to talk to you and to see what integration can take place and what I can learn from some of the things that you all do that apply to us. To give you a bit of background, my background is computer science. I now run a lab of mostly experimental biologists, some computational modelers. Our goal is to understand embodied intelligence very broadly. That means we use a wide variety of bizarre substrates. It's everything from individual cells and tissues and cyborgs and hybrots and different kinds of synthetic agents and biobots. We make all these different things. 
Our goal is to try to develop frameworks for understanding what it means to be able to recognize and communicate with minds that are not like ours — strange embodiments, different scales of space and time, different spaces that these things live in, and to create tools by which we can begin to understand that they exist and how then do we communicate with them. One of the workhorse models in our group is this notion of groups of cells navigating anatomical space as a collective intelligence. In other words, embryonic development, regeneration, metamorphosis, cancer suppression, aging resistance — all of these things have in common that there is a group of cells that has to get together to pursue goals that no individual cell knows anything about. I'll show you a couple of quick examples. We study the mechanisms, and these are very specific biophysical mechanisms by which cells form networks that operate in spaces and follow large-scale set points, AKA goals, that their parts don't know anything about. That scaling of intelligence and its projection into new spaces is what we're interested in. In particular, the technology that we used to interface to this process is bioelectricity, because, very much like in the brain, the evolutionary history of what happens in the nervous system is an elaboration and a huge speedup of things that were happening long before we had nerve and muscle. Going back to the time of bacterial biofilms and then true multicellularity, for navigating the space of anatomical possibilities, evolution already picked up on the fact that electricity is really good for this. All of the ion channels, the neurotransmitters, the gap junctions, all of the stuff that operates in the brain actually has a long history of doing exactly the same thing in development, just in a different space. 
What we typically do, and this is one reason why I'm very excited to talk to all of you, is that we try to steal as many tools as we can from neuroscientists and ask where else do they apply? We've been able to apply all kinds of things in systems that don't have brains, and it's shocking to a lot of people that these things apply. That's the overall deal. I'm going to share a couple of slides to show you. Is everybody seeing a title slide? Ben asked me to show a couple of examples of context-sensitive sensing and actuation. We study a number of spaces that living systems traverse: high-dimensional space of possible gene expressions, physiological state spaces. There are navigational skills that systems develop in this space, and particularly what we're interested in is anatomical morphospace. What we've been able to find is that systems navigate that space of anatomical possibilities in a way that makes it very clear that the simple model Ben mentioned at the beginning — the idea that the genome codes for specific outcomes — doesn't fit the data at all, because what actually happens here is a high-competency navigational process that solves all kinds of problems. It encounters problems it's never seen before. It has plasticity, enormous plasticity. It has all kinds of ways to do things that normally it would never see. It is, I think, an example of a real-time intelligence that uses the genome as a set of prompts and as a set of hardware specifications, but not as a set of descriptors of what's going to happen. Very briefly, the most obvious thing is something like this. You have an animal like this, which is an axolotl. It will grow this limb. And then you find out that it's actually not simply emergence, as a lot of people assume when they make these open loop models that are just emergent. If you cut it anywhere along this line, the cells will very quickly jump into action. 
They will rebuild the same limb, and then they stop.</p><p><strong>[07:31] Michael Levin:</strong> And that's the most amazing thing about this. They know when to stop. When do they stop? They stop when they've built the correct structure. They've been deviated from this location in morphospace. They get back there, then they stop. One way you can model this is as an error minimization scheme. So my delta from here to here is large. I'm going to keep taking actions until that delta is within some acceptable range. Also there's a stress piece involved that we can talk about. But it's more than this. So it's not simply repairing damage or anything like that. This is one of my favorite experiments. What you can do is, and this is not mine, this was done back in the 50s, you can take a tail and graft it onto the side of the animal. And what happens over time is that this thing turns into a limb. Now, pay attention to the cells here at the tip of the tail. These are tail tip cells sitting at the end of a tail. There's nothing locally wrong. There is no damage. There is no injury. Locally there's no reason for them to do anything at all, except that they start turning into fingers. What's happening here is that there's a large-scale control over the molecular events that are here, because locally there's no error. But globally, the system as a whole knows that what you have in the middle here is not a tail, you should have a limb. And that error, which only exists in a large-scale anatomical space, has to then be propagated down to control molecular events that locally have no reason to happen, which is similar in the sense that in voluntary motion, you have these very abstract cognitive goals that then have to make the ions move across your muscle membranes for you to do that. There's a transduction from all kinds of abstract spaces down to making the chemistry do what's needed to make it happen. That's one example. 
Another example of context-sensitive behavior is a tadpole. Here are the eyes, the nostrils, the mouth, the brain, the gut. In order to become a frog, these guys have to rearrange their face. All kinds of things happen during their development. It used to be thought that this was a hardwired process. You just move every organ in the right direction, the right amount, and you get your frog. We wanted to test that. We made these Picasso tadpoles. Basically we scrambled all the organs. Everything was in the wrong place. Literally the eye is on the back, the mouth is off to the side, the whole thing is an incredible mess. They still make normal frogs because it's not a hardwired process. What happens is all of these structures will move forward in novel paths, abnormal paths, until they get to a normal frog face and then they stop. Sometimes they go a little bit too far and they have to come back and then they stop. The obvious question is, how the heck does it know what a correct pattern is? We actually have an answer to this. We've figured it out to some extent. I'll show you that momentarily. I want to show you another couple of crazy examples first. This is a thing called trophic memory in deer antlers.</p><p><strong>[11:44] Michael Levin:</strong> Every year these things shed this giant bony structure. What George Bubenik realized after about 40 years of experiments is that if you make a wound at one particular place in the structure, this whole thing falls off. Months later, next year, the new rack will grow. When it grows, it will actually grow an ectopic tine at this location. And that happens for about five or six years, and then eventually it goes away. It means that, first of all, since this whole thing is going to be gone, the information has to be stored somewhere else in the body. You have to remember where it was in this three-dimensional structure. 
Months later, you have to say, when you're doing the bone growth here, take an extra left turn and grow this thing right here. That's the kind of plasticity. None of this is genetic, because the genome hasn't been touched. Good luck drawing a molecular biology arrow diagram of what's going on here. Those kinds of models are not well suited for understanding phenomena like this. Working with deer is incredibly hard, so we came up with a tractable lab model. Those are planaria. Planaria are cool because, among other things, they are incredibly regenerative. You can cut them into many pieces. Here's an amazing example of context sensitivity. If you cut them in half, this side will grow a tail, this side will grow a head, but these cells were direct neighbors. They were sitting right next to each other. They have the same positional information. You can cut them anywhere, and yet they have radically different anatomical fates because it isn't local. The wound actually talks to the rest of the animal to figure out what we have. This guy is incredibly regenerative, cancer-resistant, immortal. In fact, there's no aging in them, despite the fact that they have incredibly dirty genetics. It's a very interesting story. What we've discovered is that the question of how do you know how many heads you're supposed to have is actually stored as a bioelectrical pattern memory. We developed tools to visualize voltage gradients in living tissues of all kinds of species. Using various ion channel drugs and optogenetics, we can put in a different pattern that says you should have two heads. You can do that in a one-headed body. The anatomy is one-headed. The molecular biology is one-headed, meaning anterior markers are expressed in the head. What it does have that's weird is a false memory of what it takes to be a good planarian. If you cut this guy, the pieces will make a two-headed worm. If you keep cutting them, they will continue to make two-headed worms. It's a memory. 
I have lots of other examples I can show you. I'm going to stop here. The bottom line is that groups of cells use electrical signaling driven by ion channels, propagated through gap junctions; serotonin is involved; all of these same players store large-scale pattern memories, and they have some amazing ingenuity about getting there. Unless they can't, in which case they form other kinds of beings that have never existed before. We've made those too, Xenobots and Anthrobots. You can find anything from simple error minimization to delayed gratification to memory rewriting to what I see as creative problem solving when you push them into scenarios where they simply can't do the thing they were trying to do. They do something else and they always do something interesting. We would deploy whatever tools and lessons we can learn from conventional cognition in these models and see what happens.</p><p><strong>[16:00] Lisa Feldman Barrett:</strong> Is this a contextual constraint argument, that the bioelectrical signaling between the cells produces a constraint that directs the biology down a particular path?</p><p><strong>[16:29] Michael Levin:</strong> I would go further. I think you can say that. I would go further than that because what we see, one thing I didn't show you is something we call the electric face. The electric face is a pre-pattern. Long before the genes turn on to regionalize the ectoderm into a face, you literally see what looks like a face. The eyes are going to be here, the mouth is going to be here, the placodes are out to the side. This isn't just a constraint. It is literally an instructive pre-pattern or a memory of what you should do in the future. And if we rewrite that pattern, we can make all sorts of crazy stuff because the pattern is, as far as the cells are concerned, the ground truth of what they're building. If we alter that pattern through optogenetics or ion channel drugs, they will build something else. 
So I would say it's more than just a constraint. At the level of physics, sure, it's a constraint. But at the informational level, I think it's an instructive memory of what you should be doing. We have some control over that now. We can incept these false memories into these things. They will simply do it. Much like with the voluntary motion example, all of the molecular details are handled by the material. In other words, when we tell an animal to make an extra eye, I don't know how to build an eye. An incredible number of genes have to be activated in stem cell biology. We don't know any of that. We give a large-scale, high-level prompt that says "build an eye here." To the extent that we are convincing, everything else gets handled by the material, which will trigger all the downstream stuff to make it happen.</p><p><strong>[18:10] Lisa Feldman Barrett:</strong> The bioelectrical pattern is there before you have genes, before there's gene transcription.</p><p><strong>[18:20] Michael Levin:</strong> Generally speaking, yes. The bioelectrical pattern precedes the implementation details of actually turning on the various genes. However, big picture, if you take a step back, the whole thing is a feedback loop because in order to have bioelectrical signals, you need ion channels expressed right before. But much like with a lot of hardware-software systems, with the ion channels that are present, what you have is an excitable medium. You need a minimum number of channels to make a competent medium. You need some voltage-gated channels. That typically is maternally provided in the egg, but by itself, that doesn't have any of the specificity of the morphogenesis that happens later. What happens is that the excitable medium then, left to its own devices, undergoes spontaneous symmetry breaking and amplification that gives you Turing patterns. At the electrical level, that's what it does by default. 
But you can step in at any moment and not touch the genetics, not change the ion channels, but simply control what the voltages are at any given location, and that's enough, if you know what you're doing. We now have simulators that help us design, because the goal of all of this is regenerative medicine. So at some point, we can fix birth defects in these model systems and normalize tumors. The goal is to say, here's a bunch of cells, and they have an abnormal pattern memory of what they're going to build. We're going to fix that. We're going to give them some better memories of what to do, and that doesn't require putting in new channels or deleting channel genes or any of that. We don't usually touch the genetics.</p><p><strong>[20:06] Lisa Feldman Barrett:</strong> This is so interesting. Can I ask a couple of questions? One question that I have is whether network homeostasis works like this, for example, in a brain. You see examples from Eve Marder's work all the way up to a larger scale brain where neurons are switching in and out. The function of the network is maintained as the neurons are switching in and out. There's not a lot known about exactly how that works. People are observing it, and the function is really a property of the relations between the cells. It's not a function of any given cell or any given spike train.</p><p><strong>[21:08] Michael Levin:</strong> You're absolutely right. I think probably evolutionarily that's where the brain learned that amazing trick.</p><p><strong>[21:17] Lisa Feldman Barrett:</strong> You're going, right.</p><p><strong>[21:20] Michael Levin:</strong> Because during early development, you have a pattern and actual cells are moving in and out, right?</p><p><strong>[21:28] Lisa Feldman Barrett:</strong> Finish, and then I'll ask my next question.</p><p><strong>[21:29] Michael Levin:</strong> The cells are coming and going, and I'll take it one step further. 
I would say that some of these, maybe all of these bioelectric patterns serve as virtual governors, where if you want to know what the causation is, it's the pattern that's exerting the force. It's not the cells, it's not the molecular stuff underneath, it's the pattern, and you can swap out the hardware. In fact, the hardware does swap itself out as things get bent out of shape and move this way and that way. It's the pattern that drives the show.</p><p><strong>[22:04] Lisa Feldman Barrett:</strong> So my next question. This is the kind of thing that we're talking about, but really scaled up, as Benjamin mentioned, across multiple levels, temporal and spatial scales. I'm not an embryologist, and I'm far from developmental neurobiology. My recollection is that some cells—their location and their trajectory in an embryo—are genetically prescribed, but most of them aren't. They're really under local or contextual control, of the origin or of the destination. Meaning that where they go and how they end up functioning is contextually determined either by where they originated or where they end up. Some things are genetically prescribed. For example, the synapses between neurons in the thalamus and stellate cells in the cortex. Those synapses—I think there's some genetic specification there. They have to recognize each other chemically in order to make the synapse. But most of the time it doesn't work like that. It's very, very rare. Another example is in the lateral geniculate nucleus. Sometimes people will say the brain is running a model of the world. But the brain is not running a model of the world. It's running a model of the sensory surfaces of its body that moves around in the world.</p><p><strong>[24:36] Lisa Feldman Barrett:</strong> And the signals that the sensory surfaces are receiving are some of them from the world, some of them from the body, internal to the body. 
But the brain doesn't map the visual world, it maps the retina and it infers the rest. For example, in the lateral geniculate and in the superior colliculus, there is a map of the retina. That map is genetically determined or genetically influenced. You don't require experience for it to develop. But the retinotopic maps in V1, my recollection is that they are largely or almost exclusively experience-based, which means that there's something about the signaling that is required to actually establish the map. What's interesting about that is that for the most part the neurons that make up V1—their migration, from where they're born—is not genetically prescribed. It's contextually prescribed. I can't remember if it's the origin or the destination, but it works differently in different parts of the embryo and at different times. One question for us would be what's happening with the limbic circuitry, the circuitry at the center of the brain. Its main job really is the regulation of the body. So its main job is allostasis. It's not so much that this circuitry has special features as much as these neurons in particular— not just the neurons, probably glial cells, the whole tissue—play an important role not just in the regulation of the body, but in every cognitive phenomenon, every mental phenomenon, and in the coordination of the visceral motor system with the skeletal motor system. And so it'd be very interesting to understand whether the principles that you are studying and that you've established for anatomical organization you would also see for functional organization of the allostatic control of the body, or the importance of energetics-related signaling to cognition writ large.</p><p><strong>[27:10] Michael Levin:</strong> It makes a ton of sense. And I think, for us, even broader, taking it to all kinds of weird scenarios. 
For example, we've done this thing in tadpoles where we can make animals that have no primary eyes in the head, but they have an eye on their tail. That eye on the tail makes an optic nerve. That optic nerve does not go to the brain; it goes sometimes to the spinal cord, sometimes to the gut, sometimes nowhere at all. Those animals can see. We test them in visual learning assays. And they can see out of the box, no new rounds of adaptation or selection, no radically different sensory-motor architecture. They can get around using visual cues.</p><p><strong>[27:52] Lisa Feldman Barrett:</strong> Is it like a blindsight kind of seeing?</p><p><strong>[27:54] Michael Levin:</strong> I can't tell you what the experience of the tadpole is. I don't know if they know they can see.</p><p><strong>[28:02] Lisa Feldman Barrett:</strong> Some people would be very happy to speculate about the experience of a tadpole, but I'm with you on that one.</p><p><strong>[28:09] Michael Levin:</strong> I can speculate, but I don't have any actual data. So what's the point?</p><p><strong>[28:14] Lisa Feldman Barrett:</strong> Let me ask the question differently. What I meant by blindsight is: is it gross spatial features that they can detect and behaviorally use? I don't know. Blindsight people usually talk about it in terms of conscious experience, but there is a difference in the type of visual signals, the type of visual features that the animal is using to navigate movement.</p><p><strong>[28:58] Michael Levin:</strong> I understand the question. I don't know how much detail they can see. Our behavioral outputs are fairly coarse-grained. But between that and the anthrobots that we make, which have a perfectly normal human genome but a completely different kind of behavior and 9,000 genes expressed differently, half the genome is now completely different. We haven't done anything to them. It's just a new lifestyle that they have and they can do all kinds of things that normal tissues don't do. 
I think the plasticity is incredible, and I think they try hard. All of these things try to get to their default configuration, but if they can't, they will do something else, and what they do will be coherent and adaptive. The exception is certain nematodes: in C. elegans every lineage relationship is very precise, and every nematode has the same number of cells. Other than that, I'm very skeptical about stuff that is "prescribed." I think the default might look prescribed, but if you start to push it, you'll find that they can do all sorts of other things.</p><p><strong>[30:21] Lisa Feldman Barrett:</strong> We're very sympathetic to that. For us, this is like music to our ears, but in the circles where we spend a lot of our time, people are still spouting the modern synthesis doctrine, ideology. I'm sure you're familiar with that. That's why everybody's smiling.</p><p><strong>[30:51] Michael Levin:</strong> I'm glad because it's exactly the same in development and molecular genetics. These examples, when I teach students occasionally, my talk is called "Why Is This Not in Your Textbook?" And these are all things that are specifically not in the textbook, because if you look at the things in the textbook, then you get this picture of a nice genetic determinant, but when in your biology education did anybody tell you that the animal with the most regenerative capacity, meaning the most stable anatomical features, cancer resistance, no aging, is the one with the dirtiest genome? Shouldn't it be the opposite? Shouldn't it be the clean genomes that are responsible for all this stuff? It's actually exactly the opposite.</p><p><strong>[31:37] Jordan Theriault-Brown:</strong> I mean, so by clean versus dirty genome?</p><p><strong>[31:41] Michael Levin:</strong> Most of us, when we have offspring, the offspring does not inherit our somatic mutations. Planaria, at least the ones we study, are not like that. They tear themselves in half and regenerate. 
That means every mutation that doesn't kill the stem cell it hits gets propagated. They keep everything. They're mixoploid. Every cell has a different number of chromosomes. They look like a tumor.</p><p><strong>[32:12] Jordan Theriault-Brown:</strong> There are no specialized gametes that mostly copy the DNA.</p><p><strong>[32:17] Michael Levin:</strong> And these are asexual; we study the asexual forms. And so these guys have been around for 400 million years, accumulating mutations. There is no such thing as a mutant line of planaria. There is for everything else. You can call the stock center and get mice with weird, curly tails. The only mutant line of planaria is our two-headed form, and that's not genetic. There's nothing genetically different about them. And they ignore new genes; there are no transgenics in planaria. If you try to put in new genes, they don't care about them any more than about the mutations they already have. We could go on and on about why this is, but specifically, I think in planaria it comes down to the material, the fact that they're made of a really unreliable substrate. We've modeled this computationally; all the effort has gone into an algorithm that can do something useful even when your hardware is junky. The hardware is going to be different, you can't count on it, but you've got an algorithm that's rock solid and it's tolerant to all of that. To some extent, planaria have it all the way, amphibians a little less, and C. elegans maybe not at all.</p><p><strong>[33:30] Karen Quigley:</strong> Maybe we're thinking about junk DNA in the wrong way. What we would say is that you need a lot of variance, a lot of variation, because that's the substrate on which evolution operates. 
And so this seems like the perfect option if you were to think of it that way, in the sense that there's a lot of opportunity.</p><p><strong>[33:52] Michael Levin:</strong> Yeah.</p><p><strong>[33:53] Karen Quigley:</strong> Because there's a lot of opportunity to take advantage of something that exists already and utilize it, especially when something changes that you need to adapt to.</p><p><strong>[34:02] Michael Levin:</strong> Yeah.</p><p><strong>[34:04] Lisa Feldman Barrett:</strong> The reason for the word junk is this old distinction between genes and junk DNA.</p><p><strong>[34:09] Michael Levin:</strong> That's not what I meant.</p><p><strong>[34:10] Lisa Feldman Barrett:</strong> I know that's not what you meant, but I was saying to Karen.</p><p><strong>[34:18] Karen Quigley:</strong> I wasn't assuming that particular use. It's an interesting way of flipping it on its head.</p><p><strong>[34:25] Michael Levin:</strong> Yeah.</p><p><strong>Karen Quigley:</strong> That it really isn't about being junky. It's about providing a really broad substrate for adaptivity.</p><p><strong>[34:34] Michael Levin:</strong> I think that's a really interesting idea. What we see in our simulations is the following. When you're dealing with a material that is able to autonomously fix certain defects, what ends up happening is that selection can't see the genome very well. Because when you get a tadpole that looks perfect, you don't know, was that because my genetics were great or because my mouth started off over here but by now it's moved back to where it needs to be. So selection has a hard time seeing the structural genome. And so what it does is then spend more time working on the competency, the plasticity. But the more you do that, the harder it is to see. So there's a positive feedback loop. I think what's happened in Planaria is that loop went all the way to the end where we can't see the genetics at all. 
In the simulations, you can literally see where all the evolutionary optimization is happening. Once you have a competent material like that, it really starts to crank on this autonomous repair stuff. Because the more you have it, the harder it is; you can't select the good genomes from the bad because you don't see them. All you see is the final product, and the final product is way beyond anything the genetics was telling you about.</p><p><strong>[35:59] Lisa Feldman Barrett:</strong> The way that we think about it is that if you've got N elements, they could be neurons or whatever, you have tremendous opportunity for variability, but you also have tremendous complexity. Life requires the reduction of complexity. It requires some predictability and structure. But where is that structure? Where is that predictability? If this end is complete complexity and variation, this other end is completely determined: everything is hard coded into the elements themselves, which is what some people want to claim. Or maybe they'll allow for some variation around the edges. That context can tweak the edges. What we're saying, and it sounds like what you're saying too, is that there is a structure, but it's a structure not at the level of the elements. The elements are somehow constraining each other with signals that are reinstatable, or memorable, and the structure also can change. It's not hard coded at the level of the individual elements, which is really what a strong genetic variation-plus-selection argument is, where the properties of the genes are hard-coded; they're in the genes themselves, and then the genes just express those properties, as opposed to the structure being flexible. It's not infinitely flexible, but it's relational and at the level of the interaction of the elements. Those properties exist at the level of the interaction of the elements, not in the elements themselves. 
So they're fundamentally relational properties.</p><p><strong>[38:04] Michael Levin:</strong> I think that's exactly right. I'd augment this constraint business, because I think a lot of people think about constraints. But what I emphasize also is that there are a lot of free lunches, which I'll define momentarily, that actually become enablements rather than constraints. What I mean by that is that there are patterns. There are patterns that come from mathematics, that come from computation, that are not laws of physics. They're not anything you discover in physics. They come from some of these other domains that biology, I think, uses very effectively. For example, as soon as you evolve a voltage-gated ion channel, what you really have is a voltage-gated ion conductance, aka a transistor. If you have two of those, you can make a logic gate. Once you make a logic gate, you inherit all kinds of crazy properties that you didn't have to evolve. They're not in the materials: the fact that NAND is special and that you can do all these things. All of that is given to you for free from the laws of computation. You just have to make the right physical interface that hooks into it. And so absolutely there are constraints, but there are also these, and they're all over the place. There are all sorts of these wild enablements that are there for you as a free gift from mathematics.</p><p><strong>[39:30] Lisa Feldman Barrett:</strong> Is there a word, an abstraction that means constraint or enablement? Meaning that not all possibilities are likely, but some become unlikely or impossible, and some become more likely. Is there a word?</p><p><strong>[39:51] Michael Levin:</strong> There is. I still feel that there's a commitment.</p><p><strong>[39:59] Lisa Feldman Barrett:</strong> When I said constraint, I wasn't ruling out enablement. What you're saying makes a tremendous amount of sense. 
Maybe it's a different word I should be using.</p><p><strong>[40:09] Eli Sennesh:</strong> We've been really interested as a lab in constraint causation. One of the next things we really want to tackle is for the labs to all read Moreno and Mossio's Biological Autonomy together to get some of that constraint causality out of it.</p><p><strong>[40:24] Lisa Feldman Barrett:</strong> And then there were other people who were talking about this. I had a really interesting conversation with Philip Ball a couple of months ago because I read his "How Life Works" and was totally blown away, actually, because I didn't know a lot of it. I knew genomes were dirty, but I had no idea how random things were. I will say that I've recommended this book to a number of people in psychology and their minds are completely blown because this is not anything anybody knows in our field, it seems like. One thing he was talking about—if I understand him correctly—is that what you would call bioelectric pattern memories, I think he would count as an example of agency or meaning making that occurs at the level of multicellular systems. It occurs within a cell, but it also occurs anytime you have parts that are interacting and producing movement towards a state that could only exist because of the parts interacting. It's not a state that—so goal here just means direction towards a future state. I wanted to understand whether he thought that was a principle that could generalize across temporal and spatial scales, because it certainly seems like that's what we're talking about, but at a completely different scale. We're also talking about relational meaning, that a lot of the properties that we ascribe to objects or to people exist in the relationship between things, not in the things themselves. I wondered whether he—I'm thinking about Scott Turner's books, where he's making an argument that goals exist only at the level of the ensemble. 
Exactly what you said about individual cells: there's no evidence that the cell itself would move towards a particular state. It's only the cell in combination with other cells. I was thinking of Scott Turner's first book, The Tinkerer's Accomplice, where he's talking about termites. Individual termites don't have a goal to cool the nest. But when they're moving things around and making tunnels and tending their fungal gardens, they don't have a goal per se to homeostatically maintain a temperature in a particular range, but that's what's happening at the level of the nest. That has to happen at the level of the nest. Otherwise the nest will fail and all the animals will die. So it seems like conceptually there's great sympathy between these particular lines of work. I'm sitting here thinking I have 100 uses for your examples, but I'm wondering how we can help you exactly. How can we return that favor?</p><p><strong>[45:04] Michael Levin:</strong> First, to comment on the last thing. I don't want to speak for Phil, but I think I'd go further than he does along the lines of what you just said. We study, for example, molecular networks. Molecules that crank each other up or down — even small networks, let's say five or six subunits; it doesn't take very much complexity at all. Small networks can have habituation, sensitization, associative conditioning, where the network — you can pair chemical stimuli. We're using this now for drug conditioning in medicine. You can pair an effective stimulus with a placebo. You can do a placebo within a six-unit molecular network — no cells, no neurons. This kind of stuff goes all the way down to very minimal systems. We have two sets of tools that we use to study them. One is behavioral. If you want to know whether it's a goal or not, put a barrier between this thing and its goal in whatever space you're working on and see what happens. 
You will see sometimes it goes around and it has delayed gratification. If not, you haven't shown a goal. The other thing we have is the tools of causal information theory, the kind of stuff that Tononi does on coma patients, and we use it to look for causality at different levels. You can look at the bioelectrics, at the calcium signaling, at the molecular signaling, and do the calculation and ask, is there a whole that's more than the sum of its parts here? Sometimes the answer is yes, and sometimes it's no. I definitely think it goes very far below.</p><p><strong>[46:59] Jordan Theriault-Brown:</strong> Excuse me, he may be at my door.</p><p><strong>[47:04] Lisa Feldman Barrett:</strong> I wasn't saying that he was right. I was saying he's the gateway for a whole literature that I was not aware of and that is very useful for us.</p><p><strong>[47:19] Michael Levin:</strong> So what I would love is, first of all, exactly as you said, I like examples from cognitive science that I can use to say this is not new and crazy. This has already been seen. And in this field, here's what they see. Concepts that we can adapt from what you guys see.</p><p><strong>[47:51] Eli Sennesh:</strong> I'm tagging ahead. I don't want you to finish.</p><p><strong>[47:57] Michael Levin:</strong> I want to understand as much as possible about what you guys think is happening in the brain. How early did biology actually take that on? Was it in cells and embryos? Was it in molecular networks inside of a single cell? Was it before that? We like to look all through the spectrum.</p><p><strong>[48:20] Eli Sennesh:</strong> I've had a burning question on this: a lot of what you're describing—the thing that jumped out to me in your initial slides—was that you're talking about having a large-scale goal and a pattern, and basically anatomy rearranging itself to that macro-scale pattern in regenerative tissue?</p><p><strong>[48:44] Michael Levin:</strong> That's one set of examples. 
We could go somewhere else.</p><p><strong>[48:49] Eli Sennesh:</strong> The immediate analogy to that for me, and I think Lisa's been poking at this with some of the questions as well, is that it seems like it's also a description of how behavior is implemented by the brain. You basically have a macroscale pattern or a behavioral configuration that is a whole unit of how muscle tissue and sensory input is arranged, that involves a macroscale organization that's more than the sum of the parts of any one bit of muscle tissue or any one sensory stream. But what you're getting is a macroscale organization to that pattern that's then implemented and refined down through the details across the whole brain. When you were talking about goals at the molecular level or goals in these biological tissue patterns, it seems there's a straight analogy to how goals or behavior gets recognized at a macroscale level across the brain to configure itself. It solves a big problem that we have, which is how you think about goals or intentionality or goal-directedness in any psychological sense. There's a big overlap between something that Eli and I have both been into, and Karen and Lisa have also been interested in, which is how to think about different models at a psychological level that are built around this negative feedback control or this configuration to adapt to model subconfiguration in terms of sensory input. There's a sidestream of psychology that's thought about this, but it's been left behind and not developed well. I'm happy to share some offshoots from cybernetics where people have tried to think about things in those terms, but it is not the mainstream in psychology, and it seems to fit well with what you're talking about. It seems like you have some mechanisms to make that tractable rather than a purely theoretical thing.</p><p><strong>[51:00] Michael Levin:</strong> That's great. I'd love to see those. 
That's also something I'm very interested in: hearing you guys talk about meta levels. You have a set point and a goal, but what are the components that can reset that to something else? Because that's what we're always on the lookout for: instead of trying to force the system into a particular behavior, how can we get buy-in and actually make it want to do the thing we want it to do by resetting its goals, which is really...</p><p><strong>[51:30] Lisa Feldman Barrett:</strong> What you're talking about is how does the network of elements produce enablements, right, that make it more likely that a particular future state will be reached, this future state versus that future state. One concept in allostasis, Karen, you could speak to this better, is that the system isn't really working around set points. It's working to optimize efficiency, energy efficiency at any level of energy output. It's anticipating needs and attempting to meet those needs, preparing to meet those needs in advance. Individual biological systems might work by homeostasis. We're willing to grant that. But the whole system, the nervous system, for example, in its regulation of all of those other systems, probably doesn't work that way. That's not our view, anyway. Across the levels of the system, it's possible to conceive of what's happening as complexity reduction. At every level of the nervous system, you could talk about the construction of categories, a bunch of things which are dissimilar in their sensorimotor particulars but are equivalent in their functional output. A category isn't a group of things that are the same. It's any group of elements that can be treated as or function as equivalent for some purpose in some context. 
If you think about it that way, what's really happening is the expansion and compression of signals for the purposes of constraining and enabling — constraining certain outcomes, making certain outcomes less likely or unlikely and making other outcomes much more likely. Another thing we are doing is taking concepts, conceptual tools, and attempting to configure them so they can be used across multiple levels of analysis, like the concept of a category as a complexity reducer. I think allostasis could probably work that way too.</p><p><strong>[54:45] Michael Levin:</strong> Eli, did you want to say something?</p><p><strong>[54:49] Jordan Theriault-Brown:</strong> I'm recalling this from background research that I did when working with the lab and writing a paper, but in my experience, once you get into systems biology and start figuring out the fine-grained mechanisms of what operates at one level versus another, the precise meaning of allostasis becomes a lot clearer and easier to figure out. In the sense that you can say there's a fold-change adaptation mechanism operating here that we found in this experiment. Then there's integral control over here in this separate part of the physiology in this experiment. Through evolution and development, you accumulate these separate mechanisms at different levels of a control hierarchy, which usually reduces complexity for the higher levels of the control system.</p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Conversation 1 w/ Lisa Barrett, Ben Lyons, Eli Sennesh, Jordan Theriault-Brown, and Karen Quigley</itunes:title>
          <itunes:author>Michael Levin</itunes:author>
          <itunes:subtitle>Researchers including Lisa Feldman Barrett, Benjamin Lyons, Eli Sennesh, Jordan Theriault-Brown, and Karen Quigley discuss allostasis and top-down control, bioelectric collective intelligence, development, plasticity, and agency across biological scales.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/T1b7nEj7IlQ" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/850bbe58/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>This is a discussion with Lisa Feldman Barrett (<a href="https://scholar.google.com/citations?user=WF5c0_8AAAAJ&amp;hl=en&amp;ref=thoughtforms-life.aipodcast.ing">https://scholar.google.com/citations?user=WF5c0_8AAAAJ&amp;hl=en</a>), Benjamin Lyons (<a href="https://interestingessays.substack.com/?ref=thoughtforms-life.aipodcast.ing">https://interestingessays.substack.com/</a>), Eli Sennesh (<a href="https://scholar.google.com/citations?user=3z4ALYgAAAAJ&amp;ref=thoughtforms-life.aipodcast.ing">https://scholar.google.com/citations?user=3z4ALYgAAAAJ</a>), Jordan Theriault-Brown (<a href="http://www.jordan-theriault.com/?ref=thoughtforms-life.aipodcast.ing">http://www.jordan-theriault.com/</a>), and Karen Quigley (<a href="https://scholar.google.com/citations?user=aZ3qhVUAAAAJ&amp;hl=en&amp;ref=thoughtforms-life.aipodcast.ing">https://scholar.google.com/citations?user=aZ3qhVUAAAAJ&amp;hl=en</a>) about topics related to allostasis and top-down control across cognitive science and developmental biology.</p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Framing interdisciplinary synthesis</p><p>(03:18) Bioelectric collective intelligence</p><p>(16:00) Constraints versus bioelectric memory</p><p>(20:06) Neurodevelopment and allostasis</p><p>(27:10) Plasticity and dirty genomes</p><p>(35:59) Relational structure and constraints</p><p>(40:24) Agency across biological scales</p><p>(45:04) 
Goal-like molecular networks</p><p>(48:49) Allostasis and control hierarchies</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=thoughtforms-life.aipodcast.ing">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Podcast Website: <a href="https://thoughtforms-life.aipodcast.ing/">https://thoughtforms-life.aipodcast.ing</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw?ref=thoughtforms-life.aipodcast.ing">https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw</a></p><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099?ref=thoughtforms-life.aipodcast.ing">https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099</a></p><p>Spotify: <a href="https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5?ref=thoughtforms-life.aipodcast.ing">https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5</a></p><p>Twitter: <a href="https://x.com/drmichaellevin?ref=thoughtforms-life.aipodcast.ing">https://x.com/drmichaellevin</a></p><p>Blog: <a href="https://thoughtforms.life/?ref=thoughtforms-life.aipodcast.ing">https://thoughtforms.life</a></p><p>The Levin Lab: <a href="https://drmichaellevin.org/?ref=thoughtforms-life.aipodcast.ing">https://drmichaellevin.org</a></p><p></p><hr><h2 id="transcript">Transcript</h2><p><em>This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.</em></p><hr><p><strong>[00:00] Benjamin Lyons:</strong> I'll do a little bit of an intro, explain what's motivating this, and then Mike has a few slides he can go through, and then I want to get y'all's takes and open things up for discussion. My background is in economics, and I've worked with Mike to produce some research showing some connections between his ideas and economics. We've got one paper out. There's a couple more on the way. 
The second biggest inspiration for me is the theory of constructed emotion and the ideas of interoception and allostasis. We're bringing a lot of ideas from that into these papers as well. Every time I talk to Mike, I talk about these people and these ideas and how related it all is. If I were to try to give a very brief, high-level summary of what I think some of the similarities are, the most obvious one is just that you had this history. Mike studies development and y'all study emotion. In both fields, there was this history of thinking there is this genetic plan that just tells everything what to do. And it's rote and prescribed: basic emotions. Or if you look at the development of a cell into a human, it seems it's just on some fixed schedule, and then both of y'all have produced theories that basically say that's not how it works. It's a more in the moment, constructed on the fly thing where the parts and pieces figure out what they need to do to achieve their goals. Relatedly, there's a lot of emphasis on physiological states and physiological signaling. Mike has these very important ideas about cognitive glue: the cells are able to communicate aspects of their physiological states to each other. That enables a lot of coordination throughout the system in a way that I think matches very well with ideas about interoception and allostasis. My perception is that y'all have studied very different phenomena on the surface, but have produced very similar theories about how those ideas work. There are a lot of interesting broad conceptual things to explore. There are a lot of interesting specific hypotheses that might be worth experimenting on. I do have a blog where I write about some ideas. I've written about some of the connections, including that collective intelligence and allostasis are very much things that need each other: collective intelligence needs allostasis to function and operate. 
Allostasis needs collective intelligence to carry itself out, or else it wouldn't be able to operate. The cognitive glue mechanism that is an important focus for Mike is something that works through the sharing of interoception signals. That's an interesting generalization. There's a lot of really powerful comparisons here. Both theories have an important economic background. Mike's collective intelligence theory — we have a paper talking about how it's all about economic coordination. Allostasis is about the allocation of resources within the body. Economics is traditionally defined as the study of the allocation of scarce resources. Behind it all, there's a lot of economics lurking. That's what I rely on. Unfortunately, some of the biological and neuroscientific details do go over my head at times. That's why I wanted everyone to meet each other, to share these ideas, because I think it's building toward a much bigger, more powerful synthesis that applies to a lot outside of the traditional phenomena that have been studied. I'll turn it over to Mike. He can go through a few of his slides and then we'll open it up for discussion.</p><p><strong>[03:18] Michael Levin:</strong> Great. Thanks so much. And thanks, Ben, for pulling this all together. I've been looking at your work for a really long time, and I'm very excited to talk to you and to see what integration can take place and what I can learn from some of the things that you all do that applies to us. To give you a bit of background, my background is computer science. I now run a lab of mostly experimental biologists, some computational modelers. Our goal is to understand embodied intelligence very broadly. That means we use a wide variety of bizarre substrates. It's everything from individual cells and tissues and cyborgs and hybrots and different kinds of synthetic agents and biobots. We make all these different things. 
Our goal is to try to develop frameworks for understanding what it means to be able to recognize and communicate with minds that are not like ours — strange embodiments, different scales of space and time, different spaces that these things live in, and to create tools by which we can begin to understand that they exist and how then do we communicate with them. One of the workhorse models in our group is this notion of groups of cells navigating anatomical space as a collective intelligence. In other words, embryonic development, regeneration, metamorphosis, cancer suppression, aging, resistance — all of these things have in common that there is a group of cells that has to get together to pursue goals that no individual cell knows anything about. I'll show you a couple of quick examples. We study the mechanisms, and these are very specific biophysical mechanisms by which cells form networks that operate in spaces and follow large-scale set points, AKA goals, that their parts don't know anything about. That scaling of intelligence and its projection into new spaces is what we're interested in. In particular, the technology that we used to interface to this process is bioelectricity, because, very much like in the brain, the evolutionary history of what happens in the nervous system is an elaboration and a huge speedup of things that were happening long before we had nerve and muscle. Navigating back from the time of bacterial biofilms and then true multicellularity, navigating the space of anatomical possibilities, evolution already picked up on the fact that electricity is really good for this. All of the ion channels, the neurotransmitters, the gap junctions, all of the stuff that operates in the brain actually has a long history of doing exactly the same thing in development, just in a different space. 
What we typically do, and this is one reason why I'm very excited to talk to all of you, is that we try to steal as many tools as we can from neuroscientists and ask where else do they apply? We've been able to apply all kinds of things in systems that don't have brains, which is shocking to a lot of people. That's the overall deal. I'm going to share a couple of slides to show you. Is everybody seeing a title slide? Ben asked me to show a couple of examples of context-sensitive sensing and actuation. We study a number of spaces that living systems traverse: high-dimensional space of possible gene expressions, physiological state spaces. There are navigational skills that systems develop in this space, and particularly what we're interested in is anatomical morphospace. What we've been able to find is that systems navigate that space of anatomical possibilities in a way that makes it very clear that the simple model Ben mentioned at the beginning — the idea that the genome codes for specific outcomes — doesn't fit the data at all, because what actually happens here is a high-competency navigational process that solves all kinds of problems. It encounters problems it's never seen before. It has plasticity, enormous plasticity. It has all kinds of ways to do things that normally it would never see. It is, I think, an example of a real-time intelligence that uses the genome as a set of prompts and as a set of hardware specifications, but not as a set of descriptors of what's going to happen. Very briefly, the most obvious thing is something like this. You have an animal like this, which is an axolotl. It will grow this limb. And then you find out that it's actually not simply emergence, as a lot of people make it out to be with these open-loop models that are just emergent. If you cut it anywhere along this line, the cells will very quickly jump into action. 
They will rebuild the same limb, and then they stop.</p><p><strong>[07:31] Michael Levin:</strong> And that's the most amazing thing about this. They know when to stop. When do they stop? They stop when they've built the correct structure. They've been deviated from this location in morphospace. They get back there, then they stop. One way you can model this is as an error minimization scheme. So my delta from here to here is large. I'm going to keep taking actions until that delta is within some acceptable range. Also there's a stress piece involved that we can talk about. But it's more than this. So it's not simply repairing damage or anything like that. This is one of my favorite experiments. What you can do is, and this is not mine, this was done back in the 50s, you can take a tail and graft it onto the side of the animal. And what happens over time is that this thing turns into a limb. Now, pay attention to the cells here at the tip of the tail. These are tail tip cells sitting at the end of a tail. There's nothing locally wrong. There is no damage. There is no injury. Locally there's no reason for them to do anything at all, except that they start turning into fingers. What's happening here is that there's a large-scale control over the molecular events that are here, because locally there's no error. But globally, the system as a whole knows that what you have in the middle here is not a tail, you should have a limb. And that error, which only exists in a large-scale anatomical space, has to then be propagated down to control molecular events that locally have no reason to happen, which is similar in the sense that in voluntary motion, you have these very abstract cognitive goals that then have to make the ions move across your muscle membranes for you to do that. There's a transduction from all kinds of abstract spaces down to making the chemistry do what's needed to make it happen. That's one example. 
Another example of context-sensitive behavior is a tadpole. Here are the eyes, the nostrils, the mouth, the brain, the gut. In order to become a frog, these guys have to rearrange their face. All kinds of things happen during their development. It used to be thought that this was a hardwired process. You just move every organ in the right direction, the right amount, and you get your frog. We wanted to test that. We made these Picasso tadpoles. Basically we scrambled all the organs. Everything was in the wrong place. Literally the eye is on the back, the mouth is off to the side, the whole thing is an incredible mess. They still make normal frogs because it's not a hardwired process. What happens is all of these structures will move forward in novel paths, abnormal paths, until they get to a normal frog face and then they stop. Sometimes they go a little bit too far and they have to come back and then they stop. The obvious question is, how the heck does it know what a correct pattern is? We actually have an answer to this. We've figured it out to some extent. I'll show you that momentarily. I want to show you another couple of crazy examples first. This is a thing called trophic memory in deer antlers.</p><p><strong>[11:44] Michael Levin:</strong> Every year these things shed this giant bony structure. What George Bubenik realized after about 40 years of experiments is that if you make a wound at one particular place in the structure, this whole thing falls off. Months later, next year, the new rack will grow. When it grows, it will actually grow an ectopic tine at this location. And that happens for about five or six years, and then eventually it goes away. It means that, first of all, this whole thing is going to be gone. The information has to be somewhere else in the body. You have to store it for months. You have to remember where it was in this three-dimensional structure. 
Months later, you have to say, when you're doing the bone growth here, take an extra left turn and grow this thing right here. That's the kind of plasticity. None of this is genetic, because the genome hasn't been touched. Good luck drawing a molecular biology arrow diagram of what's going on here. Those kinds of models are not well suited for understanding phenomena like this. Working with deer is incredibly hard, so we came up with a tractable lab model. Those are planaria. Planaria are cool because, among other things, they are incredibly regenerative. You can cut them into many pieces. Here's an amazing example of context sensitivity. If you cut them in half, this side will grow a tail, this side will grow a head, but these cells were direct neighbors. They were sitting right next to each other. They have the same positional information. You can cut them anywhere, and yet they have radically different anatomical fates because it isn't local. The wound actually talks to the rest of the animal to figure out what we have. This guy is incredibly regenerative, cancer-resistant, immortal. In fact, there's no aging in them, despite the fact that it has incredibly dirty genetics. It's a very interesting story. What we've discovered is that the question of how do you know how many heads you're supposed to have is actually stored as a bioelectrical pattern memory. We developed tools to visualize voltage gradients in living tissues of all kinds of species. Using various ion channel drugs and optogenetics, we can put in a different pattern that says you should have two heads. You can do that in a one-headed body. The anatomy is one-headed. The molecular biology is one-headed, meaning anterior markers expressed in the head. What it does have that's weird is a false memory of what it takes to be a good planarian. If you cut this guy, the pieces will make a two-headed worm. If you keep cutting them, they will continue to make two-headed worms. It's a memory. 
I have lots of other examples I can show you. I'm going to stop here. The bottom line is that groups of cells use electrical signaling driven by ion channels and propagated through gap junctions; serotonin is involved; and all of these same players store large-scale pattern memories. The cells have some amazing ingenuity about getting there. Unless they can't, in which case they form other kinds of beings that have never existed before. We've made those too: Xenobots and Anthrobots. You can find anything from simple error minimization to delayed gratification to memory rewriting to what I see as creative problem solving when you push them into scenarios where they simply can't do the thing they were trying to do. They do something else, and they always do something interesting. We deploy whatever tools and lessons we can take from conventional cognition in these models and see what happens.</p><p><strong>[16:00] Lisa Feldman Barrett:</strong> Is this a contextual constraint argument, that the bioelectrical signaling between the cells produces a constraint that directs the biology down a particular path?</p><p><strong>[16:29] Michael Levin:</strong> I think you can say that, but I would go further, because of something I didn't show you: what we call the electric face. The electric face is a pre-pattern. Long before the genes turn on to regionalize the ectoderm into a face, you literally see what looks like a face. The eyes are going to be here, the mouth is going to be here, the placodes are out to the side. This isn't just a constraint. It is literally an instructive pre-pattern, or a memory of what you should do in the future. And if we rewrite that pattern, we can make all sorts of crazy stuff, because the pattern is, as far as the cells are concerned, the ground truth of what they're building. If we alter that pattern through optogenetics or ion channel drugs, they will build something else. 
So I would say it's more than just a constraint. At the level of physics, sure, it's a constraint. But at the informational level, I think it's an instructive memory of what you should be doing. We have some control over that now. We can incept these false memories into these things, and they will simply act on them. Much like with the voluntary motion example, all of the molecular details are handled by the material. In other words, when we tell an animal to make an extra eye, I don't know how to build an eye. An incredible number of genes have to be activated, and there's a lot of stem cell biology involved; we don't know any of that. We give a large-scale, high-level prompt that says "build an eye here." To the extent that we are convincing, everything else gets handled by the material, which will trigger all the downstream machinery to make it happen.</p><p><strong>[18:10] Lisa Feldman Barrett:</strong> The bioelectrical pattern is there before you have genes, before there's gene transcription.</p><p><strong>[18:20] Michael Levin:</strong> Generally speaking, yes. The bioelectrical pattern precedes the implementation details of actually turning on the various genes. However, big picture, if you take a step back, the whole thing is a feedback loop, because in order to have bioelectrical signals, you need ion channels expressed beforehand. But much like in a lot of hardware-software systems, what the ion channels that are present give you is an excitable medium. You need a minimum number of channels to make a competent medium, including some voltage-gated channels. That is typically maternally provided in the egg, but by itself it doesn't have any of the specificity of the morphogenesis that happens later. What happens is that the excitable medium, left to its own devices, undergoes spontaneous symmetry breaking and amplification that gives you Turing patterns. At the electrical level, that's what it does by default. 
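The spontaneous symmetry breaking described here can be sketched as a minimal one-dimensional "local activation, long-range inhibition" simulation. This is a hypothetical toy, not the lab's actual simulator; the kernel widths, gain, and saturation bounds are arbitrary illustrative choices. A tiny random perturbation of a uniform state gets amplified at a preferred wavelength and saturates into a stable periodic pattern:

```python
import numpy as np

def turing_stripes(n=120, steps=400, seed=0):
    """Toy Turing-style symmetry breaking in 1-D: short-range activation plus
    long-range inhibition amplifies noise at a preferred wavelength; saturation
    (clipping) then freezes the result into a stable periodic pattern."""
    rng = np.random.default_rng(seed)
    u = 0.01 * rng.standard_normal(n)          # near-uniform medium, tiny noise
    x = np.arange(n) - n // 2
    # Difference-of-Gaussians kernel: activate nearby cells, inhibit distant ones.
    # The negative total weight makes the uniform (k=0) mode decay, while a band
    # of intermediate wavelengths grows -- the essence of a Turing instability.
    kernel = np.exp(-x**2 / (2 * 2.0**2)) - 0.55 * np.exp(-x**2 / (2 * 6.0**2))
    kernel = np.roll(kernel, -(n // 2))        # center the kernel at index 0
    k_hat = np.fft.fft(kernel)                 # circular convolution via FFT
    for _ in range(steps):
        interaction = np.real(np.fft.ifft(np.fft.fft(u) * k_hat))
        u = np.clip(u + 0.1 * interaction, -1.0, 1.0)  # saturation bounds growth
    return u
```

Nothing in the initial state says where the stripes will fall; the pattern's wavelength comes from the interaction kernel, and the particular phase comes from amplified noise, which is what "spontaneous symmetry breaking and amplification" means here.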
But you can step in at any moment and, without touching the genetics or changing the ion channels, simply control what the voltages are at any given location, and that's enough, if you know what you're doing. We now have simulators that help us design interventions, because the goal of all of this is regenerative medicine. At this point, we can fix birth defects in these model systems and normalize tumors. The goal is to say: here's a bunch of cells, and they have an abnormal pattern memory of what they're going to build. We're going to fix that. We're going to give them some better memories of what to do, and that doesn't require putting in new channels or deleting channel genes or any of that. We don't usually touch the genetics.</p><p><strong>[20:06] Lisa Feldman Barrett:</strong> This is so interesting. Can I ask a couple of questions? One question that I have is whether network homeostasis works like this, for example, in a brain. You see examples from Eve Marder's work all the way up to larger-scale brain networks where neurons are switching in and out. The function of the network is maintained as the neurons switch in and out. There's not a lot known about exactly how that works. People are observing it, and the function is really a property of the relations between the cells. It's not a function of any given cell or any given spike train.</p><p><strong>[21:08] Michael Levin:</strong> You're absolutely right. I think that's probably where, evolutionarily, the brain learned that amazing trick.</p><p><strong>[21:17] Lisa Feldman Barrett:</strong> That's where you're going, right.</p><p><strong>[21:20] Michael Levin:</strong> Because during early development, you have a pattern, and actual cells are moving in and out, right?</p><p><strong>[21:28] Lisa Feldman Barrett:</strong> Finish, and then I'll ask my next question.</p><p><strong>[21:29] Michael Levin:</strong> The cells are coming and going, and I'll take it one step further. 
I would say that some of these, maybe all of these, bioelectric patterns serve as virtual governors: if you want to know what the causation is, it's the pattern that's exerting the force. It's not the cells, it's not the molecular stuff underneath, it's the pattern, and you can swap out the hardware. In fact, the hardware does swap itself out as things get bent out of shape and move this way and that. It's the pattern that drives the show.</p><p><strong>[22:04] Lisa Feldman Barrett:</strong> So my next question. This is the kind of thing that we're talking about, but really scaled up, as Benjamin mentioned: scaled up across multiple temporal and spatial scales. I'm not an embryologist, and I'm far from developmental neurobiology. My recollection is that some cells—their location and their trajectory in an embryo—are genetically prescribed, but most of them aren't. They're really under local or contextual control, of the origin or of the destination. Meaning that where they go and how they end up functioning is contextually determined either by where they originated or by where they end up. Some things are genetically prescribed. For example, the synapses between neurons in the thalamus and stellate cells in the cortex. Those synapses—I think there's some genetic specification there. They have to recognize each other chemically in order to make the synapse. But most of the time it doesn't work like that. It's very, very rare. Another example is in the lateral geniculate nucleus. Sometimes people will say the brain is running a model of the world. But the brain is not running a model of the world. It's running a model of the sensory surfaces of its body as that body moves around in the world.</p><p><strong>[24:36] Lisa Feldman Barrett:</strong> And the signals that the sensory surfaces are receiving come some of them from the world, some of them from the body, internal to the body. 
But the brain doesn't map the visual world; it maps the retina and infers the rest. For example, in the lateral geniculate and in the superior colliculus, there is a map of the retina. That map is genetically determined, or at least genetically influenced. You don't require experience for it to develop. But the retinotopic maps in V1, my recollection is that they are largely or almost exclusively experience-dependent, which means that there's something about the signaling that is required to actually establish the map. What's interesting about that is that for the most part the neurons that make up V1—their migration from where they're born—is not genetically prescribed. It's contextually prescribed. I can't remember if it's the origin or the destination, but it works differently in different parts of the embryo and at different times. One question for us would be what's happening with the limbic circuitry, the circuitry at the center of the brain. Its main job really is the regulation of the body; its main job is allostasis. It's not so much that this circuitry has special features as that these neurons in particular—not just the neurons, probably glial cells, the whole tissue—play an important role not just in the regulation of the body, but in every cognitive phenomenon, every mental phenomenon, and in the coordination of the visceral motor system with the skeletal motor system. And so it would be very interesting to understand whether the principles that you've established for anatomical organization would also show up in the functional organization of the allostatic control of the body, or in the importance of energetics-related signaling to cognition writ large.</p><p><strong>[27:10] Michael Levin:</strong> It makes a ton of sense. And I think, for us, it goes even broader, taking it to all kinds of weird scenarios. 
For example, we've done this thing in tadpoles where we can make animals that have no primary eyes in the head, but they have an eye on their tail. That eye on the tail makes an optic nerve, and that optic nerve does not go to the brain; it goes sometimes to the spinal cord, sometimes to the gut, sometimes nowhere at all. Those animals can see. We test them in visual learning assays, and they can see out of the box, with no new rounds of adaptation or selection and no radically different sensory-motor architecture. They can navigate by visual cues.</p><p><strong>[27:52] Lisa Feldman Barrett:</strong> Is it like a blindsight kind of seeing?</p><p><strong>[27:54] Michael Levin:</strong> I can't tell you what the experience of the tadpole is. I don't know if they know that they can see.</p><p><strong>[28:02] Lisa Feldman Barrett:</strong> Some people would be very happy to speculate about the experience of a tadpole, but I'm with you on that one.</p><p><strong>[28:09] Michael Levin:</strong> I can speculate, but I don't have any actual data. So what's the point?</p><p><strong>[28:14] Lisa Feldman Barrett:</strong> Let me ask the question differently. What I meant by blindsight is: is it gross spatial features that they can detect and behaviorally use? I don't know. Blindsight people usually talk about it in terms of conscious experience, but there is a difference in the type of visual signals, the type of visual features, that the animal is using to navigate movement.</p><p><strong>[28:58] Michael Levin:</strong> I understand the question. I don't know how much detail they can see. Our behavioral outputs are fairly coarse-grained. But consider the Anthrobots that we make, which have a perfectly normal human genome but a completely different kind of behavior, with about 9,000 genes differentially expressed; half the genome is now expressed differently. We haven't done anything to them. It's just a new lifestyle that they have, and they can do all kinds of things that normal tissues don't do. 
I think the plasticity is incredible, and I think they try hard. All of these things try to get to their default configuration, but if they can't, they will do something else, and what they do will be coherent and adaptive. There are exceptions, like certain nematodes: in C. elegans, every lineage relationship is very precise, and every worm has the same number of cells, which you can count. Other than that, I'm very skeptical about stuff that is "prescribed." I think the default might look prescribed, but if you start to push it, you'll find that they can do all sorts of other things.</p><p><strong>[30:21] Lisa Feldman Barrett:</strong> We're very sympathetic to that. For us, this is like music to our ears, but in the circles where we spend a lot of our time, people are still spouting the modern synthesis doctrine, the ideology. I'm sure you're familiar with that. That's why everybody's smiling.</p><p><strong>[30:51] Michael Levin:</strong> I'm glad, because it's exactly the same in development and molecular genetics. When I teach students, my talk is occasionally called "Why Is This Not in Your Textbook?" And these are all things that are specifically not in the textbook, because if you look only at the things in the textbook, you get this picture of neat genetic determinism. But when in your biology education did anybody tell you that the animal with the most regenerative capacity, meaning the most stable anatomical features, cancer resistance, no aging, is the one with the dirtiest genome? Shouldn't it be the opposite? Shouldn't it be the clean genomes that are responsible for all this stuff? It's actually exactly the opposite.</p><p><strong>[31:37] Jordan Theriault-Brown:</strong> What do you mean by clean versus dirty genome?</p><p><strong>[31:41] Michael Levin:</strong> Most of us, when we have offspring, the offspring do not inherit our somatic mutations. Planaria, at least the ones we study, are not like that. They tear themselves in half and regenerate. 
That means every mutation that doesn't kill the stem cell it hits gets propagated. They keep everything. They're mixoploid: every cell has a different number of chromosomes. They look like a tumor.</p><p><strong>[32:12] Jordan Theriault-Brown:</strong> There are no specialized gametes that faithfully copy the DNA.</p><p><strong>[32:17] Michael Levin:</strong> Right, they're asexual; we study the asexual forms. And so these guys have been around for 400 million years, accumulating mutations. There is no such thing as a mutant line of planaria. There is for everything else: you can call the stock center and get mice with weird, curly tails. The only mutant line of planaria is our two-headed form, and that's not genetic. There's nothing genetically different about them. And they ignore new genes, too; there are no transgenics in planaria. You try to put in new genes, and they don't care about those any more than about the mutations they already have. We could go on and on about why this is, but specifically, I think in planaria it comes down to the material, the fact that they're made of a really unreliable substrate. We've modeled this computationally: all the evolutionary effort has gone into an algorithm that can do something useful even when the hardware is junky. The hardware is going to be different every time; you can't count on it. But you've got an algorithm that's rock solid and tolerant of all of that. Planaria have this all the way, amphibians to a lesser extent, and C. elegans maybe not at all.</p><p><strong>[33:30] Karen Quigley:</strong> Maybe we're thinking about junk DNA in the wrong way. What we would say is that you need a lot of variance, a lot of variation, because that's the substrate on which evolution operates. 
And so this seems like the perfect option if you were to think of it that way, in the sense that there's a lot of opportunity.</p><p><strong>[33:52] Michael Levin:</strong> Yeah.</p><p><strong>[33:53] Karen Quigley:</strong> Because there's a lot of opportunity to take advantage of something that exists already and utilize it, especially when something changes that you need to adapt to.</p><p><strong>[34:02] Michael Levin:</strong> Yeah.</p><p><strong>[34:04] Lisa Feldman Barrett:</strong> The reason for the word junk is this old distinction between genes and junk DNA.</p><p><strong>[34:09] Michael Levin:</strong> That's not what I meant.</p><p><strong>[34:10] Lisa Feldman Barrett:</strong> I know that's not what you meant, but I was saying that to Karen.</p><p><strong>[34:18] Karen Quigley:</strong> I wasn't assuming that particular use. It's an interesting way of flipping it on its head.</p><p><strong>[34:25] Michael Levin:</strong> Yeah.</p><p><strong>Karen Quigley:</strong> That it really isn't about being junky. It's about providing a really broad substrate for adaptivity.</p><p><strong>[34:34] Michael Levin:</strong> I think that's a really interesting idea. What we see in our simulations is the following. When you're dealing with a material that is able to autonomously fix certain defects, what ends up happening is that selection can't see the genome very well. Because when you get a tadpole that looks perfect, you don't know: was that because my genetics were great, or because my mouth started off over here but by now has moved back to where it needs to be? So selection has a hard time seeing the structural genome. What it does then is spend more time working on the competency, the plasticity. But the more it does that, the harder the genome is to see. So there's a positive feedback loop. I think what's happened in planaria is that the loop went all the way to the end, where we can't see the genetics at all. 
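The masking effect can be sketched in a few lines. This is a hypothetical toy, not the lab's published simulations; the genome length, defect rate, and the `competency` repair budget are invented for illustration. The more defects the competent material can repair during development, the less phenotypic variance selection has left to act on:

```python
import random

def phenotype_error(genome, competency):
    """Phenotypic error after development: the competent material autonomously
    repairs up to `competency` defective sites, hiding them from selection."""
    defects = sum(genome)                  # genome: list of 0/1 sites, 1 = defect
    return max(0, defects - competency)

def selection_visibility(competency, n_genomes=500, n_sites=20,
                         defect_rate=0.25, seed=0):
    """Variance of phenotypic error across a random population: high variance
    means selection can tell good genomes from bad; near zero means it cannot."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_genomes):
        genome = [1 if rng.random() < defect_rate else 0 for _ in range(n_sites)]
        errors.append(phenotype_error(genome, competency))
    mean = sum(errors) / n_genomes
    return sum((e - mean) ** 2 for e in errors) / n_genomes
```

With zero competency, phenotypic variance tracks genetic variance; as the repair budget grows, genomes of very different quality produce the same finished phenotype, and the structural genome becomes invisible to selection.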
In the simulations, you can literally see where all the evolutionary optimization is happening. Once you have a competent material like that, evolution really starts to crank on this autonomous repair stuff. Because the more of it you have, the harder it is to select the good genomes from the bad, because you don't see them. All you see is the final product, and the final product is way beyond anything the genetics was telling you about.</p><p><strong>[35:59] Lisa Feldman Barrett:</strong> The way that we think about it is that if you've got N elements, which could be neurons or whatever, you have tremendous opportunity for variability, but you also have tremendous complexity. Life requires the reduction of complexity. It requires some predictability and structure. But where is that structure? Where is that predictability? At one extreme, there is complete complexity and variation. At the other extreme, everything is hard-coded into the elements themselves, which is what some people want to claim. Or maybe they'll allow for some variation around the edges; that context can tweak the edges. What we're saying, and it sounds like what you're saying too, is that there is a structure, but it's a structure not at the level of the elements. The elements are somehow constraining each other, with signals that are reinstatable and memorable, and the structure can also change. It's not hard-coded at the level of the individual elements, which is really what a strong genetic variation-plus-selection argument is: the properties are hard-coded in the genes themselves, and then the genes just express those properties. As opposed to the structure being flexible: not infinitely flexible, but relational, at the level of the interaction of the elements. Those properties exist at the level of the interaction of the elements, not in the elements themselves. 
So they're fundamentally relational properties.</p><p><strong>[38:04] Michael Levin:</strong> I think that's exactly right. I'd augment this constraint business, because a lot of people think about constraints. What I emphasize also is that there are a lot of free lunches, which I'll define momentarily, that actually become enablements rather than constraints. What I mean is that there are patterns that come from mathematics, that come from computation, that are not laws of physics. They're not anything you discover in physics. They come from these other domains, and biology, I think, uses them very effectively. For example, as soon as you evolve a voltage-gated ion channel, what you really have is a voltage-gated ion conductance, aka a transistor. If you have two of those, you can make a logic gate. Once you make a logic gate, you inherit all kinds of crazy properties that you didn't have to evolve. They're not in the materials: the fact that NAND is special and that you can build everything else out of it. All of that is given to you for free by the laws of computation. You just have to make the right physical interface that hooks into it. So absolutely there are constraints, but there are also these wild enablements, and they're all over the place, there for you as a free gift from mathematics.</p><p><strong>[39:30] Lisa Feldman Barrett:</strong> Is there a word, an abstraction, that means constraint or enablement? Meaning that not all possibilities are equally likely: some become unlikely or impossible, and some become more likely. Is there a word?</p><p><strong>[39:51] Michael Levin:</strong> There is. I still feel that there's a commitment.</p><p><strong>[39:59] Lisa Feldman Barrett:</strong> When I said constraint, I wasn't ruling out enablement. What you're saying makes a tremendous amount of sense. 
Maybe it's a different word I should be using.</p><p><strong>[40:09] Eli Sennesh:</strong> We've been really interested as a lab in constraint causation. One of the next things we really want to tackle is for the lab to all read Moreno and Mossio's Biological Autonomy together, to get some of that constraint causality out of it.</p><p><strong>[40:24] Lisa Feldman Barrett:</strong> And then there were other people who were talking about this. I had a really interesting conversation with Philip Ball a couple of months ago, because I read his "How Life Works" and was totally blown away, actually, because I didn't know a lot of it. I knew genomes were dirty, but I had no idea how random things were. I will say that I've recommended this book to a number of people in psychology, and their minds are completely blown, because this is not anything anybody knows in our field, it seems. One thing that he was talking about is—if I understand him correctly—that what you would call bioelectric pattern memories, he would count as an example of agency or meaning making that occurs at the level of multicellular systems. It occurs within a cell, but it also occurs anytime you have parts that are interacting and producing movement towards a state that could only exist because of the parts interacting. Goal here just means direction towards a future state. I wanted to understand whether he thought that was a principle that could generalize across temporal and spatial scales, because it certainly seems like that's what we're talking about, but at a completely different scale. We're also talking about relational meaning: a lot of the properties that we ascribe to objects or to people exist in the relationship between things, not in the things themselves. I'm thinking about Scott Turner's books, where he makes the argument that goals exist only at the level of the ensemble. 
Exactly what you said about individual cells: there's no evidence that a cell by itself would move towards that particular state. It's only the cell in combination with the other cells. I was thinking of Scott Turner's first book, The Tinkerer's Accomplice, where he talks about termites. Individual termites don't have a goal to cool the nest. When they're moving things around, making tunnels, and tending their fungal gardens, they don't have a goal per se to homeostatically maintain a temperature in a particular range, but that's what happens at the level of the nest. It has to happen at the level of the nest; otherwise the nest will fail and all the animals will die. So it seems like conceptually there's great sympathy between these lines of work. I'm sitting here thinking I have 100 uses for your examples, but I'm wondering how we can help you, exactly. How can we return that favor?</p><p><strong>[45:04] Michael Levin:</strong> First, to comment on the last thing. I don't want to speak for Phil, but I think I'd go further than he does along the lines of what you just said. We study, for example, molecular networks: molecules that crank each other up or down. Even small networks, let's say five or six subunits (it doesn't take very much complexity at all), can have habituation, sensitization, and associative conditioning, where you can pair chemical stimuli. We're using this now for drug conditioning in medicine. You can pair an effective stimulus with a placebo. You can do a placebo within a six-unit molecular network: no cells, no neurons. This kind of stuff goes all the way down to very minimal systems. We have two sets of tools that we use to study them. One is behavioral. If you want to know whether something is a goal or not, put a barrier between the system and its goal in whatever space it's working in and see what happens. 
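The barrier assay can be caricatured in code. This is a hypothetical toy, not any of the lab's actual assays; the grid, the wall, and both agents are invented for illustration. An agent that can only ever decrease its distance to the goal jams against the wall, while an agent willing to move away from the goal temporarily (here, a breadth-first search) goes around:

```python
from collections import deque

def barrier_test(grid, start, goal):
    """Toy barrier assay: does the agent reach the goal when the straight-line
    path is blocked? Returns (greedy_reached, planner_reached)."""
    rows, cols = len(grid), len(grid[0])

    def neighbors(p):
        r, c = p
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                yield (nr, nc)

    def dist(p):  # Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Greedy agent: only ever moves strictly closer to the goal.
    pos = start
    while pos != goal:
        closer = [n for n in neighbors(pos) if dist(n) < dist(pos)]
        if not closer:
            break  # stuck: every legal move would increase distance to the goal
        pos = min(closer, key=dist)
    greedy_reached = (pos == goal)

    # Breadth-first search: willing to detour away from the goal.
    seen, frontier = {start}, deque([start])
    planner_reached = False
    while frontier:
        p = frontier.popleft()
        if p == goal:
            planner_reached = True
            break
        for n in neighbors(p):
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return greedy_reached, planner_reached
```

The detour requires accepting a temporary increase in distance to the goal, which is the operational meaning of delayed gratification in this kind of test.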
Sometimes you will see that it goes around: it has delayed gratification. If not, you haven't shown a goal. The other set of tools comes from causal information theory, the kind of thing Tononi does with coma patients, and we use it to look for causality at different levels. You can look at the bioelectrics, at the calcium signaling, at the molecular signaling, do the calculation, and ask: is there a whole here that's more than the sum of its parts? Sometimes the answer is yes, and sometimes it's no. I definitely think this goes very far down.</p><p><strong>[46:59] Jordan Theriault-Brown:</strong> Excuse me, he may be at my door.</p><p><strong>[47:04] Lisa Feldman Barrett:</strong> I wasn't saying that he was right. I was saying he's the gateway to a whole literature that I was not aware of and that is very useful for us.</p><p><strong>[47:19] Michael Levin:</strong> So what I would love is, first of all, exactly as you said, examples from cognitive science that I can use to say: this is not new and crazy; this has already been seen, and in this field, here's how they think about it. Concepts that we can adapt from what you guys see.</p><p><strong>[47:51] Eli Sennesh:</strong> I'm tagging ahead. I don't want you to finish.</p><p><strong>[47:57] Michael Levin:</strong> I want to understand as much as possible about what you guys think is happening in the brain. How early did biology actually take that on? Was it in cells and embryos? Was it in molecular networks inside a single cell? Was it before that? We like to look all across the spectrum.</p><p><strong>[48:20] Eli Sennesh:</strong> I've had a burning question on this. A lot of what you're describing—the thing that jumped out at me in your initial slides—was that you're talking about having a large-scale goal and a pattern, and basically anatomy rearranging itself to that macro-scale pattern in regenerative tissue?</p><p><strong>[48:44] Michael Levin:</strong> That's one set of examples. 
We could go somewhere else.</p><p><strong>[48:49] Eli Sennesh:</strong> The immediate analogy for me, and I think Lisa's been poking at this with some of her questions as well, is that it seems like it's also a description of how behavior is implemented by the brain. You basically have a macroscale pattern or a behavioral configuration that is a whole unit of how muscle tissue and sensory input are arranged: a macroscale organization that's more than the sum of the parts of any one bit of muscle tissue or any one sensory stream. What you're getting is a macroscale organization to that pattern that's then implemented and refined down through the details across the whole brain. When you were talking about goals at the molecular level, or goals in these biological tissue patterns, it seems there's a straight analogy to how goals or behavior get organized at a macroscale level across the brain, which configures itself accordingly. It solves a big problem that we have, which is how to think about goals or intentionality or goal-directedness in any psychological sense. There's a big overlap with something that Eli and I have both been into, and that Karen and Lisa have also been interested in, which is psychological-level models built around negative feedback control, where the system configures itself to match a reference configuration in terms of sensory input. There's a sidestream of psychology that's thought about this, but it's been left behind and not developed well. I'm happy to share some offshoots from cybernetics where people have tried to think about things in those terms. It is not the mainstream in psychology, but it seems to fit well with what you're talking about, and it seems like you have some mechanisms to make it tractable rather than a purely theoretical thing.</p><p><strong>[51:00] Michael Levin:</strong> That's great. I'd love to see those. 
That's also something I'm very interested in: to hear you guys talk about meta levels. You have a set point and a goal, but what are the components that can reset that to something else? Because that's what we're always on the lookout for: instead of trying to force the system into a particular behavior, how can we get buy-in and actually make it want to do the thing we want it to do, by resetting its goals, which is really...</p><p><strong>[51:30] Lisa Feldman Barrett:</strong> What you're talking about is how the network of elements produces enablements, right, that make it more likely that a particular future state will be reached, this future state versus that future state. One concept in allostasis (Karen, you could speak to this better) is that the system isn't really working around set points. It's working to optimize efficiency, energy efficiency, at any level of energy output. It's anticipating needs and attempting to meet those needs, preparing to meet them in advance. Individual biological systems might work by homeostasis; we're willing to grant that. But the whole system, the nervous system, for example, in its regulation of all of those other systems, probably doesn't work that way. That's not our view, anyway. Across the levels of the system, it's possible to conceive of what's happening as complexity reduction. At every level of the nervous system, you could talk about the construction of categories: a bunch of things which are dissimilar in their sensorimotor particulars but are equivalent in their functional output. A category isn't a group of things that are the same. It's any group of elements that can be treated as, or function as, equivalent for some purpose in some context. 
If you think about it that way, what's really happening is the expansion and compression of signals for the purposes of constraining and enabling: constraining certain outcomes, making certain outcomes less likely or unlikely, and making other outcomes much more likely. Another thing we are doing is taking conceptual tools and attempting to configure them so they can be used across multiple levels of analysis, like the concept of a category as a complexity reducer. I think allostasis could probably work that way too.</p><p><strong>[54:45] Michael Levin:</strong> Eli, did you want to say something?</p><p><strong>[54:49] Jordan Theriault-Brown:</strong> I'm recalling this from background research that I did when working with the lab and writing a paper, but in my experience, once you get into systems biology and start figuring out the fine-grained mechanisms of what operates at one level versus another, the precise meaning of allostasis becomes a lot clearer and easier to figure out. In the sense that you can say: there's a fold-change adaptation mechanism operating here that we found in this experiment; then there's integral control over there, in this separate part of the physiology, in that experiment. Through evolution and development, you accumulate these separate mechanisms at different levels of a control hierarchy, which usually reduces complexity for the higher levels of the control system.</p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/a-sleek-text-dominant-poster-for-the-thombdiacyprmahdscf85il5assmyexordephpmklujwug-20250407T203748021Z.png" />
          <itunes:explicit>no</itunes:explicit>
        </item>
  </channel>

</rss>