Watch Episode Here
Listen to Episode Here
Show Notes
This is a meeting discussion, ~40 minutes, between Tom Froese (Cognitive Scientist at OIST, https://groups.oist.jp/ecsu/tom-froese) and Michael Levin, going over Tom's recent paper on Irruption theory and discussing embodied minds, agency, and models of mind-body interaction. The paper is here: https://www.mdpi.com/1099-4300/26/4/288
CHAPTERS:
(00:00) Irruption theory foundations
(16:54) Computers, emergence, polycomputing
(23:53) Interpretation, hacking, multi-agency
(30:53) Experiments and clinical hacking
(39:22) Seizures, entropy, free will
PRODUCED BY:
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Michael Levin: I'll go through it and I'd love to talk about irruption theory and the thing you were saying the other day about the information that's not lost.
[00:08] Tom Froese: Right, yeah.
[00:10] Michael Levin: We can, if we have time, talk about this issue of how far down it goes.
[00:16] Tom Froese: Ideally we want to have a measure, to operationalize a criterion for that. So this is exactly what this is about. Let me just see what's the best way of sharing this. I think—is it okay if I put out my second screen? I will be a little absent, looking to the side. Maybe if we're going to record, it'd be better if I have it in the background.
[00:40] Michael Levin: It's mostly going to be to share the screen. I don't even know that we're going to see you. We're going to see the slides, so it's fine. Share that.
[00:48] Tom Froese: I can see you. It's easier that way.
[00:52] Michael Levin: Okay.
[00:55] Tom Froese: Safari, enter in share. How's that looking?
[01:01] Michael Levin: Perfect. Yeah, I can see it. Great.
[01:03] Tom Froese: Okay, let's put this over here.
[01:06] Michael Levin: Yeah, great.
[01:07] Tom Froese: So this is the new paper that just recently came out and it's a more mature version of irruption theory that now considers not just the mental causation or the agency part, but also the other direction, how does something non-mental become part of the mental, part of subjective experience, part of mental content. It has these classic problems, which come in many varieties. But I think in this paper, for the first time, I see a way in which we can do science with them, which is getting me very excited. I'm going to take us through the main figures, because they encapsulate the different stages of the argument. The human version of the mind-body problem: we're asking the big questions. How is the mind related to matter and vice versa? Analytic philosophers have been banging their heads against this problem for a long time. If you read Kim's final book account of this, I quite enjoyed it for its clarity in showing that even if we solve the problem of mental causation by going a reductionist route and saying the mental is the physical and therefore the physical can cause the physical, you lose a lot of what is nice about thinking about the mind in terms of its qualities and teleology and your intentions and free will. Everything goes out the window, but you might save causality. So you still might get mental causation saved that way. But the problem is that if some of the things we do depend on conscious experience, then the problem is not fully solved. That would mean you also have to solve the hard problem of consciousness to have a full account of mental causation.

Right from the start, I want to say I want to be a realist about this in the sense that I do think our experiences make a difference for how we behave. If someone wants to say that our experiences don't make a difference to how we behave, we have two different premises and the conversation stops there. But it's a very hard sell to argue that our experiences don't make a difference. You basically lose most of the population of this planet; they exit the conversation. So we should do our best not to go down that route. What that means is we also have to solve the problem of consciousness. These are interlocked, and this is the big problem.

It gets worse because you might think that's just for people. If you talk about rats or even more basal cognition, then this might not be an issue. But what I propose is that there's a more general mind-body problem working in the background. On the left-hand side, I call this the hard problem of efficacy, which is that for any mental property — let's say a representation, a goal state, or whatever — there is a problem explaining how that state, as such, as a mental state, makes a difference to the physical state. It's a generalization of the problem of mental causation. It's not good enough to say there's a supervenience relationship or something like that, while the causality is maintained at the bottom level. What I want to say is: how do we say that the goal — as being a goal, as an intention, as having normativity conditions, as being able to succeed or fail, or be better or worse — the whole normativity of it also needs to be accounted for. That's the hard problem of efficacy. And then we have the hard problem of content, which is how does anything even become part of the mind in the first place? Representationalism, for example, brushes this away a little bit. 
But let's say even if we treat representation as half of this and say, okay, let's not worry about how to naturalize content in terms of its origins. Let's say at some future point we have a story of how you go from something purely physical to something that has semantic content and mental content that has aboutness conditions or normativity. Assume that problem is solved. Then we still have the problem of how that even makes a difference to something physical. Take a neuron in the brain that's supposed to represent your place in a maze, your classic place cell. The fact that it's representational, that it has representational content, which is Nobel Prize–winning material, is exciting. We find the correlation, but where in our analysis of the neural activity itself does the content enter the picture? When we look into the brain and say here's this neuron and we know it's correlated with this content outside of the rat, being its position in the maze.
[05:03] Tom Froese: The rest of what we're seeing here is just physics. It's just biochemistry. It's electrical potentials. Membranes opening and closing and molecules floating around. There's no content there. There's no normativity. It's just physics. There's a kind of sense of disconnect here: How do we even work across this gap? My feeling is that we should face it head on and say yes, there is a big gap here. On the one hand, we can talk about content and consciousness and the mental and intentions and norms, health being worse or better and so on. Terms that have no frame of reference in the purely physical sciences, but they do have an existence and we know they make a difference. We're realists about it. So why don't they show up when we do our best science? One of the things that should come out of this is that just because they don't show up as such doesn't mean that they don't make any difference. There are two different things here. One would be a demand for observability and even maybe intelligibility: I observe something and it makes sense as a content. The other is to say maybe that's not what I can do here, but I notice that something else is happening that wouldn't happen otherwise if the system weren't in such a state. So far, most people have demanded that the mechanism should be observable and intelligible. But why should we assume that? There's a big gap here. We know already from other fields — quantum physics is a prime example — that demanding intelligibility and observability can be a big stumbling block. Sometimes you just have to say we don't understand why it's happening, but something's happening. We can measure that and we can work with that. But so far, cognitive science hasn't allowed itself to conceive this possibility. What I want to propose is that we do need to make room for this conception. That's why I call it a "black box framework." Let's accept that we don't understand this relationship. We have two versions of that. Let's stick with the human one for a moment because it's easier for us to relate to. We have mental causation. The scenario is putting our neuroscience hats on, looking inside an organism. If something mental makes a difference to the material basis of behavior, you've got an unobservable mental cause. I just said we have physical stuff happening. We don't see the mind as such when we go inside the organism, causing an unintelligible material event. It's unintelligible because we cannot directly observe what caused it, what made the difference. That's outside the scope of our observation when we make quantitative assessments. And the other way around is the same.
[09:00] Tom Froese: We know that some things happening in the brain make a difference to mental content or make a difference to subjective experience, but we can't measure subjective experience directly inside the brain. It's not there. It's not something that we can actually quantify directly. That would mean that to the extent that something is making a difference to experience, you've got a cause without an observable effect. You've got an unintelligible material event here: something is happening, some change is happening, but we don't understand why it's happening. We've got these two categories. What's interesting is that they point in different ways. That suggests that the signatures for these kinds of relationships would be different ones. This is where I introduce my proposal for how to work with this kind of situation. This is the irruption theory, where if we have the mind and the matter relating to each other and we don't know how, this is the hard problem of efficacy or mental causation or the hard problem of consciousness. We know that they're related somehow, but when we trace one domain into the other, it escapes us. Let's go through this one more time. I have an intention of saying words, of moving my hand to make a point, and I can notice that my intention is making a difference to my body. If not, it would be crazy. I have the experience that there's a coherence between what I want to do and what my body does, except that I don't know how my body does it. So something gets lost along the way: I can see the effect of my intention, but I don't have access to how it has that effect. That's one aspect of the black box. It's also true on the other side. Imagine I'm a neuroscientist and I look at what's happening in the pathway from my brain to my arm to make my arm move. If it's really the intention as such that's making my arm move, the neuroscientist doesn't have access to that. He can't measure intentions using his EEG apparatus or whatever he might be using. Again, something gets lost. I can work backwards from the behavior to all the activity in my body, but at the end, something gets lost. I cannot actually make the jump to the other side. That's what I mean by the black box. We know these things are related, but for some reason, maybe by necessity, we can't properly trace or translate transparently across. That's why I propose the way to think about this is in terms of absorption and irruption. Absorption means that on the side of the mental, when I inject my intention to do something, there is a compression effect. Dreyfus talked about this in terms of "absorbed coping." When I'm really engrossed in my activity, there's a narrowing of my vision. The world disappears a little bit into the background. I lose myself. There's a compression of the variability of my experience as it's invested into the activity of my behavior. But what happens on the other side? Now you're totally involved in generating your behavior, but that involvement is subjective involvement. It can't be quantified. It can't be translated into something happening purely in quantitative terms. That means there's a hidden variable. There are factors making a difference to how your behavior is generated that cannot be traced to causes at that level of description. That's what I call irruption. Now there's a kind of diversification, an increase in variability that in principle would remain unexplained at that level of description.
[12:56] Tom Froese: We can trace it back to the higher level. That was the original proposal of irruption theory in the first paper. I elaborated on that part a lot. There are many examples that we can talk about of how this fits with some of the empirical data. Then we talk about neural entropy and complexity and all these things, and why organisms are such noisy systems in the first place. A lot of this starts to make sense from this point of view. Let's talk about the other side first for a moment. That's the absorption side. That's a new part of this paper: if there's a cause that has an effect which can't be observed, it's a little bit like a reduction in variability. The difference that would have been made is now not being made because that difference is appearing in a domain that I cannot directly measure in the one that I'm currently observing. There is a reduction in variability. Two things collapse or cancel each other out. There is information loss of a different kind as things are translated into the other domain. Right now what I'm working on is more on the right-hand side, trying to flesh out that part. It's coming together quite nicely with the things we've been talking about, like bow-tie motifs, which are very nice ways of canceling out things. I think that's pretty much it. Let me end on one final note, which is that in my mind, this is a first step of something much more general. What it looks like to me now is that this is one special case of an interaction that crosses two ontological domains, or regional domains of being. It could also be considered as crossing different scales of agency. Going from the scale of agency inside your body and all the agents that are working there to your agency as a person. As long as we have a transition between ontologies of some kind, this kind of framework probably will apply too. That could even be true for cellular biology, for example. When I was reading your work on how the higher levels transform the lower levels and how they get incorporated into the higher levels again, you have very similar notions as irruption and absorption. With irruption, the way you phrase it is that there's a deformation of the energy landscape of the lower level. Suddenly these guys are happily doing whatever they're doing, but now there's an unexpected change in the game that they're playing and they would have to slightly adjust. From their point of view, they will never be able to figure out why that happened. What just happened? That's outside of their scope. It's a little like an irruption in that sense. There's an unexpected increase or change, and it can't be explained at that level of description. It just goes beyond it. At the same time, when you talk about how the components get integrated into the larger whole, you talk about a loss of identity. For example, the molecules produced by one cell aren't tagged with the name. If another cell produces it, they can't distinguish where it's coming from. The identity starts to merge between the different components. That's very close to what I'm talking about here in terms of absorption. One thing that would be very nice to contemplate is whether this kind of framework gets a quantitative grip on some of the things that are happening at those scales. Not just talking about the mind-body problem or mental causation, but talking about multi-scalar integration to some extent. 
Principles that would apply to the left-hand side, the irruption side, would be increases in noise, hidden variables, dimensionality, entropy — these kinds of things. There's obviously an expansion of variability, and that expansion is unexpected and to some extent irreducible to the things that are happening normally in that domain. On the right-hand side, you have things like compression, synchrony, symmetries, order parameters — this kind of thing where there's information loss as multiple components that could be behaving differently normally don't behave differently. That's maybe where the bow-tie motif and things like that fit into that category. That's the mini overview of what the paper proposes.
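As an aside, here is a minimal numerical sketch of the two signatures Tom lists (this is not from the paper and is not Froese's proposed operationalization; all names and parameter values are illustrative): windowed Shannon entropy of a discretized signal, where an irruption-like episode shows up as an unexplained rise in entropy and an absorption-like episode as a compression.

```python
# Toy illustration only: entropy of a discretized signal as a crude signature of
# irruption (variability increase) vs. absorption (variability compression).
import numpy as np

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a 1-D array of discrete symbols."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def windowed_entropy(signal, bins, window=200):
    """Discretize with fixed bin edges; return entropy of each non-overlapping window."""
    symbols = np.digitize(signal, bins)
    return np.array([shannon_entropy(symbols[i:i + window])
                     for i in range(0, len(symbols) - window + 1, window)])

rng = np.random.default_rng(0)
bins = np.linspace(-10, 10, 17)              # shared discretization for all conditions
baseline = rng.normal(0, 1.0, 2000)          # ordinary low-level fluctuations
irruption_like = rng.normal(0, 3.0, 2000)    # unexplained broadening of variability
absorption_like = rng.normal(0, 0.2, 2000)   # compression / synchronization of variability

for name, sig in [("baseline", baseline),
                  ("irruption-like", irruption_like),
                  ("absorption-like", absorption_like)]:
    print(f"{name:16s} mean windowed entropy: {windowed_entropy(sig, bins).mean():.2f} bits")
```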
[16:54] Michael Levin: Do you think there is a similar problem in the case of, let's say we have some sort of classic computational device, it's running some sort of algorithm, and then there's the physics perspective where you can see the electrons shuffling around, you don't actually see the algorithm. Do you think there's a similar dynamic going on there, or is there something unique to the biological case that isn't captured in the software-hardware dichotomy?
[17:34] Tom Froese: I think it's a really good question. It's one that I started exploring now with some colleagues at UNAM. And I think I'm changing my mind about it. I used to be very resistant to that idea, but I'm slowly coming around to it, especially on the side of absorption. It's almost like computers are the perfect absorption devices. All that variability that you have at the level of electrons, computers are designed such that variability does not matter for the kind of algorithms that are being implemented. If anything, it's the most excellent example of the right-hand side, but here's the problem: it's one-sided. We create devices that are super rich in experience, possibly. They've got lots of variability that they're absorbing and possibly creating experiences for them, but they can't do anything about it. It's the irruption side that's blocked, because if you do have things that are acting on those things, that's like the old school Windows blue screen of death. Suddenly you've got an unexplained change that can't be reduced to the rules at that level. You've got error correction, so either it gets thrown out, or if it can't be thrown out, then you just get a system failure. So it seems to me that the issues on that side are: if you wanted to make computers that could fit into this framework, the question would be, how do we loosen them such that the higher levels can make a difference to the lower levels in their own terms?
[19:07] Michael Levin: We have a weird example that might be relevant to this. I'm very interested in higher order behavior that seems to be, in some important sense, decoupled from either the algorithm or the mechanisms underneath in that it is not just emergent complexity, because emergent complexity is easy. You get that with cellular automata; that's easy. I'm talking about more interesting emergent goal-directed behaviors. You've seen this pre-print of ours on the sorting algorithms. I purposely wanted to start with the dumbest, simplest system that is transparent, deterministic. It's six lines of code. Everybody thinks they know what these things do. These sorting algorithms turn out to have some really interesting behaviors, including this clustering thing, which is by itself nowhere in the algorithm. Clearly it's super primitive, but that was the point. We wanted a very basal example of this. I wonder if it's related to what you just said, where, on the one hand, yes, in our current architectures, they can't do anything with it. On the other hand, it seems to me, and I think this is all over the place in biology too, that even though constrained by the deterministic algorithm—there's no magic, it follows the algorithm correctly—it also manages to do some things that are not in the algorithm. I think biological things are super good at it. In fact, that's probably what we call biology: systems that are really good at this. Fundamentally, it starts very early. I don't think you need much to start seeing these things appear. Where do they come from is a whole other thing. We face that question with our various synthetic xenobots and anthrobots and things where they've never existed before. There's no evolutionary long history of selection that would explain why they have certain properties. So where it all comes from is an open question. I'm pretty sympathetic to this kind of black box approach, because looking for mechanism and looking for explicit representations in terms of how it works and where things come from may not always be tractable. Already we have some of these issues in a very minimal way with even just very simple computer architectures. Then there's the whole polycomputing thing. This is what Josh Bongard and I have been working on: the notion that when you have a physical process, what it's computing is in the eye of an observer. Multiple observers — his student Atousa has actual data showing that the same physical processes can look like very different computations depending on how you look at them. What it's really computing is up to an observer's interpretation, which I think in biology means that every subsystem is interpreting every other subsystem, however it can. That also gets to mental content and the subjectivity of it: what are mental causes and what is their actual content? Is there a privileged answer to say that's the content of that mental state? Or could we do this polycomputing thing and say there are modules inside and there may even be physically external other beings that you may be coupled with in some way that will also have an interpretation of what any given mental state is. We were talking about how far down it goes. These issues are very deep and they crop up very early. I don't think we have to wait until we get brains before these deep questions of interpretation come up.
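For readers who want to poke at this themselves, here is a loose toy in the spirit of the sorting-algorithm example (this is not the preprint's actual code or setup, and the tag used here is made up for illustration): each "cell" in an array applies a purely local swap rule when activated, while an observer separately tracks a quantity, the clustering of an arbitrary tag, that is named nowhere in the rule itself.

```python
# Loose toy inspired by the sorting-algorithm discussion (not the preprint's code):
# cells apply a local swap rule; an observer tracks tag clustering, a quantity
# that appears nowhere in that rule.
import random

def cell_view_step(values, tags):
    """One asynchronous update: a randomly chosen cell compares itself with its
    right neighbor and swaps (value and tag move together) if out of order."""
    i = random.randrange(len(values) - 1)
    if values[i] > values[i + 1]:
        values[i], values[i + 1] = values[i + 1], values[i]
        tags[i], tags[i + 1] = tags[i + 1], tags[i]

def tag_clustering(tags):
    """Fraction of adjacent pairs sharing the same tag: a crude clustering index."""
    return sum(a == b for a, b in zip(tags, tags[1:])) / (len(tags) - 1)

random.seed(1)
values = random.sample(range(40), 40)            # a shuffled array of "cells"
tags = [random.randint(0, 1) for _ in values]    # an arbitrary label carried by each cell

for step in range(20001):
    if step % 4000 == 0:
        print(f"step {step:5d}  clustering {tag_clustering(tags):.2f}  "
              f"sorted {values == sorted(values)}")
    cell_view_step(values, tags)
```

In the preprint itself, each cell runs one of several sorting "algotypes" and the emergent observable is clustering by algotype; the toy's only point is to show how an observer-level quantity can be defined over a process whose rules never mention it.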
[23:53] Tom Froese: I've heard you talk before about this polyfunctionalism, or whatever you call it, but isn't it the case that, was it Putnam, who used the argument that you can read any kind of function into a physical process as a reductive objection to computationalism? In the sense that if you want to be realist about the implementation of the algorithm, then it shouldn't just be the observer who determines what kind of algorithm is being run. I don't remember exactly what the argument was, but it was something like: if you take any bunch of molecules and you just sub-select the right kind of properties, they'll be running Windows, because there's such a mind-blowing degree of freedom — there's enough there; you just have to be selective and then you get whatever you want. But the question is, does that have downstream consequences for the process? And so there you don't necessarily want the observer to be the one who determines what it is that's there.
[24:56] Michael Levin: Everything you just said seems completely fine to me because what I think happens in biology is, in computers, we're used to the fact that there's somebody who wrote the algorithm and we tend to take that interpretation as the privileged: I know what this is, I wrote it, that's what I'm telling you, it's a bubble sort. But in biology, you don't get that. Every subsystem, the cells, subcellular components, tissues, are looking at all the stuff going on around it. They don't get a manual to say, what the hell does this mean? There are signals coming; what computation is this? And I think what they have to do for adaptive advantage is interpret it however they can. They form their own internal model of what the computation is and say, "Oh, I see what this is telling me, that the metabolism is going to shoot up five minutes from now." And somebody else in some other module is looking, "No, what I see here is that this set of genes is going to be expressed and therefore I'm going to do this and that." In Josh's work, you can see it's this particulate material. Depending on how you look at it, you see an AND gate or an OR gate, and there is no privileged answer to which one it is. And I think biology takes that to an extreme.
[26:24] Tom Froese: Wait a minute. If I had a computer built and there was an ambiguity whether an AND gate was an AND gate or an OR gate, would the computer work? No. You have to depend on it doing a particular translation of its inputs to its outputs. And if it doesn't, then you don't have a properly realized machine; it would be some other kind of system.
[26:51] Michael Levin: So let me push back on that. I think that's true if you stick to one observer and the idea that there is a definitive one thing that observer wants it to do. If one observer can't tell what it's doing, that's a real problem. But I don't think it's a problem if you have a physical device and there are multiple observers and one observer says, "Oh, this is great. This thing's generating prime numbers." And somebody else looks at it and says, "Oh, whatever; what I see is that it's doing some other function." For us, it's statistically very unlikely that you can say, "Oh, look, it's running Windows," and somebody else says, "Oh, no, it's completely different," because it's hard to have a process that matches both those descriptions at the same time. But I think what happens in biology, because there's noise, there's a high tolerance for fuzziness, there are lots of systems that are looking at the same set of events and interpreting, building internal models of those events as completely different things.
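A concrete way to see the AND/OR point (a minimal sketch, not Bongard's actual setup or data): the same physical process, here just a summation, is a perfectly good AND gate for an observer who reads it through one threshold and a perfectly good OR gate for an observer who reads it through another.

```python
# Minimal sketch of observer-relative computation: one physical process, two
# equally valid logical readings, depending on the observer's chosen threshold.
import itertools

def physical_device(a, b):
    """Stand-in for some physical process that merely sums its two inputs."""
    return a + b

def readout(raw, threshold):
    """An observer's interpretation: binarize the raw output at a chosen threshold."""
    return int(raw >= threshold)

for a, b in itertools.product([0, 1], repeat=2):
    raw = physical_device(a, b)
    print(f"inputs ({a},{b})  observer A (threshold 2, sees AND): {readout(raw, 2)}  "
          f"observer B (threshold 1, sees OR): {readout(raw, 1)}")
```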
[28:05] Tom Froese: I agree with that, but that's from the point of view of looking at it. If the point of view is that there are multiple interpretations possible, and that diversity of points of view is helpful for the systems around it, I can only agree with that, and I think that's interesting to explore, but that's very different than saying that those multiple possible perspectives make a difference to the process itself, which is lending itself to be interpreted in these multiple ways.
[28:31] Michael Levin: What I think happens is the other side of the equation, and I think you're absolutely right in how you divide this into the two sides. The other side is Tenenbaum's — have you seen his paper, "The Child as a Hacker"?
[28:47] Tom Froese: "The Child as a Hacker," yes, you've also told me about it.
[28:50] Michael Levin: It's really good, and it's this notion overall, which I expand beyond what he's doing; he's studying brains and human development and so on. The notion of hacking that's fundamental is that you don't know or care what the correct way to interact with a system is; you are going to exploit it however you can. That's the notion, and I think what happens in biology is that part of interpreting these things however you want is that you are also going to use that to control them however you can. You try to find ways. I think that's basically what's happening in biological material: both within levels and across levels, you have systems that are constantly hacking each other.
[29:39] Tom Froese: No, completely agree. So here we're on the same page. This is something that also follows from the framework I was proposing. Because if there's a black box, the components at different levels have to have a high level of tolerance of uncertainty. And they have to have trust to some extent. Sometimes from their point of view, it doesn't make sense what's going on around them. But that doesn't mean that they should disengage or fight it or try to resist it or counteract it. Because what it could mean is that it's a different level of agency, a higher level of agency, rearranging things, aligning things. You need to just roll with it, even if you don't have everything that is required in order to understand where that's coming from. But because of that, there is a gap for free riders or pathogens or something like that to take advantage of exactly the same trusting nature: "Okay, today I'm producing this other molecule. I don't know why I'm producing it." It turns out you're hijacked by a virus and it's not the higher levels that are messing with you. That's built into this system. So you can't avoid it. That's because it's such an indirect architecture. That which allows the multi-scalar integration is also that which allows bad things to happen to some extent.
[30:53] Michael Levin: Absolutely. My guess is we're now starting to explore this experimentally: biological systems probably have ways to try to determine whether something that's happening was caused by me versus by something else.
[31:12] Tom Froese: I want to do the same for the brain. There should be some frame of reference in the background that tells you how much unexpectedness you should be expecting at each moment.
[31:21] Michael Levin: Exactly.
[31:22] Tom Froese: What rate of unexpected events, for example. If it exceeds your expected rate of unexpected events, something else is messing with you.
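One way to cash out the "expected rate of unexpected events" intuition in a few lines (a hypothetical sketch, not anything from the paper or either lab; the baseline rate, window size, and alpha are made-up illustrative values): assume a baseline probability of a surprising event per time step, and flag possible outside interference when the observed surprise count in a window becomes implausible under that baseline.

```python
# Hedged sketch of a "surprise budget": flag possible outside interference when
# the rate of unexpected events exceeds what the assumed baseline plausibly allows.
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def interference_suspected(surprise_flags, baseline_rate=0.05, alpha=0.001):
    """surprise_flags: 0/1 per time step. True when the observed surprise count
    is improbably high under the assumed baseline rate."""
    k, n = sum(surprise_flags), len(surprise_flags)
    return binomial_tail(k, n, baseline_rate) < alpha

normal_window = [1 if i % 20 == 0 else 0 for i in range(200)]    # ~5% surprises
hijacked_window = [1 if i % 4 == 0 else 0 for i in range(200)]   # ~25% surprises
print(interference_suspected(normal_window))    # False: within the expected budget
print(interference_suspected(hijacked_window))  # True: something else may be messing with you
```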
[31:31] Michael Levin: It's the question of, there are a couple of ways to pose it: am I learning or am I being trained? That's an interesting distinction. Because how much agency is there in my environment? Is there another agenda that's responsible for my learning process? If you're a cell, the way we're going to do it is to look at a readout: stress response and some other things. You can imagine a cell or a group of cells, and you can mess with it in progressively different degrees of internal targeting. Here's a signal that comes from the outside of the cell. Here's a signal that we generated in the second messenger pathway right under the membrane. Here's something that we did in the nucleus itself. At what point does the cell say, "That's cool, that's what I'm doing," versus, "No, this is clearly coming from outside"?
[32:23] Tom Froese: This corresponds to brain stimulation, and I want to get into that for our lab, and one of the reasons is that if irruption theory is on the right track, then doing TMS to the brain, or whatever stimulation, is irruption stimulation. Suddenly you have an uncaused fluctuation in activity, but from the point of view of the local components in the brain, they're used to that. That's already how they get impacted by higher levels of organization. So then it's like we're speaking the language of the higher levels of organization by injecting this external variability. And that's nice because then we can actually play with this. We can say, what's the rate of agency that the system is currently expecting? And there's all kinds of interesting things, like why is deep brain stimulation helpful for overcoming obsessive-compulsive disorder or helpful in depression and so on. Certain conditions where our agency becomes constricted, compressed, or limited are helped by what otherwise look like crude interventions. Why should just putting this random stimulation somewhere in your brain have the same effect as opening up your space of agency possibilities? That's very strange. But if the way in which agency at a personal level connects with the sub-personal level is only indirectly through this deformation of the state space in unexpected ways, then all you have to do is deform it in unexpected ways to mimic, to some extent, the signals that they're expecting from the higher levels.
[33:53] Michael Levin: That's a really interesting point. I wonder if some aspects of plastogens and psychedelics and things like this can be understood as what you're really doing: lowering the vigilance of the system to allow hacking from the higher levels. So that you're more willing to not resist. This issue of resistance—knowing that you're being hacked, whether laterally by a parasite, by a malfunctioning component, or from above by some higher level of organization. That resistance, biomedically, is huge because one reason we have such trouble designing new drugs that actually fix anything is that, other than antibiotics and surgery, we have basically nothing that fixes anything. These drugs hold down some symptoms at best, but they don't really fix anything. I think part of it is that we are operating in a way that's very easy for the cells to tell and try to fight back.
[35:00] Tom Froese: I like it. Yes.
[35:01] Michael Levin: I think that's the limit. The limit of molecular medicine.
[35:04] Tom Froese: I know that someone else is messing with me. That's not coming from the higher level of organization.
[35:09] Michael Levin: Because it's such micromanagement, and the cell goes, what the hell is this? There's this receptor that's suddenly being targeted. That's why you see all these drugs and then there's a list of potential side effects that are a mile long, that your head will fall off and you'll go blind.
[35:25] Tom Froese: You're actually on the right track here. I'm thinking that there could be a spectrum of possibilities. Because we're separating mind and matter a little bit, just this tiny gap between them, it opens up two possibilities. Imagine we have a reduction in agency like dementia. People stop being able to express themselves, language goes away, they stop being able to take care of themselves. But in that case, it could be that the brain itself is no longer flexible enough to be able to receive these kinds of perturbations and let them percolate through the system and scale them up in the right kinds of ways. It could be that agency is almost intact to some extent, the mind is still there, but it just cannot find the channel to express itself. On the other hand, maybe in some other cases, like depression, it could be that the brain is fine. It's actually ready to receive whatever you're going to send it, but there's something happening at the level of motivations and will and experience where the person is overwhelmed or for whatever reason is not sending the kind of impactful interventions that would normally make the body move in the right kind of ways. Both might from the outside look similar in that there's a reduction of engagement with the world, but the physiology of it could be very different. I like that because it feels more natural to have these two possibility axes. It's not always everything in the same place. What you're saying about the fact that we need to be more clever about how we interact with that, I really like that. That makes a lot of sense. What you're saying about intervening at the different levels, we're starting to think about that too in terms of humans. We can trigger muscle contraction directly by stimulating the nerve fibers here; we can make the hand grasp like that. That also can feel like you're actually doing it if it's done in a good way. But what if we do it from here? Or what if we do it by doing something here? At which point is there a sense of, hey, this is not me doing it anymore?
[37:19] Michael Levin: That's very interesting. I was trying to look up some papers on this related to confabulation and people with electrodes in their brain that trigger laughing. You push the button and your mouth starts laughing. The patient doesn't report, "That's weird. I was having a serious thought and then my mouth started laughing." They say, "I thought of a funny joke." So this issue of confabulation and to what extent does the system, as you said, take this on as, "Okay, I'm going to tell a coherent story of why it was me" versus "No, I'm going to accept it." I just wanted to say another quick thing about dementia. I have a collaborator who is a hospice nurse. She's a long-term hospice nurse with a lot of experience. We're writing a paper on case studies where there is a phenomenon called terminal lucidity, which is really interesting, where you have somebody who has been in a very debilitated state for a long time and it's progressively been going down; they don't speak. And you think that's it, it's all gone. I think it's like a couple days before they actually die that it all comes back: they have lucid conversations with people, remember things, and give instructions. There's this sudden burst, and I don't think there's a good clinical understanding of it, because if it's just the fact of the hardware degrading, it's hard to see how it could all come back.
[39:05] Tom Froese: Having a slight gap in our account would allow for that to make sense.
[39:10] Michael Levin: Exactly.
[39:11] Tom Froese: Yeah. I like it. Yeah.
[39:13] Michael Levin: There's that clinical component. It would be cool to study some of that stuff from the perspective of the model that you're putting forward. That could help.
[39:22] Tom Froese: Here's another interesting clinical example. If we think about the absorption side, what would be the maximum version of it would be if almost the whole brain is in synchrony. So that's all the local variability, each neuron has tens of thousands of connections coming in, all the variability, suddenly they're all firing in unison. It's a huge reduction of complexity, lots of information loss. When you do that, agency goes out the window. You collapse. That also makes sense because now you can't have any more irruption. Everything is ordered, everything is regimented. There's no possibility for injecting spurious variability anymore. There's no room for it. No more agency, but the model would predict that experience might still happen. In fact, it might even be more happening than in a normal state. I started just a little bit investigating it, but for some people, indeed, having an epileptic fit can come with all kinds of experiences, even divine intervention experiences. Having something that changes your life experience can happen under those states. That fits again with this model: on the one hand, it knocks out agency, but on the other hand, your experience channels have been opened up. There's lots of this kind of thing. You mentioned psychedelics; we didn't talk about this, but it's the other way: maybe it's actually loosening things up so it becomes easier to integrate with your brain. That fits very nicely also with other work on anesthesia: if you measure neural entropy, the complexity and diversity of signals, it turns out that if you're doing a cognitive task while you're slowly falling unconscious, and you're still doing the task well, your neural entropy is higher than baseline. So even though you're falling unconscious, your neural entropy goes up more than expected. But if you're actually falling unconscious and you stop being able to do the task, then it goes down. Assuming that both groups of people actually are losing consciousness, then this kind of entropy is not tracking the state of awareness. It's tracking your cognitive effort: how involved are you mentally in what's happening? That's going back to dementia. One of the best things you can do for preventing onset of dementia is learning extra languages, engaging in extra normative frameworks, becoming able to respond in a much more multifaceted way to your environment. If you think about the way in which we lived before modernization, to some extent we were embedded in a symbolic world. Everything had meaning. Nothing happened just by chance. The fact that this cat crossed the road might mean that my mother will die, or something. So everything had super significance.
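The neural entropy and diversity measures Tom alludes to are often computed as Lempel-Ziv-style complexity of binarized activity; here is a crude, simplified sketch of that family of measures (a plain LZ78-style phrase count over a median-binarized signal, not the exact metric used in any particular anesthesia or psychedelics study).

```python
# Crude sketch of a Lempel-Ziv-style signal-diversity measure (simplified LZ78
# phrase count on a median-binarized signal; illustrative only).
import numpy as np

def binarize(signal):
    """Threshold a continuous signal at its median to get a 0/1 string."""
    med = np.median(signal)
    return "".join("1" if x > med else "0" for x in signal)

def lz_phrase_count(bits):
    """Distinct phrases in a simple LZ78-style parsing: higher = more diverse signal."""
    phrases, current = set(), ""
    for ch in bits:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
engaged = rng.normal(size=2000)              # irregular, high-diversity activity
synchronized = np.sin(2 * np.pi * 1.0 * t)   # regimented, whole-signal rhythm
print("engaged:     ", lz_phrase_count(binarize(engaged)))
print("synchronized:", lz_phrase_count(binarize(synchronized)))
```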
[42:03] Tom Froese: This symbolic framework was overlaying all of our perception. And if involvement, mental involvement in our behavior is causing irruptions, then that has very loosening effects on everything. So it makes sense to me that the best way of preventing things freezing up is to become more involved normatively. And it's not just about autopilot stuff. This is important. If you're just on autopilot, your body can take over. And maybe that's also about what you were saying in terms of stimulation, and then this confabulation happening. It's the habit. The habit takes over. It's not that you're really freely willing the words coming forward. It's that your body has already made a response to the situation that's unfolding in that moment. And I think a lot of our behavior is that. And what I've been speculating is that the success of large language models hints that even when we talk, a lot of it is habits and our bodies' predispositions unfolding with the affordances in the environment. And that's why they can be modeled by looking at linguistic patterns and so on. And so this is another consequence of the irruption ideas. And if you have this little gap, it also means that we're no longer directly in control of our behavior. We can set the intentions. We can open the space of possibilities. We can make things more flexible. But then you have to be in the right context. You have to have the right affordances. You have to have the right history of interactions. You have to have the right body for that behavior then to also express itself in the right way. And if some of these things start crumbling or deteriorating, then maybe intentions don't connect that well anymore with your behavior or with your regulation. But when things work well, it looks like we're in control because the body makes it happen almost like magic. But all we can do is open spaces of possibilities. Turning those into actualities comes out of our embodied history of interaction with the environment. So it's a different take on free will. It's interesting because you're free in the sense that you still have the choice of opening possibility spaces, but you don't have the choice of how to close them. The only way to do that is to cultivate the right kind of environment and the right kind of skills such that you're always poised to respond in a way that you want to be responding.
[44:47] Michael Levin: I completely agree with that. I like a notion of free will that extends it to a longer time scale. What's free is modification of self and environment to enable you in different ways in the future. That's where you get to really exert it.
[45:05] Tom Froese: It's a bit strange to think about that in the first place. I can't directly control the next word that's coming out of my mouth. I have to trust my body that it will choose the right word. Otherwise I get a Freudian slip. Once you're okay with that, your focus shifts and says, well, what can I do today such that tomorrow it will be better?
[45:32] Michael Levin: I think that's a much more useful view of it.
[45:37] Tom Froese: I think that's true of all of biology. It goes the same way. There's a slight indirection, which is basically the higher level of organization can't directly control what the lower level is going to do. You can set the conditions for it. It gives you more possibilities to do things. But within that space, I have to trust that the tendencies are the right ones.
[46:04] Michael Levin: Excellent. Thanks so much. This has been very helpful. I think it's a really interesting framework. Lots to talk about.
[46:16] Tom Froese: I look forward to seeing how you work with this experimentally when you start intervening at these different stages. That's going to be very insightful.
[46:23] Michael Levin: I want to get your thoughts on using some of the brain clinical data to inform how we do some of the things in cells.
[46:35] Tom Froese: There's a new horizon opening up with this way of thinking. Suddenly we can quantify these things and then everything changes.
[46:41] Michael Levin: Yeah.