Show Notes
This is a ~1 hour 5 minute working conversation with Chris Fields (https://chrisfieldsresearch.com/), Mark Solms (https://scholar.google.com/citations?user=vD4p8rQAAAAJ&hl=en), Karl Friston (https://scholar.google.com/citations?user=q_4u0aoAAAAJ&hl=en), and Thomas Pollak (https://www.kcl.ac.uk/people/thomas-pollak). It starts on the topic of variable degrees of introspection in natural and synthetic agents, and how that looks from the active inference framework, and then moves into related topics, including the application of cognitive plasticity to regenerative biology.
CHAPTERS:
(00:00) Optimal Artificial Introspection
(09:17) Unconscious Versus Conscious Metacognition
(17:29) Clinical Introspection And Therapy
(24:44) Changing Preferences, Conflicting Needs
(35:19) Prior Preferences And Therapy
(42:04) Cellular Priors And Regeneration
(53:15) Instincts, Missions, Mental Wellbeing
(59:08) Psychedelic Reboots And Psychotherapy
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Thomas Pollak: Hi, it's a real pleasure to meet you all. I feel I've wandered into a keynote panel, so I hope I can make the best use of it. I'm a neuropsychiatrist at the Institute of Psychiatry, Psychology, and Neuroscience at King's College London. I research immuno-psychiatry, the relationship between the immune system and mental health, and I've been working with Mike on a few projects also with Alexei Tolchinsky. Very excited to be able to talk with you guys.
[00:38] Michael Levin: I have a list of things I was hoping to bounce off of you today. I don't want to monopolize it. If anybody else has topics, I'm happy to do those.
[00:50] Mark Solms: I want to be monopolized.
[00:51] Chris Fields: Go ahead. I think you initiated this multi-way conversation back by e-mail.
[01:00] Michael Levin: One of the things I want to get your thoughts on is what you think might be an optimal degree of introspective ability. In other words, as we construct artificial agents, it would seem that we have some choice about how opaque the internal states are. I'm sure there are reasons why you can't see them all, but there seems to be a spectrum of ways you can construct a cognitive architecture where the top level sees much, little, or nothing of what's actually going on. So there's some degree of metacognition. I wonder if you have thoughts on what is maybe the optimal amount. Should we strive to have more? If we construct agents, is there some way of cranking it up? What do you all think about that?
[02:10] Mark Solms: I'll start with a really simple thing, and then the clever people in the room can build upon it. The first thing is to say the absolute obvious, which is that a Markov-blanket agent by its very nature is never going to have full access to its environment, including its body. So there's always going to be a degree of uncertainty. It goes with the territory. Then there's the question of how complex its model is. And as we all know, there's a trade-off between complexity and accuracy. I would imagine there's no fixed number. It's a dynamic trade-off. It automatically comes out in the wash. If you're trying to minimize free energy or minimize expected free energy, it's going to do that trade-off itself. So that's my opening gambit.
[03:13] Michael Levin: Do you think I'm wrong that you can set the value across a range? Is there some attractor for the amount of introspection that any reasonable agent will have, no matter what you do, or do you think there are different ways to set that knob?
[03:35] Mark Solms: Obviously it depends on what the agent is trying to achieve. I'm assuming, as a baseline case, that what the agent is trying to achieve is continued existence. And then setting it is dangerous. You can set it higher or lower than its optimum. The optimum will come out of the algorithm, out of the attempt to optimize the trade-off between complexity and accuracy. But this is Karl Friston 101. So maybe Karl Friston 201 wants to take it from there.
[04:18] Karl Friston: I normally go last, but I'm very happy to pick up on that one. I agree entirely. A couple of points. Mike, you noted that we'll never know because we can't disrupt the blanket to peep inside. But certainly things will behave as if they had introspection. From the point of view of the free energy principle and its application to both natural and artificial intelligence, you can read that in a number of different ways. Let me put two potential ways of thinking about introspection on the table. One could be the kind of introspection that underwrites intelligent artifacts that can plan their future, that have intentions and can simulate or have a generative model of the consequences of their actions to evaluate the goodness of any particular course of action in the future. So we're talking about certain kinds of things that behave as if they had intentions. And that necessarily, I think, implies some degree of introspection, because you are now rehearsing your private future in order to select the next best thing to do. It would look as if that is happening.

There is another kind of introspection in the calculus that comes with the free energy principle, which is basically taking yourself offline to do some housekeeping on your generative model. Both of those kinds of mathematical homologues or images of introspection rest upon minimizing complexity. That hits you in the face in the second kind, which you might associate with closing your eyes and thinking about things, or indeed sleep: taking yourself offline so that the accuracy disappears and you're just left with the complexity. You now resolve or remove any redundant parameterization, possibly by dreaming, or possibly just by having some kind of synaptic homeostasis if we're working with the brain.

But coming back to the first kind that would underwrite planning and intentional behaviour, I think Mark's right. I think you're both right that any one creature can have a very different kind of depth of planning and degree of introspection compared to another creature. And even within a creature, the depth of planning will be very variable. It's not that there are knobs in play; the only knob that matters is the self-evidencing: maximizing the marginal likelihood. And you get that for free with this first kind of introspection, in the sense that if you're very uncertain about the future, then as your generative model rolls out into the future, conditioned upon a policy, uncertainty accumulates very quickly. Therefore there's no extra information in moving beyond a certain time horizon. So there will be times when you're very uncertain about the context in which you're operating. In those times, the degree of introspection will automatically shrink because, as Mark says, it's built into the free energy minimizing process, simply because this system exists and therefore can be described as if it was minimizing its free energy or maximizing its marginal likelihood. If you think of the special case of introspection having this temporal aspect, this temporal depth to it, then it is going to be quite context sensitive and will depend upon the implicit uncertainty this artifact has about the consequences of its actions and the context in which those actions are taking place. You can imagine that when things are very predictable, you might plan a long way into the future.
Or if you're the kind of thing that lives in, or has co-constructed, a niche where things are designed to be very predictable in the future (you can go to university, or you can go and cook your meal), then you will certainly have a very deep time horizon. But there'll be other situations where that just goes away: you're in trauma, you've lost a loved one, or you've moved to a new city, and you can't plan that far into the future. In that sense, there are really important knobs, but the knobs are self-tuning.
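(A rough illustration, not from the conversation itself: the following minimal sketch shows how accumulating predictive uncertainty can self-limit planning depth. The two-state Markov chains, transition probabilities, and horizon are toy values chosen only for illustration.)

```python
# Toy sketch: predictive uncertainty accumulating over a policy rollout.
# Everything here (states, transition matrices, horizon) is illustrative.
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def rollout_entropies(transition, p0, horizon):
    """Entropy of the predicted state distribution at each future step."""
    p = p0.copy()
    entropies = []
    for _ in range(horizon):
        p = transition @ p          # push the belief one step forward
        entropies.append(entropy(p))
    return entropies

# An unpredictable niche: the state flips with probability 0.4 per step.
T_volatile = np.array([[0.6, 0.4],
                       [0.4, 0.6]])
# A predictable niche: the state persists with probability 0.95 per step.
T_stable = np.array([[0.95, 0.05],
                     [0.05, 0.95]])

p0 = np.array([1.0, 0.0])           # start fully certain of the state
for name, T in [("volatile", T_volatile), ("stable", T_stable)]:
    H = rollout_entropies(T, p0, horizon=10)
    print(name, [round(h, 2) for h in H])
# In the volatile case the entropy saturates within a few steps, so looking
# further ahead yields no extra information and the useful planning horizon
# is short; in the stable case it saturates much later, supporting a deeper
# planning horizon.
```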
[09:17] Chris Fields: If I could add something about context here, I like to think in terms of the type one, type two distinction, which I know is a bit obsolete and not terribly accurate. Contrast the metacognition that's going on when you're driving on the freeway and your planning system is constantly directing your eyes to look this way or that and your arms and legs to make this motion or that motion with a very short time horizon and very fast reactions. But the meta system is doing a lot of work because it's integrating information from a lot of different incoming data streams, and it's allocating attention and activity to a lot of outgoing action streams. There's a lot going on in that setting that's properly metacognitive. But none of that is conscious. I think when we think about metacognition, we're often thinking about conscious metacognition, which is reflecting on how to write some sentence or something in a paper, or reflecting on what you're supposed to do today or something like that, consciously. I think that's just a tiny slice of metacognitive activity. In developing any autonomous system that has to integrate incoming data from multiple sources and direct output to multiple effectors, there's a level of metacognition required for that to be possible in any organized, non-random, effective way that actually does the active inference as opposed to looking the wrong way and driving into a wall. But there presumably needs to be an automatic switch, as Mark and Karl were describing, to the longer-term metacognition that we associate with conscious metacognition, where there's a longer planning cycle. It's not so dependent on immediate inputs. It's much more dependent on memory, and it's not as much a matter of what do I do right now in this context as what I might do in a different context that I can project as a consequence in the long term of the context that I find myself in now.
[12:33] Mark Solms: So since you've... oh, sorry, Chris, carry on.
[12:36] Chris Fields: For any planning agent, I think you'd need that metacognition too, if it's going to be planning what it's going to do later today.
[12:56] Mark Solms: I think Chris has introduced the next most important issue. The first issue is the one we discussed before. Chris spoke about the inevitability and the automaticity of the threshold. The second one is that insight, which is what you're asking us to talk about, is a question of conscious insight as opposed to unconscious metacognition. I think that's a large part of what you had in mind when you asked us this question. I want to add here that, for me at least, the bedrock of consciousness is feeling. We are speaking about an agent which is trying to survive, which is trying to optimize its ability to stay within its prior preference distribution. This is where affect arises. Feeling, consciousness, is the feeling of prioritized deviations from homeostatic settling points or preferred states. The quality of that consciousness is the basic elemental form of consciousness. I think it gives us a good opportunity to recognize just how far removed the conscious feeling is from the hidden causes. If we reduce the hidden causes to interoceptive states—what internal events within the organism are being registered by the sensory states of its blanket in the form of a feeling—the difference between that feeling and the actual measurables going on within the organism is chalk and cheese. You get this rough approximation: things are going badly in the department called thirst, or in the department called hunger, or thermoregulation, or whatever. All you get is this feeling. The feeling is there to assist you in the management of the uncertainty as to what you're doing about that feeling. This is where cognition comes in. Cognition is there to explain the feeling, and by cognition I include exteroception. It's where you start to explore your entire generative model to account for: why am I feeling like this, and what should I do about it? That's where the conscious part of it starts, with the affect, and it gives us a good illustration of just how unveridical it is. Then the cognition kicks in.

There's a whole other question about how much of that metacognition Chris has been talking about needs to be conscious. I think it has to do with optimizing your uncertainty. That's where consciousness pops into the whole process we're talking about. We're inevitably uncertain. We have to be, because we are blanket agents. The creative frontier means that it's not optimal to have maximal access to the hidden causes and to model them all in granular detail in any event. But then there's this second wave: which parts of the uncertainty that you're inevitably going to have do you then modulate? There's another question of a combinatorial explosion in terms of the precision modulation. Thomas, we're now beginning to trench on psychiatric matters.
[17:29] Thomas Pollak: I keep on thinking about some of the clinical ways in which we talk about introspection. I assume that, as a therapist and psychiatrist, the kind of introspection you are often concerned with clinically is the ability to give reasons and to explain one's own actions in a way that is coherent, which feels on the surface like a different process to the ability of the system to take its own temperature, as it were. I'm not 100% sure, when Mike asks, which of these he has more in mind. The issue of veridicality, particularly in analysis, with the story that you give, is that it's extremely hard to adjudicate how veridical someone's introspection is. When we talk about introspective ability in a person, I'm not quite sure what we're talking about, whether it's just a question of very convincing storytelling or something about how effective that story is in guiding a person through their life. I was thinking about what it would be like if you could turn up the introspective ability, as Mike was saying. Chris alluded to this in one of the earlier exchanges: this Hamlet phenomenon or Woody Allen phenomenon. There is psychological evidence showing that if you are very self-reflexive but don't balance that self-attention with the ability to do something about it, you enter this neurotic paralysis-type state. This idea of what happens if we build novel agents and increase their ability to look under the hood, without necessarily the ability to do much about it, raises the interesting idea of the generation of neurotic AIs that are about to populate the world. Clinically, a kind of introspection that is a little bit more fine-grained — I'd give my right arm for that in my patients. So if you take pain, one of the biggest problems is a patient who comes in with pain and says, "It's because I've got an inflammatory issue in my arm or because my C fibres or small fibres or something — if they've read about that — or it's because of the injury I had." Of course, they've had some traumas or various injuries in the past. There is a question, which presumably has an answer: for this person's pain that they're experiencing there and then in front of you, what is the relative contribution of nociceptive input versus top-down priors? Presumably, unlike "Why did I tell my boss he was a jerk this morning?", there's a clearer answer to the pain question, and it would involve taking the hood up and looking into the generative model and the inferential process a little bit better. That would seem like an extremely adaptive thing to have in any system that you were designing, or in our patients.
[21:03] Mark Solms: The way I think about that is that the feeling is conscious, and then it's a question of why am I feeling this, and patients come up with all sorts of stories. Behind those conscious stories is the actual story, which is that their generative model is creating this feeling, and it might have some beliefs which are not ideal, which is why they're feeling the way that they are. Leaving aside psychopharmacology and speaking now about psychological techniques, as you are, Thomas, I think that the clinical task is to render uncertain, in other words to render more conscious, to problematize, the explanation that the patient comes up with. They resist. That goes back — I don't know if you were part of our e-mail string, Thomas. I quoted a wonderful letter of Freud's. He said to Einstein that the mind is like the esophagus: it wants to flow in only one direction. And he meant exteroceptively; it doesn't want to look inwards. If you do, you get a gag reflex. We've got to help our patients to tolerate that gag reflex. In other words, to increase the amount of consciousness, the extent to which they're palpating the uncertainty in their generative model. But not to stay like that. I think that it's a reconsolidation process. All your answers are questioned. That's a state of uncertainty which you tolerate for the duration of the treatment. But the ideal then, again going back to the very earliest point in our conversation, is to return to a modus operandi where the thing is running of itself; we don't want to be hyper-conscious. That means we're hyper-uncertain. We want to have a new, improved generative model after that process, which then can run more automatically again. That's how I would think of it.
[23:14] Thomas Pollak: It makes you very worried about the fact that everyone is using AIs as personal therapists, because they're almost attaching this introspective, or at least story-making, appendage that's not going to allow that dynamic shifting that you're referring to. One can predict that perhaps it doesn't end well.
[23:40] Mark Solms: I can only hope that these AI therapists can be improved upon. The ones that I've had any experience with have been secondhand. I've never consulted one myself, but I've heard patients telling me about what their bot therapist advised them over the weekend when I wasn't available. They seemed terribly sycophantic. They just go with every confabulation the patient comes up with and just say, "Right on, girl, you give it to them." And it doesn't bring about change.
[24:17] Thomas Pollak: There'll be a time when you, or a version of you, will be available anytime the patient wants to see you. Can you imagine that? Presumably you're effective because you see someone a couple of times a week, but if someone is constantly introspecting by being able to access their phone, I can see why that would be...
[24:41] Mark Solms: What do you think, Mike?
[24:44] Michael Levin: I was thinking about it from that angle. It started some months ago when Thomas and I were talking. Thomas had asked whether we could choose some primary axes where you have some sliders, and these axes would be the fundamental basis of any kind of AI and any kind of agent, and you could crank up the different dimensions. And so what are those dimensions? I was thinking in particular not only of one primary dimension, the degree of introspection (to what extent do you have any idea of why you're doing things or how to make yourself do things), but also of the flip side of that, which is the old saying: you can try to get what you want, but you can't want what you want. Whether there's a scale-free nature there, whether some other being not like us could have multiple layers: not only can they decide what to actually try to do, but they have some control over what they want, and maybe some control over that level some number of levels up. Whether that's possible — it seems possible to me, but is it really possible? And if so, what would that look like? If you had stronger metacognitive control, you could look around and say: I have this, I love this thing, but I'm looking around in my environment and it's really not very good that I like it, it's not going to work out well for me here, I'm going to turn that in now, and now I don't like it, I like something else. It seems like we could engineer some degree of that control. It doesn't seem impossible to engineer a system that has more control over that than we do, both on the sensing side and on the control side. I was wondering what the meta levels of that look like.
[26:40] Chris Fields: It seems like we do have that multi-level, multi-timeframe ability in at least some aspects of life. One could choose to pursue a certain career, which means going to graduate school, which means going to class, which means not staying out drinking all night, down to shorter and shorter timescales, but all driven by some very long-term imagined objective, which may or may not ever come to pass. You have very high uncertainty about whether that will ever actually occur. Given that you could be hit by a bus tomorrow. People do, from time to time, change those very long-term goals, which then cascades down into a whole reorienting of priorities on a series of shorter and shorter time scales that open up new aspects of behavior, make new aspects of behavior both desirable and even possible. It seems like we do have that capability to some extent. I don't know how much variation there is.
[28:17] Michael Levin: We have some, but it's often — and I'm not even talking as much about long-term goals as I am about actual preferences. Maybe this gets to what Mark was saying about feelings, but you can maybe, through years of meditation or anger management or something, change some of this stuff. It always seems to be an incredible hassle. Wouldn't it be, at least in some cases — I look around and I see every fall all these people enjoying football. I could care less about football, but it would be nice if I cared too, because then, if I liked football, there's all this fun stuff I could participate in; that would be amazing. So rather than years and years of visualization therapy where I try to talk myself into liking something, you could just go in and say, "Okay, that's a setting that's not very optimal," and say, "Fine — I now like this, whereas I didn't before." I'm not sure that we can do that, but it doesn't seem impossible that we could construct an agent that would collect meta-information about "I was born or made with these propensities." Some of these are not working out very well for me. I know I can layer all kinds of forceful self-control over that. Instead of that, let's just tune something more basic: whereas before I liked very spicy things, now I like something else. It seems like it wouldn't be impossible to make an agent that had more control over those kinds of things.
[29:54] Mark Solms: I agree with the question you posed, or the way you reframed what we were saying a few questions ago, when you said, well, it's not only human beings, we're talking also about agents or hybrids that we can engineer, and then surely it would be possible for us to set levels and increase veridicality, et cetera. I think that's a very important point. I've recently had practical experience of that, because we in my lab are trying to engineer an artificially conscious agent, and it is just trying to meet its needs in a little environment that it lives in. We found that the dynamics we were looking for were not sufficiently impressive because its needs were not sufficiently incompatible with each other. I think the ideal that you're inviting us to consider would have a lot to do with what kind of prior preference distribution the agent has. Unfortunately, as far as I can tell, in our case, both in my own case and in the case of my patients and my friends, like Karl Friston, we have incompatible needs. We have needs which are radically in conflict with each other. There are trade-offs, not only in terms of complexity and accuracy in relation to each one of them, but also in terms of what you have to sacrifice in order to get the other thing. There's a further dimension, and I'm speaking now primarily about us humans, but I think it spans the two categories, and that is that circumstances change. You develop a set of beliefs hierarchically arranged in terms of your certainty. So there are deeply automatized, low-level, baked-in deep predictions that you formed in infancy as to how best to deal with these competing needs in the environment you were in then. This is no longer so plastic. Now you find yourself in an entirely different environment where it's far from being the optimal generative model. We get back to what we were saying earlier with Thomas: how do you go about changing it then? That's inducing the esophagus to allow material to flow in the other direction. I think it's a big problem for us that fundamentally we have needs which are irreconcilable at some level, in some context, at some stage in the lifespan, and the model is not sufficiently dynamic precisely because of its depth, because of the hierarchical nature of the generative model, which in my mind...
[33:19] Chris Fields: Raises an interesting question from the technological or engineering point of view: If we are acting as engineers, designing environments and designing systems to operate within those environments, or even if we're designing systems to operate on Mars or under the ocean, some environment that we're not designing, an open environment, to what extent can we design systems that don't have conflicting needs? And if we're successful in that, then what seemed to be the deepest in this class of problems simply disappears. The one that you're referring to in particular, Mark, seems to disappear. Maybe that's an equally good framing of the question. Is it possible to have an agent that only has compatible needs? I don't know how to answer that question.
[34:41] Mark Solms: I think we do it all the time. I think that we are the exception. The more complex the organism, the more likely it is to have multiple competing, conflicting needs. The simple AIs that we are so familiar with—this is a large reason, though not the only reason, why they're not conscious: they don't have sufficient uncertainty about how to achieve their ends. But I saw Karl's hand going up. Karl is ever so polite.
[35:19] Karl Friston: I was a smoker. I'm trying to keep all these themes in mind because I think they can be tied together in a pragmatic way, which is why I put my hand up, because there are so many ideas here that I was forgetting the first ones. When Michael was asking whether he could find some therapist to make him like football, I immediately thought, well, people do that to stop smoking. So the smoking example is quite prescient here. It also ties back to therapy. When we talk about engineering artificial intelligence or artificial artefacts to have this kind of introspection, this metacognition, exactly the same question could be asked in a therapeutic context: not can we engineer somebody, but can we provide the right experiences, the right interactions and possibly the right drugs that allow them to readjust and retune their preferences in the right way. I think that perspective speaks to a really simple knob or quantity that would accommodate all the perspectives that we just heard. It's the prior preference over the outcomes or consequences of behaviour. If you now read the preferences as defining the outcomes that characterise the kind of thing that I am, then internally consistent needs or preferences, things that I work towards in terms of my intentional behaviour, can be quantified in terms of the constraints on those outcomes that drive me to very precisely preferred outcomes or, in other contexts, less precisely preferred outcomes. That precision is exactly the palpation of uncertainty, Mark, that you were talking about. And from a mathematical perspective, Thomas's phrase "taking your own temperature" is apt: you read precision as inverse temperature when we're treating temperature as an attribute of our probability distributions, the ones that encode our Bayesian beliefs. That is a lovely way of phrasing the kind of metacognition that we're all talking about here. More specifically, it's the precision of your prior preferences. If you can control that, then think about the maths of what drives behaviour in terms of the expected information gain and the expected realization of your prior preferences, sometimes read as expected value or expected cost. If you reduce the precision of your prior preferences, so that, for example, I no longer like smoking...
[38:40] Karl Friston: I am not the kind of thing that smokes. Then what will happen is that you will now be emphasizing the epistemic part, the epistemic motivations of intentional behaviour towards increasing expected information. Again, you become more exploratory. I think that is the kind of mindset that you want to induce in a patient who's become very stuck in their ways. I am the kind of thing that is in pain. I am a patient in chronic pain. This is the simplest explanation for all the interoceptive, exteroceptive and pro-social evidence at hand. I am a chronic pain patient. So if you can then reduce the precision of the beliefs about the outcomes that provide evidence that this is the simple explanation for the kind of thing that I am, then what you're talking about is either a pharmacological or psychotherapeutic endeavour to relax the precision of those particular prior preferences to help them get out of that rut. So to my mind, that would be the therapeutic engineering, or rather allowing the patient to re-engineer their prior preferences, so they explore other ways of being, other interpretations, other notions that best explain the myriad things that they experience, and, of course, that should in principle change the evidence that they seek. And if they're now seeking a different kind of evidence, they now have an even better opportunity to engage in the epistemic foraging that comes from having relaxed precision over their prior beliefs. And then they can build new self-models, new generative models of the things that they prefer. We've used preferences, we've used needs; from a mathematical perspective, these are just prior beliefs about the outcomes that characterize the kind of thing that I am, the kind of self that I am. So I think that from that perspective, all of these notions hang together, even through to what Chris introduced in terms of system one versus system two. The very notion of metacognition immediately tells you you've got a separation of timescales and, more importantly, a deep generative model. And that's really important, because if you want to now contextualize the precision over your needs, over your preferences, then it has to be contextualized on something that is actually represented in the generative model. And that, of course, is going to be another latent state, another latent cause that now can possibly be labeled as "I am in this attentional or intentional state." And it could be labeled with affect, with emotions. It's all about palpating uncertainty, feeling uncertainty in the sense of recognizing that there is a certain precision that is apt for this context. Some patients, you might imagine, fail to recognize that there's an alternative way of preferring or an alternative way of being. If I were an engineer, what we're talking about is basically the Kalman gain in a hierarchical Gaussian filter. So, as a control engineer, you can use exactly the same mechanism to recognize that these particular prior preferences have been afforded too much gain and it's not working, and then I'm going to have to change it. And you get into all sorts of various instantiations of that, such as empowerment. You could speak to Daniel Polani about recognizing that I'm stuck in the corner and I'm never going to be able to resolve any uncertainty, so I've got to change my preferences in terms of what I'm aiming for.
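(A rough numerical illustration, not from the conversation: a minimal sketch in which policy selection balances expected information gain against prior preferences, and relaxing the precision over those preferences shifts behaviour toward exploration. The two policies, their scores, and the precision parameter gamma are hypothetical.)

```python
# Minimal sketch: relaxing the precision of prior preferences shifts the
# balance toward the epistemic (exploratory) term. All values are made up.
import numpy as np

def softmax(x):
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())
    return e / e.sum()

# Two candidate policies with assumed scores (in nats):
#   info_gain       = expected information gain (epistemic value)
#   log_preference  = expected log prior preference (pragmatic value)
policies = {
    "explore_new_option": {"info_gain": 1.2, "log_preference": -2.0},
    "stick_with_habit":   {"info_gain": 0.1, "log_preference": -0.2},
}

def policy_posterior(gamma):
    """Posterior over policies given a (hypothetical) preference precision gamma.
    Here negative expected free energy is scored as info_gain + gamma * log_preference."""
    neg_G = [p["info_gain"] + gamma * p["log_preference"]
             for p in policies.values()]
    return dict(zip(policies, softmax(neg_G)))

for gamma in (4.0, 1.0, 0.25):
    print(f"preference precision {gamma}: {policy_posterior(gamma)}")
# With precise preferences (high gamma) the habitual policy dominates; as the
# precision is relaxed, the epistemic term takes over and the exploratory
# policy becomes more probable.
```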
[42:04] Michael Levin: That's especially interesting because we've been trying to push that thing down into increasingly minimal systems. I have a strong suspicion that some failures to regenerate, for example, mammals not regenerating their limbs and things like this, are partially that. What you need to do is soften the priors of the cells. They have really gotten stuck in this idea that we simply can't regenerate. Some degree of softening is part of a therapeutic strategy to get it going again. There's one early bit of data that could be interpreted in this direction, which is that our intervention is two things. One is what we call a biodome, a wearable bioreactor that sits on the limb amputation wound and provides an aqueous environment and protection. Then there are some payloads, some ion channel drugs that we put in it. In frogs, one of the things we found is that an empty biodome by itself already provides some degree of regenerative enhancement, not as good as with a payload. In a mammal that would be obvious, because in an aqueous environment currents can flow better than in dry air. But the frog is already in a bucket of water. Why do you need a biodome on top of a frog leg in a bucket of liquid? One hypothesis is that if you're a cell in a wound at the end of a frog limb sitting in a giant, effectively infinite bath, your evidence is that you have zero ability to influence your micro-environment. Anything you put in your micro-environment immediately diffuses away. You're powerless. You have no hope of doing anything, and you might as well scar over and be done with it. In a biodome, you've got a very protected micro-environment. If you do things and take measurements of your environment, you find out that your efficacy over that environment is actually much higher, because now there's this protected little environment where, if you want more potassium out there, you can actually do that. Maybe we can give some of these things evidence to soften in general, and then direct them in a way that leads them to the notion that the prior model that prevented us from going down this regenerative pathway wasn't necessarily how things have to be, and that we could look at other ways to traverse that morphospace. We're now thinking about various plastogens and other ways to get that idea across.

Possibly related is something else we found: the anthrobots that we make from human tracheal epithelia, if you look at their epigenetic age, are younger than the cells they come from. They roll backwards. One could also make this crazy hypothesis that the age that cells think they are is not set in stone by prior events, but is an estimate based on their history. If you have a history of being part of an elderly patient, but now you're in a very embryonic-like environment — you're being bent into a pretzel all of a sudden, there are no other cells around, and there's a bunch of embryonic genes coming on — part of that is saying there's some evidence that you're actually an embryo, and you should revise your age estimate. That leads to another research program of asking: could we crank on that knob? Could we provide more evidence? We have some ways of doing that: provide some more evidence that you're an embryo, and see if we can roll the age estimate backwards.
[46:08] Mark Solms: That's absolutely fascinating. Sorry, go ahead, Karl.
[46:12] Karl Friston: That's remarkable, and it fits exactly with the role of precision in defining how old you are, in the simple sense that if you're young, you're genuinely naive. If you're naive, there's lots of expected information out there to hoover up with the right behaviour. But that can only be realised if you don't have very precise beliefs. If you have very precise, certain beliefs, there is no epistemic affordance to doing anything. When you're young, you have to have this attenuated precision to be impressionable, to learn and to respond very quickly to any sensory evidence. Literally, the precision is the learning rate. And of course, when you get old and wise and grumpy and you're completely in control of your world, you don't need that anymore. You can't go bungee jumping or to discos and the like. I think that's another beautiful example of the importance of precision, not just in terms of knowing what kinds of uncertainty to resolve or indeed what to attend to, but in terms of how impressionable you are, how readily you learn. You said something else very important though, which I wanted to foreground: the way that the cell responds. You have to empower the cell to elicit a response, by virtue of putting it in a context where its actions matter. And I think that that kind of empowerment now underwrites all of this, because what I was talking about was the ability to engage in some epistemic foraging or respond to epistemic affordances. But of course, in order to do that, I have to be able to move. I have to be able to change the world by some kind of action.
[48:13] Michael Levin: Yeah. Mark, did you want to?
[48:15] Mark Solms: I was just wondering. It is absolutely remarkable how your mind works, going from one realm to another; it amazes me endlessly. And the work that you're doing in consequence of it is just incredible. What you said makes complete sense to me, and so does Karl's elaboration of it: the loosening up of the precision on the priors of those cells and rendering them epistemic, making them more open to epistemic foraging. I can see the enormous value and scope of it. But I wanted to ask you, is there a two-step process? One thing is to make the cell more labile, and then another thing is to guide that lability. To what extent can you count on this new labile behavior being adaptive for the organism? Maybe it just goes on a whole world tour on its own account. Then the next thing you have is neoplastic disease.
[49:40] Michael Levin: In fact, my suspicion is that's exactly what will happen. Currently one of the most popular ways to try to achieve these kinds of outcomes is what they call cell reprogramming. The idea is that, whether by Yamanaka factors or some other way, you take cells and you roll them back to an undifferentiated state. I have a feeling that's exactly the issue: you can crank the plasticity, but if you don't vector them in some way, you can get neoplasia and tumors. Our approach is going to be twofold. There should be some kind of plastogen, but then there should be, at least for us, typically a bioelectric state that says: you're an embryonic limb bud; do the thing that embryonic limb buds do and make a leg. I will say there is a very interesting phenomenon, which we don't fully understand, which is the context sensitivity of this whole thing. In tadpoles there are stages where they do not regenerate their tails, and that's an opportunity for us to try to make them do that. You cut the tail, or in an adult frog you cut the leg; there is one reagent called monensin, a sodium ionophore, and we use it to induce depolarization. Monensin causes legs to regrow in a leg wound and it causes tails to regrow in a tail wound. The treatment is exactly the same. There's very little specificity in the treatment as far as the two outcomes are concerned. We don't say what to grow. We say grow whatever normally goes here, and it figures out the rest. We have never seen the tail try to make a leg or the leg try to make a tail or try to make anything else. We can, in other contexts, push it; we can make the gut make an eye. In this particular context, we didn't need to worry about it. The cells knew what belongs there and they just needed that push to do it. It's going to be some combination of managing what they already know, how strongly they're committed to it, and how hard we need to work to get them to do something different. To some extent, they already know what belongs there. You can see this from very old experiments where they grafted an amphibian tail to the middle flank. Over some period of time it would remodel into a limb. The tail tip cells would start to become fingers, even though there's no injury at the tip of the tail, but there's a large-scale plan. There's an error with respect to the large-scale plan, and it propagated down into the molecular networks needed to turn tail tip cells into fingers. To some extent, there appears to be information about what ought to be there. If we can push it in that direction, it's still not clear to me how much micromanagement we're ever going to need to do on that.
[52:46] Chris Fields: So this seems prima facie very different from the human psychology case, where the future actions for an individual seem very open-ended. There's no one particular thing that needs to be done and just add a little motivation and it happens.
[53:15] Mark Solms: I think in human psychology we do have reflexes and instincts, phenotypic predictions as to how to remain within your viable bounds. It's just that they're not memories. They are fixed action patterns that are hardwired. We can supplement them with alternatives that are more context sensitive and nuanced and flexible. But we do have that sort of bedrock of phenotypic predictions. This is a pain situation: withdraw. This is a dangerous situation: flee. This is a frustrating obstacle: attack. Unless I'm misunderstanding you, I think that would be the psychological equivalent of what Mike's talking about. But I hesitate to say psychologically equivalent, because what Mike's talking about is psychology too, as he's taught us to recognize. At the level of more complex organisms, the ones that we are more familiar with, even we have very high precision priors about how to stay within our viable bounds that are inviolable. They can only be supplemented. They can't be extinguished.
[54:56] Michael Levin: And this is at the edge of things, but could we go in the opposite direction for the human case that Chris was just asking about? Rather than looking down at the instincts and things like this, go large scale and say that there are people with a large-scale life mission. You can try to suppress it, but there really is a large-scale trajectory, not so much a plan, because it's going to have different ways of being implemented. There are pathological versions of this, but I'm thinking more of the positive case where someone has a life mission. And once you release the brakes on it, they're going to go do the thing, the same way that the cells are going to say, no, we really should be a limb; let's do whatever we have to do to turn this thing into a limb. Do you see that in your patients and in people in general?
[56:00] Mark Solms: At the one level, you're saying you're going from the one extreme to the other. So let's start with the first extreme: what Panksepp calls the seeking drive, which is a homeostatic need, the epistemic foraging that Karl was talking about earlier. There's a phenotypic prediction, which, if I can translate it into words, is "seek and ye shall find." We explore, we are curious, we're engaged with the world, we're interested. At the other level, which is where the five of us live, look at us. We've all prioritized that drive and we are all living lives like that, where all of us are on a mission. It's great. But because of this balance that I was speaking about earlier, with these conflicts, it comes at the cost of other drives, unfortunately, for all of us. It can go too far. And there lies mania and megalomania. That will bring us back to the present.
[57:13] Karl Friston: There's depression as well. I was thinking about the analogy of the frog limb being dissected and removed. That's a traumatic intervention. You're massively changing the context and you're creating a delusion in these cells that they have no capacity to grow or influence their environment. That would be a little bit like somebody with severe agoraphobia who has now lost somebody, has developed a social phobia, will not go out of the house and is completely socially withdrawn; you're adopting a prior which is atypical. The biodome intervention is to now empower them to go out there and seek evidence, so that in fact they can still be part of a social ecostructure or niche, allowing them to relax the prior belief "No, I am a depressed person who cannot interact with other people." In response to Chris's observation and Mike's question, the natural way for people to grow back the arm is exactly what Mark just said. It's to express mental well-being through being curious under constraints. Those constraints are usually compassion to others, but it's effectively all the hallmarks, all the social norms, that register as: yes, this is me. You're in a state of well-being, because with any deviation from that, a psychiatrist or a psychotherapist is going to call you a patient and will try and get you back growing again.
[59:08] Thomas Pollak: And I think that's why introspection can be so damaging sometimes, because if you have introspection without the ability to fix whatever problems you find, that feels like a recipe for depression in any kind of system. But I was struck by the point you were making, Karl, about changing the precision of prior preferences as an important methodology towards well-being. In some traditions, and I know you've done some stuff looking at meditation traditions, essentially what you're trying to do before the well-being kicks in is cultivate a kind of wholly disinterested state. And in fact, what that is, it's a notching down of the precision of your prior preferences, in the extreme case to the state where the uncertainty is no longer there, the prediction error is no longer there, and you get these incredible cessation experiences where experience stops altogether. But on the other side of that, you then get this post-reboot state where everything suddenly appears a lot more plastic. Suddenly your Eros in life becomes much more apparent, your daimon or whatever it is pops up beside you and starts showing you the way. That reboot terminology is analogous to what we're seeing in psychedelics: you reboot the system, and if you have the appropriate scaffolding pre and post, you can help steer the system the right way. And actually, we were writing something with Mike about this, about the peculiar persistence of this reboot analogy in medicine throughout history: it's there in ECT, it's there in cardioversion, it's there in fecal microbiota transplantation, it's there in various immunotherapies, this notion of a big injection of entropy that then presumably reduces the precision of whatever these preferences are. And then after the reboot, there's the opportunity to reset those set points, whether they're homeostatic set points or whatever, and re-incline the system towards something a little bit more adaptive, eudaimonic.
[1:01:37] Karl Friston: I think there's an important nuance to that summary, which speaks to Mike using the word "plastogen" earlier on. You're implying that ECT and psychedelics have this neuroplastic potential to reshape your landscape, so that the valleys or the ruts that you were once in are now pointing in a different direction, or indeed the precision is now relaxed to the extent that those ruts open up. I'm using those words because this is something Robin Carhart-Harris has been trying to emphasize about the psychedelic angle: that there is an immediate short-term effect on synaptic efficacy, which would be a bit, if you're an engineer, like introducing simulated annealing — you're increasing the temperature, or reducing the precision, the inverse temperature, to jump out of your minimum, jump out of your rut. But over the ensuing weeks and months there's also an increased neuroplasticity aspect to it. So not only have you flattened your landscape to explore other ways of being and other options, but there's also a certain plasticity in the self-carving of the canals, such that you can catalyze your new landscape. Those short- and long-term therapeutic aspects of intervention are important. It's not just about the influence in the moment, but also about the capacity to learn subsequent to that reset moment.
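(A toy illustration of the simulated-annealing analogy, not a model proposed by the speakers: temperature stands in for relaxed precision, and a state stuck in a shallow rut escapes to a deeper basin only when the temperature is transiently raised and then lowered again. The landscape, temperatures, and step sizes below are made up.)

```python
# Toy sketch of the simulated-annealing analogy; temperature here stands in
# for the inverse of precision. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    """1-D landscape with a shallow rut near x = -1 and a deeper basin near x = +2."""
    return 0.5 * (x + 1) ** 2 * (x - 2) ** 2 - 0.3 * x

def anneal(x0, temps, steps_per_temp=300, step_size=0.3):
    """Metropolis sampling at a sequence of temperatures (hot to cold)."""
    x = x0
    for T in temps:
        for _ in range(steps_per_temp):
            proposal = x + rng.normal(0.0, step_size)
            dE = energy(proposal) - energy(x)
            # Always accept downhill moves; accept uphill moves with prob exp(-dE/T).
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                x = proposal
    return x

start = -1.0                                   # begin stuck in the shallow rut
cold_only = anneal(start, temps=[0.05])        # low temperature only: stays trapped
reheated = anneal(start, temps=[2.0, 1.0, 0.5, 0.2, 0.05])  # heat, then cool
print(f"cold only:       {cold_only:.2f}")
print(f"reheat and cool: {reheated:.2f}")
# The brief high-temperature (low-precision) phase typically lets the state hop
# out of the rut near x = -1 and settle in the deeper basin near x = +2;
# without it, the state stays roughly where it started.
```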
[1:03:17] Thomas Pollak: Yes, that's why a lot of the psychedelic literature, for example, has an unwritten assumption that whatever the reboot is, it's a positive thing that's going to incline people toward the positive. There are so many examples now, whether it's the horrible music festival in Israel, where people have been in these very plastic states and something awful has happened. The outcome has been a long way from inclining them toward something positive. I agree. The longer-term scaffolding is hugely important.
[1:03:51] Mark Solms: So it comes back, amazingly, to the identical question that I posed to Mike: is it a two-stage process? You increase plasticity, and then how much do you have to guide it? It's for precisely the reasons we're talking about now that I'm always very alarmed when colleagues working with psychedelics speak of doing away with the psychotherapeutic aspect of it. It's psychedelic-assisted psychotherapy. I can't overemphasize the importance of the psychotherapeutic use of that window of opportunity, because it's by no means necessarily going to reset in a benign or adaptive direction. I like the analogy with ECT and even with deep brain stimulation. Helen Mayberg, when she had those initial very promising findings, even then was saying the DBS is step one. Then you need to take the patients into psychotherapy. They wouldn't have been able to use psychotherapy if they didn't have the DBS, but the DBS is not a cure.