Watch Episode Here
Listen to Episode Here
Show Notes
This is a ~1 hour meeting with Richard Watson, Alexey Tolchinsky, Mark Solms, and Karl Friston, where we discuss issues of memory (especially the role of forgetting) in diverse intelligence (human patients and beyond), and a bit on dreams and psychoanalysis. The original question from me was motivated by some findings on the effects of induced forgetting in models of unconventional cognition (and more coming soon).
CHAPTERS:
(00:00) Role of forgetting
(06:22) Overfitting and generalization
(10:45) Accuracy minus complexity
(21:13) REM sleep and transference
(24:40) Choosing futures and pasts
(31:18) Cellular psychotherapy ideas
(34:58) Dreaming of cell phones
(39:47) Photographic memory costs
(44:18) Precision and future paths
(52:25) Collective cellular identity
PRODUCED BY:
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Michael Levin: What I'm interested in is to get all of your thoughts on the following question, the role of forgetting in particular, the role of losing memories, if you even think that happens, but the role of forgetting in agency and the potentiation of agency, and just in general, what role you think forgetting plays in the mind and in the capacity to have a significant mind, like how important is forgetting? How do you see forgetting and so on? So that's what I'm interested in. And yeah, I can give you the context of why I'm asking this, but that's what I'd love to hear about.
[00:42] Mark Solms: I'm sure we all remember the context. If I may, I will begin. When I read your original description, the thoughts that occurred to me were exactly the thoughts that Karl then articulated over the emails about model complexity and the need to balance accuracy with complexity, and Karl drawing attention to how during sleep, when a lot of memory consolidation goes on, consolidation, of course, involves both what we retain and what we forget. It's a selective process. And Karl drew attention to how we believe-- he says we believe-- actually, it began with him and Allan Hobson believing, and now we all agree with them, that during sleep there's a reduction-- a getting rid of redundant synapses or synaptic connections, because otherwise you have too complex a model. And this is the ideal time to do it because nothing's happening. There's no new incoming error. So those thoughts that Karl articulated were exactly the thoughts I had. So then I thought, well, now that Karl's expressed my thoughts, which were actually derived from his thoughts, I'll have to come up with new thoughts. And these were the additional thoughts. There are actually just two of them. The first is that there's an interesting problem in infancy, when you've got a hell of a lot to learn and new things happen all the time. How do you balance this business at the very beginning of life? And how do you retain any kind of a stable model when the world is so utterly unpredictable? There must be some mechanism whereby there's some continuity in the kind of base model. Otherwise, you're just totally fragmented and every day wipes out the beliefs that you had established the day before. And I would like to link that with the fact that in the first two years of life in humans, there's pretty much no declarative memory. It's all non-declarative. So things go from short-term memory into non-declarative long-term memory.
They can't retrieve those memories and rethink them because that's what non-declarative memory is.
[03:24] Mark Solms: Things just go straight into these automatic memory systems. And the way that I think about those subcortical non-declarative memory systems is that they carry high precision. This is on the view that consciousness is uncertainty. That's what consciousness is for: to feel your way through situations where you're not so confident about your predictions. You're palpating them and testing them against the incoming errors. And this is not happening in relation to the memory systems of infants. Everything goes into long-term non-declarative memory. So I think that there's some kind of biasing, some kind of excessive confidence. I don't know if that's right, but that's my thought. And then you can link that with the fact that there's so much REM sleep in infancy. It used to be thought that it's during REM sleep that all the memory consolidation is going on, but in fact, it turns out to be the opposite. It's during non-REM sleep that all the memory consolidation is going on during sleep. And REM sleep is a highly entropic state. So it's dealing with uncertainties, and it's conscious. You know, you're dreaming during REM sleep. So you're in a state of uncertainty by physiological measures and by psychological measures, in the sense of the subjectivity of a highly emotional, conscious state of mind. I have this view, and it's the last thing I'll say, about REM sleep, which incidentally is also characterized by highly unstable homeostasis: we go out of kilter across a great many homeostatic parameters during REM sleep. So it really is a state where you're in a lot of uncertainty, even at the level of autonomic homeostatic mechanisms. So I'm of the view that during REM sleep, we are actually resisting, like we do during infancy, too much model updating, too much forgetting. We're wanting to retain non-declarative memories against the accumulating errors of the day. It's trying to explain away.
In other words, trying to forget, trying to not remember, trying to not update the existing non-declarative model. So those are my opening shots.
[06:10] Michael Levin: Great. I made a couple of notes because I want to come back to the whole sleep thing, but maybe we'll go around with this topic. Who wants to?
[06:22] Alexey Tolchinsky: I want to build on what Mark just said, which is very useful, and to add to your work Erik Hoel's overfitted brain hypothesis, which was new to me, because I studied your dream and sleep work thoroughly and I watched your debate with Allan Hobson with great pleasure. So Hoel suggests that one of the things dreams are useful for is that they reduce overfitting, because what we've learned in the day is being placed in a wildly different context. It allows us to loosen the priors and to see what can generalize. And incidentally, he's a writer, he writes fiction. He said fiction has an additional function for that. When we fantasize, we do that. Because when we hold on to a very precise notion, we cannot generalize. And I think that's the general theme in forgetting and building. So what I think, Michael, you said: when we remember, when we recall, we build agency, we build a higher level, we build a macro. But when we forget, we sharpen the causal signal. This is the sculptor's chisel. So then one of the things we optimize is exactly generalization, because if we use the precise memories we've learned, we cannot use them in other instances. The metaphor for that is Funes the Memorious, the story by Jorge Luis Borges. A man fell from a horse and lost the ability to forget. And then he couldn't recognize his dog anymore, because at 3:45 and at 4:05 there was a slightly different angle of view and a slightly different shade of the fur. So he lost concepts, he lost abstraction, he lost pattern recognition. And incidentally, speaking of agency, he lost self, because self is a mental object and we must abstract to retain some coherence and some continuity of the self. And in neurology, I suppose semantic amnesia is close to that, where concepts are gone and we only have details. We sort of live in the here and now. It's the recent self without any continuity to the past. But generalization is a balancing act.
So these are the cases where there's not enough generalization. But when there's too much generalization, we have another issue like Alzheimer's when it starts, you know, we start losing the recent details. And in that sense, self lives in the past. You know, we have some concepts, but we will lose the recency. We stop updating the self. And also generalization can be skewed or biased. Like in PTSD, a flashback is re-experiencing now in the same context what happened back then in the circumstances of trauma. So this is incorrect generalization, overgeneralization of the phobic memory. And I suppose in depression or in OCD, when we ruminate, it's again, the negative experience of the past is casting a shadow on the present and on the planning for the future. So I think that this forgetting serves a function of optimizing generalization. And exactly like Mark said, also metabolic function, because every memory trace is metabolically costly and we just can't afford to hold on to everything. I mean, I think in physics, the structure that remembers everything is a black hole. It encodes everything on the event horizon at maximum density. So that's the kind of structure that remembers everything. Without forgetting, we are dysfunctional, including the self-functioning. But I've talked too much, so these are my thoughts on what Mark said.
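The Funes failure mode Alexey describes has a direct machine-learning analogue: a model that memorizes every training instance verbatim cannot generalize to anything it hasn't seen, while a model that forgets the details down to a simple rule can. A minimal toy sketch (the names, numbers, and linear setup here are illustrative assumptions, not anything from the conversation):

```python
import random

random.seed(0)

# Noisy observations of a simple underlying rule: y = 2x + noise.
train = [(x, 2 * x + random.gauss(0, 1.0)) for x in range(10)]
test = [(x, 2 * x) for x in [0.5, 3.5, 7.5]]  # unseen inputs, true rule

# "Funes": perfect memory of every instance, no abstraction.
memory = dict(train)

def funes_predict(x):
    # Returns None for any input not literally seen before.
    return memory.get(x)

# "Forgetting" the details: keep only one number, a least-squares
# slope through the origin, and discard the individual points.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

def simple_predict(x):
    return slope * x

# The memorizer cannot say anything about unseen inputs...
assert all(funes_predict(x) is None for x, _ in test)
# ...while the simple model generalizes with small error.
errors = [abs(simple_predict(x) - y) for x, y in test]
print(round(slope, 2), [round(e, 2) for e in errors])  # slope near 2, small errors
```

The memorizer reproduces its training data exactly but is mute on everything else; the simple fit throws away every individual datum and keeps one number, and for precisely that reason it can answer questions it was never asked before.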
[09:43] Richard Watson: Alexey, can I check that I understand the connection between what you were saying about reducing overfitting and what Mark was saying previously? So the connection is that by resisting the update of long-term memory with particular instances, that's what Mark was talking about, you are fostering an ability to avoid overfitting to those particular instances, right?
[10:09] Alexey Tolchinsky: I think that memories are malleable, even Pavlovian memories. We update and change the context. We weaken and dampen them. If we cannot let go of some details, we cannot think, we cannot disambiguate, exactly like that dog that is different, if that's...
[10:30] Richard Watson: Even the things which are the same.
[10:31] Alexey Tolchinsky: Right.
[10:40] Michael Levin: Karl, do you want to?
[10:45] Karl Friston: Say anything about that? Yeah, so lots of themes here. Just to address that last question, from the point of view of machine learning and physics, that point about generalisation being the same thing as avoiding overfitting, I think it's absolutely fundamental. So, you know, it's fairly straightforward. I think David MacKay was the first person, or perhaps even before that, the statisticians Kass and Steffey, to prove that the ability to generalise is a measure of the evidence for your generative model of the way in which your data, or your world, supplies data. And the log of the evidence is just the accuracy minus the complexity. So coming back to Mark's point, that means that to generalise is to have the simplest accurate explanation or model or account of everything you're trying to explain. So mathematically, they are the same thing. And if one elevates that notion of model evidence or interprets it now in an evolutionary context, or, sorry, more generally in a selective context, again, coming back to Mark's notion that we are selecting things, then what is selected is just simply the thing that is most likely to be there. And the thing that is most likely to be there, with a nod to survival of the most likely, is those that have the greatest marginal likelihood. And model evidence just is the marginal likelihood. So I think mathematically all these things are the same thing. So to summarise that: the things that are selected, the last man standing, as it were, is just the most likely thing that you're going to see. That likelihood is always expressed as accuracy minus complexity. And thereby maximising the marginal likelihood means minimising the complexity. And that means that you will have the best model that is able to generalise. So the question then just, I think, resolves again formally to what timescale we're talking about.
I mean, the selection process, you could argue, unfolds at all timescales, but is exactly the same kind of process. So you can have attentional selection over, say, 300 milliseconds to several seconds. You can have action selection. We select the most likely thing that we're going to do next over multiple time scales, right the way through to, well, you could even argue in neurodevelopment from the perspective of neural Darwinism and the theory of neuronal group selection if you wanted to, but you can jump right through to natural selection at a very, very slow time scale. So it's the same thing going on every time scale. It just looks different and we have different disciplines and different ways of talking about these things. But it's the same underlying, almost tautological explanation for the way things are.
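The identity Friston states can be written out; in standard variational notation (these symbols are conventional, not taken from the discussion), the log evidence for data $y$ under a model with parameters $\theta$ is lower-bounded by accuracy minus complexity:

```latex
\ln p(y) \;\ge\; \underbrace{\mathbb{E}_{q(\theta)}\!\big[\ln p(y \mid \theta)\big]}_{\text{accuracy}} \;-\; \underbrace{D_{\mathrm{KL}}\!\big[\,q(\theta) \,\|\, p(\theta)\,\big]}_{\text{complexity}}
```

Maximising this bound rewards fitting the data (accuracy) while penalising posterior beliefs $q(\theta)$ that stray far from the priors (complexity), which is the sense in which the simplest adequate model has the greatest evidence and generalises best.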
[14:14] Karl Friston: It couldn't be any other way from a mathematical perspective. Dreaming is interesting because that speaks to a particular timescale, of a diurnal sort. And it's interesting then to link that to memory. And something that I think both Alexey and Mark alluded to was that to consolidate is to forget selectively. And I often think of this in terms of a sculptor creating a figurine, for example. It's what you remove which gives it its form. And therefore, if I now read forgetting as removing the right stuff, minimising the complexity in the right kind of way, then forgetting is just a particular kind of learning or model optimisation that basically consolidates the stuff that is not removed. So it's not surprising that much of the process of selection is taking stuff away, either by death or by ignoring it, or by some synaptic homeostasis while we're asleep. So forgetting is just the other side of the coin from learning. Without forgetting, you couldn't learn; without learning, you couldn't forget. They're both, I think, valid descriptions. There's another conversation we could have here, which is not so much biological but more what you would find in economics and state-space modelling, which is Bayes-optimal forgetting under volatility: adapting the learning of certain things, and in particular the learning rate, which is just a precision. I think that's another sort of identity or isomorphism which is important to remember: precision is just a learning rate. So if you just think about any differential equation and you apply some precision or some parameter to some prediction error that's driving changes in what you're representing or learning, then the units of precision are per unit time. So precision is a learning rate, which means that if Mark is right and children have to learn very, very quickly, then they're going to be assigning a lot of precision to their sensorium relative to their prior beliefs, for example.
So on that view, there's a really interesting link between volatility in your environment and the right precision or learning rates that you bring to the table to match that volatility. And, you know, I see this in many, many different fields, ranging from the Kalman gain in Bayesian filtering onwards. If you've got very, very precise data, you pay a lot of attention to them, i.e. you assign high precision, i.e. you increase your learning rate in the face of those data. But if the data are really, really noisy, or you've got your eyes shut during sleep, then you wouldn't afford the same kind of precision in state estimation.
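Friston's identification of precision with a learning rate can be sketched with the scalar Kalman-style update he alludes to; this is a minimal illustrative example (the function name and numbers are my own assumptions, not from the discussion):

```python
# Scalar Bayesian belief update: the posterior mean moves toward
# the observation by a gain set by the data's share of total precision.

def precision_weighted_update(prior_mean, prior_precision, obs, obs_precision):
    """One-step update of a Gaussian belief given one observation.

    The gain (the effective learning rate) is the fraction of the
    total precision contributed by the observation.
    """
    gain = obs_precision / (prior_precision + obs_precision)
    posterior_mean = prior_mean + gain * (obs - prior_mean)
    posterior_precision = prior_precision + obs_precision
    return posterior_mean, posterior_precision, gain

# Precise data (a high-precision sensorium): large gain, big update.
mean_hi, _, gain_hi = precision_weighted_update(0.0, 1.0, obs=10.0, obs_precision=9.0)

# Noisy data (eyes shut during sleep): small gain, almost no update.
mean_lo, _, gain_lo = precision_weighted_update(0.0, 1.0, obs=10.0, obs_precision=0.1)

print(gain_hi, mean_hi)  # 0.9, 9.0 -> fast learning
print(gain_lo, mean_lo)  # ~0.09, ~0.91 -> slow learning
```

The gain, i.e. the effective learning rate, is just the observation's relative precision: precise data move the belief almost all the way to the observation, while noisy data barely move it.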
[17:43] Karl Friston: In an evolutionary context, I first came across this in Ernst Mayr's The Growth of Biological Thought, where he was telling a story that if you have Drosophila fruit flies and you rear them in a volatile environment by manipulating the temperature, you increase the mutation rate. So they forget, genetically or epigenetically, the kind of environment to which they are best fit. I can't remember the details. And then Stuart Kauffman came in with a sort of second-order selection, selection for selectability. Again, it's just mathematically the same thing: the selectability is just the rate of forgetting, which is just the precision at this particular level of optimization. I think Stuart Kauffman went on to actually revisit that second-order selection, which I think you could easily read as forgetting, and just basically matching your learning rate, your precision, your rate constants to the actual volatility of the world you're trying to explain. So to come back to the neurodevelopmental thing, which I hadn't really thought about, that basically means you'd expect things that have a lot of learning to do, to get a consolidated, good, generalizing generative model of their world, to learn very, very quickly. And that means that they're going to forget also very, very quickly, until they can weed out what things are invariant over time. The last thing, more of a question: in terms of declarative memory, it's interesting that during REM sleep, unless you've woken up, you don't actually remember your dreams. I think there's another sort of dynamic in play here: in order to not forget, you have to literally do reinforcement learning, literally in the sense in which the word reinforcement was originally introduced, which is to reinforce a synaptic connection.
So although in dreaming, well, in my world, in terms of simulating these processes, you are generating some sort of fictive content in order to weed out the redundant synapses and associations to minimise the complexity. The imperative here is to get rid of synaptic connections. You really do not want to retain them. So there must be another neuromodulatory mechanism that says: no, this was actually activity induced by real exposure to the sensorium, and I'm going to remember this. I'm going to lock it in in some way, of the kind that we do during waking. But that's not what's going on in rapid eye movement sleep. But maybe during slow-wave sleep. I haven't kept up with that literature. Mark, you look as though you've got something.
[21:13] Mark Solms: Well, I just agree with everything that you're saying. The slow wave sleep is a much more predictable process. That's what the slow waves are. You know what waves are coming next. It's a much more passive process. There's much less mental work, predictive work going on. It's just, I imagine, accepting, as it were, the errors that have accumulated. The active process is resisting the updating. It's fighting against the errors, you know, so forgetting. And I agree with you. Dreaming is an eminently forgettable process. It's really one of the most striking features of dreams is you can't remember them. So, you know, they specialize in forgetting. It's trying to explain away everything that's trying to make me update my model that I don't want to. So I think that things that are relatively superficial, in other words, tolerable by the simple generalizable model, superficial things which don't actually question your core beliefs, those get encoded, but things which threaten your core beliefs, your generalizable, non-declarative model, those things you need to explain them away. And I think that that's the main thing that's going on in dreaming now. Michael invited us in his original e-mail to link this to psychoanalysis. I normally am reluctant to bring in psychoanalysis because it's my own pet interest and it doesn't generalize to everyone else's interest. But since Mike asked me to or invited us to, I want to say this. The problem with childhood models, I mean, we can see why they must be high confidence models. They must be highly generalizable. You know, they must persist. They become our core beliefs. The problem is that they're models that we built a long time ago under very different circumstances to the ones that prevail in adulthood. And this is what we deal with in clinical psychoanalysis. The problem is that our patients are living in the present as if it were the past. That's what we call transference. 
They're transferring the past and their beliefs and predictions deriving from the past, which are the best solutions they could come up with to the world that they were living in then, or the least bad predictions they could formulate then. They then become non-declarative and automatized, and they perseverate into adulthood. And they're living in a world that isn't the world that's there. And I think that this is where Freud's wish fulfillment theory of dreams comes in. It's an attempt to explain away that which does not fit with your non-declarative generative model, your simple generalizable childhood model. And there's a lot that doesn't fit with that model precisely for the reason that I just said. And you're resisting it, you're resisting updating. I will just add one other little footnote, which is that, of course, as we age, I think I must be probably the oldest in the room. I can tell you that I don't do a hell of a lot of updating anymore.
[24:34] Michael Levin: Thanks. That's great. I've got a bunch of stuff to ask about. Richard, did you want to say anything before we?
[24:40] Richard Watson: Yeah, just a little, thank you. So I guess all of us are on the same page. There's the naive idea that it would be best if you could remember everything, because obviously you could make better informed decisions if you didn't forget anything. I think we're all of the opinion that that's naive, and that forgetting is necessary in order to have a model of future behavior which is specific rather than retaining all possibilities. A way of thinking about it that occurred to me, that may or may not be useful here, is an idea of agency that is time-reversible. It's similar to what Karl said, that forgetting the right stuff and deciding the right stuff are really the same kind of action. Think about deciding something as decoupling the causal relationship between the state of things as they are and the consequences your actions are going to have for the future, right? Deciding something is as though I change the state that I am in now, in such a way that I will do this action rather than that action. And forgetting is like a decoupling between the state that I am in now and the causes that made me like that in the past. It's like: I'm going to become the thing that was made by this history instead of the thing that was made by that history. So by choosing to have a particular history, which means forgetting something instead of holding both of those possibilities, I'm going to forget this one and be that one. Now that's the same as being something different now, which is the same as deciding a different path for the future. So the choice that you make, if you think you can make choices about which path you go on in the future, that's the same thing as making choices about which path you came from in the past.
So there ought to be a collapsing of possibilities going forward, which ought to be identically symmetric with a collapsing of possibilities from the past, because otherwise, you know, you've lost even more causation than free will thinks you've lost. Let me try that again, right? So imagine that you couldn't change the past, but you could change the future. I can make a decision and just decide to do this instead of that, as though my free will intervenes on causation in some way, right? That's super weird, because I'm somehow imagining that I can't change the past, but I can change who I am right now in this moment so that I can take a different path in the future. I think it's less inconsistent with causation, more consistent with causation, to say that when I choose a different path for the future, I'm also choosing a different history. It's like I'm stepping between train tracks, and one was going this way and one was going that way. And if I can make a decision about which way I'll go into the future, that's the same as making a decision about which history I come from. And so that's the same act. The act of deciding what you're going to do in the future is the same as the act of forgetting a particular path about where you came from in the past. So I think you can't have one without the other. So I'll try one more time, because I've been rambling a bit. It's about whether you think the state that you are in now causes what's going to happen next. And if you can decide between one possible future and another possible future, that's a decoupling between what you are now and what happens next. And if you can do that, that's the same as saying: I'm decoupling what I am now from what caused me to be like this in the past. A word that we might use for that is attending: I'm attending to this thing from the past rather than to that thing from the past.
And by attending to them, I change who I am in this moment and thus what I'm deciding for the future. So I'm just offering that view of it as a sort of a time reversible relationship between decisions and forgetting.
[29:09] Karl Friston: I'd be interested to hear what Mark has to say from the point of view of psychotherapy on that, because I imagine most of his life is actually opening up that choice of paths into the future, given the past.
[29:22] Richard Watson: I would imagine that being able to do a different future is tantamount. What I'm suggesting is that it's tantamount to being able to see your history differently.
[29:37] Mark Solms: Yeah, I don't want to go too far down the psychotherapy line, because I'm sure that Michael has questions from his own field in relation to what we've already said. But I will just say that it's a hell of a hard thing. Psychotherapy is very difficult. People don't want to change. That's what they resist. And it's because it's the non-declarative aspects of their predictive model that are causing all the trouble. It's not easy to change. So what we do is draw attention to the patterns of behavior, what they're enacting. They're enacting their beliefs, and they are enacting their predictions: if I do this, then that will happen. Of course, they're doing this automatically, and that isn't happening. That's why they suffer from emotional disorders. That's the error signal. But they're not using it to update what they're doing. So we draw attention to it: can you see you're doing this all the time, and it's meant to have that outcome, and it's not having that outcome, and that's why you're suffering like this? That problematizes their generative model. And then they lay down new predictions. It doesn't extinguish the old ones. The bad old ways always stay there. That's why we can go back to our bad old ways. So we don't extinguish those core beliefs, but we supplement them with better ones, with new beliefs, which gradually get deeply consolidated. And that's why the treatment takes so long; working through, we call that. But over to you, Mike.
[31:18] Michael Levin: A whole list of things. Let's see, just briefly, this business of forgetting or changing your story of the past is hugely relevant to some of our work on regeneration, for example, because one way to look at, for example, mammals not regenerating their limbs is that they have an evolutionary history in which it didn't make sense for them to try. It wasn't going to work, they would get infected, and all these things. But now, with our wearable biodomes and various other things, there is a future that now makes sense, whereas it didn't before. And we spend a lot of time thinking about how to soften those priors. What are the signals that we could give the cells? Because it's not that they can't. I think it's just not the model of themselves and of their future that they have now, because it's been shut down for various practical reasons that we can now lift. And so I've spent a bunch of time trying to understand what kind of stimuli we can give, right? So this is, not to be facetious, but some kind of psychotherapy at a somatic level for cells and organs and things like this, that basically, I think, have a bunch of frozen priors about what they should and shouldn't do that are now limiting more than they are helpful. And if we can sort of guide them to a different, a reinterpretation of what their past was into a new future, I think the mechanisms are all there. They have the tools to do it. I think they're just on a different path, so to speak. So I don't know what the relevant version of therapy is in that case. I mean, we thought about plastogens and some things like that, but surely there are more techniques.
[33:15] Mark Solms: Somehow what comes to mind, and it's a tangent, it's a free association to what you've just said. So it's relevant, but I don't know why. And this also builds on what Richard was saying earlier. It seems that once you've automatized, in other words deeply consolidated, in other words rendered very precise, a belief, then you no longer need to know where that belief came from. I mean, that's what adds to the uncertainty. It's sort of like, well, steps A, B, C, D, E led me to this, maybe B was wrong, I'd better go back and rethink it. But once you've automatized, deeply consolidated, the outcome of that predictive work, then you don't need to know how you got there. And I think that's a big part of what you're talking about. As I say, just intuitively, that seems relevant to what you just said, Mike. So forgetting is too general a word; it's selective forgetting. It's retaining the products of learning, but forgetting the course by which you got there, because you no longer need that information. If forgetting were to pertain equally to the products of the learning process, you'd have a very unstable system with much less agency. I mean, that's what your question was all about at the outset.
[34:58] Michael Levin: Okay, a couple of things following up on this. First, back to the dreaming thing, and maybe you guys can fact-check me on this. So my collaborator, Marca, whom you should meet at some point, was telling me this thing, which I had heard, that people don't dream of cell phones, despite how common they are; that nobody ever dreams of cell phones. So first of all, is that a fact? Is that a real issue? And if so, then I want to hear what you guys have to say about that, why you think that's the case. And more broadly, the reason I'm interested in this is because I'm thinking about novel beings, okay: humans in various cyborg configurations, and then ultimately some very, very different kinds of beings that are going to be around. What do you think about their sleep, not so much the architecture of the sleep, but the content, the interpretation, the meaning of the dreams of beings who don't have the same evolutionary past as we do? And what does it mean when we do and don't dream of specific things? And how are we going to, for example, interpret the dreams of these novel beings, and so on?
[36:13] Mark Solms: So I don't know the facts about whether we dream of cell phones. But what it brings to my mind is a slightly older literature from the 80s and the 90s when a lot of work was being done on typical content. And there were remarkable swathes of things that we don't dream about. And it included things like calculating, writing, typing, and it seems to be in the same ballpark as cell phones. These kind of boring, repetitive things that we do all the time and that don't have much, there's not a hell of a lot to learn there. They're just taken for granted sort of things. It would be interesting to see if that is a finding. I'm not questioning it. I just don't know that data. But it would be interesting to see, does it apply equally across the age range for those of us for whom cell phones were a novelty, as opposed to those who were born into a world of cell phones? It would be interesting to see if there's a difference in their dream content that would tell us something about what we're talking about. As to the dreams of future cyborgs, sure. I'll pass on that one. Oh, go on.
[37:41] Michael Levin: Just out of curiosity, show of hands, we're all roughly the same age. Has anybody here dreamt of cell phones? I don't think I ever have.
[37:48] Richard Watson: I don't think so.
[37:49] Michael Levin: I don't think so.
[37:50] Mark Solms: I certainly can't bring a dream to mind of cell phones, but I'm going to pay attention to that question now. I'd never thought of it.
[37:59] Richard Watson: I certainly do dream of activities that I didn't do until I was an adult. And to keep it clean, I'm talking about driving.
[38:07] Michael Levin: That's a good point. Lots of driving, lots of driving dreams. I wonder.
[38:12] Richard Watson: And with respect to the repetitive ones that Mark just mentioned of writing and typing and things like, but you do dream about walking, right? Let's say you dream about the environment you're walking in.
[38:27] Mark Solms: He's dreaming about walking.
[38:33] Richard Watson: My first thought, Mike, but I'll go where Mark dares not go. My first thought about chimeric dreaming was I wondered whether it might be more like multi-participant collective community dreams, right? So sometimes, you know, we maintain, generally, I think, a singular sense of identity even whilst we're dreaming. I accept that even though it's all in my head, I still get surprised by things. So something in my head is making things up that I wasn't expecting, right? So it's almost like there is something collective happening in any dream when you get surprised by things. So I was just trying to connect that with: what would it be like if one entity that had multiple evolutionary histories, a chimeric being, was dreaming? Would it have more of that multi-participant dreaming sort of feel to it? Or is it the case that each participant can only have a singular identity in it? It's just that it's more surprising to them because there's other things going on.
[39:47] Michael Levin: What do we make of the occasional person with an eidetic memory? I think that's a real phenomenon, right? Where people apparently can remember the most trivial details of any day. What do we make of that? Because some of them have apparently normal cognition. They get around, they live in society and all that. What do we make of that?
[40:11] Mark Solms: Yes, I find it, so again, one must be careful talking outside of one's area of expertise. I'm not an expert on eidetic memory. But what comes to my mind is a famous case of Alexander Luria's. What was his name, Alexei? Was it Shereshevsky, the patient? Anyway, the book describing him is titled The Mind of a Mnemonist. And it's a man who can't forget. And reading that case study, you see how extremely inefficient that way of being is. It's an extremely concrete, overly complex model, and the person was frankly autistic. Although Luria doesn't describe him as such, it's clear reading between the lines that he was autistic. So, you know, he doesn't generalise, he doesn't abstract, he doesn't get the big picture. And so you're saying that there are people who get by fine with that sort of memory. Just on the reading of that case, I find it hard to believe, but I really don't know that literature. And generally in development, kids have much more concrete sorts of memories. I don't mean babies. I mean, once your declarative memory systems kick in, they remember a hell of a lot of trivial nonsense that we don't. But I think that then all gets consolidated into a more generalisable picture. So I think generalisation, which means forgetting, as we were discussing earlier, just is obviously the efficient way to deal with the formal problem that Karl introduced us to at the outset: the fact that accuracy comes with complexity costs.
[42:28] Richard Watson: I don't think it's just efficiency. I think it's literally the same thing. If I'm still causally affected by two things that happened in the past, then I'm not able to respond to one of them alone. If I'm still causally affected by two things that happened in the past, then I'm causally affected by two things that happened in the past, not by only one of them. It's like I haven't decided what's happening next if I haven't forgotten, if I haven't broken that causal dependence on one of those things. And I don't think that's just about efficiency.
[43:14] Mark Solms: Well, I might be going off in the wrong direction now, but I also saw that Karl wanted to say something, so I'll be super brief. I think that what you're saying touches on what I said earlier: you might be affected by two things, but once you've come up with one solution, you don't need to remember the two things. And I think that that's what we're talking about. We're talking about a generative model which doesn't have a solution for each and every thing. It has compromises. It has solutions which fuse, synthesize different problems. And then you can forget the two things. So what you're left with is the product. And the opposite is a problem. If you've got a solution for each and every situation, then it's not really a solution. It's not workable. And that is what I mean by efficiency.
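Mark's picture of fusing two memories into one compromise solution and then discarding the originals can be sketched as a toy computation (purely illustrative; the vectors and numbers are invented, not anything from the conversation):

```python
# Two past episodes, represented as feature vectors (invented numbers).
episode_1 = [1.0, 0.0, 0.4]
episode_2 = [0.8, 0.2, 0.6]

# Fuse them into one compromise "solution": a prototype that synthesizes both.
prototype = [(a + b) / 2 for a, b in zip(episode_1, episode_2)]

# Forget the originals; only the fused product remains to drive behaviour.
del episode_1, episode_2

def response(stimulus):
    """Similarity-driven response computed from the retained prototype alone."""
    return sum(s * p for s, p in zip(stimulus, prototype))

print(prototype)            # ≈ [0.9, 0.1, 0.5]
print(response([1, 1, 1]))  # ≈ 1.5
```

The individual episodes can no longer be reconstructed from the prototype, which is one sense in which a compressed, generalisable model entails forgetting.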
[44:09] Richard Watson: I see.
[44:16] Michael Levin: Karl, did you want to comment?
[44:18] Karl Friston: Yes. When I heard the word efficiency, I normally assume people are talking about the path of least action, which is just the most likely path into the future, given the kind of thing I am. So I quite like the efficiency word. But just to try and draw some of the things together, the ability to remember everything that you see at a very elemental sensorial level did remind me, as Mark was alluding to, of the idiot savant and the capacity to reproduce. And of course, what accompanies that remarkable ability is what some people call a lack of central coherence. So experts in autism say that this ability to remember everything that you've seen and reproduce it in a drawing comes at the price of failing to build a deep generative model, where you abstract those things that are required for generalisation. Mark actually articulated that very nicely: there is no abstraction. There is nothing that you can use for making sense of the coarse-grained carving of nature at its joints in a much more fundamental way. So my suspicion is that the people who say they have photographic memory are either autistic or have trained, much as Chinese children train to do mental arithmetic. You can't do both. You can't have a deep generative model that is minimally complex in the right kind of way and remember all the fine-grained details, because that would entail too many degrees of freedom, and that would basically render your model as having low evidence, because it's too complex. So you would never generalise. You could do just that, like an idiot savant. But to try and get back to this notion of paths into the future and the like: if you are someone with severe autism, or any artifact that cannot disengage from the sensorium, then you are effectively affording too much precision, and thereby learning rate, to the immediate moment.
If you're the kind of thing that has, as Richard was talking about, the ability to simulate into the future, to explore different paths so that you can select the one that is most likely for the kind of thing that you have learned you are, then the depth into the future of those paths is severely curtailed. So one aspect of this lack of central coherence is the fact that if you're severely autistic, you just can't model yourself into the future. You can't predict yourself into existence in the future, which means that you become very, very reflexive. You become very tied to the moment, tied to the sensorium. So the depth of the path is severely compromised in things that don't have this ability to forget about the sensory data, to reassign more precision to the deeper, slower aspects of the generative models. And for things like you and me, models of models of the future. I emphasize that to come back to Mike's question from about 10 minutes ago: if you want to apply these ideas to a cell, what kind of imperatives would you bring to the table in terms of scoring different paths into the future, selecting what you would do? Now, for you and me, because we have explicit models of the consequences of our actions, we can actually select literally what to do, to do this or to do that; to use Richard's words, we can choose, we can decide. But if you're a much simpler, say single-cell, organism that doesn't have a deep hierarchical structure, you don't actually have a jointed model of your future.
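The trade-off Karl describes, where remembering every fine-grained detail costs model evidence, can be sketched with a toy comparison (illustrative only; the data and models are invented): a "memoriser" that stores every training point achieves perfect recall, but tends to generalise worse than a simple model that has forgotten the details.

```python
import random

random.seed(0)

def truth(x):
    """The underlying process generating the data."""
    return 2.0 * x + 1.0

# Noisy observations: some seen during learning, some held out.
train = [(x, truth(x) + random.gauss(0, 0.5)) for x in (0, 1, 2, 3, 4, 5)]
test = [(x, truth(x) + random.gauss(0, 0.5)) for x in (0.5, 1.5, 2.5, 3.5, 4.5)]

# Simple model: a straight line fitted by least squares (low complexity).
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def linear(x):
    return slope * x + intercept

# "Photographic" model: remember every training point, answer by lookup.
def memoriser(x):
    return min(train, key=lambda point: abs(point[0] - x))[1]

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(memoriser, train))  # exactly 0: perfect recall of the training data
print(mse(linear, train))     # nonzero: the line has "forgotten" the noise
# On held-out data the memoriser typically does worse than the line,
# because its extra degrees of freedom captured noise, not structure.
print(mse(memoriser, test), mse(linear, test))
```

The memoriser has one degree of freedom per observation; the line has two for the whole dataset, which is the "minimally complex" model in this toy setting.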
[48:22] Karl Friston: You just have reflexes, which means you can't run out into the future. So there is no way of generating choices or different paths into the future. You've just got to commit to one, like a thermostat. What is that path? It's the path of least action. It's the most efficient path, given the kind of thing you are. So what I'm trying to work towards is that you can't do psychotherapy on cells. But what you can do is just look at the maths that determines the path into the future, and there's only one. In engineering, that would be something called path integral control. And basically what it is, is measuring, along the short-term path into the future, the difference, technically the relative entropy or the KL divergence, between what you anticipate, given the current circumstances, is going to happen in the future, and what a priori you think would happen to me as a cell, for example. And that balance basically determines the direction of travel. And if you want to now open up the directions of travel, then you have to decrease the precision of the preferred probability distribution over the kind of thing that I am. So if I was trying to simulate this, or I was faced with this problem technically as an engineer, I'd be looking for where my preferred states of being are and, more specifically, where my preferred distribution over paths is encoded physically. And more specifically, where is the precision of those sub-personal mathematical beliefs about my preferred paths into the future? So basically, if I was looking at a thermostat, I'd be looking for where there is a sensitivity, a precision, a learning rate that controls the set point. Is this a very precise thermostat that gets really upset as soon as the temperature deviates? Or is it something that has a bit more latitude in it and can tolerate a greater range with less precision?
So I'll be looking for the knob that encodes the precision on the set points over the paths of the kind of thing that I am. And if I'm a single cell, these will be responses to the world at different temporal scales. And then I relax that. And that in principle will allow you to take different paths that are not constrained by your very, very, very precise engineering and precise belief about the kind of thing that I am. And of course, what Mark was saying before, like he doesn't forget or learn anymore. He has now a very precise set of beliefs. He does exactly what he's going to do and say, given the kind of thing he is, because he has now learned a very precise self-model. So that if you wanted to get Mark to go bungee jumping or go to discos, you'd have to find the neurotransmitter basis of the precision on those particular paths into the future.
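A minimal numerical sketch of the precision knob Karl describes (my own toy numbers; the real formulation is path integral control over continuous paths): encode preferences over outcomes as a softmax distribution whose sharpness is a precision parameter, score candidate paths by the KL divergence between the outcomes they predict and the preferences, and observe that lowering the precision shrinks the gap between paths, opening up more of them.

```python
import math

def softmax(values, precision):
    """Preference distribution over outcomes; higher precision = sharper."""
    exps = [math.exp(precision * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def kl(p, q):
    """Relative entropy D(p || q); assumes q is strictly positive."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Innate value of three outcomes to "the kind of thing I am" (invented).
values = [2.0, 0.5, 0.0]

# Two candidate paths, each predicting a distribution over outcomes.
path_a = [0.8, 0.1, 0.1]  # heads for the preferred outcome
path_b = [0.2, 0.4, 0.4]  # wanders elsewhere

for precision in (4.0, 0.5):
    prefs = softmax(values, precision)
    gap = kl(path_b, prefs) - kl(path_a, prefs)
    print(f"precision={precision}: path_b penalised by {gap:.3f} nats vs path_a")
```

With high precision only path_a is viable; relaxing the precision makes the two paths nearly interchangeable, which is one way to read "opening up the directions of travel".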
[52:25] Richard Watson: You don't know, maybe that's what he always does.
[52:27] Michael Levin: Yeah, right. Now I know the activity we're all going to do later this year. I got it. So that's very interesting, and of course I'm trying to deal with the intermediate case. It's not a single-cell thing. I'm trying to understand what the possibilities are that are open to the collective of cells in anatomical space, right? So I'm not so much thinking about single cells, and I don't think we know exactly what the collective can and can't do, but correct me if I'm wrong, one of the key parameters in what you just said is the kind of thing that I am, and that also, I think, is very interesting here. If you think you're an axolotl, a thing that regenerates organs, you might have a different future open to you. And so this is something I'm actually very interested in: what kind of a thing do you think you are as a cellular system? And the experimental models that we often have, so something like a frogolotl, right, where you combine a bunch of frog cells and a bunch of axolotl cells. It's a perfectly viable thing called the frogolotl. And now you can ask some interesting questions. What do you think you actually are in anatomical space? Because frog larvae don't have legs. Baby axolotls do have legs. As a frogolotl, do you think you should have legs? You can't answer that question from the genomics. You have all the genomes. That doesn't help you. We still need to understand what you really think you are and what you are going to do. And maybe to some extent the trick is, if you want to induce those kinds of outcomes that normally don't happen, you have to change the kind of thing you think you are. Maybe that's the control knob here, right?
[54:17] Alexey Tolchinsky: I may add a quick thought based on what you said, Michael and Karl, going back to your GRNs, right, in your Pavlovian conditioning experiment. You gave a task which was somewhat stressful. It wasn't trivial. And then the learning built some agency, some intelligence. And how did that happen? These nodes learned how to work together. You've built collective intelligence, essentially, to accomplish this task, right? And you've done it by introducing some stress. And with a note to Mark, and to Freud, the ego grows through frustration; we need a balancing act of some containment and some predictability. So when you build a biodome, you introduce some containment and some predictability in the foundation, saying, I will survive. But then we do need to introduce some stress, some frustration, not too much, like in trauma, not too little, like in triviality. And that may possibly shift the system into a new regime, such that this new collective needs to build something. But without stress, the change is not possible. We need an influx of energy for it to change.
[55:21] Michael Levin: Karl, please.
[55:23] Karl Friston: I have to go in 2 minutes to do a PhD in Montreal, but just to pursue that point. Mike, do you remember very early on, pre-Frans, in fact, when we were doing that simulation of morphogenesis? It just struck me that you're talking about having the potential to be different kinds of things. I mean, that was exactly the whole point of that sort of pluripotentiality. All of the constituent cells could be anything. And all they had to do was to infer which particular kind of thing I am in this context. And that context was established by communication with the others. And the precision with which they commit or select to being this particular kind of thing in this particular sort of anatomical space, as it were, or the contribution to the ensemble of that space, was the bioelectric signalling. So that would be the knob you'd be looking at to change, to relax the precision. The whole thing just goes like that because everything very precisely believes I should be a tail, I should be a head, I should be this and I should be that. But if you reduce the precision, by just putting smaller gradients on the bioelectric communication or chemical communication, then you get a much, much more uncertain, much more pluripotent and diverse set of outcomes, and slower ones. We didn't simulate that, other than sort of cutting things in half, but it might be interesting to revisit that, because there you know what the precision knob is. It's basically the strength of the signal from you to me, telling me, I'm over here, I've got to be a head, you're a tail. And then, when we're both in agreement, sending the right kinds of messages that are precise, we can commit our pluripotential to being this kind of thing.
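The bioelectric precision knob Karl mentions can be caricatured in a few lines (a toy sketch, not the actual simulation he and Mike ran; the signal and gain values are invented): treat the signal gain as precision on a cell's belief about its fate, and note that lowering the gain leaves the cell more uncertain, i.e. more pluripotent, for longer.

```python
import math

def fate_belief(signal, gain):
    """Belief over two fates ('head' vs 'tail') from a positional signal;
    the gain on the signal plays the role of precision."""
    p_head = 1.0 / (1.0 + math.exp(-gain * signal))
    return {"head": p_head, "tail": 1.0 - p_head}

def entropy(belief):
    """Uncertainty of the belief, in nats; high entropy = uncommitted."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

signal = 1.0  # "you're up near the head end" (invented units)
for gain in (5.0, 0.5):
    belief = fate_belief(signal, gain)
    print(f"gain={gain}: P(head)={belief['head']:.3f}, "
          f"entropy={entropy(belief):.3f} nats")
```

With a gain of 5 the cell all but commits to being head; with a gain of 0.5 the same signal leaves it near maximal uncertainty, the more pluripotent, slower-to-commit regime.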
[57:21] Michael Levin: Similarly with anthrobots and xenobots: what kind of thing are you? Looking at your genome doesn't help, because, in the case of the xenobot, you've got the same genome as the frog, so that's not going to help. But you are a different thing, and you end up upregulating genes for sound perception and doing things that normal frog embryos don't do, because in some way you've now changed, and this is something we're very interested in looking at. We now have calcium signaling data on all of these things and so on, to try to figure out what it thinks it is. And then I guess in subsequent chats, what I'd love to dig into is this: we talked about a very general notion of sleep, and I'd love to talk about how one recognizes sleep in things that aren't typical brains, so you can't sort of lean on REM patterns and whatnot. What does it look like? How do you know when a system is sleeping? What does that look like in different embodiments?
[58:23] Karl Friston: I'll get Juliet to talk about his fruit flies sleeping. He loves that.
[58:28] Michael Levin: The fruit fly is way more conventional than what I'm thinking of. I'm thinking of some really weird things. We'll have to go well beyond the fruit fly, and then come back to the whole GRN thing because...