Show Notes
This is a working meeting and discussion between Mark Solms, Chris Fields, and Mike Levin
Mark Solms - https://scholar.google.com/citations?user=vD4p8rQAAAAJ&hl=en
Chris Fields - https://chrisfieldsresearch.com/
CHAPTERS:
(00:02) Reunion, Apologies, Agenda
(05:02) Phenotypes, Preferences, Control
(19:33) Social Emotions, Uncertainty, Curiosity
(31:11) Minimal Unifying Consciousness Model
(37:39) Consciousness Beyond The Brain
(49:22) Cellular Predictive Systems Wrap-Up
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:02] Michael Levin: Excellent. Well, good to see you guys again.
[00:06] Mark Solms: You too. I enjoyed our previous discussion enormously. I was just saying to Mike before you joined us, Chris, that it's been a very long time since we had that previous discussion. I must apologize. Mike said to me that it was my fault that there was such a long delay. Apparently, my assistant could only find this time, which shocks me. So I'm very sorry about that.
[00:33] Michael Levin: No problem at all.
[00:35] Chris Fields: No, these things happen.
[00:39] Mark Solms: Mike, Chris and I have seen each other twice since we met, because we were both part of a meeting arranged by Maxwell Ramstead under the auspices of the computational phenomenology group. I was surprised that you were not there, Mike, but Chris was very much there. You seem to have been quite pivotal in that project, Chris.
[01:15] Chris Fields: Those are the only two meetings of that group that I've ever been in. Maxwell just invited me to join those discussions at the last minute.
[01:30] Mark Solms: Okay.
[01:31] Chris Fields: But yes, they were quite interesting.
[01:36] Mark Solms: I just wanted to reference that, Mike, because Chris and I were in those couple of meetings. I watched the recording of our previous meeting because it was so long ago. I wanted to remember what it was that we had said to each other. I was embarrassed to see how much I spoke. I hope today to do a little bit more listening and a little bit less talking.
[02:10] Michael Levin: I don't know if you guys have other stuff on your agenda. I have a couple of issues that I wanted to get your thoughts on. I'm looking forward to some of your talking.
[02:22] Mark Solms: Tell us what's on your agenda and then maybe Chris will tell us what's on his.
[02:27] Michael Levin: Well, I wanted to get your thoughts on two specific things. One is that I wanted to run by you an idea for a paper that a student of mine and I are doing on the various existing theories of consciousness and how they might apply outside the brain. I just want to talk about what we're doing, have you give thoughts on it, and see what you think. The other thing I wanted to chat about is whether anybody has thoughts on this question: can you decide what your next thought is going to be? It ties into control and free will and things like this. It seems to me that one thing we don't really have control over, in the short term, is what my next thought is going to be. It's pushed by whatever came before it. I don't know if you agree with that. And then long term, you can think about strategies to control the overall ensemble of your thoughts in the future: maybe meditation, who knows what else one does over the long term to affect it. So: what level of control do we have over the next thought that we have, and what's the time scale for that? Those are two things. There's a third one. The third one is: wouldn't it be interesting if we had more control over our preferences? There's some quote to the effect that you can try to get what you want, but you can't decide what you want. We have these goals, and it's very hard to rewire your preferences the way you have control over other things. It would seem to me that would be pretty adaptive. In certain environments, if you find that your preferences are out of touch with the group or whatever, wouldn't it be nice to be able to change your preferences? But it doesn't seem that we have any ready control over that. I'm curious if you have thoughts on why that is. As a stupid example, I've often thought: there's a lot of sports being shown on TV; there's football all the time. Wouldn't it be nice if I actually enjoyed that? It would be tremendous, because it's there all the time. But I get nothing out of it, and I have no ability to decide that, okay, from now on, I like that. We have no ability to do that. So I'm curious why that would be.
[05:02] Mark Solms: Before we hear your agenda, Chris, let's tackle Mike's agenda and then move to yours. I'd like to start by responding to the second and third of your three points, and then maybe Chris wants to comment on those. Then we can go to your first point, because that's rather a different point, about different theories of consciousness detached from biological constraints. The second and third of your points, I think, form a unit: the one about how much control we have over the course of our own thought processes, and how much control we have over the contents of our preferences. I think they're linked in the following way, starting with the third point. Preferences are fundamentally rooted in our phenotype. We prefer a certain blood pressure. We prefer certain hydration, salt, sugar, oxygen, and carbon dioxide levels, because we have to, because we are human beings. Those are our viable ranges. Those are our preferred ranges. So we are constantly seeking to find ourselves in those states. That's given by our phenotype. Cognitive neuroscientists in particular don't sufficiently appreciate that there is a joint prior preference distribution that goes with your phenotype, and that dictates the whole story. I don't mean it dictates the story in its entirety; I mean it is the starting point for the whole story. In other words, the whole of the predictive model is naturally selected.
[07:37] Mark Solms: There are certain predictions that we are born with, which are called reflexes and instincts. These are predictions as to what I must do to bring myself back into my viable bounds; in other words, what I must do to bring myself back into my preferred state. Then we have an acquired predictive model: we greatly supplement, in a context-sensitive way, the innate predictive model. The point is that all of this is ultimately in the service of meeting our phenotypic needs. Our derivative preferences are just that: derivative. They are context-specific routes back to meeting the requirements, the demands, of the joint prior preference distribution. What I've just said is also the answer to your second question. Cognition is not something that we just have at our disposal and can therefore do with as we will. Cognition is the predictive work that is demanded by those prior preferences. You have to satisfy your expectation of a certain temperature, hydration, oxygenation, et cetera, so you must perform predictive work; in other words, learning from experience. That's what cognition is. So it's not a free-for-all. I don't mean that it's entirely determined; it's probabilistically determined in all sorts of ways. And there are expanses of cognition where you can decide in a top-down way: I am now going to recite the multiplication tables, and that determines what your cognitions are going to be for the duration of that recitation; or, I'm now going to recite Shakespeare's 54th sonnet. But the extent to which we are not able to impose our will over our thought processes, let alone our preferences, I think is a really important lesson for cognitive neuroscience about what is calling the shots in the life of the mind. So that's my answer to your second and third points.
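To make Mark's point concrete, here is a minimal toy homeostat, a sketch of our own rather than anything presented in the episode or drawn from Solms's published models: phenotypic "prior preferences" are fixed setpoints, and all the predictive work amounts to precision-weighted corrections that pull sensed states back into their viable ranges. Every number and the update rule are hypothetical illustration.

```python
# Toy homeostat: phenotypic "prior preferences" are fixed setpoints, and
# behavior is corrective predictive work that pulls sensed states back
# into viable ranges. All values and the update rule are hypothetical.

setpoints = {"temperature": 37.0, "hydration": 0.6, "glucose": 5.0}  # innate preferences
state     = {"temperature": 35.5, "hydration": 0.4, "glucose": 7.5}  # current sensed state
precision = {"temperature": 4.0, "hydration": 1.0, "glucose": 2.0}   # confidence per channel

max_pi = max(precision.values())
for step in range(50):
    for k in state:
        error = setpoints[k] - state[k]            # interoceptive prediction error
        # "Reflex": an innate corrective action proportional to the
        # precision-weighted error, a crude stand-in for acting so as
        # to fulfil the phenotype's predictions.
        state[k] += 0.1 * (precision[k] / max_pi) * error

residual = sum(abs(setpoints[k] - state[k]) for k in state)
print({k: round(v, 2) for k, v in state.items()}, f"residual={residual:.4f}")
```

The point of the sketch is only that the setpoints are given, not chosen: the agent can act to satisfy them, but nothing in the loop lets it rewrite them, which is Mark's answer to Mike's third question.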
[10:20] Chris Fields: I'll comment on the second point first and then the third. I think we have very limited meta-processor control over what gets presented to consciousness, and probably even more limited conscious meta-processor control over what gets presented to consciousness. For evidence in that regard, I would look to things like Nick Chater's book, which is really all about that question of how much control we have over the very next item that we are conscious of. I think the answer is very, very little. We can choose to look in some direction, for example, but we can't choose what we're going to see when we look that way. It could be a tremendous surprise, and often is. I think that's actually in the nature of meta-processing. The meta-processor is a set of heuristics that does the best job it can at predicting what's going to come next. Only if you can predict what's going to come next can you feel that sense of controlling what's going to come next. It's not all that good at doing so. It certainly doesn't have a 100% accurate deterministic prediction system; it just has very rough-and-ready heuristics. Basically, our minds are a combination of a bunch of rough-and-ready heuristics. On the third point, as Mark said, the answers are coupled, in that we can learn to like certain things and, given certain experiences, we can come to dislike certain things, sometimes intensely. A good example is people who come to have very strong food aversions after a very bad experience with some particular kind of food. That's very surface-y icing on a very deep cake that has to do with things of the sort that Mark was talking about that have to be satisfied. I think a really good question is how closely coupled our affective and reward-system-driven responses to things, including finding things interesting and especially the feeling of curiosity, are to very early learning experiences that had to do with maintaining the more basic kind of homeostasis that Mark was mentioning. To what extent is our interest in science, for example, driven by experiences that we had when we were very young? I don't know. To what extent is it an outcome of prenatal development? I don't think any of that is actually known. I think those are interesting questions about how those kinds of affective and reward-system processing loops get put into place by experience.
[15:11] Michael Levin: I was thinking: let's say we were building an agent that was going to go around and maybe live in social groups, something like that. So now there's a design decision. I definitely take the point that we don't want to give it control over physiological set points, because if it decides on a temperature that's going to kill it, that's terrible. We don't want that. But beyond that, for all the variable social stuff that gets developed later, wouldn't it be useful in a group setting, if you find that you have preferences that are completely out of tune with the rest of the tribe, rather than get kicked out and have all this friction, to be able to say: okay, from now on, I too like coconuts, and now we can all do whatever the tribe does? So I just wonder: is the fact that we don't have control over these things one of these vagaries of evolution, something that just didn't show up, and that's it? Or is it actually a good design decision? Would it be better for us if we did have control over these non-physiological preferences? There's an extremely wide range of human preferences, especially in humans, and one can't help but think that some aspects of life would be easier if one had some control over some of those things. But I wonder if our inability to do that is fundamental and important in some way, or whether it just happens to be the architecture we have and we could make a better one.
[16:48] Chris Fields: I think that...
[16:49] Mark Solms: Oh, go ahead, Chris.
[16:51] Chris Fields: I think an interesting example of this is phenotypes like sociopathy, in which you have people who have preferences that are very different from the rest of us. That seems not to be a straightforward genetic variant, but it's certainly some kind of developmental variant, and it's very long-lived in the population; evolution hasn't gotten rid of it. Yet it has, at least in some individuals, enormous rewards, not only in biological fitness but in social status, et cetera. It's an enormous misalignment with the preferences of most members of the group, up to a point. It's also true that many humans are enormously attracted to that personality, which is one reason sociopaths are so successful in many social settings. So I think it goes both ways, in a sense. We have these enormous preference variants around what kind of social feedback I want as an individual, and the fact that these huge variants seem to be stable over tens of thousands of years suggests that they have some function: they somehow keep societies going, even though they would seem to be disruptive.
[19:33] Mark Solms: What I would say to this last point, Mike and Chris, is the following. I want to go back to what Chris said prior to his last remark, his initial response before your follow-up, Mike. On this point we must recognize, not that there's consensus on it, but the evidence is overwhelming, that there is a multiplicity of phenotypic social emotions. Social emotions are by their very nature social preferences. There are social preferences which people assume, because they're social, are acquired idiosyncratically. But that's by no means the case. There are social emotions: all mammals play, for example, and there are rules that govern mammalian play. That's very social; it's got to do with in-groups, out-groups, dominance behaviors, et cetera. There's nurturant behavior, which you see in all mammals. There's attachment bonding, in the sense of looking for a caregiver. Fear and rage are perhaps less obviously social. So that's the first thing: there's a range of social preferences that are phenotypically given. With those come phenotypically given predictions, which, as I said earlier, are reflexes and instincts about how one remains within this preference distribution. But they're too stereotyped, too generalized; they don't apply in many contexts. We have to individualize our predictive model in relation to each one of those preference distributions. What else must I do? What must I do in this situation versus that one in order to remain within my viable bounds? There comes the variability, the individualization. It depends on what kind of niche you find yourself in. Even more important is that many of these social emotional needs conflict with each other. For example, you have an innate prediction in relation to your attachment needs, because all mammals need to attach to a caregiver; they can't look after themselves. The innate prediction is: stay close, keep mummy close, forever and always. Then there's another innate preference: anything that frustrates and impedes me, that gets between me and what I want, I must get rid of. That's the prediction for aggression, for rage, for hot aggression.
[23:20] Mark Solms: Now, whose mother never frustrated them? So here you have a conflict: I've got a prediction that I should attack the very person whose presence I most need in the world. How do I resolve this? Many of the idiosyncratic preferences that we see, especially in clinical populations, are ways in which people try to get themselves out of these problematical corners. And because there's a range of social emotions, not just two, the possible permutations you end up with are enormous. But to come back to your initial point: the more it becomes something that you can decide, not "I'm going to do this" (that's a policy) but rather "I'm going to feel good about this outcome; this outcome will make me feel good, this one will make me feel bad", the more you're able to actually say "I will feel good about this outcome, even though it's not the one that naturally comes to me", the more I think you're no longer talking about real preferences. You're actually talking about things which are no longer of an affective nature. So that goes back to your initial point. But now, as I said, I want to go back to what Chris said earlier in response to your question, Mike, about the extent to which we have control over the contents of our consciousness. What Chris said emphasizes consciousness, and I want to pick up on that, because I think it's a very important point. I believe that what we become conscious of is, to a very large extent, determined by where the uncertainty is. The more certain, the more precise your predictions, the more confident you are in them, the less you need to palpate the error signal, and the more you can just follow a monotonous course of action which does not require consciousness. For me, consciousness pivots on felt uncertainty. In other words, it's the palpating of the confidence in your prediction over the error signal that constitutes consciousness. Consciousness is the palpating, the modulating, of the amplitude of the precision. So Chris's point, which he made without any conscious reference to that way of thinking, was an intuitive response that this has to do with access to consciousness. I think access to consciousness has everything to do with uncertainty. The more precise the prediction, the less you need to modulate the precision of the error signal associated with it, and the less conscious the execution of that prediction will be, and vice versa. Then again, there's a prioritization of these different needs, because you don't have only one preference to satisfy; you have a joint preference distribution that you have to try to balance.
[27:07] Mark Solms: So there's a prioritization of one or the other in time. What is most salient is also a matter of uncertainty in relation to which of my homeostatic bounds is the one I need to prioritize now. That also determines what gains access to consciousness. But I want to link that to what Chris said about novelty and curiosity. The word I would use there is something like epistemophilia. And you spoke about scientists, about what it is that drives them. There too, it links up in a very deep way with what I've just said, building on what you said, Chris. When things are going okay, in other words, when you are not in the grip of an urgent need, to escape a predator or to find your caregiver when you're a juvenile mammal, what you revert to is something like explore rather than exploit. Now I have the luxury of being able to engage proactively with uncertainty, which is what curiosity is. This is novel; this I do not understand; this therefore is interesting; let me engage with this. Because in the bigger scheme of things, the less uncertainty I have about how the world works, the safer I'm going to be in the future, the less the world is going to bite me in the ****. So I think that our default mode of consciousness is: if I'm not having to attend to a clear and present danger, then I just find the world interesting and I engage with it. This comes back to the question of what it is that comes to consciousness. It is the thing that we are least certain about, because that's what epistemophilia is: engagement with uncertainty. So it goes against the grain to stick with something boring and predictable; eventually it's like, **** no, I don't want to. And you used the phrase, Chris, you said it frequently is something very surprising that captures your consciousness. I think that's no accident. It is precisely the things that are most surprising that, for very good mechanistic reasons, are what we attend to, because that's where the uncertainty is. And as I said, what's uncertain is what requires consciousness. Maybe we're beginning to head toward answering your first question, Mike, about the fundamental mechanistic requirements of a conscious agent. I'm so sorry you weren't at that computational phenomenology meeting, because that was pretty much the topic of those two meetings. The authors, one of whom is Chris, were trying to write a paper on the minimal mechanistic requirements for a conscious agent. They looked at all the main theories of consciousness within the free energy or active inference framework and tried to come up with a kind of integrated, consensual model. Chris played a big part in that. So maybe, Chris, in terms of answering Mike's question, you might want to tell him something of the views that you and that group have been developing.
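Mark's claim that access to consciousness tracks uncertainty can be made tangible with a toy salience competition. This is our own sketch, not a formalism from the discussion: each channel carries a Gaussian prediction with a precision (inverse variance), and the channel with the highest surprise, which includes a penalty for imprecision itself, wins access. Channel names and all numbers are hypothetical.

```python
import math

# Toy salience competition: each channel carries a Gaussian prediction with
# a precision (inverse variance). Surprise = negative log-probability of the
# observation, which grows with precision-weighted error and with imprecision
# itself. Channel names and all numbers are hypothetical.

channels = {
    # name: (prediction, observation, precision)
    "walking":     (1.00, 1.02, 50.0),  # well-practised, highly precise
    "thirst":      (0.20, 0.55, 5.0),   # drifting away from its setpoint
    "novel_sound": (0.00, 0.90, 0.5),   # surprising and very uncertain
}

def surprise(pred, obs, pi):
    """Gaussian negative log-likelihood (up to a constant), in nats."""
    return 0.5 * pi * (obs - pred) ** 2 - 0.5 * math.log(pi)

salience = {name: surprise(*vals) for name, vals in channels.items()}
for name, s in sorted(salience.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} surprise={s:+.3f}")
print("gains access to consciousness:", max(salience, key=salience.get))
```

In this toy run, the well-practised, precisely predicted activity (walking) generates almost no felt surprise, while the poorly predicted novel sound wins, echoing the explore-when-safe default Mark describes.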
[30:55] Michael Levin: Tell us what you guys are doing, and then we'll go back; my original aim is slightly different from that, but I'd love to hear what you've come up with.
[31:11] Chris Fields: This is not quite done yet. It's an interesting question, I thought. The question that was posed is: if we start with the free energy principle, then to what extent can an abstract model be generated, the term being used in the literature is "a minimal unifying model", that, with the addition of other assumptions, and these other assumptions may be mutually contradictory, generates some of the ongoing neuroscience models of specifically human-like consciousness. So this is a very different question from the one that you, Mike, and Jim and I worked on in that 2021 paper, where we were trying to think about basal awareness and basal cognitive systems. The free energy principle, unadorned with many other assumptions, gives you a nice starting point. But in this case, the other assumptions that get added have to do with the way the nervous system appears to be layered in a hierarchy of peripheral to more central processes, and the kind of sandwich-like picture one gets when thinking about the arousal and attention systems, the emotional systems we've just been discussing, and the kind of heuristic, meta-processor-driven choice function, and how those relate to the content-generation machinery, which seems in some sense to be in the middle, and which determines what kinds of contents one can actually represent as a cognitive system. In the language we have been using: which reference frames does one actually have? What sorts of objects can one actually recognize? What kinds of motions can one actually recognize? What kinds of attributions of agency can one actually make? In the case of human consciousness, that's a very rich set, and we have introspective access to a lot of that stuff, which other systems presumably don't have. So the question has been: what are the minimal assumptions to be added to the free energy principle to provide a basis for constructing theories of that kind of consciousness, as opposed to theories of more basal awareness?
[35:14] Mark Solms: Interesting. It's quite amusing to hear from Chris, after the fact, that there was a certain assumption that I had missed. I didn't know that in that group we were specifically trying to come up with a minimal unifying model of the kinds of mechanisms that generate human-type consciousness. I think that's a horrible constraint on my whole field: my colleagues are too bound to a model example of themselves, thinking this is the best place to start in trying to understand what consciousness is all about and how it works. I think we're the last place we should start, because we are such a complicated example; we should rather start with the simpler ones. I didn't even realize that was one of the constraints on what we were doing.
[36:11] Chris Fields: This is a constraint that I insisted be made explicit in this paper, because it became so obvious that it was, in fact, the goal of most of the theories being considered; they are interesting theories developed, for the most part, on the basis of neuroscientific data about humans. I think that my interest, as well as Mike's, is much more in this area of basal awareness and in very simple systems. What sorts of things do you have to put into a simple system to make it able to do certain things?
[37:08] Mark Solms: Mike, I spent a lot of my time in those two meetings, not realizing that constraint was driving the authors, complaining about the corticocentrism. Why is the cortex such a big-ticket item? Now I realize why. You said that you had a more specific way you wanted us to address your question, Mike; let's come back to it.
[37:39] Michael Levin: A student of mine and I are trying to do this. We want to start broader than the theories focused on active inference: all of the major theories of consciousness that exist, whatever they focus on. What we want to do is analyze the extent to which any of those theories point out why the brain is normally thought to be the seat of consciousness, and ask to what extent those same criteria are found at other places in the body. For example, you've got Stu Hameroff, who talks about microtubules; there are microtubules everywhere in the body. Somebody else will say it's the magnetic field of the brain; there are magnetic fields everywhere in the body. What we have is a table. The columns are the different theories; there are six or seven of them. They differ greatly in the extent to which they're explicit: they mostly assume it's in the brain, though some make it more explicit by referring to specific mechanisms. In the rows, we have body organs, different cells, different software agents, and slime molds. Take the case of body organs, for example. What we want to analyze is to what extent these various theories tell you why, in fact, it's the brain and not your liver that should be considered the only conscious thing in the body. Because generally speaking, when you talk to most people and you say, "How about your liver?", they say, "No, my theory is about the brain." But why exactly? What's the mechanism that actually makes the difference? A lot of them tend to come down to things that are not brain-specific at all. This is what we wanted to analyze: to the extent that somebody is committed to one of these theories, are they also committed to having some distinct type of consciousness elsewhere in the body? That's our little project: to go through these and see what they say about, first, the hardware and then the dynamics, and why brain, basically.
[40:16] Mark Solms: Do you want to go for it, Chris? I'll follow.
[40:20] Chris Fields: This sounds like a delightful project, and I'm glad you're doing it in as explicit a way as possible.
[40:30] Michael Levin: We'll see how delightful people find it.
[40:33] Chris Fields: Just thinking from a free energy principle point of view, we, and Karl and his group, have argued very strongly that here's a principle that applies to everything: certainly any organ in the body, any collection of cells in the body. If one wants to be brain-specific, then one has to add very strong architectural assumptions. If you think about something like integrated information theory, then really, having positive phi comes down to having internal feedback loops, and everything has internal feedback loops. You're calling a lot of bluffs, in that if you start with these general theoretical principles about information processing, then it's very hard to construct something that's brain-specific.
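Chris's remark that positive phi comes down to internal feedback loops can be illustrated with a toy two-node system. The measure below is not IIT's phi; it is a crude whole-minus-parts predictive-information proxy of our own devising, but it makes the structural point: the proxy is positive only when the nodes genuinely loop into each other.

```python
from itertools import product
from math import log2

# Crude whole-minus-parts predictive-information proxy (NOT real IIT phi):
# the whole system's mutual information between t and t+1, minus what the
# two parts can account for on their own. Positive only with feedback.

def mi(pairs):
    """Mutual information in bits between x and y, from (x, y) samples."""
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

def phi_proxy(step):
    states = list(product([0, 1], repeat=2))            # uniform prior over (A, B)
    whole  = [(s, step(s)) for s in states]
    part_a = [(s[0], step(s)[0]) for s in states]
    part_b = [(s[1], step(s)[1]) for s in states]
    return mi(whole) - mi(part_a) - mi(part_b)

def feedback(s):      # A and B each copy the other: a genuine internal loop
    return (s[1], s[0])

def feedforward(s):   # B copies A; nothing feeds back into A
    return (s[0], s[0])

print("with feedback loop :", phi_proxy(feedback))     # 2.0
print("feedforward only   :", phi_proxy(feedforward))  # 0.0
```

Since feedback loops of this kind are ubiquitous in biology, nothing in a measure like this singles out the brain, which is the bluff-calling Chris describes.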
[42:06] Michael Levin: When I've brought this up to various people (because you want to do their theory justice: if there is a distinction, they should give the best case they can for it), the biggest argument I get is, "That can't be right, because I don't feel that my liver is conscious." Well, you don't feel that I'm conscious either. How would you know? That seems like a terrible argument. But I think that's primarily where people land: "Okay, we're good. I know I feel conscious. I don't feel any of the rest of this stuff being conscious. Good enough." That seems like a fundamental error to me.
[42:50] Chris Fields: No one says, "I feel like my brain is conscious."
[42:54] Michael Levin: Yeah.
[43:00] Mark Solms: I agree with Chris. That's a wonderful project, Mike; I really look forward to reading the output of your deliberations. I have also learnt from you to question that prejudice I shared with all of my colleagues: that it just goes without saying that when you talk about consciousness, you're talking about the nervous system in general and the brain in particular. It never crossed my mind to question that premise. Now, because of you, I find it a very interesting question. Chris's point about the free energy principle applies at the biological level, but it doesn't apply only biologically. The whole of natural selection is a self-organizing, complex, dynamical system in which there is an adjustment of the predictive model as to what needs to be done in order to remain viable: for the whole of life, for each species, and then ultimately for each of us as individuals. That brings me to what I think is a very important part of what the nervous system does, and the brain in particular: it remains plastic. It is massively plastic in terms of its amenability to updating its predictive model, compared to most organ systems. There's a hell of a lot more learning from experience going on in the brain. I don't mean it's exclusive; that's why I started with the statement that I did. You really have opened my mind to problematizing this prejudice. Remember what I said earlier in our conversation today: that consciousness is about palpating the uncertainties attaching to the prediction over the error signal. The error signal, of course, is what drives the learning process, and the learning process is pivotal to what consciousness is all about. It's learning from experience. So I think it's fundamental to a conscious system that it's doing predictive work; in other words, that it's updating its predictive model, and in particular, that it's updating its predictive model in a way that is not given by phenotypic innate predictions. In other words, it's feeling its way; it's palpating the precisions. Is this going well or is this going badly? It's formulating policies on the basis of fluctuating precisions in those policies. So I think an organ, or any mechanism, that does that, that learns by palpating the precisions of competing possible courses of action, is a candidate for consciousness. And that means that it's capable of choice.
[46:10] Mark Solms: It means it's capable of voluntary action. And I think that's what consciousness is for: to guide voluntary, as opposed to stereotyped, automatic policies. I would just add two further details. One is that if consciousness has qualities, which it does, then I think an important requirement is that there have to be categories of need. It's not a system that's just doing the same thing: I make widgets, I make widgets, I make widgets. It's: now I'm doing this, now I'm doing that, now I'm doing the other thing. Complex organisms have a multiplicity of needs, and they are categorical variables. They have to be satisfied in their own right. You can't just say, "I'm minimizing my free energy." It has to be: I'm minimizing my free energy, yes, but it's factorized across sleep, hydration, oxygen, et cetera, and each of those needs has to be met. Therefore, you have to have categorical, qualitatively differentiable preferences. I think that is a major factor in what gives rise to qualia in the first place: they have to be qualitatively differentiated from each other. The other thing I would say is that the nervous system has more to do with dealing with the whole bang shoot rather than with one need or another. I don't mean that each organ system deals with only one need, but the nervous system deals with them all. It's a meta-system in that sense, and I think that's an important reason why it's, not uniquely, but especially bound up with consciousness. And then the last point: the predictive hierarchy. The three of us in this room don't need to go through why the predictive model is hierarchically organized. I think that's another thing that the nervous system particularly lends itself to: it has a hierarchical anatomy. So those are my responses to your fabulously interesting question.
[49:22] Michael Levin: One of the things is that we have a number of other systems we're testing for all this. One thing that Giovanni Pezzulo and I are going to be looking at is predictive model updating in gene regulatory circuits. Something as simple as a gene regulatory network can learn from experience; we and others, Richard Watson and others, have shown that. We've shown it has six different kinds of learning, just there, with no other magic added. Now the question is whether there is an active inference version that can be mapped onto it, where, as the cell is hit with different stimuli, these pathways and molecular networks are updating their models and, as you said, have these qualitatively differentiated preferences. We'll look and see what that looks like. There are many other systems: morphogenetic systems where we're looking at that, and physiological systems. It would be nice to develop a panel of computational tools that could be applied to any system. IIT is, in some sense, an attempt at that.
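As a flavor of the kind of result Mike refers to (habituation is one of the forms of learning reported in gene regulatory network models by Levin, Watson, and colleagues), here is a toy two-gene motif of our own devising, not the published models: a stimulus induces a response gene R while slowly charging an inhibitor I, so the response to identical repeated pulses shrinks.

```python
# Toy two-gene habituation motif (hypothetical, not the published models):
# a stimulus S induces response gene R while slowly charging an inhibitor I
# that damps R, so identical repeated pulses evoke shrinking responses.

def simulate(pulses=6, dt=0.1, steps_per_pulse=100):
    R, I = 0.0, 0.0
    peaks = []
    for _ in range(pulses):
        peak = 0.0
        for t in range(steps_per_pulse):
            S = 1.0 if t < 30 else 0.0           # square stimulus pulse, then rest
            dR = S / (1.0 + 5.0 * I) - 0.5 * R   # induction of R, suppressed by I
            dI = 0.05 * S - 0.005 * I            # I integrates stimulus history slowly
            R += dt * dR
            I += dt * dI
            peak = max(peak, R)
        peaks.append(round(peak, 3))
    return peaks

print(simulate())   # response peaks decline across identical pulses: habituation
```

The printed peaks decline monotonically across identical pulses: a minimal form of experience-dependent model updating with, as Mike puts it, no other magic added.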
[51:10] Mark Solms: But on your last point, IIT, I just want to underscore what Chris said about feedback. I don't find myself particularly persuaded by it.
[51:31] Michael Levin: Yeah.
[51:34] Chris Fields: If we look at any cell, even a very simple prokaryotic cell, there are requirements for things like osmolarity, energy transduction, waste removal, temperature, et cetera, all of which have to be met separately. And there are gene regulatory networks that are specialized around those functions, maybe not completely specialized to just one of them, but specialized around, for example, expressing the right sets of sugar metabolism genes at the right time and in the right proportions in something like E. coli. So one has this kind of compartmentalization of needs, and compartmentalization of need-satisfying mechanisms, which then have to be tied together by a set of feedback loops that effectively serve as a meta-processor that allocates energy to dealing with different needs in real time. Energy is the basic resource, though one could also think of it in terms of allocating memory or computational resources. But that meta-structure that coordinates the response to different needs and prioritizes which pathways get the energy is, at least from an information-processing perspective, a separate system that has its own requirements for energetic and memory resources; it has to keep itself running too. You get something that architecturally looks not a million miles from a brain: it's hierarchical, it's got multiple channels, it has to satisfy many needs simultaneously.
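Chris's cellular "meta-processor" can be sketched as a scheduler: a finite per-tick energy budget, a reserved slice that keeps the allocator itself running, and urgency-proportional allocation across compartmentalized needs. This is our own toy illustration; the needs, rates, and budget are hypothetical.

```python
import random

# Toy cellular "meta-processor": a finite per-tick energy budget, a reserved
# slice that keeps the allocator itself running, and urgency-proportional
# allocation across compartmentalized needs. All values are hypothetical.

needs = {"osmolarity": 0.2, "energy_transduction": 0.5,
         "waste_removal": 0.3, "temperature": 0.1}   # current urgencies
BUDGET, OVERHEAD = 10.0, 1.0   # per-tick energy; the allocator's own running cost

def tick(needs):
    spendable = BUDGET - OVERHEAD                    # meta-system keeps itself alive first
    total = sum(needs.values()) or 1e-9
    for k in needs:
        allotted = spendable * needs[k] / total      # prioritize by relative urgency
        needs[k] = max(0.0, needs[k] - 0.05 * allotted)  # servicing reduces urgency
        needs[k] += random.uniform(0.0, 0.08)        # the environment keeps perturbing
    return needs

for t in range(5):
    needs = tick(needs)
    print(t, {k: round(v, 2) for k, v in needs.items()})
```

The architectural point survives the toyness: the allocation layer is a distinct system with its own running cost, prioritizing among qualitatively different needs in real time, which is what makes it look "not a million miles from a brain".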
[54:51] Mark Solms: Up until the last sentence, Chris, I followed everything you said, but I thought you were going to conclude with a different statement than you did. I thought you were going to conclude that this meta-control system within the cell, which has to allocate and prioritize energy resources, that sub-component of the cell's processing, is equivalent to the nervous system of that single cell. That's what I thought you were going to say, because I think the single cell is like the body at our level, and within the body there's the nervous system. From my point of view, the importance of the system's different needs having to be treated as categorical variables, a major reason why qualia exist, is that there's a meta-system that has to ask: which one of these things, which flavor, is at issue here? If there were a fixed pattern of how they get prioritized, the issue wouldn't arise. It's because there have to be qualitative prioritization processes that I think that aspect is the important one. And then there's the additional thing, which is the action bottleneck. There's the energy allocation bottleneck, as it were, because there's only a finite amount of energy resources; but there's also the fact that in our allostatic life we can't do everything at once. You have to say: I'm now going to deal with this need. So the prioritization also has to do with the action bottleneck. Well, we've discussed Mike's questions. I don't think we exhausted the potential answers, but we didn't get to yours at all, Chris, so we have to have another meeting. Next time we'll start with Chris's question.
[57:21] Michael Levin: Sounds good. I'm in. I'll send out an email. We'll schedule the next one.
[57:27] Mark Solms: I can't tell you how much I enjoyed talking to the two of you. Thank you so much yet again.
[57:32] Chris Fields: Likewise. So much fun. Thank you.
[57:34] Michael Levin: Yeah.
Chris Fields: Good. Very good discussion.
[57:36] Michael Levin: So long, gentlemen. Good to see you. Cheers.
[57:38] Mark Solms: Bye bye. Bye.
[57:39] Chris Fields: Ciao.