
Conversation with Mark Solms and Chris Fields #4

Michael Levin, Mark Solms, and Chris Fields explore how novel behaviors arise in problem-solving, whether explanted brain tissue and hybrid robots can be conscious, and how to define sleep in artificial and unconventional agents.

Show Notes

Chris Fields, Mark Solms, and Michael Levin discuss what novel behaviors are (in the context of problem-solving in novel circumstances), consciousness in explanted brain pieces, and sleep in unconventional agents.

CHAPTERS:

(00:00) Introducing Behavioral Novelty

(01:51) Affect-Driven Novel Behavior

(11:12) Surprise, Confidence, Tool Use

(18:18) Mistakes, Agency, Morphogenesis

(22:00) Parabrachial Complex In Vitro

(28:21) Hydranencephaly And Hybrid Robots

(34:53) Objective Criteria For Consciousness

(42:53) Outputs Of Consciousness Theories

(48:06) Artificial Agents And Sleep

(53:57) Behavioral Definition Of Sleep

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Mark Solms: I thought that's likely to be on Mike's agenda.

[00:04] Michael Levin: That wasn't on the list, so we can absolutely do that. Why don't we start there?

[00:14] Mark Solms: Do you want to get the ball rolling then, Chris?

[00:21] Chris Fields: No, I think we left it with your statement that it was uniqueness of behavior that counted. And I was just going to raise basically the same question of how unique does behavior have to be to count as unique? And with humans, that's kind of easy to answer. But with other systems, it seems more difficult to say what counts as a unique behavior or a novel behavior. And certainly as one goes down into the cellular scale, or the scale of unicellular organisms, it becomes a lot more difficult, I think, to say what counts as a novel behavior. Although it may be an old behavior executed under novel circumstances. Mike's "bury a worm" example, where they're doing something familiar, but they're doing it in previously unencountered circumstances, perhaps that would render it a novel behavior. I don't know whether that fits within your idea of what novelty is.

[01:51] Mark Solms: Let me say, first of all, that as always with you two, it's not a matter of what my definition is, because my definitions change all the time in response to conversations with the two of you. So let me throw my hat into the ring, but I doubt it'll come back in the same shape, because I think these sorts of questions are at the frontiers of what we need to think through. I don't think that any of us has a really satisfactory, thoroughgoing formulation of what view we should take on it. When you said, well, it depends what you mean by novel, it's clear at the human level there's lots of novelty, but as you go to simpler and simpler organisms it becomes more difficult. I think that's also what is to be expected: both the obvious point, that there's more novelty with us and less novelty as you go to simpler organisms, and also that it's a graded capacity. It's not either present or absent. And of course that applies to consciousness too. It's very unlikely that there's suddenly, at some moment in biological time, a dawn of consciousness. It's more that there's something edging towards it, something which possibly qualifies to be that, and then eventually you think: no, it's a matter of degrees of confidence in whether or not what you're looking at actually meets the criteria. What I was saying in the emails is that if an organism responds to a novel stimulus with a behavior (or an action; I'm using the word behavior very broadly to include any response) that is within its fixed phenotypic repertoire, then there are two ways of interpreting that. One is that it has recognized an analogy, which implies intelligence; it's a thought process of some kind, or rather an inference. Or alternatively, it's a mistake. In other words, it is misidentifying this stimulus as the other stimulus, and therefore giving the response that it would give to the other stimulus, which just happens to be the right one, or which is within the same ballpark, and that's why it triggers the same response. For example, it may be that there's a range of responses, nine out of ten of which are completely erroneous, and the organism therefore expires. One of them randomly happens to be the right mistaken response, a mistaken response which does work, and then that organism survives. So it would be a matter of whether what we're seeing there is just something to do with random variation, which is how natural selection works: now there's an increased chance that the offspring, the descendants of that one out of ten of that species, will always do the right thing in that situation. I mention that because the mechanism I'm going to specify now stands in contrast to that. The adaptive advantage of the mechanism I'm going to describe is that you don't need to rely on natural selection. You can actually learn from experience during your own ontogenetic development. That is a huge advance over having to lose a great proportion of the species so that the ones that just happen to do the right thing can survive and propagate. Now, the mechanism that I have in mind is the alternative to the phylogenetic natural selection mechanism.

[06:30] Mark Solms: It is that the organism encounters a novel situation, a novel stimulus or a novel problem, and at that point its determined behaviors fail it. In other words, its reflexes or its instincts, depending on how complex a creature we're talking about. Everything that pre-exists in its behavioral repertoire is of no use to it. So at that point a purely determined kind of response has to be replaced; otherwise they die, they make a mistake, or they randomly do the right thing. Rather than all of those, it behaves in a stochastic fashion, but not just a stochastic fashion, because it has affect. So it does various things, and it is able to register which one of these is working best. Because the need is the need: the problem posed is, I now don't have a solution to this situation, I'm in a state of need. This behavior is meeting the need; this behavior is not meeting the need, nor is that one. So I'm going to carry on doing this. I think that that's the fundamental contribution made by affect. And you can see from what I'm saying that it underwrites choice, it underwrites voluntary action, because it's not just random. Initially it's, I'm in a panic, I don't know what to do, but I can feel my way through the responses that I'm generating; I can feel which one is working. And that in turn leads to learning from experience. Now you have another string to your bow the next time that you encounter this situation. It's now a situation for which you do have a prediction. So that is the moment of suddenly expanded degrees of freedom, and the navigation of that situation is what feeling contributes. It provides value, the value being that survival is good and death is bad, and it allows you to register where you stand within that system of values. That's what feelings do. I would only add one other thing, which is that it's not just goodness and badness. It's goodness and badness within multiple categories of need. So it has a valence and it has a quality. That's the mechanism that I tentatively am proposing. That would be my behavioral criterion for feeling reasonably confident that what I'm seeing is an organism which is using what we call feelings to solve a novel problem.
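
As an illustration only: a minimal Python sketch of the affect-guided mechanism Mark describes here, with all names, thresholds, and the action set being hypothetical, not an implementation of any published model. When no learned response fits a novel situation, the agent samples actions stochastically, uses the felt change in the need (valence) to decide whether to persist or switch, and keeps whatever works as a new prediction for next time.

```python
import random

def solve_novel_problem(need_level, candidate_actions, try_action,
                        satiation_threshold=0.05, max_steps=100):
    """Affect-guided trial and error: a hypothetical sketch of the mechanism
    described above, not an implementation of any published model.

    need_level: current magnitude of the unmet need (e.g. respiratory distress)
    try_action: callable that performs an action and returns the new need level
    """
    learned = None                                     # the policy kept if it works
    action = random.choice(candidate_actions)          # initial behavior is stochastic
    for _ in range(max_steps):
        new_level = try_action(action)
        valence = need_level - new_level               # positive = "this is working"
        if valence > 0:
            learned = action                           # growing confidence: carry on doing this
        else:
            action = random.choice(candidate_actions)  # feels bad: try something else
        need_level = new_level
        if need_level < satiation_threshold:           # need met; surprise resolves into satiation
            break
    return learned                                     # a new prediction for the next encounter
```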

[11:12] Chris Fields: So it sounds like what you're proposing is a kind of general purpose analogy mechanism with affective response, some form of confidence that this will solve some kind of problem as the criterion for picking a good analogy.

[11:42] Mark Solms: I wouldn't have thought of putting it like that, but as you say it, I can see why you say that; it does make sense. But I'm glad you mentioned the issue of confidence, because I don't know if you remember, in my very rapid presentation in Boston I spoke at the end about the importance of precision, of the palpating and modulating of confidence. The affect, remember, is not just valence; it's valence within a category of need. So you're feeling an increasing or decreasing, let us say, suffocation alarm. The increase in the suffocation alarm gives you less confidence in your current policy, in the thing that you're doing. So you have increasing confidence in the error signal, that is the affect, and you have decreasing confidence in the action. So you change your mind, you do something else. Once that leads to the affect moving in a positive direction, in other words a decreasing suffocation alarm, decreasing respiratory distress, now you have more confidence in that policy. That in turn, the high-precision situation, drives the updating of the predictive model.

[13:33] Michael Levin: I was thinking earlier about what counts as a novel behavior. I wonder if we should lean more heavily on the notion of an observer: a novel behavior is one that surprises you as an observer of the system. Then we can say the system itself is surprised by it. Could we close the loop that way and say that a truly novel behavior is one that surprises some observer? That may be a scientist or some metacognitive module in the system itself. From that, I feel there's a certain symmetry there.

[14:15] Mark Solms: I agree with you completely, because it's a very neat formulation. What you're saying about the observer tightly mirrors the situation the organism found itself in, which was a situation of surprise: I do not have a prediction for this. So whatever you do then is going to be surprising. Initially there's an element of randomness, and that in itself is not surprising; stochastic behavior is one of the repertoire of responses that the organism has. What's surprising is that it's able to read the gradient, that this one is a better direction than that one. And so then you start seeing voluntary behavior, a behavior that you didn't predict, a behavior that's surprising to the observer. That's a voluntary behavior and that's a novel behavior. It's no longer stochastic.

[15:19] Michael Levin: I also, I also, sorry, go ahead first.

[15:23] Chris Fields: It also works in the sense of a behavior under uncertainty, when you're just trying things out; when it actually works, it's both surprising and pleasurable. The case I always go to is tool use, because it's highly analogical and you see it in lots of different organisms. When a system is learning how to do tool use, say a young human learning how to use their toy tools, it's a very pleasurable activity when it works, and it's very frustrating when it doesn't work. So it seems like a good kind of model system for thinking about this.

[16:26] Michael Levin: I love that because I bet we could engineer some kind of a cellular system and some sort of instrumental learning thing where the cell gets to do something. It works, it helps. So a tool use scenario. And could we measure some sort of positive affect, or at least a reduction of stress, where it gets that "I was able to make that happen. Great." Is there a primitive version of that we could detect in a cell or a tissue? That gives it a chance to learn something.

[16:59] Mark Solms: Yes, the whole point is to be able to come up with an agreed criterion. Then you can start doing exactly that. That's the whole point. To go back to Chris's remark about surprise, both of your remarks about surprise I like very much because to me, when an affect is generated, it's always because of surprise. Negative affect is things are going worse than expected. A positive affect is things are going better than expected, because this is random now. So the fact that this is reducing the need is a surprise, but it's a pleasant surprise. The fact that this works, you don't expect random behavior to work. You don't have high confidence in stochastic behavior. So it's a surprise in both directions: worse than expected or better than expected. Then the surprise reduces, and that's when you get to satiation. Now I know this works. It's no longer surprising, I'm no longer in a state of uncertainty, and I've now got a new behavior at my disposal.

[18:18] Michael Levin: I love what you said earlier about this: is it a generalization or is it a mistake? I think that's very cool for two reasons. One, somebody said that being able to make mistakes is a mark of agency. Chemistry doesn't make mistakes, but if you're an agent and you've made a mistake, that says a lot about what you've got going on, because you've had expectations.

[18:48] Mark Solms: I agree. I would specify that what's meant by 'mistake' there is 'wrong choice.' It's only once you have the functionality of choice that you can speak meaningfully of mistakes in the sense that you're using the word now. If it was Dan who said that, then in Dan's sense of the word you can speak of a mistake where you just mean it did something stupid: it responded to stimulus A as if it was stimulus B and therefore died. That's different from 'I responded to it as if it was stimulus B; that didn't work, so I'm now moving on to, okay, I don't know what kind of stimulus this is, let me try various things.' Then you can make mistaken or correct choices as you go.

[19:51] Michael Levin: That strikes me as a continuum, because you can make really dumb, useless mistakes, and you can also make useful, inspired mistakes and everything in between. We deal with this in embryogenesis all the time: what exactly is a birth defect? Is it an error? Because one wrong shape is some other species' perfectly good shape. We think about this all the time, as these systems try to navigate that morphospace and make errors. Error with respect to what? And then you start thinking about expectations. Some of these mistakes work out great. Some are random, and some are, as you pointed out, more along a gradient of trying to improve things, guided by affect. That gets me into one of the things I was gonna bring up, which is this issue of expectations. In your book you talked about the intrinsically conscious properties of the parabrachial complex and those areas. I'm curious what you think about this: let's say I extract it into an ex vivo culture system, so I've got this parabrachial complex, and it's now living in a petri dish on our benchtop and it's being fed. Would it now think that it was in a sort of sensory deprivation tank kind of experience or not? Once it's out of the body, does it maintain the special status that it had or not?

[22:00] Mark Solms: I'll give a very concrete answer to that, but I think there's a deeper principle that you're asking about, so my concrete answer to this particular instance might not address the deeper principle, in which case please say so and then we can look at it again. When I say that all reticular activating system nuclei are intrinsically consciousness-producing nuclei, that is their function: their function is to modulate precision, and in the ways that I was describing earlier, the palpating of precision is an essential mechanism, as opposed to the message passing in, say, visual cortex. That's what I mean by intrinsic: the message passing in visual cortex doesn't need to be conscious. It only becomes conscious when it's modulated by reticular modulation. But now here comes the specific answer. The reticular activating modulation, that palpating and adjustment of confidence, is in response to what's happening in the periaqueductal grey. The periaqueductal grey is not part of the reticular activating system; it's the other end of the cycle. So I'm acting up here, message passing, I'm doing this and I'm sensing that in response, and that's all modulated by reticular arousal. And then there's the residual error signal: I have confidence in this policy, but it results in this error. In other words, my confidence is misplaced. The periaqueductal grey is where all of these homeostatic systems ultimately deliver their residual error, and on the basis of its registration of the magnitude of the various error signals in relation to the current opportunities, it prioritizes.

[24:40] Mark Solms: The prioritization of an error signal is the feeling of the affect. Say I've prioritized fear over thirst: I feel fear. That's the prioritization of fear. Now thirst is going to be dealt with automatically. I'm not going to palpate my precision in my actions in relation to thirst; that's rendered automatic. I've now prioritized fear, and I set about an action program that is designed to meet that fear. That action program is the activation of long-term memories; in other words, it's an action program with expected consequences. But these are expected precisions, not fixed precisions. I'm now palpating the precision in my policy, in my predictions, according to the extent to which they meet or do not meet the sensory consequences that were predicted. That's a long way of saying that you can't really speak about the parabrachial complex as consciousness-producing unless you link it to how it's actually modulating the message passing. That's one type of consciousness it's producing, and the other one is in relation to the periaqueductal grey consequences of that action cycle. There's the feeling generated in the PAG, and then the feeling of 'this about that' is the other aspect of consciousness. The parabrachial complex, or any other part of the reticular activating system, if you were to cut it out and put it in a petri dish, wouldn't have that functionality unless it was in that loop.
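
A toy illustration of the prioritization step Mark describes; the need categories, numbers, and function names are invented for the sketch and are not a model of the actual circuitry. Several homeostatic residual errors compete, the winner becomes the felt affect, and the rest are handled automatically.

```python
def prioritize_needs(error_signals, opportunities):
    """Pick the need that becomes the felt affect.

    error_signals: need category -> residual error magnitude, e.g. {"fear": 0.9, "thirst": 0.4}
    opportunities: need category -> how actionable that need is right now (0..1)
    Returns the prioritized category plus the remainder, which are dealt with
    automatically (no precision palpation devoted to them).
    """
    # Weight each residual error by the current opportunity to act on it.
    weighted = {need: err * opportunities.get(need, 1.0)
                for need, err in error_signals.items()}
    felt = max(weighted, key=weighted.get)       # the prioritized error is what is felt
    automatic = [need for need in error_signals if need != felt]
    return felt, automatic

felt, automatic = prioritize_needs({"fear": 0.9, "thirst": 0.4},
                                   {"fear": 1.0, "thirst": 0.7})
# felt == "fear"; thirst is handled automatically, as in the example above
```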

[27:23] Michael Levin: We sometimes discuss in the lab: when you do have something like that sitting in a petri dish, what is the input? Is the input nothing? Because the cells still have the sensors on them, they're still facing outwards, they're in the media, but is that the equivalent of turning everything off and getting no input, or is that the equivalent of getting bizarre inputs that are surprising because they're different than what you would get physiologically? Or is it even possible to turn it off entirely, because you're always going to get some receptor, and if you get rid of the receptor, there's still some transduction machinery — there's always something. And the question is, what happens in sensory deprivation: you try to shut that stuff off, but it will immediately generate its own simulator.

[28:21] Mark Solms: I think there's not going to be one answer to that. What would happen would depend on all sorts of imponderables. But of course something will happen, as you say. I want to introduce one instance, which again I showed you in that brief talk in Boston, although it's only a small instance of the sort of situation you're describing: hydranencephalic children. There they have a reticular activating system, but there's no cortex to modulate. So in what sense are they conscious? What they're conscious of is the errors. They're conscious of the feelings, but they don't know what the feelings are about. So they're still able to generate consciousness, and they're still creating a reticular activating response. We also simplify when we speak, because the reticular activating system doesn't only activate cortex. In fact, it activates all over the place, including the spinal cord and everything. It's just that the main destination by far is ascending, but there are also descending pathways, descending modulation right down. So it's modulating something, but it's not generating conscious images.

[29:55] Chris Fields: This is very interesting because we're discussing the role of a particular set of cells in error detection on the one hand and precision modulation on the other hand. But both of those sets of cells can also be considered to be active inference agents in their own right, living within some environment that provides them with inputs and responds to their outputs. And they have their own little models of how they expect their environment, which is whatever sends them inputs, to respond to whatever outputs that they give it. So we're working at two different levels of description here. In the cutting it out and putting in a dish thought experiment, one has to characterize that normal, in situ environment of this little system very carefully to know how one would give it inputs and measure and respond to its outputs in a way that was close enough to its model that it would behave in the way it's behaving in the body and in some way that's intelligible to you as an observer.

[31:40] Mark Solms: You're completely right. You're right that each one of these cells and each one of these nuclei has its own predictive model, and they're responding to their environment. I don't do this kind of work like you do, Mike, and like you have, Chris. I don't know to what extent it is possible in that situation to start giving artificial inputs to the cell or to the nucleus, so that you can start doing experiments to empirically address the sorts of questions we're discussing here.

[32:42] Michael Levin: Totally possible. That was one of the things I was going to show you guys during the visit, and we never got to it because the worms stole the show. Wesley has these hybrids (many people have made different kinds of hybrids): brain structures, and for us also non-brain structures, that are hooked up to artificial bodies. People have published that you can take a lamprey brain and make it drive a little robot cart around, basically by taking sensory-motor substitution and prosthetics and putting that on steroids, and just saying, okay, instead of your normal limbs, we're going to give you new stuff: new sensors and new effectors. That works pretty well. There have been robots powered by slime mold and by fish brains and by frog brains. You can certainly do it. We have some of that going on, building artificial virtual-reality worlds for a bunch of cells in a dish so that they can inhabit that world, where certain things are rewarded and other things are discouraged. What we are able to collect from that is data on behavior and physiology. We're not able to access first-person consciousness. I think that will stay that way until and unless we merge with the experiment. At some point, if you want to know what it's really like to be a composite system of whatever this crazy thing is in yourself, you plug it into your brain in some high data-transfer way, and maybe you'll have some experience of it. I don't know what it's like to be any of these agents doing their thing, but you can absolutely explant all these brain regions, instrument them, and give them artificial sensors and effectors, and they will act sensibly in these virtual worlds. These hybrids will drive around and do various adaptive things that they're being rewarded for.

[34:53] Mark Solms: I'd like to know what you guys think about this. I'm going to keep it very brief because you got your list there, Mike. What you are talking about, what we are now talking about is, of course, the infamous problem of other minds. If you go back to the beginning of our conversation today, we seem to be in the region of some kind of consensus that that sounds like a reasonable, formal description of what a feeling agent would and would not be able to do when we were speaking about novel problems and novel solutions. I told you what I think the mechanism is whereby that happens. It has to be the organism is registering its own state. It's registering its own state in relation to its free energy — is this increasing or decreasing the chances of me continuing to exist as a system? It's only from its point of view; it doesn't have the same value to any other organism. The registering of that state is intrinsically subjective. The value only applies to that organism. And then it has to be within one of these categories of need. It has to be qualitative; it has to be qualified. You've got a mechanism I've just described, which is subjective and valenced and qualitative, registered by the system for the system. If that's the mechanistic description, then is that not just a description of a feeling? What else is that but a feeling? That's what a feeling is. Do we have to throw up our hands and say, problem of other minds, we can never know? Or can we approach it this way and say, once we've given a mechanistic description of what a feeling is, then we can see situations where such a mechanism is present and where such a mechanism is not present. We don't have to be the system. I think it's a question of at what point it is justified to take the viewpoint of the system, to say that the system has a viewpoint at all. If it's justified to take the viewpoint of the system, if it's got this mechanism and it shows that behavior, then it must be feeling bad now and it must be feeling good now. What do you think of that? Chris, you want to? Is there no possibility of an objective criterion? That's what I'm saying.

[38:07] Chris Fields: I think what that motivates, at least for me, is to try to understand in some phylogenetic lineage, or looking at phylogeny as some sort of network of systems, how that assignment of affect is implemented in different kinds of organisms, very simple organisms, for example, things like worms or paramecia, where we have some chance of understanding a pathway that does this kind of precision modulation. That's why I was bringing up the system even in E. coli, where there is precision modulation. It seems like a bit of a stretch to interpret in terms of feedback, but maybe it isn't.

[39:23] Mark Solms: That's a very interesting conclusion, a provisional conclusion for us to have reached. I will now defer to Mike's list.

[39:34] Michael Levin: This is very good. There's only one other thing on the list. Just to speak to your last point, I think that's very interesting. I agree with you that we don't have to throw up our hands and say we have no clue what the conscious state is; I think we can objectively make some good statements about that. But I wonder if that goes back to the Mary argument about the color scientist. I think you can say quite a lot from that objective perspective, but I wonder if that's really everything. Does it really add nothing to then be the system? Say here's the system, we've objectively drawn all kinds of conclusions about whether it's happy or unhappy or whatever, and then magically you become the system. Is your experience, "well, yeah, I already guessed all of that, so nothing new here," or is it, "oh, now there's something added"?

[40:45] Mark Solms: No, I don't agree with that, and I think that you don't either. I think that when we're engineering such a system, it's having artificial feelings about what it is like to run out of battery power. I've never been an inorganic system and I've never had a battery. So I can't imagine what it feels like to run out. I can imagine, but I can't know what it feels like to run out of battery power when you're a robot that's got dysfunctional architecture. But that's not the same as Frank Jackson's knowledge problem. In Frank Jackson's knowledge problem, Mary, the visual neuroscientist, would not only not know what it's like to see; her knowledge of how vision works makes no predictions that there would be something it is like to see. There's no necessity for there to be something it is like to see, but you can understand the whole physical mechanism of vision and leave out what it is like to see and you've still got the whole mechanism. In that case, "what it is like to see" is neither necessary nor predicted. Mary would never have expected that there is something it is like to see. In our case, the one we're talking about, we fully expect there will be something it is like from the viewpoint of the agent to be running out of battery power. We can say it will be unpleasant. We can quantify the unpleasurable increase along this gradient because it has to do with the decreasing confidence in the policy as it leads towards critical values. The exact quality you wouldn't know, but it's way different from the position that Mary was in.

[42:53] Michael Levin: Maybe Lewis Carroll or somebody was saying that in order to transmit the feeling of fear, to make someone feel scared, the thing to do is not to describe what that's like, but to provide a stimulus that scares them, to say something that would scare them. One idea along this other minds thing is: what format should the output of a decent theory of consciousness be? If we had a good one, what would it output? Most of our theories output numbers and things about observable behavior, but what does a theory of consciousness actually output? Maybe what it outputs is poetry or scary stories that try to put you in the same state they're talking about. A stimulus or scenario in which you get to experience it, at least to some extent, as opposed to a scientific objective story about links and positive and negative. That's why I was leaning on this connection: what I think a theory would output is protocols. Protocols: "You want to know what it's like to be this? Well, here's a protocol for putting yourself as close as you're going to get to the state that this thing is in." Then you'll know.

[44:45] Mark Solms: So I think when you posed the question now, two answers occurred to me in terms of the output of a theory of consciousness. The second one is closer to what you're talking about, Lewis Carroll's advice. The first one is more remote. It's got to do with the numbers. When you were saying, "well, it won't be a system of numbers and so on. It's something else, because it's a theory of consciousness," I was thinking, rather, what it will produce is a system of numbers. Its output will be, I'm speaking poetically like you were, it will be a formalism. It will be algorithms and formalisms which underwrite those algorithms. But those formalisms will describe both the behavior of the organism, in other words, the external observables, and it will describe the internal states. So what is novel, what is the contribution of the output of a theory of consciousness is you'll have one formalism which will explain these tightly correlated variables, the physiological observables and the psychological observables. That's the yield, the scientific yield of a theory of consciousness. In other words, it does away with the mind-body dualism. It gives us a dual-aspect monism, which is not a metaphysical one, but a natural scientific one. But to come to the second answer, which I think gets closer to what you're getting at, is that I think, and I don't just think this, I've experienced it. As I was working out my own felt-uncertainty theory of consciousness, I started to have experiences that I hadn't had before. And those experiences were of the kind where I would suddenly realize, "oh, that's why I'm feeling this now." This is not just something I'm subject to. It's not just a thing that happened. I know why I'm feeling this in response to that. I started noticing my picture of the world is not so continuous. I started to have different experiences which were yielded by the scientific account. But it wasn't that I started to have feelings that I hadn't had before. It was more in the nature of I could understand my feelings in ways that I never had before. And that has consequences. But it's not whole new sets of feelings. It's more like thoughts that flow from the feelings that I hadn't previously thought. That's how I've experienced it. So it is what you were saying, Mike. A theory of consciousness won't only have objective consequences. It surely will also have subjective consequences.

[48:06] Michael Levin: In the last few minutes, one other question I meant to ask you. We've talked about this before, and you mentioned it towards the end of your talk here in Boston: the artificial agent that your team is building, is it going to sleep, either by design or as an emergent feature you predict? Is that something you guys have thought about?

[48:33] Mark Solms: We're really at a very basic level so far, so there's a lot to come. We've gone through one phase of our study and we're now starting the second phase of it. One of the main things we're trying to do in the second phase is making an environment which is better able to bring out and display the functionality. We're also wanting to introduce multiple agents in the environment, where the agents are having to predict each other's predictive models. The needs that we're building into this agent are just ciphers, because we're dealing at this stage with entirely virtual agents. It has a need for energy supplies. It has to find those resources in the environment and the environment keeps changing. While it's doing that, it bangs into things and has to avoid banging into things because this is the equivalent of suffering tissue damage. It needs to rest in order for the tissue damage to repair, which conflicts with its need to find the energy resources. Those were the reasons we used those particular ciphers: we wanted to create a situation in which one need competes with the other. So we theoretically have introduced something that functions a little bit — I say theoretically, more like metaphorically, or in some way, we've created something which is analogous to a need to sleep. But I don't think it begins to tap into the sorts of things that you're getting at there. For example, a crucial aspect of sleep is that the agent is no longer navigating an environment. It's no longer having to test its predictions against sensory consequences because there aren't any actions that can offer any sensory consequences. That enables it to be offline, exempted from having to do that sort of thing. It's able to do a whole lot of other things, including memory consolidation — the very introspective task of attending to which of the new synapses, or the new synaptic weightings, that I established today am I going to retain and which am I not. This has increasing consequences the deeper you go into the predictive hierarchy. If something happens today that was not predicted by one of your most heartfelt beliefs, you're not going to just give up your heartfelt belief on the basis of one contrary bit of evidence. You need to really consider. I think this has a lot to do with dreaming. Why does consciousness intrude into sleep if there are no problems to solve? Because you're no longer predicting anything about the world. I think it's a kind of housekeeping or a mopping-up exercise that's best done offline, because you don't have any here-and-now problems to contend with; you can do this consequential business of internally considering, on the basis of an internal precision modulation exercise, whether to retain your old prediction or to update it when it comes to deeper levels of the hierarchy. When you're dealing with a complex system with a deep hierarchical predictive model, that is very important. It has consequences for a theory of consciousness because of what I just said about dreaming. So why must you lose consciousness at any point? In other words, why must you sleep? And why is sleep punctuated by dreams? I think those kinds of questions can be addressed mechanistically in an artificial agent that has proper sleep, not the toy sleep or rest that I was talking about earlier.
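
A rough sketch of the kind of competing-needs setup Mark describes for the virtual agent; the class, numbers, and update rules below are invented for illustration and are not the team's actual model. The energy need requires foraging, damage accumulates from collisions while foraging and is repaired only by resting, so the two drives conflict and have to be prioritized.

```python
import random

class ToyVirtualAgent:
    """Two competing needs, loosely analogous to the ciphers described above."""
    def __init__(self):
        self.energy = 1.0   # depleted by activity; restored by finding resources
        self.damage = 0.0   # accumulated from collisions; repaired only by resting

    def step(self):
        # Whichever need is currently more pressing wins the competition.
        hunger = 1.0 - self.energy
        if self.damage > hunger:
            self.rest()
        else:
            self.forage()

    def forage(self):
        self.energy = min(1.0, self.energy + 0.10)   # found some resources
        self.energy -= 0.05                          # moving costs energy
        if random.random() < 0.3:                    # sometimes it bangs into things
            self.damage += 0.2

    def rest(self):
        self.damage = max(0.0, self.damage - 0.1)    # "tissue damage" repairs offline
        self.energy -= 0.02                          # resting still burns a little energy

agent = ToyVirtualAgent()
for _ in range(100):
    agent.step()
```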

[53:16] Michael Levin: One reason I ask is because I have a postdoc who's very interested in sleep in unconventional systems. We've been thinking about how to recognize sleep in things that don't have the typical, obvious hallmarks. In planaria and paramecia and in our tissues there are circadian rhythms and things like that, but the informational features you're talking about now, the consolidation and all that: besides simple inactivity, how do we recognize sleep? Do xenobots sleep? How do we recognize that? I'm thinking about this whole spectrum from the truly minimal agents.

[53:57] Mark Solms: Since we're out of time, it's a good moment to say we've come full circle, because my answer to that is: one objective way of ascertaining whether or not the organism is asleep is this. If it's an organism which is capable of the sort of thing we were talking about earlier, generating novel behaviors, then during sleep that should stop. That's an essential feature of sleep: you're no longer dealing with life's problems. Externally, you're no longer dealing with the external environment. You've withdrawn from it, so you're no longer acting on the environment. It really is, in many different senses of the word, definitional of sleep that you're not producing voluntary behaviors. So I would say if you're talking about organisms that do display the functionality that we were speaking about earlier and calling that voluntary behavior, when that stops, if it stops for a phase of the diurnal cycle, that's its sleep phase. It doesn't have to happen only once, but if there's a protracted period like that, it would look pretty much like sleep to me.
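
One way to operationalize that behavioral criterion, sketched in Python with hypothetical inputs: score each interval of the diurnal cycle for how much novel, observer-surprising behavior the organism produced, then look for a protracted window where that score stays near zero.

```python
def candidate_sleep_phases(novelty_scores, threshold=0.1, min_length=6):
    """Return (start, end) index pairs of protracted low-novelty windows.

    novelty_scores: per-interval scores of novel/voluntary behavior, e.g. one
    value per hour across the diurnal cycle. Purely illustrative, not a
    validated sleep assay.
    """
    phases, start = [], None
    for i, score in enumerate(novelty_scores):
        if score < threshold:
            if start is None:
                start = i                  # a lull in voluntary behavior begins
        else:
            if start is not None and i - start >= min_length:
                phases.append((start, i))  # long enough to count as a sleep-like phase
            start = None
    if start is not None and len(novelty_scores) - start >= min_length:
        phases.append((start, len(novelty_scores)))
    return phases

# Example: a 24-hour record with an 8-hour lull in novel behavior
scores = [0.6] * 8 + [0.02] * 8 + [0.5] * 8
print(candidate_sleep_phases(scores))      # [(8, 16)]
```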

[55:21] Michael Levin: Yeah, cool.

[55:22] Mark Solms: There, again, we see the benefits of having a more objective mechanistic definition rather than having to become the organism to know whether you're asleep or not.

