
Conversation #1 between Richard Watson, Mark Solms, and Michael Levin

Richard Watson, Mark Solms, and Michael Levin discuss dreams, affect, and artificial minds, exploring brainstem lesions, information encoding, and multilevel causation. They also examine emergence, individuality, consciousness, group minds, time, and agency.

Watch Episode Here


Listen to Episode Here


Show Notes

Discussion between Richard Watson, Mark Solms, and me about brains, agency, and more.

Richard Watson - https://www.richardawatson.com/

Mark Solms - https://scholar.google.com/citations?user=vD4p8rQAAAAJ&hl=en

CHAPTERS:

(00:05) Dreams, affect, artificial minds

(13:43) Brainstem lesions and affect

(21:43) Where information is encoded

(31:13) Multilevel causation and meaning

(41:13) Individuality, emergence, consciousness

(51:17) Group minds, time, agency

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:05] Richard Watson: Historically, I spent most of my time thinking about evolutionary processes, and in particular the phenomena of transitions in individuality, such as from single-celled life to multi-celled life, where there's a change in the level of the unit that's doing the evolving. More recently, I've been thinking about processes of adaptation that are not natural selection, that are more like learning than natural selection. And I think that they are different. And I've been thinking about a way of modeling cognition and learning in terms of oscillations and synchronization, and how synchronization affects higher-level decision-making processes, things like that. That's where I'm at the moment.

[01:18] Mark Solms: When you have to summarize your life's work in a few sentences, it's an interesting exercise. There's a little home movie that was made of Sigmund Freud after he had fled to England in 1938. It must have been one of the early talking home movies. He toddles up to the camera in the garden of his house in Hampstead, and he says the following: "My name is Sigmund Freud. I started my professional life as a neurologist, trying to bring relief to my neurotic patients. I learned some interesting things along the way, but they came at a terrible price." That's his life. I think the terrible price referred to is the loss of credence among his scientific colleagues. Your summary brought that to mind, and I realized it's relevant to my own story. I started my professional life as a neuropsychologist. I worked with a neurosurgeon named Owen Sparrow; although I trained in South Africa, I worked with him in London at the Royal London Hospital. I started out as a neuropsychologist because I was fascinated by how mind and brain relate. By mind I meant the thing that we are busy experiencing, and I was rapidly disabused of that conception of mind. I was told not to ask questions like that because they were bad for my career. When learning about memory I would ask, "But why does it feel like something to have a memory?" That was the kind of question that was shot down.

[05:22] Mark Solms: And really, a purely functional, behavioral, and cognitive account of the mind left me very frustrated. So this is why the story about Freud's terrible price came to mind. When I emigrated to England in the late 1980s, which is when I worked at the neurosurgery department of the Royal London by day, I trained as a psychoanalyst by night, which, as one of my colleagues at that time said, was like an astronomer training in astrology. But the reason I did that is probably obvious. For all of its enormous faults, psychoanalysis was the one approach to mental science which took as its starting point subjective experience, the experienced life of the mind. As part of my training, I had to myself undergo a psychoanalysis, which took me nine years, five times a week. It is quite an education to get to know yourself in that degree of detail and depth. What you quickly learn, if you take that subjective observational perspective on the mind, is what a terribly important part is played by feelings, which are precisely the most fundamental form of subjective experience. Feeling is what comes from the subject, what the subject brings to its perception of objects. My research around that time, because of the melding of these different interests, focused on brain mechanisms of dreaming. But what I learned about the brain mechanisms of dreaming was that they are not the same thing as the brain mechanisms of REM sleep, which until that point we had assumed they were. In other words, we knew on the basis of correlation that dreaming happens during REM sleep. On that basis, we assumed that dreaming was simply the subjective experience of the objective physiological state of being in REM sleep, and therefore that they shared the same causal mechanism. Once that correlation was established, interest in dreams was abandoned, because they're such slippery fish when it comes to data gathering. We studied only the physiological mechanisms, and using the best methods available at that time inevitably meant studying animals, because there were no non-invasive methods for isolating which parts of the brain were causally responsible for generating this state. The subjective side was completely disregarded. All the work done on identifying the brain mechanisms of dreaming was done on animals.

[09:26] Mark Solms: And then there was no possibility of monitoring what these lesions were doing to the subjective experience of dreams. I then did a big study, which I published in the 90s, of human beings, 361 of them, with focal brain lesions, studying the effects of those lesions on their experience of dreaming. What I found was that damage to the mesopontine tegmentum, which leads to loss of REM sleep in human beings just as it does in other animals, did not lead to loss of dreaming. Conversely, there was another lesion site, in the ventromesial quadrant of the frontal lobes, in the white matter down there, which led to loss of dreaming with preservation of REM sleep. They turned out to be doubly dissociable functions. That's the price we pay if we eschew subjective data. To fast forward from there, I became interested in the brain mechanisms of REM sleep and dreaming, and then in brain mechanisms of consciousness generally, recognizing that we were taking too cortical an approach to the study of consciousness, and that consciousness is fundamentally generated in upper brain stem structures. For example, the smallest lesion that is required to induce coma is 2 cubic millimeters, in the parabrachial complex. Human beings born with no cortex seem perfectly conscious in the sense that they wake up in the morning and they're reactive to stimuli. In particular, they are emotionally reactive, as are decorticate animals; even in human beings with no cortex, consciousness persists. I became interested in those brain stem sources of consciousness, and then was gradually persuaded of the view that it is not a purely quantitative level of consciousness that's generated down there. It's not just a blank wakefulness; it has quality and content of its own, and that is affective feeling. In recent years I've been working on two fronts. The one is trying to better understand what precisely is going on in these upper brainstem structures that are prerequisite for any form of consciousness and that specifically generate affective feelings: trying to understand the relationship between affect and consciousness, understanding consciousness as fundamentally affective, and trying to reduce that to its underlying mechanisms, and even to mathematical formalisms, because it turns out it's all fundamentally homeostatic. Affect is an extended form of homeostasis; it's not a very complicated thing. So together with Karl Friston, I've been trying to understand at a fundamental level how affect is generated mechanistically, in the belief that if we can get a grip on that, then we will have a mechanistic grip on consciousness itself. The other track that I'm working on, which takes me into your world and also somewhat out of my depth, is an attempt, together with colleagues who are computer scientists and physicists and roboticists, to artificially engineer a system with the mechanisms that we have inferred from the biological base, and to instantiate that same mechanism in an artificial consciousness. That's the wildest thing I've ever been involved in. I'm fortunately old enough now to be allowed to do wild things without it destroying my career. But it is an absolutely fascinating thing to be doing, too. So that's a long-winded account of who I am and what I'm up to, Richard.

[13:31] Richard Watson: Thank you. Are we assuming that Mike doesn't need to do an intro because we both know Mike already?

[13:41] Mark Solms: We both know Mike, yes.

[13:43] Richard Watson: All right. Can I? One thing that popped up there was this 2 cubic millimetres of brain at the top of the stem and its role in consciousness, or rather the smallest lesion that could remove consciousness. Is that the same as saying that's the place where consciousness is generated? They're not the same, are they?

[14:13] Mark Solms: No, that's a fallacy that has long bedevilled my field, starting with Broca: the idea that because a lesion in a specific part of the frontal cortex obliterates the capacity to speak, or at least to speak grammatically and fluently, that finding means there's a language center, a part of the cortex within which the faculty for language resides. That is, of course, wrong-headed reasoning.

[14:54] Richard Watson: It's noted from the... Sorry, go ahead.

[14:57] Mark Solms: No, but it nevertheless is an important and interesting fact that damage to such a small area can obliterate consciousness entirely because it demonstrates, and that's the word that's used correctly, that it is a prerequisite: without that you can't have consciousness. So it must be contributing something fundamental to what is obviously a more system-wide state.

[15:26] Richard Watson: It's like finding the smallest genetic knockout which stops you from having an eye or stops you from developing properly. The smallest genetic knockout that breaks something is not the same as the place where the information for that thing lives.

[15:48] Mark Solms: So for that reason, alongside another one, which I'll briefly mention: these children I told you about, by the way, the condition is called hydranencephaly, not hydrocephaly. These kids can't speak because they have no cortex. Many of my colleagues are skeptical that they really are conscious. They say it's the moral fallacy: I would like to believe that they're conscious, so I project the property of consciousness onto the behavioral observations that I see. I hasten to add, none of them have actually related to these kids. It's something to say in a kind of abstract philosophical way; it's a lot harder to sustain when you have experience of interacting with them. Because of that doubt, and the fallacy that you just referred to, I want to point out that the lesion evidence is not the only evidence for this part of the brain, this upper brain stem, playing a pivotal role in consciousness, and in affective consciousness in particular. It's not the only evidence at all. I'll just quickly mention a few other lines of converging findings. One is that if you stimulate those structures electrically, which we get opportunities to do every now and then in awake brain surgeries, you generate the most intense affective states. You don't generate a waxing and waning of the level of arousal. What you generate is intense feelings: for example, suicidal depression in people who've never been depressed, intense fear, dread, excruciating pain, and so on. Really intense affects. In fact, the treatment of chronic pain follows the same line of finding: there's a part of the upper brain stem called the periaqueductal gray where you can stimulate with higher-voltage electrodes and very successfully modulate chronic pain clinically. That's the brain stimulation evidence. There's also imaging.

[18:40] Richard Watson: The brain stimulation evidence: your examples were only of negative affect.

[18:48] Mark Solms: In fact, it was recently commented that we find more sites in the cortex that generate negative rather than positive affects. But the affects you generate with cortical stimulation barely deserve to be called affects; they're more like memories, or memories of feelings. You get the widest range and the most intense affects from the upper brain stem, particularly the periaqueductal grey, but also all of those reticular activating nuclei, and the diencephalic and basal forebrain ones just above them. There's also functional imaging. You image research participants when they are in intense affective states, and the activation is in the upper brain stem and the pathways ascending from it. Interestingly, that includes orgasm. The mainline medications used by psychiatrists manipulate serotonin, dopamine, noradrenaline, and so on. These are neuromodulators whose source cells are in the reticular activating system. If these systems were just for the level of arousal, you could see why an anesthetist might be tinkering with them; but no, it's the everyday psychopharmacological interventions that manipulate those systems. So if you take all of those different lines of evidence (the lesions, the brain stimulation evidence, the functional imaging, and the pharmacological manipulations), I think it becomes more reasonable to speak of these structures actually being where affect is generated. That doesn't mean that they are the only structures involved in affect, because what those arousal systems are modulating is everything else. They are modulatory systems, which means they're modulating something. But I think it's not just a matter of a disruption at a critical site of a complex functional system; I think it's a little more interesting than that.

[21:39] Richard Watson: Sure.

[21:43] Michael Levin: I'd love to get your thoughts on this; Richard just asked a really interesting question. You made the point that the fact that this particular piece of neuroanatomy is required for consciousness doesn't mean that that's where it lives. I'd love to dig in and see what it would mean to actually find the place where something like this lives. I'm not even sure what a positive control for that kind of question looks like. We understand the logic, but what would a proper answer to that look like? I'm increasingly convinced, and here you get into issues of representation, that we need to change the way we think about that question entirely, as when people ask me where the information for the Xenobot lives and things like this. This issue of things being encoded and specified and represented: I feel we don't have the right framework for this. I'm just curious what you think a good answer to that would even look like.

[22:49] Mark Solms: Are you asking me that, Mike?

[22:51] Michael Levin: Both of you, I'd love to hear you discuss it. I'm assuming you think it's a question.

[22:57] Mark Solms: You could go to one extreme, the most profound level of trying to answer that question, which I won't dwell on because it's above my pay grade: the relational interpretation of quantum mechanics, in which it's all a matter of interactions, of correlations. But sticking with what I know, I would answer that by reference to the functional architecture that we are building in that artificial consciousness project. Are we building a thing that throbs with consciousness in a center, or something else? Of course, it's something else. It's a system that has survival needs in the sense that if certain quantities are not kept within viable bounds, the system ceases to exist. It's a homeostatic system that has to satisfy, as it happens in the one we are working on now, three needs, which are not orthogonal but conflict with each other. There's a need to prioritize these different needs, which importantly means that they are categorical variables; they're qualitatively distinguished from each other. The way I think about it is that these are valenced; this is an intrinsically valenced mechanism in the sense that there is a goodness and badness from the point of view of the system, and the goodness and badness only applies to the system. What I'm referring to by valence here is that increasing free energy is bad for the system from its point of view, because the whole purpose of such a self-organizing system is to continue to exist. It's a matter of minimizing free energy across different categories, which are qualitatively distinguished from each other. How does the system minimize its free energy? It does so, first, by monitoring all three need states, in other words prioritizing them. Secondly, and more importantly, what it's using these factorized free energy quantities to do is to optimize its confidence in its current policies. A policy which is leading to increasing free energy, or, in terms of the formalism, increasing expected free energy.

[25:59] Mark Solms: That is a policy in which it loses confidence. And importantly, given that I'm talking about affect, it's a policy that is bad for the system: this is bad for me existentially, the consequences of my current policy unfolding before me. And so I reduce my confidence. I lower the precision in this policy and simultaneously, of course, increase the precision in the error signals issuing from it. On this basis it shifts to a new policy. Although we're talking about affect in an artificial sense, when I say it's valenced and qualitatively distinctive, that applies to its active as well as its sensory states. There would be no purpose in feeling that this is going well or badly for me unless by "this" you're referring not only to your current state but to what you can do about it. If I anthropomorphize or biologize it, it's affect applied to perception and cognition. And then, of course, learning flows from that, to come to your area of great interest, Richard. That's the basis upon which a very important form of learning can occur: namely, learning within your own lifetime, rather than learning by natural selection, stochastically, what worked and what didn't, at great expense to large numbers of individuals of that phenotype. This type of learning enables you to learn before it's too late and change your mind; in other words, adjust your policies and formulate new policies. It's a system-wide thing, Mike, but each component part of the system is, of course, playing its own role. But it couldn't play that role if it weren't for its place within the system as a whole. Maybe I'm answering your question too stupidly, because you guys are computer scientists and, as I keep telling you, it's not my field. But that's the way I think about it.
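To make the mechanism concrete, here is a minimal toy sketch in Python of the kind of architecture described above: three categorical needs, a simplified per-need "free energy" (squared deviation from a setpoint), and a confidence (precision) weight on the current policy that falls as free energy rises, triggering a shift of policy toward the most pressing need. All names and numbers are illustrative assumptions, not the project's actual formalism, which uses the full active inference machinery rather than this caricature.

```python
import random

# Toy homeostat, loosely inspired by the discussion above (illustrative
# assumptions throughout; not the actual formalism of the project).
NEEDS = ["energy", "temperature", "hydration"]
SETPOINT = 1.0    # ideal level for every need
DECAY = 0.04      # each need drifts from its setpoint every step
REPLENISH = 0.15  # acting on a need restores it

def free_energy(state):
    """Factorized 'free energy': one squared-error term per categorical need."""
    return {n: (SETPOINT - state[n]) ** 2 for n in NEEDS}

def act(state, policy):
    """A policy services one need; all needs decay, the serviced one is replenished."""
    new = {n: state[n] - DECAY + random.uniform(-0.01, 0.01) for n in NEEDS}
    new[policy] = min(SETPOINT, new[policy] + REPLENISH)
    return new

state = {n: 0.8 for n in NEEDS}
policy, confidence = "energy", 1.0

for t in range(30):
    before = sum(free_energy(state).values())
    state = act(state, policy)
    fe = free_energy(state)
    # Valence: rising total free energy is "bad for me from my point of
    # view", so the precision (confidence) of the current policy drops.
    confidence = max(0.0, min(1.0, confidence - 10 * (sum(fe.values()) - before)))
    if confidence < 0.5:
        # Losing confidence triggers a policy shift toward whichever need
        # is now most pressing (needs are prioritized, not averaged away).
        policy = max(fe, key=fe.get)
        confidence = 1.0
    print(f"t={t:02d} policy={policy:11s} confidence={confidence:.2f}")
```

Run over a few dozen steps, the toy system cycles between policies, abandoning each one as the needs it neglects become the dominant sources of free energy.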

[29:06] Richard Watson: Mike, can you remind me how you formulated the question?

[29:09] Michael Levin: When people see a particular outcome or functionality, the number one question they always ask is, where is this recorded? Naively, many expect it to be in the DNA somehow. Where it comes up for me all the time is shape and behavior. We'll make some sort of crazy-looking thing, or a Xenobot, or something else, and people say, where was that written down? I keep saying that no anatomy is ever written down in the DNA. They say, then where does it come from? I think I know what they're looking for. To some extent, our work actually has answered some of that in a much more traditional sense than the standard story, which is emergence: nothing's written down anywhere, there are local rules, lots of things obeying simple rules, and voila, something complex. We've actually found that there are bioelectrical patterns that literally presage and encode what's going to happen later. We've actually found some of these things. More fundamentally, there's this question of where is it, where is the thing encoded? People are sort of thinking of a traditional representation, some sort of data storage and then an interpreter that looks at it. Some of that does actually exist. More broadly, I wonder if we have frameworks for thinking about where some of these things are. What's a good answer to that question? It naturally comes up all the time. I always eventually end up leaning back on mathematics and sort of Platonic business: where's the distribution of primes encoded, that kind of stuff. But I want to know if there's a better way to think about it.

[31:13] Richard Watson: It's related to a pair of questions: where are the symbols, which is like asking where it is written down, and why do the symbols mean anything, which is asking how the symbols are grounded. There's a back and forth between those two things which is slippery and annoying. If you say too much about where it's written down (this is the gene that codes for this, this is the gene that codes for that, this is the neural correlate that generates this, etc.), then we end up with a sort of mechanistic explanation which doesn't explain the thing that we wanted to explain, which was: how did that have any meaning? How did that produce consciousness? What was it that was doing the interpretation? Was that some other machine that was looking at this machine, and all sorts of strange loops like that? The attempted answer to the question of where it is written down is to point to some sort of symbol system that records the information that describes how to do that, how it's written down, or what the meaning of it was. But we also need those symbols to be cashed out into, as Mark was saying, something active, connected to your senses and actuators, something that does something in the world that matters to your ability to maintain or sustain whatever behavior we were talking about, or your confidence in your understanding of the world, if we're taking a Fristonian position, or a change in policy. My feeling about that is that these are levels of description, that there are different levels of describing something. But they're not just descriptive; they're also causal. They're different levels of causal frames of reference. There's a causal process going on at one level of organization. You point to that causal process and say, look, it's doing this. Then someone asks, but why is it doing that? Where did the information come from? And you necessarily need to step down a level to say, well, because what it's made of made it do it; that's just the way those components work when you put them in interaction with one another. So why did it end up doing this holistic thing and not some other holistic thing? You end up with answers like, well, because those particular parts in this particular arrangement couldn't have done anything else: a mechanistic understanding. But then you haven't answered the question: why were those particular parts in that particular arrangement?

[34:55] Richard Watson: They could have been in a different arrangement and done something different. It's all swimming around, which is perhaps appropriate. My feeling is that to answer these questions, we need a framework where there are multiple levels of organization, each of which is causal, each of which has a mechanistic process going on at that level of description. There need to be multiple such levels, and they need to be connected to one another. If they were too connected, then the reductionist position would be sufficient: the higher-level processes would be entirely determined by the lower-level processes, and we wouldn't need to talk about the higher-level processes. They're not really a thing. Organisms aren't really a thing. Evolutionary units above the level of genes aren't really a thing. Minds aren't really a thing, et cetera. But we don't want that. We're thinking about higher levels of organization as causal processes in their own right when we think about them in terms of error-correcting codes and such. There are feedback mechanisms happening at that level of organization which are self-sustaining, which don't particularly depend on the level of organization below. We don't want to explain away these higher-level organizations, these higher-level causal processes. The higher-level causal processes are real in their own right at that level of description. But we don't want them to be disconnected either. There's this slippery in-between where we have higher-level causal processes which happen at that level of organization in their own right and are not entirely determined by the level below. But they're not entirely separate from the level below either, because we're interested in things like how the higher-level processes orchestrate the behaviors of the lower-level processes, and how the lower-level processes implement the machinery of the higher-level processes. There are connections between them, but they're not one-to-one. As you move between those scales, the level of indirectness increases. It's a bit like the layers of a deep neural network, where the level of description at one layer only has meaning insofar as it's self-consistent within that layer and communicates something appropriate to the layer above and the layer below. As you go deeper in those layers, the output-input feedback loop at the outermost level massively underdetermines what's happening in the deeper layers of the network. What's happening in the deeper layers isn't completely undetermined relative to the outer layers; it just gets more and more underdetermined as you go down levels of organization.

[38:41] Mark Solms: I doubt anybody could disagree with that. I know you're taking a position different from a narrow reductionist position, but what you're saying is, I think, demonstrably true. It has to be so. I know this cuts to the heart of what you've just been saying, and to the heart of some of what you're doing and have been doing with Mike in terms of levels of individuation. If we start at the level of the single cell: there are interactions occurring at the level of the cell, then their interactions at the level of the organ constituted by those cells, then at the level of the organ systems and at the level of the organism as a whole, and then all the way back down again, in both directions. If you think about it physiologically, it's patently obvious that what you've just said is so. Because I'm interested in consciousness, and in feeling, which I think is the most fundamental form of consciousness, I want to throw something into the mix: I do not think that it's a matter of hierarchical emergence. I don't think that's the right frame of reference for thinking about how mind emerges, mind in my subjective sense of there being something it is like to be the system. I don't think that it's a matter of first you have cells, then you have brains, then you have consciousness arising above the level of brains, such that at the highest levels of the emergence of the functional system of the nervous system you get, bingo, suddenly you have consciousness.

[41:10] Richard Watson: Suddenly the lights are on, yeah.

[41:13] Mark Solms: I think it's much more a matter of taking the subjective point of view, which is not to say that this is consciousness, but it's the prerequisite for consciousness: it can only ever be observed subjectively. So it's taking the viewpoint of the system upon its own states. And that's not a level. That is: what is it like to be the brain? What is it like to be the cortex? What is it like to be a pyramidal neuron, and beyond? This is a question that I've been exploring with Mike and Chris Fields in another forum. Because of your special expertise in this business of what constitutes an individual, I would love to know what your take is on that. Mike and Chris have blown my mind. What is your take on that, Richard?

[42:23] Richard Watson: That's a good question. I wish I had answers for you. I can say something about how not to do it, from the field of natural selection and evolutionary units. The conventional way that it's done, the Dawkins-esque way of approaching it, from Williams before him, is to think about the smallest replicating unit: the smallest unit that reproduces faithfully. The obvious problem with that view is that whilst it might be true that those small units get to replicate faithfully, they weren't the units we were interested in. We were interested in the higher-level organizations. The question is whether they were merely manifested by those genes, or whether they were orchestrating the behavior of those genes for the purposes of their higher-level persistence. Lots of people have tried to address this: if I put a bunch of replicators in a bag and select on the bag, in which circumstances can I reasonably call that collective a new evolutionary unit, and not just some sort of context in which the smaller evolutionary units were being selected? When is selection on the whole different from the sum of the selection on the parts? You can see people getting all bent out of shape in that area, in the same sorts of ways that they get bent out of shape about it in neuroscience and in developmental biology as well. I have at least one thing to say about that. If you want the effect of selection on the whole to not be decomposable into the sum of the selection on the parts, then the relationship between the character of the whole and the characters of the parts needs to be a nonlinearly separable function, like XOR or if-and-only-if. It can't just be a nonlinear function, like nonlinear synergy or diminishing returns. The relationship between the characters of the parts and the character of the whole can't be linear or additive, but it also can't be a merely nonlinear function that remains linearly separable; it has to be a nonlinearly separable function. That means there isn't any character that a part can have that can explain the fitness of the part. Instead, there can only be combinations of characters that collectives of parts have that can explain the fitness of the parts. To put it crudely, if you want to know what this input has to do to make the XOR output true, there isn't any value that this input can have, on its own, that makes the XOR output true. Only the combination of itself and the other input tells you what they need to do together to make the XOR output true. If the parts are going to get access to the fitness differences that are created by that interaction, then they have to control the interaction. They can't just be static things. They have to be things which interact and communicate with one another in order to figure out joint policies: if you're going to be plus one, I'll be minus one, and if you're going to be minus one, I'll be plus one. I need to know what you are, and you need to know what I am, and I need to respond to what you are, and you need to respond to what I am. We need to do this little dance in order for us to satisfy that non-decomposable product at the higher level of organization, which in evolutionary terms means there needs to be a developmental process.

[46:49] Richard Watson: The developmental process is where that communication happens between the parts, creating a whole which is a non-decomposable function of the parts. The fitness of the whole can't then be decomposed into the fitness contribution that this part made plus the fitness contribution that that part made, because if you can do that kind of decomposition, then the whole wasn't anything at all. So you have to have that developmental process that computes a collective phenotype from the collection of particulate phenotypes that you had, and that computation needs to compute a nonlinearly separable function, like a deep neural network does, not just a perceptron, not just a shallow network. Then you can have a character that properly belongs to the collective, one that can't be decomposed into the fitness contributions of each of the parts. That's something about what needs to happen, and the kind of machinery you need to have in order for it to happen. But once you've got that, you start to realize that it's reasonable to say the whole is telling the parts what to do now; it's not just that the parts have figured out how to make a whole, but that the whole is telling the parts what to do. A simple example in XOR: one way to calculate XOR with two hidden nodes is for hidden node one to calculate A-and-not-B and hidden node two to calculate B-and-not-A, and then the output can be read off from them. There are other ways to divide that problem between two hidden nodes. Choosing between one way of dividing it and the other requires making that decision, because the other way of doing it, for example, is: I'll calculate A-and-B, and you calculate neither-A-nor-B, and then we'll calculate XOR from that. There's no right way to do it. Which way you're going to do it isn't determined by the correctness of the output; the correctness of the output doesn't tell you which way to do it. Which way to do it is determined only by the complementarity of what the parts are doing with respect to one another. They can't discern the correctness of what they're doing with respect to the input-output behavior of the system as a whole. They're only correct with respect to their complementarity with the other parts. That complementarity is something that belongs to the whole. I think it's reasonable to say that the whole is orchestrating the parts to play the roles they need to play in order for the whole to do the function that the whole wanted to do. That responsiveness at that level of organization orchestrates the parts to say: you do A-and-not-B, and you do B-and-not-A, and then together that enables us to do this whole thing. So even in that simple example, there's enough there to say that a bottom-up way of thinking about what's going on doesn't explain enough about what's going on. Does that nibble at individuality? I'm not sure. It's at a very low level of description, but it's the kind of non-decomposability that we're reaching for when we talk about emergence, and that I think is the nub of it.
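As an editorial illustration (not code from the conversation), the sketch below spells out the two decompositions Richard describes, plus a mismatched pairing in which each hidden part is legitimate in some scheme but the pair is not complementary, so the whole fails. Correctness here belongs to the complementary pair, not to either part alone.

```python
# Two complementary ways of dividing XOR between two hidden "parts",
# and one non-complementary pairing, per the discussion above.

def xor_way_1(a, b):
    h1 = a and not b          # hidden part 1: "A and not B"
    h2 = b and not a          # hidden part 2: "B and not A"
    return h1 or h2           # the whole reads XOR off the pair

def xor_way_2(a, b):
    h1 = a and b              # hidden part 1: "both"
    h2 = (not a) and (not b)  # hidden part 2: "neither"
    return not (h1 or h2)     # XOR = not both and not neither

def mismatched(a, b):
    h1 = a and not b          # a part borrowed from way 1
    h2 = (not a) and (not b)  # a part borrowed from way 2
    return h1 or h2           # each part is fine in *some* scheme,
                              # but this pair is not complementary

for a in (False, True):
    for b in (False, True):
        target = a != b  # ground-truth XOR
        assert xor_way_1(a, b) == target
        assert xor_way_2(a, b) == target
        print(f"A={a!s:5} B={b!s:5} XOR={target!s:5} "
              f"mismatched={mismatched(a, b)!s:5}")
```

Both complementary schemes pass the assertions, while the mismatched pairing gets two of the four rows wrong: neither hidden part is right or wrong on its own, only relative to what the other part is doing.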

[51:17] Mark Solms: That's very, very interesting. An almost facetious thought occurred to me, which I will mention before coming to the more serious question. Given what you were saying about the fundamental part played in all of this by development, and by implication by learning, given the way that you're thinking about it: it's amusing that for us living systems, who are seeking to prevent the ravages of entropy, there has to be an arrow of time in order for us to do this negentropic work, if there's a learning process. That was the semi-facetious thought: in biological science, we have another derivation of the arrow of time, one which is in fact resisting the second law rather than derived from the second law. Everything you say makes perfect sense, and it makes formal sense. What you're saying makes perfect sense from cell to organ to organ system to organism. It also makes perfect sense at the level of at least the group. I don't know if there's work being done at the level of the species; I think the species might just be the beneficiary. But certainly at the level of a group of social animals, there are functions that are performed which are directed to the survival of the individual members of the group that cannot be performed by them individually. They require the group; that's unpacking the mechanism you've just described into a prosaic biological situation. But now to the question of consciousness: does that mean the group is an individual, first of all, and why do we stop at the group and not at the level of the species? And is there a meaningful and important way in which the group is not an individual in the sense that the individual phenotype is an individual? That seems to be an immediate question that flows from what you're saying. I have this intuition that there is not something it is like to be a group in the way that there is something it is like to be me.

[54:17] Richard Watson: Can I say that back? Your feeling is that there isn't something that it is like to be a group, in the same way that there is something that it is like to be you.

[54:26] Mark Solms: I would emphasize feeling, because it doesn't make logical sense. It doesn't make mechanistic sense. So it is just an intuition, or rather I should call it a prejudice. This is what I mean when I say that Mike and Chris have been blowing my mind: I find it very hard to make that leap.

[54:44] Richard Watson: I'm sure your cells say the same thing about you.

[54:52] Michael Levin: This has been hitting me more and more, and I know that this isn't the kind of issue that we can settle here, but I think that many folks who think about this emphasize the "what's it like to be" in an experiential sense, in other words, the input side of that equation. We have this thing called epiphenomenalism, where some people think, yes, you do have real experiences, but those experiences don't actually cause anything. But there isn't, as far as I'm aware, anyone who's proposed the opposite view. In other words, there's a real asymmetry in the emphasis that people put on the feeling side of it. I'm very interested in the action side. In particular, rather than first person or third person, I've been focusing on second person, as in starting out by trying to control systems like cells and smaller things, trying to say to each other, "you should do this." For that action side, I don't even know what the vocabulary is, but I would be interested in the flip side of what a lot of people mean by qualia. Qualia are on the receiving side: I feel, it's like this. I mean the action side. What's it like to do? What's it like to know that you have active causal power in the world, and to know what your effectors are?

[56:18] Richard Watson: I didn't say anything about the relationship between individuality, natural selection, and learning in that long little monologue that I did. But I think individuality is much more to do with whether you're the kind of system which can hold history that affects your future, which is to say, learning. Are you the kind of system that can be affected by your past in such a way that you hold history that affects how you behave in the future? Not in the same way that I put the chair in that corner of the room and it stays in that corner of the room and then that affects what can sit on it later; it has to be in a non-decomposable way as well, where the whole is more than the sum of the parts. So there's a related question, Mike, that occurred to me from what you just said, in contrast to the question, what is it like to be? If you think about it in terms of: is there something that it is like to be a system that has history like that, that has memory like that? It's more like the question is, what is it like to have been a system like that, rather than what is it like to be, right now, in this moment? There's the backwards-in-time question: what is it like to have been a system like that, to have had those experiences, to have had those learnings, to have had that history? What is it like to be that? And then looking forward: how does being a system with that history affect what you do next, in the cognitive light cone way of thinking about it?

[58:01] Mark Solms: That's fantastic.

