
Conversation 1 between Gunnar Babcock, Daniel McShea, Mark Solms, and Michael Levin.

A round-table discussion with Gunnar Babcock, Daniel McShea, Mark Solms, and Michael Levin exploring affective consciousness, determinism, agency, and how feeling might arise in biological and artificial systems.

Show Notes

This is a ~1-hour conversation on topics of consciousness, AI, causation, affect, evolution, and philosophy between Gunnar Babcock (https://cals.cornell.edu/gunnar-babcock), Daniel McShea (https://scholars.duke.edu/person/dmcshea), Mark Solms (https://scholar.google.com/citations?user=vD4p8rQAAAAJ&hl=en), and me.

CHAPTERS:

(00:00) Introductions and Affective Consciousness

(04:40) Where Consciousness Begins

(18:42) Determinism, Agency, Scaling

(34:16) Engineering Artificial Feeling Machines

(42:29) Emergent Minds in Machines

(49:54) Sorting Algorithms' Hidden Agency

(56:17) Testing Feeling in Machines

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Mark Solms: Hello, good to meet you.

[00:02] Daniel McShea: Yep, I'm Dan.

[00:03] Mark Solms: Great to see you again, Mike.

[00:05] Michael Levin: Great to see you. Have you guys never met?

[00:10] Daniel McShea: No, I've met Mark through papers virtually. He doesn't know it.

[00:20] Michael Levin: Maybe everybody say a couple of words about what you're interested in.

[00:28] Daniel McShea: I'm an evolutionary biologist slash paleontologist who moved over to philosophy of biology about 15 years ago. Since then, I've been interested in a bunch of things, including evolutionary trends, laws in evolution, laws in biology generally, and in the past 10 plus years, goal directedness and purpose.

[00:57] Gunnar Babcock: I'm Gunnar. I've been working with Dan for a while and am a philosopher of biology, philosopher by training.

[01:08] Mark Solms: I'm a neuroscientist who, in his youth, decided also to train in psychoanalysis, a very peculiar combination. I've been interested in the fundamentals of feelings and how they are bound up with consciousness. I think that feeling is the fundamental form of consciousness, affective feeling. And I've been interested in the role of the brain stem in generating raw feeling and in relatively simple homeostatic mechanisms underlying those feelings. I emphasize relatively simple because this opens the way to a mechanistic understanding of how feeling arises in nature. Right at the moment, I'm in Leeds, not at home, and I've met my one-week-old grandson, so I have mush on the brain.

[02:17] Daniel McShea: They're adorable at one week, I bet. And at two weeks too. Most everything you just said I already know because I've been looking in on your work from time to time. One of the reasons that I wanted to have this meeting is that what you have to say plays exactly into a line I've been pushing for decades on the relationship between feeling or all of affect and behavior and thought. I've stayed away from the word consciousness because I didn't know what it was till I read your stuff. Now I know what it is. But the basic theme has been that all thought, speech, and action are driven by affective states of various kinds, from wanting, caring, preferring, intending — all those I think of as affective states. Correct me if necessary. And consciousness by itself, in the way it's conventionally understood, motivates nothing. I have to revise my language now to put it in your terms, because what you mean by consciousness is close to what I mean by feeling and affect.

[03:45] Mark Solms: Yes, all of that sounds right to me.

[03:50] Daniel McShea: Oh, well, okay, great, see you. I'm kidding.

[03:57] Mark Solms: Especially in the case of a one-week-old baby.

[04:03] Daniel McShea: And all these feelings and motivations, these affective states, get highly specified as we age, to the point where I want my coffee just so, on the left-hand side of the coffee table, with just that much milk and not more and not less, and so on. And that's very different from, wah, I'm thirsty. But phenomenologically, I don't think it's any different from that.

[04:40] Gunnar Babcock: One thing I'm curious about, since we have both Mike and Mark here: it does seem like where each of you would like to draw the line for where consciousness extends is different. I would be curious to hear your takes on that, because Mike, your account is much more liberal. Where do you end up on that debate?

[05:08] Michael Levin: Yeah, Mark, please go for it.

[05:12] Mark Solms: As Mike has heard me say before, my encounter with his thinking has been deeply alarming because I was already on the margins of neuroscience when I claimed that the mechanism of consciousness is not cortical, that it's far more basic than we think, far more primitive. I've had a hard time convincing the majority of my colleagues that the fundamental architecture for consciousness can be found in the upper brainstem of the vertebrate. I've never thought that it's exclusive to vertebrates, but that was already a very radical claim as far as my colleagues are concerned. When I encountered Mike's arguments in favour of the view that it's more elemental than that, I have to say that it provoked a resistance in me, partly because I've had a hard enough time of it already, arguing that vertebrate brainstems are enough. But also partly because I'm rather wary of the slippery slope to panpsychism. So the question is, where exactly do we draw the line? I don't think emotional resistances are a good basis for drawing the line, so I've managed to get over that. But I don't find it easy to stipulate what the decisive transition is. And of course, it's not a transition in the form of, on Friday, you're unconscious and on Saturday, you're conscious. So what the decisive factor is in that gradient from unconscious to conscious living organisms is a very interesting question. Let's hear what Mike has to say on that score, and then I'll happily give my own view, such as it is.

[07:37] Michael Levin: The first thing I would say is that I would challenge the premise of the question. I don't believe there's a sharp line at all. I think the right way to talk about this is what kind and how much. The whole business of trying to find a line leads very quickly to various reductio situations that you cannot work yourself out of. What I think we need are stories of scaling and trying to understand the larger kinds of things that we are more willing to attribute consciousness to, and what the simpler versions of that might be. I think the major thing that is preventing us is failure of imagination. In many of these things, we have our own evolutionary firmware that leads us to expect certain kinds of things as signatures of consciousness that we're familiar with through our life. We need to really expand, and science is the way to do it, our ability to envision consciousness in simpler forms and to develop formalisms that would say, okay, here are things that certainly don't look like consciousness because we're not used to them, but here is a process by which they scale up, and this starts to look more and more like what we're used to. For that reason, we don't particularly work on consciousness per se, but we've been working on extremely minimal systems. We're talking about things that are gene regulatory networks, not even cells, never mind cells, but just sets of molecules that turn each other up or down. Things like sorting algorithms, deterministic, very small, transparent systems. We're finding all sorts of phenomena and handles that seem to be well addressed by concepts in cognitive and behavioral science. My claim is that it goes all the way to the bottom. It's just how do you tell a useful story from least action principles all the way up to human metacognitive self-aware thought and how you tell that scaling story. That's my take.

[10:03] Daniel McShea: Mike, I'm with you in spirit. As I think you know, I'm a connectivist. I want to think in broad terms and how much and what kind lines up with me perfectly. But can you point to some system, one billiard ball bumping into another, which has negligible consciousness? I won't say zero.

[10:27] Michael Levin: Two important things. One has to do with the actual spectrum. I think that least-action principles are the basement of what we mean by goal-directed effort and activity, which is strongly involved in affect and feeling. But the lowest form of this is what physicists will call least action. I asked Chris Fields once, because this is outside my expertise, I said, is it possible to have a universe in which there were no least-action laws, in which things did not try to seek some sort of final outcome? He said it would have to be a universe in which nothing ever happened, one that is perfectly static. That tells me that, at least in our world, I don't believe there is a zero. I think there are incredibly minimal kinds of things where things are only smart enough to follow a gradient and that's it. They don't do delayed gratification. They don't do memory. Those things are the basement in our universe. But I don't think it's zero because I have a very engineering approach to this. To me, the question for all of this is what can you depend on in terms of autonomous action? If you have a human, you can depend on quite a lot. If you have a dog, you can depend on less. If you have a paramecium or a yeast, you can depend on a few other things. But even if what you have is something much simpler, let's say you're building a roller coaster, you as the engineer have to work really hard to get it up the hill. You don't need to do anything to get it back down. You can depend on it to do that because all the thing knows how to do is follow the gradient, minimize its energy, and get back down. So to me, that's not zero. That's extremely minimal. It's not a brilliant conversationalist, but it's already on the spectrum. And then from there, we can just talk about more sophisticated ways to navigate gradients, and you get to things that you would like to have with you on a long trip and have a relationship with. I think that's what the bottom is. Something else I should say that I think is important is this slide to panpsychism. I'm not worried about it because I don't think that we have a good intuition for what kinds of things we should avoid sliding into. I have no idea. And so I am happy to let the data tell us where we're going. One thing that's often brought up about panpsychism that's a problem is the combination problem. You have basic properties of components, and how does that end up adding up to a larger mind? I think we can talk about that. I have a pretty weird view about it: it's a different twist on that problem, but it's something that has to be talked about. I think that's what makes a lot of people uncomfortable with it. But I think it's a solvable problem.

[13:29] Gunnar Babcock: Mark, I'd be curious to hear your response.

[13:37] Mark Solms: No, I wouldn't say arguing so much as discussing this question is really interesting, because it challenges my prejudices. It's always a good thing to recognize you have prejudices. But let me have a go. I would say the Hamiltonian, this least-action principle that Mike is describing, let's start there. Anything that follows a least-action gradient has some tiny amount of consciousness. I find it difficult to agree with that. I can see why you might say there's some sort of intentionality. I can see why there's some sort of value system. In other words, least action is good, more action is bad. So I can see where you're coming from when you say that that's already starting to head in the direction of affect. In other words, there's something, some kind of proto-intentionality and some sort of proto-affectivity in the form of a value system, a goodness and a badness. But the part that I think is missing is unpredictability. The fact that it is entirely predictable what will happen. In the case of an object following the least action principle (and for this reason I hesitate to even use the word agent), you can bet your bottom dollar on what it's going to do. The outcome is 100% predetermined.

[15:37] Daniel McShea: No, go ahead.

[15:38] Mark Solms: I think that a fundamental property of consciousness, the reason why the object, now deserving the name agent, needs to register how well or badly it's doing, is so that it can change its mind. I think this is the fundamental adaptive advantage of being able to register how well or badly you're doing: you're then not obliged to continue pursuing the course of action that you're currently pursuing. You can register, this is going badly for me. I'm going to now do something differently. That's not to say that there's no determinism anymore. Far from it. There are constraints. The constraints are provided by the value system. But there's a degree of uncertainty that's been introduced. So the agent is now trying to solve a problem within its value system in which it's not entirely confident that it's doing the right thing, palpating how well or badly it's doing, and then changing its mind. In other words, this underwrites the possibility of choice. And I think that something rather big happens at that point. And so I would start drawing the line, or the gray area that we're speaking about, there. I would locate it there rather than at the level of the least action principle.

[17:19] Michael Levin: So I think that's really critical. Go ahead, Dan.

[17:22] Mark Solms: Go ahead.

[17:23] Daniel McShea: There's something that Gunnar and I can help with in this conversation. It's not going to get us all the way from Mike to Mark, but it's going to get us a small step. We think what's important in these least action principle interactions is hierarchy: the little thing being guided by the bigger thing above it. Now, if it's a ball rolling down a tube, there's no freedom of the sort that Mark wants. It's completely predictable all the way down, but it's following the least action principle. There is a hierarchical relationship. The tube is big, the ball is small within the tube. It's downward causation. The tube guides the ball, but there's no freedom at the lower level, no independence. In contrast, consider a bacterium swimming up a food gradient or an electron moving through an electric field. There are options here. There's some degree of freedom at the lower level that is consistent with, I won't say least action anymore, because these things don't move in straight lines, but a principle of action in which there's a higher level system guiding a lower level particle with some degree of freedom. That doesn't get us all the way to everything you said, Mark, but it moves us in your direction from what Mike said.

[18:42] Michael Levin: Both of those are important. One thing I didn't say before that speaks to Mark's point is that when people ask me what the bottom of this spectrum is, I usually pick two things. Least action is one of them. The second thing is exactly what you said, which is some kind of indeterminism, such that the local pushes and pulls summed over the system are not sufficient to say what it's going to do. That is, you need to understand the history of it to some degree, and you need to understand its internal perspective to understand what it's going to do. In my framework, this is the size of the cognitive light cone that you need to consider in order to interact with the system. The basement version of that is quantum indeterminacy. It's not a good type of free will because it's random and who wants to be random, but it's the very bottom level of it. What happens after that is a scaling. You get into your gray area where if you're a paramecium, you don't have lengthy deliberative chains of how I'm going to act differently in the future. What you do have is a bunch of mechanisms that, as Dan just said, span multiple scales and help you tame that underlying noise and randomness into something that does begin to be causally linked to the things that were good or bad last time. We could tell a pretty good biochemical story about how unpredictability and noise at the bottom level can be harnessed into the kinds of things that, together with these gradient-following things, begin to be exactly the sorts of things that you're talking about. That happens in extremely primitive organisms. Life is very good at doing that kind of thing. There's something else that I want to point out, which is much weirder even than this indeterminacy business. In our study of sorting algorithms, bubble sort, these simple computational algorithms to sort a string of numbers, they've been studied for many decades. They are completely deterministic. There is no indeterminacy. They are transparent: six or seven lines of code. There is no new biology to be found. When we examine the ability of those things to react to novel situations, which people have never tested before, we find some very interesting behaviors, propensities, problem-solving capacities, and weird side quests that they go on that are not in the algorithms themselves. One thing I'm very interested in is the appearance of not just emergent complexity and emergent unpredictability, but actually emergent goal-directedness and problem-solving competency in very simple deterministic systems. As of the last year, I'm not even sure you need indeterminism for any of this. Some of this stuff can arise in extremely minimal systems that look fully deterministic to us, because we've bought into the story that the algorithm tells the whole tale. It doesn't actually tell the story of what the system is capable of any more than the rules of biochemistry tell the story of what the mind is capable of. I'm completely in agreement with you both that hierarchy and unpredictability are critical to this. You can scale it very slowly and gradually all the way to the end.

[22:14] Gunnar Babcock: I would say that I am taking the traditional compatibilist line here and thinking that the issue of agency or goal directedness is just separate and distinct from the question of indeterminacy. So whether some system is predictable or not seems like an epistemological question. I'm very unpredictable in all sorts of ways, though I would point at a lot of psychological evidence that suggests I'm probably far more predictable than I'd like to believe. But you can have perfectly deterministic systems, like the ones you're pointing at, Mike, that seem as though they're perfectly capable of exhibiting the type of agency that's relevant. There's absolutely no conflict between being a deterministic system and exhibiting agency; the two are entirely compatible with each other. So I see the issues as being orthogonal to the agency question. Traditional Dan Dennett arguments.

[23:32] Michael Levin: It does come up, and this is something that we should discuss, because Mark, I know Mark in particular has some interest in this too, where it comes up a lot is in 'machines.' Because the assumption people are happy with is the compatibilist version for life forms, or at least for advanced life forms, and they say, "Yes, there are these two levels and it's fine. Yes, you're a chemical machine, but don't worry, it's fine that at high level it's all good," but suddenly when it comes to 'machines,' people say, "well, that's it. The algorithm and the materials tell the story, and machines only do what you tell them to do. They certainly can't have this or that property." I think that's where the rubber hits the road on some of these things, that if you take seriously this compatibilist view, then you have to examine these very simple, low-end, deterministic-looking things. You might find, as we are now finding, that the compatibilist story actually goes all the way down. That the machine does do the things you wanted it to do via the construction and the algorithm, but also does some other stuff. This other stuff is not just unpredictability and complexity. That's cheap and easy and everybody knows about that. I actually think that's not just what you get. You also get goal directedness, you get problem-solving and, who knows what else you get in terms of consciousness, and I have no idea, but I think that's where people become very resistant to that compatibilist idea.

[25:00] Gunnar Babcock: I'm very much with you, Mark and Mike, on that. I think that once you accept that compatibilist position, all sorts of deterministic machines or algorithms are suddenly going to be candidates for agency, and the question of indeterminacy versus determinacy is not the relevant question when you're thinking about whether or not something's an agent. I'm curious, Mark, to hear what you would say on this, because this is a mini debate that Dan and I have sometimes. Dan definitely sees the affective profile in creatures more like us as being key to exhibiting some of the robust agency, particularly the higher-level stuff. Dan, step in here and correct me if I'm misrepresenting your position. I might be more sympathetic to where you're coming from, Mike. I'm more inclined to say really robust, affective creatures like us are capable of more unpredictable stuff. It's going to be a lot harder to say what I'm going to do next than what my Roomba is going to do next. I'm more sympathetic to where you're coming from, Mike, and think it's just different scaling, different levels of agency there. I've got a much bigger bag of tricks given the affective states that I have, but fundamentally, it's not a difference in kind.

[26:49] Mark Solms: Thinking about this in terms of scalability, there's a worry that goes something like this. If you believe that consciousness, the emergence of consciousness in our universe, happened at a certain point in time or there's a transitional phase in which consciousness evolves — which is what I'm arguing — unlike Mike, Mike's saying it was there with the big bang in a very simple form. It's there with the indeterminacy principle. What I'm leading up to is that if you believe, as I do, that it evolved, moreover that it evolved probably somewhat later than life evolved — in other words, it's a biological phenomenon and it's not a phenomenon that applies to all biological life — then if you take that evolutionary naturalist view, it's implicit in that view that it evolved out of things that were already there. So indeterminacy was already there. And the other things that Mike listed before he remembered to include indeterminacy were also there. But those are raw ingredients. There comes some point at which those raw ingredients combine, they scale up, in a way that introduces something more than just those component parts. There comes a point where it starts to become meaningful to speak of consciousness. I'm making a very banal point. You could say you can't speak of liquids when you're only looking at individual atoms. Liquids are made of individual atoms, but the state of their arrangement only becomes a question once there are enough of them. Something like that seems to me to be called for here. I agree that the raw ingredients are there. Consciousness is not a miracle. It's something that emerges out of some combination of components that pre-existed. But the question becomes: what sort of transition occurs that starts to make it meaningful to speak of what it's like to be that object, that particle, that agent? And it's not a matter of unpredictability. It's a matter of how the object or agent deals with unpredictability. It's what sort of tools it has for continuing to exist as a particle, as a thing separate from its environment, with some self-organizing properties, utilizing these emergent tools to navigate uncertainty in a way that its predecessors could not. I know what I'm saying is vague. I think it's necessarily vague. Thanks, Dan.

[30:35] Daniel McShea: No, it's not. I don't think it's vague. There's a distinction that Gunnar and I make. It also comes from David Hume: the distinction between cognition and passion, reason and passion in his language. In this scheme of thinking, reason, calculation, computation has no motive force whatsoever. All motive force comes from passion, translated today as affect. What you're talking about, Mark, sounds like the buildup of cognitive complexity, of reasoning complexity, none of it with any motive force, but highly important when it comes to figuring out what the organism is going to do with its passions. Because all of the things it wants, all the oomph, the action activation that's driving it, is going to be executed by that cognitive machinery. It's going to produce a planaria if there's very little cognitive machinery, and it's going to produce us if there's extraordinarily complicated cognitive machinery. But again, with this separation, we're not asking about the boundaries of affect anymore. We're asking about the boundaries of cognition, of reason. Tell me, Mark, how you respond to that.

[32:01] Mark Solms: That sounds right to me. It's a very simple response. Forgive me, I'm not a philosopher, and I'm astonished to hear that was Hume's position and gratified to hear it.

[32:20] Daniel McShea: I'm getting it right. He's my real philosopher's concern.

[32:25] Gunnar Babcock: That's my read of Hume, but you know Hume better than I do. Mark, some of the work you cite and work with, is it Merker's work? Some of that is almost indisputable, really nice empirical evidence that suggests exactly this Humean line is right. And I think that your view, Mark, very much aligns with that. The dualism of Palmer is problematic for all sorts of reasons. But that transition you were talking about, to something like a liquid state, fits very well with the story that Dan and I would want to tell about a non-reductive materialistic perspective on consciousness. But I'm always curious: how much of this do you think hangs on the affective state that seems to be most readily identified in biological phenomena? I tend to hang with Mike on this one. I don't see it as problematic to find something at least akin to agency. How much that is synonymous with consciousness, I don't know. That's an interesting question. Finding that in machines that may not have anything akin to an affective state doesn't seem deeply problematic to me. I see affect as being one of the primary drivers of it in this Humean view. But I don't see why that couldn't be found in the right machine.

[34:16] Mark Solms: I would like in a few minutes to come back to Mike and ask him at what point, although the raw ingredients are there, it becomes meaningful to speak of a conscious agent. But what you've just said, Gunnar, it links up with why I enjoy and appreciate conversations like this. Because it exposes you to your prejudices. That was a prejudice that I subscribed to, for want of a nicer way of putting it: that artificial intelligence had nothing to do with the mind. I was not in the least bit interested in artificial intelligence. I was of the view that consciousness is a biological phenomenon that evolved at a certain point in the history of life. It's not synonymous with all life and certainly didn't pre-exist life. But once consciousness evolved, and to the extent we can discern the mechanism whereby it evolved, you can engineer it artificially. There is no reason why that mechanism can't be engineered. The mechanism evolved for very good reasons, and that's not the only way it can be deployed thereafter. So the prejudice I'm referring to is that machine consciousness is an illusion or a sci-fi story. I no longer believe that at all. In fact, as Mike knows, I'm deeply involved in a project, and have been for a few years, where we are trying to engineer an artificial consciousness. We're trying to engineer an agent that instantiates the functionality that we find in the vertebrate upper brain stem. I'm fully on board with that now. This is why it's important to have conversations that test your assumptions and enable you to get over your prejudices. It's probably the most exciting thing I've ever done, that project I'm working on now. Ten years ago, I would have pooh-poohed it. As I said, I want to come back to Mike about that transitional thing. I just want to insert something here, which is that in our attempts to engineer an artificial consciousness — a computer that gives a damn — it has proven rather difficult. Mike, to go back to your starting point, where you're saying anything that follows the Hamiltonian principle of least action has a little bit of consciousness: I'm not even sure that our artificial agent, which we've been laboring on for years to get it to display the functionality that I would take as reasonable evidence that it's using feelings to make its choices, is doing so. It's proving rather difficult to do that. I think that's the same point from a different angle: I'm skeptical that proto-consciousness is present at that level, or that the word consciousness deserves to be used at those more elementary levels. I've gotten ahead of us. Dan, you were going to say something, and maybe we can come back to Mike's answer to those other questions.

[38:34] Daniel McShea: I'm going to set up a fresh confrontation between you and Mike on this.

[38:42] Mark Solms: I don't even like to confront Mike.

[38:46] Daniel McShea: AI, in Gunnar's and my view, has no feeling whatsoever because feeling is consistent with the least action principle. It's oomph. The only oomph that any AI has is the voltage difference between the prongs of the thing where you plug it in. If you're going to create something motivated, it's not enough to create something cognitively smart, and AI is incredibly cognitively smart. The computational machinery is just pattern recognition, but it's really good at it. Its affective profile amounts to that voltage difference, near as I can tell. I want to hear both your reactions to that.

[39:29] Mark Solms: Shall I go, Mike, or do you want to go? Go ahead. For me, unless I'm misunderstanding you, Dan, the crucial thing there is whether or not you understand energy only in physical energy terms. I think that an informational energy is what we're talking about — the oomph when we speak of the uncertainty of the system over the question of what to do next. In other words, the principle by which it exercises choice. It has to do with informational energy, with what we can call variational free energy. I don't agree with the premise that we're talking about voltage differentials.

[40:28] Daniel McShea: You could cast my decision-making in terms of diffusion of the free energy gradient from the sugar that I had to eat, and say, that's the equivalent of the voltage difference, Dan; there's no motivation going on there beyond that. But of course that's wrong, because there are intermediate sources of oomph between the sugar and my behavior, namely my desire to get up and go for a walk this afternoon, powered by the sugar, but that's upstream. Downstream at the level of me, there's something there. So in order for AI to be demonstrably feeling and caring and preferring and all that stuff that's wrapped up in consciousness, there need to be these intermediate states which are powering it in its own directions, which are sometimes different from the voltage difference. I don't know enough about AI. Maybe one or both of you can convince me that it has those intermediate states.

[41:22] Mark Solms: This interleaves with so many of the issues that we are now busy discussing. Let me say in a very simple way that precisely the sort of problem that you are talking about now, Dan, is why I'm loath to attribute consciousness to an organism that doesn't have a nervous system. Because it's that higher level that you're talking about that is introduced by a nervous system, which orchestrates what's going on in terms of the chemical gradients of many of its organ systems. There's this informational gradient that then regulates what's going on in terms of the other energy gradients. And it's that thing that I think those of us who are skeptical are talking about: we're skeptical about the mechanism in question being in the raw ingredients, although the conscious agent is made up of those raw ingredients. Over to you, Mike.

[42:29] Michael Levin: I've made notes and I'm going to come back to the first thing you asked, Mark. Let me say something about this AI business. I preface this in two ways. I'm not arguing that there's some sort of weird magic that we're not going to be able to unravel. I think there absolutely is a research program here and I'll briefly describe it. The second thing is I want to be very careful with what I say because I still haven't sorted this out. There are some ethical dilemmas here because, and Mark and I have talked about this before, to the extent that any of this is right, I think it probably leads to advances in machines or created things that we then have to be concerned about on a moral level, in terms of the capacity to suffer. So it's still a little unclear to me what I should and shouldn't be saying, but let's just put it this way: what Dan just said about the AI — I agree with you. I don't believe that any AI is conscious because of the algorithms that it's following. That computational story is not why I think it may or may not be conscious, and I agree with that. However, do you know the old Magritte painting with a pipe that says in French, "this is not a pipe"? Jeremy Guay, who's my amazing graphic artist, I asked him to make a thing that has a picture of a Turing machine, and it says on the bottom, in French, "this is not a Turing machine." Here's the problem I think we're making. We somehow have bought into the idea that the limitations of our formal models are limitations of the actual thing. When you have a device that somebody wrote an algorithm for, and people say, "I write these language models. I write the code. It's linear algebra. I know what it does," my point is you don't even know what bubble sort does. If you don't know what bubble sort does, you don't know what this thing does. I agree with you that there's a bifurcation here between intelligence, language competency, and consciousness. We've now split those things apart. I don't think you conclude that it's conscious because of the things it says or because of what's in the algorithm, but we are seeing even very simple algorithms do things that are not in the algorithm. I think these are what Whitehead called ingressions, and I think they're ingressions from a space of patterns where the boring ones are facts about prime numbers and truths of number theory and things like this. But I think there are other patterns that are much higher-agency things that we normally associate with certain kinds of minds. When you make these things — whether synthetic biology, normal embryos, AIs, or some kind of hybrid cyborg construct — what you're really doing is making pointers that get out more than you put in. We are seeing this again and again: you've made something, and what you've really made is a pointer into a space of patterns, which we do not understand at all. We call some of them emergence, but that just means you don't know where it came from, and you're making a catalog of these things. I want to be very careful about concluding what these things have and don't have by focusing on the material and the algorithm. That would be making the same mistake you were just describing, Dan: saying, well, the laws of chemistry, there's nothing agential there, so therefore you're just a machine. We know that's not a good way to think about it. We have to be fearless and consider that that line of thinking goes much further than we're used to thinking.
That's why we have to be careful, because the properties of these things cannot in any obvious way be discerned from the materials, the composition, the algorithm, or what you think it's doing.

[46:02] Michael Levin: Even very simple algorithms do interesting things. They have certain kinds of goal-directed behaviors and competencies that are not in the algorithm. If we get surprised there, I'm sure we will be surprised when we make these more complex things that have been trained on human data and so on. That's one thing: I think we have to be very careful not to make that assumption. The other thing I want to say is to go back to Mark's point about when it is meaningful to speak of these things. I think that is exactly the right question. My view on this is very engineering: what I take all these claims to be are really protocol claims. What tools does it make sense to use to interact optimally with a system? Is it psychoanalysis? Is it behavioral science? Is it cybernetics, control theory, rewiring? Which tools are the appropriate tools? For this, I think about the paradox of the heap. You have a pile of sand, and you keep taking away sand; when does it stop being a heap? My view of that spectrum is this. I don't want to worry about when it's a heap and when it's not a heap. What I do want to worry about is if you tell me that you have a heap and you want it moved, I need to know, am I bringing tweezers, a spoon, a shovel, a bulldozer, dynamite—what are the appropriate tools? Now we can ask a very empirical question. For these very simplistic things for which consciousness is not appropriate, what are the tools that we can deploy on this? One tool that we have is the visualization of what it's like to be that thing. I would claim that works for other humans, maybe for other animals, but it becomes an increasingly unreliable guide to simpler or more exotic forms. The fact that we cannot imagine what it's like to be a Roomba or a magnet or anything else is guaranteed by the fact that it's pretty hard for us to imagine being any mind at all different from our own, even non-neurotypical humans; it's just very hard for us. That's not surprising. We can use specific tools and ask, can we take the concepts that you, Karl, and other people in neuroscience use, and apply them to these other very simple systems? It becomes an empirical question. What we're finding is that whenever we try it, we discover new capabilities, new research programs. That is the judge. Am I claiming that psychoanalysis is applicable to gene regulatory networks? No. But training certainly is. Something that we just put up as a preprint: measures of causal emergence, like IIT-style metrics, apply very well to gene regulatory networks. They change with training. You can use all these things on these very simple systems. Our imagination is not a good guide, but porting the tools and seeing how far you get gives us a good payoff.

[49:36] Daniel McShea: Your caution is well articulated and it chastens me a bit to hear it. So thank you for that. How do you tell the degree to which something cares as opposed to the degree to which it's just computing? What's the experiment? What's the test? You name the tool.

[49:54] Michael Levin: I'll tell you a very simple experiment that we've done. It addresses this common critique that machines do what you program them to do. I'll give you a very simple example. You have arrays of jumbled up numbers, randomized, and you have a sorting algorithm. This algorithm is very simple. It's just a few lines of code describing how to rearrange the numbers so that the whole thing ends up in order, monotonically increasing. What you can do is plot the movement of that process in its behavior space, meaning how sorted the array is. You start from all kinds of different starting points, but in the end, there's one point where everything is sorted, and they all reliably get to that point. Now you have behavior in a space, and you can start asking some questions. What are the competencies of this thing in that process? I'll give you two examples. One thing you can do to test goal-directed behaviors and intelligence is to give it barriers. You put in a barrier and you see how good this thing is at overcoming the barrier. One trick that some systems know how to do is delayed gratification. That means you've got a barrier, and in order to overcome the barrier, you have to temporarily get further away from your goal. Take two magnets with a piece of wood in the middle: one magnet is not going to go around, because it's too dumb to go against the gradient to recoup some gains later on. It doesn't delay gratification. What does a sorting algorithm do? The standard sorting algorithm does not have any metric for asking how am I doing. It's not in there. It assumes a reliable substrate, because standard computing assumes that your hardware does what the code says it will do. So it has no ability to say, did my action succeed? Am I doing well? Do I need to do something? There's nothing in there like that. It assumes everything's fine. So you put a barrier between it and its goal, and the barrier is a broken number, a number that the hardware refuses to move. I say I want to swap the five and the seven, and you issue the command, but the seven won't move. It's stuck. It's broken. That prevents you from going where you need to go the way that you normally would go. It turns out the sorting algorithms, despite having no extra steps for this at all, do delayed gratification. If they come across a broken number, they will backtrack and de-sort the array. The sortedness actually goes down. They go against the gradient, something that simple magnets don't do. They go against the gradient, they go around, and then they get to where they need to go. Now that capacity, that delayed gratification, is nowhere in the actual algorithm. There are no explicit provisions for that. That is a simple example of a very simplistic kind of competency, but it's something, and it's something that a lot of systems don't do, and you didn't have to put it in, and you didn't know it was going to be there from the steps that you did have. That's the first thing. The second thing is something else we found: there's the thing that you made it do, and there's the thing that it did on its own that you did not want it to do.
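
To make the barrier experiment concrete, here is a minimal, illustrative sketch in Python. It is not the lab's actual code: the move rule (a mobile value hops leftward past a frozen block), the function names, and the choice of frozen indices are all assumptions made for illustration. It sets up an array with "broken" positions whose values refuse to move and tracks an adjacent-pair sortedness metric over time, so you can look for the temporary de-sorting described above.

```python
import random

def sortedness(arr):
    """Fraction of adjacent pairs already in non-decreasing order."""
    ok = [arr[i] <= arr[i + 1] for i in range(len(arr) - 1)]
    return sum(ok) / len(ok)

def cell_view_pass(arr, frozen):
    """One pass of a toy cell-view bubble sort with 'broken' cells.

    `frozen` holds the indices of values the simulated hardware refuses to
    move.  A mobile value whose left neighbour is frozen instead swaps with
    the nearest mobile cell beyond the frozen block; that long-range swap
    can temporarily lower adjacent-pair sortedness even though every swap
    still removes at least one inversion overall.
    """
    moved = False
    for i in range(1, len(arr)):
        if i in frozen:
            continue                   # this value is stuck; it never moves
        j = i - 1
        while j >= 0 and j in frozen:  # hop leftward over the frozen block
            j -= 1
        if j >= 0 and arr[j] > arr[i]:
            arr[j], arr[i] = arr[i], arr[j]
            moved = True
    return moved

if __name__ == "__main__":
    random.seed(0)
    arr = random.sample(range(20), 20)
    frozen = {5, 6, 12}                # indices of the 'broken' numbers
    trace = [sortedness(arr)]
    while cell_view_pass(arr, frozen):
        trace.append(sortedness(arr))
    print([round(s, 2) for s in trace])  # look for temporary dips en route
```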

[53:12] Daniel McShea: No, I'm just singing along, that's all.

[53:16] Michael Levin: There's something else that you can do with this: you can take that sorting algorithm and you can put it into each number. Instead of a master, a central controller that's shuffling numbers, you can put the same algorithm in the actual numbers. The five wants to be next to the six and between the six and the four, and so on. When you do that, one thing you can do is make chimeric strings. You can make strings of numbers where half of them are following one algorithm, half of them are following some other algorithm, and you mix them up. It still works. Everything gets to where it needs to go because the cells ultimately agree on where they want to be. Everything works. But you can do an interesting thing. You can ask, along the way, what is the tendency for cells with similar algorithms (we call them algotypes) to cluster together? Initially, that's zero because it's completely random. At the end, the probability of being next to your own type or some other type is 50%, because you have to get sorted and the assignment is random. If you're going to sort the numbers, you have no guarantee of who you're sitting next to. But in between, the clustering goes up and then comes back down. They have this weird extra propensity to hang out together with others of that type. The physics of their world rips them apart, because in the end, the algorithm insists that you are going to be sorted, and that will rip up any of these clusters that form. In the middle, you get this weird thing where cells with the same algotype tend to cluster together. Now, nowhere in the algorithm does it say, What type am I? What type is my neighbor? Let's go sit next to my neighbor. None of that is in there. This tendency to hang out with your own kind is completely emergent here. When I first saw it, it was a weird existential moment, because it's the story of all of us in the physical universe. You can't escape the laws of physics. Eventually, entropy grinds you down, but in between your start and your end point, you get to do some things that are not inconsistent with the physics of your world. Everything is totally deterministic and consistent. You get to have these other side quests that are neither explained nor forbidden by the physics and not at all obvious to any observer until they know how to look and where to look. These are just two very simple examples of taking a system where you don't expect any of this and using creative, different ways to ask what this thing is doing, and finding out that it has competencies you didn't know about and it has some tendencies that, while consistent with the physics, are not in the physics. We need to have a science of this, of looking at what else these systems are doing besides what we told them to do.
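
In the same illustrative spirit, here is a minimal harness for the chimeric-string idea, again a sketch rather than the published model: every number carries an "algotype," two toy deterministic local rules (a bubble-sort-style single swap and an insertion-sort-style walk) both drive the array toward sorted order, and both sortedness and the fraction of same-algotype neighbours are tracked per pass. The two rules, the names, and the parameters are assumptions; the point is the measurement, watching for transient same-type clustering above the roughly 50% chance baseline, not these particular rules.

```python
import random

def sortedness(values):
    """Fraction of adjacent pairs already in non-decreasing order."""
    ok = [values[i] <= values[i + 1] for i in range(len(values) - 1)]
    return sum(ok) / len(ok)

def same_type_neighbours(types):
    """Fraction of adjacent pairs sharing an algotype (chance is ~0.5 here)."""
    same = [types[i] == types[i + 1] for i in range(len(types) - 1)]
    return sum(same) / len(same)

def chimeric_pass(cells):
    """One left-to-right pass of a chimeric cell-view sort.

    `cells` is a list of (value, algotype) tuples; a swap moves the value
    together with its algotype, so algotypes travel with 'their' numbers.
    Type 'A' cells do one bubble-sort swap with a larger left neighbour;
    type 'B' cells do an insertion-sort walk all the way to their place.
    Either way, the cell on the right of an adjacent inversion fixes it,
    so the mixed array still sorts completely.
    """
    moved = False
    for i in range(1, len(cells)):
        if cells[i - 1][0] <= cells[i][0]:
            continue                              # locally in order
        if cells[i][1] == "A":                    # bubble-sort cell
            cells[i - 1], cells[i] = cells[i], cells[i - 1]
        else:                                     # insertion-sort cell
            j = i
            while j > 0 and cells[j - 1][0] > cells[j][0]:
                cells[j - 1], cells[j] = cells[j], cells[j - 1]
                j -= 1
        moved = True
    return moved

if __name__ == "__main__":
    random.seed(1)
    values = random.sample(range(30), 30)
    types = [random.choice("AB") for _ in values]  # chimeric mix of algotypes
    cells = list(zip(values, types))
    while True:
        vs = [v for v, _ in cells]
        ts = [t for _, t in cells]
        print(f"sortedness={sortedness(vs):.2f}  "
              f"same-type neighbours={same_type_neighbours(ts):.2f}")
        if not chimeric_pass(cells):
            break
```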

[56:17] Mark Solms: I'm mindful that I have to end in 4 minutes because I have a meeting with the funder of the project I was talking about earlier. I very much hope that we will have further conversations; it's obvious there are many things we haven't resolved. There are many things we can't resolve, but I think there are many things we've been talking about that we can make further progress on if we talk more than we have in the hour we've had. I want to squeeze in one point under the wire, if I may. First of all, I agree that we need some sort of objective test. By objective, I mean it needs to deal with prejudice. In other words, something along the lines of a Turing test, with all of its problems, where you can't look to see whether you're dealing with a machine or a creature; you have to judge it by its outputs. The question then becomes what sorts of outputs are convincing evidence that the agent is using the sorts of functionality that we are looking for. What Mike has just said...

[57:34] Daniel McShea: I have a hypothesis to run by you, and I'll do it by e-mail.

[57:38] Mark Solms: Okay, thank you.

[57:40] Daniel McShea: I'll send it to everybody about a way to go with that test.

[57:44] Mark Solms: But what Mike has been saying, and it grows out of conversations I've had with you before, has extended my thinking along these lines. I had initially said to you, in my first encounters with you, that I would only be persuaded that an agent is conscious if it is able to solve novel problems. And I would have to add, novel problems which are consequential to its own existence. It's not just a novel problem. It's a novel problem that matters to it, that it gives a damn about, that has consequences for itself as an agent, because of the necessarily subjective nature of feeling. What you've just described, which we've talked about before, is that there's a goal that's written into the algorithm, which is, say, for example, "sort the numbers." And then there's a novel problem: how do I get there? And the agent comes up with a novel solution. This you've persuaded me of. What I'm not persuaded of is that in a situation such as that, it's used feeling in order to get to the novel solution. This is how you've forced me to think more deeply on matters like this in terms of functional criteria. I would like to see evidence that the agent is using this functionality. And I don't think it's something magical. I think this is the crucial thing we need to get our heads around: what the causal mechanistic powers of feeling are that are not there prior to the emergence of feeling. I've told you before, Mike, there's a paradigm we use with zebrafish called hedonic place preference behaviour, where they hang out on one side of the tank because that's where the food is delivered. Then you deliver cocaine or morphine or amphetamine and even nicotine to the other side of the tank, and they gravitate there, prefer to be there, and just dart back for food. The explanation for that surely is that there is a hedonic effect, something pleasurable: it feels good to be on that side of the tank, because those substances are not doing them any good. So the feeling is somehow having some causal consequences for the behaviour of the fish. I think something along those lines is what I would like to see from an artificial agent, that kind of dissociation of the feeling's causal power from the causal power of the end goal that is written into the thing, which is "survive."

[1:00:49] Daniel McShea: Conflicting feelings. Yes, yep.

[1:00:52] Mark Solms: I really look forward to further exchanges with you chaps. And Mike, as always, thank you for introducing me to endlessly interesting people. I don't know where you find them. Thank you. Great to meet you. Cheers, guys. Bye-bye.

[1:01:12] Michael Levin: I have to go in a couple of minutes. I've taken some notes. Next time, I want to pick up where he just left off, because Mark focused on finding novel solutions, but actually my example wasn't even that. It wasn't finding novel solutions to a problem that we gave it. It's finding a new problem that it's dealing with that we never gave it. And that I think is a different story. So we can start on it next time.

