
Iain McGilchrist, Richard Watson, and Mike Levin #3

Neuroscientist Iain McGilchrist, futurist Richard Watson, and biologist Mike Levin hold a working meeting on machines, life, agency, and the continuum of being, examining mechanistic models of humans, AI, souls, emergence, and what sustains meaningful experience.


Show Notes

A working meeting discussing machines, life, agency, souls, and the continuum of being.

Iain McGilchrist - https://channelmcgilchrist.com/

Richard Watson - https://www.richardawatson.com/

The poem I was referring to half-way through is this one:

It doesn’t interest me what you do for a living. I want to know what you ache for and if you dare to dream of meeting your heart’s longing.

It doesn’t interest me how old you are. I want to know if you will risk looking like a fool for love, for your dream, for the adventure of being alive.

It doesn’t interest me what planets are squaring your moon. I want to know if you have touched the centre of your own sorrow, if you have been opened by life’s betrayals or have become shrivelled and closed from fear of further pain. I want to know if you can sit with pain, mine or your own, without moving to hide it, or fade it, or fix it.

I want to know if you can be with joy, mine or your own; if you can dance with wildness and let the ecstasy fill you to the tips of your fingers and toes without cautioning us to be careful, be realistic, remember the limitations of being human.

It doesn’t interest me if the story you are telling me is true. I want to know if you can disappoint another to be true to yourself. If you can bear the accusation of betrayal and not betray your own soul. If you can be faithless and therefore trustworthy.

I want to know if you can see Beauty even when it is not pretty every day. And if you can source your own life from its presence.

I want to know if you can live with failure, yours and mine, and still stand at the edge of the lake and shout to the silver of the full moon, ‘Yes.’

It doesn’t interest me to know where you live or how much money you have. I want to know if you can get up after the night of grief and despair, weary and bruised to the bone, and do what needs to be done to feed the children.

It doesn’t interest me who you know or how you came to be here. I want to know if you will stand in the centre of the fire with me and not shrink back.

It doesn’t interest me where or what or with whom you have studied. I want to know what sustains you from the inside when all else falls away. I want to know if you can be alone with yourself and if you truly like the company you keep in the empty moments.

By Oriah Mountain Dreamer

CHAPTERS:

(00:02) Evolution, Values, Human Limits

(01:22) Critiquing Mechanistic Human Models

(10:06) Diverse Minds and Machines

(20:36) Identity, Teleology, Automation

(29:51) Animals, AI, and Souls

(42:03) Experience, Harmony, Shared Meaning

(52:27) Wholes, Emergence, Active Matter

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:02] Michael Levin: Some gems.

[00:05] Iain McGilchrist: Would you want to start talking or should we just wait?

[00:11] Michael Levin: Let's start. I think he's heard my introduction to this before, so we can start.

[00:25] Iain McGilchrist: Go ahead and ask me again what you need to ask.

[00:29] Michael Levin: To summarize what I'd love to hear you talk about: my claim is that the process of evolution is not guaranteed to optimize for most, if not all, of the things that we value: happiness, intelligence, meaning, all of these things. Therefore the condition in which we find ourselves, meaning the various limitations of our bodies and of our minds, all these different features, is, I think, fundamentally up for improvement. As one changes various aspects of the physiology, integrates designed devices, changes the biology, we will change. So at some point, what is a human? And what are your views on what we should not change? What is essential?

[01:16] Iain McGilchrist: Yes. Hey, Richard. Yes.

[01:18] Richard Watson: Hello. Sorry to keep you.

[01:20] Iain McGilchrist: Hi, Richard.

[01:21] Richard Watson: Hi.

[01:22] Iain McGilchrist: Happy Easter, by the way. We've just started. I was going to comment on something that Michael said, which I think is very important: the way we're evolving, what we're capable of doing, and whether evolution takes any note of the things that we would actually value. I think there are several things to say about that. One is to do with the model that we have of a human, and what effect that has on us as well as on the research. Another is to do with the whole business of teleology, of direction: you're effectively saying that there is no direction to evolution. That's a very common position.

[02:21] Michael Levin: No, I'm not sure that's true at all.

[02:24] Iain McGilchrist: Oh, okay.

[02:25] Michael Levin: If there is a direction, and I think there are many smaller directions, I'm not convinced that direction is aligned, to use the current terminology, with the things that we would like it to be aligned with.

[02:38] Iain McGilchrist: I think the biggest problem for us is, as I've always found and increasingly find and have critiqued at length, the sole use of the model of the mechanism or machine in describing what a human being is. And I think that at the same time as we believe, although the belief may not be well founded, that we're creating machines that are more like humans, humans are indisputably being forced to be a little more like machines in order to interact all the time with what is increasingly a machine at the other end of the process, not another human being. So I think it has its impact for good or ill, and I think largely for ill. It can help us answer certain very small-scale questions, because in a complex system, if you home in narrowly enough, you can always find an area where a mechanism is actually quite a useful way of thinking; but it's not a good model of the organism as a whole.

This also entails that when we think about human beings, we think of them largely in terms of a certain kind of cognition that we liken to something a computer might be able to do. But pretty much everything that matters in our lives is not at all like this. When you listen to an astounding piece of music, when your daughter gets married, when you simply turn outwards to the beauty of nature, in almost anything we do that is not rigorously tied to expounding something in words, we're always bringing a whole host of things to bear, and those are the things that give meaning to life.

There is a tendency to say there can't be meaning and there can't be values. I'm not saying this is your position, but it is a position one frequently encounters. My feeling is that there's a problem with it, because we decided in the late 17th century that we were going to pursue a kind of science in which ideas of purpose or direction or value were ruled out at the base. And so it's a bit of a petitio principii to spend a couple of hundred years examining the world and then say we can't find any purpose or values in it, because we ruled them out at the start of the process. Being aware of that, and of the unspoken force of these things that we value, which can usefully be thought of in terms of the great Platonic virtues of goodness, beauty, and truth: these are not bad things to orientate your life by. And I'm not sure that whatever it is we're doing at the moment is improving them.

I also believe that there is enormous value in what looks very negative from our point of view, because the very idea of negation is to us something bad. But in fact, negation is how anything comes into being: by being defined, sequestered from something else. And the business of not doing and not thinking, and indeed of silence, is absolutely critical to every important human endeavour. I'm absolutely convinced of this. What we're doing is driving out that space in which the other things can flourish. And machines don't help us with this.

[06:14] Iain McGilchrist: In many ways, they distract us. They substitute for that fruitful silence in which we can be creative and see deeply into the nature of things; they substitute something more familiar, more trivial. So I think those are some of the problems.

And on teleology, which is another angle, whether we're going anywhere that would do us any good, I just have a simple observation, which is: why is life at all? And why is it going in the direction that it seems to be going, in terms of evolution? I think I said last time that consciousness may not be something that emanates from life, but is actually there in the cosmos, is an ontological primitive. If that's the case, then what life brings is not consciousness. What it seems to do is enormously increase responsiveness. That responsiveness is to these values. I can't tell whether a lump of rock is valuing things; I don't think it can value. Creatures can value. Some creatures can value rather narrowly; a single cell can value certain things; but we can value more than any other creatures. So something is happening in evolution that is at the cost of survival, because we are fragile, short-lived, vulnerable creatures compared with many far longer-lived ancestors. There are single examples of actinobacteria in the depths of the ocean that are themselves around a million years old. A redwood forest lives for thousands and thousands of years, compared with the human being's measly 70 years. We're obviously not doing terribly well at surviving. Something is driving this, and I think it is responsiveness. It is that we are responsive to these deep things, and response has in it this idea of responsibility and moral engagement with the world.

I know this is nothing like what is normally talked about in the worlds in which either of you operates, but I do think it's important that creating texts, including from a computer that can spew out text, is taking us further and further away from the creative, connective, reverberative, resonant nature of human experience. I was trying to compress quite a lot into a short space.

[09:57] Richard Watson: I'm totally with you, Iain.

[10:00] Iain McGilchrist: Are you?

[10:01] Richard Watson: Yeah.

Iain McGilchrist: That's wonderful. Yeah. What do you think, Mike?

[10:06] Michael Levin: I'm in on the part where I think that the things we're interested in, including consciousness and mind, predate life. I agree with that. I think it is a primitive. And I think what life is very good at is scaling it up. I've tried to formalize this notion of a cognitive light cone, which is the spatio-temporal boundary of the biggest thing you can care about: the biggest goal that you, as a cognitive system, are capable of pursuing, from the little local goals of bacteria to humans potentially having planetary and wider-scale goals, including, uniquely, the first ability to have goals that are bigger than your lifespan. If you're a goldfish, all your goals are achievable because they're smaller than your expected lifespan, so it's fine. But if you're a human, many of your goals are fundamentally unachievable, and so there could be various psychological pressures there. I'm completely with you on all of that.

I see the machine thing a little differently. I think goals are essential, and I think that we are essentially goal-driven observers with values. I just see a real spectrum: if we look down the evolutionary tree, and even in our own bodies, one can start very slowly and gradually replacing various things. We've got our wheelchairs and our glasses, which to a primitive natural human may look like you're engineered to the gills, but obviously that's not the limit. You're brushing your teeth and doing all these things to extend your natural state, which is quite different from where we are now. I'm very interested in this question. I think it's inevitable that, because we are physical beings in large part, we are going to make some machines that are other kinds of minds. I don't think they're like us at all. Just because we can talk to them doesn't mean they're like us. The question of whether or not they're dangerous doesn't hinge on whether they're like us: they can be completely unlike us and also be very dangerous, or not. I'm focused on this diverse intelligence idea, where there are many other types of minds. Some of them are very different from ours. There are pros and cons to relating to them, but there are many different types of entities out there.
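
Levin describes the "cognitive light cone" here as a formalization: the spatio-temporal boundary of the biggest goal an agent can pursue, with humans uniquely holding goals that outlast their own lives. The sketch below is an editorial illustration of that comparison only, not Levin's actual formalism; the class names, fields, and numbers are all hypothetical, chosen to make the goldfish-versus-human contrast concrete.

```python
# Hypothetical sketch of the "cognitive light cone" comparison described above.
# Each goal occupies a region of space and time; a goal whose time horizon
# exceeds the agent's lifespan is, for that agent, fundamentally unachievable.

from dataclasses import dataclass


@dataclass
class Goal:
    description: str
    spatial_extent_m: float  # how far out in space the goal reaches (metres)
    time_horizon_yr: float   # how far into the future it reaches (years)


@dataclass
class Agent:
    name: str
    lifespan_yr: float
    goals: list[Goal]

    def goals_beyond_lifespan(self) -> list[Goal]:
        """Goals whose time horizon outlasts the agent itself."""
        return [g for g in self.goals if g.time_horizon_yr > self.lifespan_yr]


goldfish = Agent("goldfish", lifespan_yr=10.0,
                 goals=[Goal("find food in the tank", 1.0, 0.01)])

human = Agent("human", lifespan_yr=80.0,
              goals=[Goal("raise a family", 1e4, 30.0),
                     Goal("leave a livable planet", 1e7, 200.0)])

for agent in (goldfish, human):
    beyond = agent.goals_beyond_lifespan()
    print(f"{agent.name}: {len(beyond)} goal(s) outlast the agent ->",
          [g.description for g in beyond])
```

On these made-up numbers, the goldfish's goals all fit inside its expected lifespan, while the human holds goals that cannot be completed within one life, which is the asymmetry Levin points to.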

[13:09] Iain McGilchrist: You're not talking about living creatures here. You're talking about artificial so-called intelligence networks.

[13:20] Michael Levin: On this whole business, I've tried to literally draw out a space of possible beings that you can make. Yes, there are AIs now that you wouldn't call living, for many reasons, but in our lab we also have hybrids: some neurons driving a little robot, and they really do care about what happens, but their body is now different. They're not driving this kind of body; they're driving something else. That robot may not even be in three-dimensional space. It may be in physiological space. They may live in a completely different problem space. And partly, there are parts of them that you would call living and parts that you would call not living, as you might with a crab shell: there are components that have mechanical properties that aren't actually alive, but the whole thing you would call alive.

I think these crisp categories used to be quite useful when the technology wasn't there to blur them. But now we're starting to see that the possibilities are such that I'm not sure these very binary categories can be maintained in the space of possible beings. I think we're going to be able to explore all kinds of hybrids and cyborgs and every possible combination that has a novel body and mind. And we're going to have to figure out ways to relate to them; I'm very interested also in the ethics aspect of it: how do you relate to them? Because in the olden days, it was pretty easy. You come and you knock on it. If you hear a metallic, clangy sound, that tells you everything you need to know: it came out of a factory, it's pretty boring, you can do whatever you want to it, and that's fine. If you hear a soft thud, then you say you'd better be nice to it, because it's a naturally evolved creature. But that's not going to do for us in the coming decades. That's not going to work anymore. I don't think those categories were ever any good, but now they're definitely not going to be usable. So we have to come up with new ways of relating that are not just based on what something looks like and how it got here.

[15:30] Richard Watson: Not just in the sense of what it's made out of, but also the behavioural properties. I was looking at my towel rail the other day, thinking that it's made out of metal, trying to imagine it as the vibrating atoms in the lattice, connected with one another, resonating in lockstep in a way that keeps all the bonds tidy, and trying to think of it as an active thing. Then I was thinking about a machine in the classical sense, which has a causal topology at a completely different scale: this part's pushing that part, that part's pushing this part. Classical machines are generally built in such a way that the causal scale you're interested in at the machine level is completely disconnected from the causal dynamics at the molecular level. I don't want what this part is made out of to matter to the machine. I can make this arm or this lever or this cog out of anything, as long as it behaves like an arm or a lever or a cog. I shouldn't have to care whether it's made out of steel or iron or brass or whatever.

The thing that's different about organic systems is how well connected those different dynamical scales are: they are connected with all of the scales in between. We have a causal scale at the scale of the organism, where there are parts and systems interacting with one another, this part pushes that part and that part pushes the other part, creating homeostatic cycles at that level that are causally self-contained. But it's really close to the level below, where the level below shows through and interacts with it, in a way that if you push it a bit or you stress it a bit, it begins to do something different. It changes into a different causal story at the higher level, because the parts are giving way, the parts are squirming, the parts are changing. In a truly organic system, a living system, this filters down through all of the levels, right down to the molecular components; we can change our gene expression by thinking about it. We can go from a causal level where something's happening at the scale of our bodies to something that's happening at the atomic scale. And those causal scales are connected with one another, upwards and downwards.

Usually when we interact with a machine, we feel ethically safe in turning it off or taking it apart, because when you look inside, it's just some parts. It's just this part pushing another part. Once you get down a couple of levels, you take this apart and that apart, and then it's just material. That's just a cog. There's no point looking inside the cog: until you go down to the atomic scale, there's nothing inside the cog. There's no connection between that causal scale and all the others.

The thing that is a bit different about the AIs emerging now is that they have quite a bit of causal depth. They're deep learning systems; there's a clue in the name. There's quite a bit of squirm, quite a bit of parts inside parts inside parts. They are harmonized with higher-level structures that are meaningful to us. They are using words and ideas that are in tune with us and synchronized with us. But they are not connected all the way down like we are. I do think that makes them a bit more dangerous than other kinds of machines and other kinds of diverse intelligences.

[20:21] Iain McGilchrist: No, go on.

[20:22] Richard Watson: No, because we're likely to think that it's like us, because it looks like us on the surface. And I don't mean that it has a nose like us, but that it has those few layers of causal structure that make us say, oh, that's like me.

[20:36] Iain McGilchrist: I think we often misconceive the nature of the danger: that it's something these creatures will do to us in a willful way. In fact, they are dangerous simply because people will mistake themselves for these machines, and the machines for themselves, because their whole way of thinking about what we're doing and who we are has become so narrow. Children are taught from a very early age that we're machines. An amazing Royal Institution lecture to children begins by saying: "It's wonderful, take a look at one another. This is marvellous, because you're all just complicated machines." That seems like a harmless remark in the world in which we move. But packed into it is something longer than the Encyclopaedia Britannica about how we conceive of what life is, what we're doing here, and what our goal should be.

Even the phrase "goal-directed", which Mike was using, as everybody does, captures only a part of what we do. It's what the left hemisphere, which is designed to have a goal and go straight forward and get it, does. There are things to which we are attracted: not so much that we're pushed towards a goal, knowing the steps and taking them, but things that we can't entirely account for that are powerfully attractive. Therefore something like a final cause is operating, some idea of what could be, which is just a potential that we're not able to specify exactly, but we know there's something there in that area and we're drawn towards realizing it. That is quite a different idea from being a goal-directed being. It complicates the idea of what a cause is, because, famously, Aristotle had four kinds of causes, and two of them have been removed. One is the formal cause and the other is the final cause: the thing towards which whatever it is was designed. We've been left simply with the pushing and shoving kind of causation. That's what a machine is. But with an organism, you need to understand the whole before you can understand the parts, and you need to understand the parts before you can understand the whole. I know that's a paradox, but I believe that when you get close to the truth in these areas, the paradox is what you find. To be able to say what a spleen is, you have to know what a mammal is that has a spleen, and then you can understand it.

[23:20] Iain McGilchrist: We're going all the time backwards and forwards between the whole and its parts. Whereas the kind of thinking that is mechanical goes in one direction, from the bottom up: it says we can do this, that has a knock-on effect on that and produces that. What I'm trying to say is that there's nothing wrong with it, but it's a very limited way of thinking about what we're dealing with, which is useful for some purposes. Because it's so useful in making us powerful, that bit of us that just seeks power, more money, greed, whatever it is, is gratified by it and can't let go of it. That may turn out to be dangerous. Not, as I say, because machines may turn against us. They might. But in a certain curious sense, they're already doing so.

I find that my life has become vastly more difficult over the last four or five years. One can say COVID, and in this country one can say Brexit, and so on. But talking to people of all ages, everyone is finding the same thing. It's to do with the fact that more and more of everything we have to do in daily life has become automated. I know that's a million miles away from the really exciting, imaginative uses of artificial intelligence that we're talking about. But the fact is that all our lives are deteriorating in front of our eyes because of this business: we can no longer get an intelligent answer from a person. Everything has been delegated to a machine, and the machine can't understand the context. It can't understand any ramifications. It can't understand anything that's implicit. That is having a tangible, depressing, exhausting effect on the entire population of the West. I think it's one of the things that is already playing into why we're sick. Let me put that on the table.

[26:07] Richard Watson: Yeah, I completely agree again. Go ahead, Mike.

[26:18] Michael Levin: I definitely agree again with the first part. That story of drilling down to the parts, which we can do, and seeing all the little cogs and things that are inside ourselves, literally little cogs and things, and saying, look, you're nothing but a machine: I agree that is a very pernicious story. I think there are other such stories; we've talked with Richard many times about the standard dog-eat-dog view of evolution, and about the business around free will. All of these, I think, are very pernicious stories for the human experience.

But I have a weird, I won't say solution, but I think the answer lies in the opposite direction. Long term, I don't know that we will make it out of this. I think we're in a local minimum. I think things are trending downwards in these respects, as you said. But if we make it out of it, the way to make it out is actually to go in the opposite direction and really embrace this diverse intelligence field. What I see in a lot of discussions about this is that people will say these things are just like us, and therefore all this good stuff; or they're not at all like us, and therefore all this bad stuff. We have to embrace the idea that we are not the measure of all things, despite our tendency to look at everything through our own lens. As you say, and both of you have said this, it's extremely dangerous. It is extremely dangerous to misunderstand your interaction partner in every interaction: the good ones, the bad, and the adversarial ones. If you have a fundamental misunderstanding of what you are interacting with, you are not going to do well.

I think the answer to this is to really get comfortable with it. I hope the people of the future, and kids are not bad at this natively, will have the intuitive understanding that we can relate to many things, and that many of them are just not like us at all. That's okay. We don't need to assume that they must be like us because they speak like us. That used to be true: it used to be that the only things that talked were things that were like us, but that's not going to be true in the future. Your tea kettle will have opinions about how much caffeine you should have. Everything in your life will talk, but that doesn't mean they are like us. If we truly embrace the lessons of diverse intelligence, we will figure out that there are many minds, and many of them fail in different ways than we do. We confabulate and make things up; they will have different failure modes, and you just need to know them, much as we do with our various animals; this will just scale that up. If we bite down on the idea that there will be things that mimic aspects of our behavior but are just not like us at all, in the end we will come to grips with that, and then we will have more productive interactions with each other, with them, with everything else. This ability to only look at things through our own lens is really hurting us here. Exactly as you said, not because they're going to do anything to us, but because we are too myopic to realize that not everything is exactly like us.

[29:51] Iain McGilchrist: I noticed a slippage in what you said, in that you were looking for examples of intelligences other than our own. You mentioned that ghastly kettle, which I shall put straight in the bin if it speaks to me even once about my caffeine intake. You also mentioned animals. One of the things I feel very strongly about, and I'm not a person to pick up the fashions of the moment, but one word that I do think has become commoner and is important is "anthropocentric": the view that we are very special. No doubt about it, we have qualities and characteristics that other animals don't. But there is a problem with the idea that there's any comparison between a machine's ability to have what I insist is not intelligence, which would require understanding, which would mean having feelings, appreciating history and context, having a body, and knowing that you're going to die, and the more limited intelligence of an animal. The animal comes out of this comparison very roughly, because we prize particularly the kind of cognitive processing a machine can do, think that's what's special about humans, and note that these animals can't do it to the same extent. But I'd like to say that there's an awful lot going on in the minds of animals. Research over the last 30 years has shown the capacities of animals to understand things, to think, to make calculations, but also to feel things, to honour things, to have rituals. They are truly intelligent beings in a way I don't think the machine is, however suave its overview of Wikipedia and however quickly it can produce 300 words on what Iain McGilchrist is about in his latest book. It wasn't a bad stab, but it's not intelligence. So I wanted to try to preserve a distinction there, because I think it's important, between different kinds of living intelligence. I don't think you can then do a segue and go, "but these machines are just rather like that." I don't think they are.

[32:39] Richard Watson: Would that be aligned with the sentiment that there are more shared values, and more appropriate compassion, between me and a cockroach than between me and ChatGPT?

[33:01] Iain McGilchrist: Yes, there is.

[33:05] Richard Watson: When you stress a cockroach, that's the same kind of stress that I feel when I'm stressed. It's cockroach stress, not human stress, but there's some shared value there that there isn't between me and the AI.

[33:24] Iain McGilchrist: I think that's right. And that's giving it a hard test, because a cockroach is not the example most people would want to pick, and we can never get inside them and know how stressed they are. But I believe there is a continuum with the feelings that we have, certainly in many animals. I think we ought to be careful about what we assume about animals, and what we do with animals, because we do have a responsibility, again. That word brings with it something about a relationship, a two-way relationship.

[34:04] Richard Watson: So there's something about the cockroach, or the other living thing, that might be enormously limited in its cognitive light cone compared to me, but built of the same stuff. And by that I don't mean that it's organic; I mean that it has the same causal structures at the deep levels that I do. When I look at myself in a mirror, I see something very like me: something that appears to have the same causal structures that I do. And if I were to watch a video of myself, then again I would see something very like me, but not an obvious reflection of me; it's a little bit delayed or lagged. AIs are able to make reflections of us, and not just reflections of one person, but of humankind. But there isn't another soul in it. It's just a reflection of us. Reflections of us might be useful, but they're not another being. The cockroach really does have another thing in it. It's also a reflection of me, because it's from the same tree of life. But the depth of the folds involved in my relationship with the cockroach goes much, much deeper than the depth of the folds involved between me and an AI that's just a reflection a few layers deep.

[36:01] Iain McGilchrist: I'm reminded of another insect, good old Drosophila, and Barbara McClintock's revelation that some meaning is going on here: that the whole organism is able to react to parts of itself that it knows are not working, repair them, and do so in a way that it may never have been prepared for, either by heredity or by its own experience. Odd things like this that we now know are going on all over the place, this apparently intelligent decision-making coming from the whole and going back to the part, seem to me an important part of what we're talking about when we're talking about a living thing. And it's only a part of it. I'm going to have a campaign to reintroduce the concept of the soul, of the meaning of which I haven't any idea, but I have lectured several times on its indispensability. If you've got a spare hour, I can show you how there is no way you can do without this word. It won't be translated into emotion, intelligence, cognition, will, or anything else. There's something there which is experiential. That is the difference. It's not just a technical difference; it's a vast difference, which is why I'm slightly concerned about the slippage that can be made in this area between a living being and a mechanism, a machine. The machine model is useful, but only if we're wise enough to know how to use it and when not to use it, as with all tools. We've got amazing power to alter the world without any noticeable recent increase in wisdom. In fact, on my wisdom counter, the thing is sagging towards zero.

[38:11] Michael Levin: The slippage is essential, because as we look down toward the origin of life and the simpler forms, I would like to think that the paramecium has a soul, in our parlance here. But it's awfully close to the kinds of things that molecular biologists are going to be able to make using fairly machine-like interventions. One can imagine replacing some parts of it. So I think that aspect of slippage is unavoidable. I completely agree with you that what we call AI today does not share the things that we're looking for here; I don't think it has that. However, I have a lot of trouble thinking that evolution has a monopoly on creating things that do matter. As for these beings we've been talking about: even though we don't have any purely artificial ones today that match the description, I find it very hard to believe that only the process of evolution can create them.

[39:31] Richard Watson: Well, evolution didn't create those either.

[39:34] Michael Levin: Yes, that's a whole other thing. I agree with that too. Typically, when people say, "this is an organism and the machine is never going to be one," what they mean is that it's some sort of natural product of this tree of life. I really can't believe that is the only way these things can ever come to be. I also like what you both said about having a shared causal structure and things like that. Here's what I think is essential to be shared. I don't think it's being on the same tree of life, because otherwise we couldn't relate to aliens. I think what needs to be shared is the existential struggle. I think that's what we want in common.

Somebody asked me this once: if you were going to go live on Mars, or off in a cave somewhere, what would you want with you? You don't want a Roomba with you; that's not going to help. You want a human experience. Well, I'm not sure I want a human experience, necessarily. I think I could get along with an alien that didn't have a human experience. What I do think we need to share is that basic existential struggle, that autopoiesis, where you constructed yourself from your bootstraps at the beginning, from scratch, from parts. You didn't know ahead of time, as you were coming into this world, what you were, what your parts are, where the boundary between you and the world is, what your effectors are; all of that had to be self-constructed. You are in danger of disappearing at any moment. These are the sorts of things, this, for lack of a better terminology, existential struggle, which current machines and robots don't have. They're told from day one: this is your body, here's the border between you and the outside world, this is what you're going to do; and you get all the energy you want. All of these things are completely different from us. So that's where I would pin it. It's more about knowing that we are both facing the same kind of fundamental existential problem in the world, of figuring out who and what we are, and where we begin and where we end. And from that, I think we can build rich, fruitful, ethical relationships with things that have that origin story, even if they are nothing like us.

[42:00] Richard Watson: Interesting.

[42:03] Iain McGilchrist: Wouldn't they have to have experience, though, Mike? What you said is that the machine knows all this stuff because it's been told it, yes. And that is absolutely non-experiential. What you're saying about a human being is that we learn everything through experience. Let me qualify that, because we know that there's a lot that is inherited in some way, and quite where that's coming from and what it is seems to be getting more complicated and difficult to follow than it once was. But still, there is that shape, and it's therefore to do with experience. And if you were in a spaceship with something that you didn't know had any experience, but was just programmed to say certain things, I think that would undermine entirely, for me, any quality in it, because there would be no shared feelings.

We seem to glide away from, whatever you want to call it, feeling, experience, emotion, consciousness: these things that give us everything that matters. As soon as you start to think about it, everything that really makes life worth living is not something that you can measure in the lab, or find or insert in another creature. What is love? Love is indisputably in existence. But we can't say anything about where it is, what it is, how big it is, or how to give it to, or put it into, another object or being. It's not like that. That's just one example; you could say the same about beauty and truth and so on. What you can do, and what people like me who are interested in the brain can do, is seem to answer the question while actually answering a completely different one. They say what goes on in the brain when you experience X, which is not at all the same as what X is. That's another slippage we need to be wary of. Or do you think I'm missing something there?

[44:33] Michael Levin: I agree with all of that at the human level. But when you say experience, I immediately think of a single-cell organism, and I think it has real experiences in exactly the same way that we do. So what does that experience look like? Here comes a noxious bit of salt. One can give an account of the processes that are involved in that experience. Then there's a causal structure: this thing will learn from that experience and try never to come back to that same area. It will be stressed, and because it's stressed, it will make other mistakes, and all of these things that we would readily understand. I think all of the details of that story could be swapped out and, in fact, at some point, at that level, will be reproducible. I cannot fathom what would be the barrier, 100 years from now, to people synthetically reproducing those events with all of the causal structure of everything that follows. It seems inevitable to me that people will be able to reproduce that at some point, and I think that will be a real experience. If it's real in the paramecium, what are we going to do when, 100 years from now, somebody says: look, I've put this thing together from scratch; it does all of the things the paramecium does; you agree the paramecium has real feeling; why does my construct not? Why don't I need to morally worry about it? I actually think it's very dangerous to make those kinds of distinctions, because they lead to you being able to discount, as objects of ethical concern, things that we really should be worried about. I don't know what the answer would be. Why, if we buy it in the paramecium, would we not buy it in another construct that is technologically attainable at some point?

[46:47] Richard Watson: I'm going to, if I may, take issue with you, Mike, about the struggle for existence, because I think that's really interesting. You're saying it doesn't have to be made from the same stuff, it doesn't have to come from the same evolutionary lineage, it doesn't have to be part of the same tree of life as me; but if there's a struggle for existence which is meaningful to it, then we have something in common, if I heard you right. I don't think I want to go that way. I don't want to view my existence as a struggle. I don't think life is a struggle for existence; that's part of the mythology created around the separateness of me and everything else. Life is really a harmonious resonance between me and everything else, not a separateness, not a me trying to persist whilst everything that's not me doesn't matter to me. All of the meaning is drawn from the relationship between me and the other. So I would gravitate towards something I'm more likely to have a genuine relationship with, in a way that matters to me and is meaningful to me, if it has the same harmonic depth that I do: multiple levels of causal structure inside it, in the same way that I have. But how am I going to know that if it's not in the same key as me?

You can build a song out of a particular fundamental: you start with the fundamental and you add lots of other harmonics, you take some particular harmonics out, you phase-shift some others, and you look at the intervals between them, and you create this construct which has lots of harmonic depth to it. And now it meets another song. Do they have any relationship to one another? If they really weren't built from the same fundamental, then they could be discordant with each other at every level of that hierarchy, in a way that they just don't dance together at all. I think that will look to us not like another living thing we can't get on with, but like nothing at all: it has no harmonic resonance with us at any causal scale. We can't even see it. It's not there. To the extent that things are there for us, it's because they're built from a similar harmonic scale as we are. When one song meets another and says, look how we are harmonizing together, how we are jazzing together here, it's because I've got part of that refrain too. That little refrain makes sense to me; I've heard it before; look how it fits together with mine. And the only way it fits together with mine is because we are actually different branches of the same tree, drawn from the same fundamental, because there isn't any reason for us to harmonize with one another otherwise.
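
Watson's song analogy lends itself to a small numerical illustration. The sketch below is editorial, not anything from the conversation itself: it builds two "songs" from subsets of harmonics of one shared fundamental (with random phase shifts, as he describes) and a third song on an incommensurate fundamental, then measures how much frequency structure each pair shares. All function names and parameter values are hypothetical.

```python
# Hypothetical illustration: songs built on a shared fundamental overlap at
# their common harmonics; a song on an unrelated fundamental barely registers.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 8000, endpoint=False)  # one second at 8 kHz


def song(fundamental_hz: float, harmonics: list[int]) -> np.ndarray:
    """Sum a chosen subset of harmonics of one fundamental, with random phases."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(harmonics))
    return sum(np.sin(2.0 * np.pi * fundamental_hz * n * t + p)
               for n, p in zip(harmonics, phases))


def spectrum(x: np.ndarray) -> np.ndarray:
    """Normalized magnitude spectrum: which frequencies the song occupies."""
    s = np.abs(np.fft.rfft(x))
    return s / np.linalg.norm(s)


def resonance(a: np.ndarray, b: np.ndarray) -> float:
    """Spectral overlap between two songs: a crude stand-in for 'jazzing together'."""
    return float(spectrum(a) @ spectrum(b))


same_a = song(110.0, [1, 2, 3, 5])                # shares harmonics 1 and 3...
same_b = song(110.0, [1, 3, 4, 6])                # ...with this one
alien = song(110.0 * np.sqrt(2.0), [1, 2, 3, 5])  # incommensurate fundamental

print("shared fundamental:", round(resonance(same_a, same_b), 3))
print("alien fundamental: ", round(resonance(same_a, alien), 3))
```

Under these assumptions, the two songs drawn from the same fundamental overlap wherever their harmonics coincide, while the song on an incommensurate fundamental shows almost no overlap at any scale, matching Watson's point that such a thing would barely be "there" for us at all.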

[50:28] Michael Levin: I agree completely that it may be extremely difficult, if not impossible, to really be able to tell, when the implementation is significantly different. This is something I always come back to; it has been well trodden in science fiction from day one: there may be intelligences so alien that we are just not smart enough, and our concepts just do not map at all, so we cannot recognize each other in that way. I think that's certainly possible. I take your point about the struggle. That's a very deep thing that I don't yet know what to say about. But I will tell you, and I'll send this around later on, since we don't have the time for me to read it all out, that there's a poem by Oriah Mountain Dreamer that speaks to this very issue: what keeps you up at night. I'm not claiming that life is supposed to be a struggle, but having shared concerns automatically presupposes that you've got goals, and that you are not a constant: the paradox of transformation and all of these things. Even if we are in harmony, I still think there are these fundamental big questions about what we are and what we ought to be doing. Maybe this is my own limited development, but I don't see any way of getting rid of those, even with the love and the IQ and everything else. I don't see any way of getting rid of those fundamental questions that are supposed to be keeping us up at night. I'll send it around; it says what I mean better than I ever could.

[52:27] Iain McGilchrist: And I like the emphasis on relationship in what you had to say, Richard, because I believe that relationships are fundamental, and therefore it's not about a thing that is atomistically angsting about its survival; it's already constituted by a web of interconnections and is part of that. I'm not going to deny that life is very often a struggle. It surely is. But there's also a lot left out of the picture if we focus only on that. Evolution used to be presented as purely a matter of competition, but we know that it's more a matter of collaboration than of competition, though competition plays a very important part. So there's a lot of relational stuff that's very important to the existence of a being.

To go back to your paramecium: I think if you make the creature simple enough, you can actually assemble it from bits of other paramecia. In a way, you've done a Lego job on it. You've taken it apart and you go, well, if I put this back in there and that back in there, with any luck it'll take off again. But you haven't really created anything there. All you've done is reverse an act of destruction, because the thing itself is not created by humans or by anything like that. We don't have to be from the same tree of life, in the sense that there are many branches of it. But there has to be a recognition that whenever we say, look, we made a paramecium, what we're really doing is piggybacking on something that nature has given us that we don't fully understand; we've just about reversed something. It's not terribly different from doing a heart transplant: a person needs a heart, there is a heart, put the heart in. But of course, what's really exciting and interesting about heart transplants, and it's not an urban myth, is that after a heart transplant the person takes on something of the person whose heart they've received. I know of a senior surgeon at Hairstock in this country, which is a centre for transplants, who gave up doing his work because he was so spooked by what he was doing to people. So even when we think we're doing a parts job, we don't really know what kind of a whole is coming with it. We're fixated on the idea that everything can be broken down into parts and then reconstituted. But I think the relationship between parts and wholes is greatly misunderstood. I don't want to keep banging on about that, but it is important.

[55:43] Michael Levin: I think that is absolutely critical. I've been giving a couple of talks about this, and writing some about it too: this idea that we automatically know what we have once we understand the parts is completely wrong, but very, very pervasive. And I agree with you about taking apart the paramecium. So let's go even further down. There's this emerging field of active matter, which I think is extremely interesting. A lot of people think it undermines the kind of humanist, organicist things that we've been saying here. I think it's just the opposite. All of this work on the amazing, unpredictable, emergent properties of very simple systems is actually highlighting exactly what we started out with: the claim that deep cognition is in some way a feature of the universe, and that when we create these machines, these physical bodies, we're basically pulling some interesting things out of some Platonic space of minds out there. This is truly minimal matter; we're talking about two or three chemicals at most. That's it. This is not like taking apart some complex paramecium where we don't know how it works: you see all the ingredients, and there are just three of them. And what you're starting to see is unexpected problem-solving behavior. I'm not claiming that this has all the richness of the human experience; of course not. But I think that what you've got in a paramecium can already begin there. I know you may not like the slippage, but I think it's a continuum; you've already got it there. And I agree with you on this: there is a part that you did, which is to put together the physical system, and there are aspects of the whole that you did not put in. You didn't create them, you didn't predict them, you didn't know they were going to be there; none of those are on you. What you created was a physical manifestation that seems somehow to pull down some of these dynamics that we have a very poor understanding of.

[58:08] Iain McGilchrist: I don't consider that the kind of slippage that was worrying me at all. I too believe that we're not equipped to understand what exactly consciousness is, or even what matter is. But I do believe that all of that is a continuum; I don't make a hard and fast difference between animacy and inanimacy. I think the one is an extension of the other, and this is why consciousness doesn't need to begin with life: it's there anyway. And so is some kind of direction; not the direction of a tinkering engineer God, but some sort of sense of urgency towards something complex and beautiful, I believe. Why does the cosmos produce such amazing variety? Because this is the unpacking of the potential that's within the whole. So I absolutely agree with you: I would expect to see what you're describing from these three chemicals.

[59:18] Michael Levin: I asked the scientist who makes these things, and they run mazes and do all kinds of stuff, how long did you have to search to pick the right three chemicals? He said: these were the first three things on my shelf that I tried. And that tells me that if this was the first thing you tried, my God, what else is out there? So the space of possible implementations is not sparse, I don't think. I think it's incredibly dense with these things.

[59:56] Iain McGilchrist: Yes, possibly limitless, yeah.

