Show Notes
Mike Gazzaniga, Richard Watson, and I discuss split brains, minds, confabulation, and consciousness.
Richard Watson - https://www.richardawatson.com/
Mike Gazzaniga - https://people.psych.ucsb.edu/gazzaniga/michael/
CHAPTERS:
(00:01) Backgrounds, AI and dogma
(11:09) Clinical view of consciousness
(24:55) Split brains and identity
(37:37) Stories, selves and dogma
(46:09) Could LLMs evolve naturally
(56:56) Multiple selves and unity
(01:05:38) AI experience and emergence
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:01] Michael Levin: I'm super excited to have you both on together. Do you want to each say a couple of words? Because you haven't met before. Just say who you are.
[00:12] Richard Watson: Sure. My background is in computer science and evolutionary algorithms. I've spent a lot of time recently thinking about the relationship between evolution and learning, and I have come to the conclusion that evolution is a more sophisticated learning system than it is usually conceived to be. I think the relationship is more that evolution is smarter than we thought, not that learning is dumber than we thought. I've been interested in learning in systems that we don't usually think of as learning systems: learning in gene regulation networks, but also in ecological networks and social networks, where there's a distributed learning process going on in the network as a whole, even though they're not evolutionary units and not designed for the purpose of learning. I've also been thinking about learning in physical mechanical systems, like masses connected by springs, where the springs are imperfectly elastic and that leaves a residual memory in the springs, and have shown that that's capable of doing the same kind of learning that a Hopfield network does. Those are some of the things I've been thinking about. I'm excited to talk about what we think the real limitations are on current deep learning mechanisms, and whether there are opportunities for relieving those by moving to a different way of doing computation, one that's much more to do with Bayes and resonance and holographic principles instead of the conventional way of thinking about computation in artificial neural networks based on amplitude. That's everything on the table from me.
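For readers who want the mechanics behind that comparison, here is a minimal sketch of the standard Hopfield-network side of the analogy: Hebbian storage of patterns in a weight matrix, and recall by relaxation. It is plain Python with illustrative sizes; the mapping from weights onto imperfectly elastic springs is Watson's own result and is not reproduced here.

```python
# Minimal Hopfield network: Hebbian storage and recall of +/-1 patterns.
# Toy sizes; in the spring analogy, the weights W play the role of the
# slowly adapting spring parameters that retain a residual memory.
import numpy as np

rng = np.random.default_rng(0)
n = 64  # number of units

# Store two random patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(2, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # no self-connections

# Recall: corrupt a stored pattern, then relax with s <- sign(W s).
probe = patterns[0].copy()
probe[rng.choice(n, size=12, replace=False)] *= -1  # flip 12 of 64 bits

s = probe.astype(float)
for _ in range(10):
    s = np.sign(W @ s)
    s[s == 0] = 1.0  # break ties consistently

print("overlap with stored pattern:", int(s @ patterns[0]), "/", n)
# Typically prints 64 / 64: the relaxation dynamics fall back into the memory.
```

The point of the analogy is that the weights (or spring stiffnesses) are shaped by the states the system has visited, so its relaxation dynamics later settle back into those remembered states.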
[02:44] Mike Gazzaniga: I came up through neuroscience and the testing of humans with various kinds of neurosurgical procedures, and from that went into studying classic lesion cases. But I've kept my hand in the neurosurgical cases all these years. In fact, we have a new series of studies being launched in Germany as we speak; the intellectual part is coming out of Cologne and the actual patients are elsewhere. What I also do is keep an eye on things at my age and watch others have the fun. One of my colleagues here, a man by the name of Scott Grafton, is one of the real sharp guys in brain imaging. He teamed up with one of his students, I don't know if you've heard of Dani Bassett, but she's a young network-science guru who's now at Penn and really a superior talent. Together they have studied networks at the level of fMRI tractography and have ideas about how to identify networks that are constantly shifting. They've captured what they would call plasticity at the level of network shifting, and by their analysis of these systems they can identify when learning is about to occur, before it occurs, something that would be, I think, relevant and of high interest to you guys. Of course, everybody is now inundated with this. At Santa Barbara, they just concluded a two-day conference; it was the realization of the incredible power of these learning systems. Everybody at one level is spooked by it; at another level, they want more of it. One of the take-home lessons I got, and there were a lot of leaders in the field here, you would know them better than I would, was that we, the people in the field, do not know how these systems are working. They're doing these things, we're measuring the behavior, and we know what algorithms they're running, but we can't figure out how they're actually doing it. I said, well, you're joining the neuroscience community; we don't either. I walked away with the feeling that, because these things come together, people think: let's figure out how the brain does it; if we can figure out how these neural nets, these artificial systems, work, we can get a handle on it. But they don't have a handle on it. I'm worried that we're walking down this path holding hands, and I'm not sure we should be holding hands with this approach yet.
[06:20] Mike Gazzaniga: Let me add one of the things I've been working on for the last few years, and we're just getting going: holding scientific meetings where we try to examine dogma in a field. Your wonderful lecture, Richard, which I listened to the other day, where you take on Darwin directly, up front, is about how dogma captures a field and people won't let other ideas in, and there's almost an industry involved in keeping it that way. As my business friends tell me, the reason regulation exists is to keep people out, and the reason these barriers were formed is to keep people out, because the people inside are already doing business. We've had three or four meetings. I'm retiring this July, but the system is set up so that we're going to have three or four more. What I'd like to do is put in the back of both of your minds that maybe we organize one. The whole idea of the meeting is to get five, six, seven people together for free-form discussion. The idea sometimes is also to bring the biggest advocate of the idea to the meeting, if they're truly a great scientist; they frequently know more of the pitfalls than others. Then you have a full discussion under Chatham House rules: you can talk about the ideas, but without attribution once you leave the room. We've had a couple and they work; they're really quite extensive. I think I mentioned earlier that we had one that got us going five or six years ago, with Randy Gallistel challenging the simple synaptic idea for the storage of memory. We had five or six people here. The dogma of the simple idea, emerging out of the Kandel position forward, is strong, still strong, and to think of it differently is a hard task for those guys. This was a meeting that achieved that, and a little paper came out. The idea in structuring the meeting is that it is a meeting for full and free discussion. That's why you're there.
[10:01] Richard Watson: It's difficult to do that, isn't it? You have to choose your participants carefully. You want somebody who's vested enough to defend a position, because otherwise you're just all agreeing with yourselves.
[10:14] Mike Gazzaniga: Right.
[10:16] Richard Watson: But, as you say, you want somebody who, whilst they might not be particularly open-minded, is convinced that they're right. They're convinced that the dogma is right, but they're willing to talk about the problems.
[10:40] Mike Gazzaniga: And there's a whole other group that are just tired. They don't want to put the intellectual effort into rethinking fundamentals. It's not going to happen on their watch. So you've got to set them aside too. It's a tough agenda.
[11:09] Michael Levin: Richard and I spend a lot of time thinking about unconventional cognitive systems and what it takes to make models of the outside world, of yourself, to process information and memories. I would love to hear Mike's overall take on what we really are. Fundamentally, given everything we know from neuroscience and the phylogenetic tree (it doesn't have to be human necessarily), and given your insights on the bipartite brain and all of that, what do you think we really are? And then, from the inside out, we can expand from there.
[12:03] Mike Gazzaniga: I can make this short, because there are three or four things that don't get talked about enough that I think are key to thinking about the mind–brain issue, which is: how does the brain generate this phenomenal thing called consciousness? There's a whole lot of talk about this and everybody's jumped in on it, and they seem to gloss over a few simple clinical facts. One of the clinical facts is that it's really hard to say somebody is not acting like a conscious agent. That includes people with dementia; that includes people with massive brain lesions. The way I think about it, it's almost impossible to stamp out consciousness. If you've interacted with demented people, you wouldn't say they're not conscious, and yet they have massive pathologies. You wouldn't say a person with global aphasia is not conscious. You wouldn't say a person with a huge spatial disorder is not conscious. You wouldn't say the case of H.M., with massive immediate memory loss, was not conscious. None of that would make you say they're not conscious. But the experimental guys are now building these big models of consciousness where the information flows from here to there, and those pathways are tremendously damaged in a lot of these instances. In my history, someone said, "Let's just disconnect the two hemispheres to see if there are two conscious systems." Boom, there are. They wake up; they have two systems separated; one doesn't know what the other is doing. So there are two. That was easy enough. Now what if we start doing more disconnections? It turns out they're all over the place. You can't get rid of this thing. That brings me to the next little metaphor, something my dad said to me once. He was a physician, and in his aging he had had many strokes, so he was into that. I came home one day and I said, "Dad, you know what I do for a living. I have to ask you this question: what's it like? What's it like in there?"
[14:54] Mike Gazzaniga: You've lost vision over here, you had this over there. What's it like? He just said, "Mike, you work with what you got." You don't work with things that you don't have anymore; you work with what you got, and the story keeps coming, the capacities keep coming out. So this gave rise to the bubbling-up metaphor: through time, all parts of the system come up, as it were. They're up at that moment in time. It's a constant dynamic system. It's not something building toward that moment; the moments are changing through time. What is more active in the brain at that moment is what seems to be the content of consciousness. If you take that a step back and ask what I mean by that: my view, the simple view, is that consciousness is a feeling about a specialized capacity. That's it. That's what it is. There are thousands of specialized systems in the brain. As they come to the fore, like bubbles in a jar, at that moment they come out; that's their moment. That's the instant that we're calling conscious. That content can vary through the underlying dynamics. That's just a way of saying what we know to be true: things are happening through time, they change, and it looks like very local regions are generating this felt state. So we should be looking at the local circuits if we're looking for the neurobiological underpinnings of this thing. I think they all have it. All parts of the brain seem to have it, because whatever's remaining, they seem conscious of, and so forth. So it's quite a different view. I worked it out in my 2018 book. That's how I see the problem. It's a minority view. It's not where I think the field is trying to get to at this moment.
[17:50] Richard Watson: Can I check that I understood you properly? You think that the field at this moment thinks that consciousness is some sort of systemic property that requires all of the pieces to be in place. But it seems more likely, the evidence suggests, that consciousness is in every part of the brain, not at all times, but at the moment they come to the surface, and that you could take very small parts of it and they would still be little consciousnesses rather than a singular thing belonging to the brain as a whole. Did I get you?
[18:34] Mike Gazzaniga: That's the idea. I invite people to think about the neurologic reality of this; you might come up with a different metaphor that's better. One of the most illuminating things to do is to go on rounds in a neurology ward. Every day it's illuminating, because you find something and go: what? What's that? The basic thing is, if you have this question in your head: would I say this person is not conscious? No, you wouldn't. They may be working at a simpler level. It's quite startling to see your first true Alzheimer's patient, who can be sitting there talking to you about life and the past, and meanwhile can't put a red dot on a red dot. Their cognitive mechanism has been corrupted, but you wouldn't not call them conscious. It's crazy.
[20:11] Richard Watson: If we did have a way of understanding consciousness that belonged in each of the parts, which we don't, wouldn't that still leave the problem of why it feels like we have a singular consciousness?
[20:33] Mike Gazzaniga: Yeah, except that...
[20:36] Richard Watson: That could be an illusion.
[20:38] Mike Gazzaniga: There's a whole body of philosophers who think the illusion argument just gets you to that point, and it's a powerful argument. If you go back and spend an hour looking at Roger Shepard's illusions, they're powerful things. You're looking at it, you see it, and you're experiencing it. What is that? And why can't that be the same for this? That's the puzzle. At another level, of course, the unity comes, in my mind, from the level of story. We are storytellers to ourselves all the time, constantly telling ourselves a story. One of the things we unearthed in these split-brain patients is that there's a part of the brain that will make up a story about why they're doing something that we know we had commanded the silent right hemisphere to do. It's coming out of your body, and you simply don't let things happen without interpreting them. So we're sitting there changing our stories; our feelings go up and down; we're changing our story. I was just listening to someone who's studying the role of bringing artificial decision-making systems into the law to make sentencing judgments, a very practical thing. There are all kinds of things that can bias a decision a judge will make by tweaking this and that. It turns out judges are total victims to things like whether they had just eaten lunch. Is it after lunch? They do studies on this and find huge variation correlated with these physiologic states. The idea is that bias-free decision systems would get rid of that, and that's how we should make these calls. We already know in parole hearings that juries want to see Dr. Jones up there in the witness stand. Whether a medic or a psychologist, they're terrible at this: they're at about a 35% prediction rate for whether there's going to be recidivism. If you get a social scientist up there with their outcome studies and all their calculations, they currently make a 55% prediction rate. The notion is that if you really had some neuroscience in there, you could get it up to an 80% prediction rate. But the jurors don't want any of that, because they think it's old Doc Jones who has the wisdom. And then, as an interesting sidelight to that problem, the judges say: well, you could get that up to 90%, but then we have to decide whether to give that defendant that 10% chance. Our whole system is built like that. All kinds of fascinating questions on that front.
[24:55] Richard Watson: In the split-brain patients, where one hemisphere confabulates a reason to explain its observations of the behavior of the other hemisphere, is that entirely mutual? Is it ever the case that the other hemisphere says, "No, it wasn't. Why did he say that? That wasn't what I was doing at all"? Does the other hemisphere confabulate as well, to explain why that hemisphere gave that explanation? That's a question.
[25:24] Mike Gazzaniga: That gets complicated, because there are examples where the other hemisphere rules. In some of the patients, the right hemisphere over time evolves to be able to say simple words, so it can react, and it has reacted to the correct answer. So there is that story of a disagreement. There are explicit studies we've done where you ask a patient to rate how much they like a set of words on a 1-to-7 scale, over two testing days. Take this one study where the patient was very calm and the interactions were very normal, low-key. We put up these words randomly, lateralized to one hemisphere or the other, and the patient has to point to a number from one to seven. The rankings were just in parallel: if one hemisphere gave a word a six, the other hemisphere gave it a six; if one gave a five, the other gave a five. It was a yawn; we couldn't see any differential liking between the hemispheres. Then another day the patient came in agitated and feisty. This was a 16- or 17-year-old boy, and he was jacked up. When we ran the test, the scores were completely disparate: one hemisphere was giving a one, the other was giving a six, and it would flip. When these systems were set differently, there was tremendous conflict and agitation as a result. The whole split-brain thing is just a research tool to see how this thing could work. What is conflict? Conflict is two different messages being sent to the system to consider.
[28:00] Richard Watson: It makes you wonder whether the agitation is in one hemisphere and not the other, or whether the agitation is caused by the disparity between the hemispheres, the conflict between the two being the agitation.
[28:21] Mike Gazzaniga: Yeah. So.
Richard Watson: Fascinating.
[28:24] Michael Levin: It's funny that the storytelling thing sounds so intrinsically critical to being an agent like us. In the AI community, many people see these language models confabulate, basically coming up with whatever they can that has little bearing on the actual causality, and they say, "Oh, look at this, this thing's making up stories." But that part tracks a lot; that's the part that's working. We have the case you're talking about with split brains, but I'd love to hear what you have to say about more extreme dissociative identity disorder cases, where there aren't two coherent inhabitants but many more. I've been reading about cases where they're aware of each other, talk about each other as they show up, and have various dynamics. The two we've been talking about segregate nicely anatomically: you've got two hemispheres. But if you have 29, do they overlap? Are they all resident on exactly the same hardware, or in different regions? Is there any known anatomical correlate to those distinctions?
[29:51] Mike Gazzaniga: Don't know; never looked at that in a serious way. But how about this experiment: when your wife picks up the phone to talk to somebody and you're observing from afar, from the way she responds I can bet with almost 100% accuracy who's on the other end of the phone, because there are different greeting patterns for each. Whether it's a friend, a deep friend, a person that's everybody's ****** *** at, or a child, you can go right down the laundry list; I don't have to ask, I know who's on the other end of the phone. And that's us. The idea here is that's us falling into another story, or, knowing who they are, we're observing the phenomenon. Now, what happens in these clinical cases is that these become encapsulated and insular, available to be sustained states. I just don't know what the underlying pathology is. Could the story be the consciousness? I've tried to play with that idea: it's the story, and we're constantly updating and feeding and modifying it, and that's what's driving the machine, the story. But I don't know. Gavin, what's his last name? I don't know. Anyway, a philosopher was saying that doesn't get at it. He says one of the things we humans have from the get-go is that the moment you wake up, you're ready to go: you have new agendas, you have a set of goals, you have things you want to do. What's that? That's just part of the hardware. That's not a thing you built up from a story; it's just built in. However, whatever else is going on, you still have that thing. Unless you're deeply depressed, I guess.
[33:01] Richard Watson: That's the booting up agency you're talking about, Mike.
[33:06] Mike Gazzaniga: Yeah.
[33:10] Richard Watson: So there's a connection between two of the topics here that might not be so obvious, because you started by mentioning getting people together to talk about scientific dogma.
[33:22] Mike Gazzaniga: Yeah.
[33:23] Richard Watson: Which one could interpret as a collective story: each of the persons involved in that dogma is trapped by that story, can't see things differently from that story, and is dedicated to the continuation of that story. And that makes the dogma something that doesn't belong to any one of the persons involved; the dogma belongs to the community, to the collective. Analogously, the story that the different potential consciousnesses or subconsciousnesses within the brain are committed to is what makes them one thing, what makes them parts of something rather than separate things. And they can't get out of it: they're trapped by the story, as people are trapped by the dogma. When we're thinking about scientific dogma, we think that's a bad thing, because they're not seeing the truth. But when we think about it in terms of ourselves, we think that's what makes it us. That's what makes it more than the sum of the parts. That's what makes it something that belongs to the whole and not the parts. That's what makes it not just each of the parts reacting to the evidence that they have.
[34:57] Mike Gazzaniga: Yeah. Nicely put.
[35:01] Michael Levin: This ties exactly into the booting-up business, because when you look at an early embryo, let's say an amniote like a human or a bird, you get this flat blastodisc of tens of thousands of cells. You look at this thing and you say: there's an embryo. But what are you counting as one? What are we actually counting? What you're actually counting is commitment to a story. You're counting the commitment of all those cells to a very particular journey in anatomical morphospace: they are all going to work together to build this thing that has so many fingers and so many eyes and everything in the right location. They're all committed to this story. You're actually counting morphogenetic stories. I've twisted this 90 degrees to talk about morphological space instead of behavioral space, but it's the same thing: they're committed to that story. You can very easily generate a dissociative state by taking a little needle and scratching across the blastoderm to create one or two separate islands. In that case, each island will form a new embryo; they will self-organize and commit to their own story. Eventually they heal up and you get conjoined twins or triplets. I used to make these things in duck embryos as a grad student. The question is: how many individuals are in this embryo? You don't know right away, any more than you can tell from the outside of a brain how many selves are in there. There might be zero, there might be one, there might be two, probably up to half a dozen, I'm going to guess, in a regular embryo. It's commitment to a particular story. After they heal up, you get cells that are on the border between two embryos, and they're not quite sure. In the cases we studied, if you've got two of them like this, you get left-right problems, because the cells here are the right side of this one but the left side of that one. Who are they? Which genes do they turn on, the left-side markers or the right-side markers? This is why conjoined human twins often have laterality defects: they'll be mirror-imaged, because the cells are very confused about whether they're part of this story or that story. I think there are some very parallel frames here for these things.
[37:35] Richard Watson: Yeah.
[37:37] Michael Levin: One of my favorite stories about this whole dissociative thing was about a guy who was a therapist and practiced integration therapy to help patients get reunified. He had a patient he was working with; the patient was unhappy because this other personality would pop up during work, and he was a partier; he would just disrupt the workday. They were working on it, and one day the patient comes into the office and it's the other one. He says to the doctor, "Hey, what's this I'm hearing about integration therapy?" The doctor says, "We're going to integrate you." He says, "Yeah, but when you integrate me, where am I going to be?" The doctor says, "To be honest, with any luck, you'll be gone." He says, "Excuse me? What happened to the Hippocratic oath? What do you mean I'm going to be gone? Make the other guy gone. He's boring. All he does is want to work all day. I'm having a much more exciting life. Have him gone. I don't want to be gone." So this is a real existential issue.
[38:44] Richard Watson: You should get your own lawyer there. You should get your own integration therapist, shouldn't you?
[38:49] Michael Levin: To represent you. That's a real problem.
[38:54] Richard Watson: I'm sorry, it's the other personality that hired me.
[38:57] Michael Levin: That's right. That's right.
[38:59] Mike Gazzaniga: Richard, in your really great talk, you very generously and kindly set up Richard Dawkins as great, admiring him because he's so good at selling this story. Having run a few of these dogma things now, there's a critical moment there in getting people to listen, and I was wondering whether you lose the lead, as it were. If you were to re-give that talk opening with, "I have some observations; there's a body of work that it looks like Darwinian theory just can't explain," everybody would be listening. What a beautiful meeting it would be if Dawkins were there, and open, I don't know him personally, but open to that kind of intellectual challenge. I think you're so far into it, this story. And this is all happening: after these two days of intensely hearing these guys, the message is that this thing's going to do everything. Sit back and relax, because these neural networks are going to be fully capable of all kinds of stuff. It's not doing things in the old traditional Darwinian way; it's got all kinds of new games and tricks up its sleeve. We'd better try to understand this, and we're being blocked from understanding by hanging on to the simple idea. I had just heard your talk, and I was listening to these guys while trying to think of how to get in and get a conversation going with them.
[41:32] Richard Watson: That's very kind of you with the talk about natural induction.
[41:35] Mike Gazzaniga: Yeah, the natural induction one, yeah.
[41:38] Richard Watson: It's tricky, because when you try to point out phenomena that aren't explained by Darwinian evolution, people just say that they are, because there isn't any other explanation, so it must be explained by Darwinian evolution. That's not really explaining, is it? That's just faith. It's really hard to get past that, to even see that there's a question there.
[42:06] Michael Levin: There's another aspect to this, which is looking backwards to ask: does this framework explain what we've already seen? That's one approach, and people do it all the time, but it's hard for exactly the reason Richard just said: people can tell all kinds of stories, and maybe they're plausible, I don't know. To me, the real acid test is looking forwards and asking: how good is your lens at generating new research, new predictions, new capabilities? There are all kinds of things that I think the standard models, including the standard neo-Darwinian model, are just not going to get us to. It's a barrier to research in certain areas; it does not facilitate the discovery of certain types of things that other frameworks might facilitate better. So I think we have to complement this idea of explaining past data, on which two models may be hard to distinguish, with the next step: okay, fine, but how good is it at generating the next advance? How good is it at opening new...
[43:26] Richard Watson: You don't just mean testable predictions.
[43:30] Michael Levin: No, I don't mean just testable predictions, although that's part of it. Just a dumb example, in the case of the Game of Life, the cellular automaton: you've got these very simple rules and the little cells turn on and off. So you could say: I have a very reductionist view of this, and I don't believe that anything else exists. For example, these gliders, these patterns that move around: I don't believe any of that exists. What I believe in are the little elements, and they each have an on and an off state. That's it. That's what I believe in. The thing is, it's not that this framework can't be used to explain everything that's ever happened in the Game of Life, because you can explain everything that's ever happened that way. But the thing you're not going to do with that view is what people have actually done, which is to make a Turing machine out of gliders in the Game of Life, because you don't believe in gliders. If your frame doesn't help you think about these higher-level persistent entities that propagate from here to there, maybe they carry information, and if I cross them I can engineer a logic gate, then you're not going to think of any of that. If your entire view is micro-reductionist, then any story you tell me is consistent with that worldview. It's consistent, but how fruitful is your worldview for new stuff? It's not just about predictions; it's about how good it is at getting you to invent the next thing, and I don't mean technologically invent, but to see the possibilities of other things. That's what I mean.
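Levin's glider example is easy to make concrete. The sketch below (plain NumPy; the grid size and number of steps are arbitrary choices) implements only the micro-level rule, on/off cells counting their neighbors, yet a glider placed on the grid persists and travels: the higher-level entity the micro-reductionist frame refuses to name.

```python
# Conway's Game of Life with only the micro-rule implemented: each cell
# counts its 8 neighbors (with wraparound) and turns on/off accordingly.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.zeros((16, 16), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # the standard glider
    grid[y, x] = 1

for _ in range(8):
    grid = step(grid)

# Nothing above mentions "glider", yet after 8 steps the same 5-cell shape
# reappears shifted by (2, 2): a persistent higher-level entity.
ys, xs = np.nonzero(grid)
print(sorted(zip(ys.tolist(), xs.tolist())))
```

Everything the program does is fully explained by the update rule, but it was the glider-level description that let people build logic gates and, eventually, a Turing machine inside the Game of Life.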
[45:18] Richard Watson: It's slippery, isn't it? Because in my experience, even when you get an audience to put their hands up at the beginning of a talk and say what they think will happen next, and then you show them something they didn't expect and ask, so what do you make of that? They just say, well, you didn't ask the question right, or something like that. In what sense did I not ask the question right? You were completely wrong about what you thought would happen here. But they'll just make up a reason for their answer having been reasonable given the way I posed the question. Or: now that you've told us how it worked, of course that would happen. After I've explained it.
[46:09] Michael Levin: That's exactly right. Looking backwards, almost anything can be shoehorned into your story. But the question is, why didn't you do that experiment? There's a reason somebody else did it and you didn't: there are different frames and different types of research programs. I'm curious what you guys think about the link between evolution and these language models. My dad asked me a really interesting question the other day: is there a plausible evolutionary path? Could you imagine a possible Earth somewhere, a possible world, where the thing to evolve would have been an entity like these language models? Not the thing that's us, which is multi-scaled and had to go through particular self-construction and autopoiesis and all this, but something else. Could we imagine a world where the thing that naturally evolved runs on exactly the kinds of principles that current AI models do? Is there a path backwards that's natural, or does it require us to have evolved first, and then we engineers engineer this crazy thing that's very different from us? Another way of saying it: do you think the major features of our cognitive system, the fact that we feel unified, that we have an innate sense of agency or free will, are inevitable? Is any cognitive agent going to be like that, or is it possible that we could have evolved in a completely different way? What do you think about that?
[48:09] Richard Watson: Is there a way in which we could have bypassed all that biology and just gone straight to the apparently cognitive artifacts?
[48:21] Michael Levin: Is there a possible world where we show up on a planet, we look around, and I say: whoa, there are no biological creatures here, including engineers that look like us; evolution went right to something that looks very much like our LLMs. That's what it is: there was never a step where there were engineers like us.
[48:48] Richard Watson: No. Well, that's interesting, right?
[48:54] Michael Levin: Because if you think the answer is no, then the claim is that the kind of architecture we have is in some way essential, that every sentient thing we find is going to be like us. Is that the claim?
[49:13] Richard Watson: I'm inclined to think that AI systems as we find them now couldn't have occurred naturally, because they are a mirror or an imitation of the cognition that we do, created by our cognition. That's not to say that we couldn't create an artificial one that was like us, but I don't think the current ones are. I think they are, as we've discussed before, not connected through enough causal levels to the stuff that they're made out of. You can have a thin, superficial intelligence that looks like intelligence on the surface, and when you scratch it a little bit it immediately shows its naivety. Or you could have one that's a bit deeper, which you can dig into a bit further before it starts to show its naivety. It's still the case that there's a yawning chasm beneath that: it isn't connected to a substrate that is actually meaningful to it, the way we are. I have a stronger opinion about that than I realized.
[50:41] Michael Levin: Interesting. We can agree, for example, that yes, that's true, and they fundamentally are lacking that juice of meaning. But might there not be a planet somewhere where these minimal-meaning agents are running around? We can accept that they don't have it, but did it really need to go through us? I've been trying to invent some kind of artificial selection scenario with bacteria or something that would get them to carry out backpropagation. It seems like there could be worlds like that, where you go straight to it.
[51:32] Richard Watson: Yeah. Interesting.
[51:38] Michael Levin: Yet another way of saying it: if you do find a world like that, is your immediate conclusion that there must have been engineers here who left? If you don't see them now, they must have been here at one point; this cannot show up on its own.
[51:54] Richard Watson: I think that's a good way to put it. So a natural intelligence would have to be connected all the way down; it would have to be connected down to the cognition of the subatomic particles on which it was built.
[52:09] Michael Levin: That makes me think of the old Paley's watch argument, remember that? You find this thing and it talks to you. On the one hand, you would say that's even worse than the watch: it definitely means there was an engineer here somewhere. But on the other hand, I'm not convinced that there isn't a path to something like this, even if it is shallow.
[52:38] Mike Gazzaniga: I'm taking your class here, guys, and I've got a question: why wouldn't the layered-architecture metaphor just absorb all of this? The people who push layered architecture, John Doyle, do you know John Doyle's work? The way he talks about it, the layers have their own logic and physics, and they have a protocol with the layer below and so forth, but they have no knowledge of what's going on elsewhere, nor do they want it. If you have that view, I don't see the problem; that's just the way systems are built.
[53:34] Richard Watson: Then I can ask how many layers I want before I think it's a real cognitive thing, right?
[53:43] Mike Gazzaniga: Exactly.
[53:45] Richard Watson: If I want 50 of them, can I have them at layers 100 to 150 instead of from naught to 50? If I've got 50 layers, there's a sense in which it becomes substrate-independent as to what the bottom layer is. Maybe I am thinking a little bit too bottom-up.
[54:14] Mike Gazzaniga: Isn't one of the arguments these guys make that the reason we kick things up to the social layer is that we can't solve it ourselves, and we're going to have to have the social idea manage us, because our mental capacities, our feelings about stuff, aren't helping us? We think things should be fixed, and therefore we build social structures that can move things along faster.
[54:53] Richard Watson: It's interesting that you mention that. I'm cool with the idea that a layer can have its own logic, but the protocols of communication between the layers are vital; otherwise you only have a single-layer logic and it doesn't have any depth to it. Despite the things I said in the last 10 minutes, I'm much more top-down than the average bottom-up scientist. But if I were really scale-invariant, I would say it could start at any scale and you could build upwards and downwards simultaneously. Or you could simply acknowledge that all scales are always involved; there can be a bulge at a particular scale, but there's no sense in which cognition is grounded in a physical reality at any one particular scale.
[56:33] Mike Gazzaniga: I'm going to take this phone call. It's my wife.
[56:37] Richard Watson: We would have known it was your wife by the way you spoke to her anyway.
[56:42] Mike Gazzaniga: Are you on your way? Sorry, thanks.
[56:56] Michael Levin: Mike, earlier Richard had asked: why do I feel like such a unified being, then, if we're made of parts? Is there an alternative? Is there a possible cognitive being that would ever say anything different? I'm not sure. Is there some possible being somewhere that does not feel unified, and if there were, could we ever communicate with it? I don't know. What do you think?
[57:36] Mike Gazzaniga: One of the wisdoms out of clinical neurology is that patients, not just split-brain cases, which allow us to study these things coolly and experimentally, but any patient with a brain disorder, are constantly recrafting who they are to deal with it. Whether it's a motor disorder, a sensory disorder, or a memory disorder, there's constant adjustment. So we reflexively change our story, change our feelings about things. I think we know we do it, and we marvel at it, and we try to understand why yesterday I wanted to go to Rome but today I don't. What's changed there? It's usually some kind of feeling: you imagine yourself sitting on a plane for 15 hours. I heard the greatest advice on accepting an invitation: when somebody invites you to something a year from now and you're about to say, yeah, sure, in your mind ask, would I want to go there next Tuesday? That really cleans out a lot. It's pretty good.
[59:12] Michael Levin: It's pretty good.
[59:15] Mike Gazzaniga: We know from human experience all these dimensions of our own personality; we try to compensate for them and deal with them, and in doing that we're trying to tell a different story about ourselves. I was thinking of your wonderful example there, Richard. I wonder if you could have set that up in the lecture too: "I'm going to show you an example of the storytelling brain." Then you give the challenge, the person fumbles around, and you just say, "See?"
[1:00:02] Richard Watson: I should be able to set it up so that whatever answer they give, I'm right, shouldn't I? If I do it right.
[1:00:08] Mike Gazzaniga: That'd be fun.
[1:00:11] Richard Watson: So Mike and I were discussing the other day how many voices we have when we write together. If we write something together, are we writing with one voice? Do we need to write as one voice, or can we still write as two, so that we're writing a dialogue for other people to read? So, in answer to your question, Mike, is there ever a consciousness that feels like it's not a unified thing? Well, I don't. That's what we are, right? I don't completely feel that you and I are one thing.
[1:00:51] Mike Gazzaniga: Haven't we all had this experience? Let's say you've co-written a paper with somebody, and five years later you go back and look at it, and you come across something, in the discussion section usually, and go: oh, I would never have said that. And then you realize you didn't say it.
[1:01:19] Richard Watson: I've read papers that I know I wrote, and I don't remember ever thinking that.
[1:01:27] Mike Gazzaniga: Well, that happens. With increasing frequency, I might point out.
[1:01:39] Richard Watson: But I can also imagine that you come back to a bit of text and think, I really don't know whether I wrote that paragraph or whether the co-author wrote that paragraph.
[1:01:48] Mike Gazzaniga: This isn't true of scientific papers, but if it's a book and you have a really good editor, what they bring to it, their skills, is enormous. I've noticed that they can really change the lucidity of a paragraph with a few words, moving them around, and bingo. That's a good question; I want to think about that.
[1:02:34] Richard Watson: In the same way that different personalities might answer the phone depending on who was calling, different personalities reside in me all the time. When I'm talking to you, you think it's the same me, and each of them would tell you that it was the same me, but they're not really. Each of them is willing to confabulate. It's quite common that I think to myself, "I don't really know why I did that."
[1:03:09] Mike Gazzaniga: Yeah.
[1:03:11] Richard Watson: I live with that. We all do. They've given a reasonable explanation, but it wasn't me.
[1:03:25] Mike Gazzaniga: In personality research, there's a view that one is described as having a personality, a certain type. What the researchers have found is that by the age of 26, the world you live in has pretty much decided what it wants to think of you. If you start deviating from the model they have of you, they beat you back into what you're supposed to be, through the various distributions of rewards and punishments and ignoring. They don't want you to change, because they don't want to go through the energy of building another model of you; they've spent their time. And it's weird to be aware of those constraints, of the world making you react the way you have always reacted.
[1:04:30] Richard Watson: Which is what one part of ourselves does with another part of ourselves.
[1:04:37] Mike Gazzaniga: You're right. That too.
[1:04:41] Michael Levin: It's amazing, all of these things, especially what you just said, Mike, and what Richard was saying a minute ago, that you might actually be talking to a different personality even under so-called normal conditions. All of these are things that people critique the various AIs for: "You're not talking to a single thing underneath; it's different from day to day; it doesn't have a stable core." They pretend that's not true of us. I think all this neurological work is an important body of evidence that has not penetrated the discussions of AIs.
[1:05:38] Richard Watson: I want to revise my answer about how grounded it needs to be, or whether it could exist just at that level. It's not just how many levels there are, and it's not just how deep I can scratch before it starts to show the wires. In order to be sufficiently deep, it would have to have had sufficient experience that shaped each of the layers, so that it was its own experience and not just a faint image or facsimile of such experience. It's more like that. The question is, can you tell the difference between something that was built from real experience and something that's a facsimile of experience?
[1:07:04] Mike Gazzaniga: A buddy put it simply the other day: you take GPT-5, 6, whatever, however credible they're going to become. At the end of the day, they're going to be doing these incredible things, and maybe they won't have had the human experience. Now, is that a fair constraint, if you're basically a laptop sitting on a table doing all this stuff? But is that the threat?
[1:07:52] Richard Watson: They won't have had the experience, but the question is, could they acquire structure equivalent to having had the experience? Could it be that the only way to explain the level of interaction we have with them is that they have correctly induced, in a deep way, what the human experience is, even though they didn't actually have it?
[1:08:23] Mike Gazzaniga: They can say things about the experience without having the experience. That's what writers do all the time.
[1:08:32] Michael Levin: Well, that's the thing. I'm certainly not arguing that GPT, whatever architecture you want to pick, is fully agential and all that; I have no commitment to that. But I do think two things. One is, when you say we've had an experience, I don't know that we've had any experience. In an important sense, we're a brain in a vat. You haven't actually had an experience; what you've had is some sensory data that makes sense to you and that you've told stories about. And that's adaptive enough; you've interpreted it correctly enough that you're still around. So that aspect, being some sort of agent that's locked into a really narrow slit of the electromagnetic spectrum (this is what I can see, this really thin piece of the spectrum; I'm completely blind to all the other stuff; and I've got a reach of, I don't know how long, but I can't contact anything far away, and I don't know what my senses are connected to out there anyway), that situation can certainly be emulated in these things. I also think back to the early days of having kids, when they're really little. They go through this phase, and what Mike just said about writers is exactly right: they can talk about animals they've never seen and what it's like in Africa and all these things, with zero experience of any of it. What they have heard is a bunch of stories, from which they've concocted enough syntax that they're basically babbling. They go through this phase where they can put on a pretty good show, but they've not had a functional interaction with any of the things they're talking about. As they get older, eventually you say, well, now he really understands what this concept is, and it was a pretty smooth journey from putting a bunch of words together to that. I still remember my youngest spent about a week running around adding ".com" to everything. He would go "sandwich.com." He had no idea what it was, but he knew that in certain circumstances it brings up all sorts of interesting new things, so he was trying it out. Eventually that went away, because he realized it's not actually that useful for many things. When you have kids, you watch that transition from a pure syntax engine to somebody who knows what they're talking about. And I was thinking about this for myself: how many concepts do I talk about on a daily basis, thinking I know what I'm talking about, that I've actually had no experience with at all? Everything I know about them comes from reading and hearing what other people have said. We assume, yes, there is a Milky Way out there. Well, is there? Who knows. Like I said, no commitment to GPT per se, but I'm skeptical that real experience is something we can get hold of. I feel like we all live in VR in some sense. People ask, what if we live in a simulation? I don't know how it could be any other way. We are constantly building this view of what we live in; we don't have access to actual reality. That's my thought.
[1:12:07] Mike Gazzaniga: Noam Chomsky in the last week or so has this great quote about GPT-3. He says, "It's nothing other than high-tech plagiarism."
[1:12:21] Richard Watson: Aren't we all?
[1:12:23] Michael Levin: Somebody said that to me the other day: it's just linear algebra. Well, we've had linear algebra for a good couple hundred years and no one saw this coming. "It's just ordinary differential equations," then. Okay, it's linear algebra, but that doesn't mean, and this is another thing, something Mike said at the beginning, you may know what went in, and when, and how you made it, but you don't know what it's capable of. People don't feel that either. People tell me all the time: "Look, I make these things. Don't tell me they're this and that. I made the thing. I know what it's capable of. It's just linear algebra. I didn't put any magic in there." You made it, but as with many other things, especially in biology, once you've made it, that doesn't mean you know what it's going to do. I really think people underappreciate the strong emergence that's going on here. They think they know what the ingredients are and therefore they know what they have. I think that's very dangerous.
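For what it's worth, the "just linear algebra" claim is literally true of the core operation, which is part of Levin's point: knowing the ingredients tells you almost nothing about the capabilities. Here is a toy single-head self-attention step in NumPy; the dimensions and random weights are purely illustrative, not those of any real model.

```python
# Toy single-head self-attention: the "just linear algebra" core of an LLM.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 5, 8  # 5 tokens, 8-dimensional embeddings (toy sizes)

X = rng.normal(size=(seq_len, d))                         # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # projection weights

Q, K, V = X @ Wq, X @ Wk, X @ Wv            # three matrix multiplies
scores = Q @ K.T / np.sqrt(d)               # scaled dot-product similarities
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
out = weights @ V                           # weighted mix of value vectors

print(out.shape)  # (5, 8): each token's output mixes info from all tokens
```

A few matrix multiplies and a softmax; stack thousands of such blocks and train them on a large slice of human text, and the resulting behavior is not predictable from this description, which is the emergence point being made.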
[1:13:40] Richard Watson: You wouldn't say that in chemistry either, right?
[1:13:43] Michael Levin: That's right.
Richard Watson: I put these things in the test tube and therefore I know what it is. No, you don't.
[1:13:50] Michael Levin: Absolutely.
Richard Watson: We didn't know that gas would be flammable.
[1:13:52] Michael Levin: Yeah, I think of that.
[1:13:57] Mike Gazzaniga: The next time you take a new pill from your doctor.
[1:14:00] Richard Watson: Yeah, yeah.
[1:14:02] Mike Gazzaniga: Looking at a very local regional thing.