Show Notes
We discuss definitions of a Self from the perspective of history and anticipation, free will and responsibility, what learning (generalizing) agents bring to the table beyond the input data, truly alien aliens, and whether your brain is really necessary (e.g., https://www.science.org/doi/10.1126/science.7434023).
CHAPTERS:
(00:00) Temporally Extended Experience
(12:51) Free Will And Responsibility
(28:17) Neural Network Generalization
(40:43) Induction And Shared Universes
(49:30) Hydrocephaly And Brain Capacity
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Richard Watson: I had a thought that followed on from conversation last time that I wrote down in the following way. It was something like, we often ask the question, what is it like to be X? Or is X the sort of thing that it can be like to be? I wondered about a slight variant on that question, which is what is it like to have been X rather than to be X? My intention of asking it that way is to place emphasis on the history of experience that X has had. What is it like to have been the kind of thing that has had the history of experience that X has had? That takes the question away from the tendency to say, what is X made of or how does X work, and instead to think of how X has been changed by the history that X has had. I wondered if putting a question that way did any useful work for us. That was something I could say more about. Not a lot more, but that was something that was left hanging from last time for me.
[01:23] Mark Solms: Maybe I'm getting the cat by the balls. But when you first phrased it as you did now, "what was it like to be X?" I thought X therefore no longer exists. So would it not be "what has it been like to be X?" which implies X still exists.
[01:52] Richard Watson: Yes, I didn't mean to imply that X doesn't exist anymore. I'm asking a question: what is it like to have been the kind of thing that has had the experiences that X has had?
[02:06] Mark Solms: I prefer that because of the way you formulated it in the abbreviated version.
[02:14] Richard Watson: It was too cute. It missed the proper question.
[02:19] Mark Solms: Not only does it imply that X no longer exists and you're asking it from the grave—what was it like? But also it means the qualia are no longer necessarily there. If it just is a readout of what it was like, I think that's a lesser point. It's a very interesting nuance. What do you think, Mike?
[02:52] Michael Levin: I love it. And I immediately went in the other direction, because now I'm thinking a lot about what it's like to do. I think we talked about this last time. And so now I'm wondering, putting it in your way, Richard, if there's another question, which is: what is it like to be something that anticipates things happening to it? Something that must act and is not indifferent to what happens next. Exactly what you're saying, but looking forward. What's it like to be something that has that level of investment in what happens next? Two sides of the same coin, but I like it a lot. It also seems to take the emphasis off of there's only the present now and it's what you are now.
[03:55] Richard Watson: It felt to me that it immediately takes the pressure off its current structure, its current material being, its current material form. What is it like to be X? It's just too easy to jump in and say what X is made of and how X is constructed and how X works. Whereas when you pose it more historically, or generalize it in the way that you suggest, in a more temporally extended way... I don't know if you can say it both ways at once: what is it like to be a system that has had that kind of experience, and that is carrying forward some expectation about the future? Trying to do both at once, but making it temporally extended instead of just in the moment.
[05:02] Mark Solms: What's occurring to me now, as I think further about this, is, firstly, to be a system that has temporal depth, in other words, to be a system that has learned from its experiences, that has been affected by its experiences, is a system which is updating itself. It's changing its mind. It's changing its mental structure. It's changing its control system, or whatever you want to call it, on the basis of past events that affected it. Unless the events mattered to it, they would not have been recorded in the way that's implicit in the question. Any system that has such a history would have such a future, because it's that kind of system. I don't think, Mike, that a system has to know that it is changing its mind in order to anticipate the future. I think it's sufficient that it changes its mind. In other words, it changes its actions, its policies, based on past events. That means it is going to change its policies in relation to future events, whether it knows it or not. So I think that the linguistic gymnastics that you get stuck in, Richard, can be avoided just by asking the question in either direction, but I think that the simplest one is to ask it in the past tense: what has it been like to be X?
[07:11] Michael Levin: We could even go more minimal than that and say, what's it like to be something to which things can happen? You can go super minimal: not even know that you're changing your mind, not be able to predict very much, but be the subject of possible future happenings.
[07:35] Richard Watson: But you just changed it to "object," though.
[07:39] Michael Levin: Yeah.
[07:40] Richard Watson: What kind of a system can be acted on?
[07:53] Michael Levin: Yeah, you're right.
[07:57] Mark Solms: I had, a few months ago, a brief conversation with a colleague with a humanities background. I think she's a literary scholar. She was getting very exercised about this way of speaking. This, I suppose, is a Nagel-cum-Chalmers way of speaking: this "what is it like?" She objected to it on the grounds that it implies an "as-if-ness." It's not "what is it to be," but rather "what is it like to be?" That was the way she was reading it, as a literary person. She said, "Why don't you just say, what is it to be?" But that takes away the first-person perspective.
[08:52] Richard Watson: Because we mean, what does it feel like to be?
[08:57] Mark Solms: How does it feel to be X? Or how has it felt to be X?
[09:06] Richard Watson: When I think about how a system has been changed by its experience, I immediately reach into the machine learning territory or learning theory territory and notice: what is it like to be a neural network that's been trained on XOR? What's it like to have been a thing that's had that past experience? And there's something that you can say. I noticed that the way in which it behaves in the future is interesting from a machine learning point of view, exactly because of the ways in which it behaves that are not determined by its past experience. In other words, the ways in which it generalizes. So there can be many networks that have had the same past experience that you could have given them exactly the same training data. And because they generalize differently, the way in which they generalize is what they brought to the table that wasn't in their past experience. This undermines the validity of the question that I started with: it isn't just what's it like to be a system that has had that kind of past experience, but what is it like to be that system in a way that isn't described exactly by the past experience that it's had?
[10:53] Michael Levin: But that has a really interesting flavor, which you and I have been talking about anyway in other contexts of getting more out than you put in. It's not that it wasn't the past experience, but it wouldn't have been able to unlock whatever this is that it now brings to the table if it weren't for that past experience. It's a two-part thing. It's your experience that allows you to do things that weren't directly experienced.
[11:35] Richard Watson: Your inductive bias was there before you started, but your ability to generalize wasn't. You needed to transform your inner model, with that inductive bias, through that experience. And that transformation can be deterministic given where you started from, including your inductive bias and the experience that you've had. But it's not determined by the experience that you've had. Neither was it apparent before you had that experience. It was the data plus the parsimony pressure. "The parsimony pressure predicted what I should do in this case." No, you still needed the experience. And in a sense, the bias alone doesn't either. You can't take the set of all networks that have been trained on the same data and say, okay, the way that this one generalizes is different from that one, because you can't just read that directly from its inductive bias.
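What Richard describes here is easy to see in a few lines of code. A minimal sketch, with the architecture, seeds, and probe point all chosen arbitrarily for illustration (nothing below is from the episode): networks with identical architecture, trained on identical XOR data, can fit that data perfectly and still disagree about an input the data never constrained.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# XOR training data: the four corners of the unit square.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

center = np.array([[0.5, 0.5]])  # an input the data never constrains

for seed in range(10):
    net = MLPClassifier(hidden_layer_sizes=(2,), activation="tanh",
                        solver="lbfgs", max_iter=5000, random_state=seed)
    net.fit(X, y)
    if net.score(X, y) == 1.0:  # keep only nets that fit the training data
        # Same data, same architecture: the answer at the center depends
        # only on the random initial weights.
        print(seed, int(net.predict(center)[0]))
```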
[12:51] Michael Levin: It's related to the free will question. People ask me this all the time. There are many ways to butcher it, but I'm interested in what you guys would say about it. I typically just press for a definition: are you trying to be free from your own psychological drives, or free from physics? What exactly are you trying to be free from? For example, Mark, what would you say to a client in psychoanalysis or to anybody: is there a useful definition of that? What do you think about these issues of control? I think it gets to exactly what you were talking about, Richard. You've got this past history, and then there's some other stuff you bring to the table. So what's the useful version of "I could have done otherwise" in this scenario?
[14:00] Mark Solms: The way that I look on the question of free will is that, first of all, it's probabilistic. There are greater or lesser chances that one will do A or B or C. Then those degrees of probability are determined, for a system of the kind that I'm interested in, a conscious system, by its feelings. Feelings govern behavior. Negative affect means you're moving away from your preferred states, from your viable bounds. Positive affect means you're moving back into your preferred states, back towards your viable bounds. The chances of you doing something are increased if what you're doing feels pleasurable, and they are decreased if what you're doing feels unpleasurable. It doesn't mean you won't do it. That is the point that I'm making. It just means you're less likely to do it. To use the biblical example of Daniel in the lions' den: Daniel chooses to enter the lions' den, which is a very unlikely thing to do. He did it anyway. That's evidence of free will. But there are very few Daniels, so this is the sort of behavior you can indulge in, but you're very unlikely to. In that sense I understand free will and its relationship to feeling. I think that the only reason why there would be such individuals is because we have a multiplicity of feelings, a multiplicity of competing needs. What satisfies one homeostatic drive may have the opposite effect on other homeostatic drives. It's a matter of prioritizing them, and that will be context-sensitive. Here's how this sort of thing arises: something that at one point in time may be very unlikely is not so unlikely for this particular system, given its history, given the context that it's in, and given the current relationship between its various homeostatic needs. Does that begin to address the question that you're asking, Mike?
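A toy rendering of the probabilistic picture Mark sketches here, not a model he endorses: feelings weight the odds of each action without forbidding any of them. The actions and affect values below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up affect values: positive feels good, negative feels bad.
actions = ["flee", "hide", "enter the lions' den"]
affect = np.array([2.0, 1.0, -3.0])

# Softmax: displeasure lowers an action's probability without
# forbidding it -- unlikely is not the same as impossible.
p = np.exp(affect) / np.exp(affect).sum()
print(dict(zip(actions, p.round(4))))

# Over many trials the aversive option still happens occasionally:
# the rare Daniel who does the improbable thing anyway.
choices = rng.choice(actions, size=10_000, p=p)
print((choices == "enter the lions' den").mean())
```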
[17:11] Michael Levin: There are two aspects. Some people are really into the science of it and they say, "But physics: in the end, it's all particles zooming around, so what do we really have?" Then there are the people who are more interested in the personal side: given all the stories, the physics stories, the sort of story that you just told, what does it mean for us as individuals? Is this an inevitable story being worked out? Should I be worried about actually doing things, or is it all pre-written? (Although, as I point out, on that view you don't have a choice about whether to worry.) In particular, when you deal with patients, do you come across people who really buy into the hard determinist story? What effect does that have on their life, and what do you do then?
[18:14] Mark Solms: You're making it easier for me when you ask me to speak as a clinician, because clinicians can speak very loosely. But before I do that, let me remind you that the explanation or the account that I just gave of free will, simple as it was, and mechanistic as it was, nevertheless, feelings play a central role in it. So it's not excluding the subject of the mind when I put it the way that I did. I place the emphasis on feelings for very good reason. When I speak of homeostats, there are many homeostats that I do not think have this extended functionality, which gives rise to feeling. So it's a special kind of homeostasis that we're talking about, where there is palpating of uncertainties. In other words, modulating of precisions in the different policies relating to the different homeostats and then the relationship between them. Let me speak now as a clinician. The thought that comes to mind immediately when you put it to me that way is that I'm speaking now both from the point of view of individual experience and from the point of view of having to give expert testimony about whether or not somebody's responsible for their actions. Sometimes it's not just a matter of a binary question of yes or no, which is a forensic psychiatric question of a quite simple kind. In terms of mitigation of sentence, it becomes a more gray question. It's not yes or no. It's a matter of to what degree are we to take mercy on this person. The position that I've found myself coming to is that there are two factors. First, I think it's undoubtedly so that our individual histories for which we are not responsible — those aspects which are huge, like what family we are born into, what personalities and socioeconomic status our parents have, and things that are perpetrated upon us by those families and the environments that we find ourselves in — we can't possibly be held responsible for. It's true that these influences exist, that we can't be held responsible for them, and that they will have a powerful determining effect on the likelihood that we will, for example, indulge in criminal behavior. You are much more likely to indulge in criminal behavior if it's been the norm in this social setting that you grew up in and where there's an imperative, for example, to steal or to behave violently, because otherwise you're not going to get by.
[20:54] Mark Solms: You can't be held responsible for all of that. That's the one factor. The second factor is that, yes, indeed, it's more difficult for person A to resist criminal action than person B. But ultimately, in the end, it is their responsibility. It's not a matter of do we or do we not take sympathy with person A because of their history? We do. We say it's been much harder for you to do what is right, in other words, to obey the law, to keep that matter simple. It's much harder for you, but still it is your responsibility to do so. And so ultimately it is a matter of choice. It's what I was saying earlier about free will. There are probabilistic factors which weigh the likelihood much more in case A than in case B that they will follow a certain course of action. But ultimately, it's up to them to decide whether or not they will follow that course of action. That's based on my experience with these things. When you're dealing with the patient, you realize, "oh my God, the cards were really stacked against this person. What a tragic history." But there are many such people with such histories and some choose one route out of it and others choose another. That isn't due to the history entirely. It's something that you do hold the patient or the perpetrator responsible for.
[23:41] Michael Levin: So what do you think, Richard? It's got both of these things: the part that's determined by your past history of inputs, and then the other part that is brought to the table somehow. What do you think about that? Do we have some minimum?
[23:57] Richard Watson: It felt like there was a little slippage in the middle about, we appreciate the history stacks the cards one way or the other, but the law requires us to determine that you don't do this or you do that, and therefore it was a choice. But was it though? The law assumes that it must have been a choice; otherwise you can't hold anybody guilty. You can't hold anybody responsible for their actions unless there was a choice, unless there was free will. But maybe that whole edifice doesn't make sense. If we were to imagine that the edifice of law was built on a folk psychology of a free will that doesn't exist, we can't use that as reason to believe that they had a choice.
[24:56] Mark Solms: Yes, that's why I agree with you that there's slippage there. I think that slippage is the interesting part of it. I'm glad to be allowed to speak as a clinician because clinicians can speak loosely. You notice that I moved from being a clinician to being an expert witness clinician. I have found that a terribly interesting experience: that a clinician, all you have to do is understand how the patient came to be who they are now. If it's your patient, your job is to help them. It's not to judge them, but rather to help them to live a life that is more satisfactory for them. What is satisfactory for them does include the impact they have on other people, which rebounds upon them. So it's not an entirely solipsistic thing. When you're in a court of law, I've always found it really interesting that the judge has to decide whether the person was or was not responsible for this. They have these quaint notions like "what would a reasonable person do?" and they have to decide whether this is reasonable. It has a kind of sobering effect on the clinician when they can't get away with just saying, "We can understand why this person did that." That's not the point. The point is, are they to be held responsible? There has to be a binary answer. On the one hand, we can throw stones at the folk psychology that the law rests upon. On the other hand, if you take seriously what the law is there to do, what its task is, then it can't afford not to have some sort of simple criteria. I think it's a very easy way of addressing the question we're talking about. Can we really have a situation in which, because you can trace more or less deterministically, including probabilistic causes, why the person came to be who they are, it absolves them of any responsibility? Does it mean there is no such thing as being responsible for one's actions? I find it hard to imagine how we can live in a society in which we believe that, because everybody is who they are because of their history. I find it very interesting that you're drawing on earlier conversations. I'm not in your field, so there are things you take for granted that I don't know about. I found it very interesting what the two of you were saying earlier, the way you were conceptualizing it: that it's not just a matter of what the history of the system is. It's a matter of what the system brings to that history based on its own priorities or tasks. I think that's a very interesting way.
[28:17] Richard Watson: Or its own physical makeup. I've been playing with this phrase, "the history of the system, the whole system, and nothing but the system," which resonates quite nicely with our imagining being in court for a moment. You can take a really simple example, like a neural network learning XOR. There are basically two ways in which a simple one-hidden-layer neural network can learn XOR. One is that it puts two decision boundaries like this, and the other is that it puts two decision boundaries like that. But so long as you've cut off two of the corners, you can do XOR okay. The difference between this and that is the answer they give in the middle: they give opposite answers for the point in the middle, which is to say they generalize differently beyond the training data. If you initialize such a network and train it from initially small random weights, can you explain why it's going to generalize this way or that way? Well, you can't explain it with the data, because the data says that they're equally good. You can't explain it with the architecture of the network, because it was the same architecture in both cases. You could say, if you had initialized them exactly the same, then they would have necessarily become exactly the same. So the initial conditions of the weights do seem to be playing a part. But you can also see, when you look at why this hidden neuron is doing this decision boundary and not that decision boundary, that that decision boundary only makes sense if the other one is doing this one. And this one only makes sense if the other one is doing that one. So they're doing it because they are a symmetry of each other. There isn't any way to explain why hidden node one is doing this boundary or that boundary.
[30:52] Richard Watson: They're all equally likely. But there's a symmetry breaking involved, which is about being complementary to the other hidden node. That's happening inside the network. It's about that sensitive dependence on initial conditions and the symmetry-breaking dynamics which happen inside it: after it's become a little bit like this, it becomes much more likely that it comes out that way, and much less likely that it suddenly flips to generalizing the other way. So the reason that the network says that the point in the middle of feature space is or isn't in XOR (they give opposite answers for that point in the middle) is a function of the history of the system, what training it has had. It's of the whole system, because the reason that it has this decision boundary can't be separated from the fact that the other neuron has that decision boundary. You can't separate it into parts. And nothing but the system, in the sense that there isn't anything we're interested in here that's not explained by the system itself plus the history of external inputs to it. So that's why I like the phrase: the history of the system, the whole system, nothing but the system. You need that to explain why they did that behavior, why they said yes for this input and not no. If you have that in something as simple as a one-hidden-layer neural network learning XOR, it's no wonder we find it difficult to ascertain whether it was history or something intrinsic to the system which takes responsibility for what happened. If it couldn't have been otherwise, then you say you're not responsible. Or could it have been otherwise? Well, everything that happened was deterministic, so in that sense it couldn't have been otherwise. But the reason that it came out this way and not some other way was not really a function of its experience at all.
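One way to make Richard's gestured "this way versus that way" concrete: below are two hand-built one-hidden-layer threshold networks (weights constructed for illustration, not learned) that both reproduce the XOR truth table on the four corners yet give opposite answers at the untrained center of the input square. In solution B, each hidden unit's boundary only makes sense given what the other one is doing.

```python
import numpy as np

def step(z):
    return (z > 0).astype(int)

def net(W, b, V, c, x):
    """One hidden layer of threshold units; returns a binary output."""
    h = step(W @ x + b)
    return step(V @ h + c)[0]

# Solution A: hidden units compute OR and AND; output = OR and not AND.
A = (np.array([[1.0, 1.0], [1.0, 1.0]]), np.array([-0.5, -1.5]),
     np.array([[1.0, -1.0]]), np.array([-0.5]))
# Solution B: each hidden unit cuts off one of the two 'on' corners.
B = (np.array([[1.0, -1.0], [-1.0, 1.0]]), np.array([-0.5, -0.5]),
     np.array([[1.0, 1.0]]), np.array([-0.5]))

corners = [np.array(p) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
center = np.array([0.5, 0.5])

for name, (W, b, V, c) in [("A", A), ("B", B)]:
    print(name,
          [net(W, b, V, c, x) for x in corners],   # both: [0, 1, 1, 0]
          "center ->", net(W, b, V, c, center))    # A says 1, B says 0
```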
[33:26] Michael Levin: The other ingredient there is the nature of the XOR-ness of the problem. It's the nature of the problem space. And what I was thinking about, in terms of the free will question, is that people often try to imagine: could there be a universe with no free will, and what would a universe with actual free will look like? I'm wondering, could we have a universe in which that kind of generalization doesn't work? What would we have to break for that not to work anymore?
[34:04] Mark Solms: Sorry, what would we have to break for what not to work?
[34:07] Michael Levin: The generalization. So Richard started out with this really good example of the fact that when the network learns to generalize, what it's doing is bringing something new to the table that literally wasn't in the training data. There's an extra something that you're getting out of it because now we can work on novel inputs that it hasn't seen before. I'm wondering, is there a way to break that? Is there an imaginable world in which that does not work, that all you ever get is the inputs you've seen before, where that extra bit doesn't happen?
[34:40] Richard Watson: So it depends on whether you care about what outputs it gives to things it hadn't been trained on. A simple place to stop is: I trained my network to do XOR, it correctly gives the truth table for XOR, job done. And you never ask the question of what it does if you give it an input that wasn't in the corners of the hypercube. If I don't ask that question, I wouldn't care about what it brought to the table that wasn't in the history of the system. There's something observer-dependent in it. Because why do I care what answer it gives? Is there a right answer or a wrong answer for rows of the table that weren't in the training data? These two networks really are equivalent, even though they generalize differently. They really are equivalent, because the only ways in which they're different are ways which weren't determined by the data. So suppose I think that this one is generalizing correctly: this one has a true model of the world and that one doesn't. If there's something that matters to me about the way that this one generalized versus the way that that one generalized — this one broke the law and this one didn't — then that suggests that I know something about the real world from which the data was drawn that wasn't in the training regime. I want to know whether this system sees the world the same way I do. If I was given that training data, I would have said that the middle point was in the class, but they said the middle point wasn't in the class. And that means they are not like me, and therefore I can hold them responsible for their actions: the way in which they behaved wasn't the way I would have done it. Can you really hold somebody responsible or guilty for something if you really believe that in the same situation you would have done it? I think maybe you can.
[36:55] Michael Levin: Well, if you're willing to hold yourself. Yeah.
[37:00] Mark Solms: That's categorically imperative.
[37:03] Richard Watson: If I'd been in that situation, I would have totally done that. Thank goodness I wasn't. But you're here and not me. Off you go. There is something, I think, when we acknowledge that a system can bring something to the table that isn't in its history, about whether the way in which it generalizes makes sense to me. Because then there are two of us involved in this system: the observer and the system. There isn't a way of saying this system was bringing something meaningful to the table and that system wasn't, just because they generalized differently. I can say something like: this system generalized in the way that I would have generalized, or this system didn't generalize in the way that I would have generalized. And then I know more about what it's like to be the one that generalized the way I would have than I do about the one that generalized differently from me.
[38:17] Michael Levin: I keep being pulled to this issue of the structure of the space. If you learn to generalize a rule that derives prime numbers, once you've done that, there's a whole infinitude of these things that you can keep on generating beyond what you've seen so far. The structure, the explanation for why you say the things you say: in part it's your history, your structure, but in part it's the pattern of prime numbers, such as it is. That's neither your structure nor your inputs; it's something external.
[38:58] Richard Watson: Nor the observer.
[39:00] Michael Levin: I don't know. So that's an interesting, so that's.
[39:03] Richard Watson: You know, to a different observer, those numbers are not special.
[39:06] Michael Levin: Yeah, right.
[39:07] Mark Solms: That's the crux of your point.
[39:09] Michael Levin: I wish I could remember the exact book, but there was this case of savant syndrome. Mark, you may know what this is. There were two brothers. They were severely disabled in most aspects and never went to school. They would sit with each other and say prime numbers back and forth. The investigator got a book that was just a list of prime numbers. He read some, and they included him in the circle, so they would go three-way. His book ran out. He got to the end and couldn't do it anymore. They wouldn't talk to him after that. It's interesting because they hadn't been to school to learn the significance of that, but there was something in there generating these things. They were pulling it out and could do it better than the almanac he had. Yes, there definitely needs to be an observer, but it still feels like it's not any old list of numbers. Even if you have an observer, there's a specific structure to it.
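A small aside to make the "infinitude" point concrete: once the rule behind the primes is grasped, a finite description outruns any finite list (the investigator's book of primes included). A minimal generator, purely for illustration:

```python
def primes():
    """Yield the primes indefinitely by trial division."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

gen = primes()
print([next(gen) for _ in range(15)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```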
[40:43] Richard Watson: There's something I've said before about how it is possible that anything can generalize. How is it possible that anything has an appropriate inductive bias for generalizing without having already tested that inductive bias to see how good it was at generalizing (the cross-validation method)? But if you've already tested it to see how good it is at generalizing, you're not doing generalization anymore. You're just trying a bunch of different sets and seeing which one worked best. How can it ever be possible that an inductive bias works in a way that hasn't been proved by the data? By definition, it can't be proved by the data. How does learning work, then? How do organisms ever survive in the natural world? How do I ever learn anything? How do brains ever work? How does machine learning work on data sets you haven't seen before? I think what I've been saying about that is: they're built from stuff of the same universe. They don't have to be built from literally the same stuff, but they're built from stuff of the same universe. That brain might be a causal network that happens to be built out of neurons and synapses, but it's still a network of pairwise interactions, like the network of pairwise interactions out there in the world that it's trying to understand, whether that's interactions between people, between objects, or between elements of the selective environment. If they weren't built from the same kind of stuff, induction would be impossible. For me, there's something about being able to intuit what it is to be you based on whether I think you and I come from the same universe. I think what I was reaching for there was that the prime numbers aren't arbitrary, because they're from the same universe of possibilities: in this universe, at least, factors are meaningful things and the absence of factors is a meaningful thing.
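A compact way to see Richard's point that the bias can't be proved by the data, using two made-up hypotheses rather than anything from the episode: if two models agree on every input where data can ever be collected, no split of that data, cross-validated or otherwise, can choose between them; only an inductive bias can.

```python
import numpy as np

# Two hypotheses that agree exactly on every input where data exists
# (here: the integers), so no train/test split or cross-validation
# over that data can ever distinguish them.
f = lambda x: x
g = lambda x: x + np.sin(np.pi * x)   # sin(pi * n) == 0 at every integer

data = np.arange(-5, 6)               # all the 'experience' there is
assert np.allclose(f(data), g(data))  # identical on every possible split

# They differ only at inputs the data never constrained; preferring
# f over g is an inductive bias, not something the data proved.
print(f(0.5), g(0.5))  # 0.5 vs 1.5
```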
[43:51] Michael Levin: It's really interesting. It makes me ask: what does it take to break that? In what universe—what's the alternative universe where that doesn't work? I think you could change all the constants of physics and everything. I think the number stuff would still be exactly what it is.
[44:23] Richard Watson: You first raised that question in the context of what a universe would be like where free will wasn't possible, right?
[44:32] Michael Levin: That's another way that people ask this. Dennett in his early book on free will points out that in terms of physics we know two things. We know determinism and we know quantum randomness. And that's it. Neither of those things look like free will to us. So it doesn't exist. That's his argument. I think it's important to ask, do we have a coherent version of what a universe would be where everyone could say, "In that world, you really do have proper free will." What does that even look like? That's why I ask these questions about these counterfactual worlds.
[45:22] Mark Solms: This is going to be like our last conversation. I'm going to spend a lot of time after this meeting thinking. They are extremely interesting questions that you're raising.
[45:41] Michael Levin: Richard brings up this point that we're from the same universe. So how far does that go? Does that mean that every possible being in this universe, all of our synthetic agents, the aliens, the whatever crazy architectures are out there? So we're all good. We're all part of that same set. We can all sympathize with each other. Now the question is, is there some other universe somewhere where that wouldn't be the case? Where's the border where if something showed up from that universe, you'd be like, there's no way, we just can't.
[46:17] Richard Watson: So that would be a universe where things that were not from the same universe could meet each other. And in that universe, there would just be deterministic stuff and random stuff and nothing in between. When you did something that was obviously determined by your history, there would be nothing to explain. And when you didn't, it would be random, because there was no structure in it that I could possibly understand. What you do that's not determined by your history has no structure in it at all that I can see with my history. Then my ability to induce what induction you're doing would be zero. You also asked how far this goes. Life from other parts of this universe is still from the same universe, so we would get that. But you don't have to go that far. The kind of induction that rocks do doesn't make any sense to me. They appear to be either deterministic or random to me. If they don't have enough shared history with me for me to understand what kind of induction a rock is doing, I can't see it.
[47:52] Michael Levin: That's super interesting.
[47:53] Richard Watson: It may as well be random numbers instead of prime numbers. I just can't see it.
[47:57] Michael Levin: That's super interesting, because that sounds exactly the way that some of the people in my field think about cells too: "That's not at all the kind of thing that I do. I don't see how you could possibly see this as a cognitive anything." They would group cells, tissues, all of that with the rocks in this scenario. In fact, that's exactly what they say when I start talking about cells making decisions. The number one question is, what about rocks? It literally is in the same bin.
[48:37] Richard Watson: But because you've spent a lot of time in cell world thinking about things that cells care about, their generalizations make more sense to you. They're not just saying random numbers to you, they're saying prime numbers.
[48:52] Michael Levin: That's very interesting, because a lot of the people who feel this way, it's not that they've spent less time staring at cells than I have. They do the same stuff, but they get something different out of it. They've seen the same data and they've reached different generalizations. That's exactly right. I'd never thought of that before. Amazing.
[49:30] Mark Solms: What was next on your list, Mike?
[49:32] Michael Levin: I was going to ask you, Mark, whether you know the Lorber cases of hydrocephaly.
[49:49] Mark Solms: Hydrocephaly. You mean hydranencephaly?
[49:53] Michael Levin: There are a couple of papers from the 1980s by Lorber, and one of them is called "Is Your Brain Really Necessary?" What he was studying was the very rare cases of adults with drastically reduced brain volume who had normal or above-normal IQ. I believe I saw one of these patients interviewed on TV when I was a teenager in the 80s; they said he had less than a third of the cortical matter of a chimp, and the guy was in a master's program for math. You wouldn't know—totally normal, but actual brain volume radically reduced, because of the water pressure. So what do you make of cases like that? People will say redundancy, but if that were all it was, I don't think we'd have such huge heads. If you could get away with less, I don't see why most people would have such huge heads. So what do you make of that? What's with these cases?
[50:59] Mark Solms: That is hydrocephaly rather than hydranencephaly. The standard answers are twofold. One has to do with the mode of onset of the pathology. What we're talking about in terms of the hydrocephaly is not unique to hydrocephaly. There are many other pathologies, for example meningiomas, which grow very slowly and ultimately take up an enormous amount of cranial volume, thereby compressing the parenchymal tissue. Just as in the case of hydrocephaly, the CSF, the cerebrospinal fluid, compresses the parenchymal tissue, so too can a rampant meningioma. If it's a sufficiently slow or constant factor in the organism's development, then it simply organizes itself in relation to that. The other, which is fairly closely related but not quite the same thing, is that it depends on how early in development something is taken away versus having to adjust to something that's there from the beginning. I had a patient who came to me after a minor bicycle accident. Because he went to a private hospital, they did all sorts of investigations on him that were totally unnecessary because they could charge for them. He was a very successful business person. The investigations included an MRI scan, which revealed that he had no left hemisphere—never had a left hemisphere. There he was as a very successful person. He just needed reassurance from me that he really is normal. How can it be possible? I have almost literally half a brain. The answer is it's because you've always had only half a brain. Your brain developed around the fact that there's no left hemisphere. There are hemispherectomies performed in very young children. Everything that was going to go to the left hemisphere—including motor control of the right side of the body and the occipital lobe representation of the retina—shifts over to the other side. As I said, I'm not giving you a very deep answer, but it's not entirely surprising that you have a person with chronic hydrocephaly whose cortex is very compressed, which is not ideal, but they've been able to compensate because it's always been like that. It's a constant factor in their development.
[54:25] Michael Levin: We're almost out of time, so I'd love to pick this up next time because I understand the plasticity and all of that. But I think on the computer side, we're told that there's a certain density of work you're going to get out of a certain amount of medium. And we're going to need more GPUs, we're going to need more memory. It takes an amount of medium to fit a certain performance. And if I say, well, don't worry, I'll give you half of that, but we can start that way from the beginning. Well, that's not going to work on the engineering side. Clearly it's not infinite. You can't have infinite density of capacities per whatever cubic millimeter. So I'm really interested in that aspect of it. If you really can fit everything, if this guy is a successful businessman and he's riding a bicycle and he's doing all this stuff, you can really fit all that into one hemisphere. What's the density of competencies per unit volume? I don't get it. Maybe we can talk about that next time.
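A rough sketch of the question Mike is asking, in machine learning terms, with every choice below (dataset, widths, solver settings) arbitrary: train networks of decreasing capacity from scratch, the "half the medium from the beginning" case rather than pruning after the fact, and see how performance degrades.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Shrink the 'medium' from the start, not after training.
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for width in (128, 64, 32, 16, 8):
    net = MLPClassifier(hidden_layer_sizes=(width,), max_iter=2000,
                        random_state=0).fit(Xtr, ytr)
    print(width, round(net.score(Xte, yte), 3))
# On a task this small, accuracy typically falls off only gradually as
# the width shrinks; the interesting question is where, and why, it
# finally breaks.
```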
[55:38] Mark Solms: As always, you ask questions that I don't normally get asked. It's really useful to be asked such questions, because otherwise I truly would never have thought about what you've just said. Just two further footnotes to this question. The first is that it's not infinite: you have to have some of the stuff; you can't do without all of it. For example, you can have no left hemisphere or no right hemisphere, and the other one will take over. But if you have no hemispheres, the brain stem can't take over what hemispheres do. So there's a limit to the principle that we're discussing. The second is that there are differences in brain size between individuals, and there are also gender differences in brain size. It's reliably demonstrated that size makes no difference: male brains are not any better than female brains, despite the fact that they are considerably bigger, and individual variation in intellectual capacities doesn't correlate with brain size, which varies enormously. Those are just two quick notes.
[57:22] Michael Levin: Super relevant. People often bring up an example like this tree frog that remembers where all the babies are, and they say, "Oh, with that tiny little brain!" To that I say: did you have an expectation for what the size should be for that? Because I don't have a clue what the size should be for a given set of competencies.