Watch Episode Here
Listen to Episode Here
Show Notes
Richard Watson, Iain McGilchrist, and Michael Levin discuss the brain, unconventional cognition, oscillations, agency, and more.
Iain McGilchrist - https://channelmcgilchrist.com/
Richard Watson - https://www.richardawatson.com/
CHAPTERS:
(00:02) Framing Hemispheres And Symmetry
(07:36) Twos, Threes, Asymmetry
(18:29) Embryos, Alignment, Third States
(31:17) Life, Machines, Teleology
(39:46) Resonance And Connective Structures
(50:07) Algorithms, Embodiment, Plant Learning
PRODUCED BY:
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:02] Iain McGilchrist: Great.
[00:08] Michael Levin: So I have some questions that we could talk about, but I'd love for you guys to chat. There was some stuff left over at the end of last time, so we may have preferences.
[00:24] Iain McGilchrist: I'd suggest that you tell us the things that would interest you. I'll tell you a couple of areas that I'd like to go into at some point. Richard, if you have some. There are two rather massive areas of interest for me. One clearly is this business of lateralization in life and how much you, for example, share my belief that this is important functionally and changes effectively the experience of the creature, the organism. The other is the difference between living organisms and machines, particularly in the sense that there are complex systems that are intrinsically unpredictable. With a machine, you can start from the ground up with the syntax of the machine, and work out its meaning at some higher level of what it's doing. You can't really do this with an organism. You have to work from both ends, top down, seeing what it is achieving and what it seems to drive at, and from the bottom. But you can't get to the full meaning of the organism by working in a mechanical way from the bottom up. So those would be two areas that, if they're of interest to you, whether you agree or not, I don't know. I'd be happy to talk about those.
[01:59] Michael Levin: That sounds excellent. Richard, any specific topics for you?
[02:04] Richard Watson: I enjoyed reading your chapters, Iain. Lots of that material resonated strongly with me. For a while I found my linear mind getting frustrated that you wouldn't tell me in the right order what things I needed to know. Then I realized you were giving the example of spiraling around the topic.
[02:30] Iain McGilchrist: Exactly.
[02:32] Richard Watson: So that resonates with what you just said about top-down and bottom-up understanding living things. I would like to dig in a little bit on the metaphor of music. And as you say, metaphor is the stuff of thought. I really think that there's something there that crosses new gaps that would be worth discussing. Relatedly, there's stuff about folding away detail to get at the general shape of things. But when you look at the detail of things, you can't see the general shape, you can't see the infinity of all the steps involved. Those are the sorts of things which are on my mind. But the leap between machines and living things captures a lot of that.
[03:49] Iain McGilchrist: Would you like to say, Mike?
[03:51] Michael Levin: Let's do all of that. Some of my stuff will come up while we're doing this, and other things I can keep till next time. It's fine. These are great. Maybe we can start with some of the lateralization stuff, and specifically one place we can start is for you, Iain, to talk about to what extent you think the architecture is specifically bilaterally symmetric. In other words, that you need to have exactly two things that are tied together in a particular way. Or is more of the magic in the specific breakdown of the roles and how the different functions are divided between the two? I'm very interested. I worked on left-right asymmetry for a long time, but I'm interested in alternative architectures. Could we have cognitive systems, either synthetic ones that we can make or alien ones, that have a completely different architecture? We are in a position to make animals with as many hemispheres as you want. We can make three, four. I'm interested in what you think is the essential bit here. With Richard, we had also talked about a couple of things. He could talk about the importance of the ebb and flow of these kinds of systems. Also, Richard, a few weeks ago you had a really interesting comment about the computational power of having a second copy: what happens when you have room to offload some things in the meantime? Please go for it. I think that's a good topic.
[05:50] Iain McGilchrist: It's a theme that becomes more prominent in The Matter With Things, particularly towards the second part, the second volume, that we need things and their opposites or contraries, and we need them both to maintain their integrity as distinct from one another, and yet also to be capable of a union. So we need the principles of division and union together, so that when I think about the two hemispheres, there's evidence that they sometimes work together in a sense we would call collaborating. They're working together in the same way towards certain ends, but they're often complementing one another by taking different approaches. This is important and we mustn't either collapse them into being versions of the same thing or say that they are so distinct that they don't work cooperatively, which they clearly do.
[07:04] Michael Levin: That also sounds like some of the stuff that Richard has been talking about: alternating cycles of being soft, letting the world imprint on you for a while, and then coming back to push.
[07:28] Richard Watson: Yes.
Iain McGilchrist: That's why I brought that up because I thought that might be related to what Richard was saying.
[07:36] Richard Watson: I think that's right about the union and division. If the brain were not in halves, then you wouldn't be able to have any differences between the two parts. It's important that you have differences, because the differences between them create a whole that's worth having. If the two parts were the same as each other, then you would just have double what you had before. It's important that they're different, because by being different they create a whole that's more than the sum of the parts, through the relation between them.
[08:27] Iain McGilchrist: Yep.
[08:29] Richard Watson: I take your question, Mike: did it have to be two? What if you divided it into three parts? I am bothered by that, because it feels like maybe you could do things dividing into threes and not twos. Dividing into twos is the obvious way to do things: if you can't do something with one, then make a copy of it, differentiate them, and now you can do something more than you could do with one of them. But there's something about the period doubling in the logistic map, which eventually exits chaos into a period-three window, that makes me think the thirds are real and it doesn't always have to be period doubling.
[09:39] Michael Levin: It's natural to think of twos and so on, but actually, if you think about early embryonic development, the asymmetry of drawing a midline is really not well understood at all. There's a lot understood about what happens after you've got left and right. That first step, especially in amniotes, is completely non-trivial: you divide, you get to thousands and tens of thousands of cells, and then you have to decide where that midline is. We can interfere in that process, and of course there are animals with other types of symmetries, but bilateral symmetry shows up very early on. Chirality is there even before that. But it's not obvious how you bisect something that has lots and lots of tiny cells.
[10:41] Richard Watson: What about organisms with odd symmetries, like starfish? Is there a nice three-way symmetry?
[10:54] Michael Levin: If I recall correctly, the starfish are actually not five-fold, really. They're bilateral with a Vitruvian man type of thing. There are jellyfish. The jellyfish do have true multiple symmetries, and there are all kinds.
[11:15] Richard Watson: And they're not always powers of two. They can be, but not always. It's interesting that a lot of the ones in the video I showed you the other day, Mike, look like jellyfish, right?
[11:27] Iain McGilchrist: A couple of reflections I'd make. One is that this lateral asymmetry is present in the earliest neural network of which we have any knowledge, 700 million years ago. It has this axial asymmetry. There's a very good reason, I believe, why that should be the case, because of this business of the organism needing to pay finely focused, narrowly focused, targeted attention to a detail that it requires, and its ability simultaneously to keep the precisely opposite kind of attention open: the broad, vigilant attention without preconception as to what it might find or what it's interested in. So that makes a natural pairing. Out of that pairing, almost anything can come. Through the pairings of one and zero, many, many complex structures can be made. As soon as you've got two, you've already got potentially many things. Because, as I often say, you need both "both-and" and "either-or". So you need the situation where it's either-or; that is an important distinction in life, but it's also important to have that as well as the both-and ability to synthesize these things. As soon as you've got that, you've already got more than two things. You've got either and or, and you've got the combination of the two and the relationship between all three of them. And so the thing expands from there. So although it may begin for very good evolutionary reasons as a pair, more than that, almost infinitely more, can emerge from it.
[13:25] Richard Watson: So one could explain brain bilateral symmetry by the fact that it's growing in an organism that already has bilateral symmetry. If we take the organism itself to be fundamentally cognitive, even before it had a brain, in a basal cognition way, then that's just saying bilateralism was always about cognition, and bilateral hemispheres in brains were just a particular extension of that. If you go right back to the origin of multicellularity, it was likely to have occurred through cell divisions which didn't separate, in which case two would have been a natural number. Then the vibration that you can get between two cells is different from the vibration that you can get in a single cell, because you can coordinate them. They could be the same period but out of phase: I'll be on when you're off. You can also have one doing something at twice the frequency of the other, which is stable. Now they're doing something intrinsically different. It seems natural to create things which were powers of two that way, that will do two doublings instead of one doubling. But there's also the fact that the Taylor expansion gives you a third: a half, minus a quarter, plus an eighth, minus a sixteenth, and so on. The pattern you have to follow to do that is just an oscillating pattern. I think it is very natural, Mike, that it's pairs to start with. I think that's very reasonable; there are good reasons for that. But I also think that there's something very interesting in considering other factors.
[16:14] Iain McGilchrist: A couple of other things occurred to me on that. One, as you mentioned, chirality goes well before life, and it does. And of course, famously, the weak force is chiral in its nature. And chirality has a left-handed spiral and a right-handed spiral; I'm not sure what the third-handed spiral would be. That's where all this starts. In the metaphysics part of The Matter With Things, I spend a lot of time on the importance of the coincidence of opposites. Much of everything that we see seems to be structured on what we call opposites, because we think of them as being opposite ends of something, but they may just be the complements of one another, such that you cannot have one without the other. This is a very ancient insight. It's present probably in all the sophisticated cultures that we know of. Pasteur in the 19th century said life needs asymmetry. Symmetry it may have in places, but that's completely unimportant. It's asymmetry that allows life to flourish, and it does so, he thought, because it reflected the innate asymmetry of the cosmos. Pierre Curie, commenting 20 or 30 years later, said that the asymmetry of the cosmos, not so much of life but of the cosmos, was absolutely central to its existence and persistence. So I think this idea of asymmetry is good. Asymmetry and symmetry are a pairing that mops up a great deal. What would be your third thing in there? The third thing is chaos, actually. It's simply chaos. Not everything that is not symmetrical is, strictly speaking, asymmetrical. It may just be a mess.
[18:29] Michael Levin: One interesting thing about the establishment of symmetry in the early embryo is that it's very easy to perturb. There are many treatments that we've discovered that confuse the left and the right halves about which one they are. So you can get double lefts, double rights, you can get mirror images. That's very easy. Of all of those treatments, in every single case except for one, the treatment just confuses which side thinks it's left and which side thinks it's right, but all of the individual cells within that side agree. So if you use a marker that shows you whether something's left or right, you never see speckling. You never see every cell confused while its neighbor has a different opinion. You just see this way, this way, this way, or this way. There is exactly one thing that we found that breaks the concordance. So normally you can randomize it and make the left and right halves flip a coin as to which they are, but all of the cells within each half flip the same coin, with one exception. There's only one thing that breaks that. The thing that breaks it is a cellular alignment mechanism, literally alignment, but also metaphorically, alignment in the cognitive sense. It's a planar polarity mechanism that allows the cells to be literally aligned in the same direction, and not even along left-right; it's actually an alignment along the anterior-posterior axis. Once you break that, then you get the speckling. And then you get individual cells that don't know what to do, as opposed to whole coherent, cohesive regions that don't know what to do. I really like that whole model as a way of thinking about what it takes to make selves, to make a coherent individual out of pieces that make a collective decision, this sort of collective decision making.
[20:25] Iain McGilchrist: What significance would you attribute to this final case that you described with the anterior-posterior alignment, but not left-right coherence? If I've understood you right.
[20:42] Michael Levin: I think the reason I found it interesting is that it gets at this question of collective decision making. So what is a collective intelligence, which we all are? The question we have to answer is: in what sense does the collective have goals, memories, whatever else, that the individual pieces don't have? We are a bunch of cells, but then we have all kinds of cognitive and morphological properties that the individual parts don't have. Right at the beginning of embryogenesis, when 50,000 cells have to come together to give one embryo, this is a breakdown of that collective decision-making process. We have many other examples where the components begin to make decisions as one. They're all synchronized. Some of that is bioelectrical. Some of it is these planar polarity alignment mechanisms. To me, this is the root of the whole business. When you start with 50,000 cells, you look at it and say, this is one embryo. What are you counting when you say it's one embryo? What you're counting is something functional, and it doesn't have to be one: if you make little scratches in it, what you'll get is multiple conjoined twins. Until that scratch heals, each portion doesn't know that there are other portions around. It basically decides, I'm going to be the embryo. They all do that. Eventually they heal. You can have twins, triplets, and whatnot. The issue of how many individuals, how many humans, are in any one embryo is not fixed by the genetics. It's not fixed by the physics. It's kind of an outcome in software, because it could be zero, one, two, three, five, some number.
[22:39] Richard Watson: It's a perfect example of the top-down and the bottom-up being involved there. You might think of where the embryo is and how many embryos there are as a question you can answer bottom up: these are the parts, so that must make one embryo. But the fact that the whole becomes divided into two parts is a macro-scale phenomenon. That top-down organization orchestrates the parts to make a whole.
[23:12] Michael Levin: And then we have cases that are partial. Sometimes they're separate, and you can make twins that are completely separate, but you can also make ones like this that overlap. You can make them in any orientation, and there are human cases too where you've got two bodies and the brain is fused, or vice versa. There are all sorts of possibilities.
[23:37] Richard Watson: Did I ask you already whether conjoined twins ever join up parts which are not homologous?
[23:43] Michael Levin: Yes, you can.
[23:44] Richard Watson: What was the answer?
[23:46] Michael Levin: You can, that can happen.
[23:49] Richard Watson: Presumably the conjoining is much more superficial in that case. If I make a conjoined twin joined between the head and the foot, then it's only actually integrated at the skin.
[24:01] Michael Levin: No, it goes pretty deep. There are scenarios where out of the middle of one embryo another one starts sprouting, and at the connection it's nuts. They're connected all the way through. These things occur naturally too, but especially in something like a chicken, where it's very easy to manipulate all the parts, you can make almost anything.
[24:31] Richard Watson: Whilst you were talking about that, I was thinking about Iain's suggestion of opposites and chaos. I've still got the logistic map in mind. Do you mind if I share my screen to point out something in the logistic map? I showed Mike before, but I didn't show Iain. In the logistic map here, you have this first doubling and then another doubling and then another doubling, and then it turns into chaos. Are we up or are we down? On this path we're up, and on this path we're up, up. On this path we're up, and on this path we're up, down. Whereas this one is down, up, and this one is down, down. This line going up here, all the way up, is the up, up, up, up line. And this one going down here is the down, down, down, down line. How come this period doubling ever ends up at this period three? Because there is no power of two that ends up dividing into thirds. This resonance happens much later on. But as you can see, there's a little trail of dust, a little bit of higher-density stuff happening here, that corresponds with the point of the third. What's that trail of dust? Well, that's the pattern: I go down, I go up, I go down, I go up, I go down, I go up. So plus a half, minus a quarter, plus an eighth, minus a sixteenth, plus a thirty-second, minus a sixty-fourth. It's a third. It gives you the period of the third. Another way of thinking about this, reflecting upon what Iain just said about there being a thing and its opposite, is that in addition to 1 and minus 1, there is also 0. Zero is the unstable thing that you get with a perfect blend of this much minus one and this much one, this much minus a half and this much a half, this much minus a quarter and this much a quarter. And that comes out at zero, which enables you to have three things instead of two things. Instead of the everything-up position and the everything-down position, there's the exactly-canceled-out position in the middle. That's the only way to get thirds out of period doubling.
[27:09] Iain McGilchrist: And why does the top part of the graph not produce the mirror image thirding that goes on in the bottom part of the graph?
[27:17] Richard Watson: That's a good question. There's a thirding happening here, which lines up with the fifth. Is that 1, 2, 3, 4, 5? It lines up there. It gives you the other resonant frequency here. Here we have one where this one is canceling out to give you half of a fourth, and this one is canceling out to give you another half of a fourth, and together they give you a period four. So there's the canceling out on the top one, which comes first because it's more squashed, and the canceling out on the bottom one, which comes second because it's less squashed. When you have one without the other, you have thirds. If you take a naive view where you insist on opposites, you say, is it A or is it not A? You get into this region where it's neither; it's just a mess, it's just chaos. But there's a null, which, first of all, you might identify as it's neither A nor not A, it's null. But then there's a more precise version of null, which is that the perfect balance of being there and being not there gives you exactly zero, which then takes its place as another genuine state. Now there's one and minus one and zero, three things and not just two. I think it's a lovely example of those paradoxes you were talking about at the end of the chapter, Iain: I take this infinite number of steps, but I never see the infinity; I take this infinite number of steps, but I never catch up with the tortoise. So I take this infinite number of steps in the period doubling, but I never really see the third. Yet our eyes see the third for sure. We don't see it as some overlapping powers of two all smushed together. The resonance pops out as: oh no, that's a real thing.
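(A minimal sketch, not part of the conversation itself: the Python snippet below simply checks the two claims Richard points at on screen, assuming the standard logistic map x_{n+1} = r·x·(1−x). The parameter values 3.5 and 3.83 are my own choices of a period-doubling value and a value inside the well-known period-three window.)

```python
# Sketch only: the alternating series and the period-3 window of the
# logistic map x_{n+1} = r * x * (1 - x).

def logistic_orbit(r, x0=0.2, transient=1000, keep=12):
    """Iterate the map, discard the transient, and return the settled cycle."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

# (a) The alternating series 1/2 - 1/4 + 1/8 - 1/16 + ... sums to 1/3.
partial = sum((-1) ** (n + 1) / 2 ** n for n in range(1, 30))
print(partial)               # ~0.33333...

# (b) Period doubling vs. the period-three window.
print(logistic_orbit(3.5))   # repeats every 4 values: the doubling cascade
print(logistic_orbit(3.83))  # repeats every 3 values: the period-3 window
```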
[29:41] Iain McGilchrist: I wonder if the question of zero, which the Greeks didn't have, but was introduced in probably the 6th, 7th century from India, is not so much a third entity, so to speak, as what happens when you allow opposites to do exactly what I was suggesting we shouldn't do, which is to collapse. For certain mathematical reasoning, we want them to do that. But what I'm suggesting is that in the physical world, the things being asymmetrical ultimately — they may for certain purposes within very confined systems appear to cancel out, but ultimately they don't. I don't know if that's something you would agree with Richard or not.
[30:30] Richard Watson: But with the canceling out, at the gross scale it looks like they disappeared: I've got nothing now. But there's another sense in which you should look at all of that spiraling depth there is underneath in order to create it. It's gone this way, it's gone that way, it's gone this way. It takes lots and lots of depth to actually cancel them out and then stay on the top level. It's almost like it went **** in a puff of smoke and I didn't have anything at all.
[31:01] Iain McGilchrist: Yeah.
[31:17] Michael Levin: Yeah. Shall we talk about the machine business?
[31:24] Iain McGilchrist: Sure. Do either of you know the work of Robert Rosen, who I think is very interesting? He's probably written about this more clearly than anyone: that a living system has final causes rather than just efficient causes. In fact, a living system is not the result of a single efficient cause, whereas a machine is a closed system; it's open in the sense that it's open to an efficient cause, but it can all be accounted for by that efficient cause within the protected space of the machine. Whereas in a living being, even if you were able to start from a known standpoint, you'd soon run into problems of complexity where re-entrant loops occur, where decisions that cannot be predicted are made within the system. I'm cutting everything very fine here, because I talk quite a bit about this, particularly in chapter 12. And in chapter 27 I argue that it is almost impossible to talk intelligently about life without talking about the possibility of purpose, teleology, which Darwin was extremely keen on. Darwin said, "thank goodness," and Darwin's bulldog, Huxley, said that the great thing Darwin had done was to restore teleology, which, as J.B.S. Haldane said, is like the scientist's mistress: he can't live without her, but isn't willing to be seen with her in public. I think this whole thing of teleology is so important. Whenever you talk to scientists one-to-one and off the record, they say, well, of course there are purposes and purposeful behaviour; all of what we watch is directional. And yet, as if they belonged to a Soviet state, there are rules, things you can say and things you can't say, and so they deny its existence. That's a good provocative place to begin thinking about the differences between these systems. I outline this in chapter 12, which is called "The Science of Life: A Study in Left Hemisphere Capture," because I believe that while physics gave up on this rather simple mechanical model a very long time ago, at least 100 years ago, biology has until very recently clung to a purely mechanical model. And I may be wrong, Michael, but I see you as one of the people who has been able to say, yes, there's more to it than this.
[34:30] Michael Levin: I think we could, and probably should at some point, talk about teleology in machines and cybernetics. I 100% agree: I think teleology is absolutely key to all of this. I like Robert's work a lot. What I don't like about that sharp distinction between living things and machines is that we know now, and we didn't when Robert was writing this stuff, that we can make all the transitional forms. Then we end up in this weird place where we have to try to come up with criteria. We can now mix them at basically every level. So you've got molecular, cellular, tissue, organ, swarm: all these levels. At every single level, we can introduce any percentage you want, from zero to fairly high numbers, of something that you would call a machine, something that was designed by humans, that may be quite complex, that maybe has unpredictable behavior. It's easy to make machines with very difficult-to-predict behaviors, but nevertheless they're designed; they're completely different from the natural in many ways.
[35:49] Iain McGilchrist: Yes.
[35:50] Michael Levin: And then I don't know what we do after that, because now we have humans that are 95% human but have electronics by which they run a wheelchair or prosthetic limbs or other devices. And we also have these Roomba vacuum cleaners, and pretty soon they'll have some human neural cells living on board to help them do various things. Those cases are easy to deal with, because you can say, you're just a human with some peripherals, and you're just a vacuum cleaner with some human brain cells, fine. But we can make cyborgs and hybrids; we're going to be seeing all this stuff. I think we get into a really difficult place if we try to draw these hard boundaries.
[36:44] Iain McGilchrist: Go on. No, please finish.
[36:48] Michael Levin: That's just it.
[36:51] Iain McGilchrist: There's an enormous amount one could say about all of that, but that would be another day, perhaps. I don't think there is a hard and fast difference between the living and the non-living. In fact, Robert Rosen said as much. He said that animacy is the norm, and inanimacy is an asymptotic, never-achieved state in which animacy is reduced to a minimum. The whole cosmos is animate, and I hold this to be the case. Some people would say inanimate matter doesn't have consciousness, but we think that living things do have consciousness, and now we think that very primitive living things probably have consciousness, so it's gone a long way down, if not to the very bottom of life. In an earlier chapter, 25 I think, I argue that consciousness is an ontological primitive in the cosmos, and that matter is a state, a phase, of consciousness in which it exhibits greater stickiness, permanence, and resistance than consciousness which has no matter in its form. Things persist longer and offer the possibility of resistance, and I believe nothing creative can happen without an element of resistance. This thing that matter offers is incredibly important. My view is that what we are used to calling inanimate matter does many of the things that living things do, but very slowly and to a very limited degree. What living things bring is not consciousness, because that's already there, but an increase in the capacity to respond and the speed of response. Living things respond to circumstances, to whatever is around them, perhaps a billionfold faster than inanimate matter could possibly do, and they respond to a vast range of elements that are there in the cosmos. They respond all the time to things, much faster and on a much bigger canvas than the inanimate. I see the difference as a matter of degree, though that doesn't mean there are long phases where you can't tell which is which. Even though animacy and inanimacy do seem very distinct, they share important qualities, and the difference is more one of degree.
[39:46] Richard Watson: I would like to offer a refinement on that. It's nice to be able to have a conversation like this, for somebody to be able to say things like that and for us to be nodding along. I'm on board.
[40:07] Iain McGilchrist: Yes, it is nice and often unusual in science.
[40:12] Richard Watson: We could try and pinpoint the continuous difference between living and non-living in terms of the physical scale of the capacity to respond, the temporal scale of the capacity to respond, the richness of the capacity to respond, things like that. There are reasons to suggest that there are some inanimate mechanisms that happen at very large scales and very quickly. A supernova explosion is a very big response and it happens very quickly, but it's still not living. There are other things that happen slowly. I don't think that going one way makes things more living and going the other makes things more inanimate. I think it's more about the connection between the scales. Is the large scale connected to what's going on at the small scale? Is the fast scale connected to what's going on at the slow scale? When those things are connected, they behave organically, they behave animately. When those things are not connected, then they behave like hardballs in space, whether they're big or small or fast or slow, they just behave like hardballs in space.
[41:45] Iain McGilchrist: May I respond to that, Mike? I like that very much. I should clarify that what I meant was not that inanimate things can never do something fast, like a volcano exploding, but that generally speaking, without a cataclysmic force, there is no fast response. I'm talking about a response to the whole set of circumstances. For example, there's a lump of rock in my garden, and I don't know what it is responding to, but whatever it's responding to, perhaps wind, water and so on, it's responding extremely slowly. But there is also a vole in the flower bed that is responding to everything a billion times faster, at least, in a way that inanimate stuff just doesn't seem to do: complex reactions to many facets, many different kinds of affordances. There are many affordances for living creatures, and they use them very rapidly, whereas lumps of matter tend not to. That's one thing I wanted to say. The other is that I think it's true, this thing about the bridging of the scales, but I was very interested in an observation made by Mike Abramovitz, the distinction he makes between architective structures and connective structures. By architective structures he means things that are relatively rigid and static. When they change, they make a complete change, and it is a cataclysm. Connective structures change by means of flowing change. I don't know which bits of my book you've had the chance to catch up with, and I'm extremely grateful that you have.
[44:22] Richard Watson: 15 and another section on induction and deduction.
[44:25] Iain McGilchrist: I think the difference between concatenation and flow is absolutely essential. Connective change is a kind of motion where things flow from one state into another, whereas architective states hold themselves until there's a break; they're rather like what Taleb calls fragile, and the connective ones are antifragile. They make adaptations all the time, so they don't have that cataclysmic need to change. He suggests that at the very lowest level, and particularly at the very highest level of magnitude, you only find connective states. The architective states are found more at the intermediate phase, where we happen to be leading our lives and have our experience. I've probably given a very bad account of that; I do a better job in the book.
[45:33] Richard Watson: Thank you. Whose terms were those, "architective" and "connective"?
[45:39] Iain McGilchrist: There's a now-retired physicist called Mike Abramovitz. He has a very interesting website and we've corresponded.
[45:54] Richard Watson: There's a sense in which the rock in your garden can respond to things quickly. If you kick it, it moves a little bit; you hit it with a hammer and it moves a little bit across the garden. In a sense, that's a response to the force that's acted on it. But the architective structure, if I can use that term, isn't changed. It's still the same rock; it's just over here instead of over there. It didn't really change its relationship to anything, and it didn't change its internal organizational, connective structure in response to being hit with a hammer. A minimal example of a physical system I've been thinking about recently that has some connection between scales is resonance. In a tuning fork or a piece of steel, when you hit it, it doesn't just move or just transfer the impulse. When you hit it and it rings, there is a really intimate connection between the macroscale geometry of the fork as a whole and the microscale elasticity of the molecules of steel holding it together. One needs to be an integer multiple of the other in order to get a resonant standing wave in that geometry. There's an intimate connection between the whole organizing the parts and the parts organizing the whole. And I don't think it's any coincidence that we say the ringing of a bell feels organic or has a lifelike structure to it.
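(A minimal sketch of the standing-wave condition Richard gestures at, not something from the conversation itself. It assumes the idealized longitudinal modes of a uniform free-free steel bar, f_n = n·v/(2L) with v = sqrt(E/ρ); the material constants are ballpark textbook values, and a real tuning fork actually rings in bending modes whose overtones are not simple integer multiples, so treat the numbers as illustrative only.)

```python
import math

# Assumed, illustrative constants for steel and an arbitrary 1 m bar.
E = 200e9      # Young's modulus, Pa (micro-scale stiffness of the material)
rho = 7850.0   # density, kg/m^3
L = 1.0        # length of the bar, m (macro-scale geometry)

v = math.sqrt(E / rho)        # wave speed fixed by the small-scale material
for n in range(1, 4):
    f_n = n * v / (2 * L)     # the bar must hold a whole number of half-waves
    print(f"mode {n}: {f_n:.0f} Hz")
```

The point of the toy calculation is just that the frequencies that can ring are fixed jointly by the whole (the length L) and by the parts (the stiffness and density): change either and the resonances move.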
[47:50] Iain McGilchrist: That's because of its resonance. This idea of resonance, of reverberative connectivity, is essential to the understanding of just about everything, but certainly living things. There are certain kinds of change where the constituents lose their qualities completely and become something else. An evil-smelling greenish-yellow gas mixes with sodium metal and it becomes table salt; they've lost what they had before. This is architective change, he says. But there are other things that mix with one another, add themselves to things, and work by accretive processes. They are able to create a bigger entity out of the smaller elements that come together. That's all one needs to take on board for that point. He observes that this catastrophic kind of change occurs above the scale of particles in physics but below the scale of the cosmos. In the cosmos and in the very subatomic regions, what you see is not cataclysms in which one thing is wiped out and changes its nature completely; things come together and create. They create something new, but it's not catastrophic for the constituent parts.
[49:51] Richard Watson: The music metaphor for me is like when you put two notes together and they're concordant. Both the notes are still there and they created something new. They created the interval.
[50:07] Iain McGilchrist: Absolutely.
Richard Watson: When you put two notes together, which are discordant, they just crash, and it seems to tear apart the two notes you had.
[50:18] Iain McGilchrist: Yes, that's a good metaphor. But what an extraordinary thing it is that by putting two completely meaningless, bland things together, a note and another note, you suddenly create an event and an experience. The more you add, you can do things that are completely unpredictable from the outside. You can only know them when you hear them.
[50:47] Richard Watson: Yes. You can't tell. If I give you 469 hertz and 468.9 hertz, you cannot tell the difference between them when they're played separately. But when I play them together, you can hear the beating as they go in and out of phase. There was something I'd like to return to: the possibility of the hybrids and cyborgs that Mike was talking about, and what we were just talking about, whether there are multiple scales involved in the dynamics of the system or the algorithm that's running. In computer science, we have this notion of an algorithm being substrate independent: you can implement the same algorithm in multiple different substrates, and it just doesn't matter, because the details of the substrate you implemented it on don't matter to the essentially symbolic and discrete computation that you're doing at the higher level.
[52:14] Iain McGilchrist: It's all syntax and no semantics.
[52:17] Richard Watson: By divorcing it of all the concrete details of the instances, you get something that you can apply to other instances. You get the generalization. But you lose the connection to the concrete instances, in the sense that the symbols you're manipulating don't have any meaning. If you treat them abstractly, then they are, as you said, pulled out of the context in which they had some meaning. That notion of the substrate independence of algorithms suggests that you could replace part of an organism with a machine, or part of a machine with an organism, and it wouldn't really matter, or you would create something that was in between. But I'm not so sure about that. I think that when we implement an algorithm on a machine in the conventional sense, we do so in a way which is as divorced as possible from the physical implementation, so that it doesn't matter what numbers I put into my sorting algorithm, it never makes my logic gates overheat. Whatever computation I do just doesn't interact with the physical implementation. But in organisms, it's not just that it does interact with the physical implementation; they are organisms because it interacts with the implementation. So you can get an organism to behave like an AND gate if you put it in the right circumstances, but when you make it do it over and over again, it says, **** this, and it crawls out of the dish and does something else. The interaction with the substrate of which it is made matters. And it matters not just because it's then an unreliable algorithm, imperfectly abstracted, not really a good way of implementing an AND gate. That is the thing that makes it organic: when you stress one level of function, the implementational level of function below it begins to show through. And that's really important for me in being able to get those responses that you were talking about. A quick response comes from reorganizing the function: okay, I won't do that function and I'll do this function. A lot of the higher-level and lower-level integrity, the higher-level and lower-level structure, is still there, but it's been reorganized. Whereas when you do that with a mechanical device and you push it beyond its limits, it doesn't show you the in-between states. And you need the in-between states, because it can't really be adaptive, it can't really learn, unless its insides show. And when its insides do show, you just say, **** , I broke it. It doesn't degrade gracefully, because you went straight from this really, really high-level symbolic stuff to the physics it was made out of, multiple levels below.
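(A toy contrast, my own illustration rather than anything described in the conversation or done in any lab: an abstracted AND gate behaves identically no matter how often you drive it, while a hypothetical "leaky" gate has a hidden implementation variable, here called fatigue, that shows through under repeated use. That is roughly the sense in which the substrate matters.)

```python
def abstract_and(a: bool, b: bool) -> bool:
    """Substrate-independent: the same answer every time, nothing underneath shows."""
    return a and b

class LeakyAndGate:
    """Hypothetical gate whose implementation level leaks into its behavior."""
    def __init__(self, tolerance: int = 5):
        self.fatigue = 0          # hidden lower-level state, not part of the 'algorithm'
        self.tolerance = tolerance

    def __call__(self, a: bool, b: bool) -> bool:
        self.fatigue += 1
        if self.fatigue > self.tolerance:
            # The insides show through: the gate stops playing along.
            return not (a and b)
        return a and b

print([abstract_and(True, True) for _ in range(8)])  # always True
gate = LeakyAndGate()
print([gate(True, True) for _ in range(8)])          # True five times, then the lower level shows
```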
[56:07] Iain McGilchrist: I don't know enough about the systems that you're working with, but I wonder how they could be adapted to the way a plant takes in 15 kinds of measures, synthesizes them, and makes a decision about whether or not it's time to bud or flower. I'm sure you could find algorithms that could make an artificial flower do something like this, but I don't know that it would be anything at all like the flower.
[56:46] Richard Watson: Yeah, it's like.
[56:47] Iain McGilchrist: Unfortunately, I'm not a cyberneticist, so yeah.
[56:51] Richard Watson: How the flower behaves under normal circumstances could probably be extracted into such an algorithm involving 15 variables and a bit of logic. But what's really interesting about the flower is what it does when you put it under circumstances that are not quite like that. When Mike does an experiment and cuts it in half when it was just a seedling, how many plants do you get? Now it's making two decisions. This one is deciding differently from that one. How did it make two? How can one thing make two decisions?
[57:31] Iain McGilchrist: I think this situation sounds different from the execution of an algorithm. Monica Gagliano, for example, has done experiments with pea shoots that are deprived of light. The light comes on in a Y-shaped structure over the bed in which they are, and it comes on in one or the other arm of the Y entirely at random. You can't predict each time which it will be, but some time before the light comes on, a puff of air is sent down the arm of the tube out of which the light is going to come. These pea shoots, starved of light, benefit from orienting themselves towards where the light is coming from. I believe that in only three days they learn to orient towards where the puffs of air are coming from, because that's where the light will come from. Intriguingly, the reverse case has also been done as a control, in which the puff of air comes down the arm from which the light will not come, and the plants, again within a few days, learn appropriately. That's not something they could conceivably have been programmed for, either by their own experience or by any past historical experience. This seems like intelligent behaviour.
[59:09] Richard Watson: I don't have any problem with that. Mike's not surprised, right?
[59:14] Michael Levin: No, I'm not surprised. What we were just saying, that you don't see what it can do until you stress it, is 100% right. One reason why people aren't into teleology is that what they observe most of the time is the default behavior of embryos, and they think it's a purely feed-forward emergent system. They say complexity and emergence, local rules can give rise to complexity, and look, it does this thing; you can't just label that as intelligence, that's what it has to do. That's true if all you do is observe the standard default behavior. But once you start putting barriers in its way and stressing it in various ways, then you get pulled out of this false sense of inevitability, and you see that it actually has, to some degree depending on what you're looking at, what James called intelligence: the ability to reach the same goal by different means. You start to see the incredible ingenuity that these things can muster in different problem spaces. I have to run.