Watch Episode Here
Listen to Episode Here
Show Notes
A 1-hour conversation about our algorithms data (https://thoughtforms.life/what-do-algorithms-want-a-new-paper-on-the-emergence-of-surprising-behavior-in-the-most-unexpected-places/) and how it relates to behavioral policies in minimal and not-so-minimal biological agents.
CHAPTERS:
(00:01) Sorting algorithms, delayed gratification
(05:59) Emergent algotype kin clustering
(13:27) Explicit and implicit cognition
(25:48) Surprise minimization and prejudice
(34:25) Novelty, swarms, and cognition
(43:37) Basal cognition, cellular hierarchies
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:01] Michael Levin: I wanted to get your thoughts on a specific angle to some of the computational stuff that we've been doing lately.
[00:08] Mark Solms: I knew that you would have something on your mind, but I was busy racking my brains trying to think, what is it that we're going to be talking about? I thought, well, it doesn't matter with you, it's always interesting. So I'll just see what it is and then we'll take it from there. So what is it that's on your mind?
[00:31] Michael Levin: Here's what I have. We have a pre-print; this is Adam Goldstein and Taining Zhang, a student of mine, on the following. What I wanted to do was to address this issue of non-obvious cognitive properties in very simple systems. The thing that I chose specifically was sorting algorithms. These are very simple algorithms that computer scientists use. All CS students study these. I'll just give you a very quick rundown of what we had found. They are a good model system because they're fully deterministic. They're completely transparent. That's six lines of code or so. There's nowhere to hide. In biology, there's always more mechanism that could be involved. Here you see all the steps of the algorithm. Fully deterministic, very, very simple. Everybody thinks they know what they're capable of. I thought let's see if we can find any surprises there, and we made a couple of tweaks to be able to study them from this basal cognition perspective. I think there may be a psychoanalytic perspective here that I want to probe with your opinions on. The first thing we did is we visualized. The way they work is you start with a string of integers, let's say 100 integers. They're jumbled up in random order. The algorithms eventually sort them into order so that it's monotonic from 1 to 100. You can visualize the progression of any given string in that space. It starts off somewhere where it's quite random. They all move to a spot where everybody's in the same order. What you can do is plot this notion of sortedness — how well sorted a given string is — and you see that over time it increases. That's the journey through space that this thing is taking. That's the space it lives in. We can ask a couple of interesting questions. The first question that we asked was in James's sense of delayed gratification: if we put a barrier in its place, is it capable of moving backwards away from its goal in order to reap gains later on? The business with the two magnets separated by a piece of wood versus Romeo and Juliet separated. Keep in mind that in these algorithms, there is nothing like that. There is nothing explicitly in the algorithm about if you come upon a barrier then you should temporarily back off. We kept the algorithms exactly as they are. The way we do a barrier is we simulate a broken cell. Every cell has a certain number and the algorithm says move the four and the five. What happens if one of them is broken? They won't move. If they won't move, this breaks an assumption in these algorithms. Usually you assume that the hardware is reliable. If the algorithm says swap the numbers, they swap. The algorithm doesn't have any logic built in to check whether the swap occurred; it doesn't have any of that. It just goes on its merry way, assuming that what it said actually took place. Biology isn't like that. The biological medium is notoriously unreliable and noisy. If we put a barrier while sorting, you need to move a number to improve things, but it won't move. It's broken. There are two ways it could be broken. Either it could refuse to be moved at all, or it could refuse to request to be moved. We'll deal with that momentarily. Let's say it's broken. What we found is that these algorithms, despite not having any specific code for it, when they come to a barrier like this, they back off: the sortedness drops, the string becomes less sorted, they do other stuff, they move other numbers around, and then eventually they go around and things improve. 
They have a capacity for this kind of delayed gratification. They can back away from their goal in order to reap gains later on. They have a little horizon; they're willing and able to do that. That was the first thing we found. That's not the thing I want to focus on, but I thought I would say it in case you had anything interesting to say about that.
[05:02] Mark Solms: I want to be sure I'm following correctly, Mike. So this algorithm, is it a stepwise procedure or is it aiming for the end goal, the totally sorted sequence, or is it just following the steps that are coded?
[05:29] Michael Levin: It's the steps. It is not an explicit goal-directed loop at all. It has certain criteria. It says look at some numbers and then, based on that, shuffle the numbers. If you execute that long enough, you will eventually end up with a sorted string. But there is no logic in there that asks, how are we doing? Did it work? Is it sorted? There isn't any of that. There's no self-monitoring. There's nothing like that.
[05:57] Mark Solms: Yeah, okay, thank you.
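To make the setup concrete, here is a minimal sketch in Python: a bubble sort written as blind local steps with no "is it sorted yet?" check, a simple sortedness measure (here, the fraction of adjacent pairs already in order; the preprint may define sortedness differently), and a "broken" cell that silently refuses to move. This is an illustrative reconstruction of the idea as described in the conversation, not the code from the paper.

```python
import random

def sortedness(arr):
    """One possible sortedness measure: fraction of adjacent pairs already in increasing order."""
    return sum(a <= b for a, b in zip(arr, arr[1:])) / (len(arr) - 1)

def bubble_sort_with_frozen(values, frozen=frozenset(), passes=150):
    """Bubble sort as blind local steps: compare neighbours, swap if out of order.
    There is no check of overall progress, and any swap involving a 'broken' (frozen)
    value is silently skipped, mimicking unreliable hardware that ignores the request."""
    arr = list(values)
    trace = [sortedness(arr)]
    for _ in range(passes):
        for i in range(len(arr) - 1):
            movable = arr[i] not in frozen and arr[i + 1] not in frozen
            if movable and arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
        trace.append(sortedness(arr))
    return arr, trace

string = random.sample(range(1, 101), 100)   # a jumbled string of 100 integers
final, trace = bubble_sort_with_frozen(string, frozen={50})
print(f"sortedness: start {trace[0]:.2f} -> end {trace[-1]:.2f}")
```

Running this, sortedness typically climbs and then stalls short of 1.0, since nothing can cross the broken value; the temporary backing off that Levin describes (sortedness dropping before recovering) comes from the paper's own versions of the algorithms, which differ in detail from this toy, for instance in being run cell by cell rather than from a central controller.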
[05:59] Michael Levin: The second thing we did: the usual way these algorithms are run is that there's a centralized controller that can see the whole string, is following the algorithm, and is moving numbers around. We got rid of that and said each cell is following the algorithm. The four wants a five on its right and a three on its left. Each cell has preferences about what its neighbors are, but there is no single central controller that sees the whole thing. If you do that, it still works. You can make it bottom up and distributed in that way. That also allows you to make a chimeric string. You can make a string, an array of numbers, composed of numbers that are actually following different algorithms. We looked at three sorting algorithms: selection sort, insertion sort, and bubble sort. Normally you have one algorithm and you're following that algorithm to move the numbers around. Now that the numbers are moving themselves around, some of them are going to be following this policy, some following that policy. If you do that, it also still works, meaning they don't all have to follow the same policy. Imagine you have an array of 100 cells, and each of those cells has a number. That's what you're sorting. You're sorting on that number, and they're starting out randomly. We assigned one of two algorithms to each of those cells randomly. Adam came up with a name for it: he calls it an algotype, the way you have a genotype and a phenotype. Every cell has an algotype. The algotype is either bubble sort or insertion sort. The algotypes are also selected randomly. We ask a simple question. During the lifetime of this array as it's doing its sorting, what's the internal structure like? That is, what is the probability that any given cell looks to its neighbor and says, "oh, he's just like me, same algotype as me"? What's the probability of that? In the beginning, that probability is 50% because we assign algotypes to numbers randomly. It's a 50-50 whether the guy next to you is the same as you. At the end it's also random because we're sorting on the digit values, and the assignment of digit values to algotypes is random. The question is, what does it do in the middle? What we find is that it starts out at 50%, it goes up, and then it comes back down.
[09:43] Michael Levin: In other words, what happens during the sorting is that cells tend to cluster next to ones like them. They tend to cluster next to their own kin, basically. Now, keep in mind, there is no code in the algorithm for any of this: what algotype am I? What algotype is my neighbor? I want to go sit next to this one, I don't like sitting next to that one. There's nothing like that in there. If you examine the physics of the system determined by the algorithm itself, you don't see any of that. The only way you see this is if we actually look, knowing what their algotypes are, and ask the question; then we are uncovering a hidden cause of their behavior that is not apparent to them at all. We can see that you're clustering in a particular way. There's a behavioral pattern that we're identifying here, but it isn't explicitly coded; if anyone examined the code, there's nothing in the code that says that. There are a few interesting things here that I really like. One is that it's an extremely minimal metaphor for this existential condition of living things where eventually in this universe, the inexorable force of the algorithm is going to pull apart all of those clustered pieces, because in the end, everybody's got to get sorted according to their number. Eventually the physics grinds you down and undoes whatever organization you had at the beginning. But during the process you have some amount of time where, while following the algorithm, there's an interesting behavioral tendency that you're able to satisfy for some period during your lifetime. It's not against the algorithm you're following; it's consistent with it but not specified in the algorithm. For some time it's actually at cross purposes because your behavior is keeping things together that eventually will be pulled apart. We also did one more experiment. We asked how strong this tendency to keep together is. One way we can ask that is by allowing repeat digits. If I allow repeat digits—let's say we have 100 cells total and every digit can occur 10 times, so there are only 10 distinct digits: 10 fives, 10 sixes, and so on—then you can satisfy both: in a run of 10 fives, five could be one algotype and five could be the other. That makes it possible to retain clustering while still being consistent with the algorithm. If you do that, they cluster even more, which shows these things are somewhat at cross purposes. If you remove the pressure and allow them to be clustered, the clustering goes up. I'm interested in your thoughts on, in general, this notion of a system — this is an extremely simple one, but you and I are both interested in minimal agents. These kinds of drives and behaviors aren't explicit in the algorithm; they are emergent and not obvious, but they can be discovered if we study how the system reacts in the problem space it's working in, and if we don't assume, just because it's deterministic, that it's a dumb thing that only does what the algorithm says. It made me feel like we were uncovering hidden motivations for the patterns in its behavior; it's not complex enough to realize it, but we can see them. What do you think about all that?
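For readers who want to see the shape of that measurement, here is a toy sketch: each cell carries a value and a randomly assigned algotype label, the cells are sorted on value by blind local swaps, and at every pass we record the probability that a cell's right-hand neighbour shares its algotype (0.5 expected at the start and at the end). The local update rule here is a single uniform stand-in, not the paper's per-cell implementations of bubble and insertion sort.

```python
import random
from dataclasses import dataclass

@dataclass
class Cell:
    value: int
    algotype: str   # "bubble" or "insertion", assigned at random

def kin_adjacency(cells):
    """Probability that a cell's right-hand neighbour has the same algotype."""
    same = sum(a.algotype == b.algotype for a, b in zip(cells, cells[1:]))
    return same / (len(cells) - 1)

def chimeric_sort(cells, passes=150):
    """Sort cells by value with blind local swaps, logging kin adjacency each pass."""
    cells = list(cells)
    trace = [kin_adjacency(cells)]
    for _ in range(passes):
        for i in range(len(cells) - 1):
            if cells[i].value > cells[i + 1].value:
                cells[i], cells[i + 1] = cells[i + 1], cells[i]
        trace.append(kin_adjacency(cells))
    return cells, trace

array = [Cell(v, random.choice(["bubble", "insertion"]))
         for v in random.sample(range(1, 101), 100)]
_, trace = chimeric_sort(array)
print(f"kin adjacency: start {trace[0]:.2f}, peak {max(trace):.2f}, end {trace[-1]:.2f}")
```

Because every cell here follows the same swap rule, the statistic should just hover around 0.5 throughout; the mid-sort rise in kin clustering that Levin reports depends on the cells actually executing different algorithms, so reproducing it would mean implementing the cell-view bubble and insertion sorts from the preprint rather than this uniform rule.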
[13:27] Mark Solms: I don't know how you come up with the problems that you spend your time on. You have the most extraordinarily creative mind. This is Mike Levin. Nobody else is going to have this conversation with you ever. Are we getting some sort of interference?
[14:00] Michael Levin: Somebody's texting me and I'm trying to shut it off.
[14:05] Mark Solms: I was thinking it through and trying to understand why it would do what it's doing. I don't know the algorithms, but it was making sense to me. Why these emergent behaviours — there are ways in which I can understand this would happen. For example, your own algotype is probably more predictable. If it's functioning by the same principle that you are, you will prefer to operate in a space that is understandable, that behaves in the way that you expect and there's less work required from you. How that comes about made putative sense to me. When you asked the question, what you were describing was emergent properties of the system as you later labelled it yourself. When you say you're linking it to psychoanalysis, it's broader than psychoanalysis. It's a system that has explicit and implicit principles. The model I would use is a general multiple memory systems model where there are non-declarative and declarative modules. In that, it works in the reverse of what you've described in the sense that what is non-declarative is the more bottom-up. It's the more predetermined and what is declarative, what is explicit is the more emergent — what couldn't be predicted and needs to be dealt with on the fly is what is explicit. So it's interesting. The fact that you drew that analogy, which I wouldn't have thought of — this is Mike Levin. You have to remember that you're Mark Solms and that's Mike Levin, and now you've got to start thinking differently. Since you made that connection, which I wouldn't have made, I find it interesting that for me what I would call the emergent part is the part that's explicit. Explicit in the cognitive sense as opposed to explicit in the coding sense. In other words, this was explicitly coded, but this behavior emerges. What I think is explicitly coded in our multiple memory systems is what is non-declarative. What is implicit and emergent is what is declarative, because that's where the uncertainty is. That's where the unpredictability is, which links with my way of thinking about consciousness, which is that it's felt uncertainty. Now, why do I think that the non-declarative systems are equivalent to your explicit coding? I think it's — I'm trying to unpack my own intuition.
[17:59] Mark Solms: Why do I have that intuition? First of all, I can link it to what I've just said, which is a kind of functionalist account: what is implicit is what is more certain in the memory sense of the word. In other words, use the word "non-declarative" so it's clearer that I'm talking about memory systems. The non-declarative systems are non-declarative because one has a high confidence in those beliefs; they're not subject to review. This is always the same. In terms of a hierarchical predictive model, you're talking about the layers in the hierarchy, which are simpler and more generalizable. I don't normally picture it as a pyramid. I picture it as a concentric onion. I think of the core of the onion as the deepest layers of the predictive model, and that's where there's the greatest confidence. The simplest, most generalizable predictions are at the core. As you move to the periphery, you're dealing with eventualities on the fly. One of the reasons why I think of the non-declarative systems as the explicit in the sense of pre-programmed is because there's more predetermination, higher confidence, more generalizability, and a simpler algorithm is being followed down there. Of course, the converse applies to the periphery. Another reason I think that way is because the non-declarative memory systems function most like reflexes and instincts, which are the pre-programmed predictions. These are the predictions that are determined by natural selection, and as we automatize our acquired predictions, they function more like reflexes and instincts. They are more automatized, more unconscious, more predictable, simpler, and more generalizable. The periphery of the predictive model is specialized for complexity and context sensitivity. That's where the uncertainty is. It's also what is more required. It is what is more unique to me here and now, as opposed to what is generally true of all versions of my type, things that always function that way. The meaning of the words "explicit" and "implicit" in memory-systems terms is the opposite of the meaning of the terms in the way that we are using them in your emergent properties of your system. Those are my initial thoughts, but it links to an issue that is extremely interesting and cuts to the heart of the whole problem of consciousness: what do we mean by explicit?
[21:53] Mark Solms: Why does anything become explicit in the sense that the system now has to monitor its own states? As you know, I'm working on an artificial consciousness project myself. In the computational architecture that we are designing and adjusting as we go along, this is one of the problems we're busy tackling right now: we can make our agent function perfectly well without there being any obvious distinction between what is explicit and what is implicit. We have to introduce that to try and get what we're calling "biological plausibility," but it actually has two different meanings. One is to get it to function like I expect it has to, and that's not biological plausibility, it's Mark Solms's plausibility. The other is to make it function more like a vertebrate brain. What we know about mammalian brains is what I'm using as my starting point — in other words, a brain with cortex. So we're having to introduce a constraint on the way that it works in order to make this distinction between what is explicit and what is implicit. I realized that as we were doing that, it's due to a concrete way of thinking. There can be this functional distinction; it doesn't have to actually coincide with its architecture. In other words, you mustn't map the architecture onto your concepts. The concepts of what functions it's performing — these emergent properties are not in any of the code, they're not in any of the modules in your flow diagram, but nevertheless, they emerge, they exist. These distinctions exist without being coded for and without coinciding with the brute structure of the algorithm or the structure of the program. But that issue of the meaning of "explicit" in the consciousness sense of the word is the issue. Why does anything have to become explicit? What does explicitness do? I think it has to do with when a system is uncertain about its own states and therefore has to focus on monitoring its own states. We come back to the problem I started with in my response to you, which is that the terms end up having diametrically opposite meanings: the meaning of "explicit" and "implicit" in the phenomenological sense as opposed to the functional-architecture sense of what is explicitly coded and what emerges. Those are my thoughts about that problem. Does that resonate with you or lead you to any interesting places?
[25:48] Michael Levin: Super interesting. I hadn't realized about the inversion. I think that's incredibly important. And I wonder about tracking down where that flip happens. We can look at the kinds of systems that we're working in as one end of the spectrum, and then over here is the mammalian brain you're trying to model, and one wonders whether that inversion takes place somewhere along that spectrum. And whether that's a phase transition or a gradual flip — I don't know yet because I'm just taking this on now. But the fact that it's backwards, I wonder if that is actually an important criterion for some of the things that we are interested in, to distinguish truly simple models that don't have whatever properties from systems where you're into something that's a true agent. Maybe that flip is important to that. And so that's the first thing. So I need to think harder about this inverted mapping. Then the other thing is this question, which we wrestle with all the time in our models: what do you bake in to make it do whatever it's supposed to do? In our example, what keeps striking me is the thing that we've observed, and who knows what else; I'm sure there are other things to observe. We're just not smart enough to test for them yet. This clustering was just something we thought of to do, but I'm sure there are other things to try. We didn't have to do anything for that to emerge; there's nothing in the algorithm that remotely is about optimizing that. To some extent, the apparent tendency of the system to optimize a particular behavior is completely emergent. We didn't have to do anything. And so one wonders, as we think about how to construct things like that: maybe a lot of it we don't construct. Maybe it's — I don't know where it comes from. But maybe it really emerges under extremely minimal conditions. And that makes me think that if something as dumb as a six-line sorting algorithm can have these unexpected behavioral tendencies that have to be discovered by experiment and cannot be predicted in advance, then the more complicated things we make, who knows what they're doing. That's part of why I started this: because I wanted to have an extremely simple system. And even there, the thing you started with is amazing, because it took me probably a couple of months to realize what you said at the beginning, which is that what it looks like they're doing is minimizing uncertainty about their neighbors. Having a neighbor who's like you means less uncertainty. I eventually got there too, but it took me a couple of months to realize it. Carl and I talked about it a few weeks ago. I think that's a perfectly reasonable explanation: there is nothing in the algorithm about that. There's nothing about making predictions or testing. We're doing some experiments right now to see how many interactions they have before they cluster. Do they have to spend some time with each other before they realize that this one is just like me? All of that is completely emergent. If it wasn't already fundamental enough, this surprise minimization principle seems now even more just baked in. You don't need to do much of anything. You get that just from extremely minimal systems.
[29:40] Mark Solms: I don't know if there's a literature on this. What you just said made me realize that this principle of preferring to be in the neighborhood of agents that function like me because they are more predictable — they reduce my uncertainty — makes me realize, with a sort of chill down my spine, that this is part of prejudice: why people want to be with their own kind, because they're more predictable. What's going on, what to do, what to expect. Being with others in the social sense makes you uncertain. It increases your free energy and therefore you don't want to be there. It sends a chill down my spine because it makes me realize how little prospect one has of being able to socially engineer this out of us. The only way to do it is to make sure we all have a common enemy. That's a side thought. I don't know if anybody has said that. Maybe it's generally recognized to be the case, but for me, I've only discovered that in my own mind right now in this conversation.
[31:16] Michael Levin: It's amazing that something like that is present; it didn't take much. It didn't take any evolution at all, in fact, to come upon this. But this idea cropped up about a month ago. Some colleagues and I were discussing AI and diverse intelligence and we had a lot of debates about this sort of spectrum. You can imagine the spectrum where on one end it's things like objectophilia, where you completely mistake the amount of agency in whatever system you're dealing with and you fall in love with a bridge or the Eiffel Tower or something, which people do, with systems that are not at the level that you think they are. That's one type of error, but the other type of error is exactly what you just said: basically being too strict and saying, "Well, I don't believe you have the origin story or composition that I do, and therefore I don't think you have a real mind. I think you're faking it," as they say about certain AIs. The far end of that is something like: only love your own kind. That's what you were just saying. We spend a lot of time talking about how we could arrive at principled strategies for picking criteria along that spectrum, because you don't want to be at either one of these poles. You want to match the impedance, match the kind of relationship you have with the system. I firmly believe that even in the absence of aliens and whatnot, we're going to be surrounded by really radically different systems. We're going to have to work on getting this right. From that perspective, I completely agree with you that this is an important thing to get right. And I think our fundamental principle is surprise minimization. But I wonder if one thing you can do to alter how this works is to reframe what it is that you're measuring. Let's assume for a minute that yes, you don't want surprises; you want things to be predictable. But now the question is: predictable about what? If we can get away from superficial differences that drive social distress and focus on fundamental things, then maybe this can actually make sense. What I would like to be surrounded by is what, humans with the same DNA? That's not what I'm measuring. Maybe we measure something else. Maybe we measure kindness, or pro-adaptive, pro-social behaviors, or somebody who has the same kind of existential concerns that I do. Shift the measurables in a way that makes it better. That's where I've been trying to think about how to create principled strategies.
[34:25] Mark Solms: It is very much a concern of mine as I'm working on this artificial consciousness. The prejudice is: it's not like me, therefore it can't have this valuable thing that I've got. I'm very exercised by it. In my field, I have it even with colleagues who won't acknowledge consciousness in creatures which are too far from us in terms of their phenotype, let alone artificial agents. The prejudice is just enormous. It's interesting to link that with what we've just been talking about. I hadn't thought of that. The solution that we, mammals at least, have come to, and that functions in the other direction, is an epistemophilic inclination: a sense that it's important to approach what is uncertain, a positive attraction to gaining more knowledge and therefore to engaging with what is novel, with what is not understood; that is what becomes interesting. So the mesocortical, mesolimbic dopamine system functions like that in the mammalian brain. Its homeostatic set point is to be in a state of uncertainty. It's in homeostatic deviation if it's bored; if everything's predictable, then that need is not being met, the need to engage with novelty. It's what makes us forage. It's what makes us explore. It's what makes people like you and me exist at all, that we're constantly attracted to what is not known. You can see over longer time scales the biological advantage of that, because to engage with uncertainty is in the long term to reduce uncertainty. So the world's not going to bite you in the **** in future, because you engaged with it now, proactively. An entirely different line of thought that was triggered in me as I was listening to what you were saying is that really what you're talking about is a sort of swarm intelligence, isn't it? Where each creature has a very limited range of behaviours. Although each one of them is doing this very limited thing, what emerges is very complex behaviour by the swarm as a whole. So there's no master programme in each member of the swarm, but the swarm as a whole produces this much more intricate outcome. That maps onto what you were talking about in terms of your algorithm and the interesting emergent behaviours that you described to me. I realized that this links actually to one of your basic concerns, which is that we multicellular creatures are all of us actually swarms of cells. We're not swarms of equal cells. We have nerve cells and bone cells and so on, which are specialized; in some respects they are all just animal cells, but in another sense they're not. So you do end up having master controllers in the nerve cells, for example, in relation to other cells.
[39:00] Mark Solms: We're not just a swarm; there are interesting ways in which you can understand the cooperation between all of our individual cells on the analogy of a swarm, but there are other ways in which that analogy breaks down. The other thought sparked in me by what you were just saying is: at what point is one justified in speaking of cognition? What does the term cognition refer to really? If you have a very simple algorithm that always does exactly the same thing, where none of this kind of emergent behaviour you've just described happens, is that cognition of a very basal kind? Or does one only have a reason to speak of cognition once emergent behaviours are novel problems, or at least novel phenomena — novel computations which are not written into the code, computations which are implicit in the sense that we were using, which are emergent in the sense that we were using those two terms? Does that qualify as cognition? If it does, bearing in mind my own beliefs about what constitutes consciousness: functionally, what is probably definitional of a conscious agent is an agent which can solve novel problems. It has to have some way of feeling its way through the problem, in other words of monitoring: is this going well or is this going badly? Because it's not written into my DNA. This is something that I have no predictions for. I now have to predict the present. How do you do that? You have to palpate how well or badly this is going, and then you have to change your mind. Putting it in very simplified terms, that's my criterion for what is plausibly potentially a conscious agent. Now I realise there is no distinction between what qualifies as cognition and what qualifies as consciousness. The only way I could find a justification for continuing to use those words differently is that there are operations which become automatized, which were once not there. Something rises to the level of explicit in the psychological sense of the word — as of feeling your way through the problem; then you learn from the experience and you lay down new predictions which can become very confident and automatized. They're the products of consciousness, but they are not in the present functioning consciously. Those are cognitive operations because they're the products of learning. But the actual work of learning, the actual learning from experience, with the emphasis on the word experience, that part is conscious cognition. It's working memory as opposed to long-term memory: I lay down a prediction on the basis of the predictive work that I did. What would your definition of cognition be in the sense that you're looking for basal cognition? What constitutes cognition as opposed to rote following a program?
[43:37] Michael Levin: I agree with you. I think it's fundamentally about problem solving and feeling your way through a problem, specifically in vulnerable creatures. If you are a composite being that is in danger of falling apart into your component pieces, it becomes a whole other business. And I think that's what we mean by cognition. I'm willing to take it very basally, and I don't think it has to be the complex metacognitive stuff that we have. But what I'm realizing now is that I'm not sure if there are any. My initial intuition was like yours, that there would be rote kinds of things, and then there would be these more complex emergent things. I'm not sure there are any of the first part anymore. The reason I say that is because if you were to look at these algorithms—how much more rote can you get? They are fully deterministic. There's not even a pseudo-random number call there. They're completely transparent. They're very simple. There is no more hidden mechanism of biology to look at. Same thing with our gene regulatory networks that we showed could have six different kinds of learning, just a couple of differential equations. So if from something as dumb and simple as this we can already see the ability to solve new problems, meaning go around the barriers, and we can already see that they are optimizing things that are nowhere explicitly provided a mechanism for, such as minimizing distinction from your neighbor, I don't know that there are any that don't have that. We could try to come up with something, but even if we were to come up with something extremely dumb, I don't know that our failure to find these kinds of properties is anything but a lack of imagination on our part. We could come up with some sort of bit adder that just adds bits and we'd say, surely this thing doesn't do anything else other than this. Having seen this in sorting algorithms, that distance has gotten really thin. I'm starting to wonder if our inability to observe it in simple things is just lack of imagination and our evolutionary training on very complicated things where we're not good at seeing. So I don't know. It leads me to a more panpsychist view where these properties look like they're baked in at the very beginning, maybe wherever mathematics comes from, I don't know, but baked in very early on. The other thing I feel strongly about is that we have to do experiments. We cannot just observationally look at something and say this thing is dumb; it's not going to do anything. I think we're not good at that at all. I don't think it's possible anyway, but I think we're just bad at it. We need to do experiments. We need to do barriers. We need to try various kinds of training techniques. We need to see what this thing is doing when we give it a little leeway. What does it look like from its perspective? You also said something very interesting about—you can treat this thing as a distributed agent from the cell's point of view. From the cell's point of view: who are my neighbors? Someone — it might've been Carl or Adam — was saying the other day when we were talking about this: look at it from the cluster's point of view, because in forming these clusters, maybe we do have a temporary higher level of organization. It's dynamic, meaning pieces come and go, but so are we. We're a molecular Ship of Theseus and the cellular one, right? So we come and go too. Maybe there's another story to tell from the perspective of each cluster as to what the world looks like. 
That's where I've been recently: thinking about how to see the world from the perspective of these weird agents, and what does it look like? What is it like to be... I don't think it's a crazy question to ask what's it like to be a chunk of data being sorted. I don't know. What's it like to be the sorting algorithm? What's it like? I'm now open to that; it gets back to your consciousness question. There may be valuable inner perspectives in all of these things. I don't know.
[48:25] Mark Solms: Since our very first conversations, I've told you that you've made me much more radical than I was, and I was already in trouble with my colleagues for reducing everything to what the vertebrate brainstem is capable of. But you've persuaded me that it goes way lower than that in terms of what is plausibly a cognitive and plausibly a conscious system. I end up finding that there can't be a sharp dividing line. There has to be gradation. One has to speak of proto-consciousness or proto-cognition, and that has to go all the way down, so I'm with you there. It's not entirely the same as panpsychism, but it's difficult to say in which way it's not panpsychism. But now, in trying not to be a panpsychist, let me take it a step further. It goes back to what we were saying a few minutes ago in this conversation, when I was saying that in one respect composite creatures are just a swarm of cells. And each cell is just doing its thing. From the cell's point of view: this is what I do. It's what I always do. There's no cognition going on here. I always do exactly the same thing. Then you could say, from the cluster's point of view, where the emergent behaviours arise, or from the swarm's point of view, which is what Adam or Carl had said: what is it like from the point of view of being the swarm? Or what is it like from the point of view of being the cluster? That is one way of going, but another way is the point I made when I said there are ways in which we are not a swarm. We do have master cells and slave cells, and they're not on all fours. They're not all doing the same thing. Once you start to have cells within this cluster or swarm that are monitoring at a higher level than the soldiers—I'm thinking of ants—once you have generals and soldiers, you can more justifiably speak of the mental process as being at the level of the organizing, overseeing, overarching cells. Not every cell then has equal dignity in terms of its cognitiveness.
[51:47] Michael Levin: That gives me an interesting idea that we should try experimentally in our system. We can make some of the cells more powerful than others. We can let them see further or we can let them push harder in terms of where they want to go and see what effect that has. Right now, everybody's equal. I see your point. I think it's very interesting. If there are regionalizations like that, I wonder if we will see substructure. I'm visualizing a nervous system within the cluster. So you have a cluster and there's a pattern of the more powerful cells organizing a temporary, almost nervous system. I think we'll do that. I'll talk.
[52:37] Mark Solms: It makes my mind suddenly leap back to your worms. I remember that the epithelial cells had functionality that the intestinal cells didn't have. You could grow a new worm from some cells and you could not grow a new worm from other cells.
[53:12] Michael Levin: The worms have a population of adult stem cells. If you cut a piece that has even one of these stem cells in it, you can get a whole worm. There are regions of the animal that have no stem cells; the tip of the nose, for example. If you cut off that tip of the nose, it will not regenerate a worm. But I think it's a matter of the hardware and not the information. So these stem cells are the source of new cell types. They produce the stuff you need to make a proper worm. But I don't think they carry much of the information, because we've done another experiment, and this is not published yet, but we know what the answer is. When you take a little piece out of a two-headed worm and you irradiate it to kill all of the stem cells that are in it, and you shove just those somatic cells into the body of a one-headed worm, some percentage of the time what ends up happening is that the one-headed worm becomes two-headed. So that information did not require the stem cells. It did not require new cell types. The transplant, in fact, will all die. Those cells only live 30 days or so, but the information lives on. And so I think there's a real distinction here between what you need to know and what you need to actually build the body of the worm.