Watch Episode Here
Listen to Episode Here
Show Notes
This is a ~1 hour conversation on topics of consciousness, affect, evolution, and philosophy between Gunnar Babcock (https://cals.cornell.edu/gunnar-babcock), Daniel McShea (https://scholars.duke.edu/person/dmcshea), Mark Solms (https://scholar.google.com/citations?user=vD4p8rQAAAAJ&hl=en), and me.
CHAPTERS:
(00:00) What Counts As Memory
(09:16) Consciousness, Affect, Priorities
(19:26) Cellular Creative Problem-Solving
(30:36) Needs Versus Cognition
(37:15) Novel Goals, Pattern Space
(45:07) Affect, Needs, Fitness
(55:50) Patterns, Minds, Reality
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Michael Levin: Mark wanted to talk about consciousness and using it to solve problems that matter to the agent. We were going to talk about finding solutions, but also choosing problems. I'm interested in that aspect of it, active agency in terms of choosing the problem you plan to solve, not just looking for solutions in a pre-existing problem space. Does anybody have thoughts?
[00:32] Daniel McShea: I'll lead off. The crowd I run in, or used to run in, would have said about memory, every system's got memory. The moon remembers every asteroid impact. DNA is a memory molecule. None of these are memories in the psychological sense. It's a broader understanding of memory. Tell me what memory has to do with consciousness, because in the world I'm talking about, it's no big deal. Every system has memory.
[01:09] Mark Solms: I would say the way that Mike posed the question was not bound up with consciousness, but that is itself an interesting additional question. The question as posed by Mike is what is the difference between— is there a difference, and if so what — between memory in the biological meaning of the word, in other words neural memory, as opposed to other forms of memory in a broader definition, like, for example, immune function, which involves memory, and ranging all the way through to what Dan just said: the moon, any impact upon it leaves a trace. But is that what we mean by memory? So the first question is, the word memory, what does it actually refer to? Does it refer to the impact of the past on the present state of the system, or does it mean something more specific than that? Dan raised an additional question: what is the role of consciousness in memory? Memory is not just a matter of the trace or impression, influence, effect of the past upon the present functioning of the system. Memory is learning in the sense that past experience — and I don't necessarily mean conscious experience, but the experience of trying to perform the functions of the system — successes and failures lead to adjustments in how it performs those functions in the future. Memory is about the past, but it's for the future in the sense of updating what I would call the generative model or the predictive model of the system. On the basis of my past experience, what beliefs do I hold as to how the world in which I need to meet my needs works? How do I adjust my performance accordingly?
[03:34] Gunnar Babcock: I'll add to that taxonomy. Mike, I think your comment from our last discussion on memory, as a response to Dan, was the notion that there's a record or historicity to something like the moon being impacted with an asteroid. My recollection of what Mike said was that memory would be a recovery from that, a return to an original. If, Dan, the example you've given is if a bumper is hit in a car and it's dented, that could be construed as memory, but returning to the original form would be a memory of where it had been. I think that was what Mike had introduced as a notion of memory. Correct me if I'm wrong here, Mike. Mark, what you're introducing would be a step further in the taxonomy, an incorporation of actually learning from past experience. It reminds me, Dan, of that paper we read in the reading group about jellies running into rods. I can't remember what species of jellyfish, but they were found to have a minimal notion of learning or memory: they were run through rods that would hurt them to see if they could learn where the rods were and then avoid them in the future. Some preliminary research seemed to indicate they were capable of doing that, despite having basically nothing akin to any sort of neural activity whatsoever. It was an interesting study that led me to think that something like the learning you're talking about, Mark, is capable of pretty minimal cognitive systems.
[05:26] Michael Levin: For me, the critical thing about it is the interpretation end. When you have some sort of physical change take place, the question is who's interpreting it later to recover the meaning of it, if any? In the case of the moon, what happens is that we as scientists might observe it and therefore have a history of the moon, but the moon itself doesn't use that in any significant way. One can imagine scenarios in which that would happen, but typically speaking, the interpretation agent in that case is not the system itself. Whereas in other scenarios, it is the system itself that has to continuously reinterpret the memory traces that it has, as Mark just pointed out, to say, what does this mean for me going forward into the future? I think it's a continuum because there are all kinds of in-between cases of who is the recipient of that memory. Who does the recall? That tells us a lot about what we can say about the memory, but also what the utility is of having the concept. Having labeled something as memory, the question now is, does that allow you to bring in all the tools that we have for dealing with memories? To the extent that it does, then it's more or less useful to think of it that way. That's what I think. For the DNA that Dan mentioned, I think it's exactly the same thing. I think morphogenesis interprets information that it has, including DNA, but also cytoskeletal components and everything else. I do think it's a memory specifically in the sense that it's an engram compressed of past experience, but it's up for creative interpretation, as we see embryos do all the time. It isn't a prescription or a plan or anything like that.
[07:17] Gunnar Babcock: In light of the last conversation that we had on memory, I started digging into some of the philosophy of biology literature on this. I found the term used all over the place in wildly different contexts. In the context of DNA, you find it used quite frequently. It really led me to think that is probably the best baseline understanding of what memory might be and that the cognitive notion of memory that we tend to use as the default is probably just one little branch offshoot. Mike, what you're talking about seems to me to have all the important ingredients for what you need for a robust system that does the memory thing.
[08:04] Michael Levin: In certain cases where the mechanisms are actually conserved, we've given memory blockers to planaria. If you give memory blockers to a two-headed worm and then cut it, it forgets about the second head and only builds the original head. If you crank up the dosage and give it to a one-headed planarian, what you get is a featureless disk. It makes a flat circle with no apparent sense of what it should have been in the first place.
[08:44] Daniel McShea: We've got three levels of memory here and no disagreement that I've heard so far. One is the stone tablet memory that DNA represents, that Gunnar was talking about, then the restorative memory that Mike is talking about, and then the forward-looking memory that Mark is talking about. And you could see them along some continuum. Is that a fair summary?
[09:16] Mark Solms: I think that's a fair summary. Where we could go from there is to revert to the question of consciousness in relation to all of this. Because in my own way of thinking about consciousness, it actually has everything to do with that trajectory. At the level of memory that I'm talking about, the third of the three levels that you've just enumerated, it's a matter of during the lifetime of the organism; it doesn't have to wait for what worked and didn't work in terms of the preservation of the species across generations with learning, but rather during my own lifetime as a system. I can adjust my behavior in terms of my organismic needs on the basis of past experience and thereby adapt to the environment. And where is consciousness required? I think it's precisely where learning fails us. In other words, where you find yourself in a situation for which you do not have any pre-prepared policy, because you've not been in that situation before. To put it in simple terms, the system then behaves more or less stochastically, because there's no basis to the extent that you're genuinely talking about a novel situation. To that extent, there's no basis for acting in one way or another. So I think that stochastic behavior has to have some sort of guide as to whether this is working or not before it's too late. And you can't wait for the outcome of the random action, because then it's too late. If it was the wrong action, you die. I think that a feeling tells the organism whether this is taking you further away from or closer to your target state, by which I mean your viable state. Why consciousness of the action and its consequences is valuable is that it tells the organism how well or badly it's doing in terms of its underlying value system, namely that it's good to survive and bad to die. That goodness and badness is felt as pleasurable or unpleasurable feelings in relation to the particular category of need in question. 
I think that's the most basic form of consciousness. So to put it differently, consciousness arises in the absence of memory in order to guide voluntary behavior from which we then in turn learn once more from experience and lay down new memories.
[12:16] Daniel McShea: I want to have a sentence written down here that attempts a synthesis between the two topics of the day. One is picking problems and the other is memory. Affect, by which I mean consciousness, Mark, or consciousness by which I mean affect, meaning intentionality, caring, wanting, preferring, picks out problems. That's how the organism picks out problems. It cares about X, Y, and Z, and not about A through W. Then, when it comes to solving that problem, memory is the input that it has as a guide, I think that was your word, Mark, to how to go about solving. Otherwise, you're going to have to behave stochastically. So there's an ordering to it. First, you've got to pick the problem. That's what consciousness does. Then you've got to solve it, and that's what memory does.
[13:18] Mark Solms: I think that there's three steps in what you've just described, not two. I agree that it starts with the selection of the problem, and I agree that consciousness is involved in that. Then there's the "what preparedness do I have from past experience as to how to solve this problem." The third step is what do I do beyond that? Where I enter the arena of uncertainty. Where I do not have a pre-programmed solution based on past experience or on my genotype, what do I do then? And consciousness kicks in again. So consciousness kicks in twice. To come to step one, which is the selection of the problem, I think this is a prioritization task. The organism, or the self-organizing system, always has a variety of problems, but it can't act on all of them simultaneously. There's an action bottleneck. It's got to prioritize: which problem am I going to tackle now? And generally speaking, that is determined by how big is the need? How far am I from my target state? What are the current opportunities? How great are the chances that I'll be able to satisfy that need? I think that there's a balancing of multiple problems that could be tackled in terms of how urgent is the problem and in terms of how great is the opportunity of solving it. The outcome of that prioritization process, in my way of thinking, is that you then feel the problem that you've prioritized. The others are relegated to automaticity. I'm going to rely exclusively on past knowledge when it comes to the solution of these problems. They're going to run on autopilot. This one, I'm going to navigate. I'm prioritizing this one. I'm giving all my computational power to solving this problem. I'm feeling my way through this problem. Step two, as I say, is the kicking in of the prior learning. I've already dealt with that. Step three then is: it's not just I feel hungry as opposed to I feel fear as opposed to I feel sleepy. That's the prioritization. The one that you feel is the one that's prioritized. 
That's the category of affect. Then you're in that category. You're feeling your way through the solution of that problem. How well or badly am I doing in terms of meeting this need? It's tethered to the affect, the valence of the affect. But it also has to do with a consciousness of the context within which that problem is. I feel like this about that. This thing out there is causing this problem. That thing out there is presenting this opportunity. That's a different meaning of the word consciousness. It's the feeling of your way through the problem in context, as opposed to the state of the sort of I am now in a state called hunger as opposed to sleepiness, as opposed to fear, et cetera.
[16:49] Gunnar Babcock: So in that context, the way that you've laid it out there, I'd be curious with cases like the one that Dan and I have focused on as illustrating what we would see to be an emblematic instance of goal-directedness: a bacterium doing chemotaxis. In that case, it's in a chemical gradient and goes through straight runs and tumbles, then it reorients and keeps trying to head in a certain direction. In that minimal system, do you see this sort of operation unfolding? Would it have an affective state?
[17:27] Mark Solms: I must first of all declare my ignorance when it comes to this, bearing present company in mind. I don't know that much.
[17:43] Gunnar Babcock: I just recite what Dan's told me.
[17:46] Mark Solms: I'll tell you what I think. I think that following a chemical gradient like that, to the extent that there is a pre-programmed action, when I come across this particular chemical gradient, this is good, this is bad, so I always do this, to that extent, I don't think that consciousness is involved, to the extent that there's an algorithm that just runs its course. However, because even such a simple organism as you're describing has more than one need, I think that affect does come into play. In other words, am I currently avoiding a noxious tissue damage stimulus, or am I following an appealing chemical gradient? I think that to the extent that you don't just have one need, to the extent you have categorical needs, both or all of which need to be met in their own right, to that extent you are talking about qualitatively differentiated variables. So I think that, for me, the ground zero of qualia is the extent to which the system must register its needs in terms of qualitatively distinct categories.
[19:07] Daniel McShea: Which leads me to ask Mike a question. What fills the role in these restorative systems that are just trying to get back to where they were? What fills the role of stating the problem and prioritizing the problem that Mark has assigned to affect or to consciousness?
[19:26] Michael Levin: Great point. Two things can be said. One is just about the previous example. There's the bacterium, and there's a circuit; the way it knows how to go up the gradient is because of a memory. My understanding is that it's not that it can measure the concentration of the sugar between the front and the back, because it's too small. It has a memory that tells it, about 20 minutes, how things are improving. And then it does this run and tumble thing. In some bacteria, there's an extra loop on top of that, which is this metacognitive thing that doesn't measure the sugar. What it measures is how my metabolism is doing. You might be in a field of sugar, but the sugar could be poisoned. Going with the gradient, your primary directive to go up the gradient might actually not be helping you at all. There's another set of loops that looks inward, not outward. It doesn't measure the outside world. It measures your own metabolism. If the metabolism isn't going well, it will override the primary loop and say, go somewhere else. Your basic thing isn't working. Even in bacteria, there are these kinds of meta loops. One of the things we've seen, and this is not published yet, we're still sorting it out, and this will go partly to Dan's point, is that when we take a cell and knock in, using a heterologous promoter the cell doesn't have, an ion channel that depolarizes the cell, the cell turns that off. Cells don't like it. In fact, Baluška thinks it's the earliest form of pain: depolarization, which is what neurons are so good at, an ancient thing. Within about 24 hours, the cell turns off that gene we knocked in. This goes to Mark's point, which is that you have to know what's causing your problem. So yes, I'm depolarized. This is a real problem. I need to fix it. How does the cell know which thing is causing that problem? We don't know. This is because it's a completely new thing that has never existed. It's not one of the cell's normal genes.
It comes from the outside. Knowing that particular one out of whatever 20,000 that you might have that could be causing it is distinctly non-trivial. Figuring out — I think this is one of the things biology is very good at — is what computer scientists would call credit assignment. Here are some things that are going well, here are some things that are going poorly. How much of that can I assign to what so that I know which knobs I need to twist? In the morphogenetic systems that Dan was asking about, we often have this scenario: in the case of completely novel architectures, like anthrobots and chimeras, things that really can't rely on past fixed behaviors nor fixed interpretations of the genome. Their genome might be perfectly standard. Anthrobots have a standard human genome, xenobots have a standard frog genome. There's nothing different in that material that's going to tell you what to do. You need to figure out how you're going to solve the problem. The problem is very general: to be a coherent organism in this new environment. You might solve it in a number of different morphologies, a number of different behaviors. There are different ways to be, especially if you're not one of the standard things you've evolved to be. Sometimes we see pieces of this in some of the bioelectrical circuitry of figuring out which of several modes you're going to try to persist as a coherent entity. There's a lot of what otherwise would be called creative problem solving. If any of these things were robots that we had made, you would say that it's intelligent problem solving: these things are coming up with new ways to pick the problems they're going to solve. There is a bacterial example of this too. If you're in a sugar gradient, you can do one of two things. You can move up that sugar gradient — that's an action, an effector in movement space — or you might transcribe a gene that lets you use a completely different chemical as food. 
Physically you haven't moved, but you've taken a step in transcriptional space and solved the problem by moving in a different space — the transcriptional space — which leads to improvement in physiological space. Which of those are you going to do? Those are both options for you.
[24:20] Gunnar Babcock: That second move would be an intergenerational move, though. It would be like a move in the lineage.
[24:25] Michael Levin: No, I don't mean mutation. It's transcriptional. So I have multiple enzymes. I have an enzyme for metabolizing lactose, glucose, fructose. One thing you can do is, if you've been using your metabolism enzymes for glucose, let's say you're moving up that concentration, and that gradient is not paying off for whatever reason, you might just upregulate the gene that allows you to metabolize fructose instead and start exploiting a different gradient; those might be completely orthogonal gradients. And then once you do that, now you start following a totally different gradient. But ultimately, what you've done is you've taken a move in a space that's normally not visible to us. It's not the movement and behavior that we can see, movement in 3D space. These things are traversing all kinds of problem spaces that we don't normally see.
[25:21] Daniel McShea: Whether we're talking bacteria or morphogenesis or conscious problem solving, what we've got is first the picking of the problem. This is a conceptual breakdown. It doesn't happen this way in time exactly. We've got the picking of the problem and the prioritizing of problems. We've got memory, which is the input as a guide to solving all of those problems, whichever ones you've picked. Then there's the actual solving of the problem, which I would call cognition, and it's affect-free, consciousness-free, problem-prioritizing-free, except Mark pointed out there's this back-and-forth between the problem-solving and the problem itself. Mike calls it generalized problem-solving, creative problem solving. There are three steps here. Picking and prioritizing of problems, the memory to guide and solve them, and then the generalized problem-solving, creative problem solving that actually enables you to solve it.
[26:35] Mark Solms: I'm happy to call it cognition, bearing in mind that we use that word for both conscious and unconscious problem solving. So the conscious bit is the bit where there's the palpating of the uncertainty, and that in turn, as you reminded us, is tethered back to the prioritized need. How well or badly is this problem solving working in relation to the original problem?
[27:01] Daniel McShea: but conceptually those are separate functions. This tethering is across a gap. One is value-neutral solutions to problems and the other is the prioritizing and picking of problems. I agree there's a back and forth. But I want to split cognition off because it's value-free. It's problem-solving.
[27:21] Mark Solms: I would be very happy to split it off. To put it in very, very simple terms, I think that the affect is the demand for cognitive work, and the cognition is the work so demanded. They're not the same thing.
[27:37] Daniel McShea: You need to chime in here if this made any sense.
[27:43] Michael Levin: I think it made total sense, but I think there's another component that typically gets left out that I'm super interested in, and I think we know the least about this, which comes in at the very beginning, which is, and I don't think this is algorithmic, Dan. I don't know how else to characterize it other than creative, in that before any of the other steps, you have to figure out what is the set of possible problems that you could solve, right? And some of those — it's like those IQ tests where they show you here are four objects, and you make them stand one on top of each other. And it turns out that the genius solution is to not do that at all, but to do something completely different and to misuse all of them in some way that lets you do something interesting. We see people doing that; we call it genius problem solving because you've identified another way to do something interesting that is out of the box. So that part of just figuring out what is the set of things that you might take on that could help you? What are all the problem spaces? So you've got effectors in, if you're a cell, in physiological space, in transcriptional space, in physical space, if you can move. But knowing that those are our categories as a scientist looking down on this thing, that's how we cut up the world. But what does the system actually see and what other spaces are there? This is important also because people often say that organoids in a dish are not embodied, and they say that software agents in a computer are not embodied. And that makes me very nervous, because we're not very good at seeing bodies, and we're not very good at seeing navigation of different spaces. Looking at it from outside, just because the thing isn't rolling around on wheels or something, it doesn't, to me anyway, mean that it's not embodied.
It could be doing this perception-action loop, exactly the kind of thing that Mark was describing in all kinds of spaces that are not visible to us and maybe don't even make any sense to us. We've decided that transcriptional space is different from physiological space. We did that as observers. So I think that first step of cutting up the world from a very high-dimensional space, if you want to track microstates, which I don't think living systems ever do, into some kind of a low-dimensional coarse-grained set of spaces in which you can choose problems — that step is fundamentally creative. And I think it's fundamental to being a mind in the world: deciding how you're going to cut it up into spaces to look for problems in.
[30:28] Daniel McShea: And you don't think that's handed to every organism by its biology, roughly speaking?
[30:36] Mark Solms: No, I would agree with Mike, and I would do so on the basis of this distinction. We're using the word "problems" in quite a general way here. I think we need to distinguish between two ways in which we're using the term. The one is organismic needs. They are invariant. So we all have the problem of thermoregulation, sleep, pain, hydration, blood oxygen, and so on. These needs, we can't choose them. These problems are given by our phenotype. What we're going to do about meeting those needs is another meaning of the word problem. And this is where the creative processes that Mike is talking about enter into the picture. There are certain standard procedures, those are reflexes and instincts. Then there are things I've learned on the basis of my own past experience. In other words, my own individualized solutions to these problems. And then there are new solutions on the hoof. But I think that what many of my cognitive colleagues fail to remember in this connection is that that's true. There's all this cognitive creativity, all of the scope, but ultimately it's tethered to the fact that it's got to meet those stereotype needs. Cognition doesn't exist for its own sake. Cognition exists because the organism has phenotypic needs that must always be met. The cognition for its own sake would be a perversion of cognition. It would be very bad for the organism if it just starts solving problems for their own sake because they exist.
[32:40] Michael Levin: What would you do here?
[32:41] Mark Solms: I wonder...
[32:42] Michael Levin: Sorry, go ahead.
Gunnar Babcock: Oh, no, go ahead, Mike.
[32:43] Michael Levin: That is a really interesting point that I hadn't previously appreciated. I've got one weird example for you for that one. When we have a bunch of loose frog embryo epithelial cells, the xenobot into which they assemble does not live longer than the cells themselves. As far as I can tell, it provides no survival advantage to them. I don't know if the Maslow hierarchy is really a thing or not, but if you think about this hierarchy of needs, the very bottom thing of survival — that morphogenetic process is not helping that at all. I almost envision some sort of higher-level need than that. It's not just about bare-bones survival. We can all survive as cells. But what if we do something more interesting than simply survive and sit there as individual cells? What if we assemble together, solve some problems in anatomical space, which are the morphogenesis, and then maybe we solve some problems in behavioral space because they can move around and do a couple of interesting things. Some of these needs are certainly going to be survival needs, but maybe the need for sociality, the need for interesting experiences, however we're going to quantify that, is also there beyond bare-bones physiological needs.
[34:09] Mark Solms: But both of those, sociality and curiosity, are certainly survival strategies. They certainly are strategies that enhance our survival, our fitness. Existing in a group, there are all kinds of advantages to the survival of the individual member of the group by becoming part of a group, which then the survival of the group enhances your own individual prospects of survival as a member of that group. And likewise, curiosity is just proactive engagement with uncertainty and is always a good thing because uncertainty bites you in the backside if you don't engage with it. So there's certainly survival advantage in just exploring.
[35:11] Gunnar Babcock: It's interesting, Mark, the question of picking amongst competing needs or competing problems. With the issue of fitness, it always strikes me that fitness is not just survival but also frequently reproduction. What can reproduce is often the better measure of fitness than survival. Those are often in conflict with each other. As far as picking among priorities, it seems you already have an elemental way in which something will be forced to prioritize between those two baseline elements. I was curious, Mike. Mark, you were saying that insofar as we are embodied biological things, there are metabolic baseline physiological needs that we all must maintain that are pretty universal. But when you start extending the idea of cognition outside the biological realm into other embodied agents, the set of what cares or preferences such a system would have would be on an entirely different dimension if they aren't rooted in the baseline physiology that life shares.
[36:49] Michael Levin: Mark, did you want to say anything about the first one?
[36:53] Mark Solms: You're right to reprimand me for saying "survival" without adding "survival to reproduce." I certainly agree with you. Survival in the service of reproduction, so reproductive success certainly takes priority. Survival of the species is what we're talking about.
[37:15] Michael Levin: It now occurs to me that the interesting thing about the xenobots in this case is that having assembled into this multicellular thing, they have recovered a weird new way of, I'm not going to call it reproduction, but it's replication because if provided with loose skin cells, they do collect them into new generations of xenobots. So it's like they know they want to do multicellularity. They know they want to replicate, but in a way that has not been reinforced by evolution before. There's never been, as far as we know, kinematic self-replication in living lineages. So it's some sort of this creative. We have the urge to make copies of ourselves, can't do it the way that we've been doing it for eons. We don't have any of that equipment, but here's this crazy new way we can actually propagate our pattern forward that has never been seen before. So maybe you're right, maybe that is some kind of a baseline imperative underlying some of this creative problem solving. But Gunnar, the thing about these imperatives and non-biological systems: I have a crazy opinion on this. That's very different from what I think most people would say, which is that most people are pretty comfortable with the idea that biology has internal innate imperatives because of its needs for survival and reproduction. And then we have machines, and these machines just have whatever imperatives we give them with our algorithms. We write the algorithm, we contrive the materials, and they have whatever we've given them as imperatives. I don't think that's right. They certainly have some of those, and good machines do the things you wanted them to do. But from the research that we've been doing, I think even extremely minimal things are also doing things that are not in the algorithm, that you did not tell them to do, and are simple or perhaps more complex versions of the kind of processes that we've been talking about here that do not derive from the maker of that machine. 
They are intrinsic. Where do they come from? We can talk about that too; I have some crazy thoughts on that. But I don't think there are any machines in the old sense of the word. I don't think they exist. I think even the dumbest, simplest things have this; our model system for it is sorting algorithms. We have a paper showing that even bubble sort — this is six lines of code that every computer science student has studied for decades — sorts numbers all right, but if you look at it from a different perspective, what you can detect are things like delayed gratification and some other interesting things that are literally nowhere in the algorithm. And yet something as simple as bubble sort can have these, I call them side quests, while it's sorting the numbers. So they have to sort the numbers, just like we have to obey the laws of physics. But consistent with that, there's a whole range of other things you could do that are not prescribed by the laws of physics, and humans and other living things do those things while being consistent with physics. It turns out that even very minimal, quote-unquote, machines or algorithms have some of that as well. So I think that aspect of it probably goes all the way down: everybody is to some extent constrained by physics, and everybody to some extent has these novel behavioral propensities, directed at new goals, that we would never have known about if we hadn't looked, because they aren't in the algorithm any more than our goals are in the biochemistry that underlies them. I still see a single continuum. It's just a question of how sophisticated the side quests are that you're able to impose on top of the physics that constrain your body.
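[Editor's note: for readers unfamiliar with the algorithm under discussion, classical bubble sort really is only a few lines, and it can be instrumented to watch how order accumulates as it runs. The sketch below is purely illustrative; the `bubble_sort_trace` name and the "sortedness" metric are editorial inventions for this transcript, not the methodology of the paper Levin mentions.]

```python
def bubble_sort_trace(xs):
    """Plain bubble sort, recording a simple order metric after each swap.

    The metric ("sortedness") is the fraction of adjacent pairs already in
    order; it is an illustrative choice, not taken from any published study.
    """
    xs = list(xs)
    n = len(xs)

    def sortedness(seq):
        # Fraction of adjacent pairs (seq[i], seq[i+1]) that are in order.
        return sum(seq[i] <= seq[i + 1] for i in range(len(seq) - 1)) / (len(seq) - 1)

    trace = [sortedness(xs)]
    for i in range(n):
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                # Swap the out-of-order pair and record the new global state.
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                trace.append(sortedness(xs))
    return xs, trace


result, trace = bubble_sort_trace([5, 1, 4, 2, 8])
# result == [1, 2, 4, 5, 8]; the final trace value is 1.0 (fully sorted)
```

Note that each swap fixes one adjacent pair but can disturb a neighboring pair, so the metric need not rise monotonically; observations of that kind, made over the whole run rather than from the code alone, are the spirit of the "different perspective" Levin describes.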
[41:29] Mark Solms: I've heard you speaking about that before, Mike, and engaged with you on that. As a bare minimum, I would have to agree that there's no fundamental distinction between biological and non-biological systems in terms of the things we're talking about. I think it was the philosopher John Haugeland who said, "Computers don't give a damn." I would say it depends on what kind of computer you're talking about.
[42:04] Daniel McShea: No, I was just about to launch, but you finished.
[42:07] Mark Solms: I wanted to go on to a slightly different question. Dan, you should speak first if it's about the thing we've just been saying.
[42:15] Daniel McShea: It is. Mike, do you think that these novel goals that these creatures come up with, and our novel goals for that matter, aren't they all going to be derivative of some large basic set that's been handed to them? My getting out of the way of the bus was never pre-programmed genetically. My fear of the loud noise in the beginning of "Beethoven's Fifth" was never pre-programmed. But it's derivative of things we can point to deep in our biology.
[42:54] Michael Levin: The short answer is no. That's the reason I go to these extremely simple model systems: there is no biology, there is no history of evolution. It's just the algorithm, so you can see all the parts; there's nothing hidden. And what I see there, I don't think is derivative. My definition of derivative would be that an intelligent person looking at the algorithm would say, here's a set of things that could be derivative of this. I don't think they're in that class at all. I think they're completely distinct. I do think they're pre-programmed in one funny way, which is that I now suspect that the agent in all of these cases is not the machine or the body or the embryo or the algorithm that we see; it's the pattern that manifests through it. I've been toying with the idea of a platonic space of patterns. I wonder whether the agent in this case is the pattern itself, not the system. These patterns are basically what we build when we make embryos, xenobots, computers, whatever: what we're building are pointers into the space of patterns. I think that space might be under positive pressure, in the sense that these patterns — the vocabulary fails completely — are looking for places to poke through into the physical world. In that sense, they are pre-programmed, because there is a space of these things and, to some extent, they pre-exist. But I don't think they are in any direct way connected to what we build. It's the same connection as between the structure of a pointer and whatever data structure it points to. There has to be some relationship for it to work, but it's not at all straightforward: you learn very little from the structure of a pointer about what you're going to find when you dereference it. That's my suspicion at this point.
[45:02] Daniel McShea: Let's head down Mark's path.
[45:07] Mark Solms: Yes, that last point that Mike made, we could go back to that for an hour. There's much to be said about that. As Dan says, I want to change gear; it's picking up on a theme we discussed earlier. It's the question of what an affect is in relation to a phenotypic need. We were saying you have to prioritize these needs, and then the one that you've prioritized will be the affect that you feel. They are categorical, therefore they are qualitatively distinct. They are my needs, therefore they are subjective. I say all of these things because I frequently find myself having to persuade colleagues, including computer scientists, that if you're describing a mechanism that has these properties (it is the system's own need in order for it to continue to exist as a system, it is categorical, it's registered only by the system, and it only has value for the system), then that just is an affect. I don't care whether it's in a biological system or not. I don't care how simple the organism is. That just is an affect. It has the mechanistic properties of an affect. Now, I would like to go a little further in the opposite direction, which is that I don't think that an affect is the same thing as each phenotypic parameter. Mike was saying earlier that the bacterium finds itself in a situation where it has a need; it can either act in that way or it can transcribe, and both ways it's done the same thing. I think that once we have relatively complex organisms, there are also multiple physiological parameters which combine to constitute a need. So if you take the affect of hunger, for example, it's not measuring one thing. The things going on in the organism that are ultimately felt as hunger are really myriad physiological processes. This has been very much on my mind, and I'd love to hear your take on it. 
One of the reasons why affects exist as affects — in other words, why an affect is not just a measurement of a physiological variable — is that it's a measurement of some kind of average of a number of variables. Those variables could be tweaked in a number of different ways, and they'll all still have the same outcome, which is energy supplies being met, but there are multiple different constituent parts of that. And there's a forward-looking part and a present part. So I think that the reason affects exist as affects, rather than just as physiological variables, is because they are a higher-order kind of summation of the state of the organism in relation to this category, that category, or the other category. The same could be said exteroceptively. We all know that there's an affect called fear, which means: I'm in danger. There are myriad things going on in the world that could constitute a danger, so all of those multiple different things are registered, are averaged or summated or measured in terms of one homeostat, the homeostat for danger. The affects exist at a level slightly beyond the individual physiological or environmental parameters which constitute them. My intuition is that this is important for why there is such a thing as the affect, and why the affect has causal consequences itself, beyond the physiological parameters that it monitors.
[49:15] Daniel McShea: Mark, I'm going to go in the same direction you're going, but I'm going to go further.
[49:19] Gunnar Babcock: I'll be right back. I apologize.
[49:23] Daniel McShea: I'm going to deny that there's any reality to a desire to survive or to reproduce for a living organism in the moment. There's no programmed affect that goes with that. What there is is a programmed affect to eliminate the hunger in my belly, a programmed affect to respond to some attractive mate that wanders by, all very context-specific, each a slightly different feeling. There's no such thing as fear. Evolution never favored fear; natural selection doesn't care about it. What it favored was a certain kind of reaction to this stimulus, a certain kind of reaction to that one. We lump them together linguistically and call them all fear. But the affect is what's driving the system in the moment. I don't have any desire to survive and reproduce. This is Darwinian thinking transplanted from an intergenerational story into the proximate causation of an individual organism.
[50:25] Mark Solms: I'd love to hear Mike's view on that, but I must say that it strikes horror in my heart, what you've just said, Dan.
[50:32] Daniel McShea: Why is that? Because I'm taking what you said and I'm making the conceptual separation between the evolutionary process and the physiological processes.
[50:42] Mark Solms: Why it strikes horror is because there's a colleague in affective neuroscience who argues what you've just argued, and does so on the basis of a whole relativist, social-constructivist framework: there are no natural kinds of affect. So when I say it strikes horror, it strikes horror irrationally in my heart. I know exactly what you're saying, and I'm very sympathetic with what you're actually saying. Let me put it slightly differently. I really do want to hear what Mike is going to think about this. I agree: there is no imperative at the level of mental states, no "I must reproduce, I must do my biological duty," or "I must improve my sugar levels, therefore I'd better look for something with high glucose content." No, I feel sexual desire and I satisfy that ****** pleasure in ways that have no chance of leading to reproductive success. In fact, most of the time when we have sex, even heterosexual sex, we're trying not to reproduce. The thing that motivates us is the feeling, not the underlying biological imperative. Likewise with sweet things: we eat them because they taste nice, not because they're going to improve our glucose supplies. We don't even know that; a kid has no idea why it likes sweet things. It just likes them and eats them because it likes them. That goes back to my original point: the feeling has causal consequences. So we are actually saying the same thing, and forgive me for my PTSD at the beginning.
[52:35] Daniel McShea: You directed this at Mike and I'm going to start his answer for him and he can tell me where I'm off track. The problem space that the organism brings with it into the world is not survival, reproduction, fear. It's way more complicated than that. We can't begin as specialized organisms to know what the bacterium's problem space is and how it has chiseled it up. We can lump these behaviors and these imperatives that it has under the category of reproduction. But that may not be how the organism chisels up the world. Its feeling set, all the reproduction-related feeling sets in its case, may be completely different from ours. What's required is an empirical project to figure out what they are, rather than just saying that's reproduction and that's hunger. Go ahead, Mike.
[53:28] Michael Levin: I agree with what you just said, and I think that's right. I think our categories are not necessarily what the system itself sees at all. I'm not certain about any of these; these are just things I've been playing with recently. I think there's also a version of this where what's actually going on is that there are specific patterns of behavior that facilitate their own propagation into the physical world, because evolution gains massively from these free lunches pulled out of this platonic space, where you get all kinds of interesting things for free when you make very minimal pointers. That then guides the system towards elaboration of those same physical bodies which, whether or not the individual recognizes it or is capable of recognizing it in their own behavior, strengthen those behavioral patterns over a lineage's lifetime. I agree with you that there are these two different scales and multiple observers. There is the conscious observer who tells stories about why I just did this or that, versus the larger-scale story about why your cognitive architecture facilitated you doing that on the longer scale. There are many perspectives: the perspective of the organism, of the lineage, of the organs within the organism, of the scientist observing it, of the parasites trying to hack and take advantage of those various drives. I also think there's the perspective of the patterns themselves, which are actually using the physical bodies that we are so fixated on in terms of evolution as vehicles to propagate themselves. I don't mean it in the Dawkins sense that these things are just mindless replicators. I actually think that's where the cognition might be. I think it's real; I don't think it's mindless, and I don't think it's mechanical. And I do think that some of it is not embodied in the standard way that we see these things.
[55:50] Gunnar Babcock: The other thing that strikes me in all this is that even with the fitness-enhancing capacities that something like affect would have, there's still so much more space, because I think it's easy to jump from affect playing a fitness-enhancing role to treating it as perfected, when in evolution it's always good enough, not perfect. So even though these things can be fitness enhancing, it doesn't necessarily mean they were all selected for those reasons. And in that extra space, where selection might not have been the mechanism governing how affect developed in some system or other, it seems like there's lots of room for it to go off in other directions. I think something like cognition is a prime example, in that it has the capacity to go in lots of different directions. On the whole it's probably a fitness-enhancing capacity, but by no means in every instance. So there are all these other ways in which it's merely good enough, and it actually leads in weird, interesting directions in all sorts of other cases. I'm very much with you, Dan, in that it's just the realm of biology: it's such a fascinating empirical project to try to figure it all out.
[57:13] Mark Solms: I must say one thing about it, Mike: I have pondered this for quite some time from a dual-aspect monism perspective. If we are both brains and minds and they are the same thing, the easiest way to explain that is that it's just the same thing looked at from different perspectives. My experience is just the being of the brain; it's the subjective perspective upon this thing. If I looked at it as an object, I would see a thing. If I look at it as a subject, then I experience a state of mind. That always seemed to me a fairly straightforward way of thinking about the mind-body relation: they are, in fact, perspectives upon the same ontological entity. Which then raises the question: what is that entity? If these are just two representations or two perspectives upon it, then ontologically, what is that entity? What does it consist of? If it's not physiology and anatomy, and it's not conscious mental states, what is it? I think the answer is something similar to what you're talking about when you say that there's an organizational pattern, a functional system, which manifests in one or another form. The fundamental underlying entity is not its realization in a given form. This is especially easy to see when we're talking about an entity which is realized in two different forms simultaneously, as in the mind-body relation.
[59:01] Gunnar Babcock: I'm very happy.
[59:02] Mark Solms: I thought you'd look happy; you'd say, "Okay, Mark, my idea is not completely untethered to your idea, but maybe it is."
[59:10] Michael Levin: Oh, it makes sense to me.