
Conversation 2 with Lisa Barrett, Ben Lyons, and Karen Quigley

Lisa Barrett, Karen Quigley, and Benjamin Lyons continue their discussion of relational realism, allostasis, predictive processing, and embodiment, exploring how brain, body, and world jointly shape emotion, perception, and scientific objectivity.



Show Notes

This is a second conversation with Lisa Barrett, Karen Quigley, and Benjamin Lyons about relational realism, allostasis, and questions of mind/body/behavior.

CHAPTERS:

(00:00) Rethinking emotion universals

(09:34) Brain evolution and allostasis

(17:25) Predictive processing and signaling

(30:05) Objectivity and first-person science

(38:45) Flexible body-world boundaries

(45:46) Relational meaning in perception

(52:06) Embodiment, morphospace and realism

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


[00:00] Lisa Barrett: What we could do is start at the beginning of how we got into the work that we're doing now, and where I'd like to end up is talking about the relationship between brain and body signaling and the contextual nature of that, and also the philosophy of science it's led us to: this idea we're calling relational realism. It's really a solution to the dichotomy between traditional realism, where there's an objective, fixed world that you can only perceive through the veil of your own concepts, and something like idealism or any other kind of anti-realism. So this is a realist view, but it's a realist view rooted in the idea that what is real is relational: things don't have fixed meanings, they have relational meanings, and the things we think of as properties of objects in the world are actually properties of relations. I'll give a very brief overview. And Ben, you'll stop me if this is not what you think is useful.

[01:36] Benjamin Lyons: I trust your judgment and Mike's ability to pick up on this stuff. When I was talking about economics, he started absorbing things very quickly, so I think we'll be good.

[01:45] Lisa Barrett: Karen and I started off as colleagues. This was many years ago. We had very separate research programs. I fell into the question of trying to understand the nature of emotion because in psychology, in psychiatry, and in much of neuroscience there's this assumption that there are fixed categories for emotion, fixed circuits for emotion; that emotions are essentially adaptations that are wired in, basically programmed into your genes. This is like the modern synthesis: DNA plus natural selection gives you these adaptations, which exist in a fixed manner, and emotions are some of those. The idea is that there are these challenges to fitness which have persisted throughout millennia, and emotions evolved as solutions to those problems. And there is a set of universal categories shared by all humans on earth and also other animals; which ones depends on who you read. But the idea is that there's a circuit in the brain for fear, a circuit in the brain for anger, a circuit in the brain for happiness. People debate how many circuits and how many categories, at least six and maybe upwards of 20, but the claim is that these things are hardwired at birth, which means that everyone around the world will widen their eyes and gasp in fear, that there is one cardiovascular pattern for fear, and so on and so forth. It sounds like a cartoon. The idea is that there might be some variability in what people look like and sound like when they're fearful, but that variation can be explained away, or is epiphenomenal to the emotion. People who study non-human animals will, for example, look at a fly that rubs its legs, or expose a rat to the scent of a predator, or do classical conditioning with an electric shock, pairing a tone with a shock. They believe that what they're studying is fear. They're attempting to identify the neural circuit for fear, maybe the genes for fear.
And the assumption is that's going to generalize, across all animals of that species but also across species, usually mammalian species, sometimes all vertebrates; it just depends on who you read. They're usually citing Darwin as evidence, which is a whole other thing about what Darwin actually said. When I was a graduate student, I needed to measure emotion, and I needed to measure it in what I thought was an objective way, meaning in a third-person way. I thought this was going to be convenient because there are supposed to be all these universal expressions and universal physiological patterns. I systematically discovered that what the introductions and the discussion sections of these papers say is inconsistent with what the data actually show.

[05:39] Lisa Barrett: What the data actually show is contextual variation. Probably the first 20 years of my career was spent just documenting this variation in the brain, in the face, in the body with Karen. I met Karen. I started off as a psychologist and then I needed to retrain as a psychophysiologist so that I could study peripheral physiological signals to actually test this hypothesis. I started to work with Karen and then I had to retrain as a cognitive neuroscientist. I had to keep picking up skills to try to test these different domains. And what we discovered across all of this time is that really the business problem that a brain and a body have to solve is not how do you read emotion in other people, not how do you inhibit these pre-potent emotional responses. This is meta-analytic evidence: in the West, when people are angry, 35% of the time people scowl. That's better than chance, but 65% of the time, people don't scowl. When they're angry, they express emotion on the face in some other meaningful way. Half the time when people are scowling, they're not angry. There is variability in how people experience emotion, how they express emotion. There's variability in the neural patterns for emotion that seems to be yoked to context. That is not random variation. There's variability that is structured within a person across situations, as well as across people, for example, across cultures. What this means is that there is no inherent meaning of a scowl, the raise of an eyebrow, or the curl of a lip. An increase in heart rate or a decrease in heart rate, even amygdala activity, this area of the temporal lobe, doesn't have inherent psychological meaning. Even the activation of individual neurons doesn't have inherent psychological meaning. 
The signals have relational meaning: action potentials, or the local field potentials around a set of neurons, have meaning within a pattern of other signals, but they don't have an inherent meaning in a labeled-line sense. In fact, nothing in the brain that I can determine has a label. There are no fixed receptive fields anywhere in the brain. There are no labeled lines where a particular axon fires and has a particular meaning every single time. The meanings are really relational. That's the punchline. Karen, do you want to say your part, and then we'll catch Mike and Ben up to that point, and then we can talk about going forward.

[09:34] Karen Quigley: What exactly would you like me to focus on?

[09:39] Lisa Barrett: What part of the story haven't I told that is relevant?

[09:49] Karen Quigley: Well, it seems you've done a pretty good job of telling the basic idea behind the story. We've spent the last decade trying to further strengthen the empirical evidence for this idea, and saying more about what we mean by context and what we mean at the individual level.

[10:12] Lisa Barrett: First, basically what we did for 20 years was just document the problem and get people to accept the fact that these emotions, these kinds of fixed forms, don't exist. There is no circuit in the brain for fear. There is no circuit in the brain for anger. There is no fixed chemical; dopamine isn't a reward chemical. These fixed meanings just aren't there. I think we spent a lot of time marshaling a lot of evidence, from our own studies and also meta-analytic evidence from a lot of domains, to basically try to frame what the business problem is that we have to solve here. Historically in psychology, this problem has been encountered before. This is probably the third time that people have encountered this problem about emotion, but it's a broader problem than just emotion. People have been attempting to start with folk categories that they learn from their own experience. Being socialized in a particular culture, those meanings and those categories are culturally inherited. When I say they're learned, what we mean is that they shape the patterns people learn, so that it becomes possible for the brain to remember those meanings. We can put some biology on that, but here I'm just talking generally and colloquially. People learn certain categories, categorize their experiences accordingly, and then go searching for a fixed physical basis for those categories. In cognitive neuroscience there were 30, 40 years where people were searching for specific localizations and specific sets of neurons for anger, sadness, fear, episodic memory, semantic memory, this kind of attention, that kind of attention. They were looking for fixed modules to map to these categories. What we decided to do is take a step back and say: what any animal has to do is deal with a tremendous amount of uncertainty. Animals move around. They have a particular body shape. They have a particular ecology.
They have a particular set of metabolic demands. They're moving around in a highly uncertain, only partly predictable world.

[13:48] Lisa Barrett: And they have to create meaning in such a way that they can survive and thrive. So we took a step back and said, well, instead of starting with these folk categories, why don't we start with brain evolution and metabolism, and not so much homeostasis but allostasis: this idea that what a system is doing is anticipating metabolic needs and preparing to meet those needs before they arrive. Different parts of the system might function by homeostasis, but really allostasis is what is most metabolically efficient. We started drawing from different lines of research: signal processing from electrical engineering, what counts as energy-efficient signal processing, brain evolution, neuroanatomy, various literatures, bringing them all together. We developed a set of hypotheses based on this integration of a lot of different literatures, and there's quite a bit of evidence for them now. The traditional way of thinking (and we're talking primarily now about vertebrates, at a scale much more simplified than what you deal with) is that sensory signals, which an animal detects with its sensory surfaces, register changes in the world. Those signals are ferried to the brain as small details that then have to be somehow compressed or integrated. So you have all these lines and edges in primary visual cortex that have to be integrated, bound together into objects, which then have to be bound together with sounds and smells and so on, until you get a representation of an object, which you then compare with your understanding retrieved from memory and categorize. Then the object is meaningful, and then you plan an action.
You're walking on the street, you're taking in all of these sensory signals; your brain somehow is binding them. It actually used to be called the binding problem. How do you bind together all of these sensory signals into an object that you categorize? So you see some ball of fur that has whiskers on the street and eventually you perceive a cat and then you categorize the cat as a cat, and then you make an action plan. Are you going to bend down and pet the cat? This is the idea. It's a bit of a cartoon for how people understood it, but that is the general idea: you start with the details and eventually you get to objects and then scenes and objects and then action plans, and then you behave towards the object in some way. That's the general idea. Karen, anything to add there?

[17:25] Karen Quigley: I think that's right. That's the cartoon version.

[17:28] Lisa Barrett: That's the cartoon version of it. That is basically the version of perception and action people still use.

[17:35] Karen Quigley: Yeah.

[17:36] Lisa Barrett: For the most part, there is a literature that considers energy and metabolism and allostasis, but not this literature. When people are thinking about cognition or perception or emotion or decision making, even people who are studying reward, they don't typically think about the dynamics in the body that have to support those actions. What we do is challenge that view. We take a predictive processing approach where we say the brain is not so much running a model of the world as running a model of its own body. It's running a model of the sensory surfaces of the body. Your brain doesn't have a map of the world. It has a map of its retina. It has a map of the cochlea. It has a map of the skin. So it has some kind of very spatially degraded, compressed map of the body, and a fine temporal map of signals inside the body. These signals are compressed to various spatial and temporal degrees as they make their way to the brain. What they meet when they get to the brain is a set of intrinsic signals that form a neural context, which directs the compression of the incoming signals and gives them meaning, fundamentally in a metabolic sense. I could show you pictures to explain, but that's the general idea. If you just stopped time, what the brain is doing in any given moment is generating: remembering, essentially re-implementing, a set of past experiences similar to the present in some way. There are features of equivalence that the brain is using. It's not remembering or reinstating the signal patterns for particular instances; it can do what's called conceptual combination, the flexible implementation of patterns. So it's creating patterns of activation. It's not remembering a single instance; it's remembering a collection of instances which are similar to the present in some way. In psychology, a bunch of things which are similar in some way, for some function, in some context is called a category.
What it's doing is generating categories that are potential. If we take the cerebral cortex, for example, in any given part of the cerebral cortex, what's happening is that the neurons there are reinstating a pattern. The pattern is fundamentally a visceral motor pattern for regulating the body. There are axons that will leave a cortical column in layers five and six, but mostly five, that descend to the subcortical areas all the way to the spinal cord.

[21:47] Lisa Barrett: That is essentially a visceral motor pattern. Collaterals off those axons make their way to neurons in other parts of the cortex as prediction signals: the predicted motor pattern or plan, and then the predicted sensory consequences of those movements. That's happening across the entire expanse of the cerebral cortex. I'm picking the cortex because that's what people know the most about. There's much less known about how the subcortical areas are working together, but we're working on a paper about the hippocampus, for example, as also adhering to this kind of pattern. So in any given moment the brain is making an action plan, a visceral motor plan for regulating the body, for controlling metabolism, and it's also making a set of prediction signals that anticipate the incoming signals from the sensory surfaces of the body. An interesting aspect of this is that any given train of action potentials has no inherent meaning, because what the same set of spikes means depends on who's sending and who's receiving the signal: it can be a motor plan, or it can be the anticipation of a sensory signal. It's the same set of action potentials, but it means something different depending on who's sending and who's receiving. So it has a relational meaning. We're not saying it has no meaning. We're saying it has a relational meaning. The meaning isn't inherent in the action potential itself; it's relational, depending on the pattern. Any given set of sensory signals has no inherent meaning; it has a meaning in relation to the neural context that's been created by the brain. That's one way to think about it. Another way to think about it is that those signals are constraining the brain. One way to think about it is that the brain is a network with inherent signaling that will continue until it runs out of energy, and things are perturbing it.
If you think about it that way, then you would say these intrinsic signals are giving meaning to signals from the body, which are reporting on the sensory conditions in the body and the sensory conditions in the world. Another way to think about it is that these sensory signals are actually constraining the brain. Without them, all kinds of patterns could occur, some of which would not be beneficial. For example, partly what psilocybin is doing is relaxing those constraints, so the brain is not so constrained by signals from the body. Or when you go to sleep, your dream signals are not so constrained by exteroceptive signals from the retina and from the cochlea. I could go on, but I'll stop there and see what you want. Maybe Ben, you could say if any of this is what you had in mind?
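The idea of intrinsic signals meeting and giving meaning to incoming sensory signals can be caricatured in a few lines of Python. This is a deliberately minimal sketch under strong assumptions (scalar signals, a single weighting parameter), not the speakers' actual model; every name and number below is illustrative.

```python
def perceive(sensory_signal, prediction, prediction_weight):
    """Toy predictive-processing step: the percept is the internal
    prediction corrected by a weighted prediction error, so the same
    sensory signal yields different percepts under different neural
    contexts (different priors)."""
    prediction_error = sensory_signal - prediction
    # A high prediction_weight means intrinsic signals dominate;
    # relaxing it (cf. the psilocybin and dreaming examples) lets
    # the incoming signal constrain the system more strongly.
    return prediction + (1.0 - prediction_weight) * prediction_error

signal = 0.6  # one and the same incoming signal
percept_a = perceive(signal, prediction=0.9, prediction_weight=0.7)
percept_b = perceive(signal, prediction=0.1, prediction_weight=0.7)
print(percept_a, percept_b)  # identical input, different experiences
```

The point of the sketch is only that the "meaning" of `signal` is not in the signal itself: it is a relation between the signal and the context the system brings to it.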

[25:58] Benjamin Lyons: This is exactly what I had in mind. You covered everything, I was hoping you would. What I'm trying to accomplish is a further integration of these literatures. I think what y'all study and what Mike studies are the same thing at different scales and timescales. There are other literatures I think are relevant to economics, but also developmental psychology and the science of how motor behavior is produced and developed. That's also highly relevant. There's a bunch of stuff I'd love to show y'all there. It's a process of seeing that it's all the same pattern.

[26:26] Lisa Barrett: I will say one other thing: the way that neurons signal each other is not unique to neurons. Any cell signals; a cytokine is just one cell signaling another cell. It doesn't have a special meaning. That's an epiphany to a lot of psychologists and neuroscientists who think that cortisol is a stress hormone, as opposed to just one way of signaling. So you can think about the brain as a system and the body as a system, and they're interacting with each other. You could think about the brain and the body as a system that is interacting with things outside the body. You could think about the four of us as a system. You can place those boundaries where you think it makes sense. But basically, any system is trading in signal patterns. The meaning of what's occurring is in the pattern; it's not in the individual parts. For example, Nick Lane and I talked about electromagnetic signals that mitochondria generate, and that may be another way the body can signal the brain about metabolic status. That would mean there would have to be a set of receptors for those if they were interoceptive signals. He was thinking about electromagnetic signals from mitochondria in neurons signaling each other, but I said mitochondria are in the heart too, and in the gut. I hadn't appreciated that any cell could generate electrical activity, but that means that if there were a receptor somewhere in the brain, that could be a global signal about metabolic status, some kind of allostatic signal that the brain could receive. He suggested nanoparticles, like iron. We have a scan where we scan for iron, but we use it as a control in fMRI studies, to control for signal that will interfere with the magnetic signal of the scanner. We thought that if this is a metabolic signal, we would expect the concentrations to be higher in certain places than others. We looked, and in fact, that is where there seems to be more concentration.
That doesn't mean anything other than that this is a really important question to ask in a more controlled way. My point is that it's a really different way of thinking than people in our domains are used to. The small amount of your work that I've been able to read and understand in detail suggests to me that Ben is exactly right: we are talking about relational meaning, but at completely different scales.

[30:05] Michael Levin: Could I ask a couple of questions? Going back to the first part of what you were saying about the debate on emotions, could you give me an idea how much of that is related to the hard problem? Are any of the issues about first-person perspective, or is the debate about behavior and physiology?

[30:35] Lisa Barrett: Very few people think about it in terms of the hard problem. There is an assumption, broadly in psychology and in neuroscience, that science is objective. Here's how we would think about science: we create conditions under which we will experience things. Observations are experiences of scientists that we then quantify with numbers in some way. We don't bifurcate nature. We don't say these things are objective and those things are subjective. There's a whole history in psychology of how that happened. Basically, that's our view. The view of a large number of scientists in our field is that science is objective, and what they mean by objectivity has undergone historical change. They're using a 19th-century definition of objectivity, which holds that observations, because they are automated by technology and made publicly, are either free from human concepts and experience or at least minimize their bias. They're doing a third-person kind of science that assumes there is an objective, verifiable pattern by which to identify, in a perceiver-independent way, a state of anger or a state of fear or a state of sadness. The assumption is that when this putative circuit for anger triggers, there will be a definable physiological pattern, a definable pattern somewhere in the brain, a definable expression, and all of these will be diagnostic of that state. In older versions of this, it was an essence: necessary and sufficient conditions for membership in the folk category anger. Now people would say it's a prototype. Your face might not look the same every single time. You might not scowl every time. Your blood pressure might not go up every time. But there's a family resemblance to this prototype. The prototype is fixed, even if your response isn't.
And your response, my response, Ben's response, Karen's response, the responses of hunter-gatherers living in Tanzania, maybe even the response of a rat, will all have a family resemblance on some or all of these features. That's the view. Their epistemology is that there is a viable third-person vantage point, and they demote the reports, the subjective experiences, of their human participants. From our perspective, every observation you make as a scientist is an experience of some sort that you've created for yourself. If Ben is our subject, I look at Ben, observe, and quantify his movements in some way. And I have the experience of Ben as angry. And we ask Ben, how do you feel?

[34:29] Lisa Barrett: And Ben says, I feel sad. On that other view, we're right and Ben is wrong, because Ben can't possibly know his state. He has all kinds of reasons for misreporting, even if we assume we've created conditions where he will be as honest as he possibly can, and there are moments where there's no way he could know his state but we could. Our view is that what is real in that moment is that we experience him as angry and he experiences sadness. That's what's real in that moment. That's the pattern we have to try to figure out. Buried in that definition of objectivity is a prioritizing of certain experiences over other experiences: the experiences of scientists matter more. My experience of Ben as angry is taken to be closer to the ground truth than Ben's experience of himself in the moment as sad. Whereas we would use an older definition of objectivity, rooted more in Francis Bacon around the time of the scientific revolution, which says every human has a point of view. We all have concepts and categories. We can't escape them. The way you do science is to try to minimize any particular bias. The way you do that is by trying to come to consensus over the data with diverse points of view, using lots of methods, some of which would disadvantage your hypothesis and others advantage it, and you use them all. Or if you're doing an analysis, you do a multiverse analysis, where you vary every parameter in your analysis and get a distribution of results. Then you interrogate that distribution, as opposed to picking the parameters so that you have one result, potentially the one that favors your particular perspective.
So it's called transformative interrogation, where you have a community of scientists actively engaged in a self-critical examination, but the community has to be diverse. I don't mean ethnically diverse, although I'm sure that matters; I mean diverse in your starting assumptions. That's the way to get to, not truth, but usable, justified knowledge. That doesn't solve the hard problem either, but it does acknowledge the fact that all science is first-person science. The claim that it's third-person is just a way of saying that my experience as one kind of expert counts more than your experience as a different kind of expert. That's our view.
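The multiverse analysis described above can be sketched concretely: run every combination of analysis parameters and interrogate the resulting distribution, rather than reporting a single estimate from one hand-picked pipeline. The pipeline below (outlier cutoff, smoothing window, baseline) is a hypothetical stand-in, not a pipeline from any of the speakers' studies.

```python
import itertools
import statistics

def run_analysis(data, outlier_cutoff, smoothing_window, baseline):
    """One 'pipeline': trim outliers, apply a trailing moving average,
    subtract a baseline. A stand-in for a real analysis chain."""
    trimmed = [x for x in data if abs(x) <= outlier_cutoff]
    smoothed = [
        statistics.mean(trimmed[max(0, i - smoothing_window): i + 1])
        for i in range(len(trimmed))
    ]
    return statistics.mean(smoothed) - baseline

def multiverse(data, grid):
    """Run every combination of analysis parameters and return the
    full distribution of estimates, not one cherry-picked number."""
    names = list(grid)
    results = []
    for combo in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, combo))
        results.append((params, run_analysis(data, **params)))
    return results

data = [0.2, 1.1, -0.4, 0.9, 5.0, 0.3, 0.7, -0.1]
grid = {
    "outlier_cutoff": [2.0, 10.0],   # include or exclude the 5.0 point
    "smoothing_window": [1, 3],
    "baseline": [0.0, 0.1],
}
estimates = [est for _, est in multiverse(data, grid)]
print(f"{len(estimates)} universes, estimates range "
      f"{min(estimates):.3f} to {max(estimates):.3f}")
```

The spread between the minimum and maximum estimate is itself informative: if the conclusion only holds in a narrow corner of the parameter grid, that corner deserves scrutiny.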

[38:25] Michael Levin: On the topic of managing the sensory interfaces, I'm thinking of Andy Clark type of ideas, the extended mind. How does it decide where the boundary actually is?

[38:45] Lisa Barrett: That is a decision that is made continuously and it varies. Do you want to say something about this, Karen?

[38:54] Karen Quigley: I was going to use the example of getting in the car.

[38:56] Lisa Barrett: Yeah. Yeah.

[38:57] Karen Quigley: When you're walking around the world, the boundaries of your sensory surfaces are putatively at your skin, although depending on what you're doing it could be quite different. Let's say you get in your car. Now the boundaries of your actions, the boundaries of your body, have extended out to the edges of the car; it's your peripersonal space, basically.

[39:21] Lisa Barrett: Yeah.

[39:22] Karen Quigley: We would see that as highly flexible based on the current context and what your actions are.

[39:30] Lisa Barrett: Michael Graziano did these studies at Princeton, where he was doing electrical recordings from neurons, and he identified neurons in prefrontal and premotor cortex that he called bubble wrap neurons, which start to fire very frequently, their spike trains speed up a lot, the closer a physical object comes to the animal's body. What's really interesting is that that boundary changes depending on the state of the animal. It looks like when the animal is metabolically compromised, the boundary is farther out. When the animal is allostatically balanced and everything's running smoothly, the boundary is closer to the animal's body. There are a lot of "me, not me" systems in the body, like the immune system. So I think the boundary of where you end and where the world begins isn't always at the skin, and it rarely is; it's always fluctuating. There are some interesting cases. Maybe you had this experience when you were an adolescent. I had this experience when I was pregnant. I was constantly whapping things with my belly. It's not that I forgot that I was pregnant, but there would be some difference in the amount of growth and then I'd be walking into things; it was otherwise not explainable. You hear adolescents talk about how they don't know where their body is in space. I think there are also some interesting cases where people don't update when they need to. The car is an example of something that fluctuates, or a pen in your hand: it becomes part of peripersonal space, but sometimes you don't update. I think that where this peripersonal boundary is is also related to the experience of time, like how long you think things take.
You could create a just-so story about how this came to be when animals developed distance senses like vision and audition, senses where you're sensing something at a distance, as opposed to a proximal sense like olfaction, touch, gustation, or anything interoceptive. This framing may be somewhat unique to us: a lot of people make a distinction between exteroceptive, meaning outside the body, and interoceptive, meaning inside the body, sensory signals. We think instead in terms of proximal senses versus distal senses, because they're processed very differently: where the signal compression happens and the temporal dynamics seem very different for distant versus proximal senses. Distance senses came last. Proximal senses were there first, and they're more tied to movement. There's increasing evidence that signals from proximal senses, in the way they're processed, gate the sampling of distant senses like vision and audition.

[44:06] Michael Levin: These are issues we grapple with all the time at the cellular and even subcellular level: exactly that change of boundary, that flexible boundary between self and world. In particular, both in natural biological cases and in all the weird stuff we do, where we either instrumentize something and give it a sense it never had before, or connect it to some crazy engineered thing in a hybrid mode.

[44:36] Lisa Barrett: I will also say one other thing that there are senses that humans have that we have no sensors for that the brain computes. Temperature is a really good one. Skin temperature — we have no sensors on the skin for temperature at all.

[45:00] Karen Quigley: You mean wetness?

Lisa Barrett: We have no wetness receptors. You feel wet when you take a shower, when a raindrop hits you, or when you're swimming, but there are no sensory signals for wetness; it's computed from a combination of temperature and touch. There are other examples too. We were talking about the kinesthetic sense of your head, where your head is positioned in space; that's a combination of five different sensory signals. Or flavor, which is a combination of olfaction and gustation, what's called taste. What most people call taste is really flavor. Sam would know a lot about that.

[45:46] Michael Levin: When you were talking about relational meanings, are there scenarios you know of where a given set of events has multiple relational meanings, where different observers look at the same thing and have different interpretations of it?

[46:08] Lisa Barrett: I'll just use the very tired example of seeing red to make the point, because even though philosophers use it a lot, it's actually a really good example. Normally we see an object that's red. I'm looking around for a red object and I don't see one, but an apple is red, and you think the redness is in the apple. But red, the property of red, is a property of the relation between the signals coming from the apple, the signals that your retina transduces, and the signals in your brain. Neurotypical people have three types of cones with three different types of opsins, and you need all three in order to take light reflecting off an object at 620 nanometers and see red. That's not all you need, but it is necessary. If you have a person or an animal with only two cone types, with two opsins, they would experience that wavelength as a muddy, greenish brown. And so we say, or people say, that they're colorblind, meaning red is in the apple: if you can't see the red, then you're colorblind to the reality of the red apple. But there are also some humans with four opsins. They're rare, and they're mostly women, but they do exist, and they mostly share the same fourth opsin. They parse the visible light spectrum with many more categories than we do, so they would experience 620 nanometers, in the same visual context, as some other color. But if neurotypical humans had four cone types, then that apple would not be objectively red; it would be objectively some other color, and those of us who have three cone types would be colorblind.

[49:06] Lisa Barrett: What happens with objectivity is that we prioritize the biology of certain people over other people, and then we call it objective. That happens everywhere, with lots of different examples. Some of them are very basic visual examples, and some are more social examples, where people come with a different neural context, a different set of categories that their brain is equipped to make, and they experience the same signals extremely differently.

In the predictive processing, Andy Clark way of thinking, if you combine that with anatomical evidence, what it seems like is that predictions are not for perception; they're actually for action. The action is planned first, and the sensory prediction, so the perception, is a consequence of the action plan. What's really happening under the hood is that the action plan is there first, and lived experience is a consequence of the action, not the other way around. So when we say that people experience things differently, embedded in that is the assumption, based on the anatomy, that they will be forming very different action plans when confronted with the same set of sensory signals.

I can present stimuli to you, and you will experience the signals one way; then I can make one change, and you will experience the signals completely differently. I can show you another image, take it away, show you the first set of signals exactly the same, and you will experience them completely differently. It's a party trick; I use it all the time on audiences. We have done careful brain imaging studies where we do this with subjects: we show them an initial visual image, then give them a second image, take that away, and show the first image again. The pattern of BOLD signal activity is different than the first time. We can also show them the same thing three times, and it doesn't change. An intervening experience changes their experience of the first pattern of signals, and it doesn't revert. The way they make meaning of the first set of signals has changed, and it's changed pretty much forever.

[52:06] Michael Levin: We didn't get to Ben's stuff. Should we make a new one?

[52:13] Lisa Barrett: We absolutely can. But I wouldn't mind hearing, in the remaining two minutes, since I just talked the whole time without slides or showing you anything, what your initial thoughts are.

[52:26] Michael Levin: I think it's very compatible with a lot of the stuff we're doing. If we change scale and substrate a little bit, a lot of this carries over. We could use some of these models, and vice versa, map this onto some really ancient cellular stuff that's going on in the body at all scales.

[52:49] Lisa Barrett: That'd be really great.

[52:51] Michael Levin: Yeah.

Lisa Barrett: That'd be really exciting.

[52:52] Michael Levin: Yeah.

[52:53] Lisa Barrett: I also think there's an implication here for how we do science. Our way of understanding how the brain is processing signals from the body, or how the body is constraining the brain: are both true? It just depends on what you're focusing on. We can use that to think about the epistemology and even the metaphysics of what we're doing as scientists. Karen is still rooted in the nuts and bolts of the science, but I've been dipping my toe into this other world of thinking about the epistemology and the metaphysics of how we do science and what we think we're doing, exactly.

[53:47] Michael Levin: I think that's a great area to get into. Our contact with it right now is this weird thing we call Mom bot, which is a joint project with Josh Bongard's lab and Doug Blackiston. One way to see it is as a robot scientist: a thing that sits in our lab, with an AI that makes hypotheses about which stimuli to give to the cells to make certain biobots. It physically makes the xenobots with those stimuli, then observes them in terms of their shape and behavior, goes back, revises its hypotheses, and tries again. In that sense, it tries to make discoveries in morphogenesis. That's one way to think about it: an automated robotic discovery platform. But the other way I like to think about it is that this thing is basically a reverse hybrid. The typical hybrids people make take a brain from a fish and put it in a little cart that drives around. This is the reverse. What you have here is an AI that is exploring morphospace, and the body it has to explore with is the living cells, the frog cells. So, whatever level of intelligence it may or may not have, the body through which it experiences anatomical space is the living material. It uses the biobots as the outer surface to feel around with.

[55:23] Lisa Barrett: That's very cool.

[55:25] Michael Levin: Isn't that wild? People always talk about embodiment and they always think it has to be running around in physical space. This thing sits still as far as our obsession with 3D space is concerned, but it's exploring morphospace.

[55:37] Lisa Barrett: I know we're out of time, but I have to say this one thing, because this is really interesting to me. Some of the features of reality that we take to be fundamental: is this solid? We assume that this is real, objective, perceiver-independent; that's traditional realism. I think we experience this as solid because of the kinds of bodies we have. If we were subatomic particles, this would not be solid; it would be mostly empty space. The relational nature of many of the things we take to be primary properties, like shape and solidity, is hidden from us, because we all have bodies that are very similar, and so we all experience these signals as solidity. And this is a fundamental aspect of relational realism, this metaphysics that I was talking about, that is almost impossible, maybe not possible, to test: we're like fish in water; we can't escape the water. It's a bit like the hard problem, where we have to study consciousness through consciousness. Anytime we make a new discovery, it's because there's a reverberation or a pattern in a signal that we didn't expect. Dark matter, for example: there's some pattern that we don't expect, and that tells us that something else might be there. So it's almost impossible to expand our island of knowledge, because we're limited by our sensory surfaces. This is really cool because it suggests a potential, not a solution, but maybe an avenue for dealing with this problem.

[58:06] Michael Levin: Chris Fields and I have this paper on diverse spaces. As you said about solidity: what do barriers look like in transcriptional space? What does it feel like to be walking around a bent physiological state space, or, my favorite, anatomical morphospace? There's a metric of distance, you can send signals across it, and you can wander around in it. I think that's exactly what groups of cells do. They live in these weird spaces, with no doubt weird perceptions.

[58:42] Lisa Barrett: One thing that we think is that some of the things we call illnesses are actually different physiological spaces for people, spaces outside the biologically typical range. That is an idea that we've had, but we haven't been able to figure out how to create the experiences for ourselves that we call observations; we haven't been able to figure out how to study it.

