Watch Episode Here
Listen to Episode Here
Show Notes
This is a ~1 hour discussion with Ben Lyons (https://interestingessays.substack.com/), Eli Sennesh (https://scholar.google.com/citations?user=3z4ALYgAAAAJ&hl=en), and Jordan Theriault (http://www.jordan-theriault.com/), where they each give talks and then we have a brief Q&A on topics related to neuroscience, economics, and specifically the interoception and allostasis view of behavioral and psychological processes.
CHAPTERS:
(00:00) Intros and Research Backgrounds
(04:10) Allostasis and Brain Architecture
(17:19) Predictive Categories and Metabolism
(36:11) Control-Oriented Evolutionary Modeling
(48:31) Origins of Navigational Control
(52:21) Brainless Tadpoles and Regulation
(57:08) Bullwhip Effect and Brains
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Benjamin Lyons: Thank you both for being here. I'm super excited about this. I've been talking to Mike for a long time about how I think there's a lot of really important connections between the work he does and the interoception and allostasis ideas that Lisa Feldman Barrett and her colleagues have been putting out. I'm hoping both of you can do a presentation; we'll have a little bit of time left over for a discussion. Jordan and Eli, could each of you introduce yourselves, and then we'll start with Jordan's presentation and then do Eli?
[00:30] Jordan Theriault: I'm Jordan Theriault. I'm an assistant professor at Northeastern. I've been working with Lisa and Karen Quigley for eight years now. I came on as a postdoc. I originally had a background in moral psychology and social neuroscience. I transitioned out of that and got much more interested in the constructivist and biologically grounded perspective that Lisa and Karen have been running. Ben, you cited some of the stuff that I had on social pressure. I got interested in thinking about how social pressure might work from a biological and allostatic perspective: how others' expectations compel an affective motivation driven by interoceptive signals. I can talk about that. That's not in the presentation I've got here, but the stuff I got more interested in is that Lisa pushed me to think more about the underlying metabolism as a governing force for motivation. I got interested in particular problems in brain glucose metabolism that underlie the BOLD fMRI signal. My goal right now is to stop treating the BOLD signal as just another measure of activity that people lump in with spiking and EEG, and instead to look at BOLD as a measure of interest in itself, largely to do with waste clearance and local pH homeostasis. The talk I've got is a broad overview of how the lab thinks about some of these topics and a bit about the metabolism stuff at the end.
[02:28] Benjamin Lyons: Fantastic. Thank you for that. Eli.
[02:31] Eli Sennesh: Hi, I'm Eli. I did my PhD dual-posted between the IASLab, where Jordan was, and Jan-Willem van de Meent's Deep Probabilistic Programming Lab. I stretched between the computational side of machine learning and the theoretical side of neuroscience. Then I did a postdoc with Andre Bastos in Nashville, where I looked at the implementation of predictive processing in the visual hierarchy. I totally derailed my early career by finding that the theory we all went in with was actually wrong, which is what Popper says you're supposed to do in science, but everyone knows that's not actually how you do it.
[03:30] Jordan Theriault: It happens.
[03:32] Benjamin Lyons: Fantastic. So thank you both very much for that. Jordan, could you jump into things then and share your presentation with us?
[03:39] Jordan Theriault: You see this okay?
[04:09] Benjamin Lyons: Yes.
[04:10] Jordan Theriault: Feel free to stop me. These are some slides from the presentation that Lisa and I put together last year for a group talk in Paris on scientific concepts in neuroscience. It was a gathering of philosophers and neuroscientists in Paris to cover some of this stuff. Let's jump into it. First, I just wanted to point out that this is all a team effort. I think Lisa gets a lot of the attention for a lot of this, but we really try to make sure that people know the lab itself is run as a team. The way Lisa and Karen used to divide things up is they'd say that Lisa handles everything above the neck and Karen handles everything below it. For me, coming in as a third director in the lab, where do I fit in? I think looking at metabolism and an underlying systems perspective to join those two halves and think about the bigger picture has been interesting for me. I think metabolism crosses those boundaries a little bit too. There's room for everybody here. The way this is structured is to talk about a couple of scientific principles that have organized the team's thinking. The first principle I want to start with is that we've tried to think a bit differently than other labs: to think about the brain as a whole organ and to ask what, from that perspective, the brain's most important job is. We were really inspired by some work from Peter Sterling and Simon Laughlin. They have a book called "Principles of Neural Design," which, if you ever see Karen's copy, is just dog-eared to hell with different footnotes and sticky notes in it. Inspired by that, we really want to have an explanation here that spans rats, humans, nematode worms. From that perspective, the idea is that the brain's most important job is not to think. It's not thinking. The most important job from our perspective is to coordinate and regulate the tissues and organs of the body. Consider that we've got over 600 muscles; the brain is balancing dozens of hormones coursing through your blood; the heart is pumping blood at a rate of four to five liters per minute. It's directing the GI system as it digests food. It's coordinating the kidneys and liver as they excrete waste, coordinating your immune system to fight illness. We collectively call that whole process of regulation allostasis. The other reason why we use this term allostasis is to draw a distinction. We want to say that allostatic regulation is about energy regulation and the efficiency of energy regulation at different levels of output. We want to contrast that with homeostasis, which is about regulating toward a set point. The idea is that with allostasis, there's not a permanent set point. There's a flexible whole-system change that has to adapt to perturbations. The other distinction is that we think of allostatic regulation as predictive. It's about avoiding errors before they come up and ideally leveraging other parts of the system to avoid an error-driven correction. From a homeostatic perspective, regulation is reactive. There are individual systems in the brain and body that are working from a homeostatic perspective, but at a collective, broad level, things need to be coordinated and regulated through these broader allostatic efforts of the brain. This is drawing a lot on Sterling and Laughlin's work and their cross-species analysis of the requirements and design principles of a brain.
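To make the contrast Jordan draws here concrete, here is a minimal toy simulation (not from the talk; the dynamics, gains, and timings are all invented). A reactive, homeostatic-style controller corrects a regulated variable only after it has drifted from its set point, while a predictive, allostatic-style controller issues the correction as a learned demand arrives, so the deviation it has to mop up afterwards stays much smaller.

```python
# Toy contrast between reactive (homeostatic) and predictive (allostatic) regulation.
# All dynamics, gains, and timings are invented purely for illustration.

STEPS = 200

def demand_at(t: int) -> float:
    # A recurring, learnable perturbation (e.g. a predictable metabolic demand).
    return 1.0 if 80 <= t < 120 else 0.0

def simulate(predictive: bool) -> float:
    x = 0.0                      # regulated variable; 0 is the desired operating level
    cumulative_deviation = 0.0
    for t in range(STEPS):
        if predictive:
            # Allostatic-style control: the demand has been learned, so the
            # correction is issued as the demand arrives, before x drifts.
            correction = demand_at(t)
        else:
            # Homeostatic-style control: respond only to the error already present.
            correction = 0.9 * x
        x = 0.8 * x + demand_at(t) - correction   # leaky dynamics plus perturbation
        cumulative_deviation += abs(x)
    return cumulative_deviation

print("reactive   cumulative deviation:", round(simulate(False), 2))
print("predictive cumulative deviation:", round(simulate(True), 2))
```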
[08:33] Jordan Theriault: A nice pull quote from Sterling and Laughlin's stuff here is to say that the core task of all brains is to regulate the organism's internal milieu by anticipating needs and preparing to satisfy them before they arise. That's a dicey summary statement of what we mean by allostasis. What that means is that as the brain's regulating and coordinating the systems of the body, it's also modeling the sensory consequences of those changes from the body, and that's called interoception. Again, for us, interoception is the brain's modeling of sensory changes that are resulting from allostasis. It's supporting allostasis in the same way that somatosensation is supporting feedback about skeletal motor control. The idea is that your brain's always regulating your body. It's always modeling the body's sensory state. This is something that's always on. It's on even when you're lying perfectly still in a resting state fMRI. A lot of people have a very cognitive focus when they're talking about resting state fMRI. They'll say it must be modeling internal thought or self-directed thought or mind wandering. We make the point that there's a lot of stuff that has to be going on all the time, especially as it regards bodily regulation. You can't have these cognitive blinders on and think that it all has to do with thinking. That's the mistake of thinking about whether the brain is for thinking or whether it's for allostasis. An example of how we've tried to study this: we had a resting state connectivity analysis that tried to identify brain architecture responsible for coordinating and regulating the body. This work is from 2017, and we had a replication at 7 Tesla that's in press. First, the lab found tract-tracing studies of macaques and other mammals to identify cortical areas that have direct projections to brainstem nuclei involved in regulating the autonomic nervous system, immune system, and endocrine system. They were looking for cortical areas that were important for performing allostasis, as evidenced by direct projections. They identified the homologous voxels in the human brain and used those as seed regions. They then looked at resting state connectivity between each of those seed regions and the rest of the brain, where each seed region would produce one discovery map. They performed cluster analysis on those discovery maps to find clusters within each, and identified overlap across the discovery maps to find points of convergence from the seed regions. They found two distinct networks and some overlapping regions that seem critical for allostasis. They replicated this in HCP data and more recently in a large 7 Tesla sample we have. That analysis produced one integrated system composed of two brain networks, resembling the default mode network on the left and the salience network on the right. Those overlap in a set of hubs, some of which are members of the rich club of hubs, thought to be the backbone for communication across the brain. One thing to point out here is that there's a lot of work looking at gradients across the brain. Daniel Margulies organized a conference in Paris; he'd been looking at these gradients running from the sensory surfaces to higher-level abstraction, from the default mode network to the sensory surfaces. He interpreted that as sensory surfaces on one end and the default mode network on the other end, for abstraction. On that view, there's a gradient in the brain between exteroceptive sensation and abstraction. We're finding the same networks here, so it's consistent with that.
From our perspective, those core networks aren't necessarily associated with abstraction. They're related to a different sensory surface, which is the interoceptive sensory surface. You have a gradient running from exteroceptive sensation all the way to the interoceptive surface. Other people, with cognitive blinkers on, will interpret that interoceptive surface as being about abstraction instead of about visceral regulation.
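The analysis Jordan describes, one discovery map per tract-tracing-derived seed region followed by a conjunction across maps, has a simple generic skeleton. The sketch below is a numpy-only illustration with placeholder data, seed names, and thresholds; it is not the published pipeline, which used real seed ROIs, cluster statistics, and replication samples.

```python
# Minimal sketch of a seed-based resting-state connectivity analysis with a
# conjunction across seeds. Data, seed names, and the threshold are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 300, 5000
bold = rng.standard_normal((n_timepoints, n_voxels))     # stand-in for a denoised BOLD run

# Hypothetical seed regions: in the real analysis these are voxels homologous to
# cortical areas with direct projections to autonomic/endocrine brainstem nuclei.
seed_voxels = {"vaIns": [10, 11, 12], "sgACC": [200, 201], "pACC": [350, 351, 352]}

discovery_maps = {}
for name, voxels in seed_voxels.items():
    seed_ts = bold[:, voxels].mean(axis=1)                # average time course within the seed
    # Correlate the seed time course with every voxel -> one discovery map per seed.
    z_brain = (bold - bold.mean(axis=0)) / bold.std(axis=0)
    z_seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    discovery_maps[name] = (z_brain * z_seed[:, None]).mean(axis=0)

# Conjunction: voxels whose connectivity exceeds a (placeholder) threshold for every
# seed, i.e. candidate points of convergence across the allostatic seed regions.
threshold = 0.1
overlap = np.all([m > threshold for m in discovery_maps.values()], axis=0)
print("voxels in the conjunction:", int(overlap.sum()))
# (With random data the conjunction is essentially empty; real data would show the networks.)
```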
[12:56] Jordan Theriault: We've got a bunch of stuff from our group making that point here. Summing up the first principle here is just to say that your brain's regulation of your body is at the core of brain structure and function. And therefore, what we'd say is that it's at the core of your mind. There's a core aspect organizing cognition that comes from this allostatic perspective. Then what we'd say is that your brain's most important job is not thinking. It's not feeling, it's not seeing or hearing. What we'd say is that your brain's most important job is to run your complicated body. Once that animation stops here: we have a paper that came out that was really trying to drill down and make this point. This perspective just came out in Neuron. We are saying it's not the thought that counts; allostasis is at the core of brain function. If you want the most up-to-date version of all of these arguments, that's the one that we just put out. We tried to synthesize a lot of that anatomical and functional connectivity work into that paper. That's point one. Point two is to talk more about not just where allostasis might happen in the brain, but how specifically the brain achieves it. The idea here is that allostasis is about predictive regulation of the body. The most efficient way to regulate a body is to predict the body's needs in advance and to correct those predictions when necessary. The claim that we want to make here is that the predictions in the brain are organizing concepts in the mind. Here are a whole bunch of papers. The point here is to show that there's a broad and growing literature that's emerged in the last decade focused on understanding how the brain predicts. Predictive coding, free energy: there are hundreds or thousands of papers providing evidence for predictive functioning in the brain. What I'm going to do here is to talk about the gist of this, but with the lab's own particular theoretical twist on it. The general idea is that your actions and experiences begin as memories, and those memories are similar to the present in some way. You're not necessarily aware of yourself remembering, but this is what's happening. We have a prior experience that's projecting onto the current experience. The idea is that your brain is re-implementing signal ensembles from the past that are similar in some way to the present. So in psychology, when things are similar to one another, we'd call them a category. Prediction signals, from this perspective, are categories. When the brain's predicting, it's constructing a category of possible futures. These predictions are continuously being tested against the physical signals streaming into the brain from your body and from your sensory surfaces as they're sampling the outside world. To see how this works, you can think about the last time you were thirsty and drank a glass of water. Within seconds after draining the last drops, you probably felt a lot less thirsty. It actually takes about 20 minutes for that water to reach your bloodstream and for the associated signals to reach your brain. What is relieving your thirst 20 minutes before those signals reach the brain? The argument is prediction: anticipation of those allostatic and interoceptive consequences.
Over the course of your lifetime, your brain has had thousands of opportunities to learn that certain motor commands involved in drinking, along with the feel and taste of water in your mouth, are eventually going to quench your thirst. Another example I really like: have either of you seen this example before or do you recognize this picture?
[17:19] Benjamin Lyons: I don't.
[17:21] Jordan Theriault: Lisa's used this one a bunch of times. The example here is that if you look at this image, it's difficult to make out what this is. Your brain is searching through a whole lifetime of past experiences, issuing thousands of guesses, weighing probabilities, trying to answer the question of what is this thing most like? The point is it's not asking what is this, it's asking what is this thing most like? It's trying to draw on memory and past experience to put this ambiguous sense data into a category. If you're still seeing only black and white blobs, your brain hasn't found a good set of predictions yet, and you're in what we call a state of experiential blindness. I can cure your blindness. Oops, if the animation works, there we go. If you haven't figured it out, this is a bee. You can see its head here, a wing over here, the tail over here, a leg. And then we'll put it back. It should be possible to see these blobs as a bee now, pretty reasonably. The point is that your brain is creating new predictions thanks to the color photograph that I showed you. Those predictions are changing the way that you experience the blobs, the ambiguous sense data. What we've done is construct a prior experience, and your brain reuses that prior experience to construct the image of a bee, so that you see a bee even though there's not really a bee present on the screen. It's not what's on the screen here that's changed. You've changed, your prior experiences have changed, and they've helped you construct this experience. The whole process, which is making meaning of ambiguous sense data using past experience, is responsible for all the experiences that you've had in your life and the actions that you take. The claim is that the same process of meaning making is how psychological events, like instances of emotion, when we talk about the construction of emotion, are made as well. They're assembling ambiguous sense data, a lot of it from the body in the case of emotion, to categorize it into a particular experience. Zooming out a little bit, the idea is that physical signals are constantly streaming into the brain. On their own, those physical signals don't have inherent meaning or inherent psychological meaning, just like you experienced with the blobby image. The physical signals become meaningful as they meet the prediction signals that are already present in your brain. If those signals match, then the incoming signals are going to be categorized and explained. If they don't match, we call that a prediction error. When your brain encounters a prediction error, it can adjust what it predicts next. We call that learning. Category construction is really about meaning making. What your brain is doing is starting off with a category of possible futures and ending with specific instances of actions and mental features. This is what we're saying is the basic operating principle of your brain. In fact, from this perspective, a lot of the major topics that psychologists would normally be interested in fall under this more general umbrella of meaning-making. Predictive processing, interpreted with this particular spin that we have on it as category construction, is uniting a lot of these different folk categories into one common framework.
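The predict, compare, and update loop described above has a standard generic form: issue a prediction, measure the mismatch against incoming signals, and use that prediction error to adjust the next prediction. The snippet below is just the textbook delta rule with made-up numbers, included only to make the verbal description concrete; it is not the lab's model.

```python
# Generic predict -> compare -> update loop: a prediction is issued, compared
# against incoming sense data, and the mismatch (prediction error) drives learning.
# This is the standard delta rule, shown only to make the verbal description concrete.

learning_rate = 0.3
prediction = 0.0                                     # current best guess for an incoming signal

incoming_signals = [1.0, 1.0, 1.2, 0.9, 1.1, 1.0]    # placeholder sense data

for signal in incoming_signals:
    error = signal - prediction                      # prediction error: what the prediction missed
    prediction += learning_rate * error              # learning: adjust what is predicted next
    print(f"signal={signal:.1f}  error={error:+.2f}  new prediction={prediction:.2f}")
```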
[21:57] Jordan Theriault: It's a common framework that has efficient energy regulation at the core, from an allostatic perspective. To summarize the second principle: your brain's constructing your experiences. What you're seeing, what you're hearing, what you're feeling, it's constructing those as it's predictively controlling your body. The concepts that we're using, even the concepts that we're using as scientists, are ultimately stemming from how your brain has clustered up the physical signals that it's received in the world and how it's chunked those up into categories to help facilitate meaning-making. That is predictive processing, but it's predictive processing interpreted through a constructivist lens. The last principle, which is more of the work that I've focused on since working with Lisa and starting up here, is that none of this stuff comes for free. Encoding all of these physical signals in the brain as prediction error has metabolic costs attached to it. I've spent a couple of years trying to disentangle some of the research related to those costs. As an entry point I'm focusing on the BOLD signal, the blood-oxygen-level-dependent signal. When we're talking about observing activity in fMRI, this is almost always what we mean. We're showing people a stimulus. We're seeing a BOLD increase in associated regions like visual cortex. What I want to push on here is: what really is activity? Activity is a clunky term. It's combining a lot. It's saying that something's happened. I'm not the only one who has said this. In 2008, Nikos Logothetis, as well as a paper by Singh, urged caution in how you interpret the BOLD signal. This matters because Nikos Logothetis's work had established this link from the BOLD signal itself down into local field potentials, and then from there to local synaptic activity. I'm using dotted lines here for correlational and solid for causal. The point is that these links helped settle this interpretation that the BOLD is an index of local synaptic activity, and many people left it at that. We could go on doing a lot of fMRI experiments and thinking we've got this index deep down into synaptic activity. It's more complicated than that. The BOLD signal itself reflects changes in local blood oxygenation. You get a BOLD signal in the first place because active regions increase blood flow more than they increase oxygen use. What's unusual, and what I've been interested in, is that something else also happens during that same process, which is that local glucose metabolism increases quite substantially at the same time as everything else is happening. To understand why that matters, a quick refresher. Most cellular energy is produced by metabolizing glucose and oxygen together. In fully oxidative metabolism, you get about 32 ATP per glucose. At rest, that is what's happening across most of the brain. You get a coupled ratio of oxygen to glucose of about six oxygen per one glucose consumed. It's fully oxidative. In these active brain regions or in these stimulus-elicited patterns of BOLD activity that we're seeing, we get something different happening, which is anaerobic metabolism. Active regions use glucose without oxygen, which gets us about two ATP per glucose. It also generates free hydrogen and lactate.
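A quick back-of-envelope check on the numbers Jordan just cited: roughly 32 ATP per glucose when fully oxidized, roughly 2 ATP per glucose from glycolysis alone, and an oxygen-to-glucose ratio near 6:1 at rest. Exact ATP yields vary by source, so treat these as round numbers; the "activated" example values below are purely illustrative.

```python
# Back-of-envelope numbers from the talk: fully oxidative vs. anaerobic glucose metabolism.
# ATP yields are the approximate round numbers used in the discussion (sources vary slightly).

ATP_OXIDATIVE = 32      # ~ATP per glucose when oxidized completely (O2 : glucose ~ 6 : 1)
ATP_ANAEROBIC = 2       # ~ATP per glucose via glycolysis alone (also yields lactate + H+)

print("energy ratio, oxidative vs anaerobic:", ATP_OXIDATIVE / ATP_ANAEROBIC)  # ~16x

# Oxygen-glucose index (OGI): moles of O2 consumed per mole of glucose consumed.
# Near 6 -> fully oxidative; dropping below 6 during activation means some glucose
# is being used without oxygen (the extra glycolysis Jordan describes).
def oxygen_glucose_index(o2_consumed: float, glucose_consumed: float) -> float:
    return o2_consumed / glucose_consumed

print("resting OGI (illustrative):  ", oxygen_glucose_index(6.0, 1.0))
print("activated OGI (illustrative):", oxygen_glucose_index(6.3, 1.5))
```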
[26:34] Jordan Theriault: What we're getting is that what we'd normally think of as activity is in fact a couple of different things: blood flow and oxygen use give us the BOLD signal, but we're also getting a combined increase in anaerobic metabolism. When we talk about activity, we need to consider everything that's happening locally as well. I can pull up some data on this too if you're interested, but I'm just going to skim through this for a second. The point here is why this matters. It matters because anaerobic metabolism means lactic acid buildup, lactate, and free hydrogen. These active brain regions are shifting their metabolism in a way that could potentially acidify themselves, which matters because neural function is sensitive to pH. As regional acidity increases, firing rates will decline. Fortunately, the increase in blood flow for the BOLD signal can potentially clear that acid out. Because that blood flow is increasing more than the smaller increase in oxygen metabolism, what we get is a BOLD signal. If we put all that together, you can get this causal pathway where we think about a cascade from anaerobic metabolism to lactic acid to blood flow to clear it to BOLD signal. You have a potential problem of local pH, which the blood flow helps regulate. I'm not the only one who's proposed this. DiNuzzo et al. and Doug Rothman and people at Yale Medical School have done some great stoichiometry work and models to map this out and make this argument. We can go a little further and ask what specific neural activity might be driving this cascade. To get at that, we can superimpose the Logothetis model from 2001 and update it since we know more now than we did in 2001. The Logothetis paper emphasizes correlation between local field potential and BOLD signal. What we know now is that it's specifically high-frequency gamma band oscillations that the BOLD signal is most correlated with. Those gamma oscillations are caused by interactions between pyramidal neurons and parvalbumin interneurons. I'll focus on the parvalbumin here for now. Those parvalbumin interneurons' spiking seems to be glucose-dependent. There have been studies that substituted lactate or pyruvate or other fuels, and it seems that if you don't have glucose, you'll get a suppression of firing rate and suppression of normal spiking within these parvalbumin populations. Normal oxidative metabolism doesn't work as a substitute. There seems to be a glycolytic component that's necessary here. In our 2023 paper, which is a big review of all of this in depth, we hypothesized that those parvalbumin interneurons are driving this anaerobic cascade that ultimately produces the BOLD signal. The critical part is that all of these links can give us insight into function. There's evidence accumulating that links these gamma band oscillations to prediction error encoding. Some of the other work I'm doing, which I don't have time to get into, is that people have gone a ways in thinking about their fMRI signals in a very stimulus-focused way. I think there's a way of reinterpreting the BOLD signal as a response to prediction error encoding rather than just stimulus encoding. It's a lot to digest here. What I'm trying to do is lay a spine through this research that we can connect other observations onto. This isn't complete. The goal is to give a big picture overview. To connect this back to the big picture: as we talked about before, your brain performs two major functions.
In category construction, the brain is using memories to categorize physical signals in the present. When it's learning, it's updating those categories. There are two major directions that signals flow in the brain.
[31:11] Jordan Theriault: For learning, signals are flowing bottom up as prediction error. If our model is right here, the idea is that the prediction error signals have some quirks. For one, we think that they might be using glucose to make less energy. They might be less efficient. They might be anaerobic. If they're acidifying the regions that they're passing through, blood has to flush these regions to regulate pH. This whole metabolic cascade of learning and prediction error encoding is potentially visible through the BOLD signal that we can measure in fMRI in the first place. There's a particular type of signaling here that has regulatory responses to flush out waste lactate, which makes it visible to us as an fMRI signal. On the other side, within category construction, signals are flowing top down as predictions. We think of the resting state as a case where people are not being bombarded with prediction error, where they're in a predictable environment. Glucose metabolism in that case is energy efficient. It's largely oxidative. Because it's oxidative, it's not creating lactic acid that has to be flushed. Because it's not creating lactic acid that needs to be flushed, this type of top-down signaling or category construction signaling, we'd suggest, might be invisible to BOLD. Even within resting state fMRI, the fluctuations that we're getting may come from many different signals; it's complicated to work out what it's all from in the resting state. The clear hemodynamic, stimulus-elicited response that's the bread and butter of task-based fMRI might be indicating a very particular bottom-up process and missing a lot of this top-down signaling, which is really the bulk of the metabolic cost of the brain, a brain that accounts for about 20% of whole-body metabolic cost in humans. If we're right, and we're at the early stages of this, then the idea is that prediction error is supported by what's really an unusual type of metabolism. At the same time, prediction might be supported by a much more typical metabolic pathway, which is creating more energy and isn't creating the acidic waste that needs to be recirculated. In the resting state, this is almost exclusively the type of metabolism that's being used. This has some psychological implications as well, because it's implying that the brain is metabolically optimized for category construction, for being in these predictable environments. By contrast, it's suggesting that learning actually has a pretty big cost. If you've learned something well, then you don't need to learn it all over again. It's worth considering whether the randomized experimental designs that we use, especially in fMRI, are overemphasizing prediction error, and whether they're putting the brain into a situation that it's optimized to avoid. If your brain evolved to deal with environments where stimuli aren't popping suddenly into existence at the center of your field of view like this, then if you found yourself in an environment like that, you'd expect it to be pretty metabolically taxing on the brain, because it's putting the brain outside of the zone that it evolved to deal with. You'd expect people to want to get out of that situation or to have to implement regulatory strategies in the brain to try to deal with that perturbation, which is very outside of the naturalistic norm. I'm going to skip over this and jump to the conclusion.
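The two-pathway picture sketched in this part of the talk, where bottom-up prediction error drives anaerobic metabolism, lactate and acid, a compensatory blood-flow response, and hence a visible BOLD signal, while top-down prediction runs oxidatively and leaves little for flow to clear, can be caricatured in a few lines. The toy below only restates that hypothesis as described; the rate constants and the mapping from prediction error to metabolism are invented, and it is nothing like the quantitative stoichiometric models of DiNuzzo, Rothman, and colleagues.

```python
# Cartoon of the hypothesized cascade: prediction error -> anaerobic metabolism ->
# lactate/H+ buildup -> extra blood flow to clear it -> BOLD signal.
# All rate constants are invented; this only illustrates the qualitative claim.
import numpy as np

STEPS = 100
prediction_error = np.zeros(STEPS)
prediction_error[20:40] = 1.0              # a burst of bottom-up prediction error (surprising input)

lactate = 0.0
bold = np.zeros(STEPS)
for t in range(STEPS):
    anaerobic = prediction_error[t]        # hypothesis: error encoding leans on glycolysis
    lactate += 0.5 * anaerobic             # glycolysis adds lactate and free H+
    flow = 1.0 * lactate                   # blood flow rises to clear the acid
    lactate -= min(lactate, 0.4 * flow)    # clearance by the extra flow
    oxygen_use = 0.2 * anaerobic           # oxygen use rises much less than flow
    bold[t] = flow - oxygen_use            # BOLD ~ flow in excess of oxygen extraction

print("peak BOLD during the error burst:", round(float(bold[20:40].max()), 2))
print("BOLD during prediction-only (quiescent) periods:", round(float(bold[60:].max()), 2))
```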
If this is all true, if this model's right, then what do we do with this metaphor of BOLD activity that we'd started the section with? We need to give up on this idea that the BOLD is measuring generic function or generic processing. We can make a lot more progress by recognizing that the BOLD signal is coming from a very particular metabolic cascade that's driven by particular neural populations, which might create the need for a particular regulatory response. The argument is that BOLD is a measure of that regulatory response. Altogether, that's just to give you some perspective on where we're coming from in the lab. Summing up one last time, the three principles here that are performing the work. First was this principle that the brain's core role is to regulate the body. The second principle is that concepts are reflecting how the brain groups and organizes physical signals to allostatically regulate the body. Finally, even the simple act of encoding those physical signals is creating metabolic costs and consequences that need to be regulated. If we want to connect the mind to the brain, then we need to understand the underlying biological systems. Principles like this are general guideposts to help keep us oriented as we're working out a lot of those details. Thank you, guys.
[35:50] Benjamin Lyons: Thank you. I appreciated that. That's exactly what I was hoping for. I love the dollar signs. Let's have Eli do his presentation. Eli, don't worry about time for discussion; we'll just get to your presentation. My main goal is to make sure that Mike is familiar with interoception and allostasis on some level. I'll be following up.
[36:11] Eli Sennesh: I'm going to share my screen. I do wish to preemptively apologize. These are slides that I have recently updated to contain recent work. Other than that, they're somewhat old. Continue to share this window. The perspective that I came to in graduate school, which is, if I was in private with Jordan, Lisa, and Karen, I would call it a nuance on what Jordan has just described. In public, I might call it a difference of opinion, which is that when we talk about the brain being an organ of regulation of the body, that means that its core job is not categorization, it's control. Now, if you have sufficient recording resolution such that you can treat every spiking pattern as a different "category" or treat spike versus subthreshold activity as a category boundary, then it's categorization. This is the artifactual part. I've been working backwards through computational modeling of brain evolution in a certain way. When I was in grad school, I started from predictive coding or predictive processing as my known factor that other people had collected experimental evidence for.
[40:00] Eli Sennesh: Nowadays I would have to be more nuanced on that because, as Jordan showed, prediction error encoding seems to be linked to synaptic activity and gamma band local field potential oscillations rather than whole circuit spiking activity. Nonetheless, we can look at the neurophysiology of shared views of prediction and control in cortex. Think about the ideomotor principle that's well established in psychology and start developing equations from there. This basically gave me modeling papers where I say, let's use log probabilities as a common currency. There will be such a thing as optimal dynamics and a way of telling what your approximation error is to how you should act. You can unify with the math of predictive processing all over again while making sure that you track your regulatory targets. This is really how allostasis would map onto that control systems view of the brain. Interoception provides your sensors, visceral motor and somatomotor action provide your effectors, and you have an internal model in the forebrain that basically helps you run a controller. Now there's the question that I left grad school with, and by which I'm going to bridge to more recent work, which is, if you're doing this control as a process of bringing about pre-specified sensations, then where do those pre-specified interoceptive sensations come from? Why are some such interoceptive sensations specified through development and learning and evolution to be the ones that you regulate towards, versus others that you regulate away from? This took me a while, and I went back in evolution along the way, criticizing the tendency to outsource this question to reinforcement learning studies, because in reinforcement learning studies, in conditioning studies, we condition behavior using something the animal already cares about, which means that studies of the learning and performance of conditioned behavior can't tell us why the animal cares about that thing in particular. So eventually I read more books and papers and got all the way back towards the beginning of bilaterian brain evolution and found that there was something very understandable and interpretable going on with taxis navigation. Jordan, there was that one time in energetics group where I had this aha moment about a certain mathematical thing that you could do with the vector of the direction you're going in interoceptive space versus another vector that you could measure in a certain way. That came up again a couple of years later, when it turned out that these combinations of vectors are exactly the kind of quantity that cells found in some organisms actually measure. So then here's the hypothesis that I put down with Maxwell Ramstead in a very freeform paper, saying once you evolve the ability to navigate a physical environment in such a way that you're coupling your internal states and your behavior to meet your allostatic needs.
[43:49] Eli Sennesh: Once you evolve this ability to steer, then as the nerve cord actually folds from bilaterians to chordates and vertebrates, this steering system can be reoriented towards your internal physiology entirely. You can exapt and elaborate something that evolved to solve a very literally spatial problem to solve a metaphorically spatial problem while still using many of the same sensory and neurocomputational strategies. I've done some mathematical modeling of this myself. This is work in progress, because even last week I jotted down some new notes on how that's going to work. As for the biology, the actual steering neurons that measure these products of vectors, the strength of a gradient in the direction the animal is facing, have literally been found in the model nematode C. elegans. Extensive recording has been done from them, and we have a mathematical model that matches up exactly to what we would expect. There are specific cells you can record from, and their activity tracks the time derivative of a logarithm of a spatial density. They receive modulatory input via neuropeptides from interoceptive cells in the gut. What I'm currently working on is putting together and formalizing a very detailed model of that, and then aiming to show that when you move from bilaterians to vertebrates, we can understand this transition in terms of navigating an internal physiological space rather than a literal environmental space. We can then try to understand some of the neuroanatomy below the forebrain that the existing theory of constructed emotion treats as just more sensory and motor effectors. We can try to elaborate the theory from the beginning of evolution forwards as a way of getting at what kinds of computational models or neurocomputational strategies might be best used, rather than starting from the cortex-heavy human behavior in the scanner and then developing theoretical principles from that, which are then applied backward in evolution. This is a diagram from, I believe, Paul Cisek's 2019 paper, "Resynthesizing Behavior Through Phylogenetic Refinement." He's trying to say what the eventual evolutionary origin of reinforcement learning was, which at that time he thought was followed through evolution by the development of lineage-specific neural circuits with distinct behavioral functions. I'm trying to go back and understand these bits. Is it the apical nervous system and the blastoporal nervous system?
[47:40] Jordan Theriault: Yeah, ANS and BNS.
[47:42] Eli Sennesh: Basically, how did the evolutionary transition actually take place where this ANS and BNS are elaborated into what Paul at one point claimed was, and I quote, "the hypothalamus and the rest of the brain," which I think might be an overly strong statement, but I think we could talk about a C. elegans-style exapted steering system, plus a spinal cord, plus a midbrain, plus a forebrain, and eventually get somewhere. And that's about it for the slides I had prepared.
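Eli's description of steering cells that track the time derivative of the log of a local chemical density, gated by interoceptive signals from the gut, can be sketched as a simple klinotaxis-style agent. The toy below is a generic gradient-climbing random walk inspired by that description, not his model: the environment, the turning rule, and the hunger variable are all invented for illustration.

```python
# Toy klinotaxis: an agent senses only d/dt of log-concentration along its own path
# and turns sharply when that derivative is unfavorable. An "interoceptive" hunger
# signal scales how much the gradient is allowed to matter. Everything is illustrative.
import numpy as np

rng = np.random.default_rng(1)

def concentration(pos):
    # A single odor/food source at the origin; density falls off with distance.
    return np.exp(-np.linalg.norm(pos) / 5.0)

pos = np.array([8.0, 6.0])
heading = rng.uniform(0, 2 * np.pi)
prev_log_c = np.log(concentration(pos))
hunger = 1.0                                    # interoceptive drive (0 = sated)

for step in range(300):
    pos = pos + 0.1 * np.array([np.cos(heading), np.sin(heading)])
    log_c = np.log(concentration(pos))
    dlogc_dt = log_c - prev_log_c               # what the steering cell is proposed to track
    prev_log_c = log_c
    # Turn a lot when things are getting worse, keep course when improving;
    # hunger gates how strongly the gradient matters.
    turn_scale = 1.0 if hunger * dlogc_dt < 0 else 0.1
    heading += rng.normal(0.0, turn_scale)
    hunger = max(0.0, hunger - 0.02 * concentration(pos))   # feeding reduces the drive

print("final distance from source:", round(float(np.linalg.norm(pos)), 2))
print("remaining hunger:", round(hunger, 2))
```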
[48:31] Michael Levin: Thank you very much, both. This is extremely interesting. In the time we have, could I ask a couple of questions? A few things. I'll start with the last thing first. What I understood is this idea that the ability to steer around in space was eventually reoriented to steering internal physiology. From what we've been studying, I'd put this proposal exactly backwards. We do a lot of work on things that don't have a brain. We're talking about single cells, various embryos, organisms without brains. What we see is biology navigating all kinds of spaces long before it had the ability to navigate three-dimensional space: transcriptional space, physiological state space, anatomical morphospace during development, and things like that. We import a lot from neuroscience: memories, making predictions, learning, errors. But we see this in other spaces. I've been proposing the opposite thing, which is that very early on it learned to navigate those spaces, then it got some muscle and some nerve, and then it began to run around in 3D space.
[49:51] Eli Sennesh: If I could nuance my view there, I wasn't actually describing 3D space. I was thinking of aqueous space. It's a physical space, but there's plenty of chemical signals to navigate by. It would have been an instance of multicellular organisms with very early brains just recapitulating the behavioral strategies that were already used by single-cell organisms immersed in their environment.
[50:29] Benjamin Lyons: An interesting model system. Jordan, did you want to say something?
[50:36] Jordan Theriault: I was going to jump in to say I had the same question for Eli too. I'd written it down there, that it could be flipped the other way as well. Because one of the books we've been reading as a group is Kathryn Nave's "A Drive to Survive", which is an enactivist, biologically grounded account digging into free energy and predictive coding. One of the things I've been making notes on with that is that she is interested in what the minimal normative push is that an organism can have: why it could want one thing instead of another, or why it could be driven in some way. She's coming from the Varela autopoietic perspective, or biological autonomy. She's trying to say that at a minimum, an organism needs to preserve its boundaries and replenish its parts. What she's trying to do is to say that has to be the ultimate end that any living system is working towards in preserving its biological autonomy. Navigating toward that is what everything else has to be working towards. I was curious too whether there are ways of navigating toward that, or maintaining, as Mike was saying, a transcriptomic space or something internal to itself, so that minimal normativity can be met by some internal navigation, which then gets projected externally once the organism has to suck in outside resources to persist for even longer.
[52:21] Michael Levin: Very interesting. Another model system that you guys might be interested in: some years back, we wanted to ask a question. There's a lot of development and morphogenesis that takes place before there's a brain, and we wanted to know how much of it relies on the brain, right? It can't be all of it, we know that, but how much? One model system that we work on is the tadpole of Xenopus laevis; these are frog embryos, amphibians. And we can make these critters without any brain at all. You can remove the brain early on, and the rest of the body is completely fine. There's something subtle that happens to their tail and a couple of other things that we have to work pretty hard to notice. But the rest of it works quite well. They continue to develop. They become tadpoles. What they don't have is active behavior. Still, they make it to pretty late stages and they're fine. So I'd be curious what, if any, predictions you would have about that kind of model. One thing I wonder is, would you think then that without the brain they would be using up more energy than otherwise? Because that's very testable if that's the case. What do you think? I know this is hard in mammals, but in some of these organisms, if you could make one that just doesn't have a brain at all, what consequences would you foresee for that?
[53:56] Eli Sennesh: I think I would expect difficulties in motility. Based on the same old Carl Friston anecdote about the sponge. If you make an organism with no brain, it's going to have difficulties moving in its environment to self-regulate as a motile organism.
[54:21] Michael Levin: It's true of those animals because there is no real muscle motion without the brain. If you tap them, they will come up and do this, but they don't have any initiative. They won't get up and swim on their own much. Although we do have anthrobots and xenobots, which are multicellular organisms that navigate their environment and swim around, they don't have any neurons at all. It is hard, because without interesting behavior, it's hard to do much. But if the idea is that the brain is important for regulating the internal milieu besides motile behavior, then presumably there's something we should be able to notice about these things that's off.
[55:06] Jordan Theriault: That's interesting, because motility should be down, like Eli said, but it's hard for me to think of how to cash it out in energy terms, because part of the energetic efficiency should be energetically efficient motility. But there should be something. Part of the idea of an allostatic perspective is that there's some coordinating, a conductor function of the brain coordinating all of those internal organs. I wonder if you would see some sort of coordination to a challenge that you would not see under other circumstances. Say you were exposing it to an environment of intense heat, or heat outside of a range that it should normally be in. Do you normally see some sort of coordinated organ function to counteract that challenge? Or do you see some sort of response of individual organ systems, but nothing in a coordinated way that helps it maintain that allostatic state? It's tough because of the fact that it doesn't move; there's a lot of stuff going on there, but I'm trying to tease that apart. The two parts of the allostatic thing that we're talking about are both in play. Like Eli said, I totally agree. Eli, when you were saying that there's maybe a difference of opinion at the start, I don't think there's a difference of opinion. I would say that the brain is for motor command first, motor control for regulating internal organs. Like you said in a meeting, we're motor chauvinists.
[56:59] Eli Sennesh: I might say that there's a difference of opinion if I'm in public to people for whom category construction really has a very narrow meaning.
[57:08] Jordan Theriault: But the point is that the category construction is also a means to the end. The category construction is for efficient motor action through an environment. There are two horns of the problem. One is navigating an environment to get metabolically necessary resources. The other part is conducting and coordinating behavior of the internal viscera, which I think is more testable, like in the model that you're talking about there. If you could think about how to issue a challenge where you know what it should look like in a brained tadpole.
[57:48] Michael Levin: I think that's doable. I was thinking of that very nice slide you had going down to all the different organs, where muscle is one, but then there's all this other stuff, right? There should be predictions on that other stuff. Temperature regulation is a good one. I like it. We could test that.
[58:07] Eli Sennesh: I really do have to go, but before I go, I want to make this suggestion. Benjamin, could you help translate this into human words? I would wonder if you take an organism that normally has a brain, develop it with no brain, does its peripheral physiology suffer from some form of the bullwhip effect seen in economics?
[58:37] Benjamin Lyons: My perspective is that the brain is analogous to a financial system. It's similar to a conductor, but different. I see what you're saying about the bullwhip effect. I would expect that. I have a blog post talking about how Parkinson's disease might be analogous to a recession. So I think not having a brain or damage to the brain should be analogous to the problems we see when the financial system is harmed or limited. Unfortunately, we don't have any sizable economies that don't have a financial system. That would be very hard to develop, and we can't test that directly.
[59:14] Eli Sennesh: For Jordan and Michael, before I go, the bullwhip effect is this thing in supply chains, where essentially if you don't have tight coordination between the different firms, then unpredicted changes in supply and demand at either end can suffer from positive feedback. So a small error, a small prediction error at the retail end becomes a larger prediction error at wholesale, becomes an even larger prediction error in production.
[59:49] Benjamin Lyons: I get it, I get it.
[59:50] Eli Sennesh: At core industrial production, the error really only attenuates when you get so far back in the supply chain that you're talking about raw inputs that are always bought and sold in monumental bulk. The typical solution to the bullwhip effect in modern markets and modern firms is to have the firms tightly coordinate with each other to the point of usually sharing their databases and information systems. If you look at Amazon or Walmart, every retail sale is logged, and they are sharing the data all the way back to the manufacturers in real time so that the manufacturer knows about consumer demand.
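Eli's bullwhip description translates directly into a small simulation: each stage orders against its own local forecast of the orders it receives, so modest retail noise is amplified stage by stage, whereas sharing the end-consumer demand with every stage keeps the amplification from compounding. The forecasting rule, the overreaction gain, and the three-stage chain below are illustrative, not a model of any real supply chain.

```python
# Toy bullwhip effect: each upstream stage forecasts demand from the orders it
# receives and overreacts to surprises, so small retail fluctuations amplify upstream.
# With shared demand data, every stage plans against true consumer demand instead.
import numpy as np

rng = np.random.default_rng(2)
T = 300
consumer = 100 + rng.normal(0, 5, T)          # noisy but stable end-consumer demand

def order_variances(share_demand_data: bool):
    demand_seen = consumer
    variances = [consumer.var()]              # stage 0: consumer demand itself
    for stage in range(3):                    # retailer -> wholesaler -> manufacturer
        orders = np.empty(T)
        forecast = demand_seen[0]
        for t in range(T):
            # Naive exponential-smoothing forecast of whatever signal this stage sees.
            forecast = 0.7 * forecast + 0.3 * demand_seen[t]
            # Order the forecast plus an overreaction to the recent surprise.
            orders[t] = forecast + 2.0 * (demand_seen[t] - forecast)
        variances.append(orders.var())
        # Without sharing, the next stage only sees these orders; with sharing,
        # every stage keeps planning against the true consumer demand.
        demand_seen = demand_seen if share_demand_data else orders
    return variances

print("no coordination, variance by stage:  ", [round(v, 1) for v in order_variances(False)])
print("shared demand data, variance by stage:", [round(v, 1) for v in order_variances(True)])
```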