Conversation with Lauren Ross #1

Philosopher of science Lauren Ross discusses how scientists use explanations and causation, covering mechanisms and levels in biology, emergence and control, pragmatism versus metaphysics, and the role of mathematical and mechanistic models.

Show Notes

This is a ~1 hour discussion with philosopher of science Lauren Ross (https://sites.socsci.uci.edu/~rossl/) on the topic of explanation and causation: what does it mean to look for explanations in science, what counts as a good explanation, and how do you know when you have one?

Lauren's book: https://www.cambridge.org/core/elements/explanation-in-biology/743A8C5A6E709B1E348FCD4D005C67B3

CHAPTERS:

(00:00) Background and explanation landscape

(11:09) Mechanisms and explanatory levels

(19:03) Control, emergence, and surprise

(30:37) Pragmatism, metaphysics, and science

(40:54) Philosophy, norms, and mechanisms

(47:32) Mathematical explanation and necessity

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Michael Levin: Can you talk about your background and your interests and where you are in this field now and what you're working on?

[00:09] Lauren Ross: The field I'm in is philosophy of science in general. I focus on philosophy of biology, neuroscience, and medicine as the scientific domains. The topics I focus on are typically causation and scientific explanation. And the work that I do is looking at and providing an account of types of explanation that we find in these different scientific contexts in a way that captures the success of science but also doing this philosophical work of, in many cases, providing normative guidelines for settling disagreements, or being able to do more than just science reporting, saying when we have a good scientific explanation, settling some debates in scientific context, but philosophical work that's both descriptive and normative in a way that informs how we should think about scientific explanation. The same with causation. These are two topics that come up together in many cases because the standard view in philosophy of science, at least currently, is that many scientific explanations are causal, although what we'll talk about here today is the sense in which some might fall outside of that scope. There has also been, especially more recently, philosophical work on non-causal explanation involving certain types of mathematics. Similar to ways in which I understand a good amount of your work, some of the similar interesting points of overlap are interest in causation and explanation, but I view my work as having pragmatic elements, pluralistic elements, and focusing on having this, you've referred to it as an observer-focused framework. That's how I think about the work that I do in a way that's not always found in philosophical work. Sometimes there's a more God's-eye view or outside the observer picture of causation or explanation. For me, I'm always including the observer or scientist in how I think about it. It's an overview of the area I'm in. It's philosophy of science. I have a background in medicine. 
I got my MD before getting the PhD in philosophy of science. That certainly informs how I think about these topics and why I focus on the sciences that I focus on.

[03:18] Michael Levin: Fantastic. Especially that last point about the observer. Beyond the scientist as observer, we also look at other cells and parasites, all the different observers observing each other and trying to hack each other within the system. Could we start by talking a little bit about explanations, how you see them, what the different options are, and anything normative that you want to say about them? I think in science, especially with our students, everybody says we're trying to explain things. And then when you actually dig in, what does that mean exactly? And what would make you happy as an explanation? Because I think we've got a number of scenarios where it's not clear at all what would actually count as an explanation and what we're looking for. So talk a little bit about that.

[04:16] Lauren Ross: A first important point and step to mention is that for philosophers of science, when we want to provide an account of explanation, part of what we care about is specifying how that project is distinct from other projects that scientists engage in, like giving descriptions of the world, being able to make predictions, and even being able to classify different types of objects. So for us, to say that a scientist or someone in everyday life has an explanation is really making a pretty significant claim. You're giving a deep understanding of the world. There are a lot of implications we can talk about as well. For us, we need to be able to show how a candidate explanation isn't just a description of the world. It isn't just something that has predictive success. So if a model has predictive success, that won't be enough for us, because mere correlations can be predictive. We don't think that the rooster crowing is an explanation of the sun rising, even though we might be able to use that in predicting that outcome. There are lots of other examples. So that's a first step: distinguishing different projects that scientists are engaged in. It isn't just showing that a scientist has a legitimate explanation versus a non-explanation, but also that it's not a mere description. Once we're in that space, part of what a number of current philosophers of explanation use as a base is the deductive-nomological (DN) model, which was long the received model for thinking about explanation in science. It involved two main components: having a law of nature and making a deduction from that law of nature. That model was articulated by Hempel. But there were criticisms of that model that led to our current position in philosophical work on explanation, which is the view that, instead of thinking of explanations on the DN model, many explanations are causal in nature.
That was the second main wave of explanation. That causal account of explanation solves various challenges that the DN model had, and that became the current favored way of thinking about scientific explanation: that scientific explanations are causal. When scientists are explaining something, they're identifying the causes that are responsible for the outcome, maybe the mechanism that produces the outcome. More recently there has been criticism of that causal model for being too restrictive.

[07:42] Lauren Ross: Now there's work that suggests that while there are a number of explanations that are causal, there are also other types of explanations that don't fit that causal picture and that are non-causal. There's a lot of work on what those look like. They typically involve a mathematical element or piece that doesn't just represent causal structure in the world. Different philosophers have provided these accounts of non-causal explanation. You engage with some of this literature in your own work. I think there's interesting similarities to some of your discussions of patterns. The current stage in philosophy of science pays a lot of attention to two main categories of explanation: causal explanations and non-causal explanations, sometimes called mathematical explanations, but there are different labels here. There are other types of explanation that receive less attention. Functional explanation might be another category. There are different views about whether that should fall in the causal category or how to think about it. But essentially in current philosophy of science work, those are the two main categories that get significant attention. In my work, I've engaged with both, and have paid more attention to causal explanation: different types of causes and causal systems that scientists cite in their work. A first step into causal explanation is understanding how to define it in the first place — what is causality? Once you answer that question, you can ask: you've given a definition of causation, but what about these different types of causes, causal structures, and causal relationships that scientists are citing in their work — pathways, or, if they're interested in a possibility-space framework, something that looks different from our simpler ways of thinking about causal explanation. There is an interesting pluralism that shows up in science that we need to capture and accommodate. 
Nancy Cartwright has this paper title that is "Causation: One Word, Many Things." You talk about causal explanation as if it's a unified monolithic thing, but there are many different types of causation that scientists discuss in their work. They seem genuinely different. Part of the project of a philosopher of science is to say: if many explanations are causal, what kinds of causal things are scientists citing? What are these different types of causal systems that they're studying and identifying in their work? That's the landscape for this work on scientific explanation.

[11:09] Michael Levin: A couple of questions that I'd be curious to know your thoughts on. You had mentioned mechanism as a kind of explanation. Talk a little bit about what that is. Does that always mean downward in terms of parts? Is it like what counts as a mechanism?

[11:28] Lauren Ross: Great. There's a complicated and intricate history here with philosophical work on mechanism. The first main discussions of mechanism that show up in philosophy are the 17th-century mechanist views of Descartes and many others that compare living systems to machines: simple machines, pulleys, levers, fountains that have water flowing through them, analogizing that to blood flowing through vessels in the human body and in the bodies of other animals. That's the first main notion of mechanism that gets discussed in philosophy, or that we refer to in our current ways of thinking about mechanistic explanation. You can see there's this comparison of living systems to machines. In later work, Herbert Simon and Bill Wimsatt discuss mechanism in a way that's similar but different. For them, the focus is often on hierarchical causal systems where there are lower-level causes and some higher-level effect, sometimes with an expectation that the causes are spatially proximal, like the simple machine analogy. After that come the more recent philosophers who call themselves "new mechanists" in philosophy of science: Bill Bechtel, Carl Craver, Lindley Darden, and many others in a tradition that started receiving a huge amount of attention with the Machamer, Darden, and Craver paper in 2000. Ever since then, in philosophy of biology and neuroscience, the dominant view has been that all causal explanations are mechanistic. But what mechanism means varies depending on who you talk to. Even for the same philosopher, the ideas have changed a little over time. One way that most of them have changed is that they've broadened. There's first a more narrow notion of mechanism that's more reductive, similar to the earlier notions. You see pictures of these mechanisms where you have lower-level representations of causes interacting and then a higher-level behavior of the system. Causal elements in a neuron (ions, ion channels) are all interacting.
Firing behavior is a higher-level outcome; or, if you're explaining memory, that is a higher-level behavior, and you would decompose and localize that effect into lower-level causal parts. It starts with a reductive focus and with that hierarchical picture. There's a second main picture, which is a more etiological one. That's more of a linear chain picture of mechanism.

[14:30] Lauren Ross: This makes sense because there's sometimes reference to mechanism as intermediates between a cause and an effect. That's another common way mechanism is used: mechanism as what sits between cause and effect, as in a drug's mechanism of action. The two main notions that have received attention are that more hierarchical picture and the etiological picture. But then the challenge is that a lot of these philosophers want to claim that all causal explanations are mechanistic, but scientists are citing causes that don't fit those two pictures, like downward causation, which of course you engage with. How on earth do you capture causes at a higher level that influence something at a lower level if you have this hierarchical picture where they're all at a lower level producing a higher level? And even the etiological picture can't capture cases of downward causation. That's just one example where there are challenges in the mechanist program, because they really wanted to say, and still many of them do want to say, that all explanations are mechanistic. Sometimes their notion of mechanism is narrow: it means something, a clear type of causal system, perhaps that hierarchical one. In other cases, mechanism is synonymous with just having a causal system, and so it's not that helpful; we already know that many explanations are causal. If the attempt is to say something about how causal explanations work by saying they're mechanistic, but mechanism just means you have a causal system, then it's hard to make progress, because mechanism means very different things to different philosophers. Sometimes I think it's more helpful to just ask: what kind of causal system does one have? What are the features of the system? Are the causes at different scales? Are the relationships of a different nature, strength, stability? 
We're often not very clear about what mechanism means: basically we have that more narrow notion and that broad notion, and there's a lot of crosstalk. A lot of the challenge is just communicating about causation, because mechanism is a status term. It's an important term for philosophers and in many scientific contexts, but it means different things to different people.

[17:35] Michael Levin: What is the thinking here: are explanations supposed to bottom out at some point? Is it OK if they don't? If they do, what does that look like? How do they end, if they end?

[17:51] Lauren Ross: So for me, the way they end has to do with first specifying the explanatory target of interest. Once a scientist has specified the target they're interested in, identifying the explanatorily relevant factors involves identifying the factors that give you control over that target. The catchphrase I have for this is: explanation isn't a game of how low can you go, but of what gives you control. And once you have that control, in some cases you have scientists saying the causal action is at a higher scale. Kenneth Kendler says that about psychiatric conditions. Scientists say there are circuits that give you a better explanation for some diseases. I think a way to understand the claim that the causal action is at a higher scale is that those are the factors that give you control over the target. And in many cases, you get better control with those higher-level factors than the lower-level ones. So that's how you would justify it.

[19:03] Michael Levin: I think that's incredibly important and also very poorly understood today. This is something that we deal with a lot. We have many examples; for example, in our work on unconventional cognition in cells and tissues, we can show that by taking some of these things seriously and accessing them either through the bioelectric interface or through various other interfaces, you can achieve degrees of control that were previously unachievable. What people always try to do is they see that and they say, I see that underneath there's just chemistry. If you look low enough, yes, you will always just see chemistry and say, okay, good, then that's how it should be. I'm very interested in—you can maybe tell me if this is one of those ways that people think about explanation. I'm always interested in the forward-looking. What does it help you control, but also what does it help you discover? Because it's one thing to look backwards and say somebody did something cool and now we're going to drill down and show you that it was all just molecules. That's one thing. But to me, a better and a really good explanation is one that helps you find the next interesting thing. The problem is you never see that from any one experiment. After many years of a research program you look back and say, yes, that did in fact open up, or it didn't. So that's what I always emphasize: what does it enable, the fecundity of the thing? Is that something people think about or is that not really one of the?

[20:47] Lauren Ross: Absolutely. I think this is one of the most important goalposts for philosophers and for accounts of explanation and of other important topics and concepts in science, because if the account and the concept aren't useful or functional or don't serve a goal, then it's hard to know why we should prefer it over another. I think those are important goalposts to have for thinking about a lot of these different types of higher-level or theoretical concepts. So this is how I think about causation. It should be an account that is useful or functional. It should also capture what scientists are interested in, what they care about. It would be interesting to talk with you about this more, because I do encounter this quite a bit with scientists: they feel this draw, this attraction to being reductive, and they feel like they can't get out of it. There are at least two things I see, and I feel like some of your work has touched on this too. One of them is that the explanatory target changes. So you start with the target initially, you give an explanation that involves factors that give you control at a higher scale, and then they're like, well, what about this part of it? So they change the target. It's uncontroversial that if you keep changing the explanatory target, you can go down forever, or make it seem like you need that lower level of detail. But I wonder if sometimes keeping the target fixed is a way out of that. Sometimes it's assumed that if you say higher-level factors are more explanatory than lower-level ones, the audience hearing that thinks you're denying that there are things at lower levels. It's almost like they think you're denying that there is chemistry, and you're not denying that at all. And so I wonder if it's sometimes helpful to keep separate this agreement and acceptance of elements at different scales from the different question here. 
The question is: what's explaining the outcome? I wonder if that's a way to get around that too.

[23:45] Michael Levin: It's really interesting, and this is probably related to issues of emergence and what that means. This is something that I hear from time to time. I remember being stunned the first time I heard it. It was after a conference where I gave an hour-long talk. The first half covered all these things that you can do by using this bioelectric interface. In the second half, I showed what a lot of people want to see: on a single-cell level, here's the transduction mechanism, here's what happens to the cells, here are the genes that are turned on and off, all this stuff downstream. Somebody said to me, "The first half of the talk I thought was really interesting and important, but then you showed the mechanism and it turns out to be just chemistry. So now I don't care anymore." I thought, that's an amazing thing to say, because knowing that there's stuff underneath drains the magic: it was all cool when it did seem like magic, but now you see that, no, it's not fairies after all. There is chemistry underneath, even though what I'm trying to point out is that you couldn't have, and in fact didn't, get there by watching the chemistry. You got there in a completely different way. And I think that's interesting. It relates to something we deal with a lot, which is this business of emergence. Is that really different from surprise? If you have an explanation for it, then is it really emergent? What is the role here of the observer's expectations and the degree of surprise they're getting? I think it's a really interesting intersection between the actual science and the psychology of observers and what they expect.

[25:30] Lauren Ross: It also helpfully reveals a distinction between what explanation means colloquially and what we're interested in in philosophy of science, which isn't always just someone giving a reason or rationale or telling a causal story. Explanations are provided by scientists, and the work that they do is incorporated into an explanation. But for an explanation to be good, it isn't just that it satisfies an audience. It isn't just that it makes them feel good. This interestingly relates to notions of explainability debated in the AI context. What does explainability mean? Is it that it satisfies an audience, that it's persuasive to them? We would never view those as the criteria that make an explanation good, because explaining COVID by appealing to 5G wireless networks might be very satisfying for some audiences, might persuade them. Or if you give me an explanation that has justification but comes from physics, and I'm not an expert in that domain, I might not be satisfied, but that doesn't mean it's not a good explanation. So it is interesting to sometimes encounter colloquial meanings of that term. The focus is a bit more narrow in philosophy of science, but it still has some psychological element, and it's important to keep distinct how to think about that element. Surprise is one way. Explanatory why-questions are viewed as capturing the explanatory target. So scientists ask a question: why is it the case that the fruit fly has this eye color versus another, or, if it's about intelligence, some more complicated target? Philosophers of science have sometimes viewed the question as capturing something they're surprised about, and then the explanation reduces the surprise. It's interesting to think of that framework of surprise reduction as a kind of psychological element. How do we want to include psychological elements in an account of explanation? 
Something might be very satisfying or persuasive, but we don't want those to be the criteria for good scientific explanations. You look back at the history of science: explanations that were persuasive to those communities. We want to say that explanations of hysteria or even scurvy that appeal to long lists of climate factors or factors having to do with the character of individuals or their religion are not successful explanations.

[28:54] Michael Levin: I think it's very interesting because it again speaks to this time asymmetry. It's one thing, after somebody does something or finds something, to drill down, if you want, to the lowest level. Beforehand, the question that I always bring up is: what actually led the way? Which view got there at the beginning? In other words, did it actually facilitate the discovery? There are some interesting examples of this too, besides our stuff: Karina Kaufman and I just did a review of clinical cases where people have very minimal brain matter and normal IQ. You find these things and people start laying epicycles onto neuroscience and say, "well, there are maybe some hidden redundancies." You can do that after the fact, but the reality is we don't have anything that predicts that would happen. That's a problem. You can try to wedge some things in after you find the thing, but it's a problem that we actually don't have anything in neuroscience that would say, "oh yes, one out of 1000 times you're going to see this." You don't have anything like that. I really think looking forward matters more than just looking backward at explaining and trying to provide an account. What I see a lot is patching up the paradigm: you find some crazy new thing, we can stick that in somewhere, you just patch up some stuff, and then you're okay. I really like keeping an eye on the forward-looking things. How did you find it? Because you weren't going to find it with the standard tools.

[30:37] Lauren Ross: I wonder what you think about this in terms of capturing similarities and differences between the kind of fields that we're in. In a good amount of philosophy of science work on explanation, the focus is different in the sense that philosophers are sometimes focused on cases of more consensus, and they look like simpler explanations, where we're trying to extract the principles, features, justifications of genuine explanations. So we focus more on cases where there is consensus, where there's more of a complete explanation or some kind of rationale. That's where we spend our attention and we want to get lessons from those cases and then extend them. But in your work, is it more forward-looking, challenge cases and more focused on these complex explanatory targets and things that we need to do to give explanations of them, ways that the perspective that we've had needs to change and adapt? When you've engaged with a lot of philosophical work, how do you think of the similarities and differences between your project and this other kind of work?

[32:20] Michael Levin: It's also interesting. There's one thing that I've found a lot, and I still don't know, maybe you could tell me, what the alternative is. Sometimes what happens is I'll describe a way of thinking about, for example, one of these things where we apply aspects of cognitive science to things that aren't brains. And I say, look, this thing has memory, it has learning capacity, it's building a model of the outside world, we can change it, we can rewrite the memories. And then somebody will say that can't be a good way of thinking about it. I'll say, but look at these things that it's enabled us to find. So empirically this thing is leading, and they say, "well, empirical success isn't a marker of anything." I just don't know what the alternative is; I find this is actually really powerful. They'll say things like, "it's a category error": you have these ancient categories and you've just transgressed them. I keep thinking, but these categories aren't fixed; they're not given to us by God. You have to update them when science comes along. How do you even know what your categories are? I feel like there are a lot of folks who have way more allegiance to some of these categories than to anything that happens in the physical world. And so that actually ends up being the way out: confronting somebody with "look, here are the things that it enables." Is there something fundamental I'm missing here? What else is there?

[34:08] Lauren Ross: This is really interesting and helpful. I might get myself into a little hot water with other philosophers. There are so many different perspectives in philosophy on how to think about causation and explanation. The framework that I use is hooked up to pragmatism and utility. It's got to make sense relative to a goal, and often these are goals we have in science. That's not always the focus that other philosophers have. In more metaphysically motivated work on causation, it doesn't matter that your account captures progress that scientists have made, or even that it's useful for certain types of goal. Control is an example of one kind of goal that matters quite a bit. For me, my work should capture both that scientists have identified causes that give control and different types of control that they care about. It's one thing to show that you have causation by showing that a factor gives control, but there are different types of control that a cause can give. Causes operate on different time scales. Control can be faster or slower. Control can differ in how much it boosts the probability of the effect and how stable it is across contexts. Those are all important differences that matter practically for how we make changes to outcomes in the world, how we attribute responsibility and blame, and how we give explanations for biological systems producing things. The precision with which they need to produce outcomes, even in very simple cases, is exquisite; if they don't produce an outcome on a particular time scale, chaos breaks loose. We don't often capture those in philosophical accounts of causation or explanation. If I gave those reasons to other philosophers, they might not care about them as much because there are other approaches that adhere more to categories or to thinking hard about distinctions, thinking about causation in a way that doesn't attend to scientific work, scientific goals, and everyday life considerations. 
A lot of the work that I do on philosophy of causation compares how we causally reason in our everyday lives to how scientists causally reason. These different types of control that causes give matter to us in everyday life too. This kind of work that I'm interested in is sometimes called methodological or epistemological, maybe in contrast to heavy metaphysics, where methodological means there's a goal.

[37:00] Lauren Ross: The account of causation we give should be functional. We should be able to show how it matters and what hinges on it: how it matters for how we reason in our everyday lives, how scientists reason. There are all sorts of ways you can start to show how it matters. You can drop causes out of your picture and say they matter less if they have some of these features; if they're operating on some time scales, you can say, I don't need to care about those causes as much. Or we might prioritize causes with certain features: causes that are stronger than others, causes that are more stable. This gives us principled reasons for selecting some causes as more explanatory than others. It just captures these more functional, pragmatic features that have to do with making changes in the world. It's interesting being a philosopher of science who works on causation. I sometimes joke that I have to tell philosophers why science matters, and scientists why philosophy matters. And the audiences differ. Philosophers of science are a very unique group; we're a subfield of philosophy, so sometimes it's much easier for me to speak with them and with scientists. But with a more metaphysically focused audience on causation, I can't come into the room with the assumption that they care about pragmatism, or that they even think scientists are good at identifying causes. I have to justify that and talk a good amount about it. What is this like for you? Because you speak to so many different audiences and so many different scientists.

[39:54] Michael Levin: This is something I hear a lot. We like the data and the new capabilities. Stop talking about all this philosophical stuff. You don't need it. Just do the experiments. I keep saying, you don't know which experiments to do or what your experiments mean if you don't think about these things. I try to make a very concrete case. In fact, I've given that specific talk: here are the 14 things that we discovered only because of a specific philosophical approach or a dissatisfaction with something that was going on, not because just turning the crank on some kind of philosophy-free science was going to get you there. I think it's really important to have both, and this is not a common opinion. I think it was Dan Dennett who said, "You don't have philosophy-free science. What you have is philosophical baggage that you've smuggled on board without knowing what it is."

[40:54] Lauren Ross: I love that quote. Yeah.

[40:56] Michael Levin: That happens all the time. I think there are a lot of people in the community, not everybody; there are some very sophisticated scientists on this point, but there are a lot of people who think that it's not a useful undertaking and that you can somehow do away with it and it's going to be obvious what things mean, what to do.

[41:16] Lauren Ross: This will partly depend on what philosophy means to those people and what it refers to. Part of what's interesting about our discussion is that both explanation and mechanism came up as words that can mean different things to different audiences. Part of what can be helpful about scientific, theoretical, philosophical work is really getting precision about how we define those terms, what they mean for us, and then making progress based on a definition. Sometimes it's also saying, "this is not what I mean by mechanism," or, if "mechanism" refers to all causal systems and you can't say what isn't a mechanism, we have a problem. If it's a useful term, we should be able to say what doesn't sit in that category. Interestingly, philosophy is another one of these terms that can mean extreme rigor or anything goes, and then all sorts of things in between. It's really interesting attending conferences, sometimes scientific conferences, where an audience member will raise their hand to ask a question. They'll sometimes say they're asking a philosophical question; it might mean that it's not a question that's well thought out or fully formed, or it's just interesting what that means to them. In other cases, we have people who think it's funny that scholars in my profession call themselves philosophers, because it sounds like you think you're maybe Aristotle or something. Even for philosophy, being able to say what it is and what it's doing is non-trivial. It again means very different things to different people. Sometimes it means this more unprincipled "we don't know yet"; in other cases, it means rigor. Those are very contrasting.

[43:38] Michael Levin: That's an interesting point. Minimally, what I try to do with people in my lab, even if I don't necessarily bring up philosophy, is ask them to work backwards and ask themselves: what kind of an explanation would you be happy with? In other words, do you know what you're looking for when you find it? Because what you often find is that the default is always down. That's how people in our field are educated these days. Everything is down. So when people immediately start going down, I say: step back, imagine you've already got all the stuff that going down would give you. Is that actually what you want? Is that going to make you happy in the sense of what you actually want to accomplish? I think it's interesting to think about what the default is, what you're expected to do, and what the reviewers at journals expect. This is an example I always give: I know what the reviewers want for all these things, and I give it to them, in terms of here's the transduction mechanism, here's the second messenger, and here are the genes. But if you actually step back, it turns out that almost none of that has been useful in driving new discoveries, getting to new capabilities, all the things that have come out of this research program. Almost none of it has come from that kind of drill-down. You have to have it, otherwise you can't publish. But if you look back and ask what the utility of all that stuff has been, it isn't, I think, what people think it is; that isn't what generated all the progress. Maybe it will someday, and at some point it's nice to have those facts, but that isn't what's generated the progress.

[45:18] Lauren Ross: It's really interesting to think of the norms and social situation of science that scientists have to engage with and deal with, where certain terms and perspectives are dominant and valued. I see this a good amount with the mechanism term and concept. Dani Bassett and I worked on a paper where we examined the mechanism concept in neuroscience, where it's a status term: grant calls and top journals will specify that if scientists want to get their work funded and published, they need to reveal mechanistic insights and identify mechanisms. But then the editors very quickly say that they can't tell you what a mechanism is. Two or more scientists reviewing the same paper totally disagree about whether a mechanism has been provided or not. So it's a status term for sure. It's totally unsurprising that a scientist who's working with and examining causes at a higher scale will call it a mechanism. It makes sense that scientists have to work within these norms, but then it can constrain progress, which I feel like is part of what you're suggesting: you accommodate reviewers, but that can actually prevent and stifle progress or getting work in. Sometimes for me, the struggle is finding other words and other analogies that get us away from "under the hood" and "going down." When I think of similarities between my work and yours, I think we're both non-reductive, but it'd be great to have a different word than just saying what you're not. Maybe you've also suggested that emergence doesn't always fit the bill.

[47:32] Michael Levin: I don't love emergence. I didn't want to come up with new words for a field that already has a lot of vocabulary, but maybe it sounds like we need one. What you were describing is this forward-focused demonstration of fecundity or generativity: that you've compressed past knowledge in some way that allows you to get to the next new thing. It doesn't just explain the stuff you already had; it guides you to better creation of new experimental setups, new capabilities, and, on the practical level, new biomedicine. That's what I think. Chris Fields has a saying: arguments are only settled by technologies. On alternate days I think, I don't care, we're doing basic stuff here; on other days I think, once it's in the clinic, I won't need to have a lot of these arguments, because then it'll be completely obvious. I go back and forth. But that's the kind of thing that I'm looking for, and it often isn't downward at all. Sometimes it is, but often it isn't. In the last 10 minutes, I also want to get to this issue: what explanation looks like in mathematics and what mathematical explanations look like. Because now we're getting to the point where, because of some of the specific work that we've already done, some of the Xenobot stuff and some crazy stuff that isn't out yet that I want to show you sometime once it gets a little more cooked, I think that the right explanation for a bunch of stuff now is: it's math. That's the answer. It's just a fact of the math. I want to know what you think about it. How do mathematicians handle explanation of mathematical facts? And how do we use mathematical facts as explanations for things that go on in the physical world? I see evolution using these facts as free lunches. How do we think about that?

[50:00] Lauren Ross: I think there's so much connection between the way that you invoke math in a lot of the evolutionary cases and some examples of mathematical explanations in philosophy of science. In my view, the current work in philosophy of science on scientific explanation with respect to non-causal mathematical explanations is underdeveloped. There's a lot more to say. Some of the cases that get brought up are overly simple. They're not always that representative of what scientists care about. It's probably harder for philosophers to find good examples. Sometimes they use very trivial everyday-life examples of mathematics being explanatory, where it's just not capturing scientific practice or methodology. This is a very open space. There are also lots of different views on how math is explanatory. Currently, it's been appreciated that mathematicians give mathematical explanations in math. In the context of scientific explanations, it's not enough to give those examples, because there's an interest in mathematical explanations of the natural world, or mathematical explanations in science. There's a distinction made between what kind of mathematical explanation one is focused on. In this work on scientific explanation, what you have to show, if you're going to argue that there are legitimate types of non-causal mathematical explanations, is that scientists use math that isn't representing causation to give an explanation in science of some natural phenomenon in the world. That's the standard that a lot of philosophers have tried to meet. Alan Baker's work, Bob Batterman's work, and Marc Lange's work are in this space. They're all focused on explanatory targets in the natural sciences where you just can't give an explanation without a math piece. There are different ways that they suggest this works. What we need more of in this space are explanations that actually capture scientific work, I think, in a better way.
Some of the explanations that get discussed are pretty simplified. In that "Explanation in Biology" book that I recently put out, I discussed three main categories of non-causal mathematical explanation: topological and constraint-based, optimality, and minimal model explanations. The optimality category is where a number of these evolutionary ones show up, which I think are very similar to cases that you examine, where there's this mathematical feature that we see in the world. We think there's a causal story for how that was selected. It's serving a goal, or it has some influence or impact. The suggestion here is that the mathematical piece involves this mathematical dependency relation that isn't empirical. Part of what philosophers need in this space is to show that that math is really not representing causality in the world. So if there's a dependency relation that's mathematical and not empirical, that's one way in which this has been argued for, which I think is easy to argue for with many of the cases you discuss. This is where a lot of work on explanation has focused. There are lots of interesting open questions here. Causation matters to us because it gives us control. Do mathematical explanations give us the same thing? They also allow for that. Do they serve other goals?

[54:45] Michael Levin: Yeah.

[54:46] Lauren Ross: Yeah.

Michael Levin: I think they do, because, and I know this is a weird way of looking at it, I tend to think that when people say, "these networks have this amazing property: they can learn," where do you suppose that comes from? They say, "well, it's just a fact that holds about the world. It's emergent." That's a very mysterious explanation, because it means you're just going to write it down; you'll collect it in some big book of surprising emergent facts. But if you don't think that there's a structured, ordered space of these kinds of things, then you're not going to find the next one. Knowing about them and being able to study that space lets you say, "I see this, and I understand what's happening with the math, so I'm going to predict that by doing this and that, I'm going to access this other pattern in the math that's going to cause this and this." I think these things, if you take them seriously as a systematic research program, are very practical. What you don't get to do is change the math; the math doesn't have inputs the way biology and physics do, so you can't change it. Those things are what they are, and you don't get to change them, but you do get to change which ones you manifest in the world by building things or changing things. Then you can traverse that space.

[56:21] Lauren Ross: That's really interesting and it relates to the similarities and differences between the mathematical dependencies and the causal ones. In some of Lange's work, the suggestion is they're unchangeable in a similar way that you mentioned and that they necessitate the outcome in a stronger way than causality.

[56:54] Michael Levin: I think that's really true in a certain sense, because there are aspects of biology and physics where people can say, well, if the constants at the Big Bang were tuned differently, then this would be different. So you've got some input, some control knobs that you can then turn. But this other stuff — E, it's just E. There's nothing you can do in the physical world that would make it different. And a bunch of these crazy facts — I was watching this thing on YouTube the other day about all kinds of cases where, in six dimensions, some new thing happens, in just six; it didn't happen before and isn't going to happen again. That's just how it is. These are the kinds of things you can't change. So I do think that, in a way, this is stronger than stuff that can get traced back to a certain tuning of physical constants and so on.


Related episodes