Discussion with Tim Jackson, Karl Friston, and Chris Fields

Tim Jackson, Karl Friston, Chris Fields and Michael Levin discuss the free energy principle, realism and metaphysics, evolution and exaptation, and how generative noise and time scales relate to novelty and construction in complex systems.

Show Notes

This is a ~1-hour conversation between Tim Jackson (https://www.researchgate.net/profile/Timothy-Jackson-4), Karl Friston (https://www.fil.ion.ucl.ac.uk/~karl/), Chris Fields (https://chrisfieldsresearch.com/) and me about the free energy principle, philosophy, evolution, and novelty.

CHAPTERS:

(00:00) Realism, FEP, Coarse-Graining

(09:55) Literalism, Attractors, Evolution

(23:43) Physics, Metaphysics, Maps

(37:06) Generative Noise And Novelty

(45:08) Time Scales, Exaptation, Construction

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Tim Jackson: I'm a biologist, so Chris and Karl don't know me. My area of specialization is, broadly speaking, chemical ecology and toxinology. I'd like to get to that area, my comfort zone, and maybe talk about the generation of novelty, generative noise, molecular babbling, exaptation, improvisation, play. That's my comfort zone. I'd like to get to talking about that in the context of the free energy principle and how the free energy principle approaches the genesis of novelty. But I would like to do the slightly more risky thing and begin with physics and indeed even metaphysics. So I'd like to motivate an argument in support of a realistic, and I've been using the term constructive realism in some of the discussions with Mike, a realistic interpretation of the free energy principle. Correct me if I'm wrong, it feels like there is a controversy surrounding this and the concern that the FEP might have some anti-realist leanings. There are also warnings — Mel Andrews, fantastic paper, "the math is not the territory." There are worries about a certain literalism that might come along with a realistic framing of the FEP. I want to look at how we can avoid that literalism, but also not slide into anti-realism and not give an account where we say things like perception is 100% hallucinatory. Maybe we can be illusionists and preserve the fact that we're subject to delusion. There is a hallucinatory aspect to perception, but still catch that in a realistic framework. I'd like to get there by talking about coarse graining as a fundamental physical principle and looking at the FEP itself as a description, as a formal framework which describes coarse graining. I think one of the probably fundamental challenges that a lot of people have with the FEP and the FEP literature is that it's not just math-heavy. It has all these different analogical formalisms. They seem to come from different areas of physics but describe essentially the same process. So the process is quite simple in its essence. There are lots of different ways of cashing it out formally. Does that sound like a reasonable thing to say?

[03:21] Karl Friston: From my perspective, yes. You've touched on 101 issues there. I think there are definitely two separate conversations: the philosophical interpretation of the free energy principle, or more specifically the application of that principle in different domains; and then the intriguing problem of novelty and play and exploration and model building and exploring various structures. On the first issue, you very nicely scaffolded the questions. I think most of them, from a physics perspective, could be fairly easily resolved, if not dissolved. Having said that, you're going to have to be very careful in explaining terms that are not familiar to me. I have no foundational training in philosophy. A lot of these distinctions I find entertaining, but very often I regard this as a spectator sport. It doesn't so much affect the application of the free energy principle. My normal line in this conversation would be that the free energy principle is just a principle, read as a physics principle. That means it's a method. All you do is apply it. You don't argue about it, or you can, but it's there to be used. It's not there to be the target of philosophy papers. How might you apply it? You can apply it in many different ways. The utility of having that principle at hand is that now you can effectively reproduce or simulate sentient self-organization, or at least it equips you with an interpretation that has a teleology in terms of perception and sentience and play and novelty, if you want to go that far.

[06:37] Karl Friston: But this is just a license to put a teleology on it; in and of itself, the principle doesn't appeal to any teleology. It's much more tautological than that. It's a description of things that are self-organised. In that deflationary aspect, it's important to remember that there is a very practical utility in the principle, because it describes things that are self-organised to some attracting set (in the classical formulation, a pullback attractor). When you know the dynamics that underwrite that self-organization, you can write down that attracting set in a computer, and then, by applying the free energy principle in terms of gradient flows or fixed point iterations, however you want to do it, you can build self-organizing systems where you specify the kind of system in terms of the attracting set that you want to reproduce and model. Once you've done that, you can take it in two directions. You can either put your teleology on it, so you can wax lyrical about this bacterium's playful exploring, and you're fully licensed to tell the basal cognition story, which is, from my perspective, a way of equipping people with intuitions of a teleological sort, which are not part of the physics, but they're certainly extremely helpful and allow joining the dots between different fields and disciplines. Or you can use digital twins, or the simulation of the system you're interested in, as an observation model for empirical data. You can take an actual system, the measurements of some system of interest, and then optimize the parameters of your attracting set or your generative model until your simulated system reproduces what you're observing empirically. That would be one way of reverse engineering the generative model that provides the best account of this particular system. That is a fairly standard procedure in the cognitive neurosciences, particularly in computational psychiatry: I get an addict or somebody who's very depressed responding to a well-defined experimental paradigm. I model choices, which could be novelty seeking, for example, formally defined under the free energy principle. I model the responses under the assumption that this subject is behaving and making decisions under this generative model with these priors. Then I optimize the priors until it reproduces the observed behavior. That allows me to phenotype the etiology of this decision making formally, using straightforward procedures. I don't think it's ever been done, but in principle, you could do exactly the same thing for a cell or an organ or something along those lines. That would be a practical response to the map and the territory. Literalism: can you explain to me what that means? Does it mean that people think that cells are actually perceiving? Is that the idea?
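
To make the fitting procedure Karl describes concrete, here is a toy sketch. Everything in it is an illustrative assumption rather than any published pipeline: a two-option paradigm, a single "novelty prior" parameter, and a softmax choice model, with the prior recovered from observed choices by maximum likelihood.

```python
import numpy as np

# Toy illustration (not any official toolbox): an agent chooses between a
# familiar option (0) and a novel option (1). Its generative model scores
# the novel option with a prior "novelty preference" parameter, and choices
# are emitted through a softmax. We recover that prior from observed
# choices by maximum likelihood -- a bare-bones version of "optimize the
# priors until the model reproduces the observed behavior".

rng = np.random.default_rng(0)

def choice_probability(novelty_prior):
    """P(choose novel) under a softmax over the two options' values."""
    values = np.array([0.0, novelty_prior])  # familiar vs. novel
    exp_v = np.exp(values - values.max())    # numerically stable softmax
    return exp_v[1] / exp_v.sum()

# Simulate a "subject" whose true (hidden) novelty prior is 1.5.
true_prior = 1.5
observed = rng.random(500) < choice_probability(true_prior)

def log_likelihood(prior):
    p = choice_probability(prior)
    n_novel = observed.sum()
    return n_novel * np.log(p) + (len(observed) - n_novel) * np.log(1 - p)

# Fit: grid search over candidate priors, maximizing the log likelihood.
candidates = np.linspace(-3.0, 3.0, 601)
best = candidates[np.argmax([log_likelihood(c) for c in candidates])]
print(f"recovered novelty prior ~ {best:.2f} (true value {true_prior})")
```

In practice, computational phenotyping fits full active inference models rather than a one-parameter softmax, but the logic is the one Karl describes: the recovered prior is whatever value makes the simulated choices reproduce the observed ones.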

[09:55] Tim Jackson: Literalism in the broader context, beyond the question of whether cells are actually perceiving, would be this: another Korzybski aphorism gets at it nicely. Alfred Korzybski is the one who said, "the map is not the territory." He also said, "whatever I say a thing is, it is not." Literalism is taking a representation or a description of something to be the thing itself, to map onto it one to one. So to say that the model is in fact describing exactly the dynamics of the system that is being modeled.

[10:33] Karl Friston: I don't want to derail the conversation, but that is certainly not the case for the free energy principle, in the following sense: in elaborating or deriving the dynamics of any given system, you are interpreting that in terms of a variational free energy that plays the role of a bound on the actual probability of occupying certain states, described in terms of the self-information or, if you like, a Lagrangian, or adaptive fitness if you wanted to. That variational bound is assumed to be infinitely tight under the free energy principle. When there is a departure or a difference between the actual dynamics of the system and its attracting set and the way that you would write down or describe that attracting set in terms of a probability distribution (which you can read in many different ways: it can be surprise and the self-information, the negative log preference of the states, a potential based upon the states that are valuable to that system and that characterize that system), then you have to assume that the free energy is equal to the self-information and that everything else follows. But because the free energy is a bound, your description of the system that is evolving to its attracting set, minimizing its thermodynamic free energy in terms of minimizing a variational free energy, now becomes only a bound, an approximation. So you can't say that it's doing inference explicitly, because to do inference there would have to be a gap between the variational free energy and the self-information. However, if you assume that gap is small, you can say it looks as if this system is making inferences. It looks as if it is self-evidencing. It looks as if it is trying to minimise its variational free energy, or it looks as if it is trying to maximise its marginal likelihood. You can make a move in which you just say, stipulatively, that the generative model of the system actually is its probabilistic description of the attracting set, and then the theory becomes exact, but you'll never know what that particular generative model is. So it's a nice philosophical move, but it doesn't help you practically, because the whole point of applying the free energy principle is to reverse engineer the underlying generative model and thereby equip you with the ability to describe, simulate, and reproduce that kind of behavior and understand it in terms of the priors implicit in that generative model.
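
Karl's point about the bound can be put in symbols. This is the standard decomposition of variational free energy, sketched in conventional notation (o for observed or occupied states, s for hidden states, q for an approximate posterior):

```latex
F[q,o] \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o,s)\big]
       \;=\; \underbrace{-\ln p(o)}_{\text{self-information (surprise)}}
       \;+\; \underbrace{D_{\mathrm{KL}}\!\big[q(s)\,\big\|\,p(s\mid o)\big]}_{\text{the gap}\;\ge\;0}
```

Because the KL term can never be negative, F is an upper bound on surprise; assuming that gap is negligibly small is exactly what licenses the "as if" reading of inference.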

[13:58] Tim Jackson: I think that's very helpful. The point that I'm suggesting is that I definitely don't think it's up to you or necessarily anyone who's utilizing the free energy principle in a more or less instrumentalist manner to define an ontology that it corresponds with. But there is a popular discourse and a philosophical discourse, an academic discourse surrounding that ontology. That's inevitable. And especially given that you make yourself very available for public discussion as well. In that framework, there's generally a desire to interpret these things in a slightly different manner and potentially in a more realist manner. So to say, what does this actually refer to? But even before we dive into words like metaphysics and ontology. And one of the reasons I wanted to return to the basic physics is I wanted to build the story up from dissipative structures. Chris and all of you have published on this quantum dissipative adaptation, but the kind of story that someone like Jeremy England is telling, I'm also relating that to the principle of least action and interpreting these things as constructive. So one of the questions that arises for me as an evolutionary biologist, and would arise for Mike as a developmental biologist, when you're discussing the free energy principle in precisely the terms that you were just discussing, and talking about how once you have defined this and you can write down this attracting set, you can talk about the behavior of this entity in relation to this attracting set. We do want to know where the attracting set comes from in a certain sense. That is what evolutionary biology and developmental biology are. That's what biology is all about. So even before we are, strictly speaking, philosophical with a capital P or making our living in philosophy departments, we're asking these sorts of explanatory questions. We're asking these sorts of why questions. And when we find a principle like the free energy principle, a formal framework like the free energy principle, useful and applicable, apt in some way.

[16:47] Tim Jackson: We can go back to saying the map is not the territory, but there are good maps and bad maps. If the free energy principle is apt in some way, what is that saying about the kinds of dynamics that it is describing, and how can that potentially be plugged into an explanatory model? You've published a number of papers, and I know Mike and Chris have as well, on the basic isomorphism between the logic of the free energy principle and the logic of natural selection. You've also defined them both as principles of least action. We should note that natural selection is taken to be an explanatory principle, and the principle of least action is essentially an ecological principle; it's about how the environment determines the path of least action. If we take the constructive logic we find in evolutionary biology—this attempt to explain forms or explain attracting sets—and we're making attracting sets equivalent to paths of least action, in the FEP, in the path-dependent way of thinking about it, those are equivalent. We want to ask this basic evolutionary and ecological question, which is not confined just to biology now. We're thinking about generative, constructive processes in evolutionary terms, and we're thinking about a relational landscape in terms of ecology. Generalizing those notions, we're asking how the free energy principle, which appears to describe those things perfectly, aligns with our aspirations for gaining explanatory knowledge through frameworks like this. I've heard you be asked the question a number of times: why do you speak about latent or hidden variables as opposed to just appearances? This is another way people try to draw you into philosophical conversations about realism. We're talking about what is the actual environment, the real environment that is then defining the attracting set, defining the landscape and its energy minima, and therefore its paths of least action. The FEP seems ripe for a kind of realist ontology that fits with this ecological and evolutionary framing.

[19:40] Karl Friston: Yes, I agree entirely. I'm going to stop talking now because I'm sure Chris and Mike have lots of things to say. That relational aspect — the phrase that comes to mind is the scale-invariant aspect of applying the FEP, and I'm sure Chris will have something to say about this. So, the phenotype in context: what is the context? Well, the context itself, in a scale-free setting or in the spirit of the renormalisation group, should also be amenable to application of the free energy principle. That induces a kind of George Ellis–like top-down and bottom-up causation. That's the most important relational aspect. There are lots of important relational aspects in terms of ensemble self-organisation between cells at any particular space-time scale, but I think the deeper question you're speaking to is much more a question of scale. And it's the modeling issue again, looked at from the point of view of the renormalisation group. How does the free energy-minimising process, which could be natural selection or Bayesian model selection, contextualise the play of a phenotype with its conspecifics or its exploration of the environment? How does the environment perceive, in the literalist sense, the phenotypes that are co-constructing in that environment? So I think those issues, or perhaps the frame afforded by a scale-invariant, if not scale-free, application of the free energy principle, is probably the frame in which you would exactly address these deeper questions.

[21:50] Tim Jackson: I think that's one of the reasons why I like to think of it in terms of coarse graining, and thinking of coarse graining as a constructive process that actually can occur endogenously. If we look at dissipative structures, it's self-organization in the sense that the environment ends up coarse graining itself in a certain way. So I say endogenous coarse graining. That gives us an image of how you get a hierarchical structure in a certain sense: nested coarse graining. And that can take us all the way to Mike's polycomputing and collaborative coarse graining in multicellular organisms. If the FEP is completely general and a fundamental principle, like a principle of least action, then the environment is not only about its relationship to an individual agent that's navigating and finding energy minima; we're thinking of the way the environment itself is constructed. So the environment is constructed in exactly the same way as the organism. That's why, to get to the origin of novelty, the landscape is constantly shifting. Natural selection shouldn't be thought of as an optimality model. It's a local optimality model, but there's no global optimum. The fitness landscape is changing all the time, just as the energy landscape is changing all the time. And that's precisely because every component of the landscape is continually trying to find, but also constructing, its own path of least or stationary action. That's the active principle in active inference.

[23:43] Chris Fields: I'll toss in a couple of comments on top of what Karl said, which I think was very wise. Looking at the philosophical literature around it, I think the FEP is deeply at risk of becoming like quantum theory and spawning an entire discipline of interpretation that goes on for decades and achieves nothing. As you point out, that may just be inevitable. But if one sticks very close to the physics, then what you referred to as literalism, when you were defining it generally, is actually forbidden by the physics. The physics tells us we cannot construct such models in principle. So that issue, in a sense, is entirely made up and is based on the assumption of a kind of epistemic perspective that dates from the early part of the 19th century and, at least since quantum theory, is no longer valid; it's no longer worth considering. I like this idea of sticking extremely close to the physics, because the physics itself, as you were describing at the very end, has a lot to tell us about this generation of complexity. We have this very common point of view that things start out simple and they get more complicated. It's not at all clear that that's correct. Things are potentially complicated all the time. It's a matter of who's looking, and how they're looking, to be able to tell how complicated things are at some particular scale where we're choosing to do some modeling. I would recommend against trying to get too deep into a philosophical discussion that's motivated by concepts that are actually inconsistent with the physics that's being used to formulate the principle. One has to get deep into it to see where the inconsistencies are, but that's possible.

[26:57] Tim Jackson: I think that that's very good advice, and I fully agree. I think it's not only the physics that tells us that such a model is impossible. What I was saying about there not being any global optimum is an evolutionary principle. And there's a whole branch of epistemology called fallibilism, the evolutionary epistemology of people like Charles Sanders Peirce and latterly Karl Popper, which really fits with that idea. It's specifically directed against the fallacy of literalism, of imagining that we could ever have a final, perfect map, so to speak, partly because the territory itself is continually changing. But I also fully agree with what you were saying about quantum theory and the profusion of literature on interpretations that maybe doesn't get us anywhere. I do think, and this is partly my temperament, that there are ways to do philosophy well and there are ways to do it poorly. There are also some great process-relational thinkers who in many cases are themselves deeply grounded in mathematics and logic, like Alfred North Whitehead and Charles Sanders Peirce, who come up with very elaborate systems that are in many ways strikingly similar to the free energy principle, and similarly reducible on some level. Peirce in particular is extremely explicit about this: all of logic, according to Peirce, is reducible to his basic categories of firstness, secondness, and thirdness. He takes it to be the task of metaphysics to derive the most basic set of categories, from which essentially all of reality can be derived in some sense. That might sound preposterous on one level, but it's a kind of min-maxing exercise in human thought: the minimal fundamental postulate for the maximal diversity that can be generated from it. It's very striking that Peirce's categories, when translated into his evolutionary cosmology, are: firstness is tychism, or chance; secondness is anancism, or evolution according to law; and thirdness is agapism, evolution in relation to some kind of goal. These can also be mapped onto Lewontin's triad; Peirce explicitly does this well before Richard Lewontin, who quite famously reduced natural selection to a triad of principles.

[30:02] Tim Jackson: Variation, heredity, and selection. Peirce has essentially that exact same logic, and he looks to derive the entirety of logic, but also a metaphysical scheme from that. And so I do think these things can be done better or worse, essentially. There are principles there. I think there's a need for it to be done well. I agree that it's going to be inevitably done poorly. But I'll quote Peirce again: "the only antidote to poor opinions is more opinions." If you think about the community of inquirers, there's going to be a lot of noise out there. I'd love us to get to generative noise and get back to the ooey gooey biological realities. I do think that there's a need for people who have a very deep familiarity with the physics, who understand what it is to be very close to the physics, exactly as you guys are describing. One needs to have that familiarity with the physics to be able to do the philosophy well, somewhat self-evidently. Perhaps the reason a lot of the philosophy will be done poorly is because of a lack of that familiarity. All I'm suggesting here is that's why there is a need for people, perhaps such as yourselves, but perhaps not, who have that deep familiarity to engage in those spaces, rather than saying that that's just a waste of time. One of the questions that does come up again is this question of something like solipsism, and this has been around in philosophy for a very long time and a certain misreading of Immanuel Kant, which comes up a lot in philosophical discussions of the FEP as a kind of anti-realist and potentially a solipsistic view. But solipsism and this kind of anti-realism, where we can't say anything about what's behind the appearances or behind the blanket, so we may as well say there's nothing behind it, has unethical consequences, which are important. But it's also just profoundly boring because, going back to the need for explanatory hypotheses, it makes it impossible to actually explain anything. Those are some of the things that motivate a kind of constructive realist reading of the FEP, which again is going to be a philosophical gloss for sure. It's going to deviate; it's going to be an interpretation of the physics. I do think that there is a need for those sorts of things. I'd love to hear Mike weigh in on that.

[33:07] Michael Levin: I'll be pretty brief because I want you guys to get to the generative noise discussion. Two things that stand out to me in what you guys have said. First, this question that was touched on briefly: where does it come from? Where do things come from? I've been thinking about this a lot because I get asked this all the time. When I talk about bioelectric patterns: where do they come from? To me, what's very interesting is the philosophical status of that question because it seems to me that it's one of those questions that no answer is ever satisfying. No matter what you say, it never feels like you've answered the question. People say, okay, but where does that come from? Any five-year-old will take it in an infinite direction. To me, what I spend a lot of time thinking about is what is a constructive answer to that question that at least — we can always go further at some point, but when have you really said something as opposed to saying nothing, knowing that you're never going to say everything? It often, in my case, comes down to genetics that set the parameters of the hardware — the ion channel properties and so on — then environmental factors and selection. But there are also laws of mathematics and laws of computation that are neither of the previous things. You don't have to evolve them and they're not contingent features of the environment. They're just there, but they have massive implications for what patterns you do or do not get. You name these three and say it's a function of those three. No one finds that satisfying for some reason. That's what I spend a lot of time thinking about: what kinds of answers are functionally, scientifically useful, because that actually works quite well — knowing those things lets you do experiments and find new things. A lot of what we build explores this Platonic space outside of directed evolution and genetic hardware. What other attractors are there in the space that we enter with xenobots, anthrobots — things that have never been here on Earth before — and yet they have specific behaviors and specific forms. Philosophically, what's interesting to me is can we do work to help people decide what kinds of answers are useful? What are they really looking for when they ask that, given that they won't be satisfied with the kinds of answers that move science forward? That's one thing I often spend time on. The other thing is the distinction people want to make between: is it real or does it just seem like it? Often people say you're saying gene regulatory networks can remember, but that's just a metaphor; you don't really mean that. I'm philosophically naive, but I don't know what people mean when they try to make that distinction. It seems to me that it's all metaphor. Many colleagues think we have real things like pathways, and then there's metaphorical stuff like memory in cells. Pathways seem metaphorical to me. It seems like it's all metaphor. I don't think we need to say that nothing exists behind it, but it seems inevitable that we're building different maps that facilitate certain kinds of things and obscure others. That seems fine. I don't think that leads to terrible dilemmas. What do you think about that?

[37:06] Tim Jackson: I think there's another long conversation to be had about the use of language, but we should probably pivot to this discussion of the generation of novelty. Language is metaphorical. Words don't have intrinsic meanings. We're going to rule out literalism from the get-go in language, just as we do in physics and in any kind of modeling endeavor. We can think about language as a way of modeling the world and communicating our models of the world to one another. Nonetheless, there is a need to draw boundaries in our usage of certain language. There are context-dependent meanings that are different in different discourses. A context is defined by a culture, by some socio-cultural network, maybe an academic network, maybe a popular one. We don't want to commit what Alfred North Whitehead called the "fallacy of the perfect dictionary." This was his dismissal of so-called ordinary language philosophy, where people argue endlessly about meanings and exactly what a word should mean. That's a fallacy and a dead end. At the same time, words do have meanings; they are polysemous and have different meanings in different contexts. We all probably have some boundary that we would want to draw, and everyone's going to have a different intuition about that. It may depend on your disciplinary background. If you're an academic in particular, where you draw those lines might be quite dependent on your disciplinary background. If people have resistance to using the word memory for a gene network or whatever it might be, it's worth probing that resistance, not to find a final answer as to whether this is the appropriate term, but just to work out what's going on with that resistance and that usage of the word. We should probably pivot to this discussion of the generation of novelty, which I just take to be among the most interesting questions. I'm an evolutionist. I also have a music degree; I'm very interested in composition and improvisation. For me, my understanding of music — both my practical understanding as an improvising player and my understanding of the evolutionary and cultural history of music — has a big impact on the way I think about biological systems. That's why something like the FEP, with its scale-free nature, is very appealing. We talked about the landscape a little bit: the landscape is constantly shifting because it is itself constructed of free-energy-minimizing or path-of-least-action-seeking entities of different kinds. This means there is inherent context specificity. We know that for fitness: fitness is highly context specific.

[41:04] Tim Jackson: If I take an organism that's highly specialized for one environment and transplant it to another, what was adaptive can very readily become maladaptive. We see that in catastrophes, as the catastrophists in evolutionary theory can tell us. A basic model from evolutionary theory — and I am going to link this to Charles Sanders Peirce again — is that there is Peirce's firstness, Peirce's tychism or chance: there's always a stochastic element, an irreducible stochastic element that we see at all these different levels of organization. In evolutionary contexts, we can speak of genetic drift and things like that. We can speak of lots of different mechanisms. We can speak of stochastic gene expression. Chris once memorably talked about babbling in a discussion with you three, and that's just stuck with me ever since. I've got a 2 1/2-year-old daughter, so I've been through all of that in the last couple of years. Molecular babbling — this is what I study. I study the genesis of novel functions in chemical ecology, and stochastic gene expression and genetic mutation are these chancy elements that are ineliminable. They are constitutive of the explanatory logic that we have in evolutionary biology. So that's the principle of variation, what generates variation. You have theorists like Brandon and McShea who speak of this as the so-called zero-force evolutionary law. There's always something driving, in the sense that there's always change going on. And then there's selection acting on that change. Speaking of the evolution of complexity, you've got this constant input of novelty in these ways. Elements of the environment complexify, and any given lineage or path through this landscape may have to respond in kind to the complexification of its environment. We get this driven evolution of complexity, even though I take Chris's point that things can be complex from the get-go. It'd be great to get into exaptation and even things like chemical ecology and the reorganization of phenotypes that Mike and I have been discussing for a long time. But I think it would be really great to get a kind of FEP take on this. We talked about how we get this attracting set; what about how we move from one attracting set to another? If that's a possible thing to talk about: you were talking earlier, Karl, about a certain deviation from the attracting set — there are perturbations deviating an organism from its attracting set, and it undergoes homeorhetic return to that attracting set, homeostasis, allostasis, et cetera. But there's also generative noise. There's also deviation from a particular phenotype — it could be a molecular phenotype. And you talk about experience, plasticity and all of these things, but there's deviation from a particular attracting set and eventually even the discovery, the creation, the construction of novel attracting sets. How does the FEP want to speak of that sort of thing?

[45:08] Karl Friston: I think the key theme here will be a separation of temporal scales, time scales, in the sense that once one goes scale-invariant, one has to talk sensibly about this relational aspect that is not just horizontal but vertical: I live in a culture, and the cells in my brain live in me. Then the change in the Waddington landscape, which for me is just the free energy landscape, is simply a reflection of the fact that there are sufficient statistics or parameters that control the shape of that landscape that are themselves pursuing a trajectory on another free energy landscape at a slower time scale. So that practically licenses what could be construed as an adiabatic approximation. If you just pick your particular scale of inquiry, you can then make the assumption that the parameters that shape your particular landscape (what it is to be a good phenotype in this particular eco-niche for this generation, for example) are approximately not changing in time. They're changing so slowly that for this phenotype, or perhaps during this critical period of development or this day, the solutions to my dynamics that define the attracting set, the kind of states that I want to occupy, are fixed. They may change tomorrow, but for today, these are the solutions that best describe what is happening to me. On that view, you're looking at very slow changes from the perspective of any scale you picked when looking upwards. When looking downwards, the fast changes become noise. If one starts from the classical formulation and the path integral formulation of the free energy principle, one generally starts with a random dynamical system; for example, you could call it a Langevin equation. In effect, it's just a statement that there is some lawful relationship between the dynamics and the state, plus some noise, the random fluctuations. And then you have to ask yourself, what's the difference between the random fluctuations and the fluctuations that are described by the motion of the states? What licenses you to say this particular kind of state is a random fluctuation, a noise process, whereas this kind of state actually lives in a state space that I would need to describe as a random dynamical system? For me, the answer is, again, separation of time scales: states that change very, very quickly, the ones down there, are just noise. What that leads to is the notion that the renormalisation group is very useful here, and it really speaks to your focus earlier on coarse graining. If you coarse grain in time, then you've got a way of understanding how stuff down there at lower scales is averaged away by the coarse graining, simply because you've treated it as noise. The noise averages away. You don't keep those irrelevant variables (irrelevant versus relevant in the sense of the renormalization group). When you move to the next scale up, be it a cell or be it a person, the RG operator has in it that coarse graining, that grouping and reduction operator, that throws away the noise at that level, but you now retain the itinerancy of the dynamics, and that constitutes the time scale and the time constants of the dynamics at this scale. Of course, some of those dynamics will be very fast and they will disappear when you move to the next time scale.
For me, the free energy principle gracefully accommodates noise in the genesis of self-organization of a scale-free sort via exactly the coarse graining you were talking about before. And once you admit circular causality, in the sense of George Ellis but also, as I subscribe to, Hermann Haken's synergetic notions of circular causality (I'm speaking now not of a George Ellis-like approach, but much more the kind of approach you'd find in center-manifold theorems, and specifically the enslaving principle in synergetics), the context that's shaping the current landscape is enslaving the microscopic dynamics at the level below.
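
A minimal numerical sketch of this time-scale argument, with purely illustrative parameters (not a model of any particular system): simulate a fast and a slow Langevin variable, then coarse-grain in time and note that the fast fluctuations average away, leaving only the slow dynamics.

```python
import numpy as np

# Two-time-scale Langevin system (illustrative only). The fast variable
# relaxes quickly toward the slow one; block-averaging (coarse-graining in
# time) over windows much longer than the fast time constant leaves only
# the slow dynamics -- the fast states are "averaged away as noise".

rng = np.random.default_rng(1)
dt, n_steps = 0.001, 200_000
tau_fast, tau_slow = 0.01, 10.0       # widely separated time constants
sigma_fast, sigma_slow = 1.0, 0.1

x_fast, x_slow = 0.0, 0.0
trace_fast = np.empty(n_steps)
trace_slow = np.empty(n_steps)
for t in range(n_steps):
    # Euler-Maruyama: drift toward the attracting value plus a random kick.
    x_fast += (-(x_fast - x_slow) / tau_fast * dt
               + sigma_fast * np.sqrt(dt) * rng.standard_normal())
    x_slow += (-x_slow / tau_slow * dt
               + sigma_slow * np.sqrt(dt) * rng.standard_normal())
    trace_fast[t], trace_slow[t] = x_fast, x_slow

# Coarse-grain: average over windows of 1000 steps (100x the fast tau).
window = 1000
coarse_fast = trace_fast.reshape(-1, window).mean(axis=1)
coarse_slow = trace_slow.reshape(-1, window).mean(axis=1)

print("raw fast fluctuations about the slow state:", np.std(trace_fast - trace_slow))
print("after temporal coarse-graining:            ", np.std(coarse_fast - coarse_slow))
```

This is the adiabatic picture in miniature: from the slow variable's point of view the fast states are just noise, and the coarse-graining operator is precisely what throws them away.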

[49:25] Karl Friston: But the microscopic dynamics at the level below are being coarse-grained to provide the dynamics of the level above. There's an inherent circular causality. You can never say this causes that, or ask where did that come from; the answer is always both. It's always circular; it can be no other way. In invoking that sort of slaving principle, or synergetics perspective, I think you are really foregrounding the notion of these coarse-graining operators and what they're applied to: they're being applied to suppress certain kinds of fast fluctuations and noise to reveal slower ones, and so on, all the way up and all the way down. That's how the free energy principle would accommodate this kind of scale-free itinerancy, where there is no ultimate attracting set, because to be an attracting set you have to pick out one scale. But that's not the story. The story has to be scale-free. And as soon as you do that, then there's always a scale above that's changing your attracting set or your goal. I was intrigued by Peirce. You told me I should read about Peirce as part of semiotics, but I never heard that three-way thing. It does strike me that if you wanted to start with the classical path integral formulation, the Langevin equation, you need those three elements. You need, one, the random fluctuations — that's omega. Two, you need the systematic lawful relationship between the state and the flow. And three, you need the existence of a pullback attractor before it's worthwhile talking about it. Otherwise, you get exponential divergence and there's nothing to talk about. So you need those three ingredients. You need to take a random dynamical system with a lawful relationship plus the chance, the noise, and then say there is a pullback attractor. And just to note, the pullback attractor is itself a random variable. So it is also inherently noisy. It is a stochastic object. So there is no unique pullback attractor. The very fact that it's a pullback attractor means the attracting set is itself, at any given scale, a random variable that has an infinite number of realizations. Does that help?
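
Karl's three ingredients can be written compactly, as a conventional sketch rather than any canonical statement, with x the states, f the lawful flow, and ω the random fluctuations:

```latex
\dot{x}(t) \;=\; \underbrace{f\big(x(t)\big)}_{\text{(ii) lawful flow}}
           \;+\; \underbrace{\omega(t)}_{\text{(i) random fluctuations}},
\qquad
\underbrace{x(t) \,\to\, A(\omega) \ \text{as} \ t \to \infty}_{\text{(iii) a pullback attractor } A(\omega) \text{ exists}}
```

Because A(ω) depends on the realization of the noise, the attracting set is itself a random object with many realizations, which is Karl's closing point.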

[53:44] Tim Jackson: I'm glad that you returned to Peirce there. I think he was really onto something with his triadic foundation for logic. I think that can be mapped onto the free energy principle very elegantly, as well as onto evolutionary systems in general, and that's certainly what he was thinking. So that's beautifully described. The key thing to bring in is this notion of exaptation in biology and evolutionary biology; there's much to say about that. But it's a principle of multiple realizability as well. It's to say that what's irrelevant from a given reference frame can become relevant, in a certain sense, to a slightly different reference frame. So how does noise get converted into signal? How does the irrelevant become relevant in this sense? I think you've addressed that to a degree, but I'd love Chris perhaps to jump in on that.

[55:04] Chris Fields: Well, I would add one thing to Karl's basic story about coarse graining and the renormalization, and that is that one of the things we are renormalizing here is what we call the system, what we consider to be the system whose generative model we're describing. All systems that we know about as active inference agents are capable of one kind of action in particular: organizing some of the degrees of freedom of their environment into a little package that they don't control but associate closely with. You see this starting with quarks. You see it in protons that go out and find degrees of freedom of the environment, what we would call an electron or a neutron, and grab that degree of freedom and exert control over it. That's one of the basic things that active inference agents do. And what you get out of that is a system at a larger scale with a different generative model. A deuterium nucleus looks very different from a proton, and a hydrogen atom looks very different from a proton. And if you're a proton, you can fork and do one or the other, or you can do both. In all cases, what you're doing is decreasing your free energy. The proton's much happier if it has an electron in association with it; it's much happier if it has a neutron in association with it, from the point of view of its free energy. But by doing that manipulation of the environment to decrease its free energy, it has constructed a new system that has its own model and its own Markov blanket and its own free energy definition. And then that system is going to do exactly the same thing. Mike and I years ago wrote a paper about cell division as a way to decrease your free energy if you're a cell. You just make a copy. But what does that involve? It involves harvesting a whole bunch of the degrees of freedom of the environment and packaging them up in a particular way, using yourself as a blueprint, and then turning them loose, but not very loose. And when you do that, you've suddenly decreased your free energy a lot, but you've also constructed something else that has a new generative model.

