Watch Episode Here
Listen to Episode Here
Show Notes
This is a ~2 hour 8 minute discussion among contributors to the Platonic Space Hypothesis (https://thoughtforms.life/symposium-on-the-platonic-space/).
CHAPTERS:
(00:00) Platonism versus Darwinism
(14:37) Forms, mathematics, and invariance
(29:44) Multicellularity and enabling constraints
(46:03) Representation, computation, and evolution
(01:03:04) Concept space and attractors
(01:24:26) Myth, metaphysics, and explanation
(01:34:46) Pluralist science and metaphysics
(01:57:15) Infinity, strategy, and closure
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Tim: It might be great to start with just discussing people's sense of those options. I heard you discuss that a bit in the previous session, Mike. You brought that up and a few people gave some ideas about that. I think this is also a way of us surfacing the question of the relevance of, broadly speaking, metaphysics to science, and also, if we're going to invoke the name of Plato and metaphysical themes, the relevance of the history of philosophy to a consideration of those things, since a lot of these concepts have been around for a very long time. A lot of the possible alternatives, not all of them, are actually present in the history of philosophy. Referring to some of those models can be a way of clarifying our thinking about what we're trying to achieve by either extending upon or returning to some prior conception. Without articulating in detail what each of them is, I would say four options I could think of would be: so-called physicalism, or the mechanical philosophy as it's sometimes referred to; a form of Platonism, maybe a classical Platonism, or just Platonism, which presumes the existence of these two worlds, one with the forms and one the actual world (of course, that's eliding the nuance in Plato's own thinking); a neo-Darwinian conception, which may have things in common with both physicalism, the mechanical philosophy, and Platonism; and then what I would think of as a properly Darwinian perspective, in which the role of the forms is quite different, in the sense that it's a mode of reasoning which tries to account for the genesis of forms without presupposing their a priori definiteness or their a priori existence. I think what we would want to discuss then, in terms of something like a diagrammatic schema or a series of schemas, is how, for example, form itself operates in each of those different schemes. That might help get us clear about what's at stake when we say we're moving beyond physicalism, or we are invoking Platonism in a certain way, or we are or are not being Darwinian, or Darwinism is or is not sufficient to account for the kinds of questions that Mike's asking in his research and that a lot of us are asking in different domains. Certainly I've been asking them in terms of origins of novel functions at the molecular level. That's one thing I'd put out there.
[02:47] David: What do you mean by Darwinian? Do you mean that whatever forms there are came about totally as a result of chance and history? That different organisms living in different environments could come up with a very different mathematics?
[03:14] Tim: Big question, and something that I've spent a lot of time trying to articulate in the last few years: what is the difference between what I'm thinking of as Darwinian and neo-Darwinism, for example. But really briefly and schematically, this Darwinian mode of reasoning is one that puts a process of variation prior to the existence of the forms. So it is a mode of reasoning which takes variation, or Darwin would say the overproduction of variation, as the primary given, and then invokes a principle of selection in order to try and account for why that variation is clearly not continuous: why there are clumps in the distribution of that variation, and those clumps roughly correspond then to species or to forms. Now, when you open that up to a metaphysical program, and when you ask a question like whether they would have different mathematics, we're moving well beyond Darwin's own conception. Darwin's desiderata are relatively humble in comparison. He's willing to take Newtonian physics for granted and things like that. He's not trying to explain life, the universe, and everything. But when you do, as various process-relational philosophers have attempted to do in different ways, when you try to generalize that Darwinian mode of reasoning, you do end up coming up with some very different perspectives. And so you certainly could think of different mathematics or different logics emerging from this kind of variation and selection and inheritance scheme, absolutely. And this would not preclude the fact that all organisms everywhere may be similarly adapted to physical constraints that are exceptionally ancient, that might be more than 13 billion years old. So they form part of the environmental background that all living systems would necessarily be adapted to. But we might even want to ask questions in fundamental physics, as many people do these days: what are the origins of those constraints? Why these laws? Why these constants of nature, et cetera? And people are turning to what I would call broadly Darwinian evolutionary modes of reasoning to try and ask those sorts of questions as well. So, in general, it's just the attempt to minimize the a priori content of our approach, to try to get behind the defined forms and see if they themselves have a process of genesis that might account for them. That's what I would think of as the Darwinian attempt.
[06:14] David: So, if I understand you correctly, it seems like there are different levels of this Darwinism you're talking about. We could take the physical world as fixed. Then, within those parameters of, say, medium—the size of objects that the organisms we know, cells and things, work with—we could talk about some parameters of the physical world that actually control what their perceptual systems are, how a cell has to know where it is in space and time. Even if, in some quantum physics sense, those are illusions. If we start there, is that a base? But what you're suggesting, it seems to me, I heard you saying that deeper than the level of physics that we find ourselves dealing with on planet Earth, it's possible that the laws of physics themselves are susceptible to some kind of Darwinian evolution or natural selection. That's what it seems like you're suggesting, that they're not themselves fixed. And some cosmologists have said things like this. I just want to see if that's what you're suggesting.
[07:44] Tim: That approach would be the consequence of a speculative generalization of Darwinian modes of reasoning. How successful that's going to be is a completely different question. I'm saying this is a method. But it does come along with the necessity of re-evaluating the role that invariance, things that are taken to be fixed, play in our explanatory schemas.
[08:22] Matt: I like your four-part typology, Tim, of the mechanical philosophy, Platonism, Neo-Darwinism, and then proper Darwinism in your sense. But it could just be that we have two options here: Platonism and Darwinism, a kind of speculative Darwinism, as you're suggesting, because the mechanical philosophy rooted in Newtonian mathematical physics is a degraded Platonism, as is Neo-Darwinism with its emphasis on information being carried by a genome. These are both degraded forms of Platonism, or at least inheriting the Platonic mode of thought, whereas Darwin is the real alternative, even though, as you acknowledge, in the final paragraph of "Origin" he refers to the fixed law of gravity. He wasn't yet thinking cosmologically about evolution, but there's a lot of reason to want to do that. We have the Darwinian and the Platonist versions of the two options here, where Darwin would say that all form is a function of chance variation selected and historically accumulated, whereas Plato would say, nope, the forms are there already. Whatever evolution might be is a selection among pre-existing forms. Those seem to me to be the two options on the table here. I don't know if that's an oversimplification.
[09:48] David: But it seems to me that you're leaving out the Kantian.
[09:52] Matt: option. That science is limited to a phenomenal realm.
[10:00] David: Science is constructed by the mind. Space, time, all the categories we use for perceiving, making judgments in the world, these are all constructed by the mind.
[10:22] Matt: And the mind being a historical and not
[10:25] David: There are Darwinian spins on that, but I think Kant was talking about pure phenomenology. It is this sort of logic of what an experience must be like: that it must be a certain way or it's not any experience at all. So, from the perspective of this Kantian, you have to go deep into understanding what is necessary for the possibility of any experience. That's what Kant's talking about.
[11:03] Matt: You're right to bring up Kant, and I think there are various examples of philosophers who want to overcome this dichotomy. Kant was pre-Darwinian, but my approach would be we don't need to choose one or the other. We need some kind of a synthesis here. Kant would be an example of that. But I think Kant's understanding of the mind was that the categories appeared from nowhere. We needed a genetic account of that or an evolutionary account of where the human mind comes from.
[11:36] David: No, I think you're right. It's a miracle. Did God make it, or why does it work? We need some account of that.
[11:45] Tim: So in terms of the dichotomy that Matt was giving, Kant is pretty "platonic" in this sense, but the a priori forms are transcendental instead of transcendent. And so that's how Schelling reads Plato, as a kind of proto-transcendentalist. I like that dichotomy, because I think the radicalness of the Darwinian intervention is typically underestimated. And I'm not saying that he was the first to say these kinds of things; you can even think of it as a pre-Socratic way of thinking. The neo-Darwinian attempt to integrate it into the physicalist or mechanical philosophy, to identify a privileged basis of causal reduction in the gene, et cetera, actually moves us back into that platonic conception in a really important way as well. So I do think, incredibly schematically, that dichotomy is pretty useful for us, because it's all about what's the a priori: to what extent are we relying on something that doesn't have its own genesis? You can always say that variation in the Darwinian scheme is the thing that doesn't have its own account. It's just the thing that's taken a priori as given. But I would want to also signal that there is a way in which this Darwinist view is also related to a Platonism in a very weak sense, in that it acknowledges the reality of possibility or potential. So it's not deterministic in the way the mechanical philosophy becomes when it collapses the forms into the actual. Whereas in the Darwinian account, there's a very open-ended and real sense of possibility. It's just about how that possibility is structured. How does it become what Mike would refer to as a latent space, rather than proposing or postulating that it's always already a latent space with forms inhabiting it? As Peirce would say in his attempt to generalize Darwin into a metaphysical way of thinking, the account of the evolution of the universe has to also be an account of the becoming and the evolution of the forms, not taking them as a priori. I don't want to turn the whole conversation into this. I just thought that was a useful schema for us to begin with.
[14:37] Unknown: This was wonderful. It was a wonderful kickstart. I agree that calling it that, or at least putting it in these terms, even if loosely, is helpful. Perhaps there is an even deeper dichotomy inside what it means to be Darwinian, because in "On the Origin of Species" he lingers a lot on this question of polymorphism as variation. There are fluctuating elements, specifically in polymorphic species, where Mike raises a question of the embodiment or disembodiment of memories. This dichotomy then seems to extend to Brouwer and his lectures on mathematics, philosophy, and consciousness, where he says that the purest thought is mathematics. Given what we know today, and in frameworks like TAME, we can read these texts in a new light. We can ask: is there a non-Platonic sense now that we know these things, or now that we're tackling the problem from this perspective?
[16:12] Tim: I love the Brouwer reference, so that opens us up onto a whole other world of discussion. Another thing that I know was brought up in the previous discussion would be about mathematics as a language versus, say, natural language, and versus other modes of expression like music or chemical modes of expression (I'm a chemical ecologist) and other things. Whether, if we agreed with Brouwer that mathematics is the purest form of thought, is quote-unquote nature so pure? So in fact, is that purification a kind of simplification or coarse graining in order to achieve that level of purity and precision? I think that would bring us back to the history of Platonism in some sense, and this association of the forms with something that's pure, that's not fallen, that's not full of accidents like the world of appearances. These things get incredibly rich. I also want to talk about what Adam said in the chat about convergent evolution, because I think that's profoundly relevant. It has been invoked for its relevance to these discussions of Platonism by people like Alfred North Whitehead, but also in this recent discussion of the Platonic Representation Hypothesis. I would say there's a lot to say about convergent evolution and the role of shared descent, as well as shared adaptation to the same environment. Before, in a sense, we appeal to a kind of Platonic hypothesis that organisms are converging on shared a priori forms, we have quote-unquote mechanisms or ways of thinking about convergent evolution that don't rely on those. The question is, what's the limit of that, broadly speaking, Darwinian mode of reasoning? You brought up carcinization. I always like to say, crab forms have evolved six times. Venom, which is one of my areas of study, has evolved more than 100 times independently. So there are some incredible examples that philosophers could pay attention to when it comes to convergent evolution.
[18:39] Unknown: I think when you look at the two different spaces, you can start today formalizing the architecture of how that works. And I think what's really interesting is when you treat these processes as computation, when you define what a finite observer, a bounded observer, can do in this infinitely complex space of forms, you get this coarse graining and you get this dynamic of trying to sample efficient structures that are predictable, to increase what you can sample later and have more choice. So you get this Darwinian mechanic from the structure of a computational object or a computational possibility space. The model for observer theory is based on Stephen Wolfram's ruliad, which is computational: it's every possible causal chain. And to make that space have any meaning in physics, it has to close. So you have to be able to get geometry out of it, to get maths out of it, to make physics predictions, which is what Stephen does. That point at infinity gives structure to the space, but it's the point where every causal chain ends, where every diagram commutes, where everything limits. And that's a sink. And that acts like a telic attractor. It's a sort of informational attractor. It's got every possible causal history in it. Any multiverse, any type of math, any platonic form you can imagine, any physical instantiation is an integrated map. And because that map commutes overall, you can say that structure has that telic pull, that gradient, that fitness, like in a fitness landscape, which is driving observers towards computationally efficient forms that enable them to sample more of that space. And I think the innovation of that computational language allows you to start doing things with the tools we have today, modeling with LLMs. One of the things I'm working on at the moment is a test to probe whether different computational architectures converge. There's a paper called the Platonic Representation Hypothesis; the idea is to see if that applies across different architectures, narrow architectures such as AlphaZero or chess engines, to see if they have a hierarchical mapping of concept space in the embeddings that they have in their models. And I think you can start to probe these objects more today than at any time before because of the advent of technology. So I think we'll start to get more answers on these directional questions, whether it has to be separate or it's the same. And I think the idea is that structurally, if there is a structured computational space, or a set of all possible computations, and you can import physics from it, then that structure should be found in a coarse-grained fashion across these experiments. It won't be definitive, but it'll give a hint that maybe this thing is actually a real thing as opposed to an abstract thing we're constructing to make sense of the world. And trying to see if top-down and bottom-up causation can work together, or whether it's really all constructed bottom-up and it's all emergence, is a question that computational experiments are going to let us answer over the next few years. We're going to start getting directional hints about it.
[22:23] Michael Levin: My current model, I don't know if this is a chimeric version of the two views that you guys were talking about, or if it's a third thing yet. It seems to me that there could be a variety of different forms. It doesn't seem to me like the forms all have to have the same character, either they're pre-existing and that's it, or they're evolved. There are numerous different ones on that spectrum. For example, there are biological ones that I'm perfectly happy to have modified by evolution and various other things. There are others that seem like they have a lot less of that character. For example, the value of e, the base of the natural logarithm. I don't see it being downstream of evolution. I don't see it being downstream of anything that happens in physics. Maybe it can change. It seems like one of the more stable ones out there. I think we could say that there are ones that have this really fundamental stable character. There are others that are either novel or modified by things that have happened later. This gets into the naming, because when I started talking about this stuff I said "Platonic space" only because then at least the mathematicians knew what I was getting at. Some percentage of them said, "Yes, we're already on board with this." Clearly, the model that I'm pushing is not fully Plato's model. I don't know what to do with the naming of it, and some people hear "Platonic space" and they're very upset and they say, "Absolutely not." They say, "Fine, 'latent space.' That's good. Now we're happy." I don't know what exactly they see as the difference. People also, when I point out certain things that happen where it seems like you get more than you put in, will say, "These are just regularities." I say, "What does that mean?" "These are just things that hold true in our world." What are those things, random? No, they're not random. I don't want a realm. We've got some things that seem to hold true. We don't think they're random, but they're not a realm. Somewhere the terminology needs work; we're going to have to work on the different variants of these views to really say what it is that people really hate so much when they think it's a realm. What else do they have that isn't a realm, that to me always sounds like a realm anyway? I think the nomenclature is going to need some work.
[25:13] Matt: I don't know if it's good news or bad news, but when you read Plato's dialogues, there's no one model that Plato leaves us with. He leaves us with many different possibilities. The best criticisms of Plato's forms are in Plato's dialogues. But obviously, the term "Platonism"—anyone who's read some philosophy of science, and maybe some Karl Popper, is going to have a reaction to Plato, all sorts of associations. I understand why you chose that. You're right, Mike. I'm glad you're pointing to that. There are different forms of forms, as it were, some which we can understand as historically emergent in a Darwinian sense, and others which seem more necessary or almost metaphysical. It seems to me that rather than having to choose either variation first in the Darwinist approach or invariance first, which we could say is more the Platonist approach, for variation to lead to anything of significance in terms of historically emergent forms, you already need seeds of invariance. So there could be some forms that are truly invariant that allow there to be a selection process by which useful forms, other types of forms, could emerge historically. I'm always driven to try to think of the interplay between invariance and variation. It becomes difficult for me to make sense of the idea of a full-bore Darwinism in the speculative sense of variation first, getting all form out of that because of the examples that you would point to, Mike, that seem not historically emergent. So I want to have it both
[27:12] Tim: ways. I also want to signal agreement that it's highly likely, almost certain, that there are those forms which we're not going to get behind from our position radically in medias res. So what I'm calling this Darwinian mode of reasoning is a wager; it's a method. You could think of it as an attempt to identify those forms that we absolutely can't get behind: which are the ones that are absolutely non-deconstructible? And they may end up appearing to us as conditions of actualization. For there to be anything at all, it would appear from our situated perspective that these forms were required. But there is, of course, a speculative evolutionary account of that. There are anthropic principles. There are still options available in the way we think about those sorts of things. But just to say that it's very different to claim that we can explain everything and we can get to bedrock variation first and somehow bootstrap ourselves up to a full cosmos. That's already the rationalist claim in the history of philosophy. The rationalist claim is that you won't be able to do that, essentially. And I am saying there is a limit to the rational intelligibility of reality, in fact. I tried to say that in my talk for this session. We are going to have to recognize those limits, which means there may be things we need to take as given, things that we simply can't explain. I would just want to signal the agreement that it's very clear that there are different kinds of forms. And I've previously spoken about this and published about this as a temporal hierarchy of constraints. I don't know if I like that term myself. Terminology is always really difficult. Some forms came into being very, very recently. Some forms are incredibly ancient. Those are real salient differences. They are going to impinge upon our capacity to give a genetic account of certain forms.
[29:42] Michael Levin: Yeah.
[29:44] David: So let me switch gears on the philosophy here. I want to talk about some practical biology for a second. Let's imagine ourselves the first cells to come up with the idea of multicellularity, and they start communicating in some way, chemically, electrically. What constrains the kinds of shapes that they can make, the kinds of behaviors they can have in this very primitive state? Is there something already there that they can or cannot do, possibilities they have? Michael?
[30:40] Michael Levin: You probably have some thoughts on that. I'll just throw out one thing because it's the same axe that I always grind. Probably Tim has other thoughts. There's a lot of really good work on bacterial biofilms that are almost multicellular. Gürol Süel does this amazing work showing what he calls brain-like electrical signaling in biofilms that allows them to coordinate and act as a collective. But one issue that I always talk about is how much do you put in and how much do you get out? What are the examples where you get out more than you put in? Here's an example of this. Once evolution finds a voltage-gated ion channel, you've got yourself a voltage-gated ion conductance. It's basically a transistor. You have a couple of those, you can make a logic gate. Now you automatically inherit all of these cool things about the truth tables: NAND is special and all this other stuff. You didn't have to evolve any of that. You get all of those cool properties for free, right? Having made that interface, you now suddenly inherit these things and you don't have a choice about most of it. That's just what it is from the laws of computation or math or logic. I think evolution can make use of all of that. There will be facts about the way that computation is done in networks of 2D surfaces of biofilms: some constraints, some enablements, and some free lunches. I'm sure Tim's got a bunch of examples that you can make use of. I think looking at those bacterial cases is pretty informative.
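A minimal sketch of the free lunch Mike describes here: NAND is functionally complete, so once a substrate implements one NAND gate, every other Boolean gate is inherited for free rather than separately evolved. This is an illustrative toy, not anyone's published model:

```python
# The "free lunch": given only NAND, all other Boolean logic comes along
# automatically, because NAND is functionally complete.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Every other gate built only from NAND; nothing new has to "evolve".
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b):
    c = nand(a, b)                       # standard 4-NAND XOR construction
    return nand(nand(a, c), nand(b, c))

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)
            assert xor_(a, b) == (a != b)
    print("All Boolean gates recovered from NAND alone.")
```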
[32:36] David: It goes even earlier than that. When you have genetic regulatory networks, you also have logic gates.
[32:50] Michael Levin: This stuff isn't published, but I have a student who's looking at training. We've shown training of gene regulatory network models. She's doing training of Lotka-Volterra style population dynamics, and you can train those, too. If you actually look at the space of parameters, what it takes to make them have habituation, sensitization, these various things, that space is really interesting. It has very specific shapes in this space. It isn't homogeneous. And where does that come from? There it is.
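A hypothetical illustration of the kind of test described here (not the lab's actual, unpublished protocol): deliver repeated identical stimuli to a simple adaptive dynamical system, standing in for a trainable gene-regulatory or Lotka-Volterra model, and ask whether peak responses decline across trials, which is the operational signature of habituation. All equations and parameters below are invented for illustration:

```python
# Sketch of a habituation assay on a toy dynamical system: a leaky-integrator
# response r with a slow adaptation variable a that suppresses the drive.

import numpy as np

def simulate(n_pulses=8, dt=0.01, tau_r=0.5, tau_a=5.0, k=2.0):
    T = int(n_pulses * 3.0 / dt)
    stim = np.zeros(T)
    for i in range(n_pulses):                    # one brief pulse every 3 time units
        start = int(i * 3.0 / dt)
        stim[start:start + int(0.2 / dt)] = 1.0
    r, a = 0.0, 0.0
    peaks, current_peak = [], 0.0
    for t in range(T):
        drive = stim[t] / (1.0 + k * a)          # adaptation weakens the drive
        r += dt * (-r / tau_r + drive)
        a += dt * (-a / tau_a + stim[t])         # adaptation builds with stimulation
        if stim[t] > 0:
            current_peak = max(current_peak, r)
        elif current_peak > 0:                   # pulse just ended: record its peak
            peaks.append(current_peak)
            current_peak = 0.0
    return peaks

peaks = simulate()
print("peak response per trial:", np.round(peaks, 3))
print("habituation observed:", all(p2 < p1 for p1, p2 in zip(peaks, peaks[1:])))
```

The interesting question Levin raises is then which regions of parameter space (here, tau_a and k) yield this behavior at all, and what shape those regions have.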
[33:34] Tim: I think when I said constraint, I didn't mean not enablement, of course. I meant enabling constraints, as always. That's the role of invariance that we're talking about here, which is that you need something to hold things in place so that you can do a theme and variations. I'd love to get to a chat about music here as well, because I know, Mike, you're planning some of those discussions. But to stay with the biology for a second, without getting into heaps of detail, but to respond to what David was saying: if certain physical, enabling constraints are 13.7 billion years old or whatever, when life emerges four plus billion years ago, it has to be in conformity with those, but it's enabled by them; those are already the enabling constraints of living systems, right? And then thinking about things like logic gates and all the amazing work that Mike has done on the capacities that minimal cognitive systems or minimal biological systems have, et cetera. I still think we can think about this in terms of relationships of adjacency. We don't have to posit that all of the Boolean logic associated with the use of logic gates pre-existed the genesis of that ion channel. We can say that in some sense, when you have a certain kind of actualized relational structure in the world, it then brings into definition a set of adjacent possibles, to use Stuart Kauffman's term. Again, it's hard to understand what it would mean to say that all of that logic pre-existed the logic gate itself. We talk about an interface theory. We're never going to pull things out of the platonic realm, so to speak, without the existence of an interface, in Mike's terms, whose structure of functional operational capacities is what enables those forms to be ingressed, if we're using that language. But it's a further metaphysical step to say that those forms somehow pre-existed, as opposed to are themselves given a form of definiteness because of their adjacent relationship with that definitely structured actual physical, if you want, interface. What the Darwinian conception here is saying is that the interface naturally contains within it this potential, which is just variation itself. So if we look at biological systems and we look at stochastic gene expression and the non-stereospecificity of interactions between molecules and Brownian motion in and between cells and all this stuff, there's all this crazy indeterminate variation going on all the time, which in a sense you can think of as always spreading out, palpating a space of adjacent possibles from the actual form structure that is in existence. It's a little bit of a jump, but I think of this also as the way the mathematical landscape itself expanded in the history of human mathematics. We know that there's a whole load of maths that is not applied, that is not physics. The maths of relevance to physics is this relatively small aspect of the mathematical landscape. We could therefore get to thinking that that's just a subset of something that pre-existed it and is much vaster than it. But if we look at the history of mathematics, it's the other way around. People discovered things in the relations in the empirical world. They learned how to reason about them mathematically. There were economic and other utilitarian justifications for the development of those tools. And from understanding the principles, like the relational principles, diagrammatically: as Poincaré would say, mathematicians are interested in relations, not objects.
You can remove the objects as long as the relations stay the same. It's no different to us. So it's diagrammatic. But by understanding the principles, there's a way that you can keep spreading by unpacking the consequence of those principles. Again, those relationships were found in the empirical world first.
[38:11] David: I want to push back on that just a little bit. Let's get back to our cell forming a gate. It has to be that the potential for on and off is already in the
[38:28] Unknown: material. Like resolution. It has to be there.
[38:35] David: There's no making an on and off switch unless the material that you're making with can already be an on and off switch.
[38:42] Tim: But you're saying in the material.
[38:46] David: No, I'm not saying Plato is out there, but Plato is actually in the material itself. I can go with that. But when you start talking about it, I want to push back on what you're saying about mathematics, because it seems to me that mathematics is not just a flowering tree that could go in any direction. I think it has a structure to it. I think the way you understand the relationship between, say, geometry and algebra and calculus, the more you look at it, in group theory and set theory, logic, there seems to be a structure to it; it has some kind of a unity to it. You just can't make up any kind of math you want.
[39:40] Michael Levin: This, I think, is the issue. Tim, I'm okay with — we don't have to say it pre-exists because I don't know what time would be doing there anyway. So that's fine. It doesn't have to pre-exist. But there's some specificity. In other words, you've got this particular fact about NAND, or that there are four colors: the four color theorem, not the eight color theorem. You get a very specific thing out of it, and you can say that it sort of came into being when you made the interface. I'm okay with that, but we still need to say, is it random? And I agree with David, I don't think it is random. So there's some pre-existence. Now we're back to there's some reason why you've got this and not something else. So something is making that selection.
[40:27] Tim: I think random is a very misleading term, the way random is used to talk about indeterminate biological variation, for example. Abject randomness is in some sense an abstract fiction. So if I'm going back to biology and I'm talking about stochastic gene expression or whatever, it's not as if it's just anywhere in the universe that those genes are being expressed. It's in a very strict relationship of adjacency with all of the quote-unquote machinery that exists to produce those genes. It's just that there's this distribution of genes; the concentration of genes, say, in different tissues, different cells, is tightly regulated, but it's never regulated perfectly. It's never regulated absolutely. A protein structure can evolve to achieve a relatively high, a very high degree of specificity, but it's never absolutely specific. There's always a chance that it's just going to stick to something else, because molecules are just sticky, and it might have some kind of off-target effect. And that's one of the major ways that novelty emerges in biological evolution. So I'm absolutely not saying that it's abjectly random or anything like that. I'm saying as soon as you have any kind of structure, it acts as an enabling constraint on the development of further structure. So it makes complete sense to me that mathematics would have in some sense this kind of unity. And even complete branches of mathematics that are considered to be completely distinct keep discovering the same structure. It turns out you can say the same thing in a different language in some sense. That makes total sense to me if mathematics in some sense is born from this shared origin in the practice of mathematizing humans in actual contexts. I'm not saying (Mike, you and I have been back and forth on this for a couple of years, I think) that I have an account of how I would explain the genesis of the four-color theorem or the Feigenbaum constants or whatever it is. I'm just saying it seems premature to me to say that no such account is possible.
[42:35] Unknown: I agree. I tend to agree with opposing views. I like this example of cells communicating, especially because I don't have a concrete stance, but I asked the question whether biological forms, for example, were hearing shapes and not forms, in the sense of Mark Kac's question about hearing the shape of a drum: that you can effectively recover this infrageometric information or some sort of data. Since it is persistent, you can also posit that there is some Platonic prior that you can recover consistently, which I find really interesting. Perhaps cells, or let's speak of an architecture, a plant, and you ask if a plant can hear shapes. In that sense, you just follow the same path that Kac did. It is completely plausible if you understand hearing as processing some sort of signal by mechanical transduction, and then you have specific genes, and then you have ciliary arrays. It's completely possible that you would do wavelet transforms. For example, if you want to recover the peaks of a transform like this, that would modulate the auxin signals; it is completely plausible. It would give you intervals, and in terms of mechanistic expression of a pattern, it is also plausible that we ought to relate it to symmetry, because of the peaks of the Fourier transform; it's completely plausible. I would also invite this other theme, which is Hermann Weyl's conception of pure infinitesimal geometry from when he was trying to unify gravity and electromagnetism. He came up with many beautiful constructions. I know we are past 100 years of Weyl's work. But the fact is that even though Einstein commented that his ideas were beautiful but unphysical, now, a century later, we have light-matter interfaces coming out of it. We have Weyl points that have been experimentally observed. Perhaps we don't need to choose between a metaphysical or a physical perspective. There seems to be something here by which we can recover this kind of information. I find it interesting on a cognitive level if we bring that from ciliary arrays doing these transformations and then architecture expressing these patterns. I find it interesting, but I don't know how to answer the cell question specifically on a biological level.
[46:03] David: Let me ask another question about this. What is the difference between a group of cells that are just responding to a chemical stimulus in their environment — they're moving toward a food source or away from a toxin — and a group of cells that's actually processing that as information about where they are in the world? Or a plant that's growing toward the sun automatically, or one that's actually processing information about where it is in the world. Michael, you want to take a stab at that?
[46:49] Michael Levin: I'm going to see if I can find a cool example. Have you guys seen the Physarum example that we have? What you have is a dish like this, about 10 centimeters in diameter. We put three glass discs on one end, one glass disc on the other end, and a little slime mold in the middle. The glass discs are inert. There's no food on them. There's no chemical. What you're going to see — I'm going to try to find this because this has to be seen — is that for some hours the Physarum sits there and it vibrates and it tugs on the gel that the whole thing is sitting on. It reads, as it turns out, the strain angle of the different masses in its vicinity. For several hours it does this and it doesn't do anything. It doesn't go anywhere. It just does this. I think what it's doing is gathering information about the environment. Then it goes preferentially to the heavier mass. That's one of my favorites.
[48:17] David: Examples. It seems that example is crucial for pushing back against the sort of emergence physicalism view: if you can experimentally show that organisms are actually representing where they are in the world. That's very basic math. I would say that you have to have some kind of a representation of spatiotemporal orientation if that's what we can actually show.
[49:01] Michael Levin: So here it is. These are the glass discs here, three and one. And this is the little Physarum. So for the first few hours, it just does this. And it's going everywhere at once. And I have a video where you can see it tugging. And then, boom, at that point, it decides to go for it. Wow. And then bang, that's what it'll
[49:29] Tim: do. It's doing a random walk, and then suddenly it becomes oriented. And I think this is really fascinatingly consonant with something like Waddington's conception of the neutral accumulation of genetic variation and then the reconfiguration of the epigenetic landscape, the process of genetic assimilation, when an organism enters into a particular environment and something elicits that adaptation from it. As Mark knows really well, these are very big and ongoing conversations in evolutionary theory, around things like evolvability, the role of redundancy, the role of robustness, and where those two things are the same and where they're different. I'm always wiggling my hands this way; I'm a big gesticulator. This is 'random, spontaneous' behavior. If suddenly something elicited a reaction from me, I might point directly or I might make a shape with my hands. My point there is just that biological systems are always doing this spontaneous thing, at the molecular level, at the behavioral level. They're reaching out, they're palpating an environment, and they're seeking a signal to bring this into the information territory. They're seeking something which would tell them: go this way and not that way. Be this and not that. This is what you need to be right now. You've got this capacity to be lots of different things. You're phenotypically plastic, but right now, this would be a good thing to be. And so information then is this relational thing that happens between two different systems, organism and milieu, or two different organisms. It's a mutual, reciprocal relationship of elicitation. The signal comes in and it is 'meaningful' because that plant actually requires light in order to photosynthesize, because of its evolutionary history. That's how I tend to think about these sorts of things. And I think, again, Mike, your work is incredibly pioneering in this way, that you can look at the slime mold. On the one hand, you could have told this story at the molecular level, and it would almost be a kind of evolvability story, where the states of the slime mold are evolving. But you could tell the same story at a different level, or in a different aspect, in a way which becomes a behavioral story or a cognitive story. And so there's this fascinating unification of a kind on offer there. I've said this to you before, Mike, but there's a way of thinking about cognition, in this general framework that you give us, in which it almost becomes synonymous with what evolution means, if you think about evolution in a really generic sense. To Sam's point, you said some really fascinating stuff about Wolfram's model and computation that we haven't picked up on. But computation is an evolutionary process, always already. It's no shock if there's a really intimate relationship between evolution and computation, because they've always been intimately related. And you can even just go into the history of the word evolutio and how it means unfolding, but it had an algebraic connotation before it ever had a connotation in biology. There are so many rich resonances here. I wouldn't want to be seen here as flying the physicalist flag. I'm not advocating for some kind of physicalism. I think physicalism is more platonic and more idealistic. I know that's counterintuitive compared to 'Darwinism' or 'Darwinian' or whatever. I call it ontogenetic, because I prefer not to invoke Darwin's name so often; it's like invoking Plato's name.
People are like, that's what this means. So an ontogenetic alternative is definitely not what I would call physicalism. I think physicalism is a formal theoretical approach to a way of understanding the world basically grounded in effective theory. That's a whole other conversation. I'm not allying myself with that. Maybe it's a genuine alternative.
[54:05] Unknown: I think one of the things that's interesting is you can model evolutionary processes on very basic cellular automata. And when you talk about patterns, you get this linear progression, then some exponential jump as the cellular automaton discovers a novel rule that increases the number of steps it survives for. And those jumps are discrete. And those discrete jumps are really when we say the object has changed from one thing to the other. So in your cell question, the idea of bulk orchestration, or a top-down causation from a group of objects that have bound together and exhibit small-world network properties, where the communication channels reach some synchronicity, means that a decision is basically everywhere in the network all at once. It's called a superlinear speed-up. That dynamic gives you that top-down causation, where that single-cell thing has within it the communication ability, the ability to couple and find information from the environment or from other cells in its neighborhood. Once enough of them come together and they're close enough, that orchestration kicks in, and that's where those free lunches come in. Because you've now gone to a different level; you've exponentially risen up the curve of how much information you can handle, how big your internal model is, how much you can predict. And here, the model is that this world of latent space, idea space, possibility space, whichever name you want to give it, that structure is invariant, and objects are bigger or smaller based on how many equivalences there are within the computational network. Now, it's not saying the actual thing is a big computer; it's saying that model is a coherent way to then make predictions, in a way that this is the language of these formalisms from Plato to even theologies. All the metaphysics, all the major theologies describe the structures of these spaces. And I think today, with these network models, you can now be more specific and you can now test things like evolution through that space. If you have an object with n many equivalences in the network, does an agent put in that space discover it faster or slower? You can actually start to run quite coarse simulations of the dynamics of the space that, I think, for a long time have just been talked about. And that's really interesting, because all of the experiments that are coming out of Michael's lab and some of the other people on this panel are pointing in that direction. I think it's a super interesting formal program, where these things can be tested not just in observers like us or animals; they can be tested across novel substrates like computers. You can start to answer that question. And when you have a structured space like that, then you start to ask deeper questions: are ethics computationally valid? Can you model ethics computationally? If so, can you teach a computer them? And those languages you get out of discovering this space are causally effective. Whether it's ontologically real, whether there's a giant Indra's net all around us, is hard to tell. Whether it's causally effective in our world is probably the more important question that we can, I think, start answering. So I think that's the most interesting thing that's happened in the past two years. All of these ideas start to bring ideas of infinitary space and infinitary explanation back into physicalism in a way that should be quite explanatory.
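A rough sketch of the jump dynamic Sam describes: a hill-climbing search over elementary cellular-automaton rule tables, with fitness defined as how many steps a pattern survives before dying out or entering a loop. Improvements typically arrive as discrete jumps rather than a smooth ramp. The whole setup (rule encoding, fitness cap, mutation scheme) is invented here for illustration:

```python
# Evolve an elementary CA rule table by single-bit mutation; log the
# discrete fitness jumps as new rules are discovered.

import random

WIDTH, MAX_STEPS = 64, 500

def step(cells, rule):
    # Each cell's next state is looked up from its 3-cell neighborhood (0..7).
    return tuple(rule[(cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % WIDTH]]
                 for i in range(WIDTH))

def survival_time(rule):
    # Fixed initial condition so every rule is scored on the same world.
    cells = tuple(random.Random(0).randint(0, 1) for _ in range(WIDTH))
    seen = {cells}
    for t in range(1, MAX_STEPS + 1):
        cells = step(cells, rule)
        if sum(cells) == 0 or cells in seen:     # died out or entered a loop
            return t
        seen.add(cells)
    return MAX_STEPS

random.seed(1)
rule = [random.randint(0, 1) for _ in range(8)]  # 8-entry rule lookup table
best = survival_time(rule)
for gen in range(200):
    mutant = rule[:]
    mutant[random.randrange(8)] ^= 1             # flip one table entry
    f = survival_time(mutant)
    if f >= best:                                # accept neutral and better moves
        if f > best:
            print(f"gen {gen:3d}: jump {best} -> {f}")
        rule, best = mutant, f
```

Fitness stays flat for long stretches and then jumps when a qualitatively different rule is found, which is the plateau-and-leap signature being discussed.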
[57:51] Tim: I think that's beautifully put. I really agree with the promise of that new kind of science, the experimental or computational method. And I do think sometimes that promise, that potential, gets collapsed a little bit when we immediately feel the need to move into metaphysical territory and say, well, that means that the universe is a computer. I think it's an incredible way of experimentally testing various evolutionary models, because they're all evolutionary to me, because intrinsically, computational models are evolutionary. And I love what you said about saltations, jumps, phase shifts, leaps in a state space. And I think we see a ton of that in biological evolution, actually. So I don't think that the so-called gradualist assumption particularly holds. It holds on certain scales, but we also see a lot of leaps. What I brought up with genetic assimilation and Waddington, and Richard Goldschmidt's ideas: these ideas have always been present in evolutionary theory, even though there has been a mainstream of neo-Darwinism that tried to squash them.
[59:09] Unknown: That dynamic's not just seen across biology. It's seen in our social structures. It's seen in how we organize ourselves. It's seen in how economic growth works. It's seen in how political systems and change work. We have a long linear progression, or some mildly chaotic but linear progression. And then there's a change and there's an exponentialization. The network reorganizes. It settles to a new local optimum, a new peak or valley in the fitness landscape, depending on which way around you've got it. And then it keeps going. But this applies not just in evolution, because it's computational, and ultimately we compose our explanations computationally to communicate them. That dynamic, if it's proved in computation, the simplest system, must be running in more complex systems at much higher resolution. So these dynamics can now be explored in that space of memetics: the Dawkins sentence in the book that should probably have been another book, and Susan Blackmore's work. They now become causally effective if you can also put physics within that same language. And that's one of the interesting things about these models: you can now compose between those structures. And because there's a natural geometry inherited in those objects, you can compare the properties. And so you can have ideas about symmetries, ideas about the boundaries of those objects and how hard they are to capture, how much coarse graining goes on, what happens when we actually sample these objects. Do they become easier or harder to sample? Is there a point where that changes, where that object becomes invariant under repeated sampling, so that we know it's maximally reduced for us? What happens to that concept in Platonic space when that happens? And it starts to put these observer-centric models as explanatory in the context of how we interact with information that isn't wholly explained by physics, biology, chemistry. The content of an emotional experience can be explained with an EEG. But if you ask someone about the contents of that experience and you say, "Is this data, unspooled, all of the data?" you will normally get an answer that's no. And because of this language, because you can now compose those things in an integrated map, you can start to make harder empirical statements about what you think the structure of that space is. Whether or not the space is structured how my paper hypothesizes or speculates is beside the point. It's that you can speculate within this architecture about all of those dynamics, to try and formalize and test these sometimes quite intuitive, but also informed by lots of experience, ideas about metaphysical, bigger questions that are harder to answer. And that's one of the interesting things about this change in the language, because it starts to join up so many different domains. And you start to get ideas that are mathematically proven that can be applied across spaces and concepts that don't normally seem to lend themselves to it, at least in how we think about those subjects today. That's quite an interesting thing about this symposium: you get those perspectives on what those optimal models are from 20 different disciplines in 20 different languages. And so you get this coming together, pulling apart those ideas, which is how you formalize something like this, which is going to be quite important over the next few years.
[1:03:04] Unknown: Let me ask you what you think about biology — the easier models. Let's take two different neural networks, two different architectures of neural network, trained on the same data set. Do they create a different world perception or not? That's the same data set.
[1:03:35] Unknown: This has been tested, right? Above a certain number of tokens, large language models trained on transformers have convergent representations in their weights. This was a result from last year. They've now applied that test slightly more broadly, with some different measures, across a couple of vision, multimodal, and transformer-based language architectures, where again they're finding, in that paper called "Universal Subspace," areas where the representations converge. Now, whether that's constructed in the data set, i.e., it comes out like that because we've chucked in all of our pictures, our words, et cetera, or whether it's discovered, is not yet an answered question; nor is how that space is structured, or the properties of that space, because those architectures find it harder to import coherent geometry across different models and different types of design. There's a guy called Markus Buehler who does some really excellent work on this. I think he's at MIT, and he's been doing graph-theoretic representations of these concept spaces. What you're trying to do is move past that test, to whether you can test if there's some discovery or some construction where the domains are so separate that it might point to it. But if it's totally different training data or it's a narrow domain, is there a structural discovery, not what's in the structure, but is there order or hierarchy that suggests that this platonic space is not just words, that there are still girders holding it up? But say this is roughly how we split things
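One standard measure used in representation-convergence tests of this kind is linear centered kernel alignment (CKA; Kornblith et al., 2019), which compares two models' activations on the same inputs even when their feature dimensions differ. A minimal sketch, with random placeholder matrices standing in for real model activations:

```python
# Linear CKA: 1 means the two representations agree up to rotation/scale,
# values near 0 mean they are unrelated.

import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X, Y: (n_samples, n_features) activations from two models on the
    same n_samples inputs; feature dimensions may differ."""
    X = X - X.mean(axis=0)                        # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
base = rng.normal(size=(256, 32))                 # shared "world structure"
model_a = base @ rng.normal(size=(32, 64))        # two different "architectures"
model_b = base @ rng.normal(size=(32, 48))        # reading the same structure
unrelated = rng.normal(size=(256, 48))
print("convergent pair:", round(linear_cka(model_a, model_b), 3))
print("unrelated pair: ", round(linear_cka(model_a, unrelated), 3))
```

In the real experiments described above, model_a and model_b would be layer activations from two genuinely different trained networks evaluated on the same probe inputs.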
[1:05:25] Unknown: Large language models, I suspect, have influenced the creation of that kind of concept of a platonic world. But I'm saying, assuming that you have a completely different model that doesn't learn based on attention but learns on something else: if they create the same world, does it mean that there is only one platonic representation of the world and we just need to find that world? Or are there many?
[1:06:14] Unknown: I guess the way to think about it is in terms of the size. So the form of a chair is as an object informationally bigger than all the elements of that set or that category of the form of the chair. So every individual chair that you can possibly imagine is contained within that object. So when you have a word, imagine that as an object, a category. Now it's a smaller category than maybe chair. It's a more bounded category. There are fewer things it connects to, or fewer instantiations of it, but it's still got to be mapped. You're only mapping with a lens that's small. You're mapping an object. It might be countably infinite or even finite in terms of the composition of it, but you're counting it with something that's doing it one at a time. You're not going to ever fully map that space. Even with something as simple as a word, you're going to get multiple embeddings, but they'll be close together in that space. Similarly, as you move down to things in physics, those things will become discrete. Why is math powerful? Because it's discrete; you get an answer. Those objects become invariant and you can map them fully, which is why they're useful in the computational observer model. Because if you have finite computational power, you need to do more mapping. You want to see more of the space. You want to reduce or compress as much of that into your model. It's a discrete measure version of something like FEP, where that surprise is I have to do a lot of computational work to fit this object into my model. Then I need to make it smaller and compress it. When I sample that thing again, when I practice doing something or when I learn something, that thing gets compressed more and gets more equivalences in the object. It becomes easier for you to integrate into your world model to make predictions. That dynamic means you map that space. Even though that object exists, you're not going to fully map it or perfectly map it with a finite budget.
[1:08:20] Unknown: That was a great answer, Sam. Your work is fascinating. Going back to your question, Hananel: experimentally it could be interesting to test for something. In terms of computation, or computational power in terms of operations, not only do you have allocation operations, you have this thermodynamic or dissipation layer at play, which depends on where the model is being run and on what the computational constraints of the architecture are. That is, whether it's related not just to the words, but also to, let's call it, the kernel dissipation, in a sense.
[1:09:20] Unknown: In the last couple of years, a lot of really talented researchers have come up with multiple measures to figure out this stuff. Some are kernels, some are graph representations. There are five or six different measures. What you're trying to do is get to the right measure for it, where it probably is some composite of those measures. It's a very live question for me and something I need to get to a tighter answer on.
[1:10:00] Unknown: In terms of computational topology, we can find defects, or you can do eigendecompositions or expansions. It feels really interesting to see what would be the form in points, pointed spaces, for example, and the density, and also how that articulates with volume-to-area proportionality.
[1:10:26] Unknown: Weyl's law. This is one of the interesting things about an LLM's architecture, because it's very complex; reducing that non-trivially from n-dimensionally many weights to some 3D representation, or some few dimensions, is hard. So I think there's a bit of work to do. But what's quite interesting at the moment is that a lot of people are working on geometric computational engines for inference or for fusing or for virtual machines, and these create maps that pre-fuse computation, so that representation, because it's a coherent map that has easy composition within it, might be more able to accurately map that physical representation of a shape with those properties. But it's a bit early; they're still at a really interesting foundational stage.
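The simplest version of the dimensionality reduction mentioned here, from an n-dimensional weight or embedding space down to a few visualizable dimensions, is linear PCA via the SVD. The graph-theoretic and geometric methods discussed above are much richer; this is only the baseline idea, run on placeholder data rather than real model weights:

```python
# Baseline linear reduction of high-dimensional embeddings to 3D via PCA.

import numpy as np

def pca_reduce(X: np.ndarray, k: int = 3) -> np.ndarray:
    """Project (n_points, n_dims) embeddings onto their top-k principal axes."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S[:k] ** 2).sum() / (S ** 2).sum()
    print(f"variance kept in {k}D: {explained:.1%}")  # how much structure survives
    return Xc @ Vt[:k].T

embeddings = np.random.default_rng(0).normal(size=(1000, 768))  # placeholder activations
coords_3d = pca_reduce(embeddings, k=3)
print(coords_3d.shape)  # (1000, 3)
```

The printed "variance kept" figure is exactly why the speaker calls the reduction non-trivial: for genuinely high-dimensional structure, a linear 3D projection discards most of it.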
[1:11:28] Unknown: Perhaps you would get more discernment in the effective temperature measures you want to take. I've been experimenting with Mike's data, especially the bioelectric code. Normally, on a substrate, the same temperature will get you a very large spectrum, and you cannot make heads or tails of it. If you discern what the effective temperature of a bioelectric wave would be, separate from the medium, you get a much narrower space. Although it is early, it's an inverse-inverse problem that maps exactly onto what you were saying: in cell-cell communication, for example, or a damaged embryo, you're discerning the mechanical wave of an embryo trying to engage in intercellular communication. I'll keep looking at your work.
[1:12:57] Tim: Really fascinating stuff. Returning to one of the broader themes of the conversation, and to a couple of things you were saying, Sam, including when you initially brought up Wolfram models in your first contribution and talked about convergence: one of the things that, in the history of philosophy, conceptions that start with notions of infinity, an infinite plenitude of forms, for example, struggle with is what I call the selection problem.
[1:13:34] Tim: You end up with the issue of why these forms and not those forms. Alfred North Whitehead speaks to this very directly. This is a problem you get in string theory: there are so many different solutions for the vacuum; why this particular one? You end up with a formal system that is, of course, capable of encompassing the actual world in some sense, at least at the level of the abstract language it's using, but that is radically underdetermined by the world: it encompasses much more than the world, and you have this problem of selection. This is somewhat related to what you said after I mentioned saltation, when you mentioned these leaps in cellular automata. You said it's a general phenomenon, and I completely agree: phase shifts, criticality, all of that. We all acknowledge that these are general phenomena, but the way they tend to be explained is in a Hermann Haken, synergetics way: the order parameters decay, the system goes into a more chaotic phase of its evolution, and then it's captured by another attractor. It leaps in the landscape, a fitness landscape, an energetic landscape, whatever, to another basin of attraction, and it ends up there, and the new non-equilibrium steady state defines it. This comes up a lot when Karl Friston talks about these things as well; you mentioned the FEP in passing, and there are his contributions to the previous Platonic space discussion, and a discussion I had with him and Mike and Chris Fields on Mike's channel a while ago. The real question we're butting up against in this discussion of the Platonic space is: where do those attractors come from? Even if you can explain all the behavior in terms of attractors, these models coming out of non-equilibrium thermodynamics have historically relied, as physicalist models tend to, on a predefined space where the attractors are essentially already there. Then you can model the evolution of a system through that landscape, captured by this attractor or by that one. What such a model struggles to deal with is the actual genesis of those attractors. So, again, the question of the genesis of forms arises. It's really the same question.
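A minimal sketch of the kind of model Tim is describing: a predefined landscape with two basins, where noise occasionally kicks the system out of one attractor and it is captured by the other. The double-well potential here is an arbitrary textbook choice, not any specific biological or neural landscape:

```python
import numpy as np

rng = np.random.default_rng(2)

# Predefined landscape: V(x) = x**4/4 - x**2/2, attractors at x = -1 and x = +1.
def dVdx(x):
    return x ** 3 - x

x, dt, noise = -1.0, 0.01, 0.5   # start in the left basin
hops, basin = 0, -1
for _ in range(200_000):
    # Overdamped Langevin step: drift down the gradient plus a random kick.
    x += -dVdx(x) * dt + noise * np.sqrt(dt) * rng.normal()
    if np.sign(x) != basin and abs(x) > 0.5:
        hops += 1                 # escaped one basin, captured by the other
        basin = int(np.sign(x))

print("basin-to-basin transitions observed:", hops)
# The dynamics explain *which* attractor captures the system and when,
# but the attractors themselves were written into V(x) from the start.
```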
[1:16:15] Unknown: That's what religious metaphysics literally does in every single form of persistent theology. The major monotheisms, plus Hinduism, Buddhism, and Taoism as cultural structures, are the biggest and most persistent structures, and they all do that in their metaphysics. They all have, in the language of those traditions, their own world of forms and their own way into that structure. It normally got formalized maybe a thousand years after the tradition started, as philosophical traditions began to deepen these spaces and talk about the evolution of this possibility space: from the one to the two to the ten thousand things in Taoism; from the unending infinite, through some contraction, in Kabbalah; and in Hinduism the tattvas and Brahman and Atman and how those work together, which are the biggest infinite objects we've got. They all have this very loose language, but that language can be expressed computationally within this model. It's basically categorical: it's saying these are infinite categories of things. It's not the whole thing, but you can now create a coherent model of how this selection principle applies to an infinite object. Because if you're going to posit an infinite object at the very top of the chain, which is what all these persistent traditions do, give or take some nuance on Buddhism, then you need to be able to walk through that process. This infinite thing must do it infinitely fast, or as fast as possible, at least until, in some traditions, we get free will and choice. You can now attempt to bridge this with models where this Platonic space, and the coherence across not just all the religions but Greek philosophy and aspects of the chakra system all across the world, are connected. If these attractors are real and these forms are real, then the linguistic construction around the names, or what those attractors tell a person to do as the sub-order rules, matters less than the existence of a structure that seems to be consistently created or discovered. Again, this creation-versus-discovery point is critical, because the question is really: is this space closed or is it open? Something like postmodernism in philosophy functions, in this model, as the question: what's my biggest possibility space, my biggest model that I can search with, explore with, exploit to structure my space? And postmodernism is the mathematical equivalent of an open category, an ever-branching tree that at the very limit doesn't come together: every computation is infinitely far away from every other, so there are no speedups at the limit; it's all irreducible and devolves to randomness if you think about it as a network structure. I'm really picking on a deconstructivist view of postmodernism here, not constructivist postmodernism. But these convergent ideas, whether it's Plato, the religions, Leibniz, or Spinoza, didn't have a way to create the architecture of those structures coherently with the maths, physics, and computational sciences of their time, because the language wasn't there.
[1:20:23] Michael Levin: Yeah.
[1:20:24] Unknown: I think we're in a position today where you have tools in language and results in empiricism with which these things get connected. You can now start to express an argument, not a proof, but a coherent, internally consistent structure, that works through how Platonic space has these forms. You may not agree with it, and it makes axiomatic or metaphysical assumptions: take away the theism and there's still some bucket of infinite information or structured space that we are discovering. That's a fundamental brute-fact axiom. Go to the other view and the brute-fact axiom of materialism is that there's a single singularity at the Big Bang, or the universe was always there, or a block universe, or a multiverse. So you accept different brute facts. But it's an interesting exercise, because I think it speaks to intuitions and structures that we continually create but don't have process-based explanations for in any formal language. I don't mean logic; I mean something that connects directly to results in maths, physics, and chemistry, the same dynamic, not a different dynamic. I think that's what you can do.
[1:21:53] Tim: I substantially agree with the vast majority of that. You're advancing a kind of perennialist thesis on the history of religion. I published a paper several years ago on creation myths and evolutionary process, and I absolutely agree that all those myths can be decomposed into operators. I think that's what metaphysics is: the explicit practice of metaphysics is the decomposition of myth into operators. That's what I call diagrammatic metaphysics. Certainly that's what Plato was doing with the Timaeus, and Aristotle was very good at coming up with these diagrammatic schemas, his four causes. So I substantially agree with everything you're saying. The question is: can the computational approach, which I hold out great promise for, move us out of that, and how far? It's similar to the question I was asking about a Darwinian approach: how far can it move us? To what extent will it still rely on taking certain invariants for granted, and to what extent is that not unrelated to the physical structure of the computational object itself? I don't think it's going to get us out of asking, if one is so inclined, those big metaphysical questions. The other thing I'd point out, which is adjacent to what you were saying, is that physicalism is already a monotheistic mode of reasoning that comes directly out of Christian and scholastic philosophy. People like Newton, certainly, and Laplace developed a highly refined mathematical language for trying to bring together a certain conception of theology with their physical science; that was their project in many ways. That's what I explicitly critiqued in my talk for this symposium. To your broader point, it's important to recognize the theological or mythological origins of most of these thought forms. I want to hear Matt weigh in on this, because it's something he has a massive amount to say about.
[1:24:26] Matt: I know, it's very, very interesting. I'm glad this connection is arising. What comes to mind now is, Tim, you and I were talking offline about the difficulty of putting some of these ideas into natural language. We're searching for diagrams, we're trying to formalize this, and yet we also want to be able to communicate meaningfully about how it changes our self-understanding as human beings. I wonder whether metaphysics can be understood as a translation of these mythic intuitions into some kind of formal operation or set of operations. But if we go back to Plato, it seems to me he's never trying to translate one into the other but instead to play them off each other: let's see how far dialectics can get us. It's still natural language, but he was using the geometry available to him at the time, as in a dialogue like the Timaeus, to work out some of the ratios he was perceiving in the movements of the wanderers, the planets, through the fixed stars. In almost every dialogue, dialectic ends in an aporia: rationality meets its limits, and then he offers a myth which in some sense illustrates symbolically, imaginatively, what reason can't quite grasp because it is inherently limited. Earlier, Tim, you said that rationalism meets this limit, and I think all the best rationalists from Plato to Hegel, if I can call Hegel a rationalist, and Hegelians wouldn't be happy about that, recognize that the recognition of the limit is already to overcome the limit. So rather than imagine we might ever get out of myth, my own orientation is as a more or less Neo-Platonist, because I think it's very hard to think outside the grammar that Plato left us. Whether you're in the West or even in the Islamic world, there's just so much that's structured and canalized by Plato's way of thinking. At the end of the day, we're not going to get out of the need for myth, and however science advances, we're still going to need to tell ourselves a story about what those formalisms and the math mean.
[1:27:12] Tim: I fully agree that myth has a very pressing and ongoing role; we may slightly disagree about what that role is, but I absolutely agree. I just want to throw this back to Sam, because earlier you were saying that if you show someone an example, they would say, paraphrasing you, "that doesn't really explain my experience to me," "that doesn't seem to represent my experience." One of the things we're always running up against, and this is Matt's point about what we were discussing offline in terms of trying to express these things in natural language, is that we have a bunch of different modes of expression, and you can think of them as different languages if you want. As I said earlier, I feel very strongly that I can say things when I'm improvising as a musician that I would never be able to say in natural language, but they're nonetheless expressing something; there's something non-overlapping. I'm not sure this is what you're saying, Sam, but I would worry about any claim that a computational language could become a master discourse, that the computational language would succeed where, say, the EEG didn't. It would certainly explain different things, and it might have vastly more
[1:28:33] Unknown: It's your point that it explains a different layer in an integrated fashion, as opposed to claiming it explains everything inside the function. Imagine, from an observer-theory perspective, that you're this complex observer with your big cognitive light cone of all the causal history that's built you up. You have some boundary of what you can compute, and then you have some limit, set in your world model, of what you think is possible to compute or see. Things like myth and religion, and even superstructures like fascism, nationalism, and socialism, all function as limit-setting devices to coordinate observers in that space, to try to get them to go the quickest route, or what their limited world model deems the fastest route. So it's effectively selection and evolution all the way to the birth of myth, and you see it in how religions evolve through time; there have been loads of papers and books on this. You start with small conceptions, and eventually what survives is a big conception. So the role of myth is a top-down cognitive apparatus, a way to set the biggest space. When you get to the limit of that space, given the way our apparatus and our brains work, because we can't really explain things beyond cause, input, function, output, cause, effect, we can't get past that boundary. That's where proto-myths, or these bigger conceptions of one substance, monism, or unity, have utility, because they have a computational function in the way a bounded observer computes limit objects. They can actually compute them: they can say, "oh, this infinite thing I can't compute is equal to one," in order to get their computation to run. That's a very trite example, but that's the rough idea of how these things function.
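A concrete toy for that "this infinite thing is equal to one" move: the geometric series 1/2 + 1/4 + 1/8 + ... can never be finished term by term, but a bounded observer can swap the whole infinite object for its closed form and keep computing. Purely illustrative arithmetic, not a model anyone in the discussion proposed:

```python
# The infinite object: sum of 1/2**n for n = 1, 2, 3, ...
# No finite loop ever finishes it; each pass only narrows the gap.
partial, n = 0.0, 1
while n <= 50:
    partial += 0.5 ** n
    n += 1
print(partial)            # 0.99999... still short of 1 after 50 terms

# The "limit object" move: treat the entire infinite process as a single
# finite value (geometric series a / (1 - r) with a = 1/2, r = 1/2).
closed_form = 0.5 / (1 - 0.5)
print(closed_form)        # exactly 1.0, now usable in further computation
```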
[1:30:44] Unknown: I wonder if it would be useful, in the context of being interested in explanation in particular, to pin down and distinguish different types of explanatory why-questions, or different types of targets. What I often see in my field is an interest in capturing what's distinctive about genuine, legitimate explanations and what doesn't count as an explanation. There's an appreciation that not just scientists but humans in everyday life ask different types of questions about even what we think of as the same system in the world. We ask different questions about gene expression, or pick whatever physical stuff is of interest. Sometimes we ask a causal question, or a functional question, or a question that requires some kind of optimality or efficiency answer. We think of those as very different types of explanatory why-questions, and we think of explanations as answers to those questions. The frameworks that were very nicely listed out, mechanical philosophy, forms coming from Platonic frameworks, are sometimes pitched as associated with different types of questions. But you can't give an explanation for something unless you specify a well-defined explanatory target, and there are very different types of targets showing up in these discussions. One of them is "why does this form exist?" versus "why does it change the possibility space of what can happen?"; those are very different questions, and a standard causal explanation isn't wired up to handle the possibility-space question in the standard way we think about causal explanation. It also relates to the challenge of words and terminology, because "mechanism" and "mechanical philosophy" mean about 800 different things to about 800 different people. Getting precision about what we mean by "constraint" or "mechanism" is hard, and one way to start wrangling it is to distinguish different types of explanatory targets. It's interestingly challenging to find the right term to start with before you even unpack it. Metaphysics is coming up too, and that's going to mean very different things to different people. I'm interested in thinking about all of those more, but we don't think there's just causal explanation: there are lots of fascinating debates now about non-causal mathematical explanations and functional evolutionary explanations, which are viewed as distinct from standard causal and mechanistic explanations. Lots of interesting, complicated things. So, just this question about the potential usefulness of distinguishing different types of explanatory targets.
[1:34:46] David: I think that's a really excellent question. I would go with more of a pluralistic approach. Has someone mentioned Aristotle's four causes? The way I would look at it is in terms of what's most useful for guiding the particular research programs and experiments you're doing. With Michael Levin's work, and I've had some conversations with him about this, the move toward teleological and functional explanation is driven by pragmatics: it provides a kind of guidance for hypothesis testing and model formation that's very useful. If you're stuck with causal explanations of what he's trying to deal with, it's like trying to explain how a computer works without allowing yourself to talk about software. This gets back to Sam's discussion of computation. The whole computational angle on this is that what makes an explanation useful is partly how we are able to use it, manipulate it in our minds, understand it. Things get to be so complicated that we can't deal with them.
[1:36:31] Unknown: I think one way to understand a strategy scientists use to manage complexity is that they pick explanatory targets that are precise, and they specify them in a way that's very narrow. That anchors what they want to explain, and there's lots of detail they can now set aside: it isn't a difference-maker for this target. But then what can happen is that they stray from the target they started with, or we try to lump everything including the kitchen sink into the explanatory target. And you just can't; at least on the way I think of explanation, you can't give an explanation of everything about a system. There is no complete, whole explanatory target; it's not even a well-defined question. It's only once you specify the target that you could ever give an answer. But sometimes it's hard to define that target in a well-defined way, and then it's hard to stick with it. We might start by saying, I'm going to give an explanation for why this form explains this set of potential outcomes. Then someone asks, what explains the existence of that form? We've changed the explanatory target now; you're asking a new question, so we've got to move the goalposts. Or there's this interesting attempt to put it all together and give the whole explanation. There are debates about the standards an explanatory target should meet, and also about which notions of causation are useful. I definitely like the pragmatic angle. I'm just not sure Aristotle's four causes are what scientists currently use, or what will get us to the goals we want. So it's also a question of which of these frameworks we want to use, and whether we need to develop them, change them, or add to them.
[1:38:57] David: I could see having a whole zoo of explanatory frameworks, explaining things in terms of different phenomena and different levels of organization. We've seen that in biology: you can explain things at all the different levels of organization. So I'm perfectly happy to be very pragmatic and pluralistic about that; I don't see anything all that wrong with it either. You mentioned making things very simple and then finding the problems when you do that. Sometimes that's what you need to get a program going. Look at behaviorism in psychology: it turned out to be badly wrong about a lot of things, and short-sighted, but for a while they had it going pretty well. They were able to get a lot done with a very simple way of explaining, say, animal behavior, and made a lot of progress. After that progress, people said, this doesn't explain this and it doesn't explain that; you need to go beyond behaviorism.
[1:40:17] Tim: I think that's such an important contribution. We end up potentially talking past each other, or muddying the waters continuously, to the extent that we don't get clear about the kinds of questions we're trying to ask, recognizing that researchers from different disciplinary backgrounds may have very different default modes of explanation. As a biologist, one might think nothing in biology makes sense except in light of function: if I want to know why an organism has a trait, the character state that it has, I might need to appeal to a functional explanation in order to feel I've explained it. But a physicist might not have the intuition for that mode of explanation at all: we can explain it at this lower level, and maybe that corresponds to a causal explanation. The two intuitions can just glide off each other. This is one reason why biology isn't reducible to physics in some important sense: we deploy very different explanatory modes, we ask different kinds of questions. But one thing that happens a lot in this specific context, in my opinion, is an appeal to instrumentalism: this is the most useful approach, and that's what adjudicates whether I employ it. Then, in these broader conversations, there's a subtle slide into talking about the nature of reality, and I can't necessarily tell when, because it isn't clearly specified, we've moved from "this is a useful methodological approach in some scientific domain" to someone making a claim about the nature of reality as such. To the extent that those things get muddied, we have a lot of problems. We've had problems historically, and we still have problems, with the reification of a methodological stricture. For example, certain things have been methodologically excluded so that science can proceed in a certain way and we can ask very clear questions. But then there's a tendency, shaped by thousands of years of myth and attempts to understand our status and relationship to the world, to forget that that was a methodological exclusion. We end up saying, metaphysically, that's just epiphenomenal, or that's not a thing, that's just woo-woo, and we don't notice. Lauren, in an extended way I'm saying thank you for that contribution, because it's incredibly important and something I think about a lot. I need to read your book, by the way. Different modes of explanation within biology are very important to me; even the basic Dennettian distinction between the "how come" and the "what for" is really important for us to get clear on. When I brought up the little typology at the beginning and said mechanical philosophy and Plato and so on, I did say I wouldn't unpack exactly what each means, because that would be a whole presentation in itself, but it then becomes really important. We've got a few options here; now we do the diagramming. How do these operators work in each of these schemes? We can ask what each of these potential modes of explanation affords us, and what we cannot ask when we're thinking in this way. I agree.
[1:44:18] Unknown: I wonder if one way this can help, too, when presenting work to new or critical audiences, is to make clear that the work isn't intended to explain everything: it's intended to explain a certain kind of thing, a certain kind of explanatory target. Sometimes the criticism is "it doesn't do this." That's fine; it's not supposed to. If you expect a single framework to explain everything, that's not an accurate picture of what scientists are doing, given the massively complicated and different types of questions they ask. So it can be protective, and maybe it can also satisfy that audience, because we're not saying this is the way to do all explanation, but it does this thing. I appreciate the assumption that science, the methods we use, and the utility element should be of a certain kind; I wonder if that's related to reductive assumptions too, where the way you understand everything is always by going further down. I wonder if there's a way to specify the goals associated with these explanations, so that you could say: this is useful for these goals, even if it's not useful for the ones you're interested in.
[1:46:00] Tim: But valid goals, right?
[1:46:03] Unknown: They could be. The hard part is arguing about the goals; the easy part is that once you fix them, we can say in a more objective way that my approach gets you to these goals and yours doesn't. I have fascinating discussions with scientists who think of causation as the only way explanation works, so they want their model to be a causal model. It's dynamical; we think of it as explanatory, and it's of course very informative and useful, but they want it called a causal model because, to them, that word means it's a real explanation. So there are really interesting issues with the status these words carry, and dealing with that is non-trivial and fascinating. Sorry, Matt.
[1:47:03] Tim: Baggage, right? Philosophical baggage, those terms.
[1:47:08] Matt: I love that we've ended up here, because to me this speaks precisely to the importance of distinguishing between metaphysics and the special sciences, where each of the special sciences is trying to offer a domain-specific explanation based on a very specific question or problem. I see metaphysics as engaged not really in explanation but in descriptive generalization: looking at what all the special sciences have found, and the sorts of explanations that have very often proven instrumentally explanatory, in the sense of helping me make predictions and control the domain-specific phenomenon I'm interested in as a scientist. Metaphysics then tries to generalize: what are the categories that would apply across all of these special sciences? Not to seek explanation, but description general enough to be inclusive of what all the special sciences are doing. That helps us avoid any special science saying, I've found the one cause to rule them all, and now I can explain everything else. That's a bad form of metaphysics; that's metaphysics as explanation. We want metaphysics to remain descriptive generalization, not explanatory, because, as you're pointing out, Lauren, an explanation very much depends on the question you're asking. There's no global explanation, or at least I think we should be very suspicious of the idea of a global
[1:48:44] Unknown: explanation. I think there's a language point here about what we talk about when we talk about metaphysics: these huge, overarching general points and huge questions that are unanswerable, versus what we can formalize in a common language, the layer down from that, the world of these causally effective abstract objects, these attractors. The interesting thing is that there are many different frameworks, formalisms, and theories in all the hard sciences, but generally they're all expressible in computational language. When you comport a map of those things into a common language, there's a non-trivial benefit. Typically, the structure of science, at least in the 21st century, is that a lot of people work at the edges of a discipline, pushing out the frontier of their specific explanatory target. By joining the language up in a single map, you get a lot more low-order, easy-to-exploit computational free lunches from copied equivalences across domains and across formalisms, which might give you deeper explanatory power within that graph, or at its edges, because it unlocks something in some other part of it. What's interesting here is that if you comport the language of metaphysical systems, not the overarching questions but the systems they describe, you can also map that coherently against the computational expressions of all those theories. That isn't an explanation, but it is a structural architecture that can hold those things together so they can be probed in a more joined-up way than the twenty different mathematical languages we have for lots of different things in physics and lots of different things in maths. That, I think, is part of why pure math is really valuable: ultimately it's determining the bounds of that structure, or the operators for that structure, that are most universal and most useful for enabling that detailed mapping from the bottom back up to some universal expression of that
[1:51:14] Unknown: language. I agree. And I would like to ask what you think of, let's call them, pre-linguistic conceptions. Because, as far as I understood it, one also has in basal cognition this mechanism, synesthetic, some would argue, that is the basal mechanism for perceiving, for example, patterns in space without direct observation. But there is more to it: the potential to find new ways to express this. Otherwise one can always fall into Gödel's prediction that whatever we try to describe rather than explain will have some sort of blind spot for something that may be rather relevant to our quest.
[1:52:31] Unknown: That blind spot is always there, because you're ultimately coarse-graining: you're going to get a very lossy representation of a big meta-object that is pre-linguistic. When you think about pre-linguistic structures, we've talked about them in psychology, and Jungian archetypes are the typical example people use. They're meant to be big things; every decision you make is a composition of them. In this language, they're highly causally effective: a structure that is always present in the function you're running as an observer when you're figuring out how to use your internal model to get wherever you're pointing in that space. When you go down to the level of basal cognition, think of very young babies and how they can make out shape only in black and white. Why? Because that's simple: the basic binary distinction you can use to start building a stable world model. That's how complexity gets constructed. You have these very highly causal categories of things that you identify first, pre-linguistically, and then they're scaffolded with linguistic, more detailed, or more fine-grained conceptions of those objects as you create more equivalences and go through that process. So you handle it in this mapping as a domain: a domain where the computational object has lots of coverage over the lower domains, the subcategories. If you're carving up the domains, you do it by negation: you exclude informational objects that are more complex, i.e., that don't meet a threshold. Then you work down to the most fine-grained, most specified part of that structure, where the most computational rules have to be on for anything to happen, which is the real world we live in today. This domain structuring, this foliation of the computational structure, is one of the things starting to come out in empirical results: around IIT, where they did a decomposition into four layers; in tests of how LLMs map spaces, where everything maps in layers and pulls together; and in brain regions, in tests of which parts do what and where they come together. This dynamic feels like it's working all the way up: you construct very simple, very few primitive objects first, and as we explore and exploit those objects to reach more of the space, we get to the boundary where we live today, the present moment, where we're either exploring or exploiting some object in our causal history that we've already got utility from. It's a computationalist model, which can sound quite cold, but it implies that those things actually exist, are real, and matter. The fallout of that hypothesis is that things like pure relation or pure difference are incorrect: pure difference is actually a very good exploration policy, but not a good exploitation policy at the limit. And when you're dealing with an actually infinite space of possible computations or possible states, that becomes quite important as time goes on; it might work well in finite time. What we do as systems, as groups of observers or groups of people traversing these spaces, is bounce between exploration and exploitation as an optimal strategy to colonize or search the spaces, to capture as much of that structure as we can. That gives you an informational, memetic angle on something physical in evolution.
So you add boundedness: you don't just have persistence in time as a metric, you don't just have survival; you also have boundedness, how much computation you can do. Those things balance out as exploration and exploitation in that dynamic.
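A minimal sketch of that explore/exploit bounce, as an epsilon-greedy agent on a toy multi-armed bandit; the bandit, its payoffs, and the epsilon value are arbitrary stand-ins for "searching a space," not a model of observers:

```python
import numpy as np

rng = np.random.default_rng(3)

true_payoffs = np.array([0.2, 0.5, 0.8])    # hidden structure of the "space"
estimates = np.zeros(3)
counts = np.zeros(3)
epsilon = 0.1                                # fraction of steps spent exploring

total = 0.0
for t in range(10_000):
    if rng.random() < epsilon:
        arm = rng.integers(3)                # explore: sample somewhere new
    else:
        arm = int(np.argmax(estimates))      # exploit: reuse what already paid off
    reward = float(rng.random() < true_payoffs[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    total += reward

print("average payoff:", total / 10_000)     # near the best arm, minus overhead
print("time on each arm:", counts / counts.sum())
```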
[1:57:15] Tim: Yeah, I think that's a great way of framing things. When we say pure difference isn't enough, you're cryptically referencing Deleuze there, and you said a couple of things about post-structuralism earlier. Within that discourse, and even within the philosophy of the person we're referencing, it's important to ask what operation that conception of pure difference was looking to achieve. Certainly in that philosophy, which is a very evolutionary philosophy, there's no sense that the exploit aspect is neglected, thanks to a doctrine like stratification. There's a methodological priority being posited by a thinker like that: instead of erecting the strata, the forms, as something a priori, with a certain authority attached, something that cannot really be deviated from because we're always going to be recaptured by it, the function of pure difference is to say that novel forms, novel strata, can be generated in an open-ended way, and we will then exploit them.
[1:58:47] Unknown: I think you're totally right; it's a question of finite versus infinite time. There's finite game theory and there's infinitary game theory. If you have infinite time, that strategy is optimal, but if the structure is closed, and again it comes back to the structural component of the object, things change. If the object's formalism and structure are proven wrong, if you can generate physics from an open category, then this is wrong. But if the structure is closed, then at the infinite limit that strategy is suboptimal, because as you asymptotically approach the end point of all possible states, going for difference means you're not mapping the simplest connections that would bring you closer to that state. It's computationally inefficient as you approach a limit point. In finite time it's absolutely fine, and because we're finite and get 80 or 90 years, it's probably the best strategy we've got. But at the infinite limit, within that structure, it's an inefficient strategy, because it fails to achieve convergence in the fastest possible way. If you think of this as a best-of-all-possible-worlds argument, where some infinite space of informational objects, since information can be expressed in the form of energy, is imagined as a space with infinite energy as well, and something with infinite energy has to go as fast as it can, then you can make that jump from finite to infinite time and ask: would that strategy be optimal given that predicate? Probably not. But if you get rid of the predicate, you don't need that axiom.
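The finite-versus-limit point can be made concrete in the same toy setting: a heavily exploratory policy is competitive over a short horizon, but its exploration overhead never amortizes as the horizon grows. Everything here, the arms, the schedules, the horizons, is an illustrative assumption, not infinitary game theory proper:

```python
import numpy as np

def run(horizon, eps_schedule, seed=0):
    """Average reward of epsilon-greedy on a fixed 3-armed Bernoulli bandit."""
    rng = np.random.default_rng(seed)
    payoffs = np.array([0.2, 0.5, 0.8])
    est, cnt, total = np.zeros(3), np.zeros(3), 0.0
    for t in range(1, horizon + 1):
        arm = rng.integers(3) if rng.random() < eps_schedule(t) else int(np.argmax(est))
        r = float(rng.random() < payoffs[arm])
        cnt[arm] += 1
        est[arm] += (r - est[arm]) / cnt[arm]   # running mean estimate
        total += r
    return total / horizon

heavy = lambda t: 0.5                # keep exploring at a constant high rate
decay = lambda t: min(1.0, 10 / t)   # explore early, then settle into exploitation

for horizon in (100, 100_000):
    print(horizon, "steps -> heavy explore:", round(run(horizon, heavy), 3),
          "| decaying explore:", round(run(horizon, decay), 3))
# Over 100 steps the gap is small and noisy; over 100,000 the constant
# explorer stays pinned below the best arm's payoff, while the decaying
# one approaches it. The longer the horizon, the costlier pure difference.
```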
[2:00:49] Tim: I'm really fascinated by what you're saying, Sam, but there is an inherent tension between using game theory, talking about strategies and what would be optimal to do, and then bringing in the infinite time scale as a way of adjudicating between strategies, because strategizers are not working at that time scale. There's a discussion to be had about conditions of closure, and then, when you're running things and developing your models using a system that has been intentionally closed in its design, there's a question of the extent to which you're just recovering your priors by recognizing the importance of closure if you want to achieve a specific goal in a finite time period.
[2:01:53] Unknown: I think there are two points: on the construction of the priors, and on the formalism that gets you to this closed object. It's built from the bottom up, right? It's built from a two-cell category that gets imported upward, so it's a proven object that imports this structure. It's not the only possible formalism, but it is not an arbitrary take: this construction must work given the properties of these computational objects. And I think the point on finite versus infinite time is right. In finite time, yes, this is absolutely fair: you can pick whatever strategy you want, because that's the ability to choose. But what's mathematically imposed by the structure is mathematically imposed by the structure; it's not a preference within it. There's a huge difference between the limit and what we can sample. Within the light cone that we get to sample from, we get to choose from a broader range of strategies than the single optimal strategy for closing that space as an evolutionary agent. And that is implied by the idea of computational irreducibility: we can't compute, we don't know, that that's the best way, and we don't know that the structure has to close, because we can't get to that boundary. That's what gives us real choice at the boundary, in which strategy we choose to exploit that space or discover structure in it. So I think you're absolutely right: it's a function of the math, not a function of the point where you're at.
[2:03:42] Tim: I'm going to have to rush and eat because I'm having a blood sugar crash; I haven't had brekky yet. I also just wonder if there's a constructivist argument that can be brought to bear, a constructivist mathematical argument, against the function that the infinite limit is actually playing.
[2:04:06] Unknown: Constructor theory is doing that. But constructor theory is still using an infinite base object, right? They're still using an infinite multiverse as a base object. I don't think they've specified a geometry of that structure yet, or something that imports geometry; it's a metaphysical assumption at the moment. But the common thread is that they're importing some structure and building it bottom-up in a way that doesn't require that endpoint.
[2:04:40] Unknown: This is just one construction; the other construction is totally valid and being worked on by some people whose work is unbelievable. Those ideas are pretty critical in translating the minimal observer model that the physics project team did into this category-theoretic construction, because it's all about possible and impossible transformations. You're absolutely right, it goes both ways.
[2:05:05] Tim: For sure. I'm really looking forward to reading your paper; I'm going to look it up. It would have been really interesting to have the conversation Adam Safron gestured to right at the beginning about convergent evolution, because you're relying on, or continually deploying, a notion of convergence. It would be interesting to compare and contrast that, given the context of this discussion, Platonic space in biology, or stimulated by biology and reaching beyond it, with the way convergence has in fact occurred many, many times. I always like to bring up the example of venom evolving more than 100 times in actual biological evolution at finite time scales. Then we can start to look at the role of history, at finite, definable but vast time scales, in stimulating those convergent events.
[2:06:09] Unknown: In the paper, I did an extension, an application of it to some of these ideas, where convergent evolution is effectively finding some optimal point, some valley or peak in a fitness landscape, that is optimal across the entire landscape for that class, given its computational potential. Those things become very important in asking whether this contention aligns with empirical results. The contention here is that the hints are starting to be there, not just in historic work on convergent evolution but, more personally, in Michael's work, where you get this idea of some structure of the space: some subcategorical object, some low-down informational object, that might be sampling from something bigger. That may eventually go to the level of geometry in maths, the actual shape and properties of the object, or it might stop somewhere else. But what's interesting is that we can now probe that space in different domains, in words and in pictures, and see how that space maps. The mapping may well be totally different from what everyone thinks, but the fact that the mapping is now coherently possible is, I think, one of the most exciting things that will happen in the next ten years of science. More exciting, I think, than whatever's going on in string theory.
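A sketch of convergence in that sense: independent hill-climbers started at random points on one fixed landscape keep ending up on the same few peaks, the way venom keeps being rediscovered by unrelated lineages. The one-dimensional landscape, mutation scale, and run counts are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# A fixed fitness landscape with a dominant peak near x = 2
# and a minor peak near x = -2.
def fitness(x):
    return np.exp(-(x - 2) ** 2) + 0.4 * np.exp(-(x + 2) ** 2)

endpoints = []
for _ in range(100):                       # 100 independent "lineages"
    x = rng.uniform(-6, 6)
    for _ in range(2000):                  # mutate, keep the variant if fitter
        candidate = x + rng.normal(scale=0.1)
        if fitness(candidate) >= fitness(x):
            x = candidate
    endpoints.append(int(round(x)))

values, counts = np.unique(endpoints, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))
# Independent starts, independent histories, yet the endpoints pile up on
# the same few peaks: the landscape, not the history, fixes what gets found.
```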
[2:07:52] Tim: I'm sure I agree. Fascinating stuff, Sam. Thanks, everyone. Really fascinating discussion. And I hope to speak to many of you again.