Watch Episode Here
Listen to Episode Here
Show Notes
This is a ~1 hour conversation between Josh Bongard (https://www.uvm.edu/cems/cs/profiles/josh_bongard), Tom Froese (https://www.oist.jp/research/tom-froese), and me on irruption theory, polycomputing, and the mind-body relationship. Some relevant links from the discussion:
Tom's paper on irruption theory: https://www.mdpi.com/1099-4300/26/4/288
Josh and I on polycomputation: https://www.mdpi.com/2313-7673/8/1/110
Eduardo's paper on multiple embodiment: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=KWCQjl0AAAAJ&citation_for_view=KWCQjl0AAAAJ:9yKSN-GCB0IC
CHAPTERS:
(00:01) Polycomputing discovery and implications
(05:47) Multiple use and nonreductionism
(11:48) Observer realism and closure
(20:20) Intrinsic computation and grounding
(29:14) Values, noise, and regulation
(37:18) Physics, AI, and noise
(45:01) AI, noise, and multiscale
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:01] Tom Froese: I've read through your paper, but the best thing is if you can summarize briefly what you think are, for you, the most important take-home messages to focus on, and then I can give some feedback. Or do you have any idea of how you want to do this?
[00:20] Michael Levin: Sure, yeah, this is the Poly Computing paper, right?
[00:23] Tom Froese: Yeah, that's right.
[00:24] Michael Levin: Okay, Josh, you want to go first? Why don't you say what you think it is?
[00:27] Josh Bongard: My then PhD student, Atousa Parsa, who is now starting as a postdoc in Mike's group, was interested in computing in different kinds of substrates, physical computing, and stumbled into this observation that you can design a physical, inorganic material that, if you vibrate it in the right way, you can read out two different results, two different computations at the same place at the same time. You can zoom in on an individual particle in that material, take the fast Fourier transform of it, look at the power of vibrations at different frequencies — at one frequency you can see the result of an AND, and at another frequency you can see the result of an OR. We were super excited about that and we published; it was interesting from a physical computing point of view. This was five years ago now, but the implications of that, for me at least, are growing by the day. The fact that these things are not as separable as we thought, that it matters how you observe the material — you can have two different observers observing the same object at the same time and see two different results — led Mike and me to really think differently about what it means to compute and possibly what it means to be alive. I'll leave things there and hand over the baton to Mike.
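(For a concrete feel for the readout Josh describes, here is a minimal, purely illustrative Python sketch. It is not the granular-metamaterial simulation from Atousa Parsa's work: it just drives a toy quadratic nonlinearity with two tones whose amplitudes encode the bits a and b, and shows that an observer reading spectral power at one frequency recovers an AND of the inputs while an observer reading another frequency recovers an OR. The frequencies, thresholds, and nonlinearity are assumptions made for this example only.)

```python
import numpy as np

# Toy demonstration of one signal carrying two logic gates at once.
# Bits a and b set the amplitudes of two drive tones; a quadratic
# nonlinearity (a stand-in for the material's physics) mixes them.
fs, T = 1000, 1.0                      # sample rate (Hz) and duration (s)
t = np.arange(0, T, 1 / fs)
f1, f2 = 50.0, 80.0                    # drive frequencies (Hz)

def read_gates(a, b):
    drive = a * np.sin(2 * np.pi * f1 * t) + b * np.sin(2 * np.pi * f2 * t)
    response = drive ** 2              # nonlinearity creates mixing products
    spectrum = np.abs(np.fft.rfft(response)) / len(response)
    freqs = np.fft.rfftfreq(len(response), 1 / fs)
    power_at = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
    and_bit = int(power_at(abs(f1 - f2)) > 0.1)   # intermodulation term scales with a*b
    or_bit = int(power_at(0.0) > 0.25)            # DC offset scales with (a^2 + b^2) / 2
    return and_bit, or_bit

for a in (0, 1):
    for b in (0, 1):
        and_bit, or_bit = read_gates(a, b)
        print(f"inputs ({a}, {b}): AND observer reads {and_bit}, OR observer reads {or_bit}")
```

The point is only that the same physical response supports both readouts at the same time; which computation you see depends on where the observer looks.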
[02:00] Michael Levin: What I'm interested in is the kind of broader implications of this. First of all, there are some fundamental philosophical implications here, because we spend a lot of time in biology and other places arguing about what something really is. This gets to a very concrete example, because if somebody produces, let's say, an algorithm or a machine, they say, okay, I know what this does because I'm the one who designed it. I know exactly what it does. Well, it turns out that, yes, you may have designed it and you may have a model of what it does, but it's entirely possible that someone else, a different observer with a different perspective, whether it's vibrational or something else, is seeing something completely different. This idea of arguing about what it really is, I think, is mistaken. In biology, one really useful and interesting way to think about this is that in any body, there are a variety of levels and subsystems, all of which are observing each other and seeing different computations happening. All of these things are overloaded in that sense; they're not just doing one thing, because nobody knows what it's really doing. There isn't anything that it's really doing. It's just what every subsystem can squeeze out of the signals it's getting, in terms of some kind of computational model. That has some interesting implications, for example, for evolution, because one of the issues is, let's say you've got this highly evolved system that has high fitness because it's doing all these intricate things. Making the next improvement is really hard, because once you start messing with things, you might get an improvement in some aspect, but it's going to screw up a lot of the stuff you've been doing before. So making changes later is really hard. But what if, in addition to making changes in the system, you leave the system in place and instead evolve your ability to interpret what's already going on? You're making advances and improvements on the observer end, such that without changing the system, without running the risk of screwing up all the other things, I can become better at interpreting the things that are going on, almost like a reservoir. The reservoir is what it is, it does what it does; I'm going to get better at interpreting it, and therefore I can reap benefits without screwing up what all the other observers have evolved to take advantage of. So there are aspects like this that, in general, emphasize the primacy of the observer. Atousa's and Josh's practical implementation of this puts a lot of teeth into a more general philosophical claim about everything being observer-relative. There's a point of view, and you have to specify things from a point of view. Now we see a really minimal model where that's already true. You don't need to be a cell or anything else; right at the beginning, you can already see this. I think this idea is incredibly powerful, this idea that you can squeeze more out of a system on the observer end. And the other thing is computational density. If the exact same physical process can be seen as multiple computations, then how much compute do you get in a cubic millimeter of space-time? Because if the observers can see multiple different things, it seems to me you could pack in a lot more than you would conventionally think, if you're willing to do more work on the observer end, of course. So that's what I like about it. I like the biological implications. I like the observer primacy and so on.
[05:47] Tom Froese: Sounds good. If it's okay with you, I'll highlight a few things that I'm in agreement with and that I also like. And then we can talk a little bit about some concerns I have about the philosophy mostly.
[06:04] Josh Bongard: I think there are many connections to your work on irruption theory and enactivism. I'd love to explore those connections.
[06:12] Tom Froese: So the second part of what I'm going to say is going to get us into a nice discussion, to see whether we can come to an agreement, or, if we don't agree, what the sticking points are, and can we clarify what they are and what they entail? We can have a nice discussion about that. Let me start with this multiple use of systems. I really like that idea. Even on the side of the technology, because we're emphasizing the biology a little bit, there's a sense in which technology's use is open-ended, especially when it comes to traditional technologies, maybe less so nowadays. Think how many uses wood has as a material. You can give someone something made out of a traditional material and they might adapt it in some way. Recently we were seeing some ruins on a trip in Spain. All through history, you see that people were repurposing what previous societies had built; the Romans recycled the Greeks and vice versa, so you find the head of a statue used as a brick. This happens all the time, but it has happened less recently with computational technology. It's become more difficult to say I'm going to use the screen that's here on my desk to do something else, to become a brick in the house. These devices have a very specific functional design and are even purposely sealed off from this kind of multiple usage. That's a marketing strategy to some extent. Going against that and saying no, what we really want is systems that allow this multiple use, is a little like the artificial life idea of living technology. So it's a good direction to be conscious that we're closing off spaces rather than opening them. We could consciously try to design systems such that they allow for multiple uses. Totally on the same page there. Also, about the importance of observers: the question is, what is an observer?
[08:59] Tom Froese: I think we can try to get a little more deeply into that. The fact that observers matter and that their perspective matters, I'm fully on board with. I get the sense that one of the big novelties of what you're saying has to do with this multi-scalar hierarchy. A lot of people are saying that reductionism doesn't work because of complexity, especially in biological systems and technology, but then no further implications follow from that. If you're taking non-reductionism seriously, things should change in practice and in principle, and I see your framework as one attempt to work out what it means to take non-reductionism seriously. And what does it mean, when you have different scales, for how the scales should relate to each other? For example, how does a multicellular organism relate to its cells, or the cells inside the body to the whole organism? I feel that's a natural fit with the irruption theory framework, where the idea is to take non-reductionism seriously and then ask questions about what it means for the communication between the different semi-independent regions of phenomena that we can put into contact with each other. We're in the same trend; there's a broad stream of "let's take non-reduction seriously," and that implies ambiguity, multiplicity, diversity, maybe even a diversifying of ontologies. We're in the same frame of reference. Now let me highlight one thing where we can go more deeply, so I can get a sense of your commitments. From my current point of view, it's a little too much on the constructivist side. I was on that side for a long time; especially if you look at the early work of Maturana and Varela, it's radically constructivist about observers' points of view. But if we do that too much, we lose grip on the science and the materiality, and there's a slippery slope to saying the work is all done by the observer. Then we treat the observer as just something relative, like an intentional stance, just a convenient adoption of a frame of reference to talk about observers, but we don't commit to the extra step of saying that these observers have some autonomous being or efficacy that can make a difference. I see that in your paper, where sometimes it sounds like you want to say that the polycomputing itself makes a difference in the system. Other times it sounds like you're saying the system itself is not really polycomputing; it's just that external observers can see it as polycomputing, and that's where all the work is happening. So one discussion we could have to get the ball rolling is: how do you feel about this? Where is this polycomputing happening? Is it in the system or in the perspective of the observer?
[11:48] Michael Levin: Great question. Josh, do you want to go first?
[11:52] Josh Bongard: You go ahead, Mike.
[11:54] Michael Levin: Maybe what you're pointing out is that we need to write something specific on this. I'm not sure that Josh will agree with this part, but we'll find out. First of all, polycomputing is as real as it gets. It's not an "as if" kind of thing; that distinction is a little unhelpful. In other words, I come from an idealist position where what there is in the world are perspectives. The fact that an observer sees something doesn't mean it's "as if" and not really real. That's as real as it gets. That is absolutely real. What we're seeing is that if there is an observer, and this is where it comes to science and not philosophy, if you can meaningfully find, study, manipulate, and take advantage of an observer that sees something, then that's real. By seeing something and by making use of it, you are making it real, and that's what it means to be real. This addresses the question you asked earlier: can we dig into what an observer is? To me, an observer begins with a closed-loop process. The point of an observer is that you care about what you're observing. Photographic film is an open-loop system. The light hits it, something happens, and that's it. It makes no difference to the film what it actually observes. But any system that takes in information and then processes it in some way that makes a difference to it will do different things. It has set points that it tries to maintain. If it has a closed-loop homeostatic mechanism, it has preferences, valence, and so on. Now you have an observer. Having observers is what reifies all of this. When we say it's an observer seeing something, that doesn't mean it's not really real or that it's just a convenient way of speaking. If there is a useful observer that you can find in the system that's taking advantage of it, that's as real as it gets. That is absolutely real and functional and causal and all of those good things we want to study in science.
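(A minimal sketch of the open-loop versus closed-loop contrast Mike draws here, in a toy framing of my own rather than anything from the paper: the "film" records its input but the input makes no difference to what it does next, while the "homeostat" compares what it observes against a set point and acts on the error, so what it observes matters to it.)

```python
import random

class Film:
    """Open loop: records stimuli, but they change nothing downstream."""
    def __init__(self):
        self.exposures = []
    def step(self, stimulus):
        self.exposures.append(stimulus)   # a record is kept, nothing is regulated
        return 0.0                        # same (non-)action regardless of input

class Homeostat:
    """Closed loop: observations are compared to a preference and drive action."""
    def __init__(self, set_point=25.0, gain=0.5):
        self.set_point = set_point
        self.gain = gain
        self.state = 15.0
    def step(self, disturbance):
        observed = self.state + disturbance     # what it measures
        error = self.set_point - observed       # observation meets a set point
        action = self.gain * error              # and produces a corrective action
        self.state += action
        return action

film, stat = Film(), Homeostat()
for _ in range(20):
    disturbance = random.uniform(-3, 3)
    film.step(disturbance)
    stat.step(disturbance)
print("film: just a record of", len(film.exposures), "exposures, no regulation")
print("homeostat: state settled near its set point:", round(stat.state, 2))
```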
[14:30] Tom Froese: Josh, do you want to address that, or should I?
[14:33] Josh Bongard: I would agree with you, Tom, that you can see this through Western philosophy. The observer is a good scapegoat; it's often an easy out for these problematic questions about the nature of mind and physicality. In the very word observer in English, the "ob" means toward. There are some deep assumptions in the very use of that word and in the way we think about an observer: that it makes an observation, that we talk about an observer "seeing" because we're humans and tend to rely on our visual sense, and that it implies observation from a distance. There's some separation between the observer and whatever it is that the observer is observing. One of the things that polycomputing, your work on irruption theory and enactivism, and a lot of stuff that's happening in robotics are leading me to think is that these things we treat as separate are possibly not as separate as we think. What would it mean for the observer and the observed to be conflated? Some of the work that you've done on value and irruption theory is helping with that. I agree with you: it is very easy to put the interesting stuff onto the observer, assuming that the observer is separate in space, in time, in physical modality. But is that really true?
[15:53] Tom Froese: So I think I hear what both of you are saying: for you, the notion of the observer is actually quite a strong one; it's not the kind of constructivist observer where someone sits outside the picture doing the constructing. I agree with that. I would say irruption theory, to some extent, is trying to take observers seriously in the most serious way that we can take them, and then to work out the implications of what that means in terms of efficacy, which is also a word that you used, Mike. I'm all about trying to work out how we quantify the efficacy of agents and observers. So maybe it's just something that comes up a little in the paper, but "all the work is done on the observer's end" is one of the things that you say there. Maybe that gives the suggestion that the system being read in multiple ways, like the ambiguous figure and so on, is separate: there's this figure that has colored dots in it, and then, depending on the observer, you see either a duck or a rabbit. I'm just wondering whether that does justice to the kind of framework that you're proposing. It feels like you're after something stronger than that.
[17:02] Michael Levin: To piggyback on what Josh was saying, two things. One is that it's really important to say that observers don't just observe stuff around them. One of the most important things they do is observe themselves, and that brings the observer and the observed together. You don't need someone else to observe you for you to be real; you can observe yourself. That "strange loop" thing that Hofstadter worked on so well locks it in place, so there may be other observers observing you too, but you don't actually need them if you can observe yourself. This is also very important in biology: you're constantly making models of yourself as well as of the outside world, trying to figure out what am I, where are my borders, what am I observing, what do I get to observe, what do I cause, what is caused outside of me, and so on. It's not about standing back and seeing something at a distance. I'm not sure what comes first: the observation of the outside world, and then you say, "wait a minute, I can also observe myself," or conversely, first you're taking measurements of yourself, and then later you realize that there's a world out there. But it's critical to close that loop and to be the observer of yourself. I'm not talking about humans; I'm talking all the way back to before life. The other point is this: the scientific program is not to take all the difficult stuff and say, "it's in the observer somewhere." What it's doing is shifting research attention onto the observer. So now you've got all the hard work to do: okay, what are the processes? Is it active inference? What is the observer actually doing that makes all this possible? How does the self-referential differential loop work? This is never about putting all the difficult questions off into philosophy land. All it is is shifting the effort and saying: there's the system itself, but don't forget to study the observer. The strongest version of this view, and I'm not sure I go that far, is that as we think about what proliferates differentially through the universe: is it genes? Is it information? What is the unit? One could think that maybe what you're really looking at is different perspectives, different possible observers. This gets to Wolfram's ruliad and so on: the space of possible observers and how they interact and how they change. That's what I'm interested in in biology. You start as a single cell observing things in physiological and metabolic space, and before you know it, you're this other thing that is now observing states in anatomical morphospace, which didn't exist for you before you joined. So the growth of these observers, as their cognitive light cone increases, as it projects into new spaces, that's where a lot of the hard work now goes: how do they change and where do they come from?
[20:20] Tom Froese: I hear that. I'm still hesitant to go further back than the origin of life, but I guess you're familiar with Elise Mullen's work on this, where the universe basically consists of a network of observer perspectives, and then everything is built up from there. One warning, to give context for why someone might be concerned about framing polycomputing and the observer in this way: I think I touched on this briefly the last time we talked, Mike, but there is a history in the philosophy of cognitive science where this kind of polycomputing, and the fact that it depends on an observer's mapping of a subset of possible variables to a particular function, was used as an argument against intrinsic computation, right? In the sense that, depending on my interests, if I know how to map things and I have a large enough system of particles just randomly bouncing around, I can take any possible subset and get any possible function out of that system. All the work is done on the side of the observer: I need to create the mapping, I need to track the particles, I need to interpret them. That means that all of the computation done by the system is completely observer dependent, and it's hard to argue that there's something intrinsic going on in the system being observed. I know that you're not trying to say that, but I'm saying there is a debate about this. If you want to have a stronger notion, it's good to be aware that this is where you don't want to go.
[22:00] Michael Levin: Let me poke around in that a little and understand what the risk is. If we add to everything you just said the fact that the system itself is interpreting itself, so it is also an observer of itself and it has its own picture of what it's doing, do we need to say that there is some kind of privileged... you could say that the system's own view is in some sense intrinsic. That's fine. Can you unpack a little bit: what do we gain or lose by trying to say that there is something intrinsic, given that I've stipulated that the system itself is observing, that it's not just about external observers?
[22:44] Tom Froese: I'm not an expert in that area, but the idea of intrinsic computation is that they want to have a grounding for something: representational content. Their computation is about something. If you don't have any way of arguing why the computation is an intrinsic property of the system, then you can't get off the ground in saying why observers care about anything at all. That's where they're going. They want an intrinsic notion of computation in order to solve the symbol grounding problem.
[23:15] Michael Levin: You can correct me if I'm wrong, but I think that's only an issue if you insist that there's one real story underneath. You can do symbol grounding if you're willing to say whose symbols they are. The system itself will have its own symbol grounding; some other observer observing it will have a different symbol grounding. I can see it's a problem if you insist that there has to be one right answer to this. I'm not trying to throw anything away; I absolutely think there's representation and symbol grounding. I just think it's all relative: first you specify an observer, and then you do the good science of saying how that observer grounds its symbols. I do think it's an interesting question whether the system's own grounding is in some way privileged over others. I'm not even sure, because I could imagine that if you're a relatively primitive system, an external observer who's way smarter may actually have, quote-unquote, a better, and I'm not sure what that is yet, but a quote-unquote better understanding of what you're actually doing than you yourself do. Your own self-model may not be as rich or as useful as that of some other, smarter system that's looking at you and saying, "You're actually doing all this cool stuff. You don't even know it." I'm not even sure that we need to say that the system's own perspective is somehow privileged. I don't want to give up grounding. I just think it's relative.
[24:41] Tom Froese: It goes back to Josh's comment about value. You mentioned caring about something. Let's say there's an organism and one observer says, "I can see it caring about this thing, A." Another observer says, "No, I can see it caring about this other thing, B." The question is, is there a fact of the matter whether it cares about A or B, or is there no fact of the matter and it depends on your point of view? The question from the irruption theory point of view is, does any of this caring, or the values that they're striving for, make quantifiable differences in the system? I would argue that we should find some trace of the value making a difference in the system. The nice thing about that is that you need to care about efficacy. If it's only external, the system itself just is what it is, and anybody can read into it what they want; then we don't have the efficacy part in the system. Things get more complicated when it's self-referential and you're observing yourself. For now, we're talking about observing another system.
[25:54] Michael Levin: I like it. I want to 100% stick with efficacy. If your perspective on what it cares about does not give you added efficacy, then you're just wrong. My view isn't that every hypothesis is as good as any other hypothesis. If you as an observer have made a hypothesis about what the system cares about, and you haven't squeezed efficacy out of it, then you're wrong. I don't want to make it binary: to the extent that your perspective has given you extra efficacy in the world, that's the extent to which you are right. As for the system itself, we could work out some way in which the system's own view is in fact privileged because it gives it extra efficacy over itself. I'm thinking that we should try that. Overall, I would say it absolutely has to have efficacy.
[26:48] Tom Froese: And to the extent that... One example that comes from the book Evolving Enactivism by Hutto and Myin is that they look at place cells and grid cells and things like that, which are taken by neuroscientists as classic examples of parts of our body with representational content, which basically means that if this neuron fires, it means you're in this place in the maze. From an external point of view, it can help us if we know this mapping, so we have efficacy in predicting what the mouse is going to do. But if you zoom into this particular neuron and describe its activity, we might end up with the usual spiking and firing model, in which we look at the flows of chemistry across the membrane and the building up of potential. Then we can ask the question: given that I know that its being active means there is content here, a particular location in the maze, does that change anything about what I can say about the activity of this neuron? Or is it completely, exhaustively described by the biochemistry and the electricity that's happening here? What would your take on this be?
[28:04] Michael Levin: I would say that in that case, if the external observer is better able to do that, it almost sounds to me like psychoanalysis or something, where the patient says, well, I know why I do it, this is why I do it, and the psychoanalyst says, that's not why you do it; here's why you really do it. And then you get to find out, because if you take that perspective and say, oh, wow, that has really given me a lot more efficacy in life, then there's something to this. I just want to take really seriously this idea that you measure how good your perspective is by how much efficacy you can exert. The only thing I'm not sure about is whether, because it's over yourself and even beyond the boundary of the self, it is to some extent observer dependent. Maybe it's efficacy over your environment. I don't know. But I bet, and I like that example with the maze, somebody like Giovanni Pezzulo or Carla could probably quantify this and just write it up.
[29:14] Tom Froese: Here's my thinking. This is a good example with which to also introduce irruptions. Suppose you take the biochemist's perspective and take quantitative measurements of everything around the neuron, so we get the most precise recording that we could possibly get. In all of that description, we don't say anything about the semantic relationship between the cell and its environment. To the extent that we want to argue that the semantic content of being a place cell, of corresponding to a particular location and orientation, makes a difference in the brain, and yet that kind of aboutness, that semantic content, is not contained in the perspective given to us by biochemistry, that means there must be things happening that escape our ways of describing it in those terms. In that sense there is a limit to the efficacy of the biochemical description: it is not enough to say everything, and there will be a noise term left over, which will correspond to the efficacy that we would then have to attribute to something else, like the fact that it's about something rather than just pure biochemistry. That would be irruption theory applied to a neuron.
[30:30] Josh Bongard: This is also all predicated on the assumption that the rat is in one place at one time. If we make the Xenobot equivalent out of rats, where you've got some semi-independent, millimeter-sized bits of rat material, rat organoids, swimming around, what does it mean to be in this place? I think it's comfortable for us to look at wild-type organisms and privilege the organism level and say the rat, where we're choosing to look at the whole rat rather than a part of the rat, is in one place at one time. But that's not necessarily always true. What about the mother rat that's running around in a maze with her children and can see some of her offspring at different places in the maze? Presumably she cares about where they are in the maze and she's thinking about them in different parts of the maze. It's very easy to problematize the observer and the observed. A lot of this is the Cartesian inheritance of an omniscient and omnipotent observer: I'm here, I'm in this place, I'm observing, I'm representing. But some of the work I've done with Mike, and other things that are happening in robotics, suggest that's not necessarily true. The observer is not omniscient, not omnipotent. It's not separate and above and privileged. It is literally within the observations themselves.
[32:00] Tom Froese: You're totally right, Josh. I should say that this example is a very classic representation example; you have to be careful. Thank you for reacting. The correspondence here is established by an external observer who is a well-informed scientist who has access to the environment and everything that's happening in the brain. We should be careful not to attribute all of the scientist's knowledge to the neuron. Assuming that the neuron is about something is already enough to get the example working.
[32:32] Michael Levin: Which I have no problem with. I think cells do have representations, but as Josh said, they can be in weird spaces that are hard for us to recognize. Here's an example that Chris Fields put me onto a while ago. You have a bacterium, and the bacterium wants to maximize nutrition. It's moving up a sugar gradient; it's got effectors in 3D space. One other thing it can do at any given point is turn on some other gene, an enzyme that metabolizes a completely different sugar, and improve its nutrition that way. Now, we as scientists call those different spaces: you've got transcriptional space, the space of your gene expression, and you've got three-dimensional space. It's solving problems in both, and also in physiological space and metabolic space, which are hard for us to see. It's doing all these things. From the perspective of the inner model of the bacterium: does it split those spaces the way that we do or not? To it, turning on a gene and twiddling the propeller so that it moves in a particular direction, are those actually different actions? I'm not sure at all, because the cell has way more dimensions in transcriptional space than it has in physical space. You might imagine that most of its processing time is occupied moving around in transcriptional space. Does that make the spaces we see less real? Or conversely, is our perspective somehow better than what the bacterium sees? I don't know, but I definitely do think that single cells represent, and when they join into brains or embryos, never mind the neurons, just cells in the embryo, we need to understand the scaling of these representations. When you're a single cell and you're really good at thinking in transcriptional space and navigating physiological space, how do you pivot those competencies to navigate anatomical space when you're faced with those kinds of puzzles?
[34:36] Tom Froese: We can have a whole other chat about representations and their pros and cons, but we can at least agree that biological systems are directed towards goals and they do regulation, and there's something happening here that needs an account. Coming quickly back to Josh's point, this also says a little about all these extra dimensions just mentioned, about the mother and the pups moving around together. From the point of view of the story we were telling, if you care about something, or part of your body cares about something, then that will basically leave a remainder that can't be reduced to the efficacy that you get from a biochemical account, for example. And this is a kind of proportional thing. If there are more things that you care about, including not just yourself but your pups, there will be correspondingly more complex, unpredictable things happening if you adopt a purely reductionist lens. That would be the prediction. It's still preliminary, but we submitted a paper earlier today where we show that in EEG hyperscanning it's possible that something like this is happening, in the sense that when the co-regulation between two people increases, the inter-brain synchrony goes down. That's unexpected. Most people think it should always go up, but when I'm regulating actions not just with respect to my own interests but also by taking into account what the other person is doing and what their interests might be, that increases significantly the complexity of the things that I need to care about. That then means there's a greater remainder of things making a difference that are not just biochemical, and therefore the noise term gets bigger, which means less opportunity for inter-brain synchrony. I think the same is true for things happening within the organism. If the cell has to coordinate all of these different aspects, then the more things it has to care about, the noisier it will be. Noisy doesn't mean that it's actually just noise. If you adopt the quantitative lens of the biochemist and ask what you can measure here in terms of voltage or chemical concentrations, what you're not measuring is whether the system cares about what's written in its genes, or what's happening in this other system, or what's happening at the whole-organism level. To the extent that we're realists about those other processes making a difference, we have to accept that our limited perspective will always leave a remainder of efficacy coming from somewhere else, which will not be intelligible from within the narrow point of view.
[37:18] Michael Levin: I'm 100% on board with that. And I would even push it in this direction: if you zoom in to the biochemical, molecular perspective, it looks like a remainder. It looks like, as you said, noise that you weren't expecting. But if you push out in terms of scale, and time especially, then instead of saying we were pretty good at predicting all these things but there's this noise term, I think the noise term overwhelms the rest of it. As you get further in time, it becomes the majority of the thing. Here's a dumb example. Let's say there's a chess game going on, and you take a molecular reductionist view of it, and you specify where all the atoms went during this chess game. In the local view, you're not exactly wrong. It is what happened. But where you see the impoverishment of that view is when it's time to play the next chess game. You've learned absolutely nothing about what to do next. On the local scale, that's where all the atoms went, and you have a perfectly coherent story about what happened. But now comes the future, and it's time to ask: what benefit was this view? It was of no benefit, actually. If you take this molecular view, you have a short-term gain. Looking backwards, this happens all the time: we show some crazy biological effect, and then after the fact, somebody comes up with a molecular explanation that says this is consistent. It's chemistry doing what chemistry does. It's not going to be fairies; it's going to be chemistry. But the problem with doing it that way is that it didn't help you get to the next thing. That's what that looks like in science. In the life of the organism, as Kierkegaard said, you live your life forward. That's the problem: when you take these very molecular views, you're okay immediately. It's a fine story right now, but it's not a useful story later on. I'm 100% in agreement with you about the molecular story not being the whole story. The influence of the other stuff grows and grows. The more you're oriented as an active agent towards the future, the more that's the most important stuff: to know what you're going to do, what anybody else is going to do. And the zoomed-in molecular stuff becomes the smaller part.
[39:55] Tom Froese: Okay, so here I'm going to stick my neck out. This is being recorded, so we have to be careful. But I would say that we should be cautious about giving too much credit to physics being able to explain everything, including chess games. There's always a tendency to try to generalize what has worked somewhere to everywhere, and in this case, we assume that what we found out in our large particle colliders applies everywhere. To a first approximation, that's the best bet; that makes sense. But we should keep an open mind that life is so weird, really so weird compared to non-life, that things might be a little more noisy when it comes to organic matter than what we're used to in particle accelerators and elsewhere. Could it be the case that living behavior is another kind of unexplained acceleration, like the ones we find at other scales of the universe, but now at the mesoscale? Just putting it out there; not super controversial. Josh, you were going to say something.
[40:59] Josh Bongard: I just want to, with respect, push back a little against the two of you on this noise term and efficacy. So we choose the biochemical level, we choose the mechanical level, the thermal level, the optical level. We're trying to make predictions into the future from this material, and we can't; there's a noise term, and we give up. I would say the AI comes in and says, wait a second, you human observers, give me this noise term. You can't do it; you're not a good enough observer. Let me have a go at the noise term. I want to share a quick anecdote. When I was a postdoc working with Hod Lipson at Cornell, we were developing some AI that could take raw data and generate equations. We were working with some geneticists who were doing some gene regulatory network stuff. The AI kept generating equations that made sense to them. They said, this is amazing, this is exactly what our field derived decades ago about genetic behavior and all the rest. But then there was this noise term that they couldn't explain. They said, no, that doesn't make sense. Hod would keep putting me back on the computer to find the bug in the AI, to figure out why it kept producing terms that didn't make sense. I kept coming back to Hod and the geneticists saying, I don't know, it just keeps hitting on this noise term and these weird, very nonlinear terms. After a while, the geneticists said, wait a second, we've been looking at this noise term from your AI; we hadn't thought about it, but it makes sense. This, to me, was one of the great success stories: the AI stumbled across something that was very non-intuitive but actually was efficacious, able to make predictions where the humans couldn't. So I want us to be careful. If we say, oh, there's a noise term, maybe we should look for things other than biochemical or purely physical mechanisms at work. We as humans are not very good observers. We're not very good at making predictions from physical phenomena, but we are no longer the only ones that can try to make long-term predictions from physical phenomena. So I think we should let AI have a go at things before we move on to non-physical explanations.
[43:16] Tom Froese: Noise is efficacious. I completely agree. The fact that it's there makes a difference to the system. If it wasn't there, it would operate in a different way. So totally on board with that.
[43:34] Michael Levin: Josh, I take your point. Is it possible that the reason the AI was able to do a better job of understanding this noise term is precisely because it did what we find hard to do, which is to find the global, large-scale pattern that is not apparent from the details? If we got better at visualizing what it does, and there's a whole field of trying to understand what the heck it actually sees when it does these things, maybe we'd find that's exactly what it's doing: latching onto the stuff that is not apparent from the details, this global gestalt.
[44:33] Josh Bongard: It could very well be the case. We have to resist the temptation to say it's not possible. It might not be possible for us, but for other observers and other actors, it may be possible. Predicting long-term human behavior from purely molecular mechanics is very difficult for us to do, but for someone or something else, perhaps not.
[45:01] Michael Levin: Total agreement. My guess, and this is an empirical prediction, is that to the extent that any system can do that, it is actually getting good at what science is supposed to do, which is to go beyond the low-level description.
[45:20] Tom Froese: I was going to say the same thing. It already needs to be multi-scalar and have access to all the different levels that are interacting. That would be my feeling too. It's like what psychology does. Psychology asks, "What are you interested in?" It doesn't look into your brain.
[45:36] Josh Bongard: I would add a question mark. I would turn that into a hypothesis. Is that true? Does it need to go beyond, or can it do it directly from very low-level, local-in-space-and-time building blocks and jump to higher-level, long-term predictions?
[45:55] Tom Froese: I would use your own paper to argue against you, Josh. Once you accept non-reductionism, in the sense that you have multiple realizability, for example, then the only way different levels interface is through indirect deformations of their state spaces, and there could be many different ways in which a state space was deformed, ways that are not accessible at that level. What exactly is happening? So I'm the cell producing this stuff, and suddenly all my gradients are changing, and I have no idea what's going on. To then say, now, from this, work out what the multicellular organism is doing, is it playing chess or playing football, I think is going to be very hard for principled reasons, which are exactly the ones of polycomputing that you're outlining in your paper. That's taking non-reduction seriously. I think it means that there are, in principle, not just in practice, limitations to staying at one scale only and then deriving the rest. It has to do with the fact that these different layers or levels are autonomous to some extent. It's about closure, as Mike said. Once you accept that some things are closed on themselves, that means you can't just treat them as another component at the same level, and so that kind of interface becomes complicated. This brings me to one point in your paper where you say this deformation forces the lower level to change its behavior. I'm not sure whether that terminology is precise enough. I would say that there's always a way of interpreting things at the lower level that doesn't work in exactly the way the higher level wanted. And this indirection means, in terms of predictability, for example, that there will be, in principle, limits to what can be predicted.
[47:40] Michael Levin: Josh, this leads me to what you said about turning it into a prediction and seeing if it's true. The flip side of that is, if this is true, then the implication is that to make better AIs that are better able to do this, we should, in fact, help them look beyond the details. Now, how would you do that? Here's one idea. One major distinction between the one example we have, which is living things that do this, and the current AIs is that living things are always resource constrained. The AIs, for now, have all the energy they want. They have all the time they want. Life is under pressure from the word go, under incredible pressure to coarse-grain. If you're going to be a Laplacian demon, you'll be dead in no time. You can't track microstates; you have to coarse-grain. So here's my empirical prediction: if we make systems that are designed from the beginning as mortal computations, in the sense that they are at risk and limited in energy, we will force them to get better at taking the kind of global view that abstracts from the details, and they will get better at understanding what the noise is about. That's my prediction.
[49:10] Tom Froese: Given that we now have this AI as assistive technology for analyzing data, it will be much easier for us to work with noise and unobservables. This is another general point: physics, for example, is so used to working with unobservables. It's daily business. But when it comes to cognitive science, we still seem to insist that everything that is efficacious must also be directly observable, whether it's your own mental state through introspection or something that I can measure under the microscope or with brain imaging. If I can't quantify it or observe it, then that's already magic and that's no longer science. We need to be a little careful because there could be a lot of things going on that are not directly observable, but they are efficacious. We need to think about how they would appear to us. They will appear to us through the observables, but they will not make sense in terms of those observables because they're coming from other sources of activity. The natural sciences in general are a good model for how to work with those kinds of systems. We should try it out in terms of explaining behavior. That has been very hard, because if you already have 80 billion neurons, even the observables are too much for us; now try to add unobservables to that — it's just too complicated. As we're scaling up these systems, they don't care whether it's observable or not. What they're interested in is the patterns of efficacy of the activity. It's not surprising that the AI came up with "here's this noise term." That's pretty cool. We'll see more of that in the future.
[51:01] Michael Levin: My crazy view on all this is that this unobservable efficacy that comes into the system shows up very early on. I don't think you need cells or neurons; I think it is extremely prevalent in the universe. This is why, in that paper that's going to be out in Adaptive Behavior shortly, we took this sorting algorithm, freaking bubble sort, and if you look at it the right way, you go, whoa, it's already doing things that the algorithm did not specifically specify: recognizing its neighbors, delayed gratification, all this kind of stuff. I just think the noise thing is incredibly important, but I think it shows up really early in the grand scheme of things, and we should be open to it in all sorts of unconventional places. And maybe the AI can help us find it. Now, listening to you guys talk about this, I'm thinking that we looked at that sorting algorithm and we found a couple of things that it's doing, but we're not that smart. We should be using AI to look at it, and it might find seven other things that it's doing. Maybe we can deploy AI tools exactly for this, because they're going to have different cognitive biases than we do, and they may find other things going on that we're just not seeing.
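(As a purely hypothetical illustration of what changing the observer on an algorithm can look like, and not the analysis from the Adaptive Behavior paper Mike mentions, here is a small Python sketch: it runs ordinary bubble sort, records each element's position after every pass, and then asks an element-level question the algorithm never explicitly encodes, namely how many passes an element waits before it first moves toward its final sorted position.)

```python
import random

# A toy "change of observer" on bubble sort: track each element's trajectory
# and ask a question the algorithm itself never poses.

def bubble_sort_with_trajectories(values):
    arr = list(values)
    trajectories = {v: [arr.index(v)] for v in arr}   # position of each value after each pass
    changed = True
    while changed:
        changed = False
        for i in range(len(arr) - 1):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                changed = True
        for v in arr:
            trajectories[v].append(arr.index(v))
    return arr, trajectories

def passes_waited(traj, final_pos):
    # Passes an element spends not yet moving toward its final slot.
    if traj[0] == final_pos:
        return 0
    for t in range(1, len(traj)):
        if abs(traj[t] - final_pos) < abs(traj[t - 1] - final_pos):
            return t - 1
    return len(traj) - 1

values = random.sample(range(100), 10)      # distinct values keep index() unambiguous
final, trajs = bubble_sort_with_trajectories(values)
for v in values:
    waited = passes_waited(trajs[v], final.index(v))
    print(f"element {v:2d}: trajectory {trajs[v]}, waited {waited} pass(es)")
```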
[52:29] Tom Froese: It's an interesting shift of perspective: if you're looking for observables, the noise is just an annoyance. Maybe it means it ruined your experiment; maybe you can't publish it. But actually, now it seems like that's where all the interesting stuff is happening. So you turn this around and ask: what is the relative amount of predictability I have at each point in time in my system, and is that predictability changing over time? Those moments where suddenly I'm losing grip on the system and I can no longer predict what's happening are probably moments when you're going to have lots of biological regulation, values making a difference, maybe a new goal being enacted, whatever it might be. Those are the really crucial moments. And to some extent, we already know that. If you have gait transitions, there are moments of turbulence in between the stable patterns and so on. So dynamical systems theory has already struggled with this: how do I change from one stable behavior to another? That's been really hard to do formally. But if you do it with machine learning or statistically, things should get easier.
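(A minimal sketch of the kind of measure Tom is gesturing at, under assumptions of my own rather than anything from the discussion: a synthetic signal with two stable regimes and a noisy switch between them, and a sliding-window one-step AR(1) predictor whose normalized error gives a crude "predictability" score. Drops in the score flag candidate transition moments; the signal, window size, and predictor are illustrative choices only.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal: two stable regimes (different oscillations) with a noisy transition between them.
t = np.arange(3000)
signal = np.where(t < 1500, np.sin(0.2 * t), np.sin(0.05 * t))
signal[1400:1600] += rng.normal(0, 0.8, 200)     # turbulent switch between "gaits"
signal += rng.normal(0, 0.05, t.size)            # small measurement noise everywhere

def rolling_predictability(x, window=200):
    """1 minus the normalized one-step AR(1) prediction error in each sliding window."""
    scores = np.full(x.size, np.nan)
    for i in range(window, x.size):
        past, nxt = x[i - window:i - 1], x[i - window + 1:i]
        coef = np.dot(past, nxt) / np.dot(past, past)   # least-squares AR(1) coefficient
        err = np.mean((nxt - coef * past) ** 2)
        scores[i] = 1.0 - err / np.var(nxt)
    return scores

pred = rolling_predictability(signal)
print("mean predictability in the first stable regime:", np.nanmean(pred[300:1300]))
print("mean predictability around the transition:   ", np.nanmean(pred[1450:1650]))
```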
[53:44] Josh Bongard: I think we found a few of those things. You were saying dynamical systems theory has been helpful, and attractors tend to map onto different behaviors, with transients in between. But again, there's a lot of work that says that's just the tip of the iceberg; those are the intuitive things. I would point to some of Eduardo Izquierdo's work. He dropped artificial brains into two different robot bodies, and at the beginning it looked like the brain would fall into two separate attractors, which represented good gaits or good behaviors for those two different bodies. But Eduardo went deeper and showed that actually there were two different transients inside those two different bodies. It wasn't as simple as what we might have assumed or wanted, which is that different bodies force the same brain into different attractors and those are different behaviors. What I took away from Eduardo's paper is that it's much more complicated than we think. I've read that paper many, many times, and I can barely wrap my mind around it. To me, that's just one more indicator that the answer is much more complicated. We want AI, we need help in looking for these unobservables. We need AI to help us find the new agents: what are the subjects in there? What are the passive things that are having things happen to them? Tom, in your irruption paper, you talk about happenings and actions. So what are the passive things to which things are happening, and where are the agents, the things that are acting and causing things to happen? I think we've just scratched the surface in trying to pull those passive and active components out of living systems.
[55:30] Tom Froese: It's a nice example of polycomputing, actually: the fact that you can have the same brain in different bodies and it will operate differently. And it's nice that you're saying the body isn't forcing it. There is this ambiguity, so it becomes more like a transient, a back and forth between the brain and the rest of the body. I think that's just the signature of multi-scale hierarchy. As Mike said in the beginning, that's the only way you can get things working under precarious conditions: you need to have this flexibility. But from our limited point of view, it's hard to do science with that, unless we really embrace it and say, well, that is the core phenomenon we want to study, that is what it means to be alive and to be minded. Then it's not the attractor that's the interesting part; the attractor is almost like just being dead. The really interesting stuff is this transient, where we can't predict ahead of time what's going to happen next. I think dynamical systems theory has slowly lost some of its popularity compared to where it was, say, in the 90s or early 2000s, and that has to do with the fact that some of this complexity is much richer than what we can easily capture there. So if we are really talking about noise and stochasticity and things like that, then having an information-theoretic approach is sometimes more appropriate. A lot of us in the field of artificial life, including the two of you, have been working with wet ALife, where we start messing around with real systems and getting a feel for them. As Mike said, this goes all the way down. So there's already a lot of efficacy that we don't fully understand at all scales of the world.