Show Notes
This is a ~1 hour discussion with Lisa Maroski and Richard Watson about the role of language in shaping our thinking in the field of diverse intelligence and beyond.
CHAPTERS:
(00:00) Language and systems thinking
(04:25) Parts, wholes, and resonance
(07:34) Who versus what
(12:45) Recursion and new structures
(17:47) Searching for a word
(25:16) Holding multiple polarities
(29:41) Nested biological agency
(35:42) Patterns and Platonic minds
(45:41) Cross-level observers and time
(52:45) Knowing and owning beliefs
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Lisa Maroski: Let me just run down some of the list, and if any of them strike you as, yeah, I want to start there.
[00:10] Michael Levin: Sure. And also, if you wanted to take a few minutes and just talk about your work and tell us where you're coming from and what you've been working on, that's great, too.
[00:22] Lisa Maroski: OK, I'll give a-- since I'm the unknown quantity here, I'll do that. So my work has been very transdisciplinary, but focused particularly on language. When I was in college reading systems theory, von Bertalanffy, I kept thinking, if only he had developed a better language for this. And so I'm not setting out to do that for von Bertalanffy, but I started seeing things, seeing aspects of both language and our worldviews, our cognitive models, that I think keep people constrained. I saw both-and everywhere. I saw the interconnectedness of nature and nurture and mind and body. And it just seems silly that people would argue about, is it this or that, instead of looking for a way to language the dynamics, the interpenetration, the both-and-ness of them. And I was also influenced by topology, the Mobius strip and the Klein bottle, and saw them as interesting metaphors for doing just that, for maintaining both the distinction-- a Mobius strip seems to have two different sides, just like a piece of paper, but looked at globally, it only has one side. And same for the Klein bottle and inside and outside. And so I started looking for ways to bring this kind of multi-layered thinking into language itself. Is that enough?

Michael Levin: Yeah, that's great.

Lisa Maroski: So, yeah, where it overlaps with your work, this can be used for both local and global perspectives, multiple scales, and specifying... What I'm really interested in is seeing how, particularly in biology, when you are cognizant of the multiple scales of intelligence and goal setting and cognition that are going on, how do you talk about all of that together? But I'm not asking you for answers. I'm saying this is what I'm working on, and maybe you have some insights that could help me, and I have some insights that could help you. I don't know. Let's find out.
[04:25] Richard Watson: I would very much like to be able to talk about the relationship between parts and wholes and nested selves in a way that respected the autonomy of the parts and the autonomy of the whole and the relationship between them. I guess we're relatively comfortable with the idea that some concepts are relational, that we're not just talking about things, but we're talking about relationships between them. And we're relatively comfortable with talking about processes rather than structure or material. But when it has that nested relationship, it's usually treated in a very dull way of just parts and wholes. And I feel like there's something quite deep about what we need to get to in our understanding of mechanisms of cognition and what cognition is and how cognition works. That's intrinsically about the relationship between parts and wholes and the... But those aren't the right words, right? Between the selves within and the self without and the self that is the two together. That it's something to do with-- because I think about cognition as being about causes that operate on different time scales. So multiple instances of a process that is rapid versus one instance of a process that is slow and that-- memory is about bringing causes from the past into the here and now, which is just another way of saying there's multiple timescales involved in the causes that you're talking about. And that does, yeah, well, at the very least, I agree with you that the existing language is insufficient to be able to talk about those things, and I would like to be able to talk about them more easily. I often resort-- recently I've been resorting to metaphorical language or possibly advocating for the literal interpretation in terms of things like resonance.
In particular, resonance between a tune played at one frequency and the same tune played at a lower frequency that nonetheless are the same tune and resonate with each other and hold the shape of each other, right? Which has a sort of the insides reaching out to the outside and the outside reaching into the inside sort of feeling to it. So maybe if there were other words available to talk about such concepts, then I wouldn't have to call it a metaphor and I wouldn't have to say it was literal either. I could say it was that kind of thing that I'm talking about.
[07:34] Michael Levin: From my side, in terms of language, I've been thinking that one of the fundamental limiting aspects of our language, and I don't know if this is true in other languages, the ones that I know all have the same problem, but I don't know very many. So maybe other languages do this better. But we only have two options. We have a what and we have a who. And that's it. Everything is either a what or a who. And I mean, I don't even think that works when you have a dog. You're like, well, it may not be a who, but it's definitely not a what. And so this idea that we're going to just divide the world into two sharp categories, and that's all that our language allows. So I started thinking, and I don't know what the answer is, but my crazy version for this was to put a little exponent on the O. So, like, you could have, I'm a level 10 who, right? And maybe if I go to some meditation retreat or whatever, I'll be a level 11 or something, and maybe my dog is a level 7, and maybe my Xenobots are a level 3. I have no idea. But this notion that at least, even if we don't agree on what the exponent is or whatever, this idea that it just can't be two sharp categories for this sort of thing, I think. You learn that so early in your language, and it just freezes everything from then on, so that we have to keep having the same arguments again and again about the spectrum of cognition. And maybe that's why, right? Because it's just baked into the language. So I don't know. I'd be interested to know if there were other languages that have wider options, but it's, yeah, I think that's one of those things that has to be melted down and redone.
[09:26] Lisa Maroski: Other languages do divide the categories differently. Some indigenous languages include many more beings, types of beings, in the who category. They will give beavers and mountains and trees personhood, knowing that they're not human persons, they're beaver persons or mountain persons. So it's not just a language issue. It's a category structure issue.
[10:07] Richard Watson: Yeah.
[10:08] Lisa Maroski: Which relates back to culture then.
[10:11] Richard Watson: Yeah. I guess in English, we talk about the spirit of something. And in some contexts, we mean that in a quite who-like way. In another context, we mean that in a quite what-like way. So part of what we're talking about here is language in the sense of, hey, wouldn't it be useful if we had a word for this? And together with that, if we had a word for this, it might change the way that we think about things on the ontological structures that we impose over things. But also part of it is perhaps the thing that we want to talk about is a linguistic thing, that the thing that we want to talk about is how can one part refer to another, or how can one part establish identity or non-identity with another? And that we're talking about, for example, when we're talking about the sort of strange loops that you mentioned at the beginning, in Hofstadter's term, like the Möbius loop and the Klein bottle, that have the idea, the feeling of the inside reaching out to be the outside or that the inside and outside isn't clearly defined or that it's flipping back and forth or something like that. That's a thing that you can do when you can do language. But when you take everything literally non-declaratively, non-referentially, it's just a concept you can't have. It's all, do you know what I'm reaching for? He says, lack of language.
[12:08] Lisa Maroski: I think you're reaching for the distinction between metaphoric language and literal language, and in some circumstances, like science, we try to reach for literal more often than metaphoric, even though I know you both understand that science is full of metaphor at multiple levels, both in the level that we talk about things and also at the level of scientific models are a kind of metaphor as well.
[12:45] Richard Watson: So take Chomsky's notions of linguistic structures, productivity, compositionality, systematicity, all of which are involved in recursion. And it feels like there's, it's not just that, it's not just that we need words for those things as though, as observers looking at those things, we need words for it. But the thing that we want to talk about is intrinsically of that kind. It's intrinsically linguistic in nature, that the kind of concepts like center embedding, systematicity, compositionality, and things like those are like the kind of construct that we want to be able to talk about when we're saying these Mobioid, I just made that up, structures where the inside reaches out to the outside and vice versa. And that's not just because we need a word for that thing that's out there, but because the thing that we're talking about is a sort of abstraction, sort of something that can only exist when abstractions are possible. Maybe that's what I'm trying to say. Like you can only create a paradox by using words which label things in a particular way that creates paradoxes, right?
[14:18] Lisa Maroski: I'd also like to add another caveat to my interests. In looking at how, as you know, language and culture and the different aspects of language are so interconnected themselves, I'm not just looking for new words. I'm actually looking for new structures for language to be able to express this kind of mobile, mobioid, I'll use your new word, types of relationships and ways of expressing the complexity of certain types of, again, there's no word, experiences, processes that we're trying to have a way to discuss without having to reduce them to the old categories. So it's kind of a difficult project because it involves change at multiple levels simultaneously, which is often difficult to do. So yes, change at both the cognitive level, meaning how we think about and categorize the world and how we speak about it and write about it. All of those forms, I think, need simultaneous changing. Otherwise, the system, language is a system that likes to maintain some level of homeostasis or homeodynamics. The various already existing structures help to keep the whole intact when one part of it wants to go off and do something different. Does that make sense? In other words, while I'm all for neologisms, I don't think they're enough. I think language has to really embrace, or we as language users have to create or evolve our own language to be able to express the kind of things that you're doing at interesting multi-level systems dynamics.
[17:46] Richard Watson: Yeah.
Lisa Maroski: Yeah.
[17:47] Richard Watson: So I'm gonna have a go at describing the concept that I want a word for.
[17:52] Lisa Maroski: Okay.
[17:56] Richard Watson: So building on those words for the kind of properties that one might want from a systematic language, compositionality, productivity, and systematicity. And concepts like being able to refer to something rather than already being it, being a reference to something. From that, we build up to an idea of something being self-referential, that it refers to itself. And then I don't quite want something that refers to itself. I want something like whole referential parts and part referential wholes so that they're referring between levels. But I don't just want either one of those either. I want both of them at the same time. The whole is referring to the parts and the part is referring to the whole at the same time. But I don't quite want that either. What I really want is one where you can't really tell which is the whole and which is the parts because it keeps turning inside out. I'd like a word for that.
[19:08] Lisa Maroski: That's beautiful.
[19:14] Richard Watson: Maybe that is the word, just beauty.
[19:19] Lisa Maroski: Yeah, so you also have notions in there that are holographic and fractal. So one of the neologisms, new forms, that I did make up-- I'm not sure it fits everything, but I think it at least fits part of what you're looking for-- is a glyph that I invented that I call Mobi, which means distinct but not separate from. And so the distinct part is that linguistically, you can distinguish this bit of a whole, but ontologically, they're not separate. So it's a way of capturing something about a system that allows you to say, well, okay, this part of the system does this and this part of the system does that, without turning both of those into different whats, to use Mike's distinction earlier. It's a way of retaining the wholeness and the partness simultaneously. And so we are Mobi at many different levels. I am Mobi my microbiome. So I am distinct, but not separate from, my microbiome. My microbiome makes up me. I would not be me without my particular microbiome. But yet those are also whole organisms within themselves and collectivities within themselves, within me. And I can also say I am Mobi my place here in California, or I am Mobi the Earth, because I am not separate from the Earth. If I got separated from the Earth-- and then I just thought, oh, we just sent astronauts, maybe this isn't going to work. But while I'm on Earth, I am interdependent with it. I need it for my sustenance. It needs me as well. And so I think a term like that may be heading in the direction you're looking for. I don't think it fully captures what you're looking for, though.
[22:34] Richard Watson: So I often these days return to the notion of things being the same and different at the same time.
[22:42] Lisa Maroski: Yes.
Richard Watson: Which is not quite the same as being distinct, but not separate from, because that doesn't necessarily imply that there's a symmetry there, right? That there's a sameness there. There's an interdependent parts-ness and so a separateness and a non-separateness at the same time, but not necessarily a sameness and a distinction at the same time. So an example of a sameness, same and different at the same time, is an object and its reflection. Yes. So an object's reflection, if it was different, it wouldn't be its reflection, right? It has to be the same, but it also isn't the same because this is the object and that's its reflection. Unless, of course, I was inside the looking glass and then that would be the object and this would be the reflection, right? So there are two different things there, two different things that I can refer to and also they're the same, right? Or maybe they are different, but only in one respect, right? That there's a line of, there's a plane of symmetry. So the distances are all opposite in that one dimension. And what I would really like is to be better able to articulate that same and different at the same time, but with the nested whole. You know, the whole is different from the parts, but it's also a reflection of the parts and the parts are different from the whole, but they're a reflection of the whole and that they are, there's a sameness there and a difference there, but in that, in that scale relationship, that containment relationship, more particularly than an object in its reflection. And also still keep that in a, and I don't really know which one is the parts, and which one is the whole, and which one is the whole, and which one is the parts, right? I don't know if I can really articulate why I'm attached to that last bit, but I am. So I could, but I don't know if it would help.
[25:16] Lisa Maroski: So one of the ways that I tried to address that kind of wanting to hold both at the same time, whether it's sameness and difference or some other concept, is to put concepts like that in a structure like a yin yang symbol, just to present both of them simultaneously so that when you refer to one, the other is right there. That sameness can't be sameness without difference.
[26:15] Richard Watson: Well, in particular has that foreground, background ambiguity and the contained, containing ambiguity as well. That does do a lot of the work.
[26:33] Lisa Maroski: And you can combine multiple ones. For example, and this is where I think our culture really needs some help to be able to hold multiple polarities like that simultaneously so that we can think about, for example, I'm just going to use the example in the book because it's simplest. It's one that, in American culture, we talk a lot about freedom and that word gets bandied about, but freedom can't be freedom without some sort of responsibility behind it. And when we talk about freedom, it's not just every person's freedom. The individual's freedom is essentially constrained by and given by the collective freedom. So there's a freedom responsibility polarity. There's a self-other polarity. And then there's also a temporal one, like my freedom right now to do X versus, and considered along with, how is that gonna play out in the long term? So there's like a short-term, long-term. So how can we think about all of these multiple polarities simultaneously and be able to express them? I don't know. I mean, I'm sure that sort of thing comes up in biology as well, just to try to loop Mike back into the conversation. Because the body is doing, is balancing all kinds of different polarities, whether it's the sympathetic and parasympathetic nervous systems working simultaneously along with interactions with the environment, along with cognitive interactions, emotions and feelings, and all of those things in play simultaneously. I should probably come up with a question here.
[29:35] Richard Watson: Why is Mike scowling?
[29:41] Michael Levin: Oh no, I'm not scowling. Yeah, no, please, if you have a question, let's do it. I mean, you're right, of course. I think in any body there are numerous different agents with agendas and priors and different capabilities, and they're all hacking each other, and the higher levels are bending the lower levels, and the lower levels are constraining and then enabling things at higher levels too. I mean, this is a huge ecosystem for that kind of stuff. And I don't just mean bacteria versus cells. All of these things are nested and whatnot.
[30:16] Richard Watson: But also, you as a whole can feel the stress of your parts, and I think your parts can feel the stress of the whole, that they tune in with one another directly, that your identity and the identity of your parts is not just a containment relation, but that there's a sort of skipping levels. Like, it ****** my cells off, it ****** me off, and I was like, why? That doesn't have to equate, does it?
[30:53] Michael Levin: Yeah.
[31:00] Richard Watson: I can just know that they are without connecting with them in that way, without identifying with their affect.
[31:21] Lisa Maroski: Yeah.
[31:24] Michael Levin: And isn't there a language issue there too, in the sense that in order to have the kind of relation that Richard just mentioned, there has to be some impedance match? Like, at the very least, you need to share a concept of being ****** *** and whatever, so that you can be in vaguely similar states. And so then you wonder, what are the states that we don't share that we don't know about, right? And that's also a language issue. There could be all kinds of, and in fact, they're almost guaranteed to be all kinds of things that the cells and then the molecular networks inside of them and the bioelectric gradients and the tensile forces and everything else, they could have all sorts of other states that we can't really, you know.
[32:08] Richard Watson: They're feeling all mobilacking right now and we're struggling to tune in with that.
[32:18] Lisa Maroski: And they might have differing goals than what the whole organism has. If I had a bunch of candida, I might be craving sugar, while the whole organism, me, is trying to diet and not wanting to eat sugar. And so the other language issue in this scenario that I find interesting is the agency at multiple levels. The candida in my gut have their own agency. They're trying to live their life in their environment. Their environment just happens to be me. I'm trying to live my life in my environment. There's probably all sorts of other bifidobacteria and other creatures trying to live out their lives, needing different things, wanting different things. And how does all of this maintain balance and stay healthy, to continue the infinite game of life? There are so many infinite games going on, just in every single organism. Do you, are you familiar with Carse?
[34:05] Michael Levin: Sorry, with what? Say that again.
[34:06] Lisa Maroski: James Carse's Finite and Infinite Games, the distinction between them.
[34:12] Michael Levin: I don't know the name. I mean, I think I know about the distinction, but I don't know who James Carse is. Tell us.
[34:18] Lisa Maroski: He was a philosopher. I think he was at NYU, no longer with us. Wrote a wonderful little book called Finite and Infinite Games.
[34:26] Michael Levin: Interesting.
[34:28] Lisa Maroski: You know that some games are made to be played to win or lose. And some games are made to be played so that the game can continue to be played. And those have very different kinds of rules than the games that are made to be won or lost. And yet, also that there are finite games that keep the infinite game going. So the finite games within our body are that cells senesce and die, and to keep the infinite game going, we have other cells that come along and process them and take their parts and recycle the parts. So there's a living and dying game going on, both at the cellular level and at the more than cellular level, that keeps the infinite game of life itself going.
[35:42] Michael Levin: Yeah, I mean, I think it gets even more, like, the combinatorics get even weirder because I think it's not just the tangible things like cells and the bacteria and whatnot, but you can think about the patterns as well, right? And so this is something that we've been working on lately is kind of fuzzing out that distinction between thoughts and thinkers in a sense, between real beings and just patterns and excitable media, that sort of thing. So when you have the butterfly-caterpillar, the caterpillar-butterfly transition, there are multiple, so you can take the perspective of the caterpillar and face a kind of singularity and sort of think about what it means, whether you're going to exist or not. And you can take the perspective of the butterfly and ask, because they do inherit some of the memories of the caterpillar, in fact, remapping them to their new embodiment. And so the butterfly might ask, like, I have these, I have some weird, you know, feelings about certain stimuli. Why is that? I don't remember ever having encountered it. Like there's no specific encounter, but I have these, I've been saddled with these odd behavioral propensities for some reason that I don't think I own, but clearly I do, and so on. And so that's bad enough, but you can also take the perspective of the memory itself as a pattern within the cognitive medium of the caterpillar and the idea that you're not going to survive as a caterpillar memory. You can't because you're about things the butterfly doesn't care anything about. But if you are plastic to the extent that you can remap and generalize, and sort of now you can be about things. So whereas before it was the kind of motion that a soft-bodied creature can do in order to reach some leaves, in the butterfly you might be the kind of motions that a hard-bodied kind of creature like a butterfly can do in 3D, by the way, with an extra dimension to your thing. And it's no longer about leaves. Now it's about nectar. 
And the perception is different because your eyes are different and everything is different. But the associative conditioning that you received as a larva passes on, right? So you can persist. And so this notion that we've been exploring around that, like, spectrum, you know, we have fleeting patterns, so fleeting thoughts, these sort of like a wave sort of comes and goes. And then you have these sort of persistent thoughts, which are hard to get rid of. They do a little bit of work to keep themselves going, maybe a little niche construction in your brain, you know, as depressive and those kind of repetitive thoughts do. And then there's some other stuff. And then eventually you get to something that's maybe a personality fragment, from dissociative identity kind of scenarios where you're not a full human personality, but you're way more than a simple thought pattern because you can plan and you can have, you know, preferences and so on. And then there's a full human personality and then who knows what's on the other side of that. So, you know, whatever vocabulary we have for all these things has to take those kinds of things into account as well, potentially.
[38:59] Lisa Maroski: So I've heard you talk about platonic spaces a lot. So it seems like you're making distinctions between different types of, let's just use Plato's words, platonic forms within a platonic space.
[39:25] Michael Levin: So I am not trying to stick close to Plato's ideas to whatever extent we even know what they are. The reason, and this may need a total vocabulary refresh at some point, I went with Platonic because I wanted to anchor it first in mathematics and go from there. And when you say Platonic patterns to mathematicians, they know exactly what we mean, and some percentage of them agree with this idea that there are important facts, or more broadly, patterns that are not physical facts. These are things that you would not discover as a physicist. These are things that you can't just disband the math department and hope the physicists find these things. More importantly, you can't change these facts by tweaking the fundamental constants of physics. You're not going to change why quaternions don't obey the whatever it was, the property of multiplication and so on. You're not going to change that by changing the fine structure constant and things like that. So that's the idea: you start with the simple notion that, like it or not, there appear to be, at least temporarily, I think we have to say that there are at least two realms. I say it on purpose because people hate this notion that there's more than one realm. But I think it's important to say that if realms is to have any meaning at all, you have to be able to say that there are things in this realm that are quite different than the things that we're used to dealing with. And then from there, some other stuff, but basically then I want to drop the assumption, because that's all it is. I think it's not a result. It's an axiom that people add for some reason that I don't think we should add, that these patterns are only relevant for mathematics. That basically it's only the low-agency static forms that mathematicians study. 
My suspicion is that once you've accepted, and I don't see any way around it, once you've accepted that there are these kind of patterns that are important for physics and biology and so on, but their origin is not in the physical world as we study it, then you might ask, but could you have patterns that have various degrees of agency, including ones that we might recognize as kinds of minds? And so now you sort of shade smoothly into an old class of theories in the philosophy of mind, where minds are simply not of the physical world. They're something else. And then you have this interaction. And then, of course, you run into the interaction problem. But you already had an interaction problem between math and physics, is what I claim. So you're not, this is not new. This is, you know, Pythagoras already had all this. So that's kind of the idea. And we can even show some of the intermediate steps. So one of my favorites, do you know Patrick Grim's work at all? No. So he's a philosopher. I think he's at SUNY in New York. And he basically started out by saying that, well, you've got this liar. So this is interesting because it gets at the language and so on. So he says, you've got the liar paradox. And the reason it's a paradox is because you insist on one truth value and then it's a problem.
[42:30] Michael Levin: But if you treat it as a dynamical system, no problem. You've got an oscillator, true, false, true, false, true, false, right? You just have an oscillator. And once you do that, you can start making dynamical systems maps of English sentences that have various degrees of paradoxical self-reference, and you can have multiple ones. So if you have two sentences, and sentence A is, I am 80% as true as sentence B is false, and sentence B is, well, I'm only true if sentence A is less than 70%, whatever. You have these things, and you can plot them. And so he shows these beautiful fractal structures that these things have, and some of them settle down, and some of them don't settle down. But what's interesting is, so you have your static patterns like, e is a certain number, pi is more than three, and that's like a rock. It's not going anywhere. It's just how it is. It's the electron of that world. And then you have these little oscillators, kind of like the liar paradox, just buzzing up and down. But once you have those things, you can take the next step and you can make sets of sentences that act exactly like the gene regulatory network models that we studied: they can be trained. And so I have a student that's actually training sets of English sentences, because in the end, they're just the dynamical systems kinds of things. And some of them, if you give them stimuli, and what I mean by stimuli is a temporary bump in one of the values. So you have, let's say, 10 sentences. They're all sort of about each other. And you can give a little bump into one. And so that stimulates and some stuff happens. And then it settles down and you do it again and you do it again. And you just ask the question, as you keep doing it, is there habituation? Is there sensitization? Can you condition stimuli on each other and get a placebo-effect kind of thing, as associative conditioning? Turns out you can, and more.
And so then you can, they don't have to be closed off and only be about each other. You can, some of the sentences can refer to things in the outside world. So you can give them an embodiment by saying, okay, here are your sentences. And also there's a lamp or a clock or whatever. And one of those sentences might be, I'm only true if that thing is green, or I'm only true if the car is running. And so now they're about the outside world, but they have their own learning capacity, right? And it affects how they interact with the outside world. So you can build all of this that's grounded in these kind of language slash logic systems. So you can imagine a whole set of, and once you have, as far as I can see, once you have associative conditioning, possibly you could keep going. I don't know how far you can keep going. We haven't gone terribly far yet, but so you can imagine things like that.
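The dynamical-systems reading Michael describes can be sketched in a few lines of code. This is an illustrative reconstruction of the sentences quoted above, not Patrick Grim's actual models: the exact update rules, the 0.7 threshold, and the synchronous-update scheme are all assumptions of this sketch.

```python
def liar_step(v):
    # "This sentence is false": the sentence's next truth value is the
    # degree to which its current value is false.
    return 1.0 - v

def pair_step(a, b):
    # Two mutually referential sentences with truth values in [0, 1]:
    #   A: "I am 80% as true as sentence B is false"  ->  a' = 0.8 * (1 - b)
    #   B: "I am true only if sentence A is less than 0.7 true"
    return 0.8 * (1.0 - b), (1.0 if a < 0.7 else 0.0)

# The liar paradox becomes an oscillator: true, false, true, false, ...
v = 1.0
liar_traj = [v]
for _ in range(6):
    v = liar_step(v)
    liar_traj.append(v)

# The coupled pair falls onto a period-2 cycle after a short transient:
# (1, 1) -> (0, 0) -> (0.8, 1) -> (0, 0) -> (0.8, 1) -> ...
a, b = 1.0, 1.0
pair_traj = [(a, b)]
for _ in range(8):
    a, b = pair_step(a, b)
    pair_traj.append((a, b))

# A "stimulus" is a temporary bump to one value; here the system returns
# to its cycle in a single step.
a, b = 0.5, 0.0          # bump sentence A while the system sits at (0, 0)
bumped = pair_step(a, b)  # back on the (0.8, 1.0) / (0.0, 0.0) cycle
```

With more sentences and graded update rules, plotting which initial truth values settle down and which keep oscillating is what yields the fractal maps Michael mentions; external referents ("I'm only true if the lamp is green") can be added as extra inputs to the update functions.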
[45:41] Lisa Maroski: So it seems that, at least for the liar paradox, and I don't know about the conditioning experiments that you're working on, it relies on interaction between two different levels. You have the level of the sentence: "This sentence is false." But in order to judge the truth value of the sentence, you have to go to a higher level, which is the self-reflective judgment level. Let's take a different example. If you're looking at a written text and it says "This sentence is red," but it's written in black ink, you have to jump out to that higher level to judge whether it's red or black, which sounds similar to what you're doing with the lamp: this is true if the lamp is green.
[46:49] Michael Levin: Yeah, I mean, I think it's consistent with what we've been talking about here, which is numerous interacting observers at different levels. Because if you have multiple sentences, sentence B is just looking at sentence A, and that's okay. Different observers can, right? It's what Richard was saying: it crosses levels. So the observations and the sensors and the effectors can cross levels, and biology is full of that kind of stuff, where what you sense might be mediated by a chemical signal, but the point isn't that you're sensing some specific chemical; you're sensing some systemic, high-level state that the chemical mediates. So yeah, you can absolutely make those, and maybe you can even do crazy things and keep going further. Like, "I am only as true as this whole thing is consistent," or worse, "I am only as true as this whole thing is interesting." Or, "I'm true if and only if this thing doesn't settle to a stable point," right? Then maybe that's some kind of crazy Turing halting-problem situation.
[48:06] Richard Watson: Yeah, I was just writing those.
[48:11] Michael Levin: Yeah, we haven't tried that yet.
[48:12] Richard Watson: But this sentence is true only if those sentences are stable, and these sentences refer to each other in an unstable way. Only if that sentence is false, that kind of thing.
[48:22] Michael Levin: And that introduces another degree of freedom, which is time. Because if you want to know that they're stable, it's not enough to take a snapshot. You have to watch them for some period of time and say, to know if it's settled down, you have to have multiple time points to compare it with.
[48:43] Richard Watson: Like if it's in or out of the Mandelbrot set.
[48:47] Michael Levin: You have to have observations over some period of time. And now you're back to having different observers operating at different time scales and watching things at different time scales.
[48:59] Richard Watson: What you said about that stimulus being a bump, which was to temporarily modify the truth of one of the statements: did that hold the truth values of all the other statements fixed as they were while you did it?
[49:15] Michael Levin: So we don't touch them externally, but of course the minute you do that, it's going to propagate. So we have to do the whole thing as discrete time, unfortunately, right? So during the time point that we're bumping it, we don't touch the others, but then we have to recalculate all the others, and the ones that are connected to it will immediately update state. So they will react to it themselves.
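The protocol Michael just described, bumping one node for a single discrete time step and letting the connected sentences react on the next synchronous update, can be sketched like this (a hypothetical toy network with made-up coupling weights, not the actual experiment):

```python
N = 5
# Hypothetical coupling: each sentence partly asserts every other's truth.
weights = [[0.0 if i == j else 0.2 for j in range(N)] for i in range(N)]

def step(state, clamp=None):
    """One synchronous discrete-time update of all truth values.

    clamp=(idx, value) holds one node at `value` for this step only,
    without touching the others: the external "bump".
    """
    nxt = [0.7 * state[i] + 0.3 * sum(weights[i][j] * state[j] for j in range(N))
           for i in range(N)]
    if clamp is not None:
        idx, value = clamp
        nxt[idx] = value
    return nxt

start = [0.5] * N

# Control run: two steps, no stimulus.
control = step(step(start))

# Stimulated run: bump node 0 to fully true for exactly one step;
# on the next step, the nodes connected to it update themselves in response.
bumped = step(step(start, clamp=(0, 1.0)))

print(bumped[1] > control[1])  # True: the bump propagated to node 1
```

Repeating this bump-and-settle cycle and tracking how the downstream response changes across trials is how you would ask the habituation and sensitization questions; whether those effects appear depends on the network's structure, not on this two-step sketch.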
[49:45] Richard Watson: So changing one of the variables without changing any of the others is a bit like a jump in time, right? There's the state that you had before you bumped it and the state that you had after you bumped it. Imagine that those two states are states you could have reached by running it forward in time: that if you hadn't bumped it and just waited, you would have got to a state where everything was the same except for this bit, right? So that little bump is like nudging it in time by different amounts.
[50:22] Michael Levin: It is. And I'm sure from inside the system, it looks like magic because all of a sudden, this one node bumped up. And if you're the node that normally feeds into it, you say, what is this? I didn't do that. How did this thing get bumped up all of a sudden, right? Because we're acting on it from outside the system.
[50:45] Richard Watson: Or it looks like nothing at all. So it only looks like something if you have a time scale with which you can reflect upon what it was a moment ago before you bumped it. Otherwise, you say, hey, that thing just moved. And you say, what thing just moved? It's like, well, it used to be low and outside. No, it isn't. It's like it's already gone, right?
[51:03] Lisa Maroski: So you're bringing up another important thing that I think language needs to have a little more specificity about, which is context and perspective. When you're looking at things from multiple perspectives, to be able to specify, okay, from this perspective, it looks like there's a bump. From this other perspective, it does not look like there's a bump. How do we reconcile these two perspectives? Or 3 or 4 or 10 when you're talking about more complex systems. And I think, I'm not just thinking about language as the two-dimensional kinds of words that we write on a page. I'm thinking about expanding language to be more graphic, more fully two-dimensional, to be able to show relationships like that, not just say them one word after another like we're doing now. That's going to take a lot of work. That's going to take a lot of people coming together with a lot of different expertise. But I think what you're doing is helping to create a model system to start working on things like that.
[52:45] Richard Watson: I'm reminded of what Mark Solms said in our conversation earlier today about forgetting why you believe something, that it becomes, what was the word he was using, automatized, that it becomes automatic. You don't know why; you don't know what the evidence was or what the thinking process was that brought you to that conclusion. Now it's just automatic. It's not a belief I have; it's just who I am. The converse is a system that knows something about the process by which it arrives at its own truth, something like that, right? That's a weird kind of system already, right? How do you know? Because that already has that reaching-between-levels sort of feeling to it. It's not just the same as knowing something about your parts; I can look in a microscope at my own belly fat. Knowing something about the process that constructs my own knowing is a bit more strange-loopy than that. And the real reason why I want this inside-outside thing to connect, or to flip, is because I want a stopping clause for the recursion, right? It's like, well, if I did know the process by which I knew this, how would I know the process by which I came to know the process by which I knew this, right? That feels like a recursion that can't bottom out. So that bottom level has to somehow resonate all the way back up to the top, just because that's what it means to know something, because that's what it means to say that you know what the process is, right? So that it becomes not a material or substrate-dependent constraint that makes you think that, not a historical contingency or happenstance that makes you think that; it becomes a logical truth that you think that, because that's the only kind of thing you can think that is self-consistent.
So the Platonic constraints that come from mathematical boundary conditions you can't change, that just are truths, absolute truths, and the dirty, nitty-gritty historical contingencies, the substrate-dependency fads: they become the same, so that they're not really different from each other.
[55:35] Lisa Maroski: So there is a structure in language. It's not very prevalent in English, it's not required in English, but it exists in some languages; it's called evidentials. In the grammar, you have to specify how you know what it is you're saying: whether you know it firsthand, "I saw the bobcat"; through inference, "there was a bobcat here because I saw the tracks that it left"; or from someone else, "my neighbor told me he saw a bobcat around here yesterday, so be careful." Some languages require you to specify those sorts of things. And it sounds like it would be good to have a way to specify how we know the other kinds of things you were just talking about, from the nitty-gritty to "this is an abstract truth that is unchanging."
[56:56] Richard Watson: Earlier, Mike mentioned the idea that the butterfly doesn't know where it got the odor aversion from. It's like, I don't know why. Why do I like this? Or why do I not like that? I find myself thinking, well, whose preference does it think that is, then? You can't think that it's somebody else's, right? In order to act, you have to own it; it has to be my preference in order for me to act on it, right? And then that sort of feels like, well, am I doing that when you tell me an idea that you have? When you tell me an idea or a concept or a fact that you have, do I take that in in a way that it becomes something I know, or am I still holding it as "that's what Mike thinks"? When I grok it, something shifts, right? It becomes my own thought. Even though I also remember that you said it a moment ago, and a moment ago I said I didn't get it, and now I do, it still feels like my thought, right?
[58:16] Lisa Maroski: I've had experiences like that where I've had an idea that I thought was mine, and then I go back and reread a book that I read 30 years ago, and it's like, oh, that's how I got it.