
Conversation between Josh Bongard, Atoosa Parsa, Richard Watson, and me

Josh Bongard, Atoosa Parsa, Richard Watson and Michael Levin discuss agency without free will, oscillatory computation, polycomputing, and evolving notions of selfhood, including fluid, cyclic, and relational models of the self and their links to perception and sensory plasticity.

Watch Episode Here


Listen to Episode Here


Show Notes

This is a ~50 minute conversation between Josh Bongard (https://www.uvm.edu/cems/cs/profiles/josh_bongard), Atoosa Parsa (https://www.atoosaparsa.com/), Richard Watson (https://www.richardawatson.com/), and me on topics of agency, computation, polycomputing, oscillations, selfhood, and similar subjects.

CHAPTERS:

(00:00) Agency Without Free Will

(03:08) Data, Agency, Observers

(11:21) Oscillators And Observer Tuning

(18:41) Histories, Priors, Fluid Selves

(27:15) Relational Bundles Of Self

(32:19) Cyclic Selfhood And Harmonics

(41:51) Sensory Plasticity And Self

(45:42) Vibration, Agency, Harmonic Computation

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Josh Bongard: Your second item: inner alignment, higher-level decision-making via bottom-up phase synchronization. So if you're an agent and things get phase locked so that suddenly there are fewer things you can do, you retroactively say, yeah, I made a decision, this is the thing I chose. To me, it seems like an interesting place to start: this whole thing about agency, "I chose, I have control", maybe isn't that. There's something about resonance and the constraints of the physical world that restricts your actions. Does that comport with your thinking? Is it meaningless?

[00:47] Richard Watson: No, I understand the question. I can answer it from the point of view of that particular model, which I think you've seen.

[00:57] Josh Bongard: Yeah.

[01:04] Richard Watson: You're talking about free will, right? Is there an option to have made a different choice?

[01:14] Josh Bongard: I like to, I feel more comfortable with agency.

[01:18] Richard Watson: But only because, only because why?

[01:22] Josh Bongard: Because that word scares me. The minute that word is said, that's it. We're going to spend the next 58 minutes talking about what free will is, and that'll be it. I value my time with you guys.

[01:33] Richard Watson: So we can call it agency, but the aspect of agency that you're curious about is: could you have chosen otherwise?

[01:41] Josh Bongard: Yes, fine. Just don't use the F-word. We'll be fine. Please.

[01:47] Richard Watson: In that particular model, the new higher-level collective couldn't have chosen otherwise. But prior to the new high-level collective emerging, it couldn't choose at all because it wasn't a thing that could make a decision.

[02:19] Josh Bongard: Okay.

Richard Watson: Becoming a thing which has a sensitivity to an environmental cue that you didn't previously have is as close as you get.

[02:34] Josh Bongard: Okay.

[02:35] Richard Watson: There's this thing and it's out there and it has the potential to push me around, but at the moment, I'm not even sensitive to it.

[02:43] Josh Bongard: Yeah.

[02:44] Richard Watson: If I can organize myself in such a way that I have sensitivity to that, then I can react to it. I couldn't have reacted differently, but at least I've got myself into a situation where I have a sensitivity to it I didn't have before.

[02:58] Josh Bongard: Okay.

[03:02] Richard Watson: It's not everything you want from free will, but it's not nothing.

[03:08] Michael Levin: Richard and I were just talking a little while ago about this. What I think is interesting is that if you think about it, you've got some data moving through an algorithm or a machine or whatever. From the perspective of the data, it meets Dennett's definition, because whatever happens is caused by the content of the data. I may not have had control over my parts; I am what I am. But the things that are happening in this world now, I cause that. It's because of me that it's happening. So that's good enough. But I think we can crank it one step further in this kind of self-referential thing and say that sometimes you can have data that changes the data. You can have self-modifying data. This is the stuff I've been thinking about in terms of memories and living things and so on. Then I think you get an extra click on that knob because, none of this works immediately, but over the long term, if you have the chance to change your own structure and the way in which you are going to then cause the machine to do other things, now you get more agency. That's one click up.

[04:18] Josh Bongard: If this is semantics, we can just stop and switch gears. The data alters the data; that's an action. To me, data cannot act. That's why it's data. The minute data can act on other data, it's an agent or it's something else. Is there a specific reason you're talking about data acting on other data?

[04:41] Michael Levin: Well, the reason is I want to dissolve the distinction. I think from one perspective it's data, and from another, if you take the perspective of the data, it's no longer data; it is the agent. That's the part I'm trying to play with. So it looks like data.

[05:05] Atoosa Parsa: I was saying that I've been experimenting with this idea of having observers apply an action, an oscillation, to the substrate in order to see the computation that they desire to see. Instead of just passively looking at a dynamical system to find the result of a computation that they want to see, they, in my case in the experiment that I ran, just start vibrating a particle at some frequency. Before applying that action, before oscillating a particle, they are not seeing the results of the computation that they want to see. It's not happening in the system. Then they start applying the vibration for a short time. Because of that elastic wave propagating in the material and interfering with all the other mechanical waves in the substrate, you can now see the results of the computation that you wanted to see. It's really interesting to have multiple observers, and then you have to look for processes that are not interfering with each other. That's the biggest challenge: finding independence between different computations. One of my lab mates, Piper, was trying to work on it for a while. That's the biggest question right now. If you have these observers looking at the same system through a computational lens, another problem would be to optimize this computational lens. Not necessarily looking at different frequencies: they can look at different spatial scales or a mixture of frequencies. That's also the next step for me. That's what I'm interested in pursuing next. Just different ideas about the role of the observer in the computation.
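
(A minimal sketch of the flavor of this idea, not Atoosa's actual granular-material experiment: in a toy nonlinear substrate, driving in an extra probe tone creates mixing products, so a frequency component the observer wants to read is absent until the probe is applied. The nonlinearity, frequencies, and function names below are purely illustrative assumptions.)

```python
import numpy as np

# Toy stand-in for a nonlinear substrate (granular media are strongly nonlinear):
# the output is a mild quadratic distortion of whatever waves are present.
def substrate_output(drive: np.ndarray) -> np.ndarray:
    return drive + 0.4 * drive**2        # illustrative nonlinearity, not a physical model

fs = 2000                                # samples per second
t = np.arange(0, 2, 1 / fs)              # two seconds of signal
f_existing, f_probe = 50.0, 30.0         # a wave already in the substrate, and the observer's probe

existing = np.sin(2 * np.pi * f_existing * t)
probe = np.sin(2 * np.pi * f_probe * t)

def amplitude_at(signal: np.ndarray, f: float) -> float:
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - f))]

f_read = f_existing - f_probe            # the "answer" frequency the observer reads out (20 Hz)

print("without probe:", amplitude_at(substrate_output(existing), f_read))          # ~0
print("with probe:   ", amplitude_at(substrate_output(existing + probe), f_read))  # clearly nonzero
```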

[07:38] Richard Watson: Interesting. So can I check what you mean when you say the "observer" vibrates something in the system in order to see a particular repertoire of what it does? Do you mean that because they do that vibration, the system computes this rather than that? Or do you mean the system was computing both this and that anyway, but by doing this vibration, they can see this rather than see that?

[08:27] Atoosa Parsa: In my substrate, it wasn't being computed before.

[08:32] Richard Watson: It wasn't what before?

[08:34] Atoosa Parsa: The computation that we are looking for wasn't computed before because it's the result of those waves interfering and mixing with each other that produce the specific response at the output.

[08:47] Richard Watson: Almost as though the computation is being done on demand, because I put this vibration in, computation occurs.

[08:55] Atoosa Parsa: Yes, yeah.

[09:00] Richard Watson: Which is a way of saying that they are programming it.

[09:07] Atoosa Parsa: But also, the input is temporary. They apply it for a short time and then remove it. After a while, they look at the response, the output. It's not like the input to the system: if you have a logic gate, the signal I'm talking about is not one of the gate's inputs.

[09:31] Richard Watson: You have some input signals and then they stop and you do a probing signal and that gives you a particular computation out.

[09:39] Atoosa Parsa: Yes.

[09:40] Richard Watson: Interesting.

[09:42] Michael Levin: Could I ask, are we sure it wasn't doing it before? Because to check, you have to vibrate it. So is it that you couldn't see it before, or is there a sense in which it really wasn't doing it before? I'm not sure about that.

[09:58] Richard Watson: It was latent.

[10:01] Michael Levin: Yes.

[10:02] Atoosa Parsa: Before applying that vibration, I can look at the frequency spectrum of the output and I don't see what resembles the computation that I'm looking for. That's what I mean by it wasn't being computed before.

[10:25] Richard Watson: The answer to the question, does it?

[10:29] Michael Levin: It sounds like a way of cheating, all the quantum observer stuff: was it doing it before you look? Then you try to get a peek at it before actually really looking.

[10:47] Josh Bongard: Because that's the thing, you do an FFT and you're already choosing the frequency spectrum that you're going to look at. You're not looking at the infinite. So is it just beyond the edge of the FFT? It's impossible to say it's not there. It's just I didn't see it. If it is there, I didn't see it.

[11:05] Atoosa Parsa: I see your point. Maybe it was being computed, but not at the exact frequency that we were looking at. After applying that frequency, we can now see the computation.

[11:21] Richard Watson: That reminds me of two oscillators at the same frequency that are in anti-phase and cancel each other out. Then you say, so there isn't an oscillation. If I take away one of those oscillators, I can see the other one. So it was there, right? So was it there all along or only there when I looked? It's not hard to get yourself twisting in a knot there, is it?
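
(Richard's cancellation example is easy to reproduce; here is a minimal sketch, with arbitrary made-up numbers, just to show that the "hidden" oscillation is plainly there the moment its anti-phase partner is removed.)

```python
import numpy as np

fs = 1000                                   # samples per second (arbitrary)
t = np.arange(0, 1, 1 / fs)
f = 8.0                                     # shared frequency of the two oscillators

osc_a = np.sin(2 * np.pi * f * t)           # oscillator A
osc_b = np.sin(2 * np.pi * f * t + np.pi)   # oscillator B, pi out of phase with A

# Together they cancel almost exactly; remove B and A is clearly visible.
print("peak amplitude, both oscillators:", np.max(np.abs(osc_a + osc_b)))
print("peak amplitude, B removed:       ", np.max(np.abs(osc_a)))
```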

[11:55] Michael Levin: But one interesting implication of all of this is that we typically program by asking, what can I do to the medium to make it do whatever? And in this case we're pushed to saying, what do I do to myself to help me see what is already going on. It's a different focus.

[12:18] Richard Watson: What is it doing? What question do I need to ask of it? Versus what do I need to be in order to see the answer? Or what? What do I need to be in order to ask the right question?

[12:49] Josh Bongard: This relates back to what you were saying, Richard, about the observer not being sensitive to something and then becoming sensitive. The stronger you are as a self, the greater the range of questions you can ask. I like the one about taking away an oscillator. All of traditional electronics has primed us to think that technology is at rest: by default, it's not doing anything, and you have to add and push. But in the natural world, everything is on all the time. So you've got to remove, you've got to twist, you've got to rotate.

[13:24] Richard Watson: You've got to cancel out all of the stuff that's there that you don't want.

[13:28] Josh Bongard: Right. Yeah.

[13:30] Michael Levin: That's exactly how we got Xenobots. We didn't do anything to the cells themselves. We removed the other stuff that was forcing them to be a boring, two-dimensional object.

[13:42] Richard Watson: You took away the Xenobot canceller.

[13:45] Michael Levin: Yeah.

[13:46] Josh Bongard: Nice.

[13:47] Michael Levin: So you can imagine some crazy real cloud, this cosmic AWS that's already doing everything. You just figure out how to see it, and then there it is.

[14:04] Josh Bongard: This also relates to Sean's recent paper that I still keep trying to wrap my mind around: the way he's posing observers, the observers in his paper are more complicated than what we've done so far. You have one observer who wants to see things in parallel in space, but sequentially over time. There's another observer that is looking for multiple things at the same time, but at different places. Atoosa's work helped condition me: there can be multiple observers observing the same thing at the same time, but at different frequencies. There are more sophisticated ways, more angles from which an observer can make observations, or, as Richard was just saying, posing questions. Sean's paper made me think about that. How do we stretch ourselves and think about the increasingly exotic ways an observer can make an observation or probe a system? There's tons we could do there.

[15:16] Richard Watson: That's the question that you put in the chat.

[15:20] Josh Bongard: It's related to observers. That's just about whether you can have multiple observers in the same place at the same time. Then there's a question: if we're comfortable with multiple observers observing the same thing, how? What are the different ways? You mentioned rotations, and I don't know if that's what you meant, Richard, but there are so many crazy ones, at least at the edge of my intuition, that we're not thinking hard enough about the many ways. If there's some biological material that's rich, that's got rich dynamics, how much can you pull out of it? You've got to be able to look from as many different angles as you can.

[16:02] Richard Watson: Looking at it at a different frequency, that's what Atoosa's work did before. You see different computations by looking at different frequencies.

[16:12] Josh Bongard: Yeah.

Richard Watson: One can imagine that there are different computations going on at any one frequency by looking at different phases. One of my bullets is how to compute XOR by combining frequencies. But when you do that, you're also computing the complement of XOR: if you look at the answer frequency but pi out of phase, you're computing "if and only if" instead.

[16:41] Josh Bongard: Oh, that's cool. That's cool. Yeah.

[16:45] Richard Watson: The way to change the inputs is to slide the phase difference of the input variables' frequencies. To get the combinations of the input variables: if you're computing XOR of a particular pair of inputs, you're also computing XOR of all of the other possible inputs you could have. In other words, it's the whole function, not just the function of a particular pair of inputs. And you're also computing the inverse of that function, because that's just sliding the function around in phase space as well. But then, when you think about it, being at a different phase is like being at a different distance from the source. Assuming that your channel takes time, then rather than looking at it in situ at a fixed distance from the source, moving in space becomes the same thing. I'm going to see different functions depending on where I am, not just what frequency or phase I'm tuned into. And if the function is computed with two different oscillators in plan view, then what I see at this distance from one oscillator and what I see at the same distance from the other oscillator are different. So as I move around in a circle, changing location, I'm changing the phase difference of these two things, which changes which function I see.
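
(To make the XOR-versus-"if and only if" point concrete, here is a toy construction of my own, not the model Richard is describing: each Boolean input is encoded as the phase, 0 or pi, of a carrier at frequency f; multiplying the carriers puts the answer on the 2f component, and reading that component in phase gives "if and only if" while reading it pi out of phase gives XOR. Every frequency and parameter here is an illustrative assumption.)

```python
import numpy as np

f = 5.0                                         # carrier frequency, arbitrary
t = np.linspace(0, 1, 10_000, endpoint=False)   # one second of signal

def readout(a: bool, b: bool, probe_phase: float) -> float:
    """Correlate the mixed signal against a probe at the 'answer' frequency 2f."""
    phi_a, phi_b = np.pi * a, np.pi * b         # False -> phase 0, True -> phase pi
    mixed = np.cos(2*np.pi*f*t + phi_a) * np.cos(2*np.pi*f*t + phi_b)
    probe = np.cos(2*np.pi*(2*f)*t + probe_phase)
    return float(np.mean(mixed * probe))

for a in (False, True):
    for b in (False, True):
        iff_like = readout(a, b, 0.0) > 0       # in-phase probe reads out "if and only if"
        xor_like = readout(a, b, np.pi) > 0     # pi-out-of-phase probe reads out XOR
        print(a, b, "iff:", iff_like, "xor:", xor_like)
```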

[18:13] Josh Bongard: Yeah, cool. Very cool.

[18:15] Richard Watson: So my answer to the question is no: you can't have two observers seeing different functions in the same place, unless those observers are, in effect, in different places.

[18:28] Josh Bongard: I got it.

[18:31] Richard Watson: The same as being tuned into a different frequency, tuned into a different phase, being in a different location, looking at it from a different angle. They're all interchangeable.

[18:41] Michael Levin: Got it. Is there any way that an agent's priors alter what's going on? Could you be in the same place right now and have had a different history? The difference is not that you're in a different spot now; it's that you used to be in a different spot. Now you're interpreting differently.

[19:10] Richard Watson: Either that means I'm not in the same spot or I'm not the same person, right?

[19:16] Michael Levin: I'll take the latter.

[19:18] Richard Watson: If we imagine that it's possible for me to make a decision which alters future trajectories, then if I decide this, this future will come about, and if I decide that, that future will come about. Then if I turn that around and look backwards into the past, I have a different story about how I got here: I got here from this history versus I got here from that history. That's like saying if I came in on this history I'm going to go out on this future, and if I came in from that history I'm going to go out on that future. Which means that the difference between being able to decide between those two futures is the same as coming from those two different histories. Which means that if I could change my mind about how I thought I got here, or what I thought my history was, that would be the same as choosing a different future. Both of them are just deciding to be something other than you are. I don't know how you do that. But if you think you can do it about the future, you must be able to do it about the past as well: to become a different person now, and then whatever that free will I didn't have was going to make me do would be different, because I became a different person, because I got here from a different history.

[21:08] Michael Levin: That's a really interesting symmetry: choosing the future requires you to choose an interpretation of the past. I think it's very biological in the sense that agents have to constantly reconstruct a model of their past from the memory traces they currently have. And there's your opportunity to reinvent yourself for the future, because you don't actually know what your past was. It's a story that every 300 milliseconds or so you have to cobble together and maintain.

[21:42] Richard Watson: One way, the classical way to view it, is: here's my history; that never changes. And here's what's happening now. What do I do given what I am? It's determined by my history. What do I do given what's happening now? That's the sort of classical way to think about it. But if instead I have to reconstruct myself from memory all the time, then how I reconstruct myself now in this situation is different from how I reconstructed myself a moment ago or in a different situation, which is saying memory was faulty, or saying my history is different, or saying my history is context dependent, it's context sensitive. Who I am and where I came from is different in this situation than in that situation. What fun.

[22:54] Josh Bongard: The present can push you to be different observers. You can look at your past history; you could look with different cognitive frame rates. I look back on my history and I see this, which means it's easier for me to go forward in this way, because of how I'm looking at my past. A moment later, my present enforces a different cognitive frame rate on how I look at my past. I'm looking at the same data, but in a different way, which suggests it's easier for me to move into the future in a different way.

[23:24] Richard Watson: This whole idea of being a self is nonsense, isn't it?

[23:31] Josh Bongard: It's increasingly untenable to hold onto. It doesn't make any sense.

[23:35] Richard Watson: Because if you put me in a different situation and I do a different computation, then I'm a different me. Or if you look at me differently, I'm a different me. You don't even have to put me in a different situation; taking a different perspective on me is a different me. The thing that's weird about it is that it ever feels like a coherent thing in the first place.

[24:09] Michael Levin: I think that's because it takes time to do all of that. It takes some amount of time. Over short enough periods of time, perspectives change slowly; they're coherent. Over longer periods, they can change radically, as we know from childhood, from metamorphosis, from all kinds of stuff. But in the short term, there's some inertia there.

[24:39] Richard Watson: I'm not sure that even that's true, because if you were to look at it time-sliced at a frequency that was twice as fast, then in between you wouldn't be there at all. If you're recreating your mental state every 100 milliseconds, it feels like it has continuity at that particular frame rate. But in between those frames, who knows what the **** is happening? It might be that there isn't anything there at all.

[25:27] Michael Levin: It's also held together. Your inertia and my inertia hold each other together because now there's some dependence I can have on the fact that tomorrow, if we have a conversation tomorrow, you're not going to be all about natural selection being the answer to everything. There's some ability I have to know that that's not going to happen.

[25:57] Richard Watson: So there are absolute truths, some platonic realities, right?

[26:00] Josh Bongard: Mike, that's the thing: if you ask Richard in eight hours, at 2 a.m. his time, and he's sleepwalking and sleep talking, you might get that answer, and it's just as valid. You're still talking to Richard, the thing in that bag of skin over there, but it's just at a different time, and you're going to get a different answer. But you should probably ask him when he's awake and caffeinated, and then you'll know what to expect.

[26:26] Richard Watson: I'm already saying that natural selection is the answer to everything, but you're not tuned into that version of me.

[26:34] Michael Levin: That's what I mean, your inertia and mine are holding each other together. Because I'm committed to a certain way of having a conversation with you and that's what allows us to write a book over the next three years. There's some balance.

[26:49] Richard Watson: If you could change quickly enough, you would see that I wasn't consistent.

[26:54] Michael Levin: It's not that there aren't selves; it's that what it refers to is a temporary mutual agreement on some inertia about how we're going to measure each other.

[27:15] Josh Bongard: But is it useful? The self, to me, always advertises a unity. There's a thing. Not plural, singular.

[27:23] Michael Levin: Yeah, that doesn't seem, that doesn't seem.

[27:28] Josh Bongard: But if we relax that, maybe this is semantics. Self doesn't seem like a good word for it anymore. The best we can do is selves. Which implies something very different.

[27:43] Richard Watson: There's a bundle of selves which are not quickly diverging for a given observer, for a given other bundle of selves.

[27:56] Michael Levin: It's a more or less temporary commitment to a perspective. It defines a way you plan to observe. You may change that later, but it has some comet tail on it that's not instantaneous.

[28:15] Josh Bongard: And so we're claiming now that there is a plurality of things and they stay aligned under certain conditions, over a certain time span. And it seems, again, using that language, that we're implying they do so more than other stuff usually does. Everything else out there is diverging and more independent. But there are certain things called selves that are more locally aligned in space and time. Now, is that a reasonable assumption? Or does it depend on where you look?

[28:50] Richard Watson: Diverge even less, right?

[28:52] Josh Bongard: So are selves privileged in any way or not? Maybe it's debased.

[29:03] Richard Watson: You're a self to me. If there are changes to my perspective that make you do different things, then you're a self to me. Whereas to the chair, it doesn't matter how I look at it; it still looks like a chair and it still does chair stuff. I can't look at the chair in a way that makes it do something different.

[29:33] Josh Bongard: Yeah.

[29:34] Richard Watson: It's right down to the quantum level. But that's not within my range of ways of looking at things. But you're a self to me because we're in similar frequencies, in similar phases, such that we can phase lock in such a way that you have some coherence. So you at least stay still for a while, which you would have in common with the chair. But unlike the chair, the range of motion that I have makes you look like you're responding to what I do, having some sensitivity to the kind of things that I can do. There's a back and forth.

[30:23] Michael Levin: I do think there's a metric here, something related to the cognitive light cone: you're a self to the extent that there's some radius of bundles of things that I think we can relate over in a goal-directed way. But I would be really careful with the chair thing. Yes, clearly a chair is at a different level, but what I take away from the stuff that we've done on memory in gene regulatory networks and some of this new sorting algorithm stuff is that things that look like a chair from one perspective are not going to be you, but they're not quite a chair from other perspectives. If you look at the GRN in the right way, you start to see, okay, there's some learning here and some other stuff.

[31:13] Richard Watson: So I didn't mean to be chair-ist.

[31:15] Michael Levin: Yeah, right. That was very chair-ist.

[31:20] Richard Watson: I don't know how to look at a chair in such a way that it does something different, but maybe a chair does. Maybe a chair knows how to look at a chair in such a way that it does something different. Then it would be a self to that observer, but it's not a self to me. So it's observer-relative.

[31:41] Josh Bongard: It's your failing, not the chair's.

[31:43] Michael Levin: Yeah.

[31:50] Josh Bongard: So for selves, there's a Goldilocks zone. You can't be too integrated and you can't be too divergent and independent. It connects back to embodiment and empowerment. There are certain actions you can do that pull apart the strings or decohere something else, give it a little bit more independence or hold it still for a while. That's an important aspect of selves.

[32:19] Richard Watson: Can I tell you about repeatedly recreating the conditions for its own origination?

[32:30] Josh Bongard: Sure.

[32:33] Richard Watson: So imagine a biological self going through multiple lifetimes, a lineage going through multiple lifetimes: development, maturity, death, birth, again. You can think of that as a cycle, which is just going around. If you think about it as a cycle, then that means that the information at any one point on that circle is actually the same as the information at any other point on that circle, because otherwise you couldn't get back to where you were. If you lost information as you went around, then you wouldn't be able to get back to the spot where you started. So there has to be the same information there in the adult form as there is in the embryonic form; it's just expressed differently. But it feels like the selfhood comes and goes. When it's an embryonic cell, there's not much of a self there. But when it's in its adult form and it's walking and talking, I think there's a self there, and then it goes away again. So how can I reconcile this idea of the selfhood coming and going whilst I know that it's a cycle which is perpetual, so the information must be there all the time? The way that I have come to reconcile that is to imagine that you have a cycle which is resonating with other frequencies, so you have an oscillator that resonates with other oscillators, but they're not at the same frequency. It's harmonic oscillation: for example, it can oscillate with something that's half the frequency, a longer wavelength, or twice the frequency, a shorter wavelength. And the information that's carried on the focal frequency that you started with gets pushed out onto these other frequencies above and below. That means the information at the frequency you were looking at appears to have gone, and then it comes back; it gets pushed back again from those two frequencies to the focal frequency. So it appears like you have a system which is there and then it's not there and then it's there and then it's not there. But that system was creating these vibrations in its environmental variables and in its genome, levels of organization which are not the level of organization of the organism, of the self that my self sees. They look like universal constants, environmental constants, which don't look like a self at all, or they are high-frequency vibrations that just look like noise. If you push them very far away in frequency, they look like they're not there at all. But actually there's a coherence between them, because they were created from the same source, and that coherence necessarily recreates the original; that memory is recreated. So if you take a step back from that whole thing, it's a cycle. If you want to see it come and go, then you have to step back to another frame of reference in order to see it that way. The point is, there's an observer-dependent way of looking at it which makes it look like a system that comes and goes, a system which repeatedly recreates the conditions for its own origination, which is different from just persisting, just going around.
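
(A loose signal-processing analogy of my own, not Richard's model: amplitude-modulating a carrier pushes all the power into sidebands above and below the focal frequency, so an observer tuned exactly to the focal frequency sees nothing there, yet the two sidebands are mutually coherent, and a coherent observer recreates the original rhythm exactly. Frequencies and filter choices below are arbitrary assumptions.)

```python
import numpy as np

fs = 4000
t = np.arange(0, 1, 1 / fs)
f_focal, f_rhythm = 200.0, 5.0                   # focal frequency and a slow "life-cycle" rhythm

rhythm = np.cos(2 * np.pi * f_rhythm * t)        # the information we care about
signal = rhythm * np.cos(2 * np.pi * f_focal * t)   # lives entirely in sidebands at 195 and 205 Hz

freqs = np.fft.rfftfreq(len(t), 1 / fs)
spectrum = np.abs(np.fft.rfft(signal)) / len(t)
print("power at the focal 200 Hz:", spectrum[np.argmin(np.abs(freqs - f_focal))])   # ~0

# A coherent observer (one that still carries the focal phase) recreates the rhythm:
demod = 2 * signal * np.cos(2 * np.pi * f_focal * t)
D = np.fft.rfft(demod)
D[freqs > 50.0] = 0                              # crude low-pass to drop the high-frequency terms
recovered = np.fft.irfft(D, n=len(t))
print("recovered matches rhythm:", np.allclose(recovered, rhythm, atol=1e-9))
```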

[36:53] Josh Bongard: Yeah, that's awesome.

[36:58] Richard Watson: And I think that I can do that in very simple form. It's related to Mike's bow tie stuff. Because I had this idea that whatever the information is that you're interested in, in order for that not to just be a one-to-one imprint or memory of it in a boring way, it has to be compressed and then expanded back again. When you compress it, you can do generalization, and when you expand it back again, you don't necessarily get back exactly what you put in. That's important, because otherwise you're not really doing learning, you're just doing memory. But I realized that wasn't enough, because if you compress it a lot and then expand it back a lot, it becomes almost anything: you compressed it so much that you lost all the information. So how do you get this balance between wanting a lot of compression, because I want to have generalization, which is non-trivial, and wanting to be able to do recall, which has high fidelity, so that I'm not just generating everything from the class of all possible things? The way that you do that with frequencies is that you take two different harmonics. So you compress it with one harmonic, for example the octave, which has a two-to-one ratio, but you also compress it with the perfect fifth, a three-to-two ratio. When you project them back again, they're both ambiguous, but the ambiguity resolves in a way that gives you one perfect memory again. It's like stereoscopic vision. With one eyeball, I've lost the depth information; I compress 3D down to 2D. But if I have two eyeballs, I can take two different perspectives, and that enables me to recreate the third dimension that I'd lost. So when you do that resonance on different frequencies, you can't just do it on one frequency, one octave. That would give you lots of compression, but then when you push it back again, you wouldn't know what the original was. Instead it's like two harmonic stacks, the octave stack versus the perfect fifth stack. They give you a way of compressing it from two different perspectives, and when those two different perspectives are brought back together again, there's one place where you get a proper recall. Then you can do generalization, which is non-trivial, but you can also do specific recall. It's not that you have to sacrifice fidelity in order to get generalization, because you don't want to do that. I want to be able to recall really specific things, but I also want to have a really deep, generalized understanding of them.
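
(A loose arithmetic cousin of that idea, my analogy rather than Richard's construction: two different lossy compressions of the same value, each ambiguous on its own, can jointly give exact recall, the way two eyeballs jointly recover depth. The moduli 2 and 3 below are stand-ins for the two harmonic ratios, chosen purely for illustration.)

```python
def compress(x: int) -> tuple[int, int]:
    """Two lossy 'harmonic' views of the same value."""
    return x % 2, x % 3

# Each view alone generalizes: it lumps many values into one class and cannot recall.
print("values with x % 2 == 1:", [x for x in range(6) if x % 2 == 1])   # [1, 3, 5]
print("values with x % 3 == 1:", [x for x in range(6) if x % 3 == 1])   # [1, 4]

def recall(view_a: int, view_b: int) -> int:
    """The pair of views resolves the ambiguity: exact recall within the range."""
    return next(x for x in range(6) if (x % 2, x % 3) == (view_a, view_b))

assert all(recall(*compress(x)) == x for x in range(6))
print("every value in 0..5 is exactly recoverable from its two compressed views")
```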

[40:50] Josh Bongard: The ability to juggle in that manner is what makes selves special.

[40:58] Richard Watson: If I wanted to recover the event which caused those two memories, I have to be at the same position that generated them. If I don't know where my two eyeballs are, I can't recover the third dimension. I need to know something about the point of intersection of those two perspectives in order to get the memory back. Otherwise, I'd get some other memory back if I didn't know how they related to one another.

[41:51] Michael Levin: I was thinking about our tadpoles with the eyes on their butts.

[41:58] Richard Watson: Oh, yeah.

[41:59] Michael Levin: So evolutionarily, their eyes are absolutely not where they have been for the last however many millions of years. Of course, we don't know what they see. We know they can see usefully and adaptively. There's a lot of work on human sensory substitution and sensory augmentation; people can tell you what they see with these weird devices and the different modalities they get connected to. So I think that's interesting. The plasticity, how fast you adapt to changes, relates to what you were just saying. The rubber hand illusion: six or seven minutes of experience overrides millions of years of being a tetrapod.

[42:52] Richard Watson: Give me a second, Emily. Do you want me to go to the kitchen? That is extraordinary, isn't it?

[43:01] Michael Levin: That's the thing about being a living observer: you're so ready. I think that's what explains developmental plasticity and xenobots and all of this stuff: you just don't take the past all that seriously. You are very willing to reinterpret what's going on. Most of the time it happens the same way, but if it doesn't, no problem, because you weren't expecting that in the first place. You were making up a story to begin with.

[43:29] Richard Watson: But you haven't forgotten that it's a hand or that hitting it with a hammer hurts, right? So there's lots of depth to the concepts that you're holding on to, you project them onto something else.

[43:46] Michael Levin: You know, when people get, and I mean they've now done all kinds of weird sensors and effectors, but just the simplest thing: when you get a prosthetic hand that can rotate 360 degrees, which our hand does not do, very quickly people learn to use it that way. When you reach for a coffee cup, you go and rotate it the "wrong" way. Your hand never, ever used to do that, but no problem; it gets incorporated, right?

[44:10] Richard Watson: Very simple examples where the goggles that turn your...

[44:15] Josh Bongard: But Mike, your hand used to be half the size it is, and a quarter of the size, and an eighth of the size. Maybe not rotations, but there were enough spatial transformations of your hand in your own lifetime, literally, that suddenly rotating 360 degrees is a mild transformation compared to all the ones you've experienced firsthand.

[44:36] Richard Watson: You know what I can't do, though? I can't use tweezers to pluck my eyebrows when I'm looking in the mirror. Which way is it? Is it closer? Is it farther?

[44:47] Michael Levin: That's hard.

[44:49] Josh Bongard: We'll work on that. We'll add it to the list.

[44:51] Richard Watson: I should be able to do this. It's just the opposite of what you're thinking. You can do it.

[44:58] Michael Levin: Have you ever seen those bicycles where they put one thing on the handlebars? It's hard, but people do learn. The other thing about selves, since we were talking about the self: I think another interesting thing is the self-referential thing that Hofstadter was always talking about, which is that the self actually has a perspective on itself. So it's not just that we can look at something and see what we see; it is doing it to itself. I don't know if that's a phase transition or if it's perfectly smooth with chairs and whatnot, but I think there's something there: the systems that also interpret themselves.

[45:42] Josh Bongard: All I do is think about vibration these days. Is there something special about that modality? I feel bad for heat and for light and for electricity and for mechanical shear forces. Is it just because that's what's cool these days? Or is vibration really it? Rich, your "Song of Life": we should have been focusing on vibration all along. We got distracted by electricity after the Second World War.

[46:16] Richard Watson: That pertains to: everything is a rotation, and agency is just the right amount of late.

[46:35] Josh Bongard: Is just the right amount of what?

[46:37] Richard Watson: Late.

[46:38] Josh Bongard: Late, got it.

[46:40] Richard Watson: The thing is, if you take it that the thing you really want to explain is agency, not free will (we're not doing that), then what you mean by that is that instead of being a consequence of actions that happen to you, you want to take actions because of their consequences, consequences that haven't happened yet. You have to turn that whole thing around: this action occurred because of its consequences, not these consequences occurred because of those actions. That's the thing that makes agency weird. The way that you do that is you say: if there's something which is happening at a regular interval, and there's a consequence of that which happens with a little delay, then if the delay happens to be just a little bit less than the period, it ends up looking like the cause and effect is the other way around. It's related to the phenomenon of the rotating wagon wheels in spaghetti westerns when you watch them on TV: occasionally the wagon wheels look like they're spinning the wrong way. They look like they're spinning backwards because of this stroboscopic effect. If you're just the right amount of late, then it looks like you're coming before things rather than after things. Instead of being a reaction which happens afterwards, you appear to be a cause which is happening before. If the world really is a rotation, you can't attribute cause because A happens before B rather than B happens after A; that's just not relevant if everything is really a rotation. That's one reason why I think vibration is the thing: because you want to have a circular notion of cause. Without a circular notion of cause, you're never going to get to agency; you're never going to break that. When you have a linear notion of cause, agency is always going to be mysterious. When you have a circular notion of cause, it's possible to let go of your notion of time going in a particular direction and accept that these are all parts of a cycle, and that they all have the same amount of causal power as every other part of the cycle. Insofar as they are a cycle, all points on that cycle have the same information about all the other points on that cycle. There's no one part of it which is causing. That's one reason why I think it has to be about vibrations: because it's about rotations. It's about cycles.
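
(The wagon-wheel effect is easy to simulate; a minimal sketch with made-up numbers: a wheel spinning just under the frame rate advances almost a full turn between frames, so on screen each frame shows the spoke slightly behind the previous one and the wheel appears to rotate backwards, which is the sense in which being "just the right amount of late" reads as preceding rather than following.)

```python
import numpy as np

frame_rate = 24.0        # frames per second, as in film
spin_rate = 23.0         # wheel revolutions per second, just under the frame rate

frames = np.arange(8)
true_angles = 360.0 * spin_rate * frames / frame_rate        # real spoke angle at each frame
apparent = (true_angles + 180.0) % 360.0 - 180.0             # what the eye sees, wrapped to (-180, 180]

# The wheel really advances 345 degrees forward per frame,
# but on screen each frame shows the spoke 15 degrees behind the last one.
print(np.round(np.diff(apparent), 1))                        # [-15. -15. -15. ...]
```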

[50:06] Josh Bongard: That's what's so important about vibration.

[50:10] Richard Watson: The other thing that I think is important about it is harmonics, that the symmetries that you get from doubling the frequency and doing other ratios of the frequency, there's nothing special about doubling or taking thirds when you're doing in a linear system. I cut it up that way. But when you take a circle and you pinch it in half and you fold it over, that's a way of getting a correspondence that's natural for the size that you started with. But in a linear model there isn't really any particular size.

[51:01] Josh Bongard: That's interesting.

[51:04] Richard Watson: In a linear model, why is that a unit?

[51:07] Josh Bongard: Yeah.

[51:08] Richard Watson: I could have picked any unit I like, but for a cycle, that really is a unit. Then all the other units are defined from there: it's a half or twice that.

[51:20] Josh Bongard: If all those symmetries are useful and we're starting to look at computation in vibration, are there natural symmetries there that have yet to be found? If you can find a NAND at this frequency, you're likely to also be able to find an XOR at double the frequency. It's not arbitrary where you look if you're treating what you see as computations. I wonder if there's some mapping there, some structure. You always get, when you get a computation here and there, something.

[51:54] Richard Watson: That question is about what are the natural laws of induction in that space? What are the natural kinds of interpolation and explanation?

[52:06] Josh Bongard: It's not all arbitrary. You can try and force it, and it's arbitrary if you do it without vibration, but if you do it with, there is that latent structure and you might be able to exploit it.

