
Conversation #2 between Chris Fields, Richard Watson, and Mike Levin

Chris Fields, Richard Watson, and Mike Levin explore how gene regulatory networks store memory in trained yet static pathways and what this reveals about learning and mind-reading technologies. They discuss attractor dynamics, semantic ambiguity, embodiment, and limits of decoding internal states.

Show Notes

Chris Fields, Richard Watson, and I discuss how memory is stored in trained but static pathway models, and what this means for memory and mind-reading technologies in general.

See https://thoughtforms.life/but-where-is-the-memory-a-discussion-of-training-gene-regulatory-networks-and-its-implications/ for more information and background on this topic.

CHAPTERS:

(00:00) Gene networks as learners

(05:04) Attractors and pure read

(08:50) Detecting learning from dynamics

(16:13) Quine and semantic ambiguity

(19:21) Attractor basins and learnability

(24:35) Computation limits of reading

(32:36) Internal perspectives and anesthesia

(40:39) Embodiment, resonance, and meaning

(46:20) Hierarchical attractors and transfer

(52:27) Communication, channels, shared meaning

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Michael Levin: So the broader picture that people who do neural decoding often think about is: can we scan a brain and, by reading whatever properties you think are important, extract the cognitive content of the mind that is implemented there? That's the big context. And then there's the question of why it is harder for others to do it than for you to do it. From the outside, what's the difference between reading your memory engrams as a neuroscientist versus you reading them from the inside? Again, not perfect access by any means, but much better than we seem to have from outside, and what does that mean? And so I was thinking about this in a much simpler system, the thing that Richard first did in, I think, 2010, and that we did after that in a slightly different way, which is to train these gene regulatory networks. The idea is that you've got a network, and it's deterministic: the rules by which the nodes turn each other up and down are fixed. The structure does not change, so there is no synaptic plasticity as such.

[01:19] Richard Watson: But in my model, the structure does change over evolutionary time.

[01:24] Michael Levin: Yes, over evolutionary time. But during the learning, does it?

[01:28] Richard Watson: The only learning that occurs is structural change over evolutionary time.

[01:33] Michael Levin: Ours is different in that way. Ours are completely locked down. The nodes don't change, the weights don't change, the relationships between them don't change. All of that is completely locked down.

[01:48] Richard Watson: Only the activation state.

[01:51] Michael Levin: What we then do is we treat the thing as an agent and we provide stimulation. Let's just talk about the case of associative learning. We found 6 different kinds of memory, but let's just look at associative learning. We pick a node and call that the unconditioned stimulus. We know that if I stimulate that node, there's another node somewhere, call it R for response, that is going to go up. Every time I tweak the UCS node, the R node goes up. Then we pick another node that begins as a neutral stimulus: if we tweak that node, the R does not go up. That's how you choose them. What we do is we apply those stimulations together. It's a classic Pavlovian thing. You've got the thing that's salient to the agent, you've got the thing that initially is not salient, and you apply them together. After a while, the neutral stimulus becomes a conditioned stimulus: when you hit that node alone, the R goes up. Now you have a trained network that has associated those two stimuli. We looked for this in 60 different biological networks, and it was pretty prevalent. We looked for it in random networks, and it was quite rare. It's not the kind of thing Stuart Kauffman studies, where these are properties of all networks, including random networks. We found it.

[03:32] Richard Watson: That's so cool. I didn't know about that work.

[03:34] Michael Levin: We have two papers, one looking at that in Boolean networks and one looking at it in ODE models. Both of those have a website repository where you can download these models. Then we simply put them through a battery of tests, like you would for a new animal you were studying, to ask what it is capable of learning. You get habituation and sensitization and associative learning, and they can count to small numbers, and things like that. And there's a million other things to check that we haven't gotten to yet. The question that I started thinking about was this, and it tripped up reviewers for both papers. It was a real nightmare for both of them, because the reviewers asked an initially somewhat reasonable question, which is: where's the memory? If the structure doesn't change, the node weights don't change, and there is no explicit scratch pad to save stuff in, where is the memory? That's what a lot of people complained about during review. So I started thinking about this question specifically, and we explained: okay, it's in the dynamical state, and you've got this notion of an effective network, which is different once it's been through all that experience; the phenotypic network, as we would put it, is different from the original structure. It's all in the dynamic state, but people found that very unsatisfying.
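
To make the setup concrete, here is a minimal toy sketch of the kind of thing being described, not the published Boolean or ODE models: a four-node network (UCS, CS, a hidden latch G, and the response R, all names invented for this illustration) whose update rules never change. Pairing UCS with CS flips the internal latch, after which CS alone drives R; the trained and naive networks differ only in their activation state.

```python
# Toy illustration only (not the published models): fixed rules, and the
# "memory" lives entirely in the activation state. Node names are invented.

def step(state, ucs_in, cs_in):
    """One synchronous update; these rules never change."""
    ucs, cs, g, r = state
    return (
        ucs_in,                # UCS is driven from outside
        cs_in,                 # CS is driven from outside
        g or (ucs and cs),     # latch: set only when UCS and CS co-occur
        ucs or (cs and g),     # response fires to UCS, or to CS once latched
    )

def respond(state, ucs=False, cs=False, ticks=3):
    """Apply a stimulus, report whether R fired, return the relaxed state."""
    fired = False
    for _ in range(ticks):
        state = step(state, ucs, cs)
        fired = fired or state[3]
    for _ in range(3):         # release the stimulus and let the network settle
        state = step(state, False, False)
    return fired, state

naive = (False, False, False, False)
print(respond(naive, cs=True)[0])               # False: CS alone is neutral
print(respond(naive, ucs=True)[0])              # True: UCS alone drives R

_, trained = respond(naive, ucs=True, cs=True)  # Pavlovian pairing
print(respond(trained, cs=True)[0])             # True: CS alone now drives R
print(naive, trained)                           # same rules; only the latch state differs
```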

[05:04] Richard Watson: Can I say it back to you to see if what I would guess about it resonates with how you understand it?

[05:12] Michael Levin: Yeah, go for it.

Richard Watson: Since it's a recurrent network, it can hold a dynamical state in its attractor. In one attractor, the relationship between two nodes can be different from the relationship between those two nodes in another attractor. When you train it, you're pushing it from one attractor to another.

[05:31] Michael Levin: Exactly. That's how we understood it. Santosh and Surama and I understood it the exact same way. But the first thing I realized is that people are very uncomfortable with this because they're looking for a physical location; they're looking for the engram. And the engram is supposed to be a thing that they can get their hands on and read out. It took a lot of work to bang through this idea of the different attractors.

[06:07] Richard Watson: But that's not precluded. If you could read the dynamical state, then you could read what the memory is.

[06:14] Michael Levin: Let's talk about that, because I'm still not sure. Maybe you can tell me. I started thinking about exactly that question, where is the memory, from the following perspective, just like the brain-scanning idea. Given a network, can I tell whether this thing has been trained or not? Can I tell whether it's gone through the process? And more specifically, what do I need to do to be able to say, "Oh yes, this one has seen the stimulus 10 times; this one has not"? What information do I need? And in particular, can I do it with a pure read? In other words, can I do it without disturbing or stimulating the network, or a model of the network, those being the same thing for this purpose? Can I do it without doing things to it and seeing what happens? Can I just purely read it? Out of curiosity, I posted a question on Twitter for the neural decoding folks. I asked: do you think brains can be read out in a pure scan that doesn't interact with the brain, doesn't stimulate it, where you just assume you can read anything you want? About 80% of the people said yes: they thought that eventually, once we know how to read the right bits of the brain, we won't need to interact with it. You can do a pure scan and you'll know everything there is to know. I have more thoughts about that, but I'll shut up, and I want to hear what you guys think. What can you derive from a network like that without interacting with it?

[07:55] Richard Watson: So let me see if I'm still catching up. It's clear that if you wanted to know whether this network has a particular relationship between the conditioned stimulus and the response, then you could poke the conditioned stimulus and see if you get the response. But you want to know: can you tell that that linkage is there without actually frobbing the widget? You're allowed to read the exact activation state of all the neurons at any particular time, so it's not as if the memory is immaterial; it is there. The question is whether that could ever fail to be enough to determine what the relationship would be, without simulating the relationship.

[08:48] Michael Levin: Exactly. Exactly.

[08:50] Chris Fields: Can I back up and ask about the random networks in the paper with Surama and Wes? Do the random networks have random dynamics while the biological networks have saturation dynamics? Or do the random networks all have saturation dynamics in all the reactions?

[09:24] Michael Levin: The first thing to say about random networks is that actually generating random networks is a non-trivial task, because if you don't want to introduce bias, how do you sample the space of all possible networks? It's not easy. So we did our best to be fair, but no doubt we're still caught in a region of the space of possible ones. But what do you mean by saturation dynamics?

[09:53] Chris Fields: Dynamics that are basically S-shaped: a long saturated region at the bottom, then a curve, a roughly linear region, another curve, and a roughly saturated region at the top. Because if you drive a network, or some components of the network, from the linear region into a saturation region, it's going to behave differently. It's going to behave like an infinite source or an infinite sink, depending on which of those you drove it into. Once you have something that acts like an infinite source, it's going to behave as a memory for some time, until that source is depleted by constantly being read and having to feed activation back into the system. So I would think that one answer to your question about reading is: can you tell what parts of the network are saturated by looking at the activity levels plus a plausible model? And that might tell you what's being remembered. Suppose you have some kinase that's saturated; the whole available population has been activated. Then regardless of how the network is modulated, that kinase is going to look active for some period of time, until those active kinases are deactivated somehow or other by the activity of that network or some other network. So it's going to look like it remembers that this part of the state space is always active. But that kind of reading would require having a model of the dynamics of that particular network. And I think that's what makes the responses to your query to the neuroscientists very strange, given that neuroscience recognizes that brains are unique; they have a unique developmental history. They're similar at some gross scale, and they get less similar as the scale goes down. So without a predictive model of an individual brain, you wouldn't be able to look at its dynamical state and say much about the semantics. You could say gross things about the semantics, like "this person is representing arm movement", but saying exactly which one would be difficult.
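
Chris's saturation idea can be sketched with a single self-exciting node whose activation is a steep sigmoid; the parameters below are illustrative, not drawn from any real pathway. A brief drive pushes the node onto its saturated branch, and it stays there after the stimulus is removed, so a later read sees sustained activity without the stimulus that caused it.

```python
import math

# Illustrative parameters only: one node with sigmoidal activation and
# positive self-feedback is bistable. A transient drive saturates it, and
# it keeps reporting activity long after the drive is gone.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def simulate(x, drive, steps, dt=0.1, gain=10.0, threshold=0.5):
    """Euler steps of dx/dt = -x + sigmoid(gain * (x + drive - threshold))."""
    for _ in range(steps):
        x += dt * (-x + sigmoid(gain * (x + drive - threshold)))
    return x

x = simulate(0.0, drive=0.0, steps=200)   # rest: settles near the low state
print(round(x, 3))                        # ~0.007

x = simulate(x, drive=1.0, steps=50)      # brief stimulus saturates the node
x = simulate(x, drive=0.0, steps=500)     # stimulus removed...
print(round(x, 3))                        # ~0.993: it "remembers" being driven
```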

[13:00] Michael Levin: I think the first part of what you said about looking for the saturation and sinks and all that, it sounds like you could tell that it's likely that there was some kind of memory, but by finding these sinks you wouldn't be able to say what it was a memory of. You wouldn't be able to recover the actual association.

[13:30] Chris Fields: That may well be the case. You may be able to tell that it's in some memory state without knowing which one.

[13:36] Michael Levin: Which is still cool. That's a step in the direction I was thinking about. You can tell, maybe, that this thing has learned something, but you couldn't tell what.

[13:52] Chris Fields: If that model is correct, you should be able to look at the architecture of a network and make predictions about what unconditioned stimuli to use to drive the system into a memory state.

[14:08] Michael Levin: That's very interesting. We didn't do any of that. We did a purely empirical study; we didn't try to predict how many memories there were going to be. That's clearly a future thing we should do. We just put every network through every possible combination of assigning nodes to this role or that role, and did statistics on what works. We found that the biological ones have multiple memories that they can hold simultaneously; they can hold more than one.

[14:40] Chris Fields: Some of those networks are quite complex, such as the ones you showed in the picture in the paper.

[14:47] Michael Levin: Some of them are, although Santosh pulled together, I think, at least for the first paper, some minimal ones that show each kind of memory and are extremely simple, 3 nodes, just to illustrate, so that people could clearly see how it works. So they don't really need to be very complex to show the basic phenomena. I was really interested in this notion that if it's true that there are things you cannot recover about the content of the network without interacting with it, without becoming some kind of partner with it, then you are now stimulating it, and there's an interactive process, and that's interesting. There might be a metric of the degree to which a system is resistant to understanding from the outside without interaction, some measure of how much you are missing by pure inspection, by not being able to poke it. I think it's a very steep curve, because it looks to me like even these simple GRNs have some of that going on. By the time you get to brains, it must be very high.

[16:13] Chris Fields: This reminds me of an ancient paper by the logician W. V. O. Quine from the late '50s called "Ontological Relativity." He does a thought experiment about an anthropologist who's visiting some previously uncontacted tribe. He wanders around with members of this tribe, and they point to various things and say various words. Quine's point in this paper was that, without the assumption that their conceptual scheme is exactly the same as yours, which is not a justified assumption, you will not be able to know that when a tribesman points to something that looks to you like a rabbit and says "gavagai" (Quine's made-up word), "gavagai" means "rabbit." It might mean "furry animal at a certain distance" or any number of other things. His point is that even asking a finite number of questions, even manipulating the person or the situation in some finite number of ways, you're not going to come up with a unique, correct meaning of this word.

[18:01] Michael Levin: The way to improve the situation would be precisely with interventions: show them something else and start testing hypotheses. I think that's really interesting. It leads to the idea that the reason the interior inhabitant of the brain has an easier time of it is that we are constantly doing functional experiments via this active inference loop. You are constantly poking your own brain. Because of that loop, you have access to functional experiments that cannot be done by pure reads.

[18:44] Chris Fields: You've been doing it starting in infancy, if not before. During your entire period of learning the identities of external objects and the identities for you of words, you've been monitoring and building a model of how your own brain is working. This gets back to verbal babbling and motor babbling.

[19:21] Michael Levin: Richard, you look unhappy.

[19:23] Richard Watson: I should expect this mind-bending from these conversations, that's all. Two things. One is some thoughts about what kind of attractors there are in the natural networks that enable them to do this kind of learning. The second is: who's doing the knowing when you say, I know what this memory means? On the first one, I'm imagining that random networks might not have very many different attractors in them, in which case they just couldn't hold the memories. Or they might have lots of attractors, but the basin boundaries between one attractor and another might be far away from the fixed points, so that you can be in this attractor holding a state, or in that attractor holding a state, but you can't teach it anything, because you have to get it from one attractor to another in order for it to be in a different state. So you need to be able to move it from one attractor to another with training, with small perturbations, so that it falls into the other attractor but won't fall back, because otherwise it won't hold that state. The positions of the basin boundaries need to be close to the fixed points of those attractors, basically. That's what's necessary, my intuition says, for it to be possible to move it from one state to another through small perturbations and have it hold those states, in other words, to be a flip-flop. But it's more than that, because you want to be able to pick any attractor and, by nudging it repeatedly with this particular stimulus, get it to go to a specific other attractor, one where it's more comfortable with that kind of nudging. That might even suggest something like: there's a basin boundary suitable for the thing that you want to learn close to any fixed-point attractor that you might be starting from. Something like that, where the basin boundaries need to be all folded up and fractal, because otherwise they can't all be close enough both to where you started from and to where you want to be. Any thoughts, first, about what kind of attractors and attractor-basin structure a system would need to have, and whether you have any observations about whether natural networks have that and random networks don't?

[22:37] Michael Levin: We haven't done that analysis; that remains to be done, because being able to predict that kind of stuff is very valuable from the biomedical perspective. You want to know what these networks will do. We're trying to exploit that now in addressing drug habituation, which is a big, big problem in pharmaceuticals. That would be great. I don't know yet.
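
The kind of analysis being gestured at can be sketched by brute force on a small synchronous Boolean network: enumerate every state, find each state's attractor, and then check whether a one-bit nudge of an attractor state lands in a different basin, which is roughly Richard's flip-flop criterion. The 3-node update rule below is made up for illustration, not taken from any biological model.

```python
from itertools import product

# Exhaustive attractor/basin analysis of a toy 3-node synchronous Boolean
# network (the rule below is invented; substitute any update of the same
# signature). We then ask whether single-bit nudges of attractor states
# cross basin boundaries -- the "flip-flop" property discussed above.

def update(state):
    a, b, c = state
    return (b, a, (a and b) or c)

def attractor_of(state):
    """Iterate until the trajectory repeats; return the cycle as a frozenset."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = update(state)
    return frozenset(trajectory[seen[state]:])

states = list(product((False, True), repeat=3))
basin = {s: attractor_of(s) for s in states}
print(len(set(basin.values())), "attractors")

for s in states:
    if s in basin[s]:                     # s lies on an attractor
        for i in range(3):
            nudged = tuple(v if j != i else not v for j, v in enumerate(s))
            if basin[nudged] != basin[s]:
                print(s, "-> flip node", i, "-> lands in a different basin")
```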

[23:09] Chris Fields: I wonder about the idea that anything is learnable from any prior state. This is the great idea of the Jesuits, B.F. Skinner, and various other people. But it may be that learning is more local and that one can only learn certain things if one has followed certain previous trajectories. What you say about many attractors from which a local basin is accessible may be true, but not globally.

[24:00] Richard Watson: There's always going to be an inductive bias, isn't there?

[24:03] Chris Fields: That would be one way of saying that, or a prior knowledge bias.

[24:10] Richard Watson: Which is what you're saying with the thought experiment with the language too. Yes, I gave you that stimulus and now I want to read it back, but I didn't mean for you to learn that. I meant for you to learn this.

[24:22] Chris Fields: This other thing. Yeah.

[24:23] Richard Watson: If all possible states were reachable through stimulation from all other possible states, then there wouldn't be any inductive bias and nothing would be learnable.

[24:35] Chris Fields: Or anything would be learnable given enough flogging, which was Skinner's position.

[24:40] Richard Watson: But that doesn't make sense, because of the thought experiment and because all induction has an inductive bias. With respect to being able to read a state to see what the memory is, this is just to test what exactly you mean by the question. Suppose I wanted to know whether the attractor of the logistic map was period four, period eight, period two, or period three, and I could read the parameter R. What do I need to do? I can change R in a continuous way; it's just a continuous variable, and the memory can be period two, four, eight, or three, depending on where I am in R space. Does that mean that if I can read R, then I have read the memory? Or could I read R but not know what the memory is unless I run it?
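
Richard's logistic-map example can be made concrete: knowing the parameter r does not by itself tell you the attractor's period; below, the period is found the obvious brute-force way, by running the map until the orbit repeats. The r values and tolerances are illustrative choices.

```python
# Brute-force period detection for the logistic map x -> r*x*(1-x):
# reading r tells you nothing directly; you run the dynamics and watch
# for the orbit to repeat.

def logistic_period(r, transient=100_000, max_period=64, tol=1e-6):
    x = 0.5
    for _ in range(transient):            # discard the transient
        x = r * x * (1 - x)
    reference = x
    for period in range(1, max_period + 1):
        x = r * x * (1 - x)
        if abs(x - reference) < tol:
            return period
    return None                            # chaotic, or period > max_period

for r in (3.2, 3.5, 3.55, 3.83):
    print(r, logistic_period(r))           # 2, 4, 8, 3 (the period-3 window)
```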

[26:08] Michael Levin: I don't think you know what it is until you try it at the different points you want to try it at.

[26:19] Richard Watson: Yeah.

[26:20] Chris Fields: In a sense, isn't knowing what the memory is a matter of knowing from what previous attractor the system fell into this new basin?

[26:35] Richard Watson: So the question is whether there's a syntactic manipulation you can do to arrive at the result of what the period would be without running the dynamical system. Which is like saying: if I give you a program, can you tell me what the output of the program is without running the program? When I give you a lambda calculus expression and you do beta reductions on it, which are syntactic operations, you're doing a shortcut that means you don't have to run that bit. Because if you can do a beta reduction, you've found a sub-expression that is the inverse of another sub-expression and you can cancel them out; you don't have to run it.

[27:36] Chris Fields: This question suggests Rice's theorem, which says that no Turing machine can decide, for an arbitrary program, any non-trivial property of the function it computes.

[27:45] Richard Watson: If you can get different machines by parameterizing continuous variables, then you could read those continuous variables all day long and you don't know what the machine does without running it or having a simulation of it to run.

[28:10] Michael Levin: Yeah.

Richard Watson: I think that might even be theoretically watertight: you can't necessarily do that.

[28:20] Michael Levin: That's very interesting. I didn't realize that even something like this already runs into the halting problem. Something as simple as this, and we are already in the halting problem. Part of what motivates me is that it's funny: a number of people, and I don't know who the reviewers were, I don't know if they were biologists or computer scientists or what, but there was at least one reviewer who flatly claimed that it was impossible. His argument was that if there isn't some register you can get your hands on where the memory is kept, if literally, as we said in the introduction of the paper, nothing changes about the structure of the network, then where are you keeping these memories? They just thought it wasn't possible.

[29:02] Richard Watson: But that's only because he used the qualifier structure. Something is changing, but it's not the structure.

[29:08] Michael Levin: That's it. The other thought I had about this is that it's very much related to the scale you're looking at. Because if you did have a conventional flip-flop, or something that did have a register that could store one value or the other, and you were at the wrong scale and looked at all the electrons, you would say: none of these are scratched; they're all in their original mint condition. There is no memory here, because you haven't changed the hardware at all. If you go down too far, it will necessarily look like there isn't anywhere to store it. You can have it at one level and it doesn't make any sense at the level below, because none of the parts have been altered.

[30:07] Richard Watson: I wonder if you placed resolution limits on the continuous numbers, which might correspond to placing depth limits on the computations, then you don't have a halting problem anymore and you can do syntactic reductions. So it is to do with the idea that in a continuous space, you can keep zooming in and there's more information and you never know whether that information is going to matter or not.

[30:44] Chris Fields: I don't think continuity is the issue. Turing machines aren't continuous.

[30:50] Richard Watson: No, they're not. But if you.

[30:56] Chris Fields: Yeah, they have effectively infinite discrete memories.

[31:01] Richard Watson: So the only way that a continuous fixed dimensional system could do the same thing that a Turing machine does is if it keeps folding extra information into the spaces in between the numbers it's already used. Otherwise, it's going to be finite.

[31:17] Chris Fields: Right.

[31:18] Richard Watson: You won't have a halting problem.

[31:21] Chris Fields: But that can go to a discrete infinite limit as opposed to a continuous infinite limit.

[31:35] Richard Watson: I see.

[31:37] Chris Fields: Continuity isn't the issue, but it is a resource issue. You're correct about that.

[31:47] Richard Watson: So if there isn't a maximum and minimum on your values, they could be discrete but still be an infinite set, and you could have a halting problem. If there were a maximum and minimum limit, then they would have to be continuous.

[32:10] Chris Fields: No, there's still a countable number of rationals between one and zero.

[32:19] Richard Watson: The old infinities thing. Okay, yeah.

[32:22] Michael Levin: That all makes sense to me. What's the deal with our first paper? It was all Boolean networks. They weren't continuous.

[32:36] Richard Watson: So that means that there's a finite number of states they can have and a finite number of programs they can be, which could be large and effectively hard to predict, but not necessarily impossible.

[32:56] Chris Fields: If everything's finite, you can in principle enumerate. Just enumerate all the possible states and all the possible trajectories through the state space. But solving that enumeration problem is exponential, so it's not feasible.
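
A back-of-the-envelope illustration of Chris's point: the enumeration is possible in principle, but the number of states in the full transition table doubles with every node, so it stops being feasible very quickly.

```python
# "In principle enumerable, in practice exponential": the state space of an
# n-node Boolean network has 2**n states, each of which a brute-force
# enumeration of trajectories would have to visit.

for n in (10, 20, 40, 80, 160):
    print(f"{n:4d} nodes -> {2**n:.3e} states")
#   10 nodes -> 1.024e+03 states
#   20 nodes -> 1.049e+06 states
#   40 nodes -> 1.100e+12 states
#   80 nodes -> 1.209e+24 states
#  160 nodes -> 1.462e+48 states
```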

[33:24] Richard Watson: And besides, Mike would count that as you poked it and examined all the things that it did.

[33:33] Michael Levin: Yeah, that's sounds like.

[33:34] Chris Fields: If it's a finite number of parts and a finite number of states, then in principle you can enumerate all of the things that it could do. But the problem is still intractable.

[33:48] Michael Levin: Imagine the same Boolean network. Here it is when we started. Here it is once we've trained it. The same finite number of states you can enumerate, but how do you tell the difference between them? You would still have to do what you said.

[34:08] Richard Watson: You're allowed to read which state it's in, right?

[34:11] Chris Fields: You're starting from different states. When you probe it, it's got this gigantic Boolean state space. And different parts of that state space respond very differently to bit flip perturbations of part of the network.

[34:31] Michael Levin: What you have there is a surrogate of the real thing, because you've got all the possible things it could ever do. You've got this exploded surrogate model of it, but you don't have to perturb it. You can just read it once you know which state it's in. Is that true?

[34:49] Chris Fields: You could probe it with some computer: select some initial state out of these two-to-the-however-many states, and then look at all the paths of length two or three or four from that state, which would explore a little local neighborhood of the search space.
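
Chris's local probe can be sketched as follows: from one chosen state, apply each single-node perturbation and follow the resulting trajectory for a few steps, which explores only a tiny neighborhood of the 2**n-state space. The update rule is the same invented 3-node toy used above, standing in for a real network.

```python
# Local exploration only: perturb one node at a time and follow each
# trajectory a few steps. 'update' is the invented toy rule from above.

def update(state):
    a, b, c = state
    return (b, a, (a and b) or c)

def local_probe(state, depth=3):
    """Trajectories of length `depth` from every one-bit perturbation of `state`."""
    paths = []
    for i in range(len(state)):
        s = tuple(v if j != i else not v for j, v in enumerate(state))
        path = [s]
        for _ in range(depth):
            s = update(s)
            path.append(s)
        paths.append((i, path))
    return paths

for node, path in local_probe((False, True, False)):
    print("perturb node", node, ":", path)
```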

[35:16] Michael Levin: I'm trying to decide if that counts as cheating or not in my exam.

[35:23] Chris Fields: I see.

[35:24] Richard Watson: Yeah.

[35:25] Chris Fields: You couldn't extract semantics from that unless you knew what perturbations were used to get into that little local part of the state space. If you think of the semantics in this historical sense, what did the system learn?

[35:47] Richard Watson: Even if you could say what it does, you couldn't say the meaning of what it does. The meaning of what it does is given by the history which caused it to do that.

[35:58] Michael Levin: But isn't that interesting? I've been shocked by this for weeks now, this idea that even something that simple has an internal perspective that cannot be read out equally well without becoming an interactive partner with it in some way.

[36:19] Richard Watson: In your conversation earlier, what was hurting me was this idea of how come I know what my memories mean, but it's more difficult to read somebody else's memories. Well, if I had had a lot of exposure to somebody else's memories, then perhaps I could disambiguate what their internal model was through the impoverished observations which I make over a long period of time. But is that what I'm doing with my own memory? Am I internalizing a model of my own internal memory by experiencing it over time? If so, where am I putting that internal model? That's not what I'm doing, right? It is the model.

[36:59] Michael Levin: Yes, I think Dan Dennett would say that whole way of phrasing the question is too dualistic or whatever: you're not reading your brain, you are your brain. I understand, but you can imagine clinical cases. What you see when people are coming out of anesthesia is basically a small version of it: if it weren't the case that it was so easy for us to interpret our engrams correctly, every 300 milliseconds we would lose track of what the hell was going on. And then we would be as confused as modern neuroscientists are when they try to read a brain. They can tell a few things, but it's terrible. You could easily imagine what it would look like for us to be bad at this. I do think there's a difference there. When people come out of anesthesia, the general anesthesia has decoupled all the gap junctions. The network has to reform and find its way back. For the first couple of hours, they think they're pirates and gangsters; they're trying to make sense of things. You can watch people come to; it's very humorous. This is why they don't like to use general anesthesia when they don't need to: some people have psychotic breaks from it. It's rare, but it happens that they just never come back to the right place. Presumably the materials are there. You haven't killed any cells. You haven't destroyed anything; Glanzman thinks it's RNAs, and you haven't taken out any RNAs. It's the dynamical thing. And if you don't know how to interpret your own memories, it looks pretty bad for a while until you figure it out. I do think there's a way that it wouldn't have to be this way. Dennett's argument would be: how else could it be? It has to be this way, because you're just a big dynamical system and it is what it is.

[38:54] Richard Watson: It seems fine to me that you might not recover the same dynamical state, but it doesn't seem fine to me when you say, "if you don't know how to interpret those memories," that language seems bothersome.

[39:08] Michael Levin: Maybe we could eliminate all of that talk except for memory transfer experiments, right? Something like what Glanzman does in Aplysia, or what people did in planaria, where you take out some structure and shove it into a new brain; now we're back to having a physical register for things. The Aplysia work is amazing: he just injects the stuff into the brain, into the space in between the neurons. They didn't even put it in any specific place; it's taken up and interpreted in the correct way, which is the same way that the training happened. Much like with the planaria experiments, the recipient isn't confused; they have the link between the stimulus and the fear conditioning. There is some aspect.

[40:10] Richard Watson: You can't separate the meaning of the symbol from the symbol. If you're going to transfer that bit of stuff, it's going to mean the same thing wherever it goes. It's binding; the grounding between the symbol and the outside world is intrinsic to it. It's not interpreted by an external observer.

[40:39] Chris Fields: Well, this is a case where, in a sense, humans are more vulnerable. I'll take the example of coming out of anesthesia. One of the things that has to happen is that the language production system that allows you to report what's going on to the doctor has to recouple correctly to the whole-body representation, as well as to all the bodily event memories and things like that. Just think of the body representation. If something goes wrong in that recoupling, you may well get the wrong words assigned to emotions or things like that. Those systems are sparsely coupled; they're not deeply coupled biochemically. Whereas in the Aplysia case, I think, it seems like a much deeper kind of coupling than we have between our language systems and the rest of the brain.

[42:01] Richard Watson: Suppose that different parts of my body had structures that caused dynamical activation patterns to resonate in a particular way. Inside my brain, a resonance that's created in the electrical activation patterns between the neurons resonates to whatever is going on in that part of the body. If it does that in the brain for a while, it will create structural connections in the brain which make those resonant dynamical patterns in the brain easier. Then the structure that was the physiological part of the body and the structure that's inside the brain are really little models of each other in so much as they create the same resonance and, if both switched on again, would couple up in the same resonant connection, which is like saying, the thing that was in the brain wasn't an arbitrary symbol. The thing that was in the brain was a model of the thing that it means. It can be in a different substrate, and the connections can be made of different materials, but it still is a dynamical model. It's a deeply causal, dynamical model of the same thing. And its binding to the thing that it means comes from that analog relationship between them.

[43:55] Chris Fields: That way of thinking would suggest that reconnecting the somatosensory homunculus is a lot more reliable than reconnecting the language system or naming body parts.

[44:14] Michael Levin: Chris, that's a super interesting idea for experiments, because it suggests that if we were to give people sensorimotor augmentation, new hands and extra thumbs, all this kind of prosthetic stuff that people get now, doing that during and after anesthesia might be really interesting. Because then you're coming back to not quite the same body: you've got extra effectors and you've got to remap, and how fast does that work? That connects to this issue: if Richard is right, you ought to be able to move these things across widely divergent body implementations. We've thought about that in terms of looking at memories in things like xenobots versus the frog they came from versus whatever else we can make the xenobots into. To the extent that these things carry their meaning internally, what else can you put them into? Somebody in the 90s did these crazy experiments putting Drosophila neurons into human patients, I think Parkinson's patients or something. So the question is how much can you get from a fly brain into a human brain, and with all this synthetic stuff, we can do some of that now. We can do those experiments and ask how much carries over. But it's amazing to me. People ask about this spectrum of persuadability: how far down does it go? It's amazing that all of these issues already kick up by the time you have three or four nodes in a GRN; you're already knee-deep in all of this stuff. It doesn't take much at all. It's so early that you already get into this.

[46:20] Chris Fields: That's a very cool observation. Yeah.

[46:27] Richard Watson: I am very intrigued by these thoughts and the possibility of the ability to move between one attractor and another in a nested or hierarchical way so that an association can be made that might be shallow or an association could be made that might be deeper, that shares more history, shares more experience, that creates deeper resonances. And the possibility that what that would look like — a way to tell the difference between a biological network and a random network — would be to do with how fractal the attractors are and how harmonic the dynamics are. In random networks, there wasn't anything very harmonic about the dynamics, and that's why you couldn't push them from one to another.

[47:37] Michael Levin: I think there's a bunch of work that could be done on analyzing these things. Very similar to what people do with connectomics in the brain for neural decoding and things like that. And there's a thing I hadn't thought of before. Can we move memories between GRNs? What would you have to do? What does a memory transplant look like from a network that got trained?

[48:15] Richard Watson: Isn't that the embedding theorem? If you have a continuous dynamical system in one network, and you can read at least one of the variables on that network and connect it to another network, which has suitably high internal dimensionality, you can induce the dynamics from one network to the other.
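
The theorem Richard is invoking here is usually credited to Takens: lagged copies of a single observed variable can reconstruct the underlying state. Here is a hedged toy check, using the Hénon map, where the hidden variable happens to be exactly a scaled lag of the observed one, so the reconstruction can be verified directly.

```python
# Delay-embedding sketch (a toy, not a proof of Takens' theorem): observe
# only x from the Henon map; the hidden state is exactly recoverable from
# a single lag, because y_n = B * x_{n-1} by construction.

A, B = 1.4, 0.3

def henon(x, y, steps):
    series = []
    for _ in range(steps):
        x, y = 1 - A * x * x + y, B * x
        series.append((x, y))
    return series

full = henon(0.1, 0.1, 2000)[100:]        # drop the transient
xs = [x for x, _ in full]                 # the single observed variable

# Delay vectors (x_n, x_{n-1}) reconstruct the hidden variable exactly:
max_err = max(abs(full[n][1] - B * xs[n - 1]) for n in range(1, len(full)))
print(max_err)                            # 0.0: y recovered from the lagged read
```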

[48:44] Chris Fields: The question would be, what does the same dynamics mean when moved to a different network?

[48:50] Michael Levin: We can start with a cancer network and then move that into some sort of plant metabolism network that's in the database and ask how it gets interpreted. We may or may not want to use the word "interpreted", but in a different context there is some version of asking that question: what does this now mean?

[49:18] Richard Watson: It ought to mean the same as if you've been exposed to some training and then I've been exposed to you; what it means to me ought to mean the same as if I'd been exposed to that training.

[49:37] Michael Levin: So that.

[49:38] Chris Fields: That's assuming a lot of shared semantics and communication, though. I think this scenario is very similar to what you see with side effects of drugs. The drug is moving some network in some target cell into a new state. But that same drug may move the same or a fairly similar network in some non-target cell into a new state. And that manifests as a side effect that's undesirable.

[50:18] Richard Watson: Messing.

[50:19] Chris Fields: Constructing the memory in one case leads you to one semantics. Doing the same thing in a closely related case leads to a completely different semantics because it's embedded in a different context.

[50:38] Richard Watson: But the drug wasn't moving the memory from one place to another. It was just moving a chemical from one place to another. And there wasn't any good reason to believe that the effect of the chemical in one system would be the same as the effect of the chemical in the other system.

[50:52] Chris Fields: If the memory is encoded by some part of the network state, you may be reproducing that network state in the different network. If that network state is the engram in this case, then you're moving the memory or you're reproducing the memory.

[51:11] Richard Watson: But if you were reproducing the memory, then the drug would work.

[51:16] Chris Fields: Not necessarily, because that memory may be pathological in a different cellular context.

[51:26] Richard Watson: The drug did have the effect that I wanted it to have, but in this patient, that effect wasn't the one that they wanted.

[51:39] Michael Levin: Which really suggests something interesting for the therapeutics. Richard, what you were saying before is that if I'm exposed to something and I learn it, and then you're exposed to me, that works because there's this communication interface between us, which may be linguistic or may be something else. That suggests that in order to move these memories, what you need is the equivalent of a communication interface to make it make sense. I take it from this context that I can't just plop it down exactly as it is. If I've been exposed to something, me showing you my brain states isn't nearly as good as going through this interface.

[52:27] Richard Watson: It's almost as though you have to be replaying it for me, that the interface has to induce the same dynamics in me that it did in you, which suggests that the communication channel can't be only symbolic unless you presuppose that we have a shared inductive bias. Otherwise, it needs to be non-symbolic, a resonance, an imprint that's an analogical transfer. But you have to replay all the experiences you had in the same dimensionality in which you observe them so that I can just learn from them as you did.

[53:27] Chris Fields: This reminds me of a problem that I'm currently working on, which is general descriptions of communication protocols that involve two different kinds of resources. In this case, it's a quantum resource and a classical resource. And the bottom line is you need two channels to have effective communication. If you have two systems and they're manipulating some shared resource, they have to be able to talk about what they're doing to each other to coordinate their manipulations so that my manipulation of the resources is interpretable to you. If you assume that they share the same language, then you don't have this problem. But the shared language is another communication channel. It's just historic, not real time. The acquisition of the shared language is a historic shared communication channel.

[54:46] Richard Watson: If Mike learned something by watching a video and then wanted me to know what he had learned, one way he could do that is just by being a video camera that recorded what he looked at and then playing it to me. I could just learn it from that. It wouldn't require any interpretation, and it wouldn't require me to know anything about Mike's internal state. But if he's going to communicate it to me in a compressed way, now I need to have some knowledge about his internal architectural structures in order for that to make sense to me in the same way that it made sense to him, which I could get by knowing something about his history. That would be like saying: if we both watch this video, and then Mike watches another one and tells me about it in the language of the first video, then I know what he means insofar as it was interpreted through the history of the first video.

[55:51] Chris Fields: If the two of you watch the same video and your conceptual scheme is completely scrambled with respect to Mike's, then you won't get the same message out of the video. If you have entirely different notions of what counts as an object, for example, you're not going to get the same information from the video.

[56:17] Michael Levin: Just thinking about that in real world cases, I've certainly watched things with someone and realized that we got completely different things out of what we just saw. In fact, doing what you just said, being the camera wouldn't have worked at all. If we want to get the shared thing, I have to do all sorts of manipulations of the data to make sure that we actually got the same thing out of a video.

[56:45] Richard Watson: Yeah.

