Show Notes
Conversation between Douglas Brash, Chris Fields, and Michael Levin
Douglas Brash - https://medicine.yale.edu/profile/douglas-brash/
Chris Fields - https://chrisfieldsresearch.com/
CHAPTERS:
(00:00) Introductions and scientific backgrounds
(06:54) Cell competency model
(16:23) Directionality and intelligent evolution
(27:16) Goals, attractors, landscapes
(37:24) Selfhood, hacking, individuality
(52:01) Objects, measurement, identity
(59:22) Representations, language, AI
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Michael Levin: Super happy to be able to have you guys both on because I think we have some really interesting stuff to talk about. I figured if you haven't met before, each person take a couple minutes and go over your background and interests.
[00:19] Douglas Brash: My background for the present purpose is going to be different from what I tell everybody else, but it's relevant here. I got interested in science when I was a kid because I was interested in time and what it is, besides just a letter t that you let get bigger. I majored in physics, and halfway through college I stumbled into Heinz von Foerster and eventually became convinced that this was more of a biological or cognitive problem than a pure physics problem. I minored in physiological psychology and switched into biophysics. I went to Ohio State to do theoretical biophysics with Karl Kornacker. If you go back and look at Waddington's theoretical biology books, you'll see he's in there as a young man. After a while I became convinced that you really had to do experiments at some point. I noticed that the molecular biologists, when they were done, knew what they had done, whereas the neurophysiologists were still making up stories about what might have happened when you stuck electrodes into things. I wound up doing molecular biology. One of the things that happens is that once you become an expert in something, it's very hard to convince anybody to let you do something else. These other things I've continued to be interested in, but more as a spare-time project. With regard to the cognitive stuff, I continue to work on that. We have a way of parsing language that essentially uses semantics as the syntax. Linguistics in particular is a vicious field, but it works. I think there are implications for how you do cognition. This particular conversation goes back more to the molecular biology stuff, where what I wound up doing is working out how sunlight causes skin cancer. One of the things we're finding out now is that the number of mutations you have in your skin is totally outrageous. The question is why the skin is working at all: 100,000 mutations per cell. That's what started this conversation with Mike. Mike sent me some of his papers on organizing things from the top down, which I appreciate because I'd already read Pattee and so forth. Karl Kornacker is the guy who first pointed me to the ion channels in development, which is why, when Mike called me up 12 years ago, I immediately understood what he was doing.
[03:49] Michael Levin: Oh, Jaffe, Lionel Jaffe.
[03:51] Douglas Brash: Lionel Jaffe, right. It sounds like you guys have overlapping interests on the biology side that could carry over into experiments with cancer, demonstrating genetic assimilation kinds of phenomena in cancer. One of your proposals for that looks a lot to me like a physically embodied neural net, although I haven't had time to see how literally one could take it. Mike sent me some of your papers, Chris, which I wish I'd read years ago; some of them I'm going to have to read twice and I have some questions for you. It looks like the three of us believe the same things are possible. It looks to me, having seen all this stuff at the same time now, that it may be possible to build from the bottom up in a consistent way that achieves all of these things and do some experiments to back them up. I could give you more details later if you're interested, like the language stuff, but I don't want to hog up all the time. So Chris.
[05:17] Chris Fields: I also started in physics and ended up in molecular biology, but with a turn in philosophy and then in AI in the middle. And then I departed from science altogether for several years before coming back and taking on some of these much broader kinds of issues at the borders between physics and cognitive science and biology, which seems like a very comfortable and interesting place to be right now. It seems like a lot is happening and there's a lot of room for putting these different disciplines together in a productive way.
[06:09] Douglas Brash: Do you find that people are listening to each other or is it all siloed off?
[06:15] Chris Fields: I think the bulk of academia is still very siloed. But I think there are increasing numbers of cracks in that siloization. For example, I think Carl Friston's done a huge service in breaking down some of the barriers between neuroscience and physics and biology in general with his work and accumulated a highly multidisciplinary group around him.
[06:48] Michael Levin: Start off on that and we'll talk about it.
[06:54] Douglas Brash: Where would you like to start? At the beginning or at the end and work backwards. You guys have already been talking about the embryo kinds of stuff. What I could do is ask a question to make sure I'm understanding things. There's this issue about competency, which I now understand means swapping cells around positionally in space to see whether you get a better phenotypic fitness. One thing I didn't understand from the paper is whether that swapping is random.
[07:41] Michael Levin: You're talking specifically about the paper that my student Lakshwin Shreesha just put out. What we were trying to do there is determine what it does to evolution when the components are not purely passive. We have a very simple one-dimensional model: a one-dimensional axis with a bunch of cells along that axis that each have preferred positional information. You can either be completely passive and let the standard evolutionary algorithm sort them, and eventually it sorts them. Or you can put in a developmental phase between the genetics and the phenotype, where the cells have tiny preferences about who their neighbors are. The moving around is not random. Every cell has a little ability to sense who its neighbors are and to be happy or unhappy with them relative to the positional information. If I'm a five and my neighbor is a nine, I know I've got a problem. The nine knows he has a problem too. They have variable amounts of competency to recognize the problem and try to rectify it during that developmental phase. Now the question is: when you're dealing with a material that doesn't just sit where the genome puts it but actually has the ability to sort itself out, what happens to evolution? Long story short, what we see is that once you have an individual that's a little bit sorted, selection has a hard time knowing whether you're sorted because your genome was amazing or whether the genome was so-so but your competency was good and you sorted yourself out. It ends up not being able to see the best genomes, but rather doing more work on the competency gene, which then makes the problem worse. That hides more information from selection, and that ratchets up. Eventually you get to something like a planarian, where the genomes are basically junk and the algorithm is so good that it almost doesn't matter what the genome is. It will sort itself out in the right way, which is what we see in planaria. Other organisms do some of that, but not as well as planaria. There are clearly some factors that prevent this from ratcheting all the way in every lineage. This question was what implications the competency of these things has for evolution. Can we explain this amazing fact about planaria? Planaria have the most disorderly genome, yet they're perfectly regenerative: no cancer, no aging, no mutants. There's no such thing as a planarian mutant line the way you have with Drosophila, mouse, and C. elegans. The only abnormal line of planaria you will ever see is our two-headed line, and those are not genetic; they're not made by any genetic change. Why is it that you can't get a strain of weird-looking planaria? I think it's because they've learned to basically ignore what much of the genome is doing, because they have to. By not going through an egg phase, they just accumulate so much junk.
[11:11] Douglas Brash: In your model then, if you're swapping cells around and the cell knows that it's a six and it's next to a nine, there's a level of selection already — selection is probably not the right word — a level of recognition of a problem just at the cell level.
[11:38] Michael Levin: The competency is I'm not a passive Lego block. I have the ability to look to see who my neighbor is. I have a local preference about how much, what kind of different neighbor am I willing to tolerate? If my neighbor is super different from me, then I'm not happy. I want a neighbor that's pretty close to me.
[11:58] Douglas Brash: It's perception, plus a goal — what you want — and an ability to move. Is the place you move to dictated by the discrepancy between the six and the nine, or is it a random movement?
[12:17] Michael Levin: It's not random. You try to go in the direction that you think you should be going. But you don't have a God's-eye view of where to go and when to stop. You only have a limited ability to take a couple steps in the right direction and that's it. And you can crank that up either by us, the experimenter, or you can let that be evolvable. We did that too. There's a level of competency, how much look ahead do you want? How much crawling ability do you want and so on.
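To make the local rule concrete, here is a minimal sketch — not the published model — of the mechanism just described: cells on a one-dimensional axis each carry a target position, and during a developmental phase each cell may swap with a neighbor when that locally reduces positional error, up to a limited number of competency steps. All names and numbers are illustrative.

```python
# Minimal sketch of the competency idea (illustrative, not the paper's code):
# cells hold target indices in genome order; development lets them do a
# bounded number of purely local, error-reducing neighbor swaps.
import random

def develop(cells, competency_steps):
    """One pass per competency step: adjacent cells swap whenever the swap
    reduces their combined distance to their target positions."""
    cells = list(cells)
    for _ in range(competency_steps):
        improved = False
        for i in range(len(cells) - 1):
            a, b = cells[i], cells[i + 1]
            if abs(a - i) + abs(b - (i + 1)) > abs(b - i) + abs(a - (i + 1)):
                cells[i], cells[i + 1] = b, a
                improved = True
        if not improved:
            break
    return cells

def fitness(phenotype):
    """Selection only ever sees the post-developmental arrangement."""
    return -sum(abs(c - i) for i, c in enumerate(phenotype))

genome = random.sample(range(10), 10)                  # a scrambled genome
print(fitness(genome))                                 # raw genomic quality
print(fitness(develop(genome, competency_steps=5)))    # what selection sees
```

The point of the sketch is the last two lines: selection scores the developed phenotype, so a mediocre genome with enough competency looks as good as a well-ordered one — the information-hiding effect described above.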
[12:52] Douglas Brash: Not selection, but it's local anyway.
[13:03] Michael Levin: In this case, it's purely local. One of the things we're going to do next is implement this kind of stress mechanism. I have this other wacky idea that one way to coalesce individual cells into common purpose is to let them share stress. Because if I'm a cell and I'm really unhappy about where I am, and you, my neighbor, are perfectly happy where you are, you're not going to let me by. And I'm not going to get to move where I need to go because you're fixed in place. Why should you move? But if I am stressed and I can let that stress leak out and start to stress you out, then you become a little more plastic too, because you are getting the same stress molecule that I have. You don't know that it's not you being stressed. As far as you're concerned, now you're stressed. So you're a little more plastic. And now I can get by. By my problem becoming your problem, it sort of binds everybody to a common purpose. The temperature goes up, and everybody gets a little more plastic. And then when I finally get to where I'm going, everybody's stress can drop because I'll stop stressing everybody out. So now what Lakshwin is going to do is put in a mechanism where basically evolution gets to say: is the stress leaky? And if so, how leaky? And we'll find out whether you can actually use this stress mechanism, and we're doing that experimentally too. Experimentally, if you make the eye of the tadpole off to the side where it's not supposed to be, it'll eventually move to where it needs to go and everything rearranges. We need to track the stress markers during this process. Is there tissue-level stress when the cells are not dying, they're not poisoned, DNA is not broken, no heat, none of that, but the eye is in the wrong place? Is that stressful at the tissue level? We're going to find out. We have some preliminary data already, but we'll find out.
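A hedged sketch of the stress-sharing extension described here (the leak parameter and threshold are invented for illustration): each cell's stress is its own positional error plus whatever leaks in from its neighbors, and a cell's willingness to move scales with its local stress, so a perfectly placed cell can still be made plastic by an unhappy neighbor.

```python
# Illustrative sketch of leaky stress (parameters are invented):
# stress = own positional error + a leaked fraction of the neighbors' error.
def local_stress(cells, leak):
    own = [abs(c - i) for i, c in enumerate(cells)]
    return [own[i]
            + leak * (own[i - 1] if i > 0 else 0)
            + leak * (own[i + 1] if i < len(own) - 1 else 0)
            for i in range(len(own))]

def willing_to_move(stress, threshold=1.0):
    # A happy cell (stress 0) blocks its neighbors; once leaked stress pushes
    # it over threshold, it becomes plastic even if it is correctly placed.
    return stress > threshold

cells = [0, 2, 1, 3]                       # cell 0 and cell 3 are in place
print(local_stress(cells, leak=0.0))       # [0, 1, 1, 0]: the ends won't move
print(local_stress(cells, leak=0.5))       # the ends now feel their neighbors
```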
[15:03] Chris Fields: I think this could perhaps be formulated as saying, Not only does competency involve this ability to perceive and act in a certain limited way, but it also involves the very particular perception-action loop that we call communication. So spreading the stress around is just a special case of communicating something to the neighbors. They can then confirm, perhaps by using their communicative competencies as well. I'm very reminded, in looking at this experiment, Mike, of human evolution working on our cognitive competence as opposed to turning us into creatures that can run faster or lift bigger weights or fight better or have bigger teeth. Evolution didn't do those things. It worked on learning ability and general cognitive ability instead.
[16:23] Douglas Brash: That's a nice example. In my notes there's this thing about r-selection and K-selection that just occurred to me this afternoon. It's not exactly right, but I think you have the same dichotomy here: selecting for a genome or selecting for a phenome. You're saying, Chris, the brain did it one way, and you got this rapid evolution. There was a guy at Berkeley who was pushing that idea 30 years ago, and he died prematurely. What was his name? I've forgotten — Alan? Do you know this story? I'll dig it up. The idea was that this could go faster than genetic evolution. I guess humans are doing it one way and the planaria are doing it another way, and the brain has decided to do it the planaria way. The competency in your computer experiment, Mike: you're swapping positions of cells around, but it could just as well be differentiation states of a cell, right? You just do some other differentiation. That could well be what a tumor is doing. You've got a society of cells that's doing that evolution. And that now gets you into this other issue: is there a direction to all this?
[18:15] Michael Levin: It seems to me to be a ratchet. I think it does have direction, because once you start, it's very hard to go back. Once you have a little bit of competency, selection has trouble seeing and judging the genomes, and so you keep going; it's very hard to reverse. Steve Frank gave me this amazing example with RAID arrays in computers. For disks, you have these arrays of drives, and basically there's a parity system: if there's an error on one of the drives, you correct it, because you have copies of the data elsewhere. What he was pointing out is that once these RAID arrays became popular, the quality of hard-drive media went down, because it was no longer that important for the media to be good: you fix it all in software. Now you're trapped, because if the quality of your disk is crap, you can't do away with the RAID anymore. It's not going to work. It's a one-way ticket, but clearly some species flatten out. Planaria went all the way with this thing; we can do some of it, and certainly the salamanders can do more, but planaria just took it all the way. So it does seem like this has a direction to me. To me, this is a direction, an arrow for intelligence, baked into this whole thing.
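Steve Frank's RAID analogy in miniature, as a hedged illustration (the block values are arbitrary): XOR parity lets an array rebuild any single failed block from the survivors, so degraded media quality becomes invisible to the user, just as developmental competency hides mediocre genomes from selection.

```python
# XOR parity as used in RAID-style arrays: any one lost block can be
# reconstructed, so errors on individual drives never reach the user.
from functools import reduce

data_blocks = [0b1010, 0b0110, 0b1100]
parity = reduce(lambda a, b: a ^ b, data_blocks)   # stored on a parity drive

# drive 1 fails; rebuild its block from the surviving blocks plus parity
rebuilt = parity ^ data_blocks[0] ^ data_blocks[2]
assert rebuilt == data_blocks[1]
print(f"rebuilt block: {rebuilt:04b}")             # 0110, good as new
```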
[19:39] Douglas Brash: Long ago, in conversations with Kornacker back when I was a grad student, it occurred to me that you could describe most of evolution this way, particularly if you look at the nervous system. People are always asking: is there a direction? What does "higher" mean? It struck me that what it is is separation of functions. You have one part of the brain that used to do two things badly; now, if you go from chimps to humans, you have two parts and they each do one thing better. Enzymes are the same way. You could pick a direction. Then all you have to do is say that one or the other of these schemes is going to have the net phenotypic result of giving you separation of functions. Separation of functions allows you to control them independently. It looks like intelligence. The classic example for me is the red nucleus. There's a pars magnocellularis and a pars parvocellularis, big cells and little cells. In monkeys, there's just one nucleus. In humans, you have a center, which is, I think, the parvocellularis, surrounded by the larger cells, the magnocellularis. So there's now an anatomical distinction, and they project to two different places — or they go to the same places they did, but they're separated and have separate controls. Once you start looking at the nervous system, you see all kinds of things like that. Enzyme evolution is the same way. What you were saying about the stress response looks like it goes back to Waddington, Schmalhausen, and Baldwin. I noticed today when I was rereading that paper that what you're saying is you're going to do all these swaps before you select. That gives you a chance to do stuff before you can get killed by the selection pressure. That's essentially what Waddington's environmental inducibility is doing: if I'm going to develop calluses on my ostriches but it's only inducible and there's a bad side effect to it, no problem. It's inducible; it's not on all the time. I get away with it until I make a perfect callus by mutating three other genes. The same argument holds even better for cancer, where you don't have a million years. You have to do it in six months.
[22:48] Michael Levin: I think you're right; that capacitor function is definitely there. Take, let's say, that eye in the tadpole, or the mouth: you make a mutation and it has a positive effect somewhere but also puts the mouth off a little bit. That mouth is going to get back to where it needs to go before the embryo has to eat. That means that whereas before you would have had a deleterious mutation that wrecked the whole thing, and you would have never seen all the positive effects, now it's buffered. A lot of these mutations become neutral where they would have been negative before. You get to explore whatever else this thing is doing, because the mouth will find where it needs to go. Competency at the tissue level smooths the search space and makes it much nicer, because all the things that otherwise would have killed you, now you have the ability to tolerate them and maybe use them later on. It lets you carry all this stuff at the genetic level. It isolates you from mistakes at the genetic level, because if your cells are slightly the wrong size, your kidney tubules will still figure out how to be the right diameter. If your mouth is off, it'll come back.
[24:01] Douglas Brash: That's the answer to Behe's intelligent design argument.
[24:07] Michael Levin: This is something that's interesting. I've been talking about this with Richard Watson, that we now know from computer science and other things that you can have an intelligent system that is made of less intelligent parts. You don't need everything to be intelligent. To me, the thing about evolution now is two things. One is whatever intelligence it has is provided by the fact that you're dealing with an agential material, you're not dealing with passive Lego blocks. Evolution is searching: all these cells used to be independent organisms, so they have their own agendas. What evolution is really searching is this space of behavior-shaping signals so that the cells can get each other to do various things. That's where the intelligence comes from. Mutations are random. Nobody has to keep track of the whole thing. There is a degree of intelligence to the process because the parts are able to make up for errors. And it doesn't tell you where you're going to end up necessarily, but it does tell you that the process isn't as blind and stupid as it's made out to be. Nor does it need to be superhuman-level intelligence. It just has a little bit of smarts.
[25:26] Chris Fields: I think an aspect of this is that the parts don't need to be intelligent in the same environment that the whole system is intelligent in. In fact, they can't be. They need to be intelligent in their own environment, which for cells is an environment of other cells, some of which are similar and some of which aren't. You can expect the kinds of social relationships that we have, modulo their communicative strategies and abilities. They might use the same kinds of tools of social communication that we do. But the environment itself that they're communicating about is very different.
[26:22] Michael Levin: Yeah. The problem?
[26:24] Douglas Brash: Oh, go ahead.
[26:25] Michael Levin: The problem space is different. We have another paper, a preprint out of this: my postdoc Leo has the system where you've got cells and the cells are operating in metabolic space. They're little homeostats and all they know how to do is try to get more food, but the collective as a whole makes a French flag developmental pattern. You see how you shift problem spaces. The cells have these little tiny local goals, and they're competent in that. They don't know anything about French flags or whatnot, but the collective is able to do that. Chris and I had that paper on navigating different spaces. Evolution kind of pivots these tricks from one space into another by basically the same kind of scale-free dynamics.
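The French flag pattern mentioned here, in its classic minimal form (this is Wolpert's textbook thresholding illustration, not the metabolic-homeostat preprint itself): each cell reads only its local morphogen level and applies two thresholds; no cell knows anything about flags, yet the population forms three bands.

```python
# Classic French flag thresholding (illustrative): local readings of a
# gradient, with no global knowledge, yield a three-band collective pattern.
def french_flag(n_cells=12, hi=2/3, lo=1/3):
    bands = []
    for i in range(n_cells):
        level = 1 - i / (n_cells - 1)   # a linear morphogen gradient
        bands.append("blue" if level > hi else "white" if level > lo else "red")
    return bands

print(french_flag())   # four 'blue', four 'white', four 'red' cells
```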
[27:16] Douglas Brash: Do either of you have an opinion as to whether there is some larger concerted behavior, whether that's because of some goal and set point and cybernetics, or whether it's an attractor that doesn't know where it's heading.
[27:43] Michael Levin: Chris, you want to try first?
[27:48] Chris Fields: I'm not sure. I don't see the distinction between those two alternatives.
[27:53] Douglas Brash: I hadn't thought about it until recently. I think it's because I don't understand attractors well enough. But for a cybernetic system, you have a set point, which is stored someplace. You have an input signal, and you have somebody comparing the two, and then telling somebody else which direction to change to reduce the difference. So you have a goal, which is arguably some representation of some future state sitting there, and some computational apparatus. And then, as I understand it, attractors are like a gravitational hole, and you just get sucked there.
[28:46] Chris Fields: In a sense, you can view the attractor as a representation of a goal state.
[28:57] Douglas Brash: Okay, yeah.
[28:58] Chris Fields: Each particle or agent that's moving within the field of the attractor has to make a local measurement about whether it's going up or down. It tells itself what to do based upon where it finds itself.
[29:26] Douglas Brash: There is a comparison, but it's to the local state and the attractor is set up as a gradient.
[29:36] Chris Fields: Right.
[29:37] Douglas Brash: Okay.
[29:39] Michael Levin: I'd like to look at it slightly differently. Somebody had this amazing phrase: there's a continuum between two magnets trying to get together and Romeo and Juliet trying to get together. Along that continuum, how much cleverness or competency do you expect? The system Chris just laid out — in systems where that's all there is, that's kind of the magnet case. If you put a barrier there, that's it: they're going to stay there, pressed up against the barrier. They're never going to go around. They're never going to do anything clever. That's all they know how to do. That's on the left of the continuum. All the way on the right are some super smart systems that not only see the gradient but have delayed gratification; they can avoid being trapped in local minima; they may have some planning and some metacognition. In the middle, you have all kinds of simpler systems that have some of those capacities. Maybe all you're doing is following a local gradient. We've been finding all kinds of examples — some of them extremely simple things, like very simple algorithms — that, when you play them out, have some capacity to resist perturbations. A lot of people will look at a system and immediately have a built-in feeling: that's just dumb chemistry. I don't think you can tell from purely observational data. You have to start probing it by getting in its way and seeing what it does. How much can you get this thing to resolve the problems that you're giving it? Does it always do the local gradient thing, or can it do something a little more clever and get around? There's a wide variety. In biology, one thing you can do is start everything off in the wrong position. With metamorphosis in the tadpole, it used to be thought that all the organs only know how to go in the right direction for the right amount of time, and that's it. We scrambled them all into different positions, and we found out they will keep going along weird paths to get to the right place. So it can't be as simple as all they know how to do is go in the right direction. They're context sensitive. There are many systems like that: they seem to only do the simple thing, but when you interfere, you find out they have all kinds of capacities and are further along than you'd think on that spectrum.
[32:40] Douglas Brash: That's a nice analogy. The difference between the two is that the space between where you are now and the actual end point, the attractor, that intervening space is also already programmed or deformed. Whereas in the Romeo and Juliet case, it's not. You have alternatives.
[33:07] Michael Levin: You have different capacities to navigate that space, different strategies. People who build autonomous vehicles think about this stuff all the time. You want to get from here to there, but you're not just going to go as the crow flies until you hit a barrier. You've got all sorts of tricks up your sleeve involving both local and global information. One in particular that we use a lot is what I call delayed gratification. It just means: are you able to temporarily get further from your goal in order to do better later? Some systems can't. I've got a great picture I took of two dogs on either side of a fence, trying to get at each other. There's a hole in the fence five feet down. But to use it, you have to move away from where you want to go. They're just stuck there, and maybe they'll figure it out eventually or maybe they won't. That ability to get further away is, I think, pretty critical. It's a basic capacity not to get trapped in a local optimum. And then there are all kinds of complex tricks after that.
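The dogs-at-the-fence example can be made concrete with a hedged sketch (the grid, fence, and temperature parameter are all invented for illustration): a purely greedy agent only ever reduces its distance to the goal and gets pinned against the barrier, while an agent that sometimes accepts a temporarily worse position, in the style of simulated annealing, can find the hole in the fence.

```python
# Greedy gradient-following vs. "delayed gratification" (illustrative).
import math, random

def navigate(start, goal, blocked, temperature, max_steps=300):
    """temperature=0 reproduces the magnet/dog case: never accept a move
    that increases distance to the goal. temperature>0 occasionally does,
    which is what allows rounding a barrier."""
    dist = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    pos = start
    for _ in range(max_steps):
        if pos == goal:
            return True
        moves = [(pos[0] + dx, pos[1] + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (pos[0] + dx, pos[1] + dy) not in blocked]
        random.shuffle(moves)
        for nxt in moves:
            delta = dist(nxt) - dist(pos)
            if delta <= 0 or (temperature > 0 and
                              random.random() < math.exp(-delta / temperature)):
                pos = nxt
                break
    return False

fence = {(2, y) for y in range(-2, 6)}      # a wall with a hole at y = -3
print(navigate((0, 0), (4, 0), fence, temperature=0))    # pinned at the fence
print(navigate((0, 0), (4, 0), fence, temperature=1.0))  # can find the hole
```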
[34:22] Douglas Brash: An end run. The classic one is this experiment. You have food. The monkey's in a cage. There's food outside, just outside his reach. There's a stick behind him. Is he going to figure out that if he turns around, grabs a stick, and then goes back, he can get the food? Evidently primates and birds can do that and other animals can't.
[35:02] Chris Fields: I think in terms of this landscape picture, when we think about an attractor, it's typical to imagine a potential minimum and then just a smooth surface coming out of it. That's the simple picture. Whereas in real life, even if you have a fairly deep attractor like Romeo and Juliet, the surface coming out of it is very complicated.
[35:40] Douglas Brash: Bumpy attractors.
[35:42] Chris Fields: With all sorts of channels, some of which might be solenoidal even, to keep you from ever getting over the edge. And so these competent systems have to be able to increase their temperature, as Mike was referring to earlier, to get out of local minima in this very complicated landscape that surrounds wherever it is they want to go. But it's also the case that if you think in terms of a static landscape, then that corresponds to an environment that's not malleable by the organism. Whereas in real life, what the organism does in navigating the space not only changes its position in the space, but it also changes the shape of the space. The organism actually modifies the environment by taking each action, not just its position in the environment.
[37:04] Douglas Brash: You can change where the attractors are.
[37:07] Chris Fields: The organism can even make the attractor move around based on what it's doing. If it's a sufficiently competent interactor with its environment and if the environment is sufficiently plastic.
[37:24] Michael Levin: Yeah.
[37:25] Chris Fields: So we see that in social interactions, for example, the environment, the social environment is incredibly plastic. We can change it in various ways just by talking. Whereas the environment of the desert is not all that plastic. You can wander around. You don't change where the mountains are.
[37:44] Michael Levin: You can imagine — and this is important during evolution, I think — here I have my thermostat, and all it knows how to do is check the set point and adjust the temperature. There's a more advanced version of a thermostat, which also has a metacognitive module that makes sure the set point isn't being twiddled too much: it monitors the set point. Why do you need that in biology? Because you're constantly under threat from parasites and exploiters of various kinds. The minute you become programmable, like a thermostat, somebody out there is going to want to change your set point. You need to be able to resist that. In a more advanced version, you need to be able to tell, for any changes that are made, whether you're doing it or somebody else is doing it to you. I think there would be a lot of evolutionary advantage to knowing: why did my set point just change? Did I change it, or did somebody else change it for me? Am I being exploited in some way? As soon as you say that, it starts to make sense. I was just talking to Eric Welker, and he asked me how important threat perception was to this whole process. I said I think it's absolutely critical, because if you weren't in danger of being hacked by competitors, you wouldn't have to have a strong sense of self. I've been obsessed with this: you're an embryo, a blastodisc, say 50,000 cells. At some point you look at that and it becomes one embryo. You look at it and you say: how many embryos is that? People say one embryo. What are you counting when you say one embryo? What you're counting is the following: a bunch of those cells are going to merge together to, among many other things, determine a barrier between themselves and the outside world. Every cell is some other cell's neighbor. Where do I end and where does the world begin? The embryo is going to make that determination, and you'll have one embryo. If you take a little needle and make some scratches — I used to do this in grad school when working with avian embryos — then until the scratches heal, every individual region that can't hear the other regions becomes an embryo. When they do heal, you have conjoined twins or triplets. The question of how many individuals are in that embryo is not known in advance. It's not genetic. You don't know how many it is. It's something that emerges over time. To me, that has really interesting implications for the brain. Same deal.
[40:37] Michael Levin: If you didn't know what a human was and I showed you a three-and-a-half-pound brain and asked how many individuals are in there — who knows how many are in there? It's this process of taking individual pieces and pulling together some sort of unified thing that makes a model of itself as distinct from the environment. If you weren't under threat of being hacked, you wouldn't need to do that. It wouldn't matter; you wouldn't have to maintain a strong self-boundary, because you wouldn't need to know why things are changing; nobody would be trying to exploit you. Of course, in biology, that doesn't work. You're always under threat of being hacked. That drives this requirement of determining a strong self-identity and having ways to understand whether I am learning or being trained. I think this is a very important point Chris was making about the environment. If it's just you and the desert, the environment is a very low-agency thing. You can assume that you're the boss: I'm deciding what I pay attention to; I'm learning. If you're in a social milieu and you're learning, it could well be that you're actually being trained. The partner on the other end is a high-agency thing, maybe higher-agency than you, and maybe you're being exploited. Someone gave a talk on how to give talks. He said every act of writing is a violent act, because what you're hoping to do is change the listener's cognitive structure. Success means I've reached in there through my signaling, and you walk away altered; you believe things you didn't believe before. That's a successful talk. I wanted to show you a cool picture — this is something else I've been obsessed with. Here it is. Check this out. You see this thing? This is a gall formed on a leaf; this wasp embryo induces the flat green cells of the leaf to form this crazy thing. This is what we're up against. If you're a cell that's smart enough to form a leaf, you're also subject to being hacked and made to build something completely different. That's the arms race you're in. Now, every single signal that comes in, you have to decide: is that me changing what I do, or is that somebody changing me? That probably has various psychological implications too, where threat level is somehow proportional to a sense of ego separation. I'm out of my pay grade here, but there's something like that.
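The metacognitive thermostat can be sketched directly (a hedged toy; the class and its methods are invented for illustration): an ordinary homeostat plus a monitor that remembers which set-point changes it initiated itself, so it can distinguish "I am learning" from "I am being trained."

```python
# Toy "thermostat with a metacognitive module" (illustrative):
# it audits set-point changes so it can tell self-change from being hacked.
class GuardedThermostat:
    def __init__(self, set_point):
        self._set_point = set_point
        self._self_initiated = {set_point}   # set points I chose myself

    def learn_new_set_point(self, value):
        """A change the system makes to itself: it is learning."""
        self._self_initiated.add(value)
        self._set_point = value

    def externally_written(self, value):
        """A change arriving from outside: it is being trained, or hacked."""
        self._set_point = value

    def being_hacked(self):
        # The metacognitive check: do I remember choosing this set point?
        return self._set_point not in self._self_initiated

t = GuardedThermostat(20.0)
t.learn_new_set_point(22.0)
print(t.being_hacked())     # False: I changed it myself
t.externally_written(35.0)
print(t.being_hacked())     # True: somebody twiddled my set point
```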
[43:32] Douglas Brash: I raised this issue of defining a thing, which gets to your paper about objects. That was one reason I was so interested in the goal-versus-attractor thing, because a goal seems to me to require an internal representation of some sort, whereas an attractor does not. Rodney Brooks, for example, with his robots, maintains that there isn't any internal representation. Just speaking for myself, I'm pretty sure I do have internal representations, mental representations. I think that's a key part of cognition. I'm happy for Rodney Brooks's robots not to be cognitive, but if we're interested in that question, then what is it? Von Foerster got me thinking about things years ago. He had lots of little pieces of the problem but, I think, never put his finger on how to put them together. Here's what occurs to me for things. You tend to define a thing by thinking about its edges and so forth. I think what it is is actually a little different. You have a list of properties, and then you divide that list into two columns: those that are essential, which I call specified, and those which are allowed to change without you saying it's no longer the same thing, which I call substitutable. That lets me take off my hat or put it on. It lets an apple have a drop of water on it and still be the same apple. My daughter's dog has trouble with the hat thing: if I put on a hat or a mask, I get barked at. So I think the primordial cognitive distinction you make is the separation into specified and substitutable. One place that goes: if you do that, you can start with percepts and build up this hierarchy of what I call constellations and things, which can now change. Quantum mechanics is full of things that have only one property, so they can only be created and destroyed. But as long as you have both columns, things can change, and you can build up this whole cognitive apparatus. And then one day I wondered whether language works the same way, in the same hierarchy. To make a long story short — I can show you, because it doesn't take too long — say that's how language works, and you have a template that's a lot like the genetic reading frame, three bases to a codon; there's a similar thing for natural language. A friend of mine — I didn't write it — wrote a three-megabyte program that parses human language. I can talk about that in a minute, or show you if you want. But with regard to the thing issue — and I have to read your paper on objects a couple of times, Chris — there's this issue of superposition. It seems to me that before you can talk about an external thing, you have to solve the superposition problem, and I didn't quite understand how it got resolved. One thing that Heinz used to emphasize was that you've got two eyes and two ears, and they're giving you conflicting signals. So if you have two measuring instruments, does that solve the superposition problem and let you talk about an external object?
[47:54] Chris Fields: Let me say several things. One, your distinction between constant and variable properties of objects is what we refer to as reference and pointer degrees of freedom in our work.
[48:11] Douglas Brash: Okay.
[48:12] Chris Fields: That's physics language that comes from the pointer on an old voltmeter. That's the thing that moves that you're interested in and everything else you're not interested in. It's just what lets you identify the instrument. And physicists tend to ignore the non-pointer variables, which I think in large part is responsible for the quantum measurement problem as a philosophical and theoretical issue. Because if you have to identify the system, it clearly can't be in a superposition of any of your identifying variables. If I identify my laptop by its position on my desk, then it can't be in a position superposition or I'll never identify it. So that's my criterion.
[49:20] Douglas Brash: The superposition has to be in the pointer variables.
[49:23] Chris Fields: You can only see superpositions in the pointer variables. By definition.
[49:31] Douglas Brash: Yeah, that's nice.
[49:35] Chris Fields: In a sense, as you were pointing out for elementary particles, you do have to have two measurement instruments or reference frames or concepts. They do have distinct semantic roles by definition. They aren't just syntax. And one of them has to be a reference that you keep fixed in order to identify the system. And the other one you can allow to be variable to get some interesting information. The system having the properties that define it is not interesting information. The system being identifiable or not is interesting information.
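Doug's specified/substitutable columns and Chris's reference/pointer degrees of freedom can be put side by side in a small hedged sketch (the Voltmeter class and its fields are invented for illustration): identity is judged only on the reference variables that are held fixed, while the pointer variable is free to vary and is the only one that carries interesting information.

```python
# Reference vs. pointer degrees of freedom (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Voltmeter:
    serial: str       # reference (specified): identifies the instrument
    bench_slot: int   # reference (specified): where we expect to find it
    needle: float     # pointer (substitutable): the part that moves and is read

def same_instrument(a: Voltmeter, b: Voltmeter) -> bool:
    # Identity rests on the reference variables only; the needle may differ.
    return (a.serial, a.bench_slot) == (b.serial, b.bench_slot)

before = Voltmeter("VM-07", 3, needle=0.0)
after = Voltmeter("VM-07", 3, needle=4.2)
assert same_instrument(before, after)   # same object, new measurement
```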
[50:41] Douglas Brash: Right. Okay.
[50:42] Chris Fields: If you look at things like electrons, identical particles of any kind, we distinguish them by spatial position, which, if you take a field theoretic perspective, is a completely artificial variable. They're identical because we think one of them's over here and one of them's over there from a field theoretic perspective. These are just field excitations. There's no such thing as a distinction. You just have two of them. I think this is all very much on the right track from a physics perspective: one has to include these kinds of distinctions in a description of the physics itself, or you get things that don't make sense, like the measurement problem.
[51:54] Douglas Brash: Is that still considered to be not understood?
[52:01] Chris Fields: Everything I'm saying is a minority position. Most people don't talk about system identification in physics at all. Engineers talk about it every day.
[52:17] Douglas Brash: Oh, really?
[52:18] Chris Fields: Physicists tend not to talk about it at all.
[52:23] Douglas Brash: You mean the engineers saying, here's the one box, here's the other box, and they talk to each other?
[52:30] Chris Fields: And here's how I tell them apart. And here's how they tell each other apart.
[52:34] Douglas Brash: The physicist is saying, here's the needle, but never mind the housing.
[52:42] Chris Fields: You'll see papers that say that explicitly. There are two very well-known papers by Max Tegmark at MIT that talk about decoherence, and in both of them he uses exactly the same diagram, which splits the universe up into three pieces: the part of the observer's mind that observes the pointer state, the pointer state, and then everything else, which is the environment. That reduces the whole object identification process to just some feature of the external environment that we won't talk about.
[53:35] Michael Levin: I love that distinction — the things you can change and the things you can't — and also the question of telling things apart, because in biology what we see is the ship of Theseus business. In a body, what varies? All the molecular details vary all the time. Molecules come and go; in fact, cells come and go. So what stays the same? The higher-level system stays the same. At the lower level, nothing stays the same; everything is different within a couple of years. You're all swapped out, as I understand it. This leads to a couple of interesting things. One is that in a cognitive system you have a similar scenario: if you're going to be a coherent mind, what comes and goes are different thoughts. You don't want to be different every time a new thought comes in. So you have to have some kind of stable structure that can persist despite the fact that new thoughts, new experiences, all of this is going to come. There's some kind of higher-level structure that has to stay. So there's this question of what we are invariant to, and how we tell the difference. When I see you, I'm not a Laplacian demon who says, "that's not Doug, because all the atoms are wrong." I ignore the microstates and say, no, that's definitely you. And cognitively, the same thing: even though you've had a million thoughts, and maybe today's thoughts are different from yesterday's, we can still recognize each other. That's interesting. And then, going back to the pointer thing, another thought I had about physics instruments: I wonder if the reason everything is bottom-up, low-agency or zero-agency, and mechanical for physicists is that all their tools are. The voltmeter and things like that are low-agency things. They only measure the microstates of things. You need a completely different apparatus, like a brain or an artificial neural network, to detect high-level invariants like a being — whether a body or a cognitive being — that does not change when the parts swap out. Physicists have no apparatus that will detect these large-scale virtual governors. You don't see that; all you ever see is microstates, because the apparatus you're using always looks down at that level. But if you're a biologist, you can use a different apparatus that's very good at detecting these things. The brain is really good at detecting agency in the environment. It skips over all the details. That's where my mind went.
[56:38] Douglas Brash: So that's nice for two things. One is for the physics instruments; they're simple so that you get an unambiguous result. If it measured 10 things, you'd never really know what you had measured because you got some function of 10 things. Conversely, I realized as you were speaking that we always say, "gee, I had a thought." We never say, "my mind changed." These thoughts are something we project into something external, even if it's inside our head, it's external to me.
[57:18] Michael Levin: That's an interesting question. We all have an innate feeling of free will, but we know you cannot control the next thought that you're going to have. Whatever pops up is what pops up. You have long-term control in the sense that you can undertake practices that will change the statistical distribution of future thoughts you are likely to have, whether by education or meditation. You have hope of long-term changing the ensemble of your thoughts, but you have no hope of controlling what your next thought is going to be. This goes back to you saying that you had representations. I feel like I have representations, but, for example, Nick Chater would tell you we don't have representations. He wrote a book called "The Mind is Flat." Mike Gazzaniga has a model of this too: there's a part of your brain that does things, and then there's another part that tells stories about why they happened. He would argue that, and I'm not saying I go all the way with him, but the argument is that the deep underlying stuff is a post-hoc story. Here's a simple experiment. There's a video of this on YouTube. There's an electrode in the brain of somebody who I think was being treated for epilepsy, and it happens to be in a center that makes him laugh. When the experimenter pushes the button, the guy starts laughing. He's sitting there thinking about something serious. You ask him why he is laughing. The answer is never, "I don't know; I was thinking about serious things and then my mouth suddenly started laughing." The answer is always, "Oh, because I thought of something funny." That's always the answer you get. This is confabulation. We're definitely good at telling stories post-hoc about stuff that happened. Whether that's all there is, I don't know. But some of it is.
[59:22] Douglas Brash: Daniel Dennett had a bit of a story like that too — just making up stories about what your brain's doing. I could buy that. But one thing the top level of the brain does have to do — say all your agents are down in your joints, as in the robots, or wherever else — is binding. Even if, of the various properties I'm assigning to a thing, one comes from the visual part of the brain and another comes from the tactile part, somebody has to say: I'm going to say those properties all belong to the same thing. Otherwise, it's just your different joints doing whatever they do. That may be what the top-level guy up here is doing.
[1:00:11] Michael Levin: I think Dan would say there is no binding problem because there is no binding. I think he would say it seems like there is. That brings up another question: it seems to whom? I don't know. I still find that a little tough. He's been battling for years this idea that there is any executive control that binds it all together, and basically saying that it's a set of parallel processes which occasionally sync up with each other long enough to tell a coherent story, but mostly it's just independent modules that do stuff.
[1:00:50] Douglas Brash: Interesting. I didn't know it was that extreme.
[1:00:54] Michael Levin: There is that extreme view. Who knows where the truth is.
[1:00:58] Douglas Brash: Interesting.
[1:01:00] Chris Fields: There is also a lurking ambiguity in the use of the term "representation," which comes down to the question: representation for whom? Take a story like Chater's, which I do think gets a lot correct. This meta-processor is constantly — or for some of the time, whenever it needs to — constructing some story about what the brain, what the organism, is actually doing. It does that in some kind of representational language, for example self-dialogue in the case of humans. Nothing in the lower-level computations is using that particular representation for anything. However, the lower-level processes are full of representations that they are using. For example, retinotopic maps in the visual system are representations in a very straightforward sense. Somatosensory systems have representations of the body in a very straightforward sense. Those representations are never accessible to this meta-processor that's telling a representational story about what the brain is doing. They're only accessible to neuroscientists, or to other levels of the low-lying cognitive system itself. They're representations that the brain is using, but they're not representations that we can talk about in our own case.
[1:03:14] Michael Levin: This idea of representation-for-whom is really interesting. Josh Bongard and I wrote this paper on polycomputing, and the idea is to ask whether there is an objectively true story about what a given system is computing. He has a paper with data showing a porous medium that, if you look at it one way, is computing one logic function; if you look at it a different way, the exact same process is computing a different function. So now you've got this idea that even saying what such a simple thing is computing is observer-dependent. You could imagine that in the brain, and everywhere else in our bodies, there are multiple observers that are looking at these things in different ways and reaching different conclusions about what the heck is going on. We looked at gene regulatory network models, for example, and you can do the same thing. You can say: without changing this thing at all, I can find a perspective from which the action of this thing looks like associative learning. There are six different kinds of memory that you can find if you look at it from the right perspective. Then you reach an interesting philosophical question. Somebody writes an algorithm and you reinterpret it a different way. Is your opinion of what it's computing as valid as that of the guy who wrote it? That person will disagree. He'll say, "What are you talking about? I know what it does. I wrote the darn thing. This is what it does." But you're free to interpret it in a different way if you can make it work out for you. That maps onto the neuroscience nicely. Another thing that bugs me: if that's true, why is it so easy for our verbal module, whatever it is, to interpret our own brain states, and so damn hard for neuroscientists to interpret somebody else's brain states? Neural decoding is really hard, but we do pretty well — not perfectly, of course — with our own brains.
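The observer-dependence point can be shown with a hedged toy (this is not Bongard's porous-medium data, just an illustration): one and the same physical process, summing two input levels, implements different logic gates depending on the threshold an observer reads it with.

```python
# One process, two computations (illustrative): the medium only adds, but an
# observer with threshold 0.5 sees OR while one with threshold 1.5 sees AND.
def physical_process(a, b):
    return a + b   # the medium doesn't "know" any gate

def observe(signal, threshold):
    return int(signal > threshold)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    s = physical_process(a, b)
    print(a, b, "reads as OR:", observe(s, 0.5), "reads as AND:", observe(s, 1.5))
```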
[1:05:32] Douglas Brash: I hadn't thought about it that way. I would argue that at the top level there's a format by which we represent things. I used the same format and applied it to language, and it worked. The format is simply an engineer's: an entity, another entity, and a relation between the two of them — Earth, Sun, gravity; the seat of a chair, the back of the chair, and a bolt holding the two in a particular relationship. The question is, how far can you go with that? Each of the entities has the specified/substitutable division in it. That's what I build things up with. And it turns out that for language, you can do that. In English, almost all words have multiple meanings. What that tells you is that you need an exogenous structure, an external structure, as the words are coming in, in order to sort out whether something is an entity word or a relation word. You have this track, the reading frame, divided into entity, relation, entity, relation. As the words come in, you drop them in. We have no trouble at all with a sentence like "He saw the saw that I saw." I'd argue that there's some neural thing creating that track. Beyond that, it then imposes the semantics: once you've decided that something's an entity rather than a relation, it's a thing. Both cognition and language are building hierarchies out of these three-piece units. You've got entity, relation, entity; another one over here; there's a relation between those; and now you build a bigger one. The only reason language looks complicated is that there's some data compression and we leave stuff out. It was not too hard to come up with a table of what gets left out; for particular words, you can build a dictionary and then put the things back in. Essentially, what happens is we leave out words for system-component relations. "I know he can drive a car" versus "I know that he can drive a car": we leave out "that" all the time. The same thing happens with adjectives: a red apple is essentially an apple that is composed of the property of redness. If you adopt this, it all becomes very simple. You drop things into this reading frame and pop things back in whenever something has been left out. If you get two nouns in a row, you can use the table and pop the missing piece back in. Then the argument becomes: if that works, is that how we think about the world? I should say, English works that way. Japanese is entity, entity, relation. Languages that do it that way also have word endings, so you know where the first entity ends and the second entity begins even though there's no relation word between them. Languages that use entity, relation, entity use the relation itself to separate the two. We've got a three-megabyte program and a three-megabyte dictionary. Mike has to leave — I wondered whether your brain has a format in it that lets you do both cognition and language. As you point out, that's probably not at all what the somatotopic representation in your brain is. This would be a top-level thing.
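Doug's reading-frame idea can be caricatured in a few lines (a hedged toy, nothing like the actual three-megabyte parser; the word lists and the restored "that" rule are invented for illustration): slots alternate entity/relation, an ambiguous word like "saw" takes whichever role its slot demands, and when a known entity word arrives where a relation was expected, the elided relation "that" is popped back in.

```python
# Toy entity-relation-entity reading frame (illustrative only).
ENTITY_ONLY = {"he", "I"}      # words that can only fill entity slots

def parse(words):
    frame, expect = [], "entity"
    for w in words:
        if w in ("the", "a"):  # articles fold into the next entity here
            continue
        if expect == "relation" and w in ENTITY_ONLY:
            frame.append(("relation", "that"))   # restore the elided 'that'
            expect = "entity"
        frame.append((expect, w))
        expect = "relation" if expect == "entity" else "entity"
    return frame

# 'saw' is noun or verb; the slot it lands in decides, and the omitted
# 'that' in "the saw I saw" is reinserted by the rule above.
print(parse("he saw the saw I saw".split()))
# [('entity', 'he'), ('relation', 'saw'), ('entity', 'saw'),
#  ('relation', 'that'), ('entity', 'I'), ('relation', 'saw')]
```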
[1:10:33] Chris Fields: Yeah. So Mike, thanks.
[1:10:37] Michael Levin: Thanks, guys. This is amazing. Let's keep talking. Super interesting. I think the language thing has a lot going on there. We should maybe talk a little bit about the large language models later on. I'd love to hear what you guys have to say about GPT-3. There's a bunch to talk about.
[1:10:59] Chris Fields: It'd be interesting to try to apply these kinds of structures to cell-cell communication languages and much simpler languages than human languages.
[1:11:21] Douglas Brash: Complicated things don't work. This has been terrific. Anytime you guys want to do it again, maybe the topic can start with the language thing. I don't know a whole lot about the big models, except that they have amazing failures, amazingly simple and obvious failures. It seems to me they haven't got it totally nailed down yet.
[1:11:51] Chris Fields: Thanks. Have a good one. Nice to meet you.
[1:11:53] Douglas Brash: Yeah. All right. Bye-bye.