Conversation with Nic Rouleau, part 2: neuroscience, memory transfer, aging of cognition, and more

Neuroscientist Nicolas Rouleau joins for a follow-up discussion on consciousness, memory transfer, cognitive plasticity and aging, goal decoding in the brain, and unusual experiments on conditioning and learning in materials like Play-Doh and neural tissues.



Show Notes

This is a ~55-minute discussion following up on Nic's talk and our brief conversation, comprising part 2 of a conversation with a really interesting young neuroscientist, as well as friend, collaborator, and Center member, Nicolas Rouleau. We cover consciousness, neural decoding, the meaning of neuroscience, memory transfer, cognitive plasticity and its relationship to rejuvenation therapies, intelligence throughout the universe, and the weirdest work Nic has done (he chose his work on memory in Play-Doh).

For more information:

Nic's website:

X account: @DrNRouleau

Recent papers to check out:

Sellar, E.P., & Rouleau, N. (in review). A cybernetic framework for synthetic biological intelligence in the era of neural tissue engineering. Preprint doi: 10.31234/osf.io/md2wf_v1

Kansala, C., Cicek, E., Nkansah-Okoree, V., Golding, A., Murugan, N.J., & Rouleau, N. (in review). Superstitious conditioning forms the experience of free will under causal determinism. Preprint doi: 10.31234/osf.io/fk3yt_v2

Roskies, A., & Rouleau, N. (forthcoming, in press). Research on brain organoids should prioritize questions of agency, not consciousness. AJOB Neuroscience.

Rouleau, N., & Levin, M. (in press). Brains and where else? Mapping theories of consciousness to unconventional embodiments. Philosophical Transactions A. Preprint doi: 10.1098/rsta.2025.0082

Rouleau, N., & Levin, M. (2024). Discussions of machine versus living intelligence need more clarity. Nature Machine Intelligence. doi: 10.31219/osf.io/gz3km

Rouleau, N., & Levin, M. (2023). The multiple realizability of sentience in living systems and beyond. eNeuro, 10(11). doi: 10.1523/eneuro.0375-23.2023

Rouleau, N., Cairns, D.M., Rusk, W., Levin, M., & Kaplan, D. (2021). Learning and synaptic plasticity in 3D bioengineered neural tissues. Neuroscience Letters, 750: 135799

CHAPTERS:

(00:00) Rethinking unconscious experience

(07:50) Immortality, memory, and aging

(20:11) Regeneration, identity, and continuity

(34:52) Goal signals and decoding

(44:01) Conditioning strange materials

(49:15) Neuroscience and cosmic minds

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


[00:00] Michael Levin: First of all, I'm wondering what you think about this. In the study of consciousness, for example, what people study are, they say, okay, here's conscious learning, non-conscious learning, right? There are processes that go on that they say, okay, the subject had no awareness that this happened, right? And it always surprises me, and tell me if I've got this wrong or there's a good explanation for it, because saying that it's not conscious because the human subject, i.e. typically the left hemisphere, has just told you they have no awareness of it, seems to completely beg the question. In other words, okay, the human subject you're looking at told you that there was no consciousness, but how do we know that the various components which were involved in perception, memory, all the things that took place, we don't actually know that those things didn't have a conscious experience during that time, right? It seems like you're assuming the very thing you're trying to prove. So I'm curious what your thought about that is, and if anybody studies this, and this, you know, is there really a good example of some sort of non-conscious behavior? Do we have any way of actually knowing that? What do you think about that?

[01:31] Nicolas Rouleau: This is something that I think about often in the context of anesthesia. It is said of people who are under the influence of an anesthetic that they're not conscious, because when they are suddenly roused from their unconscious state, they have no memory of what had happened previously during that period of time. But it could just be that they were having experiences, but none of them were encoded. And I mean, the counter evidence for that is when you look at all the physiological markers of how one would respond physiologically in terms of heart rate or galvanic skin response. Are they sweating when you're administering pain or a noxious stimulus or something? And you don't see any of that. And so it's concluded that the person didn't have any conscious experience because they're not responding to external stimuli. And then also they don't have a memory of the thing. But it could just be that you respond differently in those states. You don't have an emotional response, for example. Maybe those parts of the complex response are attenuated during that period. So it's very difficult to know whether in fact there was no experience. The other way that I think about this is in the context of state-dependent learning in general. If you study for a test under the influence of a drug, and that drug isn't totally impairing and doesn't affect your memory, or at least not in a severe way, you actually perform slightly better on the test if you're on the influence of the drug that you used when you were studying, because your memories are encoded in a certain state.

[03:26] Michael Levin: It's almost like place conditioning, right? That seems like...

[03:30] Nicolas Rouleau: That's another thing that does happen is if you're in a lecture hall and you attended the lectures on the left side of the room, when you take the test, if you're on the left side of the room, you tend to do better than if you were on the right side of the room. And I mean, it has to do with cues and it has to do with all sorts of other things, but it comes down to whatever the state was that you were in when you learned the thing, that state seems to be the optimal state within which you can actually recall the information. And I think you can think of that in the context of this conscious versus unconscious learning. Instead of calling it unconscious and conscious learning, I mean, you could just say it's all state dependent. And when you're in one state, you tend to be able to retrieve that information more effectively. And it could be that there are whole sets of information that you can only access when you're in that other state. And so I often wonder if the catalog of our lifetime of dreams is actually accessible in the dream state. Right now, it's very difficult for me to recall all the things that I've ever dreamt about. I can really only remember the things that I recalled slightly after rousing from sleep. But it could be that in that sleep state, you actually have a whole inner life that you can access in the same way that I have an autobiographical memory in this conscious waking state.

[04:55] Michael Levin: That's really interesting. I mean, there's the recall thing. And then for me, there's also the issue of the different sub-components, right? So whatever sub-modules had the experience in your mind, they may be permanently or completely inaccessible, not just because of a memory failure, but because you weren't the one that had the experience. And this comes up all the time. People say to me, well, this diverse intelligence stuff that we do. They say, well, I don't feel my liver being conscious. Well, of course, you don't feel your liver being, you also don't feel me being conscious. That's not shocking. If the liver were, you would not know about it, right? That makes sense. And so in a lot of these things, right, they seem to beg the question of, they focus on one subject and whatever that linguistic subject says, that's taken to be the conclusion, but yeah.

[05:54] Nicolas Rouleau: Yeah, and I think you can interpret what used to be called multiple personality disorder. You can interpret the clinical presentation of that disorder as just the extreme version of what most of us experience when we have context-dependent responses. Like, I behave totally differently in the context when I'm speaking professionally versus when I'm speaking with a close friend or my child, or when I'm speaking to my parents, or I behaved differently in public than I do in private. And that's all normal. Context-dependent responses are a normal feature of human psychology, but in multiple personality disorder, you have inappropriate displays of context-inappropriate behavior in the wrong state. And so you could think of each one of these individuals as separate people, but in the average person, they're integrated. And if I was to behave suddenly right now with you, as I do with my child, you would really think I'm a different person. You would say, wow, you're so condescending, right? And it's just because that's just not how you speak to adults and it's just not how you speak to mentors and colleagues. Like, that's just not how you do it. And so, yeah, I think that the other version of that is, when we start talking about unconventional embodiments and unconventional minds, now we get into some really hairy territory where it's unclear how one ought to behave in these situations or how many different kinds of states they can hold or how many different kinds of repertoires of responses are available to them that are discretized, that aren't part of an integrated whole. Yeah, it's fascinating.

[07:50] Michael Levin: Okay, something else related to this question of how many states. What's your prediction? If we were to, let's say, regenerative therapies get off the ground to the point where a standard human can live forever with brain rejuvenation and all of that stuff, indefinitely, let's say, right, let's say it were possible to just keep rejuvenating it. Do you think that, well, two questions. So memory capacity and learning capacity: finite or infinite? Like if you just sort of, this is the physical part, but we stave off decay forever, limited or unlimited.

[08:36] Nicolas Rouleau: It's a great question. I think it's limited in the sense that you can't just infinitely increase the information that's encoded, but you could rewrite. So I mean, we already have that kind of system without a regenerative technology where memories are forgotten or their resolution is diminished and then new memories take over the real estate. But your cranial capacity is a certain finite size and neurons can only make a certain number of connections with their neighbors and you can only pack a certain number of spheres of 10 microns into a given space. I mean, if your technology allowed the cells to sprout more axons and form more synaptic spines than their genetically encoded blueprints allow, I mean, I think you could increase the amount of information.

[09:51] Michael Levin: But your prediction is that it's limited by the physical capacity of whatever the encodings actually are.

[10:04] Nicolas Rouleau: I mean, it has to be. So I would say that for a memory, for a long-term memory to remain crystallized and accessible, it has to occupy some space. And so space is your limiting factor. I mean, you could encode it in different ways. Perhaps the information is now encoded in the extracellular space, or maybe some of it is encoded in a higher dimensional plane in terms of how the cells are being connected. And so now you have this whole new layer that's not physical strictly, but still occupies some physical space. But the information content is not linearly related to the amount of space it's occupying. Maybe there are some things that are possible, but there is ultimately a space limiting factor. Because the way that I view memory is memory is a trace of the environment encoded in a new space and you require space. So I think space is the limiting factor.
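Nic's packing argument can be made concrete with a rough back-of-envelope estimate. All numbers below are illustrative assumptions (a typical adult cranial volume and a random-close-packing fraction), not figures from the conversation:

```python
import math

# Illustrative assumptions, not measured values:
cranial_volume_cm3 = 1400      # typical adult cranial capacity, ~1,400 cm^3
soma_diameter_um = 10          # the 10-micron "spheres" mentioned above
packing_fraction = 0.64        # random close packing of equal spheres

sphere_volume_um3 = (4 / 3) * math.pi * (soma_diameter_um / 2) ** 3
cranial_volume_um3 = cranial_volume_cm3 * 1e12   # 1 cm^3 = 1e12 um^3

max_spheres = packing_fraction * cranial_volume_um3 / sphere_volume_um3
print(f"~{max_spheres:.1e} ten-micron spheres fit")  # on the order of 1e12
```

However the encoding works, any scheme that ties a crystallized memory to occupied volume inherits a ceiling of this general magnitude, which is the sense in which space is the limiting factor.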

[11:20] Michael Levin: And do you think that, so let's say, the kind of loss of plasticity that we often see with age, do you think that's, is that a hardware problem or a software problem in the sense that if we did have rejuvenation therapies and you had an 80-year-old with the brain of a 20-year-old in terms of the cellular architecture, would they still be stuck in their ways and cranky and whatever it is that is happening to us? Or do you think that once you get the cellular medium refreshed, then we go back to that, we could keep that plasticity for long periods of time?

[12:01] Nicolas Rouleau: That's a great question. I mean, if suddenly I woke up, if I was 70 years old and I had certain habits and I didn't want to change them, and you have to ask yourself why you don't change your habits. And part of that is they're adaptive. I mean, you've created certain kinds of behavioral strategies to navigate through your life. And as long as the environment doesn't change, which it will, by the way, but as long as it doesn't change, you're actually optimized for the environment. That's your brain's doing that all the time. So if I was suddenly given the motivation to change and the regenerative ability and the plasticity and the hardware space to adapt, of course, I think of course you would do that. What do you think about this?

[13:03] Michael Levin: Yeah, it's a good question in the sense that I've been thinking about what the social implications are of radical regenerative therapies. So at some point, you'll be 20, and I don't think it'll take all that long actually, but you'll be 20 and you'll meet another 20-year-old, somebody that looks like they're 20, and you find out that, yeah, actually they're 85. And so the question there is, physically, like, all good, compatible; mentally, what does that mean? In other words, when I say software problem, I mean that. Is it possible that just the fact of dealing with cognitive input in life and all of that for some number of decades just puts you in a mental state that cannot be, you know, there are some software states that you can't get out of with hardware, right? There are issues, computational issues. A related issue to this is one of the things we've been working on in our aging program is, so people think about aging as being fundamentally a physics problem, meaning you accumulate entropic errors, or it's a biology problem, meaning that evolution wants you to die. And so there's like certain clocks and stuff like that. But our simulation suggests that there's also a third problem, which is a cognitive problem. And a cognitive problem doesn't require damage and it doesn't require selection forces. It's basically a problem of goal-directed systems after they've completed their goal. What do they do after that? So you can imagine that the homeostatic process that creates the body, right? So the cellular collective intelligence creates the body, you're an adult. Well, it hangs out that way, minimizing disorder for a while, but eventually, if there is a second order, so some sort of metacognitive loop that says, okay, well, you've already done this goal, but you haven't been given a new goal. 
You're not like a planarian which basically refreshes, like sweeps the decks every two weeks, rips a thing in half, and you got to do it all over again. Is there, you know, basically almost like a boredom theory of aging, right? Where that part's not the conventional cognition, it's the cognition of the body, where morphogenetically, we've already done this, what is left to do? And they sort of, and we actually have data on this, both from simulations, from analyzing, this is Leo Pio-Lopez's work, analyzing what happens to the cells, and they start to, transcriptionally, they start to disband. They roll backwards, right? The phylostratigraphy shows they start expressing more ancient genes, but they diverge from each other. They're no longer in agreement about what should happen because the goal is the thing that was, right, the set point was the thing that was keeping it together. So I just wonder, right, so the way I think about this is like a silly sort of thought experiment. Let's say the standard sort of Judeo-Christian version of heaven, right? So you get there, everything is perfect forever. So you imagine, right? You get there and it's you and your pet snake and your dog. And so you get there, there's no damage from the bottom up. Nothing's getting degraded. Everything's perfect. The hardware is going to work great forever. So, I don't know. You tell me what you think. It seems to me the snake would be just fine doing snake things for a trillion years, like probably fine. The dog, I don't know. Maybe if the environment is good and every day is exactly like every other day, the dog may be fine too. I don't know if dogs are capable of some sort of existential ennui or something like that. But the human, like, okay, you know, it seems to me you can keep yourself busy for the first 10,000 years or 100,000 years. But a billion years in, are you still sane? And if you're not, that's not a physics problem and it's not a biology problem. 
That's some sort of cognition problem, right? So I don't know, that, it seems to, and maybe the real limit is way longer than, you know, than we have to ever worry about. But it gets to the fundamental problem of how much of this is the hardware and how much of this is the purely cognitive dynamics that are right on top of it. I don't know what you think.

[17:20] Nicolas Rouleau: Super cool. I mean, I think we have to consider both the agent as well as their environment in this case. And if the heaven that you're describing is unchanging and it's just, like we often just say, well, it's just the best version of life, just whatever that means. And that could mean the same thing every day for someone, according to if you ask people, like, what's a perfect day, they might just say, well, it's the same thing every day. For some people, it might be something new every day. I suspect that you would be able to endure longer periods of heaven if there were, if things were changing and you had the hardware slash software to actually adapt to those new situations over and over. So you have to, I think you would have to actually wipe the slate at some point, partially or in whole, in order to maintain that cognitive engagement that you're describing. And I think it's really fascinating, this idea of the boredom-based model of disease or cancer, or I think that's really interesting. So do you think it's because the mechanisms that basically quiet those processes are then removed later on? Like in other words, like the system becomes less vigilant about quieting these sort of processes that would be a nuisance if they were generated? Because I've always thought of the brain as being fundamentally non-regenerative because its function is anathema to regeneration. Like you actually don't want a system that is endlessly flexible if you want it to be crystallized in such a way as to have representations that can build world models and can retain something like a stable personality and maintain memories that aren't always changing or aren't suddenly erased so that you can maintain your social bonds and so on. Like I see the brain as like, non-regenerative for a purpose. And so if it suddenly became regenerative, or if it was just given some degree more plasticity, I think it would cease to be the thing that it is currently. 
It would be more like a general learning machine, but without all the things that we seem to care about as humans, like self and personality and love and all these kinds of very personal things.

[20:11] Michael Levin: Yeah, I don't know, axolotls, right? So axolotls, extremely regenerative, including the brain. Now, we could argue about whether axolotls have individual personalities. I suspect, like, I think they do to some extent, obviously not as rich as advanced mammals, but ground squirrels. So ground squirrels, when they hibernate, they have a significant reduction of brain volume. They basically chew up a lot of their brain cells. They come out in the springtime, regenerates, it comes back. And the cool thing about it is it's exactly what you said about the social bonds. They have, apparently, these ground squirrels have very intricate ledgers of who did what to whom and who's cooperating with these social structures, and all of that comes back. So right now, okay, they didn't chew up their whole brain. This is not a planarian story. Like, so, but I'm not sure, you know, I'm not sure. And I'm also, that's a whole conversation for, I think, for another meeting about, I'm not even convinced that all information is on board here. I have a feeling that, you know, I'm exploring some models in which, I mean, familiar things in which this is basically an interface, like a front-end thin client, and some of the action is on the back end, which means that it may well be possible to be regenerative and still index into the structures that cells that are elsewhere. So I don't know. But what's really, what was really wild to me is we did these, so Leo did these simulations where it's a simulation of morphogenesis. So you have individual cells, the collective has homeostatic states and so on. So they build an embryo. In that model, there is no noise. So there is no damage underneath, we don't have that, nor do we have any evolutionary pressures, there's no evolution, there's nothing telling you to die at any given moment. 
What we see is that already there, spontaneously, you have this error reduction that builds the embryo, and then it sits there for some time as a nice embryo, you know, continuously upkeeping and whatever. And then the whole thing basically spontaneously starts to disband and goes all to hell. And there is no underlying, we didn't have to put in any cause for that. And the other thing that's wild to me is it seems to me that takes 2 levels of cognition, because if you're just the thermostat, you'll be fine doing that same loop forever, basically. What you need is a metacognitive loop that says, well, this goal has been achieved for a really long time. Something is up, right? It's like, yes, surprise, minimizing surprise, yes, but eventually you need to generate some new surprises so that you can learn, do better. And so that second-order loop, we didn't put that in, right? So we did not explicitly encode that, and yet it has this dynamic, which I think is wild. And so, for, you know, I'm thinking that with these radical life regenerative technologies, maybe it'll be enough for the micro-level regeneration, so that as long as we sort of repair all the individual stuff, maybe that's enough to keep things exciting, as it were. Or maybe the answer is, you can't live forever as a caterpillar. But if you're willing to change things up every so often, then you can. And the magnitude of the degree to which you're going to have to change things up, I don't think we know. But it's not, I think what you said makes sense. It's quite reasonable that if you want to stick around longer periods of time, you're going to have to make significant changes and then force the adaptation, the accommodations to it.
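The thermostat analogy above can be sketched as a toy first-order homeostat. This is a minimal illustration with arbitrary parameters, not a reproduction of the simulation discussed (which is not specified here in enough detail to implement); it only shows the point that a pure error-reduction loop can run indefinitely without any second-order, "goal already achieved" signal:

```python
import random

def homeostat(setpoint=37.0, gain=0.5, noise=0.2, steps=1000):
    """First-order homeostatic loop: correct toward the setpoint forever.

    Each step measures the deviation from the setpoint and counteracts
    it, so the state hovers near the setpoint indefinitely. A controller
    like this never 'gets bored'; detecting that the goal has long been
    achieved would require a separate, second-order loop.
    """
    state = setpoint + random.uniform(-5, 5)   # arbitrary initial deviation
    for _ in range(steps):
        error = setpoint - state
        state += gain * error + random.gauss(0, noise)
    return state

final = homeostat()
print(f"final state: {final:.2f}")  # remains close to the 37.0 setpoint
```

The disbanding dynamic described in the conversation is precisely what this first-order loop lacks: nothing in it ever re-evaluates or abandons the goal.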

[23:56] Nicolas Rouleau: I think people would be willing, at least some people would be willing to take that gambit. But I think that what people are not willing to give up would be like a through line of consciousness that carries you from form A to form B. I think people would be willing to give up their memories eventually. If thousands of years had passed and whatever had happened in the past was now, perhaps you're no longer interacting with the same people, or you're not in the same environment, or that information is no longer relevant, I think just like the files on your computer that are 20-plus years old, you may be willing to purge them or at least offload them and really just never look at them ever again. But consciousness is not something people are going to want to give up. And so there needs to be some mechanism for the experience to continue from form A to form B. Do you think that it could? Well, first of all, do you think it does continue in the case of the caterpillar?

[25:01] Michael Levin: So the one thing we know about the case of the caterpillar is that functional memories are not only retained, but I think even more to me, to this point, even more interestingly, they're remapped because the actual memories of the caterpillar are of no use in a butterfly body. You have to completely remap them onto new, not only new hardware. So caterpillar is a soft-bodied robot, meaning you can't push on anything, so your controller is all about inflating and deflating and stuff like that. Whereas the butterfly is a hard-bodied creature, which means you have to push and pull on things to fly around, so it's so completely different, but also the preferences, right? So the caterpillar got trained to, what was it, eat leaves at a particular color stimulus or something. Well, the butterfly didn't want leaves, it didn't care about leaves, it wants nectar. And so now you have to go from just like, you know, there has to be some generalization to take place that, right, that this was good. And now not only are your eyes different, who knows what the hell you see now that might be different, but also, I also don't want the thing I ate last time. How do I know that I'm going to get something new that actually is more appropriate, right? So all of that stuff. So that happens. I don't know about the consciousness. I don't know what it's, you know, obviously what it's like to be a caterpillar during the most interesting part of this, of course, is the middle part, right? It's like how they, during the remapping. But even if it maintained, I don't even know if it's possible that being in a butterfly body, you could have the same consciousness as a caterpillar. For one thing, you're living in a world that has an extra dimension. So you were this like two-dimensional thing crawling around. Now you can fly. Like if we had it, right, if we had an extra dimension, would that, you know, could you even say you have like continuity, I suppose. 
But I do think it's interesting that it sort of goes to sleep for a little while to some extent, right? I would say while everything's getting ripped up and rearranged. So what, right? What comes out on the other end between lives like that? There's all sorts of, you know, wacky things we could talk about there. But I, yeah, I don't know.

[27:05] Nicolas Rouleau: I think, from the perspective of the child, it probably seems very unlikely that they may ever have the conscious experience of being an adult, and yet that transition occurs.

[27:18] Michael Levin: No, you're right. And because one of the things that happens across puberty, for example, is a radical reprioritization. So things you really cared about before, now it's like, what, who cares? And things that before you thought were completely useless and irrelevant, now they're occupying tons of your time, right? So from that perspective, are you even the same being? To what extent?

[27:45] Nicolas Rouleau: I get the sense that we're like, as you're describing the remapping and reprioritization, especially from the caterpillar to the butterfly, I sort of had this out-of-body experience where I'm supervising this conversation. And it's interesting that what we're describing is reproduction and just life cycle. When you reproduce and when you actually give rise to offspring, you might ask the question as a third-party observer, well, how did the consciousness travel from the parent to the offspring? Or how does that continuity actually happen there? Because clearly, this is the organism's mechanism to move on past death: it creates this little clone of itself. I mean, it's not exactly a clone, but it creates this little bud. How exactly does the consciousness move from one to the other? And yeah, I just think that there's something interesting here about when your body ceases to function and the parts that make up who you are are redistributed in the world and reintegrated with other organisms, we think that at least if some of those particles make it into the composition of other humans, that there is some sense that there has been a reorganization here that's taken place structurally and functionally that has now emerged as this new organism somewhere else that has a conscious experience. And although the memories and the conscious experience of that other organism are different and even quantifiably different, maybe it is the case that there is something that gets transferred over, even in this sort of very entropically guided case of you have really just complete dissolution and scattering of all the parts of the system. I mean, it's much more extreme than the caterpillar and the butterfly, but to some extent, you do have a kind of remapping of a cognitive system into another when you have ingestion of another organism. How do you think that relates to the McConnell studies?

[30:25] Michael Levin: Yeah, I mean, I think I, and I haven't replicated the brain regeneration stuff with Tal Shomrat. We didn't try the cannibalism stuff. There are data on memory transfer by transplants, by tissue transplants. And if it can survive a tissue transplant, then going through the gut, all it has to do is not get digested, I suppose. So I'm not, it doesn't seem crazy to me that it would work. I think that in the end, I suspect that all of these things are pointers in an important sense. They're indexes into a different space, so I'm not sure what that model should look like. But there's an in-between case for this reproduction slash death thing, which is, I wrote this, it's called Life, Death, and something else, I forget what, it's a paper, where I start out by talking about an imaginary visit of scientists to an imaginary planet where they, you know, there's an ecosystem and they do a bunch of sequence, you know, they sequence the hell out of everything. And they find some amoebas that have the same genome as some of the large animals. And they're like, what the hell is this? And I basically go through this notion that you could have a life cycle that's basically a xenobot life cycle, where at some point, and you could even imagine, I don't know whether any creature on earth does this, but I think there's not any particular reason why a fish or a frog or something that already lives in water, it's hard for mammals, they need us to make anthrobots, they can't do it themselves. But I don't see any reason when a salmon beats itself to death on a rock somewhere, some of the cells that come off, there isn't any fundamental reason they couldn't live on as amoebas for some amount of time. And that's a viable life strategy in lakes, right? And potentially reassemble as some sort of a bio, like a xenobot or something. And who knows whether given enough time that thing can make some germ cells and go back to being a fish. I don't know.
But in general, like that kind of thing, when we make a xenobot by taking apart the cells of an early frog embryo, what happened to that frog embryo? Like, is it dead? Well, not really. Is it still here? No, not really. You have this xenobot, it continues, right? And in the case of the anthrobots, we have plenty where the donor is deceased, but there is a being that continues. That's something we've talked about doing, these experiments where we can get anthrobots from smokers who had a nicotine addiction and just asking whether A, whether anthrobots pursue nicotine from those patients specifically. And if they do, whether implanting them, so here's your, there's your memory transplant studies, whether implanting them into a rat or something would then convey that behavior. I don't know. One of the weirdest things about it is that it doesn't seem to at all, which is consistent with this pointer notion, it doesn't seem to at all match the size of the, you might think, how's a tiny anthrobot going to redo the preferences of a giant rat body, right? It's not the same thing, but maybe it's relevant. In planaria, if we take a little tiny piece out of a two-headed worm and implant it into a one-headed worm, in something like 17% of the cases, the recipient becomes two-headed. And this is, to me, super interesting because, and again, maybe goes back to the boredom thing because why would this giant body listen to a few cells? All the other cells are in agreement that worms have one head. This little tiny piece is saying actually we should have two. Why even 17% of the time, why does it win? And maybe it's that novelty thing again. Maybe the other cells are willing to listen some percentage of the time because, well, we've already been a one-headed worm for 400 million years. Here's some new information. Maybe that, you know, maybe that's lit up as higher priority now.

[34:27] Nicolas Rouleau: Yes, especially if the environment is really harsh or has changed suddenly, I imagine really extreme responses, maybe like a kind of Hail Mary. That's fascinating.

[34:52] Michael Levin: So related to these issues of memory storage, memory interpretation, another thing I wanted to ask you is neural decoding. Why do you think third-person neural decoding, meaning that I'm going to measure your brain and try to figure out what you're thinking, is so much harder than first-person neural decoding, which is like most of the time under normal circumstances, we don't have a lot of difficulty knowing what the meaning of our engrams is? We sort of reconstruct it and whatever, but we're pretty good at accessing our own. But in third person, it's really hard, right? I mean, people have had some success, but it's really hard. What do you think is going on there? Why is it so hard?

[35:35] Nicolas Rouleau: It just occurred to me what I wanted to ask a minute ago, if you don't mind.

[35:38] Michael Levin: Sure, go for it.

[35:40] Nicolas Rouleau: So it would be interesting if we were able to identify some molecule, just imagine a hypothetical molecule that exists in systems, whose sole purpose is to transfer goals. It doesn't transfer structural building blocks. It's not a physiological tool. It's literally just a goal. It's like a message that says, this is what your job is. And if that were the case, all of this would make a lot of sense, right? Because under a neurobiological explanation of what you're describing with the anthrobots, suppose they were able to acquire some kind of nicotine-pursuing strategy, you might say, well, that's because perhaps they have more of the nicotinic acetylcholine receptor, and that's being co-opted as their main chemotaxing module or something like that. And you can make some sort of argument like that. And then once you implant it, the whole question would be, well, how then do you tell the rest of the system to take on this new goal when in fact the new system isn't equipped with the same concentration of nicotinic receptors? But if it actually had this little message that it could pass on and say, well, this goal has been really useful for me, why don't you try it out? Or maybe just add it to your repertoire of potential goal orientation strategies. I mean, I'm not saying that thing exists. I'm just saying you would need some kind of goal messaging system that goes beyond just simple building blocks.

[37:27] Michael Levin: For sure. And there's another interesting piece of data. This guy Hepper, back in, I want to say, the 80s, did these experiments where he would take certain odorant molecules and inject them into a frog egg, so we're talking cytoplasmic. And then when that animal became big enough to have behavior, it would preferentially seek out those molecules in its food choices. Now, here again, the question is, well, what's the transduction? You've got some sort of weird molecule inside the cell. It then has to be converted into a presumably multicellular neural something that will lead from smelling it to actually going to find it and all that. So you have to analyze it and then modify your large-scale nervous system somehow. So I feel like these systems have a ton of this plasticity of interpretation, and they like to pass it on. This information moves, it moves within bodies, it moves between bodies. Yeah, I think that plasticity is going to be sort of massive, and I think it's underappreciated.

[38:41] Nicolas Rouleau: That's interesting. Neural decoding.

[38:46] Michael Levin: Why is neural decoding of somebody else's brain, as opposed to your own brain, so difficult? What do you think makes it so challenging?

[38:57] Nicolas Rouleau: I mean, I think the way that we neurally decode from a third-person perspective between humans is usually through the medium of language. Do you agree?

[39:08] Michael Levin: Well, I would say, so the comparison I'm making, and you don't have to buy into the comparison, you can just talk about neural decoding as it stands today in general. But for me, I see two versions of this. I see my own neural decoding, which means that most of the time, in the absence of various defects and so on, I don't have a lot of problems knowing what whatever memory structures, molecules, processes we're using actually mean, right? So I access whatever structure that is, and I can say, yes, that's because yesterday I had toast or something. Whereas in third person, if I were to figure out, okay, did Nic have toast yesterday? I would have a hell of a time trying to interpret, right? There's been some success, but it's really hard. That's the comparison, right?

[39:56] Nicolas Rouleau: Again, the interpretation of. In your example, what's the connection? What's the information that I have access to that I'm trying to decode?

[40:05] Michael Levin: In the third-person perspective, whatever you want, electrical, MRI, what do people typically use, right? They typically use physiological readings from brains of animals and human subjects and try to say, you've seen 10 pictures at some point, then I ask you to imagine one, and from doing brain readings I try to guess which picture you're imagining.

[40:34] Nicolas Rouleau: I think that if I was to take an EEG reading of my brain when asked the question, or an fMRI recording, I think is equivalent in this case. But if I were to take a recording from my brain when asked the question, what's your favorite food? And I took a recording from your brain, I think I would have just as much trouble interpreting both of those signals. Agreed. And in a sense, they're both third person in that case, right?

[41:02] Michael Levin: Exactly, that's exactly what I'm getting at, right? So from the outside, whatever that means, it's really hard, but from the inside, whatever that means, it apparently is much more, much smoother, right?

[41:12] Nicolas Rouleau: Yeah, I think that one way to answer this would be that all of these measures are imperfect analogs of what's actually happening. It's like the original analog computers, where you'd have a buoy and the buoy would go up and down with a wave, and then the buoy would basically trace a line on graph paper. So if I was to present you with the graph paper with the sine wave or whatever wave is on there, and I gave you no context at all, you may not guess that what this actually is is the analog of a wave in the ocean. I'm not sure that you'd be able to guess that with no context. So I think that all of these measures that we use as stand-ins for qualia, for phenomenal experience, for decoded memories are just extremely imperfect. If I was able to measure your brain and we had a new technology that actually converted your brain activity into a video, and you could affirm that the video was in fact a pretty accurate representation of your experience, I think that if I presented that video to someone else, they would also be able to have a pretty accurate experience of it. They could describe the video, and they would be describing your experience. So I think that the tool is imperfect in these cases. And in that case, that would be a decoder that is getting very, very close to the actual experience in the form of a video. If we added audio, it would get even closer to your experience and would capture more features of your memories. So you need something like that. Language is what humans use to describe all that. And it's very much an imperfect thing because you lose so much with words. But words evoke this kind of mental theater that is the cognitive, low-resolution version of this technology I'm describing, where you would be able to convert your memories into actual messages on screens.
But I think that's really the problem is that it's, and you can have good decoders and bad decoders, and it's very difficult to infer anything about someone's experience based upon lines on a graph or numbers on a screen. Let's see.

[44:01] Michael Levin: Cool. So I want you to describe what's the weirdest work that you've ever done.

[44:08] Nicolas Rouleau: There's so much to choose from. I think the weirdest thing that I've ever done is, as part of my master's thesis, I asked the question, can you classically condition materials? And the reason for that was I had read this really cool paper from the 1950s where these guys out in England classically conditioned an iron bar. You know about this. So the question was, could we do it with something like electroconductive Play-Doh? Because you could run current through Play-Doh, and you know that the current is taking a certain path through it. So could it actually carve out a particular path that could be, in a re-entrant kind of way, continuously carved out? Just like running current through a piece of wood and noticing that it leaves this kind of lightning pattern, could you do that sort of thing in a given material and have it dynamically respond to a previously neutral stimulus through an unconditioned stimulus, neutral stimulus pairing? I was able to classically condition Play-Doh. Basically, we took small bits of Play-Doh, and it's Play-Doh with lemon juice in it, and we ran a certain current through it. And that current was paired with a flashing light. You would have the current that goes through and then you would measure the current output, and so you could create a spectrogram based upon the electrical noise in the Play-Doh. And what we found was that after the pairing had occurred, when you flashed the light alone, the electrical noise in the Play-Doh seemed to correlate with the response to just running current through the Play-Doh. So this was a light-induced, current-like response in the Play-Doh, and we had successfully demonstrated a conditioned response. But then we went a little bit further and we started developing a histological technique for the Play-Doh.
So we took the Play-Doh and ran it through histological analysis and sectioned it and stained all the little grooves and things like this. And we were actually able to find these little microstructures that corresponded to when you ran electricity through the Play-Doh. So it had more little grooves inside of it. We actually ended up publishing it, so it's in PLOS ONE somewhere. And it's very weird. I don't think anyone has cited it and probably no one will replicate it. It's just a very weird study. But yeah, that's my answer.
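The pairing-and-test procedure Nic describes follows the standard classical-conditioning loop: repeatedly present the neutral stimulus (light) with the unconditioned stimulus (current), then test the light alone for a conditioned response. As a purely illustrative sketch, not anything from the published study, that loop can be modeled with a Rescorla-Wagner-style delta rule; the function name, learning rate, and trial counts below are all assumptions:

```python
# Toy sketch of the conditioning protocol described above
# (light = conditioned stimulus, current = unconditioned stimulus).
# All parameter values are illustrative, not from the actual study.

def condition(pairings, alpha=0.3, lam=1.0):
    """Return the associative strength of the light -> current response
    after `pairings` light/current co-presentations (0 = naive material)."""
    v = 0.0
    for _ in range(pairings):
        v += alpha * (lam - v)  # learning is driven by prediction error
    return v

naive = condition(0)      # light alone before any pairing: no response
trained = condition(20)   # after repeated pairing: strong response
print(naive, round(trained, 3))  # prints: 0.0 0.999
```

In this toy picture, deforming the Play-Doh and reforming it into a ball corresponds to resetting the associative strength to zero, which matches the loss of the conditioned response described just below.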

[47:14] Michael Levin: Amazing. And so the fact that you found microstructures, does that mean that if you were to, or maybe you tried this, if you were to take the trained Play-Doh and rejigger it at a higher level, does it keep the information or no? What's the scale of the...

[47:30] Nicolas Rouleau: We did exactly that. So you take the Play-Doh, have them paired, and then just deform it and reform it into a ball, and it didn't display the response. I see.

[47:40] Michael Levin: So some kind of larger structure. Interesting. Out of the space of all possible materials, what's your guess as to what percentage? Presumably, we don't think there's something super lucky about Play-Doh, right? What percentage of materials out there do you think have these properties?

[48:02] Nicolas Rouleau: I don't know. Percentage is difficult, but I think it would have to.

[48:05] Michael Levin: Overall, is it a needle-in-a-haystack thing, or is it a general feature of matter, or somewhere in between? What do you think?

[48:11] Nicolas Rouleau: On this planet, it's probably a relatively general feature, I would say, because of water and because of all the organics. Play-Doh, of course, is made up of the stuff of plants, right? So I think you would have to have a system that is sufficiently plastic and responsive to some kind of deformation, be it electrical or photonic or mechanical. It would have to be changed by inputs of some sort and retain those changes for some duration of time. I think that describes a lot of materials. We have memristive materials now. We know that mushrooms do this kind of thing. And there are all sorts of living and non-living materials that have these basic properties, that they can be changed by inputs. I think it's pretty general, not some special feature of a small subset.

[49:15] Michael Levin: I agree with that. Then two final quick questions before we have to wrap up. So I, for example, don't think that neuroscience is about neurons per se at all. Do you agree with that? And if so, in a sentence or two, what do you think neuroscience is really about?

[49:39] Nicolas Rouleau: I agree with you because I know what you mean. And I know that, in the same way that plant neurobiology isn't really about the nerves of plants, we're describing what we might say are neural systems in the absence of neurons. Like we're talking about networks or we're talking about cognitive systems or we're talking about some functional label that isn't bound to a specific structure. I agree that much of neuroscience is actually about that. I totally agree. And yet the field is defined by whatever it is that most people are doing or saying in the field. And I would say most neuroscientists would probably disagree. They would say that, no, it really is just about cells and brains. But no, I agree with you. I have a more functionalist kind of view of these things. What do you think?

[50:33] Michael Levin: I think fundamentally the deepest lessons of neuroscience are about cognitive glue. They're about understanding, and of course neurons are a great example of that, but as we said, there are many others, the ways in which competent smaller subunits get harnessed together and aligned towards larger-scale causality: goals, memories, preferences that none of the parts have, but the collective does. And to me, that's one of the biggest things that neuroscience offers us: an example where we take seriously all the levels. We take seriously the synaptic proteins and the networks and eventually psychoanalysis; we know that all of these levels are interesting and important. And it's this amazing field where lots of people are working on the transitions between the levels, right? That's a deep lesson that fields like molecular biology haven't yet absorbed, I think.

[51:31] Nicolas Rouleau: Isn't that interesting that our fields are defined by all this matter stuff and not process? If we actually define the fields by process, we could have fields of study like multicellular connectomics. And then that would just describe regenerative biology and cancer and neuroscience and all sorts of things. And it would just be functionalist. There would be different cell types based upon certain structural markers within these fields. But that wouldn't matter because we're actually just talking about processes, shared processes.

[52:05] Michael Levin: I think the problem there, or the resistance to that, is that you couldn't keep it in the biology department then. The departments would have to go too, which I think is true, and would be completely fine.

[52:17] Nicolas Rouleau: Immediately computer science and neuroscience departments are now the same department.

[52:23] Michael Levin: That's right. And as you said, certain material science departments as well, right? What do you see as the, again, I'm going to say percentage, but I don't mean a number, what's the prevalence of intelligence in the universe? Is it super rare and precious, and maybe the Earth is the only one? Is it a common feature? Does it have embodiments beyond water and carbon and all of that? What's your take on the whole thing?

[53:00] Nicolas Rouleau: Great question. My intuition is that it tracks almost one-to-one with whichever planets would host life. But not because I think it's just a life thing; it's because the kinds of planets that have the kinds of interactions that give rise to life would have the kinds of causal structures required for an intelligent system. And yet, I think you could maybe have all sorts of intelligent things at scales that are much larger than planets. I don't know how. I mean, we have to ask what intelligence is, and that's a whole rabbit hole we can go down. But if we're just talking about problem solving and adaptation and this kind of cognitive flexibility.

[53:54] Michael Levin: Well, you can throw in consciousness as well, right? So first-person perspective, how common is that? I mean, you choose any of that.

[54:05] Nicolas Rouleau: I think that you probably have to have, so I don't think this stuff is happening at the level of atoms. And I don't think it's happening at the level of galaxies, because I don't think there are enough units, galaxies with connections between them, to have sufficient causal structure to solve problems. So you probably do need to have something at a scale that is less than a planet to have intelligence, just because of the size of things. I think it's sort of a spatial problem, a matter of how close things are to each other. In space, things are really spread out, but on a planet, because of gravity, everything's been brought together. So if you have a planet where everything has been brought and squished together, you have the capacity for the kinds of interactions that can lead to problem solving. And then, if we're just talking about planets in the universe, now we're talking about a really small subset of the universe. And if we're talking about only the planets that are the right distance from their star to have organics and light and water and life, that's an even smaller percentage. So I think it's a small percentage of the universe, but how small, I'm not too sure.

