Conversation 1 with Mijail Serruya, Alessandro Napoli, and Wesley Clawson

Neuroscientists Mijail Serruya, Alessandro Napoli, and Wesley Clawson discuss brain-body-machine interfaces, from BrainGate and biohybrids to aging, memory, plasticity, and hypnosis as emerging clinical and conceptual tools.


Show Notes

This is a ~1 hour conversation with (including a short talk by) Mijail Serruya (https://research.jefferson.edu/labs/researcher/serruya-research.html), Alessandro Napoli (https://www.linkedin.com/in/alessandro-napoli-8383a164/), and Wes Clawson (https://allencenter.tufts.edu/wesley-clawson-staff-scientist/). We talk about brain-body-machine interfaces, from the clinical applications to the deeper conceptual connections.

CHAPTERS:

(00:00) Introductions and backgrounds

(02:15) From BrainGate to biohybrids

(16:32) Platypus-inspired cognitive augmentation

(24:28) Model systems and plasticity

(34:52) Aging, memory, and interfaces

(49:58) Hypnosis as biointerface

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


[00:00] Mijail Serruya: My nickname is Misha. I'm a physician scientist, and I have about 14 slides, but I can give most of our time just to talk. I'll tell a little bit about myself, but Alessandro, why don't you briefly introduce yourself?

[00:14] Alessandro Napoli: Alessandro Napoli. I'm a biomedical engineer by background. I did my PhD in neural signal processing. And I've been working in brain computer interface applications and development for medical devices for the past 15 years.

[00:34] Michael Levin: Great. Yeah, Wes.

[00:35] Wesley Clawson: I'm Wes, Wesley. You can call me Wes or Wesley. I'm a senior scientist in Mike's lab. I got a PhD in neuroscience doing basic research in systems neuroscience: a mix of weird in vivo rat studies with epilepsy and computational neuroscience. I have a background in electrical engineering and physics because I was going to do brain-computer interfaces, but then never found my way there. In Mike's lab, I built a system we call HAL that does closed-loop training with neural tissue. Instead of taking a human brain and trying to interface it with a computer, we try to grow weird substrates on microelectrode arrays and build software that lets us define interactions with them. That's the base of the work that I do here.

[01:33] Michael Levin: I'm Mike Levin. My group works at the intersection of computer science, which is my original training, biology, and cognitive science. I'm fundamentally interested in diverse intelligence, extremely unconventional embodied minds and all kinds of weird substrates. We study decision making and collectives of cells during morphogenesis. We study minimal computational systems. We study weird chimeras of different kinds of biology with technology and so on. I'm interested in interfaces to novel intelligences and how different minds can interact and communicate with each other and what technologies can help that happen.

[02:15] Mijail Serruya: Well, you will see, Wes, there's a lot of overlap with what you mentioned. Just briefly about me, to remind you guys: over 20 years ago, long before there was Neuralink, I helped create Cyberkinetics and the first BrainGate trial of brain-computer interfaces. I still know all the CEOs of the major iBCI companies, which now have orders of magnitude more funding to do what we tried to do 20 years ago. Here are some of them. I'm happy to introduce both of you to them if and when that makes sense. I've had some interesting discussions with scientists at some of these companies about biohybrid interfaces. One I didn't put on here: Science Corporation is working on biohybrid systems. If you didn't know about it, the iBCI-CC is a collaborative community where the FDA, CMS, NIH, people with disabilities, doctors, patients, and engineers are all involved. It's an interesting organization for working together in a pre-competitive space, as the industry people call it. I'm a physician scientist. I'm a board-certified neurologist. I did my doctoral work in the lab of John Donoghue back at Brown 20 years ago. Now I work with kids and adults who have chronic pain, cognitive symptoms, and motor impairments. Raphael is the name of our lab; it's named after the patron saint of healers, and that's what we're hopefully working on: healing. Those are our three main areas of focus in terms of trying to create devices: movement, pain relief, and cognition. We have an interdisciplinary team; that's the core team, with lots of collaborators all over the place. We'd be delighted to collaborate with you guys too. We'll see what this conversation leads to. I look at this as having multiple shots on goal of trying to help people, from the short term, right now in 2026, out to who knows what the future will bring. We have different kinds of devices that can mechanically move the arm, voice activation, really simple mechanical systems, electrical stimulation, and brain-computer interfaces. This is a gentleman who has electrode arrays in his brain. We're decoding the ensemble activity and using it to open and close his hand. Electrical stimulation is controlling his biceps and triceps, and the brain is controlling his hand. These cables are literally plugged into his motor cortex over a large subcortical stroke; normally his hand is totally paralyzed. That was a few years ago, in the middle of the pandemic. Now we're working with Precision; they have a fully implantable system. Short term, in-between term, and then longer-term living neural interface components. We're working with Kacy Cullen at Penn on living electrodes and living amplifiers, living antennae, living multiplexers and demultiplexers, to basically modify the brain for better iBCI integration. The basic idea there is that you make this collagen noodle, a rigatoni, fill it with different cell populations, implant the whole noodle, and then it biologically integrates with the brain and becomes the intermediary. That's what it looks like. I've listened to some of your podcasts and read your team's articles. I'm not sure I totally know what all the terms mean, but I guess: could we use an anatomical compiler to induce a brain port?

[06:58] Mijail Serruya: So these are placed by taking a pipette or an acupuncture needle and positioning little blobs of things. But maybe there are other tricks, using chemical baths and electric fields, to actually induce things to grow the way we want. Then we can talk about not just having a brain-computer interface to talk to a device like this, but maybe biological constructs: making some construct in your abdomen, extra bonus brain blobs that could take over if someone has a disease or injury. Then there's this idea of neural computing, which begins to overlap with what Wes was talking about: taking different kinds of specimens and using them for computing, with the idea of connecting them ultimately back to a person to restore their function. We'll talk about that in just a second. The current brain-computer interfaces are a narrow relay pipe to restore sensorimotor function. It has gotten a lot of investment because some people think it will help us keep up with artificial general intelligence or some superintelligence that we have to race against, which is a whole other discussion. Obviously, that's a very different goal from traditional medical devices, but there is some overlap. There's an alternative idea, though, which is to expand the substrate of neural processing beyond the skull, adding neural real estate. And so then the question is, what kind of processing and consciousness could that allow? Again, with the goal being to help with restoration. So again, here you have this person, you have these different kinds of implants, maybe they're purely biological, maybe they're purely synthetic, maybe they're a hybrid, and then they connect to neural tissue somewhere in the abdomen, or they have an external wireless system and it can talk to neural tissue in a dish. What does this hybrid system look like? It's a cross between a seeing eye dog, a digital biological twin, and a third cerebral hemisphere. Something that allows the brain to expand its function, but ultimately to have an assistive function. This tries to reframe the way a lot of brain-computer interface language works, with its engineering and physics focus on the number of channels and the number of electrodes, and instead asks: how do we do the translation? Focusing on that, rather than saying we need to up the number of channels and then we're supposed to get some magical benefit. That has some overlap with your lab's perspective of thinking about diverse intelligences and trying to talk to systems the way they want to be talked to, with concrete, testable ways of mapping that out. One way to think about this is the mammalian architecture, and this comes from Max Bennett's "A Brief History of Intelligence" and its five breakthroughs. Even before reading that: there are repeating quasi-crystalline modules. We have the hippocampal lamellae, cortico-basal ganglia loops, Mountcastle columns, the canonical microcircuit, and thalamocortical circuits. Max's point is that the big difference between a chimpanzee or bonobo and a human is that we have more of these; their basic architecture is unchanged, we just have a lot more. That raises the question: what if you added more to us? With the rationale being that if someone has a stroke or a degenerative disease or multiple sclerosis or a brain tumor that has to be cut out, or another brain injury, and you start having lots of ports, you could actually give these back. And then what does that look like?
How do we, rather than waiting millions of years, quickly converge on something that's actually useful to that person? This is the citizen science gaming platform that I mentioned in the e-mail, where there may be some overlap and possible collaboration. The idea is, can we use players or AI, automatically or in some combination, to find optimal input-output parameters through virtual white matter, which is simply recording from one tissue and using it to trigger the other? Wes, you talked about this on one of your podcasts with Foresight. We have a system that does the same thing: virtual white matter. People have had versions of this in the past, including in implants. The systems are agnostic as to what the neural tissue is and where it is. Then we can ask about enactive sensorimotor transduction; by "enactive," I'm using the term from Evan Thompson. We can also use other signals, ones having to do with reinforcement and modulatory signals. I know you've had your YouTube sessions with Karl Friston, who has worked with Cortical Labs, and they've talked about reinforcement signals as tonic versus stochastic. But every group that does this has only so much time, and there's actually a huge parameter space. So the question is, can we create a platform where we can actually look at a lot of things?
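
A minimal sketch of the "virtual white matter" loop described above: record activity from one tissue and use it to trigger stimulation of another. The acquisition and stimulation calls (read_spike_counts, deliver_pulses), the linear mapping, and all parameter values are hypothetical stand-ins for illustration, not any specific lab's API.

    import numpy as np

    rng = np.random.default_rng(0)

    N_A, N_B = 16, 8                        # recording sites on tissue A, stim sites on tissue B
    W = rng.uniform(0.0, 1.0, (N_B, N_A))   # one candidate "tract" out of a huge parameter space
    W /= W.sum(axis=1, keepdims=True)       # normalize so outputs stay bounded

    def read_spike_counts(n_channels):
        """Stand-in for a real acquisition call: spike counts per channel per bin."""
        return rng.poisson(lam=2.0, size=n_channels)

    def deliver_pulses(amplitudes_ua):
        """Stand-in for a real stimulator call: one pulse amplitude per site."""
        print("stim (uA):", np.round(amplitudes_ua, 1))

    MAX_UA = 10.0                           # assumed safety ceiling
    for _ in range(5):                      # one iteration per recording bin
        counts = read_spike_counts(N_A)
        amps = np.clip(W @ counts, 0.0, MAX_UA)  # activity on A triggers pulses on B
        deliver_pulses(amps)

The search problem Serruya describes is over mappings like W and the pulse parameters behind deliver_pulses; the relay loop itself stays this simple.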

[11:42] Mijail Serruya: Can we actually use the platform as a common embodiment framework so you can compare different things? You could have different kinds of specimens, or you could get rid of the biological specimens and put in computational systems. It could be a perceptron or an expert system or a simple microcontroller. Or you could use the things you work on: gene regulatory networks, Volvox. As long as you have a compiler or a transduction of input-output, they can share a platform. The platform imposes its own semantic interpretations that could distract us, but even if you are making some distortions and reducing the dimensionality and complexity of things, this leverages our primate ability to understand these ecological situations. Then you can look at the comparative advantage on different tasks in this world: speed, scaling, abstraction, memory duration. To try to understand their language, and see if by training them you can expand their cognitive light cone. We can compare letting this thing run by itself versus human-guided optimization. We have this vast parameter space, and we only have so much time with a human or an animal who's implanted. We have the amplitude, the duration of pulses, the shape of the pulses, frequencies, bursts, phasic or tonic. If we have sensory signals, how do we map something in the virtual sensory world or, if it's a robot, the physical world into the system? How do we use reinforcement signals? This is an example of a potential toy system. This could be an organoid or an aggregate of neurons. Let's say this aggregate has dopamine or serotonin or acetylcholine. You could play around with the time-varying characteristics and stimulation for long-term depression or potentiation as purely electrical, or you could stimulate and drive dopamine. You could have other signals that have to do with an error bit or a multiplexing channel select, and we have a grad student working on that right now. This is from a conversation with Konrad Kording over at Penn. If I'm stimulating an input here and we call that X1, and we stimulate here, X2, and then we record from Y, we have a simple linear equation. Maybe I should stimulate dopamine to help solve that regression and see what this tissue can do compared to a pure in silico system. Another parameter to think about is the endogenous activity. Going back to Sherrington more than 100 years ago: if he stimulated the exact same spot of the monkey's brain with the exact same parameters, he got totally opposite responses. He concluded that this reversibility and behavioral contingency is a property of the cerebral cortex. Living neural systems have incredible hysteresis and spontaneous endogenous activity, such that identical stimuli have totally different effects. This is a huge space to study. My impression is that Alessandro, in his doctoral work, did some stimulation of in vitro human neuronal networks, but overall the history of this is that people take square-wave biphasic pulses, which in general look nothing like how the body talks to itself, then take a tiny spot in this giant parameter space and study the heck out of it. Often they cook the tissue. We know that if you use these stimuli in a person, you can induce reliable percepts. There's a lot to be learned about how to optimally talk to this system. This is one virtual environment we're working on.
We're trying to find something that will be more engaging than some of the other citizen science games out there, like Foldit, working with a colleague who's a game designer. The idea is that by having people play the game, we're exploring those parameter combinations and discovering various functional mappings for the specimens. This is a crowdsourcing approach to complement, Wes, what you're doing to develop a bioelectric programming language. Rather than just two labs in the United States, you can work with FinalSpark in Switzerland or others that have just a handful of specimens, or partner with philanthropy or big pharma that has gymnasiums full of thousands or tens of thousands of organoids, and search this space and combine them in different coalitions. I could either take a pause there, or keep going and run through it and finish it up. Any preference?
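
A toy version of the Kording-style probe Serruya describes in the previous turn: drive two stimulation inputs X1 and X2, record a response Y, and fit Y = w1*X1 + w2*X2 + b by least squares. Everything here is synthetic; simulate_tissue and its coefficients are assumptions standing in for recorded data.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_tissue(x1, x2):
        # Assumed ground truth plus noise standing in for endogenous activity
        return 0.8 * x1 + 0.3 * x2 + rng.normal(0.0, 0.5, size=x1.shape)

    x1 = rng.uniform(0, 10, 200)        # stimulation amplitude at input site 1
    x2 = rng.uniform(0, 10, 200)        # stimulation amplitude at input site 2
    y = simulate_tissue(x1, x2)         # recorded output

    A = np.column_stack([x1, x2, np.ones_like(x1)])
    (w1, w2, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ np.array([w1, w2, b])
    print(f"fit: Y = {w1:.2f}*X1 + {w2:.2f}*X2 + {b:.2f}; residual std = {residual.std():.2f}")

The open experimental question would then be whether a modulatory signal such as dopamine changes how well, or how fast, real tissue settles into a mapping like this; the code only frames the measurement.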

[16:27] Michael Levin: You can run through it. Keep going. I'm just taking notes of things to talk about.

[16:32] Mijail Serruya: The platypus inspiration. In dealing with these folks who have severe impairments, and wondering how we're going to reconstruct their brain in a way that's not just a single sensorimotor port, I'm trying to see if there are principles we can learn from other creatures. Here we have platypus electroreception: a modified input to the trigeminal nerve, the same nerve at work when a child strokes their mom's face. It's a novel sensory organ on the outside, these mucus-lined electroreceptor structures in the bill that plug into the trigeminal nerve, which has the same architecture as it does in most other mammals. They can leverage these thalamocortical loops to process those electric fields like they would anything else. They learn to interpret those signals as they're swimming around the muddy rivers of Australia and grabbing shrimp that generate those fields. The question is, can we engineer transduction organs for abstract data and have dedicated thalamocortical modules, which we could grow or model in silico, and then integrate them with equivalents of the basal ganglia, cerebellum, and hippocampus, such that we can start having direct perception of abstract items? What any scientist does is take the narrow bandwidth of language and then build our own models internally, which is very different from perceiving something directly. So the question is, is there an advantage, if you interface with the human brain, to creating an actual transduction organ and then adding extra cortex? You have eyes and ears going to different thalamic nuclei, and you could take abstract data of different flavors and then create its own virtual relay nucleus of the thalamus and its own virtual cortex. You could connect it to the brain, aiming for areas of the human brain that are already multisensory; I chose the angular gyrus and the pulvinar as natural candidates. In one of your talks you mentioned the liver doesn't live in 3D time; it lives in a chemical concentration space with gazillions of dimensions, like pH and cytochrome concentrations. We are biased by our primate evolution. We can try to cognitively bulldoze our way through with intellectual tricks to remap things, but could we transduce it directly? Could you feel the liver's state space like proprioception? We're already doing this, but what if we made it literal? We could ask the question, what's the difference between outside-in and inside-out? We don't know. In certain cases where a person has damaged their original system, could there be an advantage to giving this back to them? The idea is that you have a human brain, it could entrain these cultures, the tissue learns human-like organizational patterns, and maybe it inherits the state space. Meaning, if this neural tissue in vitro is reciprocally connected to an in vivo brain continuously, maybe it can arrive at a neural state space that it wouldn't arrive at otherwise. It could function as extracranial cognitive support, helping with memory, perception, and motor function, and it gets connected for a therapeutic need. That also leads to an interesting question: if this person is now interlinked with this and you then lose the connection, could these things still preserve some interesting abilities, become an autonomous computational agent, and carry their learning forward? Clinical neural twins to test therapies, edge computing, human-compatible priors. Anthrobots. These are questions for discussion, maybe a future discussion.
As a physician scientist, I'm always on the lookout for things that will help my patients who are living with pretty severe impairments. Anthrobots, I know, have shown some interesting rehabilitative abilities.
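
One way to read the "transduction organ" idea above concretely: compress an abstract, high-dimensional state into a few bounded channels that a sensory interface could carry. The random projection below is only an illustrative placeholder for whatever engineered or learned mapping a real transducer would use, and the "liver state" numbers are made up.

    import numpy as np

    rng = np.random.default_rng(2)

    D_ABSTRACT, D_CHANNEL = 100, 8      # abstract dimensions -> stimulation channels
    P = rng.normal(size=(D_CHANNEL, D_ABSTRACT)) / np.sqrt(D_ABSTRACT)

    def transduce(state):
        """Project an abstract state onto bounded per-channel intensities in [0, 1]."""
        z = P @ state
        return 1.0 / (1.0 + np.exp(-z))  # squash so it could safely drive stimulation

    liver_state = rng.normal(size=D_ABSTRACT)   # stand-in for pH, cytochromes, etc.
    print("channel intensities:", np.round(transduce(liver_state), 2))

The platypus analogy is that the projection plays the role of the bill's receptor array: a fixed front end whose output the downstream circuitry then learns to interpret through experience.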

[20:29] Mijail Serruya: Could they help rebuild a cortical stroke? I'd be curious how you guys think about Max's breakthroughs of taxis, reinforcement, simulating, mentalizing, and speaking, given your broad, diverse view of intelligences. He was very clear that he was aiming for human intelligence, but these seem to be the same cognitive principles you guys talk about. An ionoceutical iBCI, to complement what Kacy and Alsan and I work on. This is a picture of a blob of myocytes and a blob of motor neurons implanted into motor cortex. Maybe there are ways to learn from what your lab has developed to induce a transduction port, a biological USB. As a physician scientist, I've spent my life doing first-in-human pilot trials and looking at novel therapies for patients. Whatever is available in the near term, I'm happy to work with it and find the right use cases. Another short-term opportunity: patients with stereo-EEG have spatially widespread stimulation patterns. We can interweave those with sensory inputs. We can play their electrodes like a chord, and we can coordinate that with precise sensory and behavioral context and adapt it based on the activity. An idea is that we can link this participant's brain to auxiliary neural tissue. A lot of the brain-computer interface companies are focused on paralysis, so they're going into primary motor cortex. It turns out many of them are still interested in these higher cognitive functions, but implanting electrodes in someone's brain is a big feat, and how do you get there? We can piggyback on the fact that we are already doing this all the time. There are many hospitals all over the planet, certainly in Boston and Philadelphia, with patients with refractory epilepsy who can get up to 15 of these depth electrodes, each with anywhere from 8 to 10 contacts or more. Some have 30 electrodes, 100-plus contacts throughout their brain: broad coverage. They have this for several weeks. Most people have treated this as whack-a-mole, where they'll stimulate one spot at a time in an almost neo-phrenological correlational map. If you look at it as an overall system, your lab's view of looking at systems in a broader, dynamical way could open up an opportunity here that no one's really leveraged. The idea is that you could then have this whole brain talk to different kinds of neural tissue, including what's in the dish. It also leads to a potentially interesting idea of what I would call a bow tie kiss. Mike, this is referring to your idea of the latent space as the middle of the bow tie. You have encoding and decoding. In the human brain, you have the fusiform gyrus, and you have a gradient from perceptual to conceptual, fine to coarse. You could wonder how you might take transformer middle layers and interface them directly there, so a person could feel what the middle layers and residuals are doing in a transformer model, and leverage what evolution has given us in terms of these mappings. Just an idea. Can we reset the brain-body setpoints of pain and depression? Can we build a biohybrid construct that is itself a compiler? Can we combine AI-driven and human crowdsourced science to help map the neural interface states? Can we reciprocally connect a person's brain to additional neural tissue to expand their light cone beyond their skull? Your framework, our platform: could we make a biodigital Rosetta stone? That's what I've got. Let me see if I can stop sharing.

[24:28] Michael Levin: Alessandro, did you want to say anything before we get rolling?

[24:39] Alessandro Napoli: No, I think it's a lot. Let's see if we want to talk about some details or you guys have questions or if there is anything you guys want to talk about.

[24:50] Michael Levin: I've got a bunch of stuff, but maybe Wes, you go first.

[24:58] Wesley Clawson: I know one of the things you're going to say, and it may be better to piggyback off of it. So I'll let you start and then I'll jump in and interrupt you.

[25:06] Michael Levin: A bunch of thoughts that I had. First of all, this notion of the noodle that you showed at the beginning. One of the things that I've been kicking around is this idea that you need an impedance match between the tools that you're using and the thing that you want to connect to. I like what you said very much, because having a kind of agential interface to the system, having the front end of your thing be a living agent, I think that's very smart. We're doing some of that with Xenobots, for example, using them as the front end of a sensor array that will connect to ecosystems. I do think that we could try building some things like that, as you said, maybe using some of the morphogenetic control handles that we have to try and grow something appropriate. Anthrobots are a great component. We don't know yet almost anything of what they can do; we've only seen one or two things. We have a project with David Kaplan to find out. He's got these in vitro brain constructs, right? These pucks that grow all kinds of stuff. So we're gonna find out what the Anthrobots can repair in that context. But if you have in vivo models, this is great. What do you have access to? Rats? Is there an immune issue, or can we put human Anthrobots in there and have them live?

[26:41] Mijail Serruya: We work with Kacy Cullen, who runs a neural tissue engineering lab at Penn. Mostly rat models; they have some other species, and mostly these are just wild-type creatures. In addition, at Jefferson there are groups that have animal models of certain diseases, like stroke, MCA occlusion, where they have experience putting in stem cells and things like that. Between the two sites at Penn and Jefferson, there are many models where people are putting in modified cells and tissues.

[27:24] Michael Levin: There's no issue with immune rejection, or how do they handle it?

[27:32] Mijail Serruya: Again, it depends what it is. Yes, that can be an issue. My recollection is there are a lot of different protocols; I don't know them all. I think most of the animals are mildly immunosuppressed, but in other cases there are autologous cells, so it's not too much of an issue. But the proof is in the pudding: the animals are healthy and integrate these. I'd have to dig into the details of the various protocols. That is not trivial, and I don't know the details.

[28:13] Michael Levin: It'd be cool to talk about that and see what's possible. Another thing that I want to talk about is what you mentioned about the third hemisphere: expansion, cognitive augmentation. That's very interesting to me. I have two points there. One is we do a lot with the amphibian model. We showed years ago that if you make tadpoles where the only eyes are on their tail, they can see perfectly well, they can get around, and we can train them in behavioral assays for vision. The eyes do not connect to the brain; at best they make an optic nerve, sometimes they connect to the spinal cord, sometimes the gut, sometimes nowhere at all, and they can still do it. The plasticity is incredible, because you don't need new rounds of mutation and selection; it works out of the box, it just works. I'm curious what you think the prospect is. Developmentally, in models that we have in the lab, even a warm-blooded chicken, we can make a third hemisphere, no problem. But what do you think is the level of plasticity in the human? My amateur knowledge of this is that, for example, in people who lose sight, sound processing takes over some of that real estate. If we already have in our system the ability to take over new real estate when you get it, what do you think? Can we actually just keep adding, or is that going to run out? What's your prediction?

[29:48] Mijail Serruya: I think that if we can nail the interface, the brain will use it; it will just happen spontaneously. The key is that it's not going to be like flicking a switch. Creatures, all living things, have to have some experience in the environment, in their own body, for things to map up and to learn the covariance statistics of the world. But given that opportunity, they'll get there. They'll use it. If it's available, the brain will exploit it.

[30:23] Wesley Clawson: If I could jump in to make sure that I've got your stance, especially when it comes to humans and patients, which is awesome: given access to appropriate real estate, the brain will take it over and can use it in a meaningful way. It'll take some training. The big issue is the communication, the interface between the human, the object, and the third thing. And so your thought is that by developing some crowdsourced citizen science platform, that will partially solve the interface problem, because players as a group will search the space better than individual labs. Does that sum up the...

[31:12] Mijail Serruya: That's definitely one of the ideas.

[31:15] Wesley Clawson: I wanted to make sure I was connecting all the pieces together before.

[31:20] Alessandro Napoli: We also have an AI-based approach toward that solution. In addition to the crowdsourcing, where people are all different and could each be trying to do their own thing, so you are exploring a lot of different outcomes, we are also thinking about doing this using statistical computational approaches, in a more mathematical and rigorous way.

[31:53] Wesley Clawson: I'll write down my thoughts, but go ahead.

[31:58] Michael Levin: I think that's great, and it's complementary to the regenerative kinds of things that we're working on. Cranking up a regenerative response in an area, which is basically a rise of plasticity anyway, in the search for a new path to a functional system, and then putting in something, whether the interface is some kind of an ectopic corpus callosum or some other route, I think could be a very effective interface for this kind of thing. We can do a ton of stuff in amphibians, but I think the big question is going to be what happens in mammals. And so working up a mammalian model in which we can try some of these things would be really valuable.

[32:59] Mijail Serruya: There are all kinds of experiments in humans happening already, just by virtue of clinical care. Vagus nerve stimulation is being used for stroke recovery. I don't know if anyone fully understands what's going on, but the basic idea is that they're driving afferent signals into the vagus nerve, which makes various brainstem nuclei go bananas, including the locus coeruleus. It's not just about making the connection; it's not just about having the brain talk to this third hemisphere. It's about having the environmental stimuli all line up so that the brain ends up exploring that space and using it. The occupational therapist has to hit the buzzer at the exact right time, the locus coeruleus barfs out its norepinephrine, and it turns on the basal forebrain acetylcholine at just the right moment as the person is trying to do this, so that these synapses floating around in the penumbra of the stroke suddenly get strengthened. That's the idea. If the skilled occupational therapist doesn't have those settings right, then this goes nowhere. It undermines what they've taught in medical school for years, which is that if you're a year past the stroke, you're not getting any better; suddenly we're unmasking all these things with what's essentially an electrical ice bucket challenge. What's going on? That's a very blunt instrument: you're hitting the brainstem and the whole reticular activating system goes bananas. Presumably, if you had something much more precise and specific, and actually added new tissue that the person doesn't even have to begin with, that could open up a lot of doors.

[34:52] Michael Levin: I think there are opportunities for that. We're working on that, hoping to crank up neural proliferation and plasticity in general, morphological plasticity. I've been polling relevant people on this: if that were solved to the point where you could either reverse or prevent aging, do you think the kind of changes that human cognition undergoes in old age are a software problem or a hardware problem? In other words, if the brain were young again, would we all become mentally flexible, or would we still be grouchy?

[35:47] Mijail Serruya: That's right: we lose fluid intelligence and we gain crystallized intelligence as we age.

[35:55] Michael Levin: Yeah.

[35:56] Mijail Serruya: Right, so would we become these ossified dictionaries? I've certainly read some of the literature where it makes it sound like if you replenish your microglia or take the cerebrospinal fluid of your great-grandchild and infuse it into your own brain, you'll have found the fountain of youth. But on the other hand, even with all the turnover of proteins, cells, and components, synapses as a gestalt are preserving memories from 100 years ago, right? People can remember things vividly. That question has come up in some surprising situations, with venture capitalists getting concerned about this. Meaning, are we going to be in a situation where we have bodies that are essentially 18-year-olds but with demented brains, so that we can't enjoy it? I think one solution is to add new neural real estate and actually leverage the way that the brain rejuvenates itself, communicates with itself, and broadcasts information to itself, and see if we can build on that. I suppose if one were really cynical, you could say that as the brain is degenerating, if you put a bonus brain in your abdomen or something, you could transfer things over as one degenerates. Again, this already happens naturally in someone who has a degenerative disease or someone who has a stroke. The plasticity of the brain will automatically leverage whatever it has. And then you seem to hit some phase transition, the straw that breaks the camel's back, where suddenly the person looks like they have a massive collapse. I see that all the time clinically, where someone will have Alzheimer's or a vascular dementia, and they seem like they're doing okay, and then suddenly they crump. You realize that they've been running a marathon of trying to compensate with all the other residual circuits. I don't know if normal aging would match that, because humans can only live so long. I think if there's more neural real estate, the brain will use it, as long as it can be calibrated to the rest and is behaviorally useful.

[38:23] Michael Levin: In the amphibian systems that we have, we can add brain as much as you want. We can certainly put in extra brains or induce extra brains anywhere you want. We have another model system for memory movement, which is planaria. You can train planaria, then chop off their heads, which includes the centralized brain, and the tail will sit there and not do anything until it grows a new brain. It grows a new brain, and it still retains its memories. So the memory moves: wherever it is, and we don't know where it is, it's got to get imprinted onto the new brain as the new brain develops. So we know the information can move around. I don't know how far beyond planaria that goes, but my suspicion is that it's universal; we just haven't figured out how to activate it in these other systems yet. I think there's a lot of very fundamental work to do in planaria and in Xenopus and things like that. But ultimately it would be really cool to have a mammalian system.

[39:29] Mijail Serruya: Yeah.

[39:33] Michael Levin: What do you say, Wes?

[39:37] Wesley Clawson: A lot of things. Some of the amphibian stuff I'm far from, but I wiffle-waffle on some of this, because I understand the desire to figure out the best way to do it. This happens a lot; inevitably, it seems like you deal with them as well: venture capital people show up and say, "What's the best way to do this thing, so we can sell it?" One of the big things is how best to make a BCI function, and it's always about this interface: how do we read out the thing? Especially with the two of you, we've hemmed and hawed about how everything is so plastic. If it's there, you'll use it. I'm not sure the interface matters; you just have to make sure it's not destructive. If you reject an organ because of an immune response, then for sure it's not useful. But it might not matter how the things communicate in the beginning. If the communication channel is fixed, let's say you put in a third brain or some external thing, the brain will use it in some meaningful way. What's interesting is studying the phase transition from when it goes from two things to one thing. If you haven't learned to couple with it, you're two things. Eventually it can couple. Even with AI, and giving everyone a game where they can participate, you still won't solve the parameter space, because you're going to have degenerate solutions; they'd all do different things. What's more interesting is, if you have two systems that you would like to interact, and they have some form of agency, and most of the ones we're discussing do, letting them control the communication channel. That's an interesting path forward that I would be keen to work on. I'm not clever enough to understand the brain, but the brain seems to understand itself quite nicely. It works all the time, and I never have to think about it. If you want to induce a new behavior from an external perspective, think of training an animal: you train people all the time, you train a dog all the time, with no understanding of how it works. You could claim that, over a long period of time, we roughly crowdsourced the best way to do it by pairing it with food. But the dog also trained us; the two systems that were separate worked together to find the best communication scheme. There wasn't an external person searching a parameter space of how best to interact with the dog. They just let the two things work together. The dog only has so much agency, I only have so much agency, but we managed to work it out. One really interesting take on BCIs for me: I've always wanted to do translational stuff, I just never really had the opportunity, and I find myself now in the basic science realm where hopefully someone clever can make it more translational. Because I work in tissue culture, a lot of my work, the things that I'm interested in on a five-year plan, is not necessarily how best to engineer something, but how to engineer a setup where the systems can engineer themselves. I realize I've been talking for a long time.

[43:23] Mijail Serruya: No, no.

[43:24] Wesley Clawson: That's the take I always have: I'm not smart enough to do this. You all seem very brilliant, but I'm lazy and not very smart. If I could engineer a system where the other things could do the work for me, that would be cool.

[43:41] Mijail Serruya: I think, as physicians, we always want to lean into the brain's and the body's ability to heal, and this idea of making it good enough rather than optimal is right on. Cochlear implants are essentially like playing a piano with boxing gloves, and yet it works. If you put in a cochlear implant, even the early single-channel ones, these kids could understand speech and do things that seem impossible; they can get enough data out of this impoverished signal to pull it off. But I think the key is that, just as humans and dogs have been training each other, occupational and physical therapists and other kinds of rehab have been working with humans with strokes and other conditions for a hundred years, and they've only gotten so far. The question is, once we have these little ports, what can we do with them to really unlock those abilities? Part of my concern is that, except for very constrained use cases, it'll be hard. It's one thing to say, "I'm going to put in a plug and now I've got a cursor." Now I have a person who has a complex aphasia. Lots of parts of their brain are damaged. They've already done a hundred years' worth of speech therapy and other therapies. I know how to do hypnosis, and I use it, with the rationale that I'm going to go as far as I possibly can without drilling a hole in the head. Once you hit that barrier, you need something new. The question is, once you have that, what do you do with it, and how do you quickly converge on something that's useful to the person?

[45:33] Alessandro Napoli: Can I jump in for a sec? Wes, what you said is very, very interesting. We didn't get into the details of the AI-based stuff, what we want to do and what we want to do it for. But one of the main reasons is, let's say you can grab these two systems, have them talk to each other, and let them figure something out. Maybe you're augmenting or restoring something that you didn't even think about before the experiment, because now that's where they're converging: "I really want to do this thing. I really want to talk about this thing. Let's do this thing." That's great. That's definitely on our radar. The main issue is that if you do this in biology, it's fine: those specimens kind of talk the same language already. You just have to make sure they grow together and make those connections, and they're going to talk to each other. Then you can study a lot of things. The issue we're seeing already is that when you want to interface any kind of computer with the biology, they don't speak the same language. We can look at all of these spikes and we can interpret them, which is fine, but how are you going to send anything back in? Are you going to send back spikes? Or are you going to send back pulses? What kind of pulses? How many pulses? Where, when, how? And are these pulses disrupting the natural physiology of the tissue more than talking to it? You're basically shooting at it. As part of those experiments, we are looking for that communication interface: how do I talk to this thing in the first place? Once you have that map, you can leverage it, because ideally you can say, "These are all the things that I can talk to you about. These are all the tricks this dog can learn." Let's see if the dog actually learns the tricks, and then there's going to be another dog that might learn different tricks, with a different mapping.
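
A sketch of the mapping step Napoli describes: sweep a small grid of pulse parameters, repeat each stimulation, and keep only the settings whose evoked responses are reliable enough to serve as "words" in the vocabulary. The evoke function and all of its numbers are synthetic assumptions, not measured tissue behavior.

    import itertools
    import numpy as np

    rng = np.random.default_rng(3)

    def evoke(amplitude_ua, freq_hz, n_trials=10):
        # Assumed tissue: some settings evoke consistent responses, others are
        # dominated by endogenous noise. Returns one response per repeated trial.
        signal = 0.05 * amplitude_ua * np.log1p(freq_hz)
        noise_sd = 1.0 if freq_hz > 100 else 0.1
        return signal + rng.normal(0.0, noise_sd, size=n_trials)

    vocabulary = []
    for amp, freq in itertools.product([2, 5, 10], [20, 50, 130]):
        trials = evoke(amp, freq)
        if trials.std() < 0.3:          # reliable enough to act as a "word"
            vocabulary.append((amp, freq, trials.mean()))

    for amp, freq, resp in vocabulary:
        print(f"{amp} uA @ {freq} Hz -> mean response {resp:.2f}")

As in the dog analogy, the vocabulary would differ from specimen to specimen; only the screening procedure carries over.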

[47:38] Mijail Serruya: Or it's not even a dog; it could be a person with stereo-EEG who can subjectively tell us.

[47:42] Alessandro Napoli: That's always nice; that's the bummer with dogs. But the point is the same: you have to figure out first what the specimens can do and understand and talk about. Then you can put two together, three together, four together. You can have a whole brain and the specimens. You can have amphibians and the specimens. At that point, the sky's the limit.

[48:07] Wesley Clawson: Yeah.

[48:08] Alessandro Napoli: Because you can make those connections in a meaningful way. They are talking the same language. They're leveraging the correct communication channels.

[48:16] Wesley Clawson: It's a really interesting problem, and it seems like we're on the same path. The nice thing about the work that Mike does, half the reason I came, is that there are so many different systems. It comes down to agential engineering in general. Whether it's an amphibian, a patient, a dog, something synthetic, or a computer that has a mild amount of agency: when you want to engineer something that has an agential component, all the rules change, and that's the hard piece. I definitely get what you're saying, and it's not an easy problem.

[49:00] Mijail Serruya: But I think, yeah, go ahead.

[49:02] Michael Levin: No, please, you finish.

[49:03] Mijail Serruya: When we have humans who have these implants, one of the things that's always tempted me as a physician scientist is that you can sometimes do things faster with a human, because you can query them both inside-out and outside-in. The question is, can you leverage that to make a flywheel, to accelerate these things and converge faster? That way, if you have in vitro things being complemented by an in vivo interface, the person is also giving you some insight.

[49:42] Wesley Clawson: That's incredible. We just don't have access to humans; at the moment, in our lab, I have access to tissues.

[49:52] Mijail Serruya: If you work with us, you have access to humans.

[49:54] Wesley Clawson: Yeah, there you go, there you go, there you go.

[49:58] Michael Levin: I wanna follow up on the hypnosis business.

[50:03] Wesley Clawson: I knew you would when he said it.

[50:07] Michael Levin: Do you know the hypnodermatology story? It was Albert Mason. I'm really interested in being able to communicate across levels. He was able to, and people still do, give prompts to skin cells. I'm interested in that aspect of it, and I want to hear what you have to say about it. The part that's underdeveloped, at least as far as I can see, and I haven't seen anything on this, is the opposite: using that interface to get information out of the system. It's one thing to say, okay, I want you to have more of this cell type in your skin, but it's something else to be able to ask, how's your Wnt pathway doing? How is the inflammation? How is the pH of your basal lamina? Since you do hypnosis: what are the prospects of using it to get actionable information out of tissues, of using the linguistic interface to pull physiological information out of the body?

[51:17] Mijail Serruya: That's wild. I definitely haven't thought of it quite that way. I use clinical hypnosis as a clinical tool to help my patients and myself and whoever wants to learn. I find it incredibly powerful and useful, day after day in clinic, teaching people self-hypnosis to help with pain relief and sleep and executive function and all kinds of things. I'm familiar with the work on the dermatology side, and I've certainly read the literature on profound changes in cell counts and cytokine cocktails, though I haven't looked at it recently. In my MD/PhD hat, my clinical hat, I use it as another tool, another arrow in the quiver. Could one design an experiment to really dig at that? Absolutely. I could think about how to do that, especially if you combine it with things like biofeedback. The body is all interconnected; if I'm modulating it here, eventually information can be transmitted, and we can see that. Historically, neurology and psychiatry were one field, and so hypnosis was both a treatment and a diagnostic: let me see what is rapidly reversible through these different suggestions and ideomotor communication. By seeing what is and isn't reversible, it helps you map out what's actually happening in this patient. But in terms of doing what you're talking about, speaking to these multiple levels and steering them somewhere, I haven't really thought about it. I'm a follower of Milton Erickson, who himself operated at this wizard-like unconscious level without really thinking about it; the body's just going to deploy the instructions, however mysteriously it does. Our job is just to communicate the high-level instruction, and how it unfolds, I don't know.

[53:20] Michael Levin: Who's this? Because this sounds very on the nose for our stuff. Say again who that is?

[53:26] Mijail Serruya: Oh, Milton Erickson, the father of American hypnosis.

[53:29] Michael Levin: Because what you just said, I need to read up on this, because this is exactly my shtick about regenerative medicine. When we induce an eye or a limb or something else, at least in the cases where we can do it, we've learned to give a very minimal prompt. I have no idea how to build an eye or a leg: hundreds of thousands of gene expression changes need to happen, stem cells, all that stuff gets taken care of. I'm able to say, build an eye here. And to the extent that it's convincing, and it's not always convincing, you have to get the cells to take up the set point. But once they do, they handle the downstream molecular biology. We don't need to know it. We don't need to worry about it. So I really think there's a deep parallel here. That's our whole thing, right? Showing the symmetries between morphogenetic intelligence and behavioral intelligence. That kind of multi-scale communication, where you can talk to the system at the highest level and then all of that gets transduced down to make the chemistry dance, is hugely powerful.

[54:36] Mijail Serruya: To me, it's very linked to brain-computer interfaces, in the sense that I want to push someone's brain as far as I possibly can: my brain to their brain, a Vulcan mind meld, without me having to actually drill a hole in their head. But then, as a physician scientist, I still hit a wall. I do things that surprise my colleagues, who say, "What?" Dentists and OB-GYNs used to do hypnosis all the time in the 50s. Yes, they did; it works, and here's the science. But at some point you hit a wall and you say, all right, this person's ALS is still progressing. This person's aphasia is still a problem. Now my language is no longer enough; I need some other prompt to communicate to this tissue. But I think they're very interwoven, not separate. I say this all the time at the BCI meetings: it's not enough just to stuff the thing in the brain. You have to also be talking to the person and training them in many ways; otherwise, it's useless. It has to be embodied and contextualized. Then you can leverage whatever that person actually has left, and it will unfold. So, that combination.

[55:49] Michael Levin: Do we know that you need more than talking? For example, if in hypnodermatology you can talk to the system to get changes in skin cells, have you or anybody else tried doing an implant in a human and then using hypnosis on top of that to grow some connections? I don't see why, if you can talk a new cell type into the skin as in that original study with the kid with ichthyosis, we couldn't get better integration of implants and prosthetics if we knew what we were doing. Has anybody tried that?

[56:36] Mijail Serruya: I don't know. I'm willing to try it.

[56:38] Michael Levin: That might be a thing to try. I really want to use what the body's good at, which is transducing super abstract, high-level mental goals down into the chemistry of voluntary motion. If we can do that to muscle or skin cells, why the hell not? Let's try and improve the interface.

[56:58] Wesley Clawson: I think it's also interesting, Mike, we've talked about this before: when can a system tell that it's being read from and observed? Does the behavior change? It would be very cool to try. I'm not super familiar with hypnosis, I just have a note to Google Milton Erickson, but what if you could ask, outside of me talking to you, are you being observed in any way? For example, if you put an EEG cap on them, blood pressure is different; but rather than saying, yeah, I feel pressure on my arm, could they unconsciously know what you're looking at? If you had a glucose meter in them, would they know what you're looking at, and how? That would be really interesting with these implants, because they might say, "Oh yeah, implant, you're looking at my brain activity," but sometimes internally it's represented in a different way. They might respond with some weird abstract answer that could help understand it. So I think hypnosis and BCI have a potentially very cool line of investigation there.

[58:06] Mijail Serruya: I'm happy to try. I have tons of patients who are in need, and we have clinical scenarios where people already have wires in their head. It's a matter of articulating the question properly and then knowing what we want to measure and look for.

[58:17] Michael Levin: Let's design something because you've got both sides. It's not that common to find somebody with both sides of that equation covered, which it sounds like you have. And I think we have some formalisms from the morphogenesis side that might be useful.

[58:32] Mijail Serruya: Okay.

[58:34] Michael Levin: Yeah.

Mijail Serruya: You might find that very interesting.

[58:36] Michael Levin: I think so. Yeah.

[58:39] Mijail Serruya: That's a hypnotic suggestion.

[58:41] Wesley Clawson: Yeah.

[58:42] Michael Levin: I've already absorbed it. It's already taken.
