Conversation between Adam Omary, Roy Baumeister, and Michael Levin

Adam Omary talks with Roy Baumeister and Michael Levin about collective intelligence, self-organization, and how emergent group minds and multi-scale order may inform understanding of social and economic systems.

Show Notes

This is a 1-hour discussion between Adam Omary, Roy Baumeister, and Michael Levin on the topics of collective intelligence and possible relevance to economic/social issues above the level of the single individual.

CHAPTERS:

(00:00) Introductions And Self-Organization

(07:50) Emergent Collective Cognition

(15:46) Boundaries, Bodies, And Plasticity

(27:16) Fuzzy Selves And Consciousness

(36:15) Cellular Minds To Cosmos

(43:34) Group Minds, Selves, Freedom

(56:29) Synchronicity And Multi-Scale Order

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Michael Levin: What do you guys think?

[00:02] Adam Omary: Okay, cool.

Roy Baumeister: All right, got it. So where shall we start?

[00:11] Adam Omary: I could introduce what I think we have as common ground. So Mike is a biologist. He does very technical work on bioelectricity and how cells intelligently communicate and self-organize into complex systems. Roy is a social psychologist, one of the world's most famous social psychologists. So he's written on everything. But most notably, he has a book on self-control, a book on the nature of the self, and recently another book on free will, which you and I have talked a lot about. I see myself somewhere in the middle between you two doing developmental cognitive neuroscience. I think we all have in common these shared philosophical interests in the self and self-control, self-organization, and what are the biological evolutionary mechanisms behind all that?

[01:14] Roy Baumeister: Self-organization is one of the deep mysteries of the universe, and one of the most important processes by which we get from electrons floating around in the ether to the global economy.

[01:32] Michael Levin: I'd love to hear your take on a few things because people ask me all the time. I study the journey that we take. We start life as a single cell, this unfertilized oocyte. Then you get to an embryonic blastoderm, which is this flat disk of 50,000 cells. We look at that and we say, that's one embryo. What are we counting when that's one embryo? What's going on? I study this slow process of this emergent self where all the cells are committed to the same goal in anatomical space and the collective intelligence of these cells and dealing with novel problems they haven't seen before. People often ask me, they see this stuff and they ask, what are the implications for social structures? Can we scale that up? I often talk about scale-free dynamics that you get in molecular networks, cells, organs, and so on. People naturally will say, what are the implications for social structure? I'd love to hear your take on the self, the social self, how much of this collective intelligence stuff you think is in fact multi-scale, that you can talk about these things meaningfully at a higher level. I'd love to hear that.

[02:49] Roy Baumeister: There's the common process of self-organization: things lending themselves to, or becoming parts of, larger systems. I don't know that the one by which single cells become embryos and embryos become babies would be the same as the one by which people merge into groups and groups merge into nations, and nations merge into the global economy. With the global economy, they discover advantages. Globalization is coming whether we like it or not, because it makes the system work better and ultimately creates more resources, which filters down to individuals benefiting. If it filters all the way down to the embryos, I don't know that embryos work any better because of multinational corporations or international trade, but maybe they do. But still there is the common pattern of moving up toward greater systems.

[04:14] Michael Levin: What we see is that there are certain basic principles by which these things connect, which allow them to scale up the goals that they're able to pursue. Single cells pursue little tiny local goals, but groups can pursue very large goals like making limbs. When I say "pursue goals," I don't mean emergent complexity. I don't mean this open-loop process where there are a bunch of simple rules, they all follow these rules, and complex things come out. I don't mean that. Beyond that, there's a second-order situation where what you get is actually a system that is able to specifically pursue certain goal states. And if you try to deviate it, it will find quite clever ways to get there. The scale of those goals gets bigger and bigger, and it also extends into other problem spaces. Cells start off solving problems in physiological state space, in gene expression space, and so on. Embryos, as groups of cells, solve problems in anatomical space. Animals with nervous systems will also solve problems in three-dimensional space and move around, and then eventually linguistic space. What are some of the principles that you think underlie the scaling? In your area, when you go from individual people upwards, what are the policies that underlie all that?

[05:45] Roy Baumeister: I was going to ask you, what are the principles at the cellular level? There's the obvious point. If any of us were plunked down in the jungle alone, we'd have a hard time surviving, let alone reproducing or making a comfortable life. There are advantages to the individual person to being part of a group. There's safety in numbers, basic communication in the simple herds. Humans do a lot more with shared information and division of labor. Division of labor improves what the economists call the efficiency of a system so that the same amount of work by the same amount of people can produce more resources. I'm a professor; if I had to catch my own food from nature and build my own shelter, I would be in sorry condition; I don't have those skills. But with division of labor, everything is done by an expert, a specialist. And so things get done better. And then there are economies of scale, which is why the large corporations can basically out-compete the small ones. The same amount of energy expenditure produces more resources in the larger, more complex system. Now, is that true at the more basic level also?

[07:50] Michael Levin: So far that tracks exactly. I also wonder. There's this other phenomenon that you can call cognitive glue: imagine you've got this rat and you train it to push a lever and get a reward. It forms this associative memory that pushing the lever is associated with getting a reward. There's no individual cell in that rat that had both experiences. The cells at the bottom of the foot touch the lever, the cells in the gut get the reward, but no individual cell had that experience of both. The memory, that associative memory, is owned by the rat. What's this rat? This rat means a bag of cells, but it has a mechanism for doing credit assignment and memory of things that belong to the collective and not to any of the individual cells. That's one of the cool things about being an emergent individual: you get to have memories that none of your parts have. I'm curious as to what that means: you become trainable as a collective. You can train the collective on things that no individual in that collective actually knows, but the collective knows. I'm curious if you think that societies, corporations, whatever, you pick the level, whether those are trainable. We did a project once trying to train an ant colony, not the individual ants, the colony. The collective intelligence of the colony. And we didn't get to finish it. It's inconclusive. But I wonder if anybody's done those experiments, and I wonder what you think the prediction would be, whether collections of humans, not the individuals but the group, are actually trainable.
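
[Aside for readers: the credit-assignment idea in the rat example can be sketched in a few lines of toy code. Everything in this model is invented for illustration and is not from the lab's work: a "foot" unit only ever sees the lever press, a "gut" unit only ever sees the reward, and the association lives in a shared eligibility trace that belongs to the collective rather than to either part.]

```python
# Toy sketch: a collective learns a lever->reward association even though
# no single unit observes both events. The class, constants, and update
# rule are illustrative assumptions, not a model from the transcript.

class Collective:
    def __init__(self, decay=0.5):
        self.trace = 0.0    # shared eligibility trace: collective-level memory
        self.weight = 0.0   # learned lever->reward association
        self.decay = decay

    def foot_senses_lever(self):
        # the "foot" unit only records that a lever was touched
        self.trace = 1.0

    def gut_receives_reward(self):
        # the "gut" unit only records reward; credit is assigned via the
        # shared trace, which neither unit "owns" individually
        self.weight += 0.2 * self.trace
        self.trace *= self.decay

    def expects_reward(self):
        return self.weight > 0.5

c = Collective()
for _ in range(5):          # five lever->reward pairings
    c.foot_senses_lever()
    c.gut_receives_reward()
print(c.expects_reward())   # True after repeated pairings
```

The point of the sketch is only that the association exists nowhere except at the level of the whole: delete either unit's local record and the learned weight, the collective's memory, remains.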

[09:35] Adam Omary: I wonder if this connects to Roy's idea of ego depletion, where he's shown that across a wide variety of tasks, people have this finite pool of cognitive resources or effort. There could be a computational mechanism underlying that, where your individual neurons don't get tired, but collectively, regardless of what problem you're working on, there's some generalized pool of cognitive resources that you're pulling from. And over time, it can drain.

[10:12] Roy Baumeister: When we talk about the rat, there is a central nervous system. You're right that no one cell participates in both pressing the button and receiving the food reward. But the brain manages both.

[10:34] Michael Levin: But you've got the same problem when you say "the brain." There really isn't any brain; there's a huge collection of cells. For sure, the cells are connected via policies. This is the electrical communication, and the same thing happens in the rest of the body that allows the collective to act as though it were in possession of this associative memory. But there isn't one of anything. The whole thing is very much a collective, and you need those policies to make sure that the collective can know things that the individual parts don't know. The brain is particularly good at it, but there are also creatures that are brainless that can do it. There are examples like this in other body parts. It's not just the brain, but yes, of course, the brain is part of the mechanism of this collective.

[11:26] Roy Baumeister: I wrote a book on the self last year, and it struck me that the self is the creation of unity. Usually the brain learns to process information and uses that to direct the actions of the body. Walking would be an obvious one: for an animal with four legs, the brain has to understand left and right, front and back, and coordinate them to move in alternating fashion, to move forward versus to plant. The brain gets the whole body to operate as an integrated system by sending out separate commands to the separate limbs based on this integrative system that organizes the whole thing. That is the emergent organization. Of course, it's very adaptive too.

[12:27] Michael Levin: A couple of things. One is, we study a lot of things that don't have brains. Slime molds and single-cell organisms, tissues and everything else. They all do this stuff. They're all made of parts, as is the brain. Once you start thinking about what a truly centralized controller even means, it's really hard, because everything's made of parts. I've got this slide that I sometimes show in my talks. Descartes was really into the pineal gland because it's the only unitary thing in the head. He felt that our unified experience as humans should have a single locus in the brain. If he had had a microscope, he would have looked in there and said, "My God, there's not one of anything. This thing's full of cells." Still, the hard problem remains: understanding how you're going to bind all these things into a coherent self. I've got this model because we deal with cells and tissues and synthetic organisms. We make synthetic life forms. I've been trying to find a framework that is agnostic about the medium. What do all decision-making agents have in common that need to act in physiological space, in anatomical space and physical space and financial space, linguistic space? We've been working on this notion of a cognitive light cone, which is just the size of the goals that you're able to work towards. Of course, there are different competencies in reaching those goals. I'm really interested in the social question, because oftentimes one of the tricks we use is we try to take tools from behavioral science and apply them to things that aren't animals with brains. We've shown, for example, that you can do sensitization, associative learning, habituation, anticipation — gene regulatory networks already do this. Molecular pathways already do this. People will say, since you're stretching the cognitive spectrum all the way down, next you're going to say that the weather is intelligent. And I say, "Have we tried training it?" I have no idea, but it's an empirical question. You can't just sit back and make assumptions; you have to do the experiments. I wonder if the social structures are amenable to some of the same techniques that we use in behavioral science to probe collectives, because that's what it is. When you ask if something can do habituation or association or planning, you're probing the competencies of a collective. I think we could probably test that, but I don't know. Maybe you know if anybody's tried that in social circumstances.

[15:42] Roy Baumeister: I don't, I don't.

[15:46] Adam Omary: I've been talking to Roy a bit about active inference and these entropy-based predictive processing theories of consciousness. We've been talking about it in the context of drive for exploratory play, sensation-seeking, reward sensitivity, and boredom. But I'm wondering if that generalizes as the underlying computational mechanism you're talking about that connects human cognition, more basic animal cognition, or even higher-order organizational social systems.

[16:22] Michael Levin: We study a variety of connection policies and one of the policies is a kind of memory wipe. So if I have two cells sitting next to each other, if this cell sends a signal, some chemical signal and it hits the neighbor cell, that way of communicating makes it very easy for the receiving cell to know that signal came from outside. That's not my information, that's somebody else's information. And so you can choose to believe it or ignore it. But there's this magical thing in bioelectricity, which is there are these electrical synapses called gap junctions. What they do is connect directly the internal milieus from one cell to the next. And so what happens is if something happens to cell A and there's a memory trace of it, let's say a calcium spike, that's a memory trace of that event; it propagates directly into cell B and it doesn't have any metadata that says I'm coming from someplace else. Cell B sees this calcium spike and says we've been poked. When you're sharing memories, if you and I are sharing most of our memories, it's really hard to keep independent identities. We start to have this mind meld and now we're bigger, so we have bigger computational capacity, we're physically bigger, but we have a joint memory and are bound to cooperate because we can't even physically entertain the thought of defecting against each other because we're sharing the same thought and we are the same informationally, we're the same being to some extent. That memory wiping property is one thing that I think really facilitates this joining into collectives. Another one is shared stress. This idea that if one cell is under stress, it might export some stress molecules to its neighbors, which then also feel stressed. They don't know that this is somebody else's stress. My problem becomes their problem. 
If I need to get somewhere and they're in my way, everything gets a little plastic, because they're jiggling around, because they're not happy either, whereas otherwise they would have sat there and not let me pass. There's this sharing, this globalization of stress. There's some other stuff too, but I don't know, does any of that sound relevant?
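
[Aside for readers: the contrast Levin draws between tagged chemical signals and metadata-free gap-junction signals can be shown with a minimal toy model. The data structures and function names below are invented for this sketch, not lab code.]

```python
# Toy contrast between two cell-to-cell communication policies:
# a chemical signal arrives tagged with its source, while a gap-junction
# transfer copies a memory trace directly, with no provenance metadata.

def chemical_signal(sender, receiver):
    # receiver records the event with metadata: it knows the source,
    # so it can choose to believe or ignore the signal
    receiver["inbox"].append({"event": "poke", "source": sender["name"]})

def gap_junction(sender, receiver):
    # internal memory traces propagate directly, stripped of provenance:
    # the receiver stores them as if they were its own experiences
    receiver["memory"].extend(sender["memory"])

cell_a = {"name": "A", "memory": ["poked"], "inbox": []}
cell_b = {"name": "B", "memory": [], "inbox": []}

chemical_signal(cell_a, cell_b)
gap_junction(cell_a, cell_b)

print(cell_b["inbox"])   # [{'event': 'poke', 'source': 'A'}] -> attributable
print(cell_b["memory"])  # ['poked'] -> indistinguishable from B's own memory
```

After the gap-junction transfer, cell B cannot tell its own "poked" trace from A's, which is the "memory wipe" that makes a shared identity easier than defection.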

[18:45] Adam Omary: It starts to look like pheromones and then you get into my world of hormones and development.

[18:50] Michael Levin: Somebody said to me recently, "Boy, it's really hard to change your mind and your priorities," and they said, "Look at what happens at puberty: a few hormones and bammo, everything changes. All your priorities are upside down, things that seemed great are now stupid, and things that seemed disgusting are now great. It kind of inverts everything. So it's not that hard, actually."

[19:22] Roy Baumeister: This collective approach has given me a different perspective. I start from a point one of my philosophy professors made, which is that part of the essence of life is a boundary between inside and outside and that every living thing maintains a boundary. Stuff moves across the boundary as we eat, for example, but we know the difference between the hamburger we ate and the one we didn't. It remains outside. That's long before the self, but there is that boundary, and then the brain presumably evolved within organisms to take care of the whole self within that boundary.

[20:16] Adam Omary: Not only in space, but in time as well.

[20:18] Roy Baumeister: Space and time. Most animals live pretty much in the here and now. There is integration across time, with brief expectancies for a few seconds into the future. But only humans really have a full narrative sense of self.

[20:43] Adam Omary: Psychologists say that a lot, that only humans think about the future. Does that check out from your perspective, Mike, that other animals are only focused on the here and now?

[20:55] Michael Levin: No, I don't agree with that. I think that it is true that we are the only ones that have metacognition — we know when we are thinking about the future — but even bacteria and yeast can anticipate future events. So memory and anticipation do not require a brain. Most critters do it.

[21:20] Roy Baumeister: They anticipate events how far in the future? Next year?

[21:25] Michael Levin: No, not next year. Humans have a much bigger cognitive light cone. Part of it is that we have a huge one, and in fact ours is special in another way, because it's bigger than our lifespan. Probably uniquely, as far as I know, we are capable of pursuing goals that are for sure longer than our lifespan, in other words, not personally attainable goals. Most of these other creatures have much shorter terms — for bacteria and yeast that might be 20 minutes or something like that.

[21:58] Roy Baumeister: Okay, that's good.

[21:59] Michael Levin: But it's there. I also think your point about the boundary is really critical. Establishing the boundary between self and outside world is absolutely critical.

[22:17] Roy Baumeister: Plants clearly have that boundary. You can dig up a plant and wash off the dirt and move just the plant and nothing else. There's unity there, but the brain improves your ability to operate that way. If we look at the collection of cells, they are somehow learning to act, or evolving to act, as if they are a unity long before there's a brain.

[22:48] Michael Levin: Even in humans, at the very beginning of human life, you can see — I've done these experiments in ducks and so on. When you have this blastoderm and you look and say, "There's an embryo that's going to develop into a human individual." What you can do is take a little needle and make some scratches in that blastoderm. For about four or five hours before they heal up again, every island is going to decide that it's on its own and is going to start making an embryo. When they do heal, you have conjoined twins, triplets, whatever. So the number of individuals in a blastoderm is not fixed. It's not one. It's anywhere from zero to probably half a dozen. So this excitable medium generates individuals. This raises the question, because when you have two embryos sitting next to each other, every cell is some other cell's neighbor. So now the question is, am I part of this embryo or part of that embryo? Sometimes they get confused. This is why conjoined twins often have laterality defects, because left and right they can't quite tell what side they're on. But this issue of deciding where I end and the outside world begins is very fundamental.

[24:00] Roy Baumeister: I had a baby girl who didn't cry very much, but one day my wife heard her crying. She walked in, and the baby was lying there poking herself in the eye with her own finger. She didn't yet know which arm was her own.

[24:24] Michael Levin: That's funny too, because of the plasticity of the body. If you've ever seen the rubber hand illusion: there are videos on YouTube where they put a rubber hand next to you and you watch them pat it with a little brush, and then somebody takes a hammer and goes to hit it, and people freak out, because by then they think it's their hand. It only takes 10 minutes to override I don't know how many millions of years as a tetrapod. Your brain knows exactly how many limbs you've had, but you can override that in just 10 minutes of watching somebody pat this rubber hand, and suddenly it's your hand. It doesn't take very long at all. That plasticity is why people can do sensory and motor augmentation. There's a monkey with a third arm that uses it to eat marshmallows. Humans get prosthetic arms where the wrist rotates all the way around, which your normal wrist doesn't. When they pick up a coffee cup, they'll rotate it a way that a normal hand would never go. So that plasticity — I think embryos have to figure it out from scratch. What do I have? What sensors do I have? What effectors do I have? What do I have control over? Where is the boundary?

[25:44] Roy Baumeister: So they all learn that or figure that out.

[25:47] Michael Levin: Yeah, in the lab we can make a tadpole where the eyes are on its tail. And no problem, they can see. We can do visual learning tasks; it all gets sorted out. We have many examples like this. We make these xenobots, which are frog skin given a new life; it makes this little motile proto-organism that runs around on its own and does all kinds of things. It's just skin. I think it's because all of these questions of what am I, what is my structure, what space do I live in, all of this gets solved from scratch when it's mostly not hardwired.

[26:31] Adam Omary: There's some type of built-in error correction. If you put the eyes on the tail, they slowly begin to migrate towards the head, even if they don't make it to where they're supposed to.

[26:41] Michael Levin: No, the eyes on the tail don't move. What happens is the eyes stay. There are things that definitely correct themselves. We make these so-called Picasso frogs, in which all the craniofacial organs are scrambled; they do correct by the time they get to a frog. But you can also teach them new patterns. For example, when you cut a xenobot, it heals back to its new xenobot shape. If you have a salamander and you keep chopping off one limb, after about five or six trials it gives up and it's not going to do it anymore.

[27:15] Adam Omary: It's done.

[27:16] Michael Levin: It learns that it's just not going to work.

[27:20] Adam Omary: Thinking about that type of self-correction plus this computational Markov blanket idea of identity, with its confidence intervals or error bounds. On one hand, when we're talking about identity, you have this postmodern idea that it could be anything: the boundaries are arbitrary, meaningful only to the extent they're functionally useful across time. Are they going to be stable? The simplest version of this would be: think about what Markov blanket can define the scope of the sun. It can just be a sphere of arbitrary size. For the most part, we agree on the boundaries, but you could keep extending it and capture some residual solar flares. You could extend it and define a sphere that goes all the way out to Mars, and collect even more of that solar mass, but it's a diminishing-returns type of thing. There must be some optimum point if you define a function that, on one hand, wants to maximize the amount of sun you're capturing, but on the other hand, wants to minimize the amount of false-positive, empty space. You're trying to optimize across those two variables. I'm wondering if, at the level of the boundaries of an organism, you have something like that too, where you can have all these philosophical questions: am I still me if I take away one skin cell? Yes — even a hair — but if you keep removing cell by cell, eventually there will be no more of you left. It must be that you do have boundaries, but they're fuzzy boundaries, and there are confidence-interval bands for how much you can remove before you no longer have the thing.
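
[Aside for readers: the sun-boundary trade-off described above can be written as a toy optimization. The density function and penalty weight below are invented for illustration: mass captured grows with boundary radius but with diminishing returns, while the cost of enclosing empty space grows as the volume, so an intermediate radius scores best.]

```python
# Toy boundary optimization: pick the sphere radius that best trades off
# captured "solar" mass against enclosed empty space. The specific density
# model and lambda weighting are illustrative assumptions.

import math

def mass_within(r):
    # toy density: almost all mass lies inside r = 1, with a thin
    # exponential tail standing in for residual solar flares
    return 1.0 - math.exp(-3.0 * r)

def score(r, lam=0.01):
    # reward captured mass, penalize enclosed volume (empty space)
    volume = (4.0 / 3.0) * math.pi * r ** 3   # penalty grows as r^3
    return mass_within(r) - lam * volume

radii = [0.1 * i for i in range(1, 101)]      # candidate boundaries in (0, 10]
best = max(radii, key=score)                  # optimum sits near r = 1
print(round(best, 1))
```

The score rises steeply while real mass is being captured, then falls once each extra shell adds mostly empty volume, which is the "fuzzy but not arbitrary" boundary in miniature.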

[29:07] Michael Levin: I think that's super interesting. There are a couple of things that feed into this. One is blindsight, where someone doesn't think they can see, but in fact, if you ask them to guess what's in front of them, they guess correctly. They in fact can see; they just don't know they can see. The original patient studied for this would say that that vision isn't part of him; it's something else, something extra. He didn't think it was part of him because he didn't have a direct perception of it, even though it was part of his behavioral repertoire. It wasn't something that he had conscious access to, and he didn't feel it was part of the self. We also have to keep in mind that's asking his left hemisphere, presumably the one with speech, because there's actually another hemisphere in there that you don't normally hear from. There are other things in there that we don't know how to talk to yet, other organs that might have their own boundaries in different spaces. On a practical level, that's what we deal with when we work on cancer. When individual cells electrically disconnect from the rest of the body, their cognitive light cone shrinks; they're back to their amoeba goals. As far as they're concerned, the rest of the body is just environment to them. Similarly, when we make these xenobots, you start with a frog embryo and dissociate it into individual cells. The cells are all alive; you take some of them and make the xenobot. But the embryo is gone. The tadpole is gone. Where did it go? The cells are all alive, and it's exactly what you just said: you can take one after the other, and eventually you have something else. You have a bunch of loose cells, or you can make a xenobot, but the individual is gone. These boundaries are fuzzy indeed. The end point of the story is that you can actually go backwards. This is a cancer approach that we work on: not to kill those cells, but to force them to reconnect to their neighbors. When they reconnect to their neighbors, they once again become part of the collective that's working on making nice skin, nice muscle. They stop being metastatic and they go back to...

[31:32] Adam Omary: Wow.

[31:33] Michael Levin: We've done it in frog. It works great in frog. We're now moving into human cells, but it's a different way of thinking about it, because you take advantage of that memory wipe. They forget about their little local goals and they start working on whatever the collective goal was.

[31:55] Adam Omary: How far does the cognitive light cone extend? When you talk about humans as the pinnacle of that, our subjective experience of being one, you can go narrower into all of the sub-levels, down to the cellular level within humans, and all of those are narrower versions of that light cone. But when you expand across humans, it seems there's not only a jump to me and to you and to different selves, but there's not the same connecting continuum. On one hand, I'm tempted to say you can look at broad-scale social organization or network dynamics as an even larger portion of that light cone, but it doesn't seem to have the same continuity.

[32:47] Michael Levin: You mean it doesn't have first-person continuity? You think there isn't anything it is like to be that social agent?

[32:57] Adam Omary: We both are sympathetic to panpsychism. Even if we only have conscious access to what it's like to be us at this higher level, it's possible that there's something it's like to be a cell. But I'm not sure it's possible that there's something it's like to be, say, a country.

[33:21] Michael Levin: There's a really good paper about that; I'll look it up in a minute and try to put it in the chat. It talks about the philosophy of that. "What is it like to be a bat?" No, there's another one — a different one. I'll find it momentarily. If we didn't know any cell biology or neuroscience and somebody said to us, "I could take 3 1/2 pounds of these little electrically excitable things and smush them together in this process where the whole thing folds on itself for nine months," would you know that's going to give rise to an internal perspective of a human? I would say, hell no, that doesn't sound plausible at all. We have no idea what kinds of things give rise to those kinds of first-person perspectives. I don't think that just because you're bigger in scale your light cone is bigger. So I wouldn't make the claim that groups of people have a bigger cognitive capacity than individual people. Roy, you could say what you think about the intelligence of groups versus single individuals, but I don't think there's any reason why it has to be bigger. But we don't — our intuitions for what is sufficient to have that inner perspective are really badly calibrated. I don't think we have a clue.

[35:04] Roy Baumeister: I'd be inclined to think you'd need a brain and central nervous system to have conscious experience.

[35:12] Michael Levin: That's fine, except that the brain and nervous system show up very slowly, both evolutionarily and developmentally. That means we need some sort of story about when it kicks in. I've never heard a good story about that.

[35:28] Roy Baumeister: I've long speculated that the first conscious experience would be pain. That would be adaptive, as a signal that the tissue was being damaged.

[35:41] Michael Levin: That would be the disruption of a cell membrane in a microbe, because that depolarization that neurons have when you trigger them, which presumably they don't like, is as old as the hills. That kind of basic damage and that delta from physiological homeostasis is the origin of it, but it's really old.

[36:15] Roy Baumeister: This is the other side of your example of the rat, with no cell experiencing both the lever press and the food reward. But with pain, you step on fire ants. The brain says, "ouch," and you grab onto something and pull yourself away. That would be the same thing. And perhaps you could find an example in which no one cell is involved in both the input and the output, the input being the signal of pain and the output being the motor movements to escape it.

[37:01] Michael Levin: Want to see something fun? I'll show you a quick video. This is the work of postdoc Nirosha Murugan in my lab. This yellow thing here is a piece of slime mold. The slime mold, you're going to see it grow, but the whole thing is one cell. This is a very thin glass disc; it's about 5 milligrams. There's no food. It's just glass. These are three glass discs. Here's the time lapse. For the first few hours, the slime mold is growing in all directions. It's also gently tugging on the medium and reading back the vibrations that it gets from pulling. It's sitting on soft agar. It's pulling on the soft agar. By this time, it's made up its mind about where the bigger mass is. How do we know? Because boom.

[38:13] Roy Baumeister: Whoa.

[38:14] Michael Levin: This is reliable. Not quite always, but most of the time, what it'll do is it'll go to the bigger mass. We don't know why it likes a bigger mass. The coolest part is right here, because at this point, it's already integrated the information about its environment. It already knows where the bigger mass is, but hasn't done anything yet. Up until now, the first 600 minutes or so, this is all pondering time. This is where it's collecting that information. And the whole thing is one cell, and it's collecting this biophysical feedback from its environment. And then bang: at this point, here at 795 minutes, it's still; it could go in any direction, but boom, that's it. At this point, it's made its choice.

[39:03] Roy Baumeister: That's very nice.

[39:04] Michael Levin: The whole thing is one cell. We have tons of experiments; you can test it, and other people like Audrey Dussutour have done training. You can train them; they have memory. You can take a trained one and a naive one and fuse them. They'll fuse together, the memory propagates, and the naive one will now remember half the memory that the other one had. No nerves, no brain, single cell.

[39:41] Adam Omary: Mike, we've talked a lot about zooming in, down, and back on the evolutionary ladder. There's no obvious point at which intelligence emerges, and there's a nice elegance to panpsychism: it's always there, it's on a continuum, and maybe there's some bare minimum unit of consciousness. But if you scale it upwards again, past humans, even past social networks, at the most extreme level you would treat the entire universe as a single system. You get this kind of pantheist, cosmopsychist, mind-of-God view, in Spinoza's terms. What do you think of that?

[40:22] Michael Levin: First of all, I think all of this is an empirical question. In other words, I don't think we can assume in either direction. I think you have to do experiments. So when you have some sort of weird collective and you want to know what kind of cognitive system it is, you can do experiments like we do with everything else in behavioral science. You can give it stimuli, you can see if it has memory, you can see what kind of attention properties it has. I don't know how you do experiments on the whole universe, but, for example, I've talked to physicists to try to design a planetary-scale synapse. You could have a gravitational system where you send objects through as stimuli and it permanently changes the way it reacts to new objects in the future. Basically, a synapse. You could imagine building a gigantic nervous system out of something like this. I have no idea if we live in one or not, and we probably are not going to be able to know for sure. I worked with this amazing graphic artist, Jeremy Guay, and I asked him to make a little cartoon of two neurons in a brain talking to each other. One neuron says, "We live in a cold, mindless universe. Nobody cares what we do. There's nothing out there." The other neuron says, "I don't know. Every once in a while I get this idea that there's something more here. Maybe the universe wants something from us. It's got some kind of order to it." The first neuron says, "You're crazy. There's nothing, there's no mind out there." They're both part of this big brain, and in that case, the second one is right, of course, because they are in fact part of a larger-scale individual. So are we part of one? I don't know. I think it's possible. Roy can say something about this: is it possible for us to gain evidence that we are part of a collective that has collective dynamics, like learning, preferences, attention?
I don't know. I was contacted by some people who work on the Market Mind Hypothesis, an attempt to understand economics using some tools of behavioral science. Maybe some of these concepts (training, attention, perceptual illusions) would work on these larger structures. Ant colonies fall for the same visual illusions that mammalian nervous systems do: not the individual ants, the colony. You can do experiments; they make the same perceptual mistakes we do, with many of the same illusions. Maybe that holds in economics too.

[43:34] Roy Baumeister: I'm still intrigued by the question. I think Adam brought it up. Is there something that it's like to be a country?

[43:42] Michael Levin: Yeah, let me find it.

[43:46] Roy Baumeister: I don't.

[43:48] Adam Omary: Mike, the paper you're referring to, does it have the line in it, "What if we all held hands?"

[43:53] Michael Levin: I don't recall, but I will.

[44:00] Adam Omary: It might have been in "What Is It Like to Be a Bat?" or a paper in that same space, but it was addressing arguments against the possibility of something like that. One argument is that it has to be physically connected, so that was a joke: what if we all held hands? But I think the more serious take on that is that there needs to be some physical medium for information exchange. Back to this cosmic neuron idea: I think it's dangerous to get psychologists to comment on quantum physics, but I'll take that leap. When you hear about entanglement, where you have what seems to be genuine information exchange across large physical distances with no actual contact, I don't know what to make of that. But then if I start thinking about it more in these terms, treating the universe or some high-level system as a computer that can transmit information, with this planetary synapse going on, I wonder if there's anything there.

[45:16] Michael Levin: My understanding is that you actually can't use entanglement to pass information faster than light. You're not actually passing a signal back and forth; you're not violating relativity. Nevertheless, there is an interesting notion of synchronicity here. Pauli and Jung wrote a book together on synchronicity. It's the idea that patterns that look like anomalous information transfer at lower scales are in fact perfectly reasonable cognitive kinds of things at a larger scale that you're not aware of. Richard Watson and I are working through some of this stuff right now. I put a link in the chat. This is a paper by Eric Schwitzgebel called "If Materialism Is True, the United States Is Probably Conscious." It addresses exactly the issue that you're talking about. It's basic philosophy.

[46:23] Adam Omary: I like Schwitzgebel. He's sympathetic to panpsychism as well.

[46:27] Michael Levin: I don't know him at all, but I thought this was a pretty good paper. I don't claim to know anything here. The one thing I feel strongly about is that our intuitions for what kinds of systems are going to have an inner perspective are not well calibrated. We have an N of 1 example, ourselves; we just have no clue. I don't know about you, but if I didn't know what was between my ears and someone showed me a brain and said, "This thing is going to have an inner perspective," why would you ever think that? People have tried: there's IIT and some other frameworks that try to put constraints on the kinds of architectures that are going to have that. I think we're very brain-focused, and I think that blinds us to a lot of things.

[47:29] Roy Baumeister: Maybe we are blind and brain-focused, but it's still hard to imagine the United States being conscious as a thing.

[47:40] Michael Levin: It's hard to imagine. But are we good at imagining a lot of things that are true? I find it very difficult to imagine that in the whole universe, the only way to be conscious is to have this kind of thing here. That strikes me as a priori very improbable. Maybe it's too much Star Trek when I was a kid, but I have this background expectation that the universe is a very weird place and that there are going to be minds out there that don't look anything like us. If we're expecting to see a frontal cortex with this and that, we're going to be badly disappointed. I just can't imagine that this is the only way to do it. So you can imagine, and sci-fi has had this for over 100 years: you're sitting there at home, this thing lands on your front lawn, some kind of thing on wheels trundles out and hands you this poem about how happy it is to meet you. So what are you going to do? Who knows what's inside it? Are you going to assume that it's somehow faking because you can't find a single neuron in there? That can't be right.

[49:02] Roy Baumeister: The idea that the corporation is like a person was a deliberate cultural invention, a fiction which facilitated trade. It was a big advance, and it's one of the reasons the Islamic world fell behind, even though it was historically ahead for a number of centuries. But it's understood as a fiction. You don't think the partnership itself is conscious as a unity; the key was to protect the individuals involved from liability. If three Arab merchants get together to make an investment and send out a ship for profit by trade, and it goes bad, their whole fortunes, their houses and everything, are forfeit. That made trade very risky. But the Europeans invented the corporation, so if it fails, the assets belonging to the corporation can be accessed by the creditors, but the homes and personal savings of the individuals are not at risk. It's a legal fiction, and it's understood as such. To say that the corporation becomes conscious as a unity seems a stretch to me.

[50:39] Michael Levin: That's very interesting, what you just said: that adopting an agential perspective makes certain relationships easier. There are a lot of philosophers of mind who think that human consciousness is a fiction too. I think they would tell exactly the same story you just told, about primates sitting around a fire at some point, where the idea that we're going to assume each of us has an internal perspective like mine is just a lubricating fiction for being a band of cooperating individuals, and that there actually is no such thing. It's a fiction that makes it easier to be successful and to do things. And eventually it's a fiction that we turn on ourselves: after you tell stories about other agents doing things, you say, wait a minute, I'm an agent too. I do things. I have free will and I'm a person. And that is just a user illusion. There are plenty of books on human consciousness where that's the conclusion: that you're basically a user illusion that provides social lubrication.

[52:08] Roy Baumeister: I'm not familiar with that perspective, I guess.

[52:11] Michael Levin: I could name a number of books. It's a very mainstream thing. It's very hard to internalize that view because it basically says that you are a walking illusion in many ways. I believe that's the mainstream perspective.

[52:36] Roy Baumeister: I read a bunch of things about the self being an illusion, and none of them seem very convincing. I noticed they all put their names on their books.

[52:49] Michael Levin: That's true. And that gets back to the issue; I think Adam mentioned free will at the beginning. I had lunch with a philosopher once who didn't believe in free will. The waitress came by and said, "What will you have?" He said, "Let me see." I said, "Are you going to choose a sandwich? How come you don't just sit back and see what the universe, what the Big Bang, ordained?" It's impossible for us; I've never met anybody who can actually do that. People say they don't believe in free will, but I think it's a false introspective report. They don't act as though they don't believe. Would you agree with that? You would never conclude from observing them that they don't believe it.

[53:40] Roy Baumeister: I was talking about Metzinger, the German philosopher who wrote a book arguing there's no free will and no moral responsibility. Someone asked him, "Is this how you raise your sons?" "No, of course not. They have to learn to behave by the rules."

[53:59] Adam Omary: In these mainstream reductionist views arguing against free will, I don't disagree with their logic, but they seem to frame it as all or nothing. If you don't have independence from the entire causal chain of the universe, then there's none, as opposed to taking more of a degrees-of-freedom approach that Kevin Mitchell and Roy advocate, or this cognitive light-cone approach that can be narrow or broad and give you more freedom.

[54:25] Roy Baumeister: Why would that even evolve? What would be the advantage of being able to act completely independently of the environment? What you want to do is spot multiple possibilities in the environment and make advantageous decisions on that basis.

[54:43] Michael Levin: That's right. I think a lot of these accounts are very backwards looking. It's all about explanations and pointing out that there's this whole chain of chemistry going backwards that's responsible for everything that happens. That's fine if you want to take the micro reductionist perspective and you're content with looking backwards at what's already happened. But that isn't what we're trying to do here. I'm not interested in explanation. I'm interested in invention. I want to know what next. What are you going to do next? That kind of framing, where you're just looking back and thinking which atom zigzagged to get you here, is completely useless as a frame for invention. You're not going to do anything new if that's it. A couple of days ago I wrote a blog post where I asked GPT-4 to write a dialogue between a hiring manager at a software company and a young applicant who had just read one of these books saying there's no free will. So he goes in there and they say, "So are you a good coder?" He says, "Code? The computer is a physical system. The electrons go where they go. What do you mean code? There's no room for this magical code that's going to make the electrons dance. The computer does what it does." That's exactly right. If your perspective is that everything is going to happen according to Maxwell's equations, where the electrons go, you're not going to code a damn thing. You're a terrible candidate for a job that requires forward-looking creativity. Backwards is easy; forwards is where it's at.

[56:29] Roy Baumeister: I see we're in the final couple minutes. Thank you, Michael, for doing this, and thank you, Adam, for setting it up. This has been very stimulating, giving me a bunch to think about.

[56:38] Adam Omary: Thank you both. If you have time, I wanted to ask one more question. Mike, you brought up Jungian synchronicity, and we haven't talked about any of this. Roy, I'm curious to hear your thoughts, but I'll add one more piece: some of what Jung says is very compelling to me. On the other hand, I have Steve Pinker as one of my advisors. He looks at that in very cognitive, rationalist terms: this is just one of many biases we have. Chance events occur all the time; most of them have no meaning and you just dismiss them, but every now and then you get a Type I error, a false positive. I go back and forth: when these chance encounters happen that hint at something connected, is that just me imposing my predictive processing onto the world? Things happen all the time, so there are bound to be coincidences.

[57:40] Roy Baumeister: I really liked Jung when I was young. I read a whole bunch of his books. I even had plans to go to the Jung Institute and train there; somebody kindly talked me out of that. He was certainly brilliant and operating outside the normal range, and that's part of the appeal of trying these things. But synchronicity requires you to postulate some higher power integrating or organizing things. It's a variation of the "everything happens for a reason" argument. In terms of human evolution, a lot of things do happen for a reason, and that's how we learned to relate to each other: instead of just looking at another person as an animal doing something and asking how it affects me, I start to infer that the person is doing it for a reason. So we learned to think in terms of invisible causes behind things. But synchronicity takes random coincidences and sees a deeper meaning in them. That's seductive, but I believe there's a lot of randomness in the universe, so I'm not a believer in synchronicity. Now, some things do happen for a reason. You can explore coincidences, and some of them will not be coincidences, but a lot of them are.

[59:20] Adam Omary: When it gets all mystical, it seems to fall apart. But when I think about Mike's example of the comic-strip neurons talking to each other, firing in accordance with the whole plan of a higher intelligence they can't perceive, and how narrow our perception is, is it possible that there's some large-scale computation we're all part of, some evolutionary game theory or natural laws of selection guiding things in a certain way? And then other people say it's not selecting for some end goal. It's just selection: what survives, survives.

[1:00:04] Michael Levin: I think the question of whether there's a larger pattern is an empirical question. I'm not going to claim an answer to that. But I do think that the concept of synchronicity, and I've only recently started working on this, so I don't have a mature version of this idea, goes way beyond our human tendency to notice patterns. A proper version of the concept would talk about multi-scale patterns, so that when you're looking at electrons in a computer, you would say, isn't it amazing that these electrons went over here and those went over there, but together, that's an AND gate, part of this other calculation. Amazing: down below, all they're doing is following Maxwell's equations, but looked at from another level, they just computed the weather in Chicago. So I think it's not only about our human tendency to pick out patterns, though it is that too, because if synchronicity is simply how things look at other scales, of course we're going to get good at noticing it. We're also going to make mistakes; we're not going to be perfect at it. We're going to be looking for these kinds of things, and sometimes we're going to overdo it. But that's not to say everything has a mundane explanation. When things look random, physical, and deterministic at one scale, you're missing a lot if you don't ask whether there is a different scale of observation. You could study your computer using Maxwell's equations, but you're really better off if you know what a high-level programming language is. From the perspective of the electrons, that is total mysticism, this idea that there's a giant algorithm determining where we go. It's mystical nonsense, except that if you don't buy into it, you're not going to code much. We would still be in the 50s as far as information technology if we didn't buy into that particular brand of mysticism.
I feel like there's definitely a revolution along those lines coming for biomedicine. Our obsession with the biological hardware and the molecules, instead of the expectations, the memories, the preferences of our collective cells, all of that has got to change; I'm betting a lot of our lab's work on it. But maybe it goes higher than that, I don't know. Thank you so much for putting us together. Real pleasure for me too. If you don't mind, send over anything relevant to this from your work that I should read. I'd love to see it.

[1:02:56] Roy Baumeister: All right. Okay, well, thanks again.

[1:03:01] Michael Levin: Yeah, thanks Adam.

[1:03:03] Adam Omary: This was super interesting. Thank you both.

[1:03:05] Roy Baumeister: Yeah, thanks guys.

[1:03:06] Michael Levin: Appreciate it. Thanks guys. Bye.

[1:03:08] Adam Omary: See you later. Bye.

