A discussion between John Vervaeke, Gregg Henriques, Justin McSweeny, and Mike Levin.

John Vervaeke, Gregg Henriques, Justin McSweeny, and Mike Levin explore mind, selfhood, Platonic spaces, and diverse intelligences, connecting cognitive models and meaning crisis to questions of identity, agency, practice, and compassionate personhood.

Show Notes

This is a ~1 hour 20 minute working-meeting conversation between John Vervaeke, Gregg Henriques, Justin McSweeny, and myself; we touch on topics of mind, selfhood, Platonic spaces, and other ways in which the emerging science of Diverse Intelligence speaks to greater transpersonal issues.

CHAPTERS:

(00:02) Framing cognitive light cones

(12:30) Relevance, anticipation, self-modeling

(20:27) Interaction, relations, perspectives

(28:33) Meaning crisis and identity

(41:02) Patterns, agency, participation

(51:58) From theory to practice

(01:01:25) Humanity, compassion, personhood

(01:16:49) Closing reflections and gratitude

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:02] Justin McSweeny: Normally I say welcome to my guest when I start an IdeaCast interview, but I have three persons here who are of great significance to me and in my learning path and journey towards understanding wisdom and relating to wisdom. I'm very much looking forward to this conversation. I welcome the YouTube audience. John, Greg, Michael, welcome to the three of you. I'm so glad to have you all here today.

[00:27] Gregg Henriques: Pleasure. Thanks for having me.

[00:28] Justin McSweeny: I want to acknowledge that you all have your own YouTube platforms, your own huge social relevance. To be here on the show, again, I'm humbled in gratitude to the three of you for doing this and coming here and allowing me to open up the space for you guys and be a close attendee of what's about to unfold. Gratitude to you all for that. I'll ask your charity here. I'm a layperson. I'm one of those dreaded autodidacts, but I ground a lot of it in the humility of FOK in Pyrrhonian inquiry. I consider myself a student of the work of the three of you. In trying to think of how to start us out today, I looked at what Michael talks about in the audacity of the imaginal being grounded in empirical evidence and the data, and that dance between the two of them, as we're opening the aperture on what intelligence does and what it might be. Looking at Gregg's layered ontology, the joint points between different stages of self-organization, I think there's so much richness in this territory. Of course, John, with his compendium of ideas and epistemological rigor. I see a beautiful convergence here of the three of you. I will be a good host. I'm going to step to the side. Gregg, if you would like to start the conversation, and I know it's going to flow really well. Over to you, Gregg.

[02:06] Gregg Henriques: I appreciate it, Justin. It's a wonderful opportunity to be here and to share ideas with people I admire greatly. Here's what I'd like to throw out there and see. Michael, you may have seen my depiction of a tree of knowledge diagram of upside-down cones emerging out of an energy-information singularity source, a one-world naturalism from a big-history view. It's an expansion of complexification. But something happens at different points of it that results in a qualitative shift. In particular, a plane of life emerges, then a plane of what I call mind (the animal), and a plane of culture (the person). It took me a long time to think about what these planes were coming off of. The matter complexification seemed okay, but I wasn't sure exactly what was happening with these life, mind, and culture cones for months and years after I developed it. Then I realized each is a complex adaptive system networked together through information-processing and communication systems that afford particular potentialities. And they're mediated by certain kinds of systems — gene, cell, nervous system, animal, propositional language, collective human intelligence. I was enormously struck by your cognitive light cone analysis. I wanted to talk with you about how you conceive of a cognitive light cone, how you conceive of intelligence and emergent evolution, and whether there's a relationship between that cognitive light cone idea and the cones I'm depicting in terms of emerging complexification through information networks and exploration of design space. And then I'd like to connect the edge and contact of that with recursive relevance realization as a dynamic process we can apply to see how these things emerge. I'll throw that out there and see if you can riff on it, pull John in, and then see where it might explode.

[04:40] Michael Levin: I should start by describing this cognitive light cone idea. I should preface this by saying that the version of it that's out now, which I published in 2019 or so, is very much a 1.0 vision. It needs significant improvement, and I'm working on that now. But for what exists, it's the following. I was at a Templeton meeting, a conference of people studying diverse intelligence. Pranab Das tasked us with the challenge of coming up with a framework in which you can simultaneously think about truly diverse beings. We're talking about not just the familiar apes and birds, and not just an octopus or a whale, but really diverse: colonial organisms, synthetic biology beings that are and will be made, cyborgs, AIs (whether software or robotically embodied), hybrots (combinations of living and engineered material), possible aliens. I've been thinking about this for a long time, and I took that as an opportunity to formalize some of it. How do we do that? What I felt was really fundamental to any agent, any being that we're interested in, is the scale of its goals. I thought of goal-directedness as coming in degrees. I don't like binary categories; I don't believe there is such a thing as goal-directed versus non-goal-directed. I like Norbert Wiener's cybernetic scale that goes from passive matter all the way up to human metacognition and whatever is beyond that. So, for each of these potential beings, let's map out the largest goal they could possibly pursue. You collapse space and time onto a two-dimensional sheet and get something that looks like Minkowski's light cones. The size of the goals lets you start to think of different cases. If I ask you what you care about and you say, I care about sugar concentration within this 10 micron radius, and I have a memory that goes back about 20 minutes and predictive capacity that goes forward about 10 minutes, you're probably a bacterium. If you tell me that you care about things that happen within a 100-yard radius, and you've got some memory going back, but you're never going to care about what happens three months from now in the next town, I'm going to say you might be a dog. If you've got planetary-scale goals about the way the financial markets and world peace are going to look 100 years after you're gone, you're probably a human. If you tell me that you can care for thousands, millions of sentient beings, I'm going to say you're something beyond a standard modern human. I don't know what that is; we can't really do that yet. That's the idea of these cognitive light cones. There are two things I'll say about that. One is that I think these cognitive light cones interpenetrate.

[07:31] Michael Levin: In other words, let's take a human body, for example. There are many subsystems that have their own inner perspective and their own goals that they're following in various problem spaces. That doesn't just mean three-dimensional space. Your body is home to all kinds of structures that live and suffer and strive in other spaces: physiological state space, metabolic space, transcriptional or gene-expression space, anatomical space if you're an embryo or something like that. We're not very good at recognizing these. And there are many of these that cooperate, compete, and so on, all at the same time. And that leads us to the second point, which I think is pretty critical: this concept of teleonomy, which is defined as apparent goal-directedness. Now, some people use that word to soften the impact of teleology. They say, well, look, it's not really teleology; it's just apparent teleology. I'm not using it that way. I am full-blown into teleology. I think it absolutely is a necessary concept for a proper understanding. What I think is important about teleonomy is this. It is, in fact, apparent goal-directedness, because it reminds you to take the perspective of some observer. There is some observer who has to make hypotheses about what they're looking at. What problem spaces the agent is operating in, what its goals are, what degree of competency it has in reaching those goals when situations change — all of these are hypotheses from the viewpoint of some other being or, in fact, the system itself. So once you're past a certain level of advancement on that spectrum, you too form internal models about yourself. You have a model of yourself as an agent. Parasites, scientists, conspecifics, predators, and the system itself all have these perspectives on things. And so I think keeping that "apparent" in mind — that all these things are not objective universal truths, but actually some observer trying to make sense of the world as they look at themselves and other things — to me is the idea of these light cones.

[10:24] Gregg Henriques: Lovely. Would you say life gets started 4 billion years ago and then explodes, and we would actually see in the universe, at least on planet Earth, essentially an emergence of life light cones?

[10:43] Michael Levin: I know it's weird for a biologist to say this, but I don't think life is a super interesting or discrete category. What is more interesting is cognition, the spectrum of cognition, and a wide range of those things overlap. If you think of a Venn diagram, the cognitive circle and the life circle overlap quite a bit. They're not the same circle, and you can have things on the spectrum that currently people would not call alive, which is why I'm less interested in that characterization. One thing: if I had to give a definition of life, which I don't, I would say life is what we call things that are really good at scaling up their cognitive light cones. Take a collection of pebbles, which are basically only good at energy minimization and things like that. I don't think that's zero on the cognitive scale. It's very low, but not zero. When you have a rock made of those pebbles, you have not scaled up the cognitive light cone. It's got exactly the same capabilities. Once you have life, it's arranged so the components have little light cones and the collective has a bigger cognitive light cone that actually extends into new spaces. When we see goal-directed systems being multiplexed so that the size of their goals scales up — they get these grandiose, longer-term spatial and temporal goals — we call that life. That's what life is. Things that we would be hard-pressed today to recognize as life can have cognitive light cones, and maybe large ones, at some point.

[12:30] Gregg Henriques: I'll pause and see if John wants to jump in here.

[12:35] John Vervaeke: There's a lot I want to talk about there. I want to build off of Michael's idea of light cones, which I do mention in some of my lectures at U of T, the University of Toronto. I want to note that there are at least two parameters within a light cone, as I understand it: reach and clarity. I think that brings in some of the work I'm doing about cognition, in which I talk about the two meta-problems of adaptivity. If you're going to be a problem solver, there are two problems you're always solving as you try to become a more adaptive problem solver. One is anticipation — and I don't just mean prediction; that's a misnomer. If you predict and can't prepare, that's not very adaptive. We have experimental evidence, for at least living creatures, that this is the case. So I use the term anticipation. You want to anticipate as deeply as you can. Typically that enhances the number and kinds of problems you can solve, because the earlier you intervene in a causal pathway for a problem, the easier it is to solve that problem. It's much easier to avoid the tiger than fight the tiger — that's the idea of the light cone. But the problem with that — the second meta-problem — is that as you increase reach, you increase the problem that has been the besetting obsession of my career, which is the issue of relevance realization. The amount of information that you have available, the amount of information you have to store, all the possible combinatorially explosive combinations go up exponentially. You can't just arbitrarily choose from that what to pay attention to. You can't algorithmically search. You're somewhere between the arbitrary and the algorithmic. This gives you the issue of relevance realization. I have proposed a way in which the two problems depend on each other: you can avoid relevance realization, but then you shrink your cone of anticipation considerably. If you want to increase your anticipation, you increase the relevance realization problem. Predictive processing always tries to minimize error, and it hits inevitable trade-off relationships of error. If it tries to reduce bias, it increases variance. If it tries to reduce variance, it increases bias. If it tries to reduce the errors of exploration, it will crash into the errors of exploitation. There are all these inevitable trade-off relationships. The idea is that predictive processing will create component processes that give what's called an "optimal grip" on the world. That's what I mean by clarity. It's not just that you reach out well; you know how to optimally grip what falls within your light cone. That's how those two go together. What comes out of both recursive relevance realization and especially predictive processing is this idea of mutual modeling. In predictive processing, you always have to model yourself. I don't mean model yourself as a self. You have to model yourself when you're modeling the environment, because you have to deal with conflation errors. Stuff that's happening because it's inside you gets projected onto the environment. You're always trying to model the self to some degree to discount the errors caused by your own embodiment.

[16:30] John Vervaeke: This is the great insight that predictive processing runs off of. Don't try to directly predict the world; predict yourself interacting with the world. That will help to solve those problems in an interlocking fashion. What you get is: when you're modeling the world, you're always to some degree modeling yourself, and as you're modeling yourself, you're always to some degree modeling the world. The two are interpenetrating. I think that goes a long way towards the teleonomy that Michael was talking about — that there is something like self-modeling going on. For me, and this might be where Mike and I differ, I think that self-modeling and relevance realization depend on a system in some sense taking care of itself. My argument is to the effect that relevance realization is always caring about this information rather than caring about that information. By care, I'm not meaning the experiential affect. I'm trying to use this in a very broad, almost Heideggerian sense. Caring for yourself is what gives you the capacity to genuinely care about this information or that. This information matters to you, that information doesn't, initially, perhaps, because that matter actually matters to you: you literally have to take it in or you're not going to continue. I think that relevance realization grounds out in autopoiesis. That's something we can talk about. I do think life represents a significant capacity change. We can talk about whether or not there is cognition without caring, or whether you have an analog for caring going all the way down, Mike. I'd like to hear that, because as you know, I'm very interested in this deep continuity. I would put two points to you at a more abstract level, and then I'll stop talking. One is that if we are non-reductionists, and if you have a continuum with non-reduction, differences of degree eventually become differences of kind, because with non-reductive continuums you have to have properties at upper levels that aren't in lower levels. I think you get real emergence, and that's a difference in kind. I think that is a way in which your continuum and Gregg's series of cones could plausibly mesh together. Here's my final point. This is the point that I've also been doing a lot of work on, and Gregg and I did a lot of it together on our "Transcendent Naturalism" series: as we start to get this understanding of cognition, we see it as properly transjective — always between the system and the world, always between the organism and the world. That means these discoveries about minds are ontological discoveries about the structure of reality itself. Those two have to be understood together. I get it as a continuum, but we talk about it in levels, and I accept that distinction. The levels are properly epistemic; the reality is a continuum. What I mean by that is: as we find levels in the mind, unless we're willing to bite the bullet of a profound solipsism and skepticism, we have to say that there's something corresponding in levels of intelligibility in the world. That's an ontological claim. For me, that means we are deeply committed to a different kind of ontology than the flat ontology that we have been doing science in for quite some time. I won't belabor this: it's something of a deeper recovery of an older Neoplatonic ontology rather than the sort of flat ontology we've been working with. I think this is important because it can ground a spirituality that is not just about psychological hygiene, but about genuine epistemological and ontological realization.
Mike, you always say tremendously provocative things and I wanted to respond to them in kind.

[20:27] Michael Levin: Here's how I think about this question of when differences in degree become differences in kind. This is why I called my framework TAME, as in Technological Approach to Mind Everywhere — because I really want to ground it. Not because technology encompasses everything that there is; obviously that's not the case. But the technological approach is interesting for the following reason. Let's imagine the paradox of the heap. You have a pile of sand and you start taking the grains off, and you ask, at what point is it no longer a heap? Here's what I think all of these claims are, including any cognitive claim about what systems can and can't do in terms of intelligence. I think these are all interaction protocol claims. They're engineering claims in the sense that what you're really saying is: here is a way I can interact with that system. For example, let's talk about the heap first. If you tell me you need to move a pile of sand, I don't want to know whether it's a heap or not. Here's what I need to know: am I bringing a spoon? Am I bringing a shovel, a bulldozer, dynamite? What are we bringing? There will be lots of scenarios in which either one — a big shovel or a small bulldozer — will do. I think all of these things are fundamentally a claim about what is the right way to interact with something. When you tell me that a given system is somewhere on this spectrum, I'm less interested in finding sharp categories and looking for emergent new phase transitions. I'm much more interested in the question of what tools you are telling me are going to be appropriate. If you're telling me something is a simple machine, I understand the tools are rewiring and hardware modification, and that's all you've got. If you tell me it's a cybernetic thing, I have tools of resetting set points and other aspects of cybernetics. If you tell me that it's a learning agent, I understand we have training and behavioral science tools. If you tell me that it's at the level of human discourse or above, that means I have certain other tools, and also I may be changed by the encounter. Unlike with a simple machine, after we're done exchanging, I'm also going to hopefully benefit from your agency, and we're going to have a different relationship. All of these things are not about looking for categories. They're about looking for ways we're going to relate to whatever system is in question in a very specific way. I say engineering, which is applicable to all of the left side of that spectrum; after that, it becomes other things: psychoanalysis, love, and friendship. That's what I think these things are: interaction frames that we take up. A lot of people have philosophical pre-commitments to where things are. They'll talk about category errors. They'll say it's a category error to say that cells and tissues can think or have intelligence. In the Middle Ages, it was a category error to think that the same forces moved rocks on Earth and celestial objects in the sky. Except that these categories need to evolve with the science. These are all empirical questions. We don't get to sit back and have feelings about what is and isn't intelligent. We have to do experiments. You pick a frame and you try it: you make a hypothesis about the space you think the system is working in, the goals it has, and the degree of competency you claim it has. Let's try it. Do the experiments. We'll intervene in some way. We'll see what happens. Then we'll know: am I overdoing it? Am I under-recognizing mind?
Am I over-recognizing it? Then we pick. It's a scientific problem about optimizing relationships in the end.

[25:02] John Vervaeke: That's great, and I'm mostly in agreement with that. Two things come to mind. There is still a proper philosophical job here, in that scientific experiments presuppose things that therefore can't be given by scientific experimentation. That doesn't mean the philosophical level gets to dictate. It means that the two discourses have to continually talk to each other. That's why I'm a cognitive scientist. For example, the model you propose, which I think is good, has a fundamental presupposition of relationality being central to a grasp on ontology. That opens up a question: notice that information and intelligibility are inherently relational things. Maybe we should be prioritizing relationality over the relata in our ontology. That's the kind of philosophical question that emerges by reflecting on what is presupposed in the science. Mike, you're actually doing work that is pushing towards that — that's saying, pay attention to the relationality over the relata and prioritize it. That is a deep and fundamental challenge to our standard ontological grammar, which goes back to a Cartesian substance ontology, where we talk about individual things having properties and being able to exist independently. There are a lot of people, and I'm one of them, who say we need to challenge that fundamental Aristotelian ontology in order to actually accommodate into our worldview what the current science is disclosing. What do you think about that?

[26:43] Michael Levin: I think it's exactly right. If you ask some people what is the central thing that persists through time, they'll say, well, it's genes. And somebody else will say, well, it's information. And what I think it is, is perspectives. A perspective is a chosen reduction of all the stuff you could take in from some vantage point. You agree to ignore some things. You emphasize other things. So perspectives are what change, evolve, interact. I think it's all about interaction and perspectives. Observers, perspectives, interactions — I think that's the basis of everything we have to do in science.

[27:25] John Vervaeke: And that's very similar to Ladyman's structural realism: what is persistent across all the sciences are these kinds of broad, real patterns by which we're doing this compression and selection of information. And what survives are not the particular semantic contents we give to them, but these structural patterns. I think that's something deep. And so you have not only a Neoplatonism up and down, you have a Neoplatonism across time, which I think is really interesting. So I'm going to stop for a bit, because you and I are starting to get into a rhythm and I don't want to exclude Gregg at all. I really appreciate this, what you're doing. I make reference to your work a lot. I think our work is complementary and we mutually strengthen each other's positions in a way that's intellectually respectable and justifiable. But I do think the same thing is the case for my work with Gregg. So I want Gregg to talk now.

[28:33] Gregg Henriques: That's a nice segue, because I do want to check in with you, Michael — you're doing such intense, brilliant theoretical work. You and I touched on this a little bit in our private conversation. John and I talk about this meaning crisis. I'm a clinician. I'm deeply concerned with how we see ourselves as human beings and what science says about what we are, what we know, how we think about it, and how that connects to wisdom traditions in a particular way. I see your work as brilliant empirical work that opens up and challenges certain pre-existing notions that have at times dominated the paradigmatic natural science view, or at least it opens up a wide variety of different perspectives. As a psychologist who looks at the way people think about themselves in the world, I hope we evolve to new frames. John and I are doing a series called Transcendent Naturalism, basically anchoring us in a naturalistic way to the potentialities of transcendence at individual and collective levels. What kinds of worldviews afford that, and what kinds of scientific understandings of the world afford that? I'd really like to hear your thoughts about that. What has been your experience as you open up this realm, as you share this teleonomic perspective and open up our thinking about light cones across a wide variety of different domains? What does it say about us and the universe from a scientific perspective? And what does that mean?

[30:12] Michael Levin: So I'll say something general first, and then I'll dive into a specific example of what I think this means. Overall, I think the whole crisis of meaning thing is incredibly important. The work that I try to do, I view very strongly as trying to climb out of it, not trying to reductively dig the hole deeper. This is really important because I'm not a clinician, but I get tons of emails from people who say, okay, I've read your paper. I understand that I'm a collective intelligence of cells. And now I'm not sure what to do with myself anymore. What should I do? Maybe I've read some Sapolsky and now I don't think I have free will anymore, so I'm really confused and I don't have any idea what to do. So I think that's important, because it's critical that the stuff we do is seen as providing a way to climb out of the things we were told by evolutionary theory, by neuroscience, and by physics — that it's all about competition and survival of the fittest. There were a lot of bad ideas that needed to go, but now we've got to climb our way up the other side of this and rebuild on a better foundation, rebuild some of the things that are necessary for us to flourish. I think that's partly what we're doing. A huge part of that is the whole diverse intelligence field and this idea of building tools that go beyond our very narrow monkey-brain affordances for recognizing other kinds of minds. Once we are able to recognize other sentient beings around us, and we commit to this notion of enlarging our own cognitive light cones so that we actually can recognize and have compassion for beings that don't look like us — they don't have the same origin as we do, they're different in every way — I look forward to a future in which the kinds of distinctions that we currently make within normal human variation ("oh, these are like us; that is other, they're not like us") are going to be laughable when a freedom of embodiment takes off. You come into this world and you're not stuck with whatever body evolution happened to give you or whatever genetics happened to land you in. The diversity of bodies and minds that are going to be out there is going to make all these current distinctions completely laughable. I think we have to mature; we have to drop a lot of old categories, which made sense in olden times, but they don't make sense anymore because they don't actually capture what's unique about beings worthy of compassion. That's the general stuff. I want to say one thing about the more specific issue of what we are. This goes back to John's point about the problems that any being faces. There's one more interesting problem, which is this. It goes across scales in evolution and is called Bateson's paradox. The idea is that if you're a species, the world is going to change and you've got two options. If you don't change and try to remain the same, you're done for; you're going to disappear. If you do change, in a certain sense you've also disappeared, because now you're something else; you've changed. So every agent faces this problem: if you're going to persist, or learn and improve, you are not going to be the same. Committing to a static representation of what you are is doomed. It's doomed at the evolutionary scale; it's doomed at the personal scale, for the following reason.

[33:48] Michael Levin: This also goes back to the point that John raised about the salience of information. Imagine the butterfly–caterpillar situation. You have a caterpillar; the caterpillar lives in a two-dimensional world, eats leaves, and it's a soft-bodied creature, so it's a very particular kind of controller you have to have when you can't push on anything; there are no hard parts. It has to turn into a butterfly. In order to do that, the brain basically gets dissolved: most of the cells are killed off, all the connections are broken, and a new kind of brain is built. One amazing thing that has been found in various systems is that the butterfly or moth actually remembers things that you train the caterpillar on. Memories persist. You might focus on the question of where the memory is: if you refactor the brain, how do you still have it? That's a fantastic question for developmental biology and computer science. We don't have any memory media that work that way. There's a deeper issue here, which is what it is that they learn. You have a disc of a particular color, say purple, and the caterpillars learn that they get fed on this purple disc. When you get the butterfly, it will go there and try to eat. Not only do butterflies and caterpillars not eat the same stuff—caterpillar wants leaves, butterfly wants nectar—but the physical embodiment is completely different. It's not enough to keep the memory as it is; the memory as it is would be completely useless. You have to transform that memory, keep the salience, dump the details, and remap it into a new form. In your new life, in your new higher-dimensional life, because the butterfly lives in a 3D world, you will not keep the memories of your past life, but you will keep the deep lessons you learned. You're not going to know that moving certain muscles in response to a certain stimulus gets you to leaves. You don't care about leaves. You don't have those muscles anymore. You have something completely different. Being able to remap across when everything changes—remapping that information—is really fundamental. When we think about what we are, here's what I'm getting at. The butterfly–caterpillar example is really extreme. Planaria learn, then you chop off their heads and they regrow a new brain and they retain their memories. We don't do that.

[37:25] Michael Levin: I think this is all of us. We are absolutely that type of being. We are not a static structure whose job is to keep that structure intact against all the things that happen. Fundamentally, at any given moment, you don't have access to the past; what you have access to are the engrams, the messages that your past self left for you in your brain and in your body, and you have to interpret those. Puberty will alter your brain in various ways. All your priorities will change. Your preferences will change in many ways. When you're 90, you will still have memories of your childhood, but not because you've kept them; there is no molecular structure in the brain that stays the same for that period of time. Everything's bubbling around; molecules come in and out; cells come and go. What you are constantly doing is reconstructing yourself and your memories to make them applicable in the new scenario. What does this look like across scales? For the human, it just means that as things in your brain and body go in and out, you are maintaining a coherent self-model of some sort. In evolutionary terms, it means that evolution, long before we had brains, doubled down on this idea that everything is going to change. The environment is going to change; your parts are going to change because you will be mutated. We know you're going to change. This is why we have examples where we make tadpoles with an eye on the bottom instead of in the head, and they don't need new generations of adaptation. They can see, and they can learn in visual assays immediately. There are many amazing situations where you can radically change not just the environment but the parts themselves. You can put in weird nanomaterials. You always get something coherent, because what biology does is assume that you can't just learn the structure of the past. You have to make problem-solving agents, and the body and then eventually the brain and the mind are continuously reconstructing, because everything has changed. There are other things that could be said about that, but I'll stop in a minute. One of the things we're learning is that if you want to know what we are, it is less plausible to think of ourselves as some sort of static structure that tries to hold on to the engrams of the past. We are a continuous process of sense-making and reinterpretation. We now see that across scales — from the evolutionary to the molecular to the developmental, from the robustness of the body to the robustness of cognitive systems — and confabulation and the noise and the unreliability of the substrate are not bugs; they're features. That is what makes us intelligent and robust, because you assume right off the bat that everything's going to change and that our number one fundamental capacity is to remap onto new scenarios. If you think about what happened in computer science and robotics, they went a different way. We work hard to make sure that hardware always works correctly, and then we code on top of that, knowing that our hardware is reliable. You end up with a completely different set of systems versus what biology does, which knows all the stuff underneath is going to change — it's going to die, it's going to mutate, it's going to be poisoned — and we're still going to remap. This is one thing we're learning about what we are.

[41:02] Gregg Henriques: That's really fascinating. Have you ever come across relational frame theory? I don't know if it's fair to call it a bridge off of Skinnerian theory, but what it's saying is that the operant is a relational set of patterns rather than a particular thing or stimulus. What you're doing is tracking patterns and being pulled into operant patterns through relational frames. Listening to that, it's consistent with John's mention of structural realism — the idea that what we can track in the world are patterns. And if we're building our recursive relevance realization salience structures in an unbelievably changing world, what are the things that we can track? Pattern relations might be the thing that affords our cybernetic goal tracking.

[41:54] Michael Levin: One last piece to throw in there: what biology does is take a complex state of stimulus and effect, squeeze it down into a very compressed representation, and then try to re-expand it. The caterpillar learned all this stuff; it gets squeezed down into a molecular substrate and then re-expanded or remapped onto the butterfly. Just two quick things about that squeezing. One is that this squeezing-and-expanding thing is everywhere. In metazoan organisms: you have your organism, you squeeze it down to an egg, you re-expand. You and I having this conversation: I have a complex brain state. If I gave you a spreadsheet of all my neuronal activation levels, that would do you no good, because your brain is different. What we do is use language; we squeeze it down to a simple, low-bandwidth message. You will have to re-expand and reinterpret that message. Do I know that you re-expanded it the way that I did? No, but we do our best. You can think of science this way: writing papers and giving talks is like this squeezing down. I've been thinking a lot about what the features of the architecture are that would allow this, that would enable this kind of amazing process. One of the things that struck me was that William James had this really cool line: "the thoughts themselves are the thinkers." If you dissolve — and I just like doing that, dissolving boundaries between things — the boundary between data and the cognitive system that operates on that data, you might say the data isn't just passive. The thing you learned isn't just a passive thing that sits there hoping for this other cognitive system to come and read it and remap it. It may have a little bit of activity on its own. Maybe it's got an agenda. Maybe the agenda it has is to be properly or optimally placed in some cognitive system. Maybe it "wants" to be understood — I'm using quotes because I don't know to what degree — but I actually don't think that there's a sharp boundary here. Memories are not actually passive; to connect to the relational frame theory you mentioned, these patterns, these frames, and even perspectives have a little bit of agency to them. They help. The reason that any of this works is because it's not just a passive molecule; these things have a little bit of activity in terms of working to get themselves remapped. It's a two-directional thing. That's some stuff we've been working on lately.

[44:53] John Vervaeke: I want to reply to a lot of this. This is really rich. I want to start with that idea of a bidirectional conformity — it's not only that the mind is conforming to the world, but the world is conforming to the mind. Of course, you might get tired of me doing this, but this is a Neoplatonic claim, right? This is the central idea behind what I call participatory knowing. It's not just a passive reception; it's a co-shaping, it's a mutual affordance, it's a coming together, it's a logos. I think that is deeply right. I think that's at the core of what I try to get at when I talk about knowing as in-between, and I think relevance is a cognitive psychological phenomenon that is exactly that. We aspectualize the world, but relevance isn't just objectively given. We don't just read it off, but we don't just project it onto an empty canvas either. The world and us shape and coordinate each other so that we fit together. This is analogous to how niche construction works. There's activity on both ends; there's shaping on both ends. So I think that's deeply right. What you just said about compression — in the 2012 paper we did on relevance realization, we talked about compression and particularization as the engine of how you get the mind to be doing something that is structurally the same as what evolution is doing. You get the variation and then the compression. This means that noise in the system is inherently valuable, as you indicated a few minutes ago. Machine learning is finally figuring this out: at many stages you have to throw noise into the system to break it up, so it doesn't get locked into local minima and can explore many more environments than the one it's getting locked into. That maps onto your butterfly example, and this is L.A. Paul and transformative experience. Human beings go through these profound changes. She does the Gedanken experiment of someone offering to turn you into a vampire, which is very much like your butterfly example. The problem is you don't know what it's going to be like, what your perspectives will be. You don't know who you're going to be, what your preference structure is, your traits. So you don't know if you should do it or not, because you're deeply ignorant. You can't inference your way through with standard decision theory. She says you face this when you decide to have a child, take up long-term education, or enter a long-term romantic relationship. This is exactly right. Transformative experience is pervasive in our cognition. When you put that together with what we said about noise, our model of rationality has to be fundamentally changed. Here's the point, and this is Agnes Callard's idea of aspiration: "Well, I'm not very rational right now and I'm aspiring to be more rational. I'm actually aspiring to go through a transformative experience." So this is central to being rational.

[48:22] John Vervaeke: Being rational is a normative demand that I become more rational than I am. And it's not just a quantitative more; it's qualitative, it's a transformative experience. So somehow these non-linear, non-inferential processes are central to being a rational agent, because rationality is fundamentally a transformative experience. And what I'm saying is this feeds back. That rationality also has to take account of this perspectival and this participatory knowing. We're not representing things over there. As you're suggesting, Mike, we are participating — the world and us, participating together in the co-instantiation of important real relationships. I think, therefore, that Bateson's paradox actually slams into the paradox of self-transcendence, which is: if I become something other than I am, then it's not self-transcendence, because something other has come in. And if I just extend what I am, then it's not transcendence; it's just growth. That paradox is only a paradox if you have a static, single model of the self. But if you have a model of the self that is flowing, a model of the self that is inherently collective and flowing, the way you're doing it — put those together and you get multiple, mutually evolving selves. I don't think we are a self in any kind of monadic sense. I think a lot of therapy — parts work and IFS — shows we are properly dialogical. We are dialogical within, we are dialogical without, and trying to find the sole thing that is the self is a mistaken category. This becomes important because when you look at debates — I'm teaching a course on the self right now, and Gregg and I, with Christopher Mastropietro, did a series called The Elusive I — people will say the self isn't real. They admit that all this stuff we're talking about is going on, but that's not a self. Why? Because it doesn't give you something like a soul, a single monadic substance that's the unchanging bearer of properties. Then they say, therefore it's not real. I turn around and I say: then by that standard, nothing is real, because what science is showing us is that nothing is a substance. So all you're saying is that the self is as real as everything else, or as unreal as everything else. I think saying everything is unreal is a useless thing to say. I don't think that gets you anywhere or advances anything. So I think this self/no-self debate is ultimately pointing to something deeply continuous with the biology you've been arguing for: we are facing a fundamental transformation in what we understand the self to be — dialogical — and in what we understand rationality to be. I think those two things are profoundly important at a cultural level, but if you've agreed with the argument I've made, they ground out in deeper stuff in the biology and in the physics. I think this gives them powerful plausibility, because we're proposing a fundamental paradigm shift. Here's the final thing I'm going to say. I think that mutual transformation of the notions of self and rationality is crucial to getting out of the meaning crisis. As long as we remain in that Cartesian framework, we are locked. We are locked into nominalism. We are locked into dualism. We are locked into antagonistic processing. We are locked into many of the central drivers of the meaning crisis.

[51:58] Gregg Henriques: Did you want to respond to that, Mike?

[52:02] Michael Levin: I have things, but please go ahead.

[52:04] Gregg Henriques: One of the things that I would be looking for, and this is what John and I are doing in Transcendent Naturalism, is to consolidate certain kinds of messages that afford people ways of gripping the world, that enable them to make sense of their lives and make meaning in their lives as ecological agents. In a particular exploration of design space and finding that kind of participatory relation, there is a way to embed oneself on the cusp of this emerging arena of relation, I believe, that many wisdom traditions have identified as being fundamentally core to one's sense of being present in the world. To me, one of the things that your work is doing — and one of the things that so drew me to John's work as a way to share with people ways of being in the world — is that it points in a particular direction, scientifically, philosophically, and participatorily. In many ways, at the core of being in the world, there's a relationship to the world that emerges in this dynamic process. I think, from both of your work, that is a very, very important transformation for us to communicate to society and embrace as we go through this. Gripping these elements — embedding our grammatical structure of relating to nature, to the world, to the future in a particular way — is deeply important to me. I just wanted to make that point and resonate with it.

[54:05] Michael Levin: I love all of that. I think you're absolutely right. I think it's critical for people to realize that when we reimagine what the self is and take ourselves away from this notion of some kind of monadic substance, it's different from what you described before — the claim that everything is equally illusory, that there's nothing there at that point. That's a deeply destabilizing concept for a lot of people, and I think that's where they think we're going. An example I use to try to help people think about this is the following. It is true that we are patterns more than anything else. But take a rat: you train the rat to press a lever and get a reward. If you zoom into what's going on here, you've got some cells that have interacted with the lever, and you've got some cells that got the sugar of the reward. They're not the same cells. There is no single cell that had both experiences. So who owns the collective, associative learning that just took place there? There's this rat, which is a group of competent subunits, and there are some mechanisms — that's the research program, to study them — that I call cognitive glue. That's what we work on: to figure out how something has appeared here. It isn't nothing. Something has appeared here, which is a pattern that has memories. It can have goals, it can have preferences, it can have competencies that the individual parts don't have. And it's perfectly reasonable. Somebody literally said, "I read your thing about this collective intelligence. What do I do now?" And all I could say was, "Whatever amazing thing you were going to do before you read my paper, go do that." You can still do it. You can still do all that. Because even though you're a set of patterns that are interacting in a particular way, you can become a better pattern, a more interesting, richer pattern. And that is what we can do. So commit to a bigger cognitive light cone, to helping others have a better embodiment, whatever it's going to be. It doesn't dissolve all that stuff. It just gives you a new window on it. After all that is said and done, you still have the opportunity and the responsibility of moving forward as that and doing things.

[56:39] John Vervaeke: I want to reply to that. I think that's right. I think getting it clear to people that we're not dissolving, we're revealing or disclosing — disclosing as opposed to dissolving — is what I was trying to argue for. I agree with what you said: tell people to go back and use this to reinterpret, so they can recover what has been lost because of an inappropriate frame. With the Vervaeke Foundation's help, we set up ecologies of practices. We have a practice called Dialectic into Dialogos that helps people get into mutually shared flow states of cognitive exploration, and people discover collective intelligence as something that is phenomenologically present and almost agentic in what's happening. They get the we-space that takes on a life of its own and leads people into each other, and everybody beyond each other into something deeper and more profound. People will say things like, I discovered a kind of intimacy I didn't know existed and I've always been looking for. If that doesn't sound Platonic to you, I don't know what does. That's anamnesis through and through. I agree with what you said, but what I'm also suggesting, Mike, is that we have to do this carefully and ethically, virtuously, and we can reverse engineer practices for people that help them with the recovery, and also with the development of the cognitive light cone — a recovery of a lot of what is lost for people in the meaning crisis. They say they had always been looking for this kind of intimacy, but they didn't realize that they were. We get this across the groups who come in. I'm not pretending it's a random sample. It's obviously self-selected. People are coming in because they have some orientation to my work. So I'm not claiming this is like a scientific study, but it's not nothing either. The fact that many different groups of people — religious, non-religious, many different backgrounds, different places in the world — come together, and this is a reliable thing that happens, I think that's indicating something. So yes, we can tell people: go back and try to recover, do the wonderful things you're trying to do, and don't try and dissect it away because of a Cartesian framework. On the other hand, here's a bunch of new practices, or at least old practices that have been recovered or reverse engineered, in which people can deeply recover a lot of the experience and the learning of what we're talking about here. So it goes from being something they may propositionally assert into being something they procedurally and perspectivally and participatorily realize. That's an important thing to say as well. When people ask, I'm not saying you have that responsibility. I have chosen to take on that responsibility with a lot of people; I'm not taking sole credit. One of the things to say is: by all means say what you say, but also try ecologies of practices that are based on this and see the positivity that comes out of being in these practices. See what you realize and recover in these practices. I know Gregg is doing something very similar. Gregg is a powerful theorist, but he's also creating an ecology of practices, and his work and my work and the foundation's work — we're doing a lot together. There is great risk here: people turning into gurus, weird cult formations, exploitation, money pumping. You have to try to build a lot in to safeguard against this.
But I'm proposing that we could reverse engineer a complex ecology of practices that could be properly understood as spiritual, in that it affords people transformative experiences in which they are recovering this deep connectedness, this intimacy. They're learning, and reality and themselves are being deeply disclosed together — within, without, and between each other. And you're getting the cultivation of a reorientation towards meaning, virtue, and wisdom. This is also something we can say to people now.

[1:01:25] Gregg Henriques: I think that's where you find the bridge from a lot of the is of the science to the ought of humanism, and a new opening for fusion and connection. One thing I wanted to ask you, Mike. I know you focus on continuity, and I know the approach that you take. I'm curious: when you think about the human condition and the human intellect — when you think about the human intelligence structure — what do you identify, if anything, as the thing, or the multiplicity of things, at the root of our explosion over the last half million years into dominating the planet, building technologies, and giving rise to certain kinds of thought? Where do you see that? Do you think much about that particular question? Have you reflected on that? I'd love to get your thoughts since I have you here. It's a switch of topic, but I wanted to check in with you on it.

[1:02:41] Michael Levin: I don't think I have anything brilliant to add over what a lot of smart people have said about the unique capacities of humans and why we're such a successful embodiment. I can say a couple of things. First, someone said, maybe Yuval Harari, that the special thing that humans have is that we're storytellers. I think that's a compelling vision, except that I think all agents are storytellers, fundamentally, from the first bacterium that had to compress a very chaotic, noisy experience into a simple model of what the hell is going on and which of my effectors I can use to improve certain scenarios. You are no longer Laplace's demon trying to track microstates. You have committed to a certain story of what effectors you have and what's going on. I think we're all storytellers. We crank it up to an amazing degree. I think that language is an important part of it in the sense that it's a compressive tool that can compress complex brain states into a simple thing that can be passed on to somebody else for uncompression. It's super powerful, and as much as I like to use various tools of cognitive and behavioral science in other places, I've not seen anything that suggests that language exists other than in brains. I wouldn't claim that it can't; we don't know. I'm not saying it's impossible. I'm just saying we haven't seen anything like that. I think language is key. One weird thing about humans is that we have a cognitive light cone that's longer than our lifespan, which is a bit different. If you're a goldfish, all of your goals are likely achievable. You might have a 20-minute horizon of goals, and you're probably going to live 20 minutes, and most likely your goals are all achievable. Humans are unique. We have many goals that are absolutely not achievable in our lifespan, and we know it. What kind of unusual pressures or capabilities does that unlock? Having goals that you can commit to that are not achievable within your own lifespan — maybe that's something. This becomes very important because, because of AI, people are trying to define proof-of-humanity certificates and these kinds of things. I want to say a couple of things about what a human is and isn't, in my humble opinion. The first thing to realize is this — I have a diagram of it, but I'll pantomime it. You have your standard modern human in the middle, and it's got this gentle glow about it, and all the philosophies about the human attach to it. Going back above it is a very smooth gradation of evolutionary stages all the way back to a single-cell microbe. When you say "the human," which human? The human of today, the human of 100,000 years ago, the human of 300,000 years ago? People say this and that developed very fast. What's "very fast"? One generation? No. Well, then what was going on in between? If you think that humans have responsibilities, that they can be good — where exactly did that start? Can you blame one of these hominid ancestors for what they did, or not? You have this spectrum. I like a continuous spectrum.

[1:05:57] Michael Levin: Down below, you've got the exact same thing on a developmental time scale. Which human? You used to be an unfertilized oocyte. It was a very slow and gradual process of how we got here. Which human are we talking about? Now widen this out horizontally. As we already are, and will be more with technology, we can be modified. I might get some tentacles and live underwater someday, and I would like to see in infrared. What's with these limited retinas? You can be modified biologically and technologically. I can have implants. Today, maybe 2% of my brain is an implant that's helping me out, but eventually it might be 58% of my brain that's some kind of construct. You've got all of this. Science fiction has been on top of this for 100 years, but a lot of people, especially those who talk about AI, are just now catching on to the idea that human is not a sharp category. That raises the question: what do we really mean? I tend to think about it this way: you're going to Mars for the next 30 years, and you get to take something with you. What do you take? What's important that you take? You don't want a Roomba. What are you really looking for? I want a human companion. What does that mean? Is it the DNA? Do you care about the DNA? I don't. A lot of people are into the DNA; on that view, if you change your DNA, you're no longer human. I don't care about DNA. Is it the standard body, so that once you've put wheels on or gotten tentacles or a propeller, you're no longer human? I don't care whether you have all the standard parts that evolution happens to have given you, leaving you subject to lower back pain and astigmatism and all this dumb stuff that we ended up evolving. I don't think that's what we mean by human. I think it's really interesting to think about what's essential about it. I think what we mean when we say human is a certain impedance match between us with respect to the size of your cognitive light cone: the size of your goals and the radius of compassion that you can muster. The mismatch can be in either direction. If the cognitive light cone is tiny, we're not going to have much of a relationship, because you can't care about the same level of thing. Conversely, if you've got a galactic-scale mind, we may not be able to have a normal human interaction. I think that's what we're talking about: the size of your goals and the things you can care about, in the compassion sense of the practical, not the affective, pursuit of goals. That's what I think.

[1:09:14] Gregg Henriques: Lovely. I really appreciate that. Yeah.

[1:09:17] John Vervaeke: I want to respond to that because I think it's important, and I agree with Mike. The discussion around "human" is actually an equivocation. It's an equivocation between some biological notion, which Mike can devastate as he just did, and another notion, a moral-legal notion, which is the person. We've got enough science fiction to let us know that you don't have to be human to be a person. When we try to find some anatomical locus of personhood within a biological humanity, that is a doomed project from the beginning. It will not work. I think a lot of the tech people and the AI people are bumping into this. As we've said multiple times, they're working with old categories and old schemas, and they're saying often equivocal and sloppy things. Mike, you brought in the notion of compassion. This is ultimately a Kantian, but more properly a Hegelian, move. Persons are beings that can recognize each other as having moral responsibility and moral obligations. I can obligate you as you obligate me. Do unto others as you would have them do unto you: the golden rule. I'm compressing a huge amount of much more sophisticated argument. This is a notion of reciprocal recognition of our responsibilities and our authority. I can obligate you. I can say, don't do that because that's immoral. I don't have to appeal to your desires. I don't have to appeal to your projects. I can just say, don't do that; that's immoral. If you're a moral agent, you are at least responsible to that. You don't have to agree with me, but you're responsible. Hegel said this is when we become geistlich: we become spiritual beings when we become capable of this reciprocal recognition of moral authority and moral responsibility, such that we are no longer driven just by our desires. We can be driven by what we are obligated to do. Reason is that kind of obligating thing. You should conclude this because of that, and I can say that to you regardless of your desires. In fact, we criticize people for motivated reasoning when they deviate, because of their desires, from what they should conclude. I think that compassion, if you understand it more broadly as this reciprocal recognition of normative responsibility and normative authority, is what we're talking about when we're talking about personhood. Notice we do that even with human beings. We don't hold two-year-olds to our moral obligations. We say, well, they're persons. Well, they are and they aren't; they're in this nebulous status. They're persons in that we have moral obligations to them, because by undertaking those moral obligations we will actually turn them into persons. But we don't let two-year-olds get married. We don't let them vote. We don't let them bear arms. We don't let them drive cars. We can hold them in a location without it being kidnapping; we can force them to go where we want. Many of the standards of personhood we don't grant them. What needs to be done is a clean separation of this discussion from "human," which can mean some kind of psychosocial-biological entity. I agree totally with you, Mike: trying to pin that down is a fool's errand. I think the reason people are trying to pin it down is that they're trying to find a place for personhood, and I know you don't like it, but I think that is a category mistake. Personhood is different from, and not locatable in, a psychosocial-biological entity. It's about this capacity for mutual recognition.

[1:13:33] Michael Levin: I don't disagree with that. I think that's exactly right. I would just say that it's a matter of degree. That's all I'm saying. For example, take the legal system: we've arbitrarily decided that 18 means adult. It's total nonsense; nothing happens on your 18th birthday. However, at least in the US, if you want to rent a car, you've got to be 25. Why 25? They didn't do what the legal system did, which is just to guess and set a number. They have actuarial data, and they realized that 25 is when your brain is mature enough that you can be trusted with a car. That's empirical. So I think it's about understanding that it is a continuum and that certain things develop faster than other things. It's way beyond my pay grade to try to figure out a legal system that will work in a future of hybrids and all this stuff. But just as a first step, accept that it's not a yes-or-no thing. The Twinkie defense is crazy, but serotonin actually does make neurons go. There's going to be a spectrum, and we need to figure this out. That's part of it. I agree with you. I can think of too many in-between cases, and I think they will all show up. Right now you've got people that we say are non-neurotypical. Wait till you see what's coming. When everybody's got all kinds of modifications, when somebody's got a third hemisphere grafted on and now they've got extra IQ points, you might say the rest of us wouldn't have been held responsible, but you really should have known what you were doing, because you've got that third hemisphere. These kinds of things are eventually going to show up, and we're going to have to figure it out.

[1:15:27] John Vervaeke: I agree with you. That's why I brought up the example of children: we don't have a definitive point at which they become persons. In fact, we have this weird capacity, which we can even exercise to some degree in the raising of dogs: if we treat things in the right way as persons, they start to approximate personhood. And people have seriously, and I don't mean just sloppily, reflected on whether psychopaths, people who seem to be amoral, blind to moral normativity, are properly persons, precisely because they lack the ability to undertake that reciprocal recognition. Again, I agree with you. I wasn't proposing a hard cutoff; I was proposing that there's confusion around personhood and humanity. I think calling somebody a human being should be largely a psycho-biological designation. Calling somebody a person means we're bringing in a whole bunch of other criteria, and those criteria are probably going to shift. They're not finally definitive, because nothing is intrinsically or inherently relevant. That goes back to your butterfly. I do think there are mistakes happening around this. That's what I'm trying to point to.

[1:16:46] Michael Levin: Yeah, absolutely.

[1:16:49] Justin McSweeny: Gentlemen, we've reached the time at which we agreed we would come to a stop, so let's wind it down over the next couple of minutes with some closing thoughts, and then we'll wrap this one. I'll say right now that if you want to come back to this show, or if you want to take this over to Transcendent Naturalism and continue the conversation, the option is yours. Go ahead and share some final thoughts. John.

[1:17:21] Gregg Henriques: I'll offer some. First off, it's been a joy. Your continuum of intelligence, Michael, is a beautiful thing to play with; it's an enlightening thing. I deeply appreciate the way you think about it, the way you have researched it, and the way you've articulated it here. For me, I come back to why I built UTOK, the Unified Theory of Knowledge: it affords us a potentially new grip, both in relationship to the world and in relationship to ourselves. It affords a deep ontological continuity and the potential for enormous change going forward. It embeds our understanding of categories in a structural relational patterning, close to the process theology of Whitehead, while also giving a basic optimal grip on energy, matter, life, mind, and culture. There's a continuity and discontinuity that can then frame us and place us as agents in the arena in a particular way that orients us more towards meaning in life, which is John's work. To then get together and jam and riff around that, and have that music come alive here, has been a real pleasure. I've deeply enjoyed it.

[1:18:52] Michael Levin: Thank you, Justin, for putting this together. I think the work that you guys do is super important, and I'm extremely happy that some of the biology and the computer science that we do can be connected with these personal and interpersonal issues, these things that are very important for people. Thank you for doing that. I think it's really important.

[1:19:16] John Vervaeke: Yes, thank you, Justin, for putting this together. Always a great pleasure to interact with you, Gregg. And Mike, I think this is the third time we've spoken, and I'm continually amazed by the deep convergence between our work. We started in very different places, and in some ways it looks like we're tackling very different problems, but when you push on them, they converge in really important and mutually supporting ways. I find that very powerfully encouraging about the plausibility of the overall framework. I'm deeply grateful for your work, and it's always a pleasure. I hope that you and I talk again. We share students here and there, but it would be nice if you and I talked a little more regularly. So I'm just opening the invitation to that.

[1:20:12] Michael Levin: Thank you all so much. Great fun.

[1:20:16] Justin McSweeny: I'll say goodbye to you gentlemen after we stop recording, but I also want to acknowledge the YouTube audience. Thank you, guys. I was jokingly thinking of this as a trialectic in the trio-logos: we had three incredible minds, so that was a frame I had set up. And speaking of frames, I really appreciate that you guys shed some light on the dimensionality of the very large frame we're working with, and also broke some frames on the smaller scale of beingness and selfness: the fluidity and continuation of selfness and beingness, and the psychosocial dynamics of that. So thank you guys so much. This was everything and more that I was aspiring to when I imagined this get-together. Thank you very much; I appreciate your hard work. I am a loyal student, a little shallow in depth, but I aspire nonetheless. I'll say goodbye to you all off camera. Bye, YouTube audience. Thank you.

[1:21:13] Gregg Henriques: Thank you.

