Watch Episode Here
Listen to Episode Here
Show Notes
Discussion: Chris Fields, Mark Solms, Michael Levin
Mark Solms - https://scholar.google.com/citations?user=vD4p8rQAAAAJ&hl=en
Chris Fields - https://chrisfieldsresearch.com/
CHAPTERS:
(00:02) Introductions and Backgrounds
(08:49) Observers, Physics, and Consciousness
(17:37) Engineering and Experiencing Sentience
(27:01) Designing Affective Artificial Agents
(42:14) Bonds, Shared Agency, Collective Minds
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:02] Michael Levin: It's good to see you guys. I was really looking forward to this.
[00:08] Mark Solms: Nice to meet you. You too, Chris. Very good to meet you.
[00:18] Michael Levin: Chris, we can't see you. It's okay, but FYI, the camera's not working.
[00:24] Chris Fields: There we go. Okay, that's better.
[00:26] Michael Levin: Excellent. Do you guys already know each other?
[00:31] Chris Fields: No.
[00:32] Michael Levin: Okay, each person take 30 seconds and give your background and then we'll go from there.
[00:42] Mark Solms: I'm a neuroscientist, Chris, trained in the early '80s, and I was frustrated by how little mind there was in the neuroscience of those days; what should be the most interesting thing about the brain, namely its mental functions, seemed to be treated in a very desiccated way. So I took the rather strange step of training in psychoanalysis, to immerse myself in a discipline that took seriously the subjective experience of the mind. I then spent the rest of my working life trying to reconcile the two, to bridge the vast chasm between those two disciplines. Since I was limited to 30 seconds, that's the best I can do.
[01:40] Michael Levin: No, that was just a suggestion. Take as long as you want.
[01:46] Mark Solms: In recent times, I've been working on—I have always been interested in brain mechanisms of consciousness. In recent times, I became convinced of the view that we shouldn't be using higher cognitive forms of consciousness as our model example. We should be looking to much more rudimentary processes, and specifically to the upper brain stem and associated subcortical arousal mechanisms, which we were always taught are prerequisite for consciousness but are themselves devoid of phenomenal qualia. The evidence increasingly accumulated to suggest that that's wrong, that these arousal mechanisms are intrinsically affective, that they're valenced and have affective qualities. My view is that if we're wanting to understand at a fundamental level what the mechanisms of consciousness are, at least from a neuroscientific point of view, we should be starting there with these fundamentally homeostatic mechanisms that regulate the affectively valenced raw feelings of us biological creatures. That's probably the dawn of consciousness: when organisms became aware of how they're doing within a biological scale of values, which is what affects are. Being relatively simple things, they become computationally tractable. I became interested in the free energy principle and active inference framework more generally, in terms of trying to see, can we define mechanistically the fundamental mechanisms whereby feelings are generated? Very recently, I've become involved in a project where we are trying to engineer such a system, a system that has rudimentary feelings. That's probably a fuller account of what I'm up to.
[04:20] Chris Fields: I was originally trained as a physicist, then got my degree in philosophy, and then went to work on the Human Genome Project. So I'm a lapsed physicist, a lapsed philosopher, and a lapsed molecular biologist all together. I spent a long time in the private sector and out of it, not doing any science at all. One of my motivations for getting back into this crazy game was Damasio's book. I read your Frontiers paper from a couple of years ago to find out something about you, and it was one of those papers where I was able to say "yes" every couple of paragraphs. I really appreciate your point of view and what you were saying about basal awareness, which all seems completely correct to me. What I've been doing recently is trying to understand the free energy principle in terms of fundamental physics. In Karl's group, they're using classical statistical physics; I've been working with some colleagues to try to understand it using quantum theory. We're doing this in part because we're all interested in the emergence of space-time. There is this idea that space-time isn't fundamental, that it's a product of information processing, and the free energy principle is a theory of information processing. We as organisms find ourselves embedded in space-time, but we also have to measure space-time: we have to measure time, measure geometry, measure distance. We have to have the perceptual and cognitive wherewithal to do all of that. When one starts to try to describe those computations physically, they seem to provide all you need to generate at least an apparent space-time. It's straddling the border between biology and cognitive science and physics, which seems like a fun place to be. Mike and I are very interested in this notion of having an account of things that's completely scale-free, that operates in the same way from the micro-scale, however you want to define that, up to a macro-scale that includes not just organisms but communities of organisms, ecosystems, biosphere-like entities. We're very interested in consciousness and cognition as collective phenomena that appear at larger scales because they're also implemented at smaller scales: at the smaller scale, the interactions of the smaller-scale entities implement the capabilities of the larger-scale entities.
[08:49] Mark Solms: Thank you. Let me say what I should have said at the very outset: Mike, I'm very grateful to you for facilitating this meeting, because who else in the world would you want to talk to than people who are interested in things like this? It takes me back to my youth. When you're a kid, you think about these big questions in a naive way, and then you gradually get it beaten out of you by your professors: you shouldn't ask questions like that, they're bad for your career. And then there comes a point where you end up back where you started. The problem is that it requires one to dabble in fields which are not one's own. It's clear from all three of our backgrounds that we found it necessary to straddle different fields and to immerse ourselves in them properly. But in recent years, I've found myself having to immerse myself in fields that I've come to late in life. When you describe your starting points, I often rue the fact that I didn't study physics deeply, and I now find myself feeling like an amateur, with all of the nervousness that comes with that, the feeling that you're not qualified to make observations. This is why it's so gratifying to hear you saying things like what you've just said. Intuitively one feels that everything you've said must be true, must be doable, must be possible; this is the way one should be thinking about these fundamental questions. This is also why it's great to meet you, so that I can interact with, and get around my own deficiencies by collaborating and communicating with, people who do not have the same sets of deficiencies as I do. What you're saying is fantastically interesting, and I've read your recent papers, especially the ones that you've done together. It amazes me and alarms me at the same time, because it makes me realise that, compared to my colleagues, just by going down to the levels I've gone to, the levels of biological mechanisms at the brain-stem level, and speaking seriously about consciousness in terms of these fundamental body-regulating mechanisms, I am already deeply out of step with most of my discipline. But then when I met Mike and became acquainted with the work that you guys are doing, I realized that the level at which I was thinking isn't nearly basic enough. So it's, as I said, breathtaking and scary at the same time to realize: oh my God, I'm going to have to go even further away from where my peers are. But why not? Life is short.
[12:37] Chris Fields: I would say that not only are most neuroscientists probably not interested in basal cognition, and people such as Christof Koch, for example, are deeply uninterested even in the idea, but most physicists are not particularly interested in consciousness or awareness, or in saying anything in language like that, and would like to get away from the idea that physics involves observers, or that science involves observers. This attraction to the idea of describing everything from the point of view of the eye of God runs very deep within the field.
[13:36] Mark Solms: It's clear that that's where the trouble begins. As soon as you try to write out of the script the point where the phenomena are registered as phenomena, you're on a hiding to nothing. It can't work, and you end up with absurdities like the hard problem.
[13:57] Chris Fields: Yes.
[14:00] Mark Solms: So tell me, are you sympathetic with John Wheeler's interpretation?
[14:09] Chris Fields: Yes. I received a preprint of his paper "Is Physics Legislated by Cosmogony?" which was the first paper where he used that diagram of the universe looking at itself. I came upon this when I was still in high school. That had a deep effect on me, not expressed for a long, long time. But yes, I think he's the one person who everybody in the field respected so much that he could get away with talking about all of these issues very openly.
[15:08] Mark Solms: And what has happened to his standing? Is his star rising again in theoretical physics or is he still a relatively marginal figure?
[15:21] Chris Fields: I think his star is very high in the quantum information and quantum computing communities, because they all view information processing as fundamental. This idea that the fundamental ontology is an ontology of information, or of information exchange, is deeply attractive in that community, even if people don't want to, or aren't equipped to, pursue its philosophical implications very deeply. But people like Lee Smolin or Carlo Rovelli are touching on those issues in their more popular writings. That's good to see.
[16:31] Mark Solms: Yes, I noticed that Rovelli, after having said very deflationary things, things like "physicists know nothing about consciousness; we can't comment on these things," has more recently (and I'm only familiar with his popular writings) been realising that what he calls the "relational interpretation of quantum mechanics" speaks very directly to the point that we are discussing now. So it's good to know that it's not only in popular circles that this idea is gaining traction.
[17:21] Chris Fields: Carlo's the poet laureate of quantum information at this point, in terms of writing very eloquently about it.
[17:37] Mark Solms: I see that also, in recent times, Karl is no longer shying away from speaking of sentient machines, and of it being possible for us to reduce all of this to something engineerable. Which leads me to the question that I wanted to ask: would you agree that, because it's all well and good to interpret these things in this way, where we have to go is to engineer something? I like that statement from Feynman's blackboard: "What I cannot create, I do not understand." I feel that if we're on the right track, we must be able to produce consciousness. We must be able to produce something that feels like something, that is not biological, that we've actually been able to engineer. Before I ask my next question: do you agree with me that that's the best way to proceed in terms of trying to demonstrate the validity of this way of thinking, or do you think it's not?
[19:10] Chris Fields: I would say that either one engineers something, and then one is faced with the problem of convincing other people that it has the phenomenal properties one claims it should have, or even that it says it has; or, the other route, which seems equally plausible to me, is to convince ourselves that we've already done that in some sense. Either way, one still has the convincing and explaining problem; there doesn't seem to be any way around it. I suspect that the solution to that will be generational, not argumentative. But I agree: it's certainly a strategy very much worth pursuing.
[20:24] Mark Solms: What you say there about it being generational, I think that might well be so. Certainly there's been a sea change. In very recent years, young people don't seem to be at all dismayed at serious talk about sentient machines. In the media recently, since this LaMDA thing, there's been a whole flurry of talk about it. It really was just a few years ago that you were immediately considered slightly mad, slightly nutty, if you were involved in this sort of project. But I also very much agree with what you've just said about what it ultimately comes down to: the problem of other minds, which is an impossible problem. How do you demonstrate the existence of subjectivity objectively? How on earth does one do that?
[21:34] Michael Levin: Two kinds of thoughts on this question. One is that it always seems strange to me that people find this perspective impossible or unpalatable. I wonder whether people don't read science fiction anymore. Is that what it is? It seems this was addressed many decades ago: if you are confronted with an exobiological agent that shares no mechanistic or evolutionary lineage with you, and you're not prepared to take interaction evidence as evidence, what else are you going to do? If you can't look for the frontal cortex and see if it looks like a human's, what do you have left? This was addressed in sci‑fi a long time ago: unless you're prepared to say that only humans—only things that look like us—get to have this magic stamp on them, it seems completely ridiculous not to be open to novel embodiments for this kind of thing. I don't get it. The other thing, in terms of convincing others, is very important. To me, this dovetails with the problem that what makes theories of consciousness different from all other theories, and what makes consciousness such a difficult problem, is that it's not at all obvious what format those theories' predictions would take. With theories of physics or behavior, you may not know the answer yet, but you know what format it's going to be: some numbers, some observables about this and that. What kind of answer is a proper theory of consciousness supposed to give? Never mind what the answer will be, what form will it take? I think the closest we can get to this, and maybe this has something to do with convincing others, is that theories of consciousness ought to output protocols, not measurements—protocols for making the recipient or conversation partner feel the same state. In other words, some sort of art or output such that, to make you understand what I feel, I produce something that, when you take it as input, will put you in a similar state. And then we have a shared state. This is still very fuzzy, but it seems to me that's the only way. It's not going to be third‑person objective numbers or facts. None of that will do. What might work is putting yourself into the relevant state that I'm in, and then we can share something. So I think it's going to be like that. In the TAME paper, I try to do that progressively, slowly, where there's the objective study of the brain and some idea of what's going on, but really just behavioral and physiological data. But eventually you can take out the middleman, remove the electrodes, and fuse us together—eventually fusing the brains together the way the hemispheres are fused. Different degrees of that connectivity might allow us to gain evidence of consciousness from a first‑person perspective, not a third‑person measurement. I think that's about the only hope we have.
[25:07] Chris Fields: An idea that came to me listening to both of you is provoked by a recent paper from Carhart-Harris looking at the effects of different hallucinogens on the construct of nature-relatedness. The gist of this paper was that psilocybin use consistently correlated well with nature-relatedness, whereas other psychedelics didn't. This may be an aspect of a generational change: nature relatedness seems to be going up with the sustainability movement and climate change. That may be correlated with people being much more willing to sympathize with the idea that plants are conscious or microbes are conscious or at least other living things are conscious. But taking nature relatedness to its limits involves relatedness to mountain ranges and rocks and rivers and all sorts of not straightforwardly biological things. That particular construct may be a useful way to formulate the criterion that you're talking about.
[27:01] Mark Solms: I know Robin Carhart-Harris from about 10 to 15 years ago, when he was still living in London. He only recently moved over to your side of the pond, and I haven't had personal contact with him for a few years. What you're saying now reminds me of an experience in my misspent youth, not with psilocybin but with LSD, of exactly that kind: not thinking, but experiencing. I was in a nature reserve; this was in South Africa. I experienced the fact that I and this rock rabbit that was hopping around near me were both conscious beings and that I had a very deep connection with this thing. We were one and the same; there was a nature relatedness. Much more interesting was the next step in my trip, which was realizing that this didn't stop with the rock rabbit: it applied to the grass it was hopping over, and I had a very moving and affecting experience of the truth of the fact that we are all one, that these blades of grass were of a piece with us. The bond with nature was a very pleasant feeling. The thing I'm reminded of by what you're saying, which I think is the crux of what you're referring to, is that it wasn't a thought, it wasn't an inference, it wasn't a deduction; it was an experience, a direct perception of the fact. So that's an optimistic view that you have there. I agree with all of that, and with what you were saying prior to that, Mike. I'm also dealing in concrete baby steps, because I want to make progress with all of this, as I'm sure we all do, before we die. I don't want to wait for the next generation to wake up and realize that we wrote something along those lines long ago. I want to try at least to be part of persuading our colleagues now, while we are alive. Let's just be frank: these are such profoundly important things, affecting each and every one of us as scientists and as living beings, that the opportunity to engage at a fundamental level with these questions here and now needs to be grasped. That little project that I mentioned in my introduction: let me tell you about two concrete steps that we're envisaging, beyond the ones we're already taking. The steps that we're taking involve engineering this agent, which has basic needs, survival needs. I hasten to add, as you know, Mike, it's not an agent that's embodied in the three-dimensional sense of the word. It is a purely computational agent; it's artificial, in a virtual environment that it needs to survive in.
[30:49] Mark Solms: And it has to find the resources. It needs to breathe, it needs to find energy resources, and it needs to rest, because it needs to repair damage to itself as it's banging around trying to obtain its energy resources. An important part of the design is that these needs operate on different time scales and compete with each other. While you're seeking energy resources, the energy resource depletes relatively slowly, but you have to breathe all the time. So it's constantly having to skip to states where there's oxygen while it's on a longer trajectory toward where it thinks its energy supplies can be found. And while it's heading toward where it thinks its energy resources are, what else is it doing other than representing to itself: this is where I think the energy resources are, and I hope I will find them there? In other words, it's busy palpating its confidence in the policy as it executes it. So that's the basic idea: it's having to breathe while it's heading toward its energy resources. It bangs into things, which gives it a marker, a weighting, of tissue damage. Then it needs to rest in order for that to be repaired. Obviously, while it's resting, it can't be seeking energy resources; all it can do is the oscillatory breathing thing that's compatible with rest. Having done that and mastered that environment, we then change the environment. We move the energy resources and shift things so that the oxygen supply is not evenly distributed, but rather is more available at one end of the environment than the other. We've also created a little hill from which it's able to disambiguate: am I in epoch A or epoch B? And so it also has to learn that sometimes exploring is more valuable than exploiting.
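[Editor's note] For readers who want a more concrete feel for the kind of architecture being described here, below is a minimal sketch of a multi-need homeostatic agent loop in Python. It is purely illustrative and is not the implementation from Solms' group: the need names, depletion rates, and the simple valence and confidence updates are all assumptions chosen to mirror the description above (needs that deplete on different time scales, each monitored as its own variable, and a confidence in the current policy that rises and falls with how the agent is doing).

```python
import random

# A minimal, illustrative sketch (NOT the implementation described in the
# conversation): an agent with three competing homeostatic needs that deplete
# on different timescales, each carrying its own valence, plus a crude
# "confidence in the current policy" that rises and falls with how things go.

# Hypothetical per-step depletion rates: breathing is fast, energy is slow,
# and tissue integrity only drops when the agent bumps into something.
NEEDS = {
    "oxygen": 0.10,
    "energy": 0.01,
    "integrity": 0.0,
}

def valence(level, setpoint=1.0):
    """Valence grows more negative as a need departs from its setpoint."""
    return -(setpoint - level)

class HomeostaticAgent:
    def __init__(self):
        self.levels = {need: 1.0 for need in NEEDS}
        self.policy_confidence = 0.5  # stand-in for precision over the policy

    def step(self, found_oxygen=False, found_energy=False, bumped=False):
        # Each need depletes on its own timescale.
        for need, rate in NEEDS.items():
            self.levels[need] = max(0.0, self.levels[need] - rate)
        if bumped:  # collisions register as tissue damage
            self.levels["integrity"] = max(0.0, self.levels["integrity"] - 0.2)
        if found_oxygen:
            self.levels["oxygen"] = 1.0
        if found_energy:
            self.levels["energy"] = 1.0

        # Each need is monitored as its own categorical variable ...
        feelings = {need: valence(level) for need, level in self.levels.items()}
        # ... and the most depleted one dominates what the agent attends to.
        most_urgent = min(feelings, key=feelings.get)
        # Confidence in the current policy improves when needs are being met
        # and erodes as the most urgent need worsens.
        delta = 0.05 + 0.1 * feelings[most_urgent]
        self.policy_confidence = min(1.0, max(0.0, self.policy_confidence + delta))
        return most_urgent, feelings

if __name__ == "__main__":
    agent = HomeostaticAgent()
    for t in range(20):
        urgent, feelings = agent.step(
            found_oxygen=random.random() < 0.5,
            found_energy=random.random() < 0.05,
            bumped=random.random() < 0.1,
        )
        print(t, urgent,
              {k: round(v, 2) for k, v in feelings.items()},
              round(agent.policy_confidence, 2))
```

In a full active-inference treatment, the policy confidence would correspond to a precision term over expected free energy under a generative model, rather than the toy update shown here.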
[34:37] Mark Solms: You're getting the basic idea, and I want to move on to what I was going to say. What we are envisaging doing next is, first of all, having a conference, a symposium of colleagues; I've just found two of the people that we should invite to it. We want to physically get together so that we can talk to each other, have a drink together, and get to know each other, and work out what sort of criteria we would be convinced by, what sort of behaviour we think would be persuasive. I know that what you just said, Mike, is absolutely true: no behaviour, no numbers, no objective anything is going to prove that the agent feels like something. But the fundamental claim is that those policies in which it has to palpate its confidence, it has to feel; there's no other word for it than that it is feeling its way through the problem. Objectively stated: this Markov-blanketed agent is registering its own states, it's inferring its own states, these states are of existential consequence for the agent, and there's a goodness and a badness to those states that is intrinsic to them. From the point of view of the system, it just is bad as its energy resources are depleting. It just is bad as it loses confidence in its policy. So I think that is a valence state. What else can you call it but a subjective valence state? Because there are multiple needs, they are categorical variables. Each of them has to be met in its own right. They can't be reduced; or rather, they can ultimately be reduced to a common denominator of free energy, but they have to be treated as categories by the agent, because it has to know: how am I doing in this one? These are qualitatively distinctive variables, and it's registering how well and badly it's doing across these different qualitatively distinctive variables. Those just are feelings. That's what it's using in order to make the decisions that it makes, in this POMDP way. I'm persuaded by what you said earlier: first of all, we have to get over our own inhibition about recognizing that we've done it already. I think we have done it already, and I don't just mean my little team; I think we have in all sorts of ways, because we're talking about something that varies, something that exists by degrees. This is the other major insight that, I have to say, I got primarily from Mike.
[38:25] Mark Solms: As obvious an insight as it is: sentience is something that has to exist by degrees, and there must be very simple forms of sentience, which one has to attribute ultimately to the single cell and beyond. But to the extent that the agent is making choices which are rooted in these, I think you can only call them feelings; in other words, in its monitoring of its own needs, its own existential needs, across the different parameters that I was just mentioning. And to the extent that it is able to make choices that are the products of its own cognition, ultimately tethered to these need states, I'm persuaded that it feels like something to be such an agent. I wanted us to have a symposium where we can look at what this agent is doing and talk about what kind of behavior we want to see, what sort of behavior we would predict if we were to do this, what we think would happen, et cetera, just among ourselves, not trying to persuade people who start from prejudices that are insurmountable. Then we can, as a community of relatively like-minded people, come up with some criteria which are at least rational, if not empirically persuasive; criteria that are at least rationally persuasive, that make sense in terms of those principles for those who share the basic assumption that such a thing is possible. That's the one concrete step, and I very much hope I'll be able to persuade you guys to come over and experience the joys of Cape Town for a week or a few days. We'll look after you very well. We can talk about these things and come up with a consensus statement about the kinds of behaviors that would convince us, at least. Then the second thing, which goes more to what you were saying, Mike, is that I've been thinking with my group that, once we've done this, we can say: here, we believe the agent is feeling this; that's how it's doing what it's doing. Here, we believe the agent is feeling that in relation to thinking this. In other words, the numbers are showing what's happening to the precision weighting in relation to the currently active policy across the three different need states, and so on. If it had a feeling at this point, it would feel like this to be that agent. Then we'd work with colleagues in virtual reality to create a virtual reality that we can immerse ourselves in, which presents basically the same situation our agent is confronted by. So I've got these three needs, I'm monitoring them on the screen, and here I am in this environment, trying to find the things that I need and deal with the challenges that I'm confronted with. Do I feel what we infer the agent would feel? I know it's artificial in all sorts of ways, because we're not literally going to die in that virtual reality, but it's at least a little bit more than just empathizing with the system; it's a little more literally putting yourself in the system's shoes. The prediction then, which is the beginning of something empirical, is to say: I predict that at this point I would feel something, or a human being in this environment will report that this feels better and this feels worse; here I'm losing confidence, in the subjective meaning of the word. I don't think it's a very brilliant idea; it's a very concrete idea, but I think something along those lines might be a way to go.
[42:14] Michael Levin: I think, go ahead, Chris.
[42:17] Chris Fields: What you describe is very much like a video game type of environment. There's loads of available data about how people who immerse themselves in video games, which I've never done, feel about what they're doing and interacting with virtual environments that have those kinds of requirements for resources, for repair, et cetera. I don't know of any literature in this regard, but I expect that it would be data that would be fairly easy to acquire, even with physiological monitoring or EEG, as a supplement.
[43:28] Michael Levin: Some data might come of this. There are data on people using various prosthetics, the extended-mind kind of idea, when you get an extra hand or an extra finger or something like this. But in those cases, as far as I know, all of the prosthetics that people use are themselves very low-agency devices; they don't do much on their own. But I wonder if we could imagine your bot as a prosthetic, where the player in this virtual environment is joined to that system the way that two minds would be joined in the same biological body, and the way that people immerse themselves in a prosthetic: this hand that rotates a full 360 degrees, that's now my hand, and that works perfectly well because of neuroplasticity. But I wonder whether, if at the other end there were another agent, you could make a temporarily joint agent within that video game where you do share a consciousness.
[44:35] Mark Solms: A model for that is the experiments introduced about 10 or 15 years ago with the body swap illusion, where the research participant has VR goggles onto which is projected what the other person sees through a camera mounted on their forehead. So you experience the world from the point of view of the experimenter. And in fact, it's also been done with mannequins. I'm Mark Solms. I've got a VR goggle set on my face, and onto this is being projected what the experimenter or the mannequin is seeing, and you rapidly feel yourself to be the mannequin or to be the experimenter. You then shake hands with the experimenter, and you feel as if you're shaking hands with this guy called Mark Solms. You don't feel you are Mark Solms. You feel that guy is Mark Solms, and I'm shaking his hand. To your point, Chris, about skin conductance, if you threaten to stab the mannequin, even though I know I'm not the mannequin, the physiological stress response is greater when the knife is lunged toward the mannequin than when it is toward me, which is objective evidence that I am identified with the embodied space of the mannequin. That's very interesting. It's building upon that paradigm so that what you're experiencing is from the point of view of the agent that is navigating this environment.
[46:45] Michael Levin: It also occurs to me that you said something really important earlier, which was this concept of a bond. I think it addresses both the issue of this generational attitude shift and the attempts to synthetically create these kinds of agents, this developing idea of making synthetic companions of various types, AIs that we're going to interact with in various ways. One thing I hear all the time from some of my dad's friends, and even my generation, is, "I don't want a bond with a synthetic. I want a bond with a human." I argue that you don't really want a bond with a human; what you actually want is a spiritual bond. So what does that actually mean? I don't think most people who say that actually know what's under your skin, and they don't care about the physiology and the genetics. Think about what you want from a bond: is this somebody you're going to be friends with, somebody you might have as a spouse, somebody you might want to take with you if you're going to Mars? What is the bond that we want? Two things come to my mind, and I'd love to hear what you guys think about this. One is that you need to have a shared existential struggle, specifically around all the questions of where I end and the world begins: how do I know what I am, where I am, all of these things at the basic level. I think all three of us have spent a lot of time thinking about what happens at the very beginning, when you have this undifferentiated medium, whether it be cells or neurons or whatever it's going to be. Out of this massive multiplicity comes some sort of semi-unified agent that considers itself a single thing, separate from everything else. So what are the dynamics that lead up to this? It's not obvious. You don't know up front what you are, what kind of effectors you have, what kind of senses you have, what your environment is versus where you stop. We don't know any of that up front; we have to develop models of it. So what we're really looking for is something that has been through that authentic process, not a modern robot or AI where your energy needs are met and everything is given to you, where you've never had that issue. That shared existential struggle, and that cognitive light cone that I talk about, which is this idea of what types of goals, what spaces, and what size of goals you can potentially work towards. If there's a huge mismatch, it's not going to work. It's very hard to be friends with something that has a bacteria-sized cognitive light cone, and I'm sure the reverse is true too: I don't know how you would have that relationship with a cosmic intelligence with a massive light cone, where you couldn't even begin to understand what it's doing. So it's that kind of impedance match between the goal-setting and that struggle that comes from emerging as an individual in the first place. All that goes to say: for young people in the future, who are going to be surrounded by the kind of agents that you're talking about, Mark, and various other things, that's in the end going to be the criterion. It's not going to be the origin story of where you came from or what's inside of you or any of that stuff. It's going to be: what can I count on you to do? What kind of struggle have you been through? What are your goals? After that, it doesn't matter what your provenance is or what you're made of; then you're real as far as I'm concerned. That's where things are moving.
[50:46] Mark Solms: Chris, forgive me; I'll be brief. I think two sets of thoughts, in different directions, flow from what you've just said, the one a reaction to the other. The first thought is that these values, these shared existential values, are of course, for want of a better word, arbitrary, apart from the actual existence of the system. That's at the heart of the free energy principle, or what Karl calls his particular physics, which you guys are building on: the idea of mine-ness or thingness, of a system as opposed to not-system. Apart from that, which seems fundamental, the other values, like fear ("I don't want to be harmed by this thing," or "I don't want to be impeded by this thing," which make it into a not-me type of thing, as opposed to "we're on the same side, we're in the same boat together, your survival is my survival, we're a team, we're a group, it's us versus them rather than me versus them"), so the affiliative bonds of that kind, attachment bonding in mammals, for example, and play, which is a wonderful emotional drive that all mammals, and possibly birds too, share, are relatively arbitrary: they reflect what kinds of creatures we are. So that was the one line of thought: we want something fundamental that doesn't just reflect our mammalian values. That led me to the other thought, which is an extremely simple one. Speaking more formally: if anything justifies drawing a blanket around you and these other things together, if mechanistically it's justifiable to say that your Markov blanket and this other agent's Markov blanket can themselves be justifiably enclosed by a meta-blanket, then you are formally describing the state of affairs that you were describing. That raises the much more reductionist question of what justifies the drawing of the blanket. Because I am stuck in the envelope I'm in, I find it much easier to think of my nervous system as blanketed, and within that, the various nuclei as themselves blanketed, and within that, the individual cells as themselves blanketed, and so on. I find it easier to feel that I am a composite of blanketed agents, which is something, Mike, that you have forced me to see; I have no difficulty seeing it anymore. When I say forced, I mean I was compelled by the force of your arguments to realize that of course that's right. Karl says the same thing, and has for a long time. But I find it much harder to go out beyond that. As soon as I start thinking of myself: is there a consciousness that is shared by all of humanity, or by all living things? Is there literally a consciousness of all living things that you can speak of as such? Formally, I think you can; the whole of natural selection ultimately is just a self-organizing system. But I find it harder intuitively; I want to stop with myself.
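[Editor's note] For readers less familiar with the formalism being referenced, the standard condition that defines a Markov blanket in this literature can be stated compactly (this is the textbook definition, not a claim about any particular model discussed here). A blanket $b$, comprising sensory and active states, is a set of states that renders internal states $\mu$ conditionally independent of external states $\eta$:

$$p(\mu, \eta \mid b) = p(\mu \mid b)\,p(\eta \mid b)$$

The "meta-blanket" move described above then asks whether the combined internal and blanket states of two such agents can themselves play the role of internal states $\mu'$ for a larger system, with some new set of states $b'$ satisfying the same factorization at the higher scale.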
[55:11] Chris Fields: The flip side of this discussion is what would be required to make it experientially evident in the way that you were discussing earlier, that I am a component of a larger system whose experiences I am not having, cannot have, and can't even properly conceive of with the tools available to me, except perhaps in a very abstract way. We have to have this very abstract way to even have this conversation. The FEP gives us that abstract way.
[56:16] Mark Solms: I too have to go in four minutes. To address the second part of that comment, I would love to carry on; not today, I can't, but let's arrange a follow-up meeting. What you're saying there, Chris, that's it. I agree with every word of what you're saying.
[56:35] Chris Fields: I would just propose adding that to your earlier goals, as an important question to try to sort out. This has been great. Thank you.
[56:48] Michael Levin: Amazing, guys. Thank you so much. Super interesting. I will connect us again and we'll make the next one part two.
[56:55] Mark Solms: Marvelous.
[56:56] Michael Levin: Yeah.
Mark Solms: Thank you again, Mike, for facilitating it. And thank you, Chris. Great to meet you.
[57:01] Chris Fields: Yes.
[57:03] Michael Levin: Absolutely.
Chris Fields: Thank you.
Michael Levin: Thanks, guys. Talk soon.
[57:04] Mark Solms: Bye-bye.