Watch Episode Here
Listen to Episode Here
Show Notes
We discuss the relationship between intelligence and agency (since my definition of intelligence centers on competency for goal-directed problem-solving), AI, biological evolution, and what is worth preserving about humanity in the deep future.
CHAPTERS:
(00:00) Embryos, Planaria, Intelligence
(09:52) Biological vs Artificial Agency
(16:56) AGI Risk and Hybrids
(26:49) Continuum, Personhood, Compassion
(34:49) Future Diverse Compassionate Civilization
SOCIAL LINKS:
Podcast Website: https://thoughtforms-life.aipodcast.ing
YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw
Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099
Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5
Twitter: https://x.com/drmichaellevin
Blog: https://thoughtforms.life
The Levin Lab: https://drmichaellevin.org
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
[00:00] Michael Levin: What I was going to talk about is this issue of agency: goal-directedness and problem-solving are really critical, but so is this autopoietic step of knowing, or estimating for yourself, having a model of where you end and the outside world begins. This issue of informational self-construction. One model I have for this has to do with early embryogenesis. Imagine you're a blastodisc of a bird or a human, some kind of amniote: a disc of about 50,000 cells, something like that. Typically we look at this thing and say, okay, there's one embryo, one individual; it's going to be one agent, a duck or a human. But when you say there's one embryo, what are you counting? What you're counting is the reliable alignment of all the cells toward a specific path in anatomical morphospace, meaning that left to its own devices, or even perturbed in various ways, all of the cells will work together to make exactly the right thing, whatever the target morphology for that species is. You can do all kinds of stuff, move things around, and they'll move back. There's all kinds of ingenuity there about getting to where it needs to go. That's what you're counting: there's one job to be done, to build this particular thing, and we all know what it is.
But it's actually much more interesting than that. What you can do, and I used to do this as a grad student with duck embryos, is take a little needle and make some scratches into that blastoderm. When you make the scratches, you separate it into several islands, two or three. The early system that aligns cells toward being one embryo has, among other things, the ability of local activation with long-range inhibition, such that a few cells become the organizer and tell everybody else, "you guys are not the organizer, I'm the organizer," and then organize the embryo. When you separate the blastoderm into disconnected islands, each one can't feel the others because there's empty space between them. Later, they will heal; but until that happens, each one is an individual, and each one becomes a separate embryo. When they do heal, you get twins or triplets. This is very interesting, because in that disc of 50,000 cells, every cell is some other cell's neighbor. So where is the agent? Well, now there are three of them. It's even more interesting because some cells sit on the boundary and are confused about who they belong to; there are all kinds of medical implications of that which I used to study.
So there's this idea that the number of agents or selves in this medium, this Freudian ocean of potential selves, could be zero, one, two, three, maybe up to half a dozen in a typical blastoderm, and you don't know how many there are. It isn't preset. It's not preset by the genetics; it's not preset by the hardware. It's an emergent fact of the physiology, where some number of selves will demarcate themselves from the outside world. I really like thinking that what's important (and that's just one thing that's important beyond goal-directedness) is that process of delineating a self as separate from the outside world, and the fact that you have to do that yourself. That's in contrast to a lot of our robotics and AI, where the boundary is given, predetermined from the outside, and the agent doesn't have any choice about it. These are your limits, and that's it.
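To make that organizer dynamic concrete, here is a minimal toy sketch of "local activation, long-range inhibition" on a one-dimensional field of cells. This is not the Levin lab's model; the update rule, parameters, and function names are all invented for illustration. The one property it shares with the biology is that inhibition is summed only over cells that can "feel" each other, so a scratch that disconnects the field lets each island elect its own organizer:

```python
import numpy as np

def island_labels(alive):
    """Label each connected run of live cells; scratches (False) separate islands."""
    labels = -np.ones(alive.size, dtype=int)
    current, in_island = -1, False
    for i, live in enumerate(alive):
        if live and not in_island:
            current += 1
            in_island = True
        elif not live:
            in_island = False
        if live:
            labels[i] = current
    return labels

def pick_organizers(alive, steps=30, seed=0):
    """Toy winner-take-all: each cell self-amplifies (squaring), then is
    normalized by the summed activation of its own island. The normalization
    plays the role of long-range inhibition and cannot cross a scratch, so
    each disconnected island converges on exactly one organizer."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(0.5, 1.0, alive.size) * alive  # noisy initial activations
    labels = island_labels(alive)
    for _ in range(steps):
        for k in range(labels.max() + 1):
            idx = labels == k
            amplified = a[idx] ** 2                # local self-activation
            a[idx] = amplified / amplified.sum()   # island-wide inhibition
    return a

field = np.ones(60, dtype=bool)
print(np.flatnonzero(pick_organizers(field) > 0.5))   # intact field: one organizer

field[[20, 40]] = False                               # two scratches -> three islands
print(np.flatnonzero(pick_organizers(field) > 0.5))   # now three organizers
```

Run as written, the intact field prints a single winning index and the scratched field prints three, one per island: the toy analog of getting twins or triplets, with the number of "selves" emerging from connectivity rather than being preset.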
That, I think, is very important, and it has all kinds of implications in biology that lead to a lot of the cool things we like about biology. One of them is this weird intelligence ratchet. First I'll tell you about the ratchet, and then how it relates to all this. The ratchet: one of the funny things about planaria, these flatworms, is that they're incredibly and very reliably regenerative. They are cancer resistant. They are immortal; they don't age. And yet they have a really chaotic genome. The reason lies in how they reproduce. Unlike us: if we get a mutation in our bodies, our children don't automatically inherit that mutation.
[04:55] Michael Levin: Planaria rip themselves in half, and then each half regenerates; that's mostly how they reproduce. There are species that do sperm and egg, but there are species that just split and regenerate. That means any mutation that doesn't kill the cell is amplified into the next generation and makes up the body. So they can be mixoploid: every cell can have a different number of chromosomes. It's incredibly messy on the hardware end. And yet the animal with the messiest genome is the one with the most cancer resistance, the most regenerative ability, and the least aging. That seems strange, and it bothered me for many years. Why is it that the animal with the noisiest genome has the most morphological stability?
Recently we studied this in a computational way, and I think we finally have a little bit of insight into how it works. This evolutionary process is not dealing with a passive material. It's dealing with cells and tissues that themselves have various agendas in physiological space, anatomical space, and so on. For example, if I take a tadpole and, in the embryo, move the mouth off to the side, what happens during development is that it fixes itself: the mouth comes back to where it needs to be. If you had a mutation that made that change, your fitness is not zero because you can't eat; your fitness will be fine, because the mouth will come back. The ability of the cells to make up for these kinds of weird mutations and other problems means that it's quite hard for evolution to gauge the quality of the hardware. When you come up for selection, if you're a pretty good tadpole, selection doesn't know whether you're a good tadpole because your genetics was amazing, or because your genetics was so-so but the competencies fixed it up. That means evolution has a hard time improving the structural genome once there's even a little bit of competency. What it can do very easily is improve the competency. And when you improve the competency, it becomes even harder to gauge the actual structural genome. So all the effort goes into increasing the competency. You get a positive feedback loop: the more competency in the individual parts, the less pressure goes onto the hardware, meaning the genome, and the more onto the competency itself.
Steve Frank once gave me a great example. He said that once RAID arrays became popular in computers, the quality of the actual disks went down, because you don't need great disks anymore when you have a RAID array. The pressure on having really low-error media is released because of the competency of the system around it. If you're a planarian, where you know for a fact you can't rely on your genome being clean, your hardware is unreliable, so all the pressure is on developing an algorithm that makes a good worm no matter what the hardware looks like, within limits, obviously. This actually explains a very weird fact: unlike most other creatures, where you can get mutant lines (flies with curly wings, albino rats, things like that), there's no such thing in planaria. There are no abnormal lines of planaria except for the two-headed form we made, and those are not genetic; they're made by altering the bioelectrical memory. There's nothing genetically wrong with them. All of that brings us back to the original discussion.
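As a rough illustration of that feedback loop, here is a toy simulation sketch. To be clear, this is not the computational study Levin mentions; the fitness rule, mutation rates, and variable names are invented. Selection sees only the phenotype after developmental self-repair, so a unit of competency counts exactly as much as a unit of genome quality; but the bit-string genome is constantly eroded back toward randomness by mutation, while competency is not, so the selective gain accumulates in competency:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, GENS = 200, 50, 300             # population size, genome length, generations

genomes = rng.integers(0, 2, (N, L))  # structural genome: bits matching an all-ones target
competency = rng.integers(0, 3, N)    # how many wrong bits development can repair

for gen in range(GENS):
    raw = genomes.sum(axis=1)                     # hardware quality (hidden from selection)
    repaired = np.minimum(raw + competency, L)    # phenotype after developmental self-repair
    winners = np.argsort(repaired)[N // 2:]       # truncation selection on the repaired form
    parents = rng.choice(winners, size=N)
    genomes, competency = genomes[parents], competency[parents]
    flips = rng.random((N, L)) < 0.01             # mutation pushes bits back toward randomness
    genomes = np.where(flips, 1 - genomes, genomes)
    competency = np.clip(competency + rng.integers(-1, 2, N), 0, L)  # unbiased +/-1 drift

print(f"mean raw genome score: {genomes.sum(axis=1).mean():.1f} / {L}")  # typically mediocre
print(f"mean competency:       {competency.mean():.1f}")                 # ratchets upward
```

In typical runs, competency ends far above its starting value while the raw genome score sags back toward the random-bits baseline once repair saturates the phenotype: the toy analog of a planarian with a messy genome but a highly reliable body-building algorithm.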
It means that, because you can't know ahead of time as an embryo whether you have the right number of cells, the right size of cells, or whether your DNA is okay, you don't know any of this. There are many examples in biology where embryos construct themselves despite all sorts of crazy variability, because none of this is baked in. You have to solve these problems on the fly (however many cells, bigger cells, smaller cells) and they're very good at this. Evolution gives us these problem-solving machines; it doesn't just make specific solutions to specific environments. Because of that need to solve this from the word go, I think the kinds of things we associate with biological agents are part of this intelligence ratchet, where in different spaces (some visible to us, some we're very bad at noticing) there's a constant ratcheting of problem-solving capacity, because they have to do this. No one is there to reliably tell them where they end, what their sensors are, what their effectors are, or what space they work in; it's all up for grabs. No life form that took that for granted could really survive nowadays. Maybe early life did, but nowadays that wouldn't fly with all the competition. That's my very long answer to your point.
[09:52] Carlos E Perez: In essence, we have agential matter that has evolved over billions of years and has the competency to solve different problems to keep itself alive and growing. But you're always beginning with this kind of matter, which a deep learning system does not have.
[10:29] Michael Levin: Yeah, right.
[10:30] Carlos E Perez: It's just calculus and linear algebra, right?
[10:40] Michael Levin: The things I think are important are that it's a multi-scale system with problem-solving at every level. I sometimes give a talk called "Why Robots Don't Get Cancer." It's because our current technology only has intelligence, or some degree of agency, at one level: the parts are pretty dumb, and it's the whole robot that you hope has some intelligence. In biology, every layer is a problem-solving thing. That's useful. The fact that it has to construct itself from scratch every time is useful. The fact that it's always metabolically on the edge of starvation, constantly having to find energy, means it has to be really good at doing causal coarse-graining on the environment or else it'll die. That's what makes it different from current devices. I really believe there isn't a fundamental divide; engineers will be able to duplicate this. Today's technology doesn't do this well, but I don't see any reason why, if we learn the lessons correctly from biology (and I'm sure there are lessons we don't know about yet), we couldn't engineer this way later. In fact, Jamie Davies and I wrote this thing called "Engineering with Agential Materials." I don't think it's unreachable for us. I don't think it's magic, and I don't think evolution has a monopoly on this; engineers could do it. But I do agree that today's technologies are not this kind of system. They're different.
[12:18] Carlos E Perez: Yes, I don't disagree with that. I agree that if you train these systems with a particular curriculum such that they need to survive, then they will learn the heuristics to do that. Would that get you to an AI, an AGI, with agency?
[12:46] Michael Levin: I don't like binary categories on almost anything. I think they have extremely low agency, and I think it's possible to make ones with more. We have to be ready for the fact that, up until now, all of these things tracked very well together. Anything that spoke, you knew that it had the same existential struggle that you did, and you could make all these assumptions. For the first time, we see some truly diverse intelligences. We see some things that are problem-solving agents that are not like us at all, and we're now finding that, in fact, you can dissociate some of this stuff. A lot of people are finding it really hard to get away from this binary vision that either it's a dumb lookup table or it's just like us. Those are not the only options. There's a huge space of possible minds with different failure modes and different capabilities, and some of them look like us and some don't. It's easy to be fooled. But I think the solution to all this is a proper grounding in the diverse intelligence field where you start to understand that these are not our only options. There are many ways to have a kind of mind. We're going to have to learn to relate to all these things in a useful and ethical manner.
[14:22] Carlos E Perez: I think one of the problems has been that the only general intelligence that we are familiar with is humans. Cognitive psychologists have built their models based on humans who have agency, so they can't conceive of a general intelligence that is devoid of agency. It's not in their current models. Now we have something that appears to be generally intelligent, but it's completely devoid of agency.
[14:57] Michael Levin: I think one of the biggest issues is that all of our major sense organs point outward into three-dimensional space. We're very good at noticing medium-sized bodies moving at medium speeds and saying, oh look, here's a crow or an ape or an octopus doing something clever, and we can recognize some intelligence that way. But imagine if you had a primary sense of your blood chemistry: another sense, like taste, but aimed inward at your blood, and richer, almost like vision. If we had that sense and grew up with it, we would have absolutely no problem recognizing that we also live in a physiological state space, and that our kidneys and our liver are amazingly intelligent agents navigating that space, because we challenge them with various stresses throughout the day and they do all kinds of interesting things and solve problems for us. We have a hard time envisioning intelligence in other problem spaces. Humans are general intelligences, but we don't typically solve problems in some of these other spaces where individual cells, bacteria, and other things do very well. This human-centered approach is blinding us to a lot of examples of intelligence out there. That's why people are freaked out about suddenly being confronted by linguistic intelligence: these radically different minds are all around us all the time, and we're just very bad at noticing them.
[16:56] Carlos E Perez: So what do you make of the argument that if we continue on this path of accelerating AGI, it poses an existential risk to humanity? Does that make sense?
[17:13] Michael Levin: We have many things that could potentially pose an existential risk that are not necessarily like us or agential; there are many ways we could kill ourselves off. I think it's not impossible that, if we don't learn the lessons of diverse intelligence and fail to understand how to properly relate to these things, we could end up having problems. At the same time, I don't think the solution is to try to stop research. That's impossible, even if it were a good idea. If AI does end up causing a major problem for us, it's not going to be because of anything it does. It's going to be because of our refusal to learn to relate to other kinds of minds in a novel way. All we know is how to relate to other humans, barely. We can relate to animals, not very well. That's it; we're really not willing to understand anything else. If we don't learn that lesson, we're going to have a major problem, and not just because of AGIs. Because of all the biotechnology and advances in biorobotics, we are going to be surrounded, and certainly our children will be surrounded, by beings that don't resemble us at all: cyborgs, hybrots, chimeras of all kinds, augmented humans, augmented biorobots. There's going to be all kinds of stuff in our environment. Regardless of software AIs, we need expanded ways of predicting the goals of new composite systems that we haven't seen before; that's an important science we don't really have yet. Ethically relating to other beings that don't share a path on the evolutionary tree with us, and that have radically different intelligences, is important. If we don't wrap our minds around all of that, I'm pretty sure we're going to have issues, and not just because of the software agents. It will be because of our inflexibility in dealing with radically different beings.
[19:36] Carlos E Perez: The premise of this existential threat, I believe, boils down to the idea that we cannot imagine a kind of intelligence different from ourselves. We look at ourselves and can imagine that humans could extinguish every other agent, so we project that. It comes from having this limited viewpoint or model of other intelligences, which is the same problem you're bringing up: if we don't expand that, we're going to have a problem.
[20:22] Michael Levin: We really need an education in diverse intelligence, in these ideas. It's going to be required, because there's all kinds of stuff coming, not just the software version. I also see some people who are really worried in the opposite direction. I get emails all the time (I'm not even on that side of the field) asking: what am I going to do when the AI is so good at doing all the things I do? But if you can't do things just because somebody else out there is better at them, you could never do anything; I assume that for everything I do, somebody else is better at it. Maybe somewhere out in the universe there are aliens that are way better than us at art and math and science. Fine. Does that mean we can't go on and do our thing now? I'm not bothered by it. I think that's fine; we can use it to raise our game as much as possible. Ultimately, I think everything about us is changeable. Bring it on. I'm okay with it.
[21:50] Carlos E Perez: The expectation here is that it's not just the AGIs that are going to give us trouble. There will be alternative biological general intelligences that will eventually crop up.
[22:06] Michael Levin: One thing: when people say, oh my god, we're going to make these incredibly intelligent agents and release them into the world... we already do that. It's called having kids. We make them all the time. Right now, as we speak, somebody is making a future intelligence with minimal control over its behavior, education, and upbringing. Who the hell knows what it's going to do? Some of them do amazing things. Some of them do horrible things. We already do this, so we already know roughly how this plays out, with all kinds of consequences. Now imagine you've got a human being that's 98% human, but there's a chip in his brain helping him control a wheelchair and maybe adding some IQ points. Over here, you've got a Roomba vacuum cleaner that's 98% robot, but it's got some human brain cells in an on-board culture to help it get around the room. Between those two extremes, every possibility is a viable being. Every combination: 60-40, 50-50, whatever. It's all up for grabs. You've got hybrids where living brains drive weird robotic bodies, new augmented prosthetics, new senses. If you want a sense of the solar weather, you can do that. If you want a sense of the stock market instead of smell, you can do that. All of these things already exist. We're going to have these creatures; people will have hybrid robotics and bioengineered beings. If you wanted to make a mammal with a third hemisphere, we can do that: we can graft on a third hemisphere of the brain, no problem. All of these creatures are going to have novel bodies and novel embodied minds. All of the old categories (it's a human, it's a machine, it's a robot, it's a living organism, it's intelligent, no it's not) are going out the window. They were never very good, and they're definitely not going to last the next couple of decades.
[24:35] Carlos E Perez: And I don't think anyone's prepared for that.
[24:37] Michael Levin: They're not prepared for that. They're not prepared to think about it. I constantly have arguments with people who are still using these binary categories: "It's just a cell. It's only chemistry and physics, but I am a human." Well, let's follow you backwards. Guess what: some number of years and nine months ago, you were a single cell, a little blob of chemistry and physics. There's a smooth, gradual continuum. There's no magic lightning bolt anywhere in that time period where somebody says, "boom, now you've gone from physics to mind." That doesn't exist. I have a slide where I show a human in the middle: up here is the evolutionary path, where you used to be a microbe; down here is the developmental path, where you used to be an unfertilized oocyte. You can also go sideways: all kinds of biological modifications one way, all kinds of technological modifications the other. All of this is completely continuous. There are no binary categories anywhere. When people insist on asking "Is it intelligent? Is it a machine?", these categories are worthless now. We have to rejigger all of that. Then comes the hard part of making the institutions fit. Take the notion of an adult: what's an adult? Does anything happen on your 18th birthday to make you an adult? Nothing. We have this binary cutoff to help the legal system figure out how you're going to get charged in court, but that's it; there's not really a sharp boundary there. I've talked to legal scholars about how we're going to figure this out and what it means to be a person. These questions are not new; science fiction has been dealing with this stuff for 150 years. But I think now we're definitely getting to the point where we need to be figuring it out. This AI stuff is just the tip of the iceberg.
[26:49] Carlos E Perez: All these other things, if they're coming from biological material, would likely have some kind of agency greater than what we see in computers and, say, deep learning systems. You're assuming a world where all these things are autonomous and alive and have some sort of free will to do whatever they want.
[27:32] Michael Levin: That's a whole other kettle of fish. Again, the continuum is really helpful here. People say a machine doesn't have any free will; it's just obeying physics. Well, look at a paramecium. What do you see in a single-celled organism? You see chemistry and physics. You don't see any magic glow. People then go in a couple of directions. Some say, okay, the paramecium has no free will, but I do. Now you've got a real problem, because you were a single-celled organism once. So where did it show up? At what point? Nobody has a good story of where it shows up. That way doesn't work. Then some people go the other direction and say, okay, fine, the paramecium does have this magical whatever, because it's a living being, and machines never will. But if you look inside a paramecium, what do you see? A bunch of little cogs and wheels, things that grab onto each other, things that obey the various pieces of physics. That's about all you see in there. Right now we can't make one from scratch, but there's no reason why at some point, in the field of active matter and related areas, we couldn't make things that do that. I really don't believe these binary categories are helping us. For all of these things, the question is what kind and how much. So when somebody asks, is it intelligent, is it cognitive, I don't like the yes or no. I want to ask: where on the spectrum is it, how much, and what kind? What kind of problem-solving capacities? How big is its cognitive light cone? The paper before the TAME paper talked about this notion of the cognitive light cone, which is the spatiotemporal size of the biggest goals you can pursue. So how big is your light cone, and in what space is it? Is it in metabolic space? Physiological space? Three-dimensional space? That's what you really need to know. The binary categories don't help you much.
[29:46] Carlos E Perez: But wouldn't civilization, whose purpose is to ensure humanity's survival, legislate that humans be prioritized over every other general intelligence? It's more of a legal thing.
[30:14] Michael Levin: The legal system is going to go crazy. It already has a million problems because of the ******* defense and things like that, where if you really follow through the neuroscience, the question is: what does it mean that this person could have done otherwise? What exactly does that mean, given a materialistic view of the brain? So the legal system already has issues. But the idea of prioritizing humans assumes, again, a binary category of what a human is. To a primitive early human, somebody like us, with glasses, hearing aids, shoes, a toothbrush, maybe an iPhone in our pocket, is not a human: you're a walking multi-system, way beyond what a human is. In the future, a brain implant could give you direct access to Google search and infrared eyes at the back of your head. What, are you no longer human? That's of course going to be argued in court. Somebody's going to say: listen, I may have tentacles and I may have a wheel or two, but what's wrong with me? How come I'm not human? That, of course, has been dealt with in science fiction a lot. It brings up the question of what an essential human is. What is it to be a human? What do we want out of that? Let's run down the list. Is it the genome? I don't really care about the genome per se. The thing to me is that in pre-scientific times, you could have held the view that we are the pinnacle of creation: whatever we have in terms of our body's limitations and capabilities, our IQ, our lifespan, our capacity for compassion, those limits were set for us by a benevolent process, and they are the best they're going to be. We're out of that Garden of Eden now, and we realize that this is just where evolution left us. There's nothing magical or optimal about where we are. Evolution is a meandering search process, and it happened to find that this particular form is good enough to survive and leave a bunch of offspring. That's great, but I don't see any reason we have to stay that way; it's arbitrary. I like this notion of morphological freedom. None of us owes any allegiance to the random process that happened to dump us at a particular IQ level, with a particular level of damage or birth defects or whatever we've got. So do I care about keeping the genome pure? No. Do I care about keeping the anatomy pure? We gave that up when we started using canes and glasses; there's nothing magic about this anatomy, and I'm not trying to keep it preserved. What I do think is fundamental is a minimal cognitive light cone as far as compassion is concerned. What I mean by that is the moral ability to actively care about some quantity of other beings' well-being. That level is what makes us human. Going up beyond it: fantastic, bring it on. Going down below it is not good; that's what I would argue against. Modify away, change whatever you like to give yourself a better life and fulfill your potential; just don't reduce your capacity for moral care. In fact, you should increase it. This is an argument we made with a few colleagues when we wrote that Buddhism paper. That's what I'm interested in as far as humans; I don't care what the genome is.
All of this is completely arbitrary to me: how our genetics ended up, how our morphology ended up, whatever. Let's improve it. But that cognitive light cone with respect to compassion for others should only increase.
[34:49] Carlos E Perez: So your vision of the future of civilization is that its citizens would have a minimum compassion light cone, and anything below that minimum disqualifies you from citizenship.
[35:06] Michael Levin: That's what we have now, right? If you're a dog, or whatever, you have certain protective rights, but you are not a full member of society. And one thing about us on this planet is that it just so happens there's basically one dominant species. It didn't have to be that way. Imagine there were another dominant species that was, I don't know, 40 IQ points lower. That would be a really tough case: not low enough that you could just call them animals, but not high enough that you'd want them running a nuclear power plant or flying airplanes. What do you do? That would be really tough. We're just fortunate here that there's such a gulf, so we can always say, look, there's this category. But it didn't have to be that way. So I see a future that looks like one of my favorite scenes, the Star Wars cantina scene, where there's every kind of alien and every kind of robot. That's what the future looks like to me. As long as you've got the light cone to participate with a degree of responsibility and compassion for others, you are part of the society. Whether you've got wheels, or you decided to have tentacles, or a propeller on your head, that's going to be fine. I think people in the future are going to look back at all of our wrangling about gender and skin color and prosthetics, and they're going to laugh at it. There's going to be such a variety of embodiment at some point that you can pretty much live in whatever body you want, have more IQ, have a different kind of perceptual system. All of our wrangling over what's a human is going to be laughable. That's one reason I really like the Star Wars vision, where they're friends with all the droids and all of that. Star Trek is different, which I find just horrible: it's whatever the year is supposed to be, 2400 or 2500, and they're still arguing about Commander Data's status. He serves on the Enterprise, and they're still arguing about what his deal is hundreds of years later. I think that's ridiculous. Within 100 years from now, maybe sooner, assuming we're still alive and haven't blown ourselves up, all of this will be hilarious to people of that time.
[37:56] Carlos E Perez: To have a civilization that accepts that kind of diversity, you would have to have a guiding principle that says it's not acceptable to privilege just a single kind of general intelligence; we're going to accept all these kinds of diversity.
[38:13] Michael Levin: I think we're well on our way. The principle is that you should relate ethically to someone no matter what they look like or where they came from. Does that seem radical nowadays? It used to; nowadays it doesn't sound so radical. And when I say where you came from, I mean: were you evolved? Were you engineered? Young people today don't find those two things particularly weird. I just think people haven't fully figured out what it means yet, but society's already going that way. You're not supposed to treat people worse because of what they look like or how they got here. We already know that.
[38:56] Carlos E Perez: But the fear of AGI is that this other general intelligence, which could become superintelligence very quickly, would be a threat to our own existence.
[39:11] Michael Levin: It's not impossible that we engineer something dangerous; we've done it before. We put leaded gasoline and all these other horrible chemicals into the environment and created a hole in the ozone layer. We did all this stuff that could potentially kill us off, without any agency in any of it. We were all walking around with high levels of lead in our bodies because of leaded gasoline, and lead doesn't have much agency; it wasn't trying to kill us. We were just idiots; we did it to ourselves. Could we engineer software agents, put them in charge of important things, and hit failure modes we never anticipated that screw us over? I think it's possible. But it's going to be because of our intelligence, not because of its intelligence.