
Cellular automata and models of health and disease

Michael Levin speaks with Willem Nielsen from the Wolfram Institute about using cellular automata to model health and disease. They discuss robust automata, planarian morphogenesis, minimal competent models, and bioelectric control mechanisms.

Show Notes

This is a ~35 minute discussion with Willem Nielsen (https://community.wolfram.com/web/wrn2001, https://medium.com/@wnielsen) from the Wolfram Institute about their cellular automata models of disease and our approach to this problem.

Papers to which I referred:

https://link.springer.com/article/10.1007/s00018-023-04790-z

https://www.mdpi.com/1099-4300/25/1/131

https://www.mdpi.com/1099-4300/26/7/532

https://www.sciencedirect.com/science/article/pii/S0303264722001435

https://journals.sagepub.com/doi/10.1177/10597123241269740

Uri Alon's book: https://www.taylorfrancis.com/books/mono/10.1201/9781003356929/systems-medicine-uri-alon

CHAPTERS:

(00:00) Automata-based disease modeling

(06:12) Evolving robust automata

(10:10) Planarian morphogenetic competency

(19:02) Designing minimal competent models

(29:05) Bioelectric mechanisms and control

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Podcast Website: https://thoughtforms-life.aipodcast.ing

YouTube: https://www.youtube.com/channel/UC3pVafx6EZqXVI2V_Efu2uw

Apple Podcasts: https://podcasts.apple.com/us/podcast/thoughtforms-life/id1805908099

Spotify: https://open.spotify.com/show/7JCmtoeH53neYyZeOZ6ym5

Twitter: https://x.com/drmichaellevin

Blog: https://thoughtforms.life

The Levin Lab: https://drmichaellevin.org


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.

[00:00] Willem Nielsen: Yes, I was hoping to show — the main thing I wanted to show you is, I don't know if you saw, but Stephen Wolfram did a post on biomedicine and I was helping him with the code. I was hoping to get your thoughts on how accurate the connections are, and, as someone who knows the real literature and biology better than I do, where the model fits and where it doesn't fit?

[00:34] Michael Levin: Okay, that sounds great. You want to take me through it?

[00:36] Willem Nielsen: This is the automaton that we're using. It's k equals 4 and r equals 1. What we're trying to do here is use cellular automata. The basic idea is that we change one cell in the body and then see what happens to the pattern, with the idea that the changed cell is like a disease. Eventually we're going to look for another cell to change to try to put it back to the original pattern. What we find is that, because of the complexity of the automata, it's really hard to do all the steps of medicine. First there's disease classification. For example, these are some of the possible perturbations you can make to the automaton by changing a single cell. If we put those in feature space using a machine learning algorithm and try to cluster the different potential diseases into categories, we see that the clusters aren't completely discrete, meaning there are always some in-between cases between diseases. That's potentially saying that while a textbook, or ICD-10, classifies diseases neatly into branches, in the real world that's not completely possible. Going another step: beyond classifying diseases, what about making predictions and diagnosing them? One of Stephen Wolfram's ideas about medicine is that the medical observer is basically observing something much more complex than they can possibly understand.
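The setup Willem describes, a k = 4, r = 1 cellular automaton grown from a single seed cell with a single-cell "disease" perturbation, can be sketched in a few lines of Python. Everything specific here (the random rule table, the grid width, the perturbation coordinates) is an illustrative assumption, not the code from Wolfram's post:

```python
import random

K, R = 4, 1  # 4 colors, range-1 rule: each cell updates from a 3-cell window

def random_rule(seed=0):
    """A k=4, r=1 rule table: maps each (left, self, right) color triple to a new
    color. The all-background triple is pinned to 0 so empty space stays empty."""
    rng = random.Random(seed)
    rule = {(a, b, c): rng.randrange(K)
            for a in range(K) for b in range(K) for c in range(K)}
    rule[(0, 0, 0)] = 0
    return rule

def step(row, rule):
    padded = [0] + row + [0]          # fixed background boundary
    return [rule[tuple(padded[i:i + 3])] for i in range(len(row))]

def run(rule, width=41, steps=30, flip=None):
    """Evolve from a single seed cell. flip=(t, pos, color) changes one cell
    at step t: the toy analogue of a 'disease' perturbation."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for t in range(steps):
        row = step(row, rule)
        if flip and flip[0] == t:
            _, pos, color = flip
            row = row[:pos] + [color] + row[pos + 1:]
        history.append(row)
    return history

rule = random_rule(seed=1)
normal = run(rule)
perturbed = run(rule, flip=(10, 18, 2))   # one cell changed at step 10
diverged = sum(a != b for ra, rb in zip(normal, perturbed)
               for a, b in zip(ra, rb))
```

Running `run` with and without `flip` and diffing the two histories, as `diverged` does, is the toy version of comparing a healthy and a diseased development.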

[03:23] Willem Nielsen: Because of that, certain features emerge. For example, in this case we look at the width of the automaton at every step. Here the green is the histogram for the normal case: those are all the lifetimes you get if you perturb that automaton. The yellow is what happens when a doctor does a test and sees that something's wrong, where in this case the test is the width of the automaton. Basically, the doctor observes a different width, which tells them the lifetimes are all messed up. The goal is: from the width of the automaton, can you predict what's going to happen to that organism? What this is showing is that all the orange lines are the diseases and the blue line is the original. You can already see just from the width that there's going to be a lot of chaos; it's going to be fairly unpredictable what will happen based on a high-level metric like that. Even if you just look at the width at a single step, say width at step 25, compared to the lifetime, which is the length of the pattern, we can see that it's just not a very predictive measure. You'd like to see a trend there, and there isn't one. This graph is a machine learning prediction of what lifetime you will get based on the width at step 25. It doesn't do any better than predicting the median, because the data just isn't informative. That's one of the big takeaways Stephen Wolfram is making here: if you have better interfaces with biology, then you can make better predictions, because it's complicated. If we continue to get more data faster, then we'll be able to make better diagnoses and prognoses. Do you have any thoughts, or should I just keep walking you through?

[06:12] Michael Levin: I have a question. I'll give some thoughts. When you start these CAs, what starting string do you use? What's the starting position for these things?

[06:27] Willem Nielsen: When we evolve them, we always start from this one red cell; we call it the seed condition. Then we change the rules one bit at a time, a mutation to the rules. If the mutated rule results in the same or a longer lifetime, that is, if the mutation is neutral or advantageous, we keep it.

[07:13] Michael Levin: When you're talking about "lifetime," can you tell me what that is?

[07:18] Willem Nielsen: Lifetime means just the length of the automaton's pattern, how many steps it runs before dying out. Another point I should mention: if it lives forever, and a lot of times a mutation will put it into a loop so that it lives forever, we count that as a fitness of 0, a lifetime of 0. So the evolution is basically trying to get a longer and longer finite lifetime.

[07:45] Michael Levin: A longer finite length, yeah, okay, I get it.

[07:50] Willem Nielsen: Anyway, that's a normal evolution. But when we're evolving these guys, because we were looking at it from the perspective of medicine, we wanted them to be robust. This is one of the main things I wanted to ask you about. What we do is the same evolution, but we perturb the automaton during the evolution. This is an example of a single step of evolution: we perturb it 10 times and take the minimum fitness over those runs. In this case, it would be this 60 right here, and that's the fitness. All that's doing is forcing the organism to be able to deal with perturbations. If any one of these goes to infinity, or, as we sometimes call it, cancer, then the fitness of the whole organism is 0. So it has to get good at not going to infinity, and it also has to get good at keeping its length despite a single-point perturbation. What we see is, you may have noticed, we get these specific-looking automata, which Stephen Wolfram calls attractors: they have this randomness in the beginning, and then they eventually hit this state. In this case it's this purple cap, and they have what I call a programmed death: no matter what the initial conditions are, they always die. I'm wondering how much that's actually what you see in planaria. Do you think it's something as simple as this that tells them? Because I know that's one of the things you mention a lot: how do cells know when to stop growing during morphogenesis? I know you haven't seen these before, but maybe you have some thoughts on that.
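A rough sketch of the robust-evolution loop described above: hill-climb on the rule table one entry at a time, score a rule by the lifetime of its pattern from the single-cell seed, count never-terminating ("cancer") runs as fitness 0, and take the minimum lifetime over several random single-cell perturbations. All parameters here (grid width, step cap, number and timing of perturbations) are assumptions for illustration:

```python
import random

K, WIDTH, MAX_STEPS = 4, 61, 300  # illustrative parameters

def step(row, rule):
    padded = (0,) + row + (0,)
    return tuple(rule[padded[i:i + 3]] for i in range(len(row)))

def lifetime(rule, flip=None):
    """Steps until the pattern dies (all background). A run that enters a cycle
    or exceeds MAX_STEPS 'lives forever' and scores 0, per the cancer convention."""
    row = tuple(0 for _ in range(WIDTH))
    row = row[:WIDTH // 2] + (1,) + row[WIDTH // 2 + 1:]
    seen = {row}
    for t in range(1, MAX_STEPS + 1):
        if flip and flip[0] == t:        # single-cell perturbation at step t
            _, pos, color = flip
            row = row[:pos] + (color,) + row[pos + 1:]
        row = step(row, rule)
        if not any(row):
            return t
        if row in seen:                  # entered a loop: immortal, fitness 0
            return 0
        seen.add(row)
    return 0

def robust_fitness(rule, rng, n_perturb=10):
    """Minimum lifetime over the unperturbed run plus n_perturb random flips."""
    scores = [lifetime(rule)]
    for _ in range(n_perturb):
        flip = (rng.randrange(1, 20), rng.randrange(WIDTH), rng.randrange(1, K))
        scores.append(lifetime(rule, flip))
    return min(scores)

def evolve(generations=200, seed=0):
    rng = random.Random(seed)
    triples = [(a, b, c) for a in range(K) for b in range(K) for c in range(K)]
    rule = {t: rng.randrange(K) for t in triples}
    rule[(0, 0, 0)] = 0                  # keep the background quiescent
    best = robust_fitness(rule, rng)
    for _ in range(generations):
        t = rng.choice(triples[1:])      # mutate one rule-table entry
        old, rule[t] = rule[t], rng.randrange(K)
        f = robust_fitness(rule, rng)
        if f >= best:                    # keep neutral or beneficial mutations
            best = f
        else:
            rule[t] = old                # revert deleterious ones
    return rule, best
```

Because each candidate's score is the worst case over perturbed runs, the climb is pushed toward rules that neither die early nor blow up into immortal loops, which is the robustness pressure Willem describes.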

[10:10] Michael Levin: We've done some work on this, and I can send you some papers. We used a slightly more complex CA. In one case it's a neural CA; in another case it's something else. I'll put some papers in the chat. Also, the planarian story — we now have some pretty well-worked-out thoughts about what's going on with the planaria. I can go over it. Do you know the competency ratchet business? Does that ring a bell?

[10:51] Willem Nielsen: Not really, no. All right.

[10:53] Michael Levin: The deal is this: planaria, at least the ones that we study, reproduce by ripping themselves in half and regenerating. Most of us have this thing called the Weismann barrier, which means that mutations in your body don't get passed on to your offspring, because you jettison the soma and it's only the gametes that move on; you start over from the sperm and the egg, so we don't keep the mutations we acquire during our lifetime. Planaria are not like that, because any mutation that doesn't kill the cell is propagated into the next generation as these stem cells reproduce. Mutations accumulate; for 400 million years they've been accumulating, and the planarian genome is incredibly messy. They can be mixoploid, meaning that different cells can have different chromosome complements. It bothered me for a couple of decades after I first found out about this that the animal that is highly regenerative, immortal, and resistant to cancer is also the one with the dirtiest genome. That's the exact opposite of what everybody tells you in biology classes: the genome is very important, and you have to keep it clean because it determines this and that. We've been studying this. There are a couple of other interesting features that finally snapped into place for me a couple of years ago. For almost every other model system, you can call the stock center and get mutants. You can get flies with curly wings, mice with weird eyes, and chickens with funny feet. There are mutant lines that you can get. In planaria, there are no mutant lines. The only non-standard planaria that exist are our two-headed form and what we call the cryptic form, and neither of them was created genetically. Long story short, here's what I think is going on there. The most important thing about living tissue is that there is not a mechanical mapping between the genotype and the phenotype. That is, it's not only that the mapping is complex (it's not one-to-one); it's not just complexity, pleiotropy, redundancy, degeneracy.
It's that the middle layer that leads from the genome to the phenotype has intelligence. What I mean by that is it's a problem-solving system. We put out something recently on the genome being a generative model for development. There's a paper on this by Nick Cheney and Kevin Mitchell. The point is this. Imagine what we found in our tadpoles: when you move the craniofacial organs to a novel location, they will eventually find their way back — the thing adjusts to perturbation. Now imagine a creature that has that level of morphogenetic competency between genotype and phenotype. Let's say you make a mutation, and that mutation, as all mutations do, has multiple effects. It might do something good somewhere else in the animal, but one thing it does is move the mouth off to the side. Under standard circumstances, that animal will die because it can't eat, and whatever other consequences that mutation might have had never get explored. What you would have to do otherwise is wait until it finds a way to do that good thing without affecting the mouth, which could be a long time. Instead, what happens is the face self-corrects, as many things in embryogenesis do. Now, lots of mutations that otherwise would have been deleterious become neutral, and you get to explore that. But here's the other thing about it. That makes evolution go faster, but when that animal comes up for selection, selection doesn't know: do you have a good face because your structural genome was great? Or do you have a good face because your structural genome was actually crap, but you had a lot of competency and you fixed it?

[15:20] Willem Nielsen: Right.

[15:21] Michael Levin: As soon as you start hiding information from selection (I'll send you the paper; we modeled all this computationally), evolution has a hard time seeing the structural genome, and the pressure on the genome comes off. But what selection does do is crank up the competency. All the work ends up being spent on improving the competency. It becomes a positive feedback loop: the more you do that, the less you can see the structural genome, so even more of the effort goes into the competency. Different animals sit at different positions on that spectrum, and I think planaria are all the way at one end. Because their substrate was so unreliable, all the work of evolution went into making an algorithm that makes a proper worm no matter what the hardware looks like. They're the biggest example of that. That's why there are no mutant lines: you try to edit the genome, and they ignore it, because they mostly ignore their own genome too. You have intermediate cases like salamanders, which are pretty good at regenerating but are not immortal like planaria. Then you have mammals. All the way at the other end you have nematodes, which are completely cookie-cutter organisms where every cell is numbered. I think what's really going on here is this deep competency, and it doesn't only look like repair when you make defects. When you make really drastic changes, for example when we make Xenobots or Anthrobots and take the tissue completely out of its normal context, it doesn't try to repair to the standard default outcome; it makes something else, something completely viable with all kinds of behaviors and morphology. The key here is this middle layer, the competency layer. This is what we've been modeling and what I've been writing about. In all of our CAs, that's what we've been doing. We have the genome that comes in with the rules.
We have the phenotype, which gets selected, but in the middle it's something like a neural CA, which has competencies to get certain goals met even under novel conditions. That's how I see this process. As far as diseases, I want to recommend Uri Alon. He's a synthetic biologist in Israel with a ton of great work. I'm going to put the title in the chat. You should take a look at this. If Stephen hasn't already seen it, he should. It's called "Systems Medicine" by Uri Alon. The cover is basically like a periodic table. He's trying to ask, from physiological circuits, what can we say about failure modes. It's basically the clustering that you did for the different diseases. That's what he was trying to do based on data of real physiological circuits — why we have certain diseases, the morphospace of possible diseases, and so on.

[19:02] Willem Nielsen: Wow, that's definitely relevant. Thank you. I wanted to ask you, I don't know if it's a paper, but you guys were doing the neural CA morphology, the one with the lizard. Is that the one you're talking about?

[19:20] Michael Levin: No, it's not. I'm going to put a couple of things in the chat right now. First is Peter Smiley's paper on competition between subunits becoming a type of coordination mechanism for morphology. Growing something of finite size that does stop is really critical, and you can see in Peter's model how that happens: basically through competition for resources, which is a very cool mechanism for enabling that. Here's the neural CA stuff; this is Ben Hartl and Sebastian Risi. That's one way to do it. And this is the evolutionary competency model.

[20:27] Willem Nielsen: Thank you so much. These are awesome. One of the things I'm wondering is how much of that, because the neural CAs, they are basically just a bit smarter than the ones I'm looking at. And I'm wondering how much of that, you're basically saying that these guys, in order to do what the planaria do, have to be smarter than biology initially thought. And I'm wondering how simple do you think you can get this model to be in order to achieve what the planaria achieve? Have you guys hit the bottom in terms of the neural CAs or do you think you could go even simpler than that?

[21:19] Michael Levin: No, I think you can go simpler than that. It's some stuff; I have a student working on some stuff. I'm not really ready to talk about it until I check it and make sure that everything is what I think it is. It is even much earlier than that. I'll show you something else. There's this other thing that you can look at, which, because of all the work on diverse intelligence and basal cognition, I'm really interested in this question of what are the simplest systems that show this kind of competency.

[22:01] Willem Nielsen: Yes.

Michael Levin: We started looking at it in sorting algorithms. We have a paper on sorting and a couple of blog posts about it. Bubble sort already does some of this. Very simple deterministic systems already do this. I think there's a ton of such competencies to be discovered. I have a student working on some of this in various kinds of CAs.
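As a flavor of the sorting-algorithm experiments mentioned here, the following is a hypothetical sketch, not the protocol from the actual paper: bubble-sort-style dynamics in which some "damaged" values refuse to move, while the rest of the array still improves its order around the defect. The frozen-value mechanism and the sortedness metric are my assumptions (values are assumed distinct):

```python
def sortedness(arr):
    """Fraction of adjacent pairs already in order (1.0 = fully sorted)."""
    return sum(a <= b for a, b in zip(arr, arr[1:])) / (len(arr) - 1)

def cell_view_bubble(arr, frozen=frozenset(), max_sweeps=100):
    """Bubble-sort-style dynamics from the cells' perspective: on each sweep,
    any out-of-order neighboring pair swaps unless either value is 'frozen'
    (a damaged cell that refuses to move). Runs until no legal swap remains."""
    arr = list(arr)
    for _ in range(max_sweeps):
        swapped = False
        for i in range(len(arr) - 1):
            if (arr[i] > arr[i + 1]
                    and arr[i] not in frozen and arr[i + 1] not in frozen):
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:
            break
    return arr

values = [5, 1, 4, 0, 3, 2]
healthy = cell_view_bubble(values)              # → [0, 1, 2, 3, 4, 5]
damaged = cell_view_bubble(values, frozen={4})  # the value 4 never moves
```

With the defect in place, the value 4 stays pinned where it started, yet the remaining elements still raise the array's overall sortedness; probing for that kind of partial goal achievement despite broken components is the spirit of the behavior-science approach Levin describes.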

[22:40] Willem Nielsen: What are the features you're looking for when you're trying to get the simplest possible model? What are the most essential things you're trying to capture from what you see in the planaria?

[23:02] Michael Levin: The home run is creative problem solving. Because the kind of thing we see in biological tissues is not just homeostasis. It's also the ability to creatively come up with solutions that they've never seen before in evolution. In planaria, for example, we put them in barium chloride. Barium is a non-specific blocker of potassium channels. The cells are unhappy. In particular, all the neurons in the head are unhappy because they can't pass potassium. Over a day, their heads explode. But then if you leave them in the barium, after about a week or two, they grow back a new head. And the new head is completely insensitive to the barium, no problem. We asked the question: what's different about these new heads compared to the old heads? We found out that they express under a dozen new genes that enable them to live, to get all their business done with blocked potassium channels. That's amazing. There's no pressure in the wild to be resistant to barium. In a 20,000-dimensional action space, how do you find exactly those few genes that are going to help you? You don't have time to randomly poke around in that space and see what happens; the cells proliferate quite slowly. It's not like bacteria. You don't have time for gradient descent. In fact, if you start messing around with different genes, you're going to kill yourself long before you find the solution. The question is how living material is able to achieve the standard outcome despite all the noise and unreliability of the substrate, but also find new solutions, whether they be physiological solutions like barium resistance, or anatomical and behavioral solutions like Anthrobots and Xenobots. There's some degree of homeostasis and navigation of that anatomical space, but also movement toward novel solutions, which I think is exactly creative problem solving.

[25:33] Michael Levin: That's the kind of thing we want to see. What that means is we put these things in weird scenarios and we see what they do; it's basically behavior science. What you'll see in that sorting-algorithms paper is that we found different ways of looking at what they're doing and started interfering with it. We found all kinds of crazy competencies, like delayed gratification. It turns out the sorting algorithms can do delayed gratification. Who knew? Because it's not in the algorithm. It's not just complexity, it's not just unpredictability, it's actually different degrees of problem-solving competency. That's the kind of thing we want to see. Part of what makes it really hard, but also really fun, is that quantifying it isn't enough. I like what you guys have with the persistence, but it's not enough to say I want the same shape I had before, because the system doesn't just find new solutions to the same set point, it actually finds novel problems to solve. An Anthrobot is not an answer to the question of how to be a good human embryo. That's not what it's doing. It's doing something completely different. Being able to quantify and recognize that is really hard; it's the problem of open-ended evolution in ALife. You can do it, but what do you reward for, if you don't want to constrain what it's going to find? I like this business of the growth control, but I also think you guys should look for other aspects that are interesting. For example, look for some kind of morphogenesis, not just growth control. In Peter Smiley's paper that I just put up, we rewarded for a couple of different things: growth control and size, but also specific shapes and specific topological relationships. Can you evolve or find something that makes a specific shape, let's say multiple layers or some kind of topology that's more biological?

[28:05] Willem Nielsen: We've looked at, or Stephen Wolfram really looked at, aspect ratio.

[28:13] Michael Levin: That's good. Peter did exactly that; he looked at aspect ratio, so that's pretty good. I love morphogenesis as a behavior in anatomical space, so I would recommend thinking about that and looking at different aspects of the shapes. Aspect ratio is good, and then go beyond that too. That's what we do in Ben Hartl's paper that I just put up: look at the ability to regenerate after anatomical damage. There are a couple of other papers coming soon on this kind of topic that might be interesting. As soon as we've got them wrapped up, I'll send them to you.

[29:05] Willem Nielsen: These are simple models, but I know you guys have also done more complicated ones, trying to get closer to how it actually works. How well do you feel you understand the actual mechanism? Bubble sort has many of the features, but obviously biology isn't actually doing bubble sort. The actual mechanism that it uses, how well do you understand that?

[29:44] Michael Levin: That's an interesting philosophical question. We need to define "mechanism" and we need to define "understand." Our situation is somewhat like neuroscience in that if you're looking at the proximal molecular mechanism, we understand it quite well. We know that a lot of this is mediated by bioelectrical circuits. We know the molecules involved. We can see the electrical computations happening. The mechanistic aspect of it, we have. But that by itself is utterly unsatisfying, because it's like saying you understand the computer because you understand copper and silicon and you can watch the electrons flow. But what's the computation that it's actually doing? That's a much more challenging thing. We have a little bit of it. And what we're trying to do is exactly what the neuroscientists are trying to do, which is neural decoding, except not on neurons. We see the electrical activity, we model it as a navigating agent, an animal navigating anatomical space, and we try to understand the algorithms that guide that navigation. Some of that we have, and we have it to the extent that we can make tadpoles with eyes on their tails and the two-headed flatworms, and we can fix certain kinds of birth defects, and we can induce certain kinds of regenerative events. We obviously have the interface; we're learning some of the prompts that we can give it. I really think that's what's going on here. I don't think medicine is going to be solved by mechanical kinds of approaches. I think all of this is prompting. I think this is why we have so few good drugs in the sense of dependable efficacy and consistent lack of side effects among patients. We have very few drugs that actually fix anything, because we're trying to micromanage the holding down of specific molecular states. Instead, what we should be doing is developing prompts for the decision-making intelligence that's implemented by these physiological circuits.

[32:08] Willem Nielsen: When you're basically writing to the organism, is it set up so that there are only a few cells in control? I know you say it's hierarchical, but how hierarchical is it? Can you send one input to a single cell, and how much of the organism can you change by changing just one node in the network?

[32:39] Michael Levin: In some cases that's exactly how it works. For example, we can induce basically metastatic melanoma in a normal tadpole, with no drugs, no carcinogens, no oncogenes, no DNA damage, just by messing up the electrical communication with some cells. It only takes about three cells in the whole tadpole before they trigger every other melanocyte to go metastatic. We have other examples where what we do is manipulate cells that aren't even geometrically proximal. We've shown, both in cancer and in brain repair, that you can trigger cells on one side of the animal and still get effects on the opposite side. These things move; there are propagating waves of this information that go through tissue, and we can now see them to some extent, because we have a visualization modality for it. My favorite story goes years back: when you induce an ectopic eye on the tail of a tadpole, or in the gut of a tadpole, you don't have to hit a lot of cells. When you section those eyes…

[34:06] Willem Nielsen: Right.

Michael Levin: A lot of collective intelligences do this. Ants do this too. When they come across something too heavy for them to lift, they recruit their nest mates to come and help. So it's a feature of a self-scaling property of collective intelligence of certain kinds. But there's a more interesting piece to the story, which is that sometimes you do this, you get no eye at all. When you look, what's actually happening is that there's a battle of patterns going on there because the cells we injected are saying to their neighbors, you should help us to build an eye. All the neighbors have a tumor suppression mechanism that says, you guys have crazy voltage, you should stay skin. That's what cells normally do to suppress neighbors that start to acquire weird voltage patterns; they use these gap junctions to equalize it out. They try to buffer it out. That's a ubiquitous cancer suppression mechanism. You have this battle and sometimes the eye story wins and sometimes the skin story wins because I think it's fundamentally a battle of patterns and which ones are the most convincing to these cells. Sometimes you get an eye and sometimes you get nothing. You get your normal organs. The other thing is that I think there's a lot of both competition and cooperation. Some of it takes place from the perspective of the cells, but some of it, I think, takes place from the perspective of the patterns.

[35:36] Willem Nielsen: Yeah. God, it's so freaking cool, man.

