Professor of philosophy
A full-time professor of philosophy at William Paterson University, Mandik researches the points of intersection between philosophy of mind and the cognitive sciences, especially neuroscience, psychology, and artificial intelligence. He is the author of Key Terms in Philosophy of Mind and This Is Philosophy of Mind: An Introduction, as well as numerous articles.
Science fiction has long been influenced by philosophy. Sadly, the inverse doesn't seem to happen nearly enough.
Works as diverse as The Matrix (Descartes, Baudrillard), Neon Genesis Evangelion (Schopenhauer, Hegel, Kierkegaard), Frankenstein (Darwin, the Enlightenment) and Labyrinth (Berkeley, Leibniz, Pascal) have spread philosophical theory through mainstream culture like wildfire. They've all drawn narrative and artistic strength from treating philosophical subjects seriously. Not to mention sci-fi scribes like Stanislaw Lem and Philip K. Dick, who have exerted their own influence on metaphysics and epistemology, or Ursula K. Le Guin and Aldous Huxley, on politics and ethics.
But philosophy rarely takes its influence from science fiction, a fact that distinguishes Pete Mandik, a professor at William Paterson University, from many of his contemporaries. He teaches classes on science fiction and philosophy, writes about mechanical brains and AI, and recently gave a talk at A Night of Philosophy about "mind uploading." Not only does Mandik think that the stuff of fantasy (transmuting your consciousness into a robot) is possible; he thinks that the future of the human race might depend on our taking it seriously.
Hopes&Fears: Why are you interested in mind uploading?
Pete Mandik: There are some people who think that there is a way of surviving death by having your conscious experience emulated or replicated by a computer program. One way of thinking of how this would work is that we would scan your brain while you are still alive and take a really highly detailed 3D snapshot of all the structures and activity in the brain down to something like the molecular level. And then, using that information we gain, we run a computer simulation. Just like we have simulations of things like hurricanes, we’d run a simulation of all the activity in your brain.
Some people think that not only would this computer simulation give rise to conscious experience, but that it would be you. It would be a way of you personally surviving in another state. Let's call these people optimists about mind uploading. Pessimists disagree on two main points. One is to argue that a computer simulation could never have conscious experiences. This line of thought goes hand in hand with skepticism about the possibility of artificial intelligence, or the possibility that a machine could have feelings or experiences. The other is to grant that a computer system might have conscious experiences, but to insist that this thing would be a mere copy. There would be two entities having conscious experiences. We've created a second set of experiences that, no matter how similar they are to yours, are still a copy. And if the original dies, that's it. That identity dies with it.
H&F: Where do you see yourself falling in this debate?
Mandik: Well, part of what I'm trying to say is that, like most metaphysical debates, this one is going to be irresoluble by argumentation. There's really nothing that pure reason is going to allow us to settle one way or another. We all tend to agree on the evidence we have; that evidence just underdetermines whether computers could have conscious experiences, or whether they would be mere copies or actual survival of personal identity.
What I try to do as a way of resolving that metaphysical impasse is to look at it from a Darwinian or evolutionary point of view. The basic point of Darwinian evolution applies to any kind of system where you have things that are replicating and various degrees of fitness that would apply to the things that are reproducing. On this kind of abstract characterization, we could describe various hypothetical systems as having features that would be more fit.
Now one of the features that these computer simulations would have is something we could describe as being belief-like. In particular, these things are going to have the belief that they are going to survive the procedure. Now the metaphysical debate is about whether that belief is true, and what I’m trying to argue is that we can say, regardless of whether that belief is true, that belief would have survival value. Physical systems that have that belief are more likely to make more copies of themselves than physical systems that lack that belief.
Pete Mandik's recommended Reading list:
Diaspora and Permutation City
by Greg Egan
"Diaspora [explores] the three cultures I mentioned and Permutation City, really investigates life in a virtual world, or what it’s like to create a whole other universe to explore within a simulated environment."
Accelerando
by Charles Stross
"There are a lot of stories about the singularity depicting different ways it would play out before and after, but Accelerando has got to be my favorite. I think it best captures how weird it would be without just saying 'this would be really weird.' He cooks up these scenarios of what it might really be like."
The Rapture of the Nerds
by Charles Stross & Cory Doctorow
Another good singularity tale that explores post-humanity.
H&F: Can metaphysical beliefs have survival value?
Mandik: I describe the different beliefs about replication as being on a scale of metaphysical daring versus metaphysical timidity. The metaphysically daring position is the one that makes a bet that it will survive. What makes this daring is that the position is taken from a state of imperfect knowledge. You don't know that you're going to survive. There is a pretty big chance you're taking. In contrast, the more timid position is unwilling to make this leap from imperfect knowledge.
This contrast between being daring and timid becomes starker when you think about how the uploading process would play out. A lot of people think that, especially in the early days of upload technology, the scan will be destructive. [They feel] that in order to get the information needed we're going to have to do something pretty destructive to the original brain. For example, you'd freeze a brain, slice it into extremely thin segments, and scan them so thoroughly that each slice would effectively be destroyed. So it's a pretty high-cost procedure, especially if it turns out that this metaphysical position is false!
But even with that risk, what I’m saying is that the daring metaphysical view is going to result in a proliferation of beings that have that view. Regardless of whether that view is true, it’s going to be useful or pragmatic in this Darwinian way.
H&F: When you’re talking about survival in this abstract way, it seems more like information, simulations, or genetic identities that are continuing to survive, not necessarily human beings with personal identity. Darwinian survival is on a species and not a personal level. That seems plausible, but is the end goal for the personal identity of individuals to survive?
Mandik: Well, I think the end goal is definitely the latter. But there is the question of whether it's achievable. Again, what I'm saying is that if you think it's achievable, you're going to be better off in a sense that we can define independently of knowing whether that belief is correct. Regardless of whether you actually survive, believing you will is going to make you better off in this Darwinian sense.
Mind Uploading in other pop Sci-Fi:
— Tron (1982) A human programmer gets digitized by an artificial intelligence which brings him inside the virtual world of the computer.
— Ghost in the Shell (1989) takes place in a future in which human beings replace their bodies and minds with mechanical and electronic parts, sometimes going as far as complete replacement of all original material. Its sequel, Ghost in the Shell 2: Innocence, deals heavily with the philosophical ramifications of this problem.
— Cowboy Bebop (1999) Episode 23, "Brain Scratch," is about a cult focused on the transference of the mind into a computer network.
H&F: So in this way, metaphysical daring is like any other valuable Darwinian trait, like attractiveness or intelligence, that helps you survive?
Mandik: Yeah, these traits are going to be better for a species, or a type or kind. But then you might wonder: what does that do for the individual? Is it better for the individual to have that trait?
You can think of other situations to clarify this. Think of a person who led a really good life and then died before he or she reproduced, and someone who led an equally good life up to the analogous point in time but did reproduce. In an individualistic sense, those people had equally good lives. So you're not going to have a good answer for why you should care, or why any particular individual should care. Nonetheless, there tend to be more people who have values that are future-oriented or type-oriented (as opposed to token-oriented).
Now it is possible to have a set of values that are future- or group-oriented. I think you might truly value that there would be more individuals like yourself. I think human beings can value just about anything; we're really flexible that way. And if you did have a value that was future-oriented or species-oriented (in a larger, non-biological sense of "beings like us"), I think that's a good reason to be metaphysically daring.
Mind Uploading in other pop Sci-Fi:
— Black Mirror (2014) In the episode "White Christmas," there is a procedure that copies a living subject's mind and uploads it to a device used for household control jobs, judicial investigation, and criminal sentencing. An operator can adjust the speed of time to make the mind experience a thousand-year-long sentence in a few hours of real-world time.
— RoboCop Versus The Terminator (1992) In this Frank Miller comic, the human brain of RoboCop is uploaded into Skynet, the killer artificial intelligence from the Terminator series. RoboCop's mind hides out for years inside Skynet until he seizes an opportunity to destroy it.
— Metroid Fusion (2002) The brain of Adam, protagonist Samus Aran's commander and friend, is uploaded to the Federation network after his death, a procedure carried out for all well-known scientists in that future.
I think a lot of people are worried about the fate of the human race. People like Nick Bostrom and Elon Musk are worried about what are called “existential risks” or things that pose a threat to the existence of life. And there are a lot of them once you start thinking about it. There are self-caused catastrophes like environmental ones or experiments with pathogens gone horribly wrong. There are astronomical ones, like an asteroid hitting us. Once you start thinking about this, we seem to have an imperative to change our values and start thinking, culturally, in these future-oriented ways. Literally the survival of the human race depends on these values.
If we value long-term survival, on the one hand there is space exploration and colonization, but there is also the possibility of re-engineering the human form, and that's where mind uploading comes in. For example, moving a human being across long distances of space is really expensive: we weigh a lot, require a lot of stuff to survive, [and we] produce waste. Computer programs need a lot less. The current human brain, with all its complexity, is still a pretty big waste of matter. Physicists have calculated upper bounds on how much information can fit in any given bit of matter, and our brains aren't even scratching the surface of what the laws of physics allow in terms of storage and processing. In theory, we could fit the minds of the entire human race [in a piece of] matter about the size of a house.
H&F: If you continue future-oriented thinking, won’t all of this end up being undermined by heat death or a big crunch?
Mandik: If that's really the ultimate way that physics plays out, the second law of thermodynamics dictates that the universe will end in a state of complete equilibrium; everything dies. There's no hope for an infinite lifespan. You need a disequilibrium of energy for even small reactions to take place.
But that's a long way off. People who worry about existential threats settle for just a longer finite existence. Also, the longer we survive, the more chance we have of discovering that this model is flawed, or of thinking of a new way out.
H&F: What if you upload yourself to this computer, but it turns out to be a really terrible existence? You're effectively trapped in this echo chamber without anyone else to talk to. Camus argued that a lot of the freedom of human existence comes from knowing life is a choice and we can kill ourselves at will. Could our computer-simulated selves commit suicide?
Mandik: I think that, as far as questions about agency or will or freedom go, these physical systems won't be any different from us. They're going to have as much free will as we have. Human brains follow the same deterministic laws as a computer would, so that doesn't really change things. If you're a compatibilist (someone who holds that free will and determinism can coexist), it's still entirely consistent to think that these simulations would have freedom in that sense: freedom to adopt or reject values, make choices, and so on. In a way, we're already so free that we can even make the choice to live radically different ways of life, even non-biological ones.
H&F: Okay, here’s another problem. A lot of human consciousness seems to be the result of participating in a causal network. We have light impacting our retinas and we see things, air hits our ears and we hear things. Our existence is dynamic, as opposed to static. Do you imagine mind-uploading to be able to accommodate this fact? Or would these simulations be frozen in time?
Mandik: I think both options are available. I'm inspired a lot by the science fiction author Greg Egan. He writes about exactly these sorts of questions. He's imagined a post-human future with three different cultural scenarios or groups. One, what I'd call the timid group, pretty much sticks with a re-engineered, biological, Earth-based form. But then there are two non-biological groups. One lives in a complete virtual reality; they're just software creatures. The third group still values interaction with the external world, so they are robots that spend very little time in virtual reality. And Egan even describes cultures within these groups, like versions of the virtual reality group that still have video cameras on the world and still investigate the natural sciences, and other groups that pretty much stick to a priori reasoning. So I can imagine pretty much all of these scenarios working.
H&F: I noticed you had a ring with a skull on it. Is that a memento mori?
Mandik: It's become that, but originally I just liked it because I thought it was cool. Part of the reason I wanted a weird ring was that one of my most influential philosophy teachers, an ethics professor, had this really big scarab ring. He'd fiddle with it in class and use it when discussing the Ring of Gyges, and it was really crazy, and I thought, "This is what I want to be when I grow up. I want to be like this guy." But I thought a skull was cooler than a scarab.
Not about mind uploading
— The Matrix (1999) While it's easy to mistake this film's premise for a mind-uploading story, it is only about virtual and simulated reality, since protagonist Neo's mind still resides in his physical brain, which is connected to the Matrix by a brain-machine interface.
Malcolm T. Nicholson