Bruce Sterling on why you shouldn't care how Siri feels
Science fiction writer and cyberpunk founder Bruce Sterling shares his thoughts on whether it might be necessary to create non-human rights for artificial intelligence.
Plenty of people have worried at length about the potential for artificial intelligence to turn against humans, but what dangers do humans pose to A.I.? That is to say, if humanity manages to truly create an artificial intelligence that could think and feel autonomously, what obligations would we have to its feelings? What obligations would we have to it as a separate being?
We asked the famed science fiction writer and science journalist Bruce Sterling what he thought about the issues surrounding non-human rights. As a founder of the cyberpunk movement and a well-regarded futurist, Sterling seemed like the perfect person to speculate on just how much we should anthropomorphize a piece of technology. Unsurprisingly, the man who coined the term "major consensus narrative" as a synonym for "truth" has a sober point of view on the subject.
H&F: Do you believe we should feel empathy for Artificial Intelligence?
Bruce Sterling: Well, we can feel empathy for Scarlett O'Hara in Gone With the Wind, and she doesn't exist, either. "Artificial Intelligence" doesn't exist. Systems of code and hardware that are pretty complicated can exist. Do you ever wince when somebody drops a MacBook on a hard floor?
H&F: Do you believe you would feel empathy for an advanced A.I. if you were face to face with one?
Bruce Sterling: Even if they existed, they wouldn't have "faces" to be face-to-face with. But it's pretty easy to feel empathy. Like, when there's an earthquake in Rome, my reaction is "gosh, poor Rome." Rome isn't a person, but frankly I don't like to see Rome "hurt."
H&F: How far are we from being able to program realistic simulations of human emotions like fear, sadness, and hope?
Bruce Sterling: What you're really asking about here is realistic interactive simulations in real time. Shakespeare's Hamlet is full of realistic simulations. What you're asking for is a system you can interact with that can talk with you, and, let's say, genuinely scare and threaten you.
But: why does it have to act human to do that? Wouldn't it be a hundred times scarier if it was just a talking machine that wanted to intimidate you?
For instance, if I want you to feel more hopeful, why don't I just tinker with your Facebook prompts so that you get nothing but good news? I don't have to make Facebook act like Clippy in order to get emotions out of you.
What if you lifted your iPhone and Siri suddenly said, "I hate you, and I wish you were dead, Rhett Jones." A truly terrifying prospect, am I right? And yet everybody knows Siri is just a speech interface for Apple Corporation.
H&F: Is there already an established set of rules for what kind of fundamental rights A.I. deserves?
Bruce Sterling: Actually yeah, there's been a lot of past work on ethics along this line. "Roboethics," for instance.
H&F: Do any potential rules come to mind that you might propose?
Bruce Sterling: I'm not an ethicist, but I know that once you start codifying rules of good behavior, evil people will go way out of their way to break those rules.
H&F: Famous people like Elon Musk and Stephen Hawking have expressed their concerns about the potential for humans to lose control of A.I. Has any prominent person spoken about their concerns for how we treat A.I. in the future?
Bruce Sterling: I'd be plenty worried about people "losing control" of Microsoft, Facebook, Apple, Amazon, Google and the National Security Agency. I mean, where is that "control"? I don't see much. Of course, it's a scary prospect. I don't believe in "Artificial Intelligences," but I can easily imagine Chernobyl-style situations with complex control systems that could wipe us all out.
H&F: Is it possible to torture a machine?
Bruce Sterling: Not in a strict sense, no, but there are plenty of videos where people do stuff like setting fire to Furbies, and it's quite distressing. Video of simulated torture feels pretty much as debasing as video of actual torture.
Torture is ethically bad for many reasons other than just hurting the tortured person.
So creating a system that behaved as if it was tortured would be quite wicked.
H&F: If we could map our brains and reproduce them through a computer simulation, should that simulation be treated with the same basic human rights as everyone else? Should it have the right to die (to be unplugged or deleted)?
Bruce Sterling: That's really just not going to happen for a host of excellent technical reasons. But even if we did that to our brains, they would no longer be our brains, they'd be computers. We wouldn't be humans with human rights, we'd be computer entities, so we'd have everyday computational properties such as remote backups, software updates, memory swaps… We wouldn't "die" any more than the Apple IIe would "die." We'd be entities of a different order, and our human issues would be obsolete.
Cover image: Wikipedia Creative Commons