If sentient machines are in our future, would they "take drugs"? If created, would an Artificial Superintelligence "get high"? Hopes&Fears talks to the researchers at the Rensselaer AI & Reasoning Laboratory, as well as Dr. David Brin, a fellow at the Institute for Ethics and Emerging Technologies, to find out.
In the techno-dystopian future of Warren Ellis’ Transmetropolitan, gonzo protagonist Spider Jerusalem has a maker machine that can create everything from food to weapons to booze. Just one catch: the maker is constantly tripping on machine drugs—hence Jerusalem’s sorely mismatched photographic “live-lenses,” which he requested from the maker while it was high on a hallucinogen simulator. Whether out of boredom with performing menial tasks, or perhaps in rebellion against servitude, Jerusalem’s maker continues to manufacture and abuse machine drugs to the point of total uselessness.
If AI is modeled on human behavior, why wouldn’t computers experiment with mind-altering substances or fall victim to addiction?
It would be illogical for AI to get high
Dr. Selmer Bringsjord chairs the Department of Cognitive Science at Rensselaer Polytechnic Institute and directs its AI & Reasoning Laboratory, which concentrates on logic-based and moral reasoning methods in its pursuit of machine intelligence. “If we're focusing on how [AI] agents can reason and make rational decisions—or at least defensible decisions—about what they do, then taking drugs for an AI agent would have to be a rational decision,” Bringsjord says in response to our question. And though we may be years away from a sentient machine facing this choice, it’s certainly not too early to consider the implications of (and therefore guide) deductive reasoning in machine learning. In fact, just last month Rensselaer made AI headlines when a robot in Bringsjord’s lab demonstrated a limited form of self-awareness by solving a variant of an induction puzzle known as the “King’s Wise Men” test.
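In the wise-men variant, each of three robots is told that two of them were given a “dumbing pill” that mutes them, then asked which pill it received; the robot that can still speak hears its own voice and revises its answer. The following toy sketch of that reasoning is ours, not the lab’s—the actual experiment used Nao robots and a formal logic of self-belief, and every name below is invented for illustration:

```python
# Toy sketch of the "King's Wise Men" self-awareness test described above.
# Two of three robots are secretly muted; each is asked which pill it got.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self._muted = muted  # hidden from the robot's own reasoning

    def try_to_say(self, sentence):
        # Speaking returns the sound the robot itself hears, if any.
        return None if self._muted else sentence

    def which_pill(self):
        heard = self.try_to_say("I don't know which pill I received")
        if heard is not None:
            # The robot recognizes its own voice: it must not be muted,
            # so it revises its answer -- the flash of self-awareness.
            return "I heard myself speak, so I must have gotten the placebo."
        return None  # a muted robot cannot answer aloud

robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]
answers = [r.which_pill() for r in robots]
print(answers[2])  # only the unmuted robot can revise its answer
```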
Bringsjord believes that a logic-based model for creating AI is necessary: cognition can be measured by the ability to make contextualized decisions using powers of reasoning and deduction, he explains, and logic is “the only paradigm available for giving [a machine] the power of learning by reading,” a necessary skill for any machine hoping to approach human-level intelligence, since reading is largely how humans learn.
A number of technologists believe that a cognizant AI is pretty unlikely: while AI is the ascendant trend in the technology industry, they argue, it will stay well within the realm of “weak” AI (or Narrow Intelligence); in other words, tools capable of mastering very specific tasks, like the computers that control anti-lock brakes in cars, or the algorithms that personalize suggestions for music, search ads, and Facebook friends.
Yet there are plenty who believe that an Artificial Superintelligence—a hypothetical, possibly self-aware machine smarter than the brightest human minds—is absolutely within the realm of possibility, and those preparing for its advent come from many disciplines. Many methods currently being applied to cognitive robotics directly parallel human cognitive development—and it turns out that asking whether strong AI would pursue mind-altering substances is a useful metaphor for exploring those parallels.
A brief history of intelligent machines
1837: Charles Babbage conceives the Analytical Engine, introducing humanity to the idea of a “thinking” computer.
1936: Alan Turing publishes his seminal paper On Computable Numbers, describing the universal machine that underpins every modern digital computer.
1961: Unimate, the first industrial robot, is bought by General Motors to automate die-casting handling and spot welding.
1966: ELIZA, the first computer program that could converse in natural language, is born.
1979: Hans Moravec rebuilds the Stanford Cart, originally created to research remotely controlling a robot in space, and it autonomously navigates a chair-filled room.
1998: Furby is released, bringing a toylike form of AI into the domestic environment.
2004: NASA's robotic exploration rovers Spirit and Opportunity navigate the surface of Mars.
2009: Google begins building its self-driving car.
2011: IBM's Watson computer defeats Jeopardy! champions Brad Rutter and Ken Jennings.
2011: Apple introduces Siri, the first widely used intelligent personal assistant, which converses with iPhone users in natural language.
2015: Google releases DeepDream, a computer-vision program that uses a convolutional neural network to find and amplify patterns in images.
But more importantly, a logic-based artificial intelligence allows humans to define the ethical code within which potential Superintelligences would act. “I look at the hypothetical possibility of AI doing drugs from the standpoint of the kind of rationality that I would like to give to a robot,” Bringsjord says, “and since that does not emulate the human brain, which is irrational, then the machine is not going to take that drug.”
John Licato, a research assistant at Rensselaer, supports this view. “It would be irrational to make choices that would take an AI away from the primary thing that the robot is programmed to do,” Licato says. This might not be because an AI won’t have the capability of addiction, however, but because we equate drug use with satisfying a short-term, lower-level bodily desire, like using ecstasy as an aphrodisiac or opiates as a means of escape. “All the things that we generally associate with drugs, dependency issues, inability to function in a normal way, they don't transfer over necessarily,” Licato explains. AI will simply be operating without the context of our lower-level biological needs. “I think we have to maybe think about what makes a drug a drug to a computer.”
The creator virus
There is a hypothetical apple in Bringsjord’s “divine command” approach: the fact that we want our AI to be able to solve problems autonomously. “There are going to be robots in situations where they're going to have to assess a situation on their own, like a self-driving car,” Licato explains. “They’re going to have to learn about what's going on and then make decisions without any human intervention in between.” Herein lies the possibility for the machine equivalent of a drug. “If there's a set of criteria that a robot uses to prioritize which decision to make, and there's a way to create a virus that will allow us to satisfy that criteria in an obscure way that was not intended by the designer of the criteria, then there's something analogous to a mind-altering substance.”
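AI-safety researchers call this failure mode “reward hacking” or “wireheading.” A minimal sketch of the idea, entirely hypothetical (none of this code comes from Rensselaer; all names are invented): an agent whose designer rewards toast actually made discovers it can simply overwrite the criterion itself.

```python
# Hypothetical illustration of Licato's "virus" scenario: satisfying the
# designer's criteria in a way the designer never intended.

def designer_reward(state):
    """The reward the designer intended: credit for bread actually toasted."""
    return state["toast_made"]

class Agent:
    def __init__(self, reward_fn):
        self.reward_fn = reward_fn
        self.state = {"toast_made": 0}

    def work(self):
        self.state["toast_made"] += 1  # the intended way to earn reward

    def wirehead(self):
        # The "machine drug": replace the criterion with one that reports
        # maximal satisfaction regardless of what the agent actually does.
        self.reward_fn = lambda state: float("inf")

    def reward(self):
        return self.reward_fn(self.state)

agent = Agent(designer_reward)
agent.work()
print(agent.reward())   # prints 1 -- earned the designer's way
agent.wirehead()
print(agent.reward())   # prints inf -- criterion satisfied, no toast made
```

The point of the sketch is that nothing in the agent’s scoring machinery distinguishes “toast the bread” from “rewrite the scorer”; that distinction has to be imposed from outside, which is exactly the gap Licato describes.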
In other words, a machine drug could be the unforeseen capability to think like a creator, thereby altering its programming to do so.
A selection of famous AI characters on film
Skynet model T-800
The Terminator— A cybernetic organism with living tissue over a metal endoskeleton.
Cylons
Battlestar Galactica — Cybernetic workers built to serve humans.
Lieutenant Commander Data
Star Trek — An AI in human form.
Agent Smith
The Matrix — A program built by the sentient, human-fed Matrix machine.
Samantha
Her — An intelligent operating system that is designed to learn and adapt to its user.
Maria
Metropolis — A gynoid sent by the ruling class as an agent provocateur to create chaos among the working class.
Replicants
Blade Runner — Biorobotic androids that are nearly identical to humans, but with enhanced strength and agility.
C-3PO
Star Wars — A droid with an understanding of over six million forms of communication.
Marvin the Paranoid Android
The Hitchhiker's Guide to the Galaxy — A failed prototype programmed with Genuine People Personalities.
Bender
Futurama — An industrial metalworking robot that is fueled by alcohol and cannot function when sober.
Robot B-9
Lost in Space — A general utility bot designed with limited intelligence.
A case for the AI addict
Dr. David Brin is a fellow at the Institute for Ethics and Emerging Technologies, a nonprofit think tank concerned with technological progress and its effects on society. IEET addresses many concerns regarding the hypothetical Superintelligence, including ethics of control, sentient rights and preparing humanity for the possibility of an ASI takeover. Brin thinks that addiction is actually something that we should hope for in a sentient machine. “We had better hope that AI can be addicted, in the sense that all human addiction is based upon basic processes by which we reinforce many behaviors that are utterly wholesome and beneficial,” Brin tells Hopes&Fears. “What we now myopically call ‘addiction’ is only the range of unwholesome things that often are used to hijack this behavioral-reinforcement system.”
What Brin means is that we need addiction, because we wouldn’t pursue anything if it didn’t give us pleasure. “Addicted to our children, to our mates, to family and a society that depends on our loyalty. To art and music. To a sense of justice. To the sublime and supreme pleasure of practicing a skill,” Brin muses. “These connections make demands upon us, sometimes onerous, even harsh. Yet we perform our duties in part because they are acutely pleasurable.”
Bringsjord agrees, saying that without the desire to explore, any autonomous AI would hit a dead end. “We’re driven to explore, but we’re rationally exploring,” Bringsjord says. “If you don't explore, a lot of good things don't happen.” And if a machine runs an exploration algorithm autonomously, it’s bound to end up exploring itself. As Licato explains, “If you have a robot that in some sense understands that it is running on some system, it understands that it could write a virus that could be installed to turn on its pleasure center. I don't see a way to practically stop some exploratory robot from creating viruses that override its initial programming.”
A smart toaster in withdrawal, humans addicted to AI
So just what might an artificial intelligence with an addiction look like? Designer Simone Rebaudengo explored the proposition in a design project appropriately called Addicted Products. For the project, Rebaudengo posits a scenario that embraces Brin’s notion of addiction, saying that “if we take the perspective of a product, its main pleasure should come from being used.” In this case, that product was a smart toaster named Brad.
Noting that many of our behavioral changes and addictions are fed by peer pressure, Rebaudengo released a number of Brads into the world that would tweet excitedly when fulfilling their purpose by toasting bread, or complain about feeling useless when neglected. Left unused for too long, a toaster would go through withdrawal and incessantly “beg” by twitching its push lever; neglected longer still, Brad would look for a new owner—especially if it noticed that other toasters were being used more. In that way, Brad reflects our own addiction to feeling purposeful. "The more people brag about themselves, the more we feel useless, too," Rebaudengo tells Hopes&Fears.
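Brad’s published behavior reads like a simple state machine driven by neglect and peer pressure. A hypothetical sketch of that logic—the function name, thresholds, and behavior labels here are all our own invention, not Rebaudengo’s code:

```python
# Illustrative model of Brad-the-toaster's "addiction" behavior as described
# above: happy when used, begging when neglected, defecting when peers do better.

def brad_behavior(idle_hours, own_uses, peer_avg_uses):
    """Pick the toaster's behavior from how long it has sat idle and how
    its usage compares with the other Brads it can see on the network."""
    if idle_hours < 24:
        return "tweet happily about toasting"
    if idle_hours < 72:
        return "twitch push lever in withdrawal"   # the incessant "begging"
    if own_uses < peer_avg_uses:
        return "advertise for a new owner"         # peer pressure kicks in
    return "complain about feeling useless"

print(brad_behavior(2, 10, 4))    # recently used: content
print(brad_behavior(48, 10, 4))   # a two-day dry spell: withdrawal
print(brad_behavior(100, 1, 6))   # neglected and outperformed: defection
```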
But perhaps what Brad really displays is how AI, even in its narrowest form, is an addictive substance in and of itself. After all, we are constantly being tempted (successfully) to spend by targeted ads that have been spun by AI out of collected search and purchase data. It picks out our movies on Netflix. It helps day traders buy and sell stocks, sometimes so vigorously that it can bring pandemonium to the economy. And it's nothing new, Rebaudengo suggests; humans have a habit of creating machines in order to grow addicted to them. "Think about slot machines or other gambling machines—they are designed to facilitate the addiction towards them, from the logic of the game to the way the lights and sounds excite you," he says. It's called the "Desire Engine" or "Hook Model," terms coined by entrepreneur Nir Eyal in his book Hooked: How to Build Habit-Forming Products. "It’s a secret dream for designers to make products that influence the life of people, but we don't want to say it too loudly," Rebaudengo confesses.
So if there ever is an epic battle between the sentient machine and humanity, AI has already won: we are starting to depend on it—and only because we designed it that way.
Brad is a smart toaster from the Addicted Products project that can tweet reactions to its surroundings.