Is it in the best interest of AI not to kill us all?

Leighann Morris

Author


Leonard Peng

Illustrator


"The development of full artificial intelligence could spell the end of the human race," Stephen Hawking warned us last year on BBC. And it looks like that's not such a faraway worry.  

This past July, Apple co-founder Steve Wozniak and Stephen Hawking joined more than 1,000 tech experts, scientists, and researchers in signing an open letter warning not only that artificial intelligence is feasible "within years, not decades," but that "the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms." The letter followed debate over "killer robots" at the UN, which is considering a potential ban on some types of autonomous weapons.

If AI can make decisions about killing, then is it possible that AI will understand its own best interests? And if so, would those best interests include killing us all? We asked futurists, science writers and academics specializing in AI studies and emerging technology ethics.


A timeline of AI

384 BC–322 BC: Aristotle describes the "syllogism", a method of formal, mechanical thought.

1206: Al-Jazari creates a programmable orchestra of mechanical human beings.

1495: Leonardo da Vinci sketches plans for a humanoid robot able to sit up, wave its arms, and move its head and jaw.

1642: Blaise Pascal invents the mechanical calculator, the first digital calculating machine.

1769: Wolfgang von Kempelen builds and tours with his chess-playing automaton, The Turk (later shown to be a hoax).

1863: Samuel Butler suggests that Darwinian evolution also applies to machines, which will one day supplant humanity.

1921: The term "robot" is first used to denote fictional automata in the play R.U.R. by the Czech writer Karel Čapek.

1939: The humanoid robot Elektro debuts at the New York World's Fair.

1941: Konrad Zuse builds the first working program-controlled computer.

1950: Alan Turing proposes the Turing Test as a measure of machine intelligence.

1951: The first working AI programs are written to run on the Ferranti Mark 1 machine at the University of Manchester.

1956: Computer scientist John McCarthy coins the term "Artificial Intelligence".

1965: Joseph Weizenbaum builds ELIZA, an interactive program that carries on a dialogue in English on any topic.

1993: Ian Horswill extends behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds.

2004: NASA's robotic exploration rovers Spirit and Opportunity autonomously navigate the surface of Mars.

2009: Google builds a self-driving car.

2010: Apple acquires Siri, "an intelligent personal assistant."

2015: 19,273 AI and robotics researchers sign an open letter against the use of autonomous weapons.


Michael L. Littman

Award-winning Professor of Computer Science at Brown University, served on the editorial boards of the Journal of Machine Learning Research and the Journal of Artificial Intelligence Research, general chair of the International Conference on Machine Learning 2013, program chair of the Association for the Advancement of Artificial Intelligence Conference 2013, and fellow of AAAI

The question presumes that "AI" has an "interest" independent of its designer's, which is not true of today's software. Apps, AI software, and computer viruses are all built by people. They all have varying degrees of independence and can make decisions, like which of the available WiFi networks to choose for transmitting information to the Internet. Each piece of software has a kind of worldview, but that worldview is extremely limited compared to that of human beings. There is no software that I'm aware of that can meaningfully pose the question "am I better off by killing all humans?", let alone try to answer it or carry it out.
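To make that concrete: a decision like "which WiFi network to join" is just a short, fixed rule applied to a tiny worldview. Here is a minimal sketch of the idea; the network data and the selection rule are invented for illustration and don't correspond to any real WiFi API:

```python
# Hypothetical sketch of the narrow kind of "decision" described above:
# picking a WiFi network. The data and scoring rule are invented for
# illustration; no real networking API is involved.

def choose_network(networks):
    """Return the known network with the strongest signal, or None."""
    known = [n for n in networks if n["known"]]
    if not known:
        return None  # the software's "initiative" ends here
    return max(known, key=lambda n: n["signal_dbm"])

networks = [
    {"ssid": "HomeWiFi",   "signal_dbm": -40, "known": True},
    {"ssid": "CoffeeShop", "signal_dbm": -60, "known": True},
    {"ssid": "Neighbor",   "signal_dbm": -30, "known": False},
]

print(choose_network(networks)["ssid"])  # -> HomeWiFi
```

The program's entire "worldview" is three dictionary entries; it has no way to represent, let alone weigh, any question outside that scope.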

It is certainly not in our interest to bring about a system that believes it is in its best interest to harm us... The creation of such a system would be a slow and perhaps tedious journey, and we would have many opportunities along the way to check whether an artificially intelligent system posed any kind of threat.


Stuart Russell

Professor of Computer Science at UC Berkeley, Director of the Center for Intelligent Systems, and co-author of Artificial Intelligence: A Modern Approach

Artificial intelligence does not have any "best interest" or indeed any intrinsic interests at all. It is in our best interest to specify objectives for AI systems such that the resulting behaviors do not involve killing any of us.


George Dvorsky

Futurist, science writer, and bioethicist, contributing editor at io9, producer of the Sentient Developments blog and podcast, currently Chair of the Board of the Institute for Ethics and Emerging Technologies (IEET), and founder and chair of the IEET’s Rights of Non-Human Persons Program

A common misconception about artificial intelligence is that it will be conscious and self-reflective in the same ways that humans are, but this won't necessarily be the case.

These machines will work according to what AI theorists refer to as "utility functions", which is a fancy way of saying they'll simply do as they're programmed. The trouble is, some of these systems may be programmed irresponsibly or poorly, or they may act in ways we can't predict. What's more, they may also be weaponized and be forced to work against humans or other artificially intelligent systems (hence the fears of an AI arms race).
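In code, a "utility function" is nothing exotic: the system scores its available actions with numbers someone chose, and "doing as programmed" means maximizing that score, whether or not it encodes what the designer actually wanted. A minimal sketch, with actions and utility values invented purely for illustration:

```python
# Illustrative sketch of a utility-maximizing agent. The actions and
# utility numbers are invented; the point is that the agent "wants"
# exactly what its utility function rewards, nothing more.

def best_action(actions, utility):
    """Pick whichever action scores highest under the given utility."""
    return max(actions, key=utility)

# A carefully specified utility rewards the intended outcome...
intended = {"deliver_package": 10, "wait": 0, "block_doorway": -5}
# ...while a carelessly specified one rewards harm just as mechanically.
careless = {"deliver_package": 10, "wait": 0, "block_doorway": 12}

actions = list(intended)
print(best_action(actions, intended.get))  # -> deliver_package
print(best_action(actions, careless.get))  # -> block_doorway
```

The second run is the worry in miniature: the program doesn't "turn against" anyone; it simply optimizes the function it was given.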

Eventually, AIs will surpass human capacities in terms of both intelligence and processing speed. Once control is relinquished to these machines, we'll have to sit on the sidelines and hope that whatever decisions they make will be in our best interest, which by no means can be guaranteed. What's more, some AI will not be programmed with the goal of self-preservation, heightening the potential for a catastrophe.


Jamais Cascio

Co-founder of the award-winning Worldchanging and founder of Open the Future, featured in National Geographic Television’s Six Degrees and on History Channel’s Science Impossible, Distinguished Fellow at the Institute for the Future, and member of the Institute for Ethics and Emerging Technologies

It’s not in the interest of an AI to kill all humans because humans wouldn't go quietly.

If the AI has any self-preservation programming, it would try to avoid things that increase the likelihood that it will be harmed or destroyed. Intelligent machines would be vulnerable in many ways to human aggression, even something as simple as pulling the plug.

Moreover, the AI wouldn't just be trying to fight humans, it would be fighting humans plus "friendly" AIs. Loss, stalemate, or mutual annihilation would be much more likely than the killer AI winning decisively.
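That argument is, at bottom, an expected-value calculation, and a toy version shows its shape. The probabilities and payoffs below are invented purely for illustration; only the structure of the comparison matters:

```python
# Toy expected-value sketch of the argument above. All probabilities
# and payoffs are invented; only the comparison's structure matters.

def expected_value(outcomes):
    """Sum of probability * payoff over the possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Attacking humanity: a small chance of decisive victory against a
# large chance of destruction by humans plus "friendly" AIs.
attack = [(0.05, 100), (0.95, -1000)]

# Not playing: the status quo, with no risk of retaliation.
coexist = [(1.0, 0)]

print(expected_value(attack))   # -945.0
print(expected_value(coexist))  # 0.0 -> not playing wins
```

For any self-preserving agent facing those odds, declining the fight dominates, which is exactly the point of the WarGames line below.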

As the AI JOSHUA said in the movie WarGames, "The only winning move is not to play."