by Nicholas Stehle
May 20, 2015

Concerns about out-of-control Artificial Intelligence (AI) are nothing new here. We shared thoughts on “Why There Will Be a Robot Uprising,” and the ethical dilemmas presented by the creation of automated killer robots are numerous. The idea that humanity might not be at the top of the food chain forever is more than a little disquieting.

It might seem far-fetched to you, but two of the world’s greatest minds, Elon Musk and Stephen Hawking, have their concerns: AI, if left unchecked, poses a threat to humanity. That’s the message of an open letter the two signed, along with dozens of other scientists and technologists, earlier this year.

More recently, Stephen Hawking warned that computers will overtake human intelligence within the next 100 years:

Speaking at the Zeitgeist 2015 conference in London, the internationally renowned cosmologist and Cambridge University professor said: “Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.”

Elon Musk (of PayPal, SpaceX, Tesla, and Solar City fame) agrees:

“I’m really worried about this,” Musk is quoted as saying in Elon Musk, a new authorized biography of the CEO of Tesla and SpaceX. 

“This,” according to the book, refers to the possibility that Google co-founder Larry Page would develop artificially intelligent robots that could turn evil and annihilate the human race.

Musk goes on to question the endgame of his friend Page’s work.

Page may be well-meaning, but as Musk says, “He could produce something evil by accident.”

Musk sees potential trouble in the very near future, perhaps even the next decade:

In a separate interview, Musk said he believes “something seriously dangerous” may come about from AI in the next five to ten years. “Please note that I am normally super pro technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand.”

Hawking’s solution is simple enough: don’t let machines turn into Cylons.

Hawking believes that scientists and technologists need to carefully coordinate and communicate advancements in AI to ensure it does not grow beyond humanity’s control.

That would be helpful, and nothing that Isaac Asimov didn’t carefully think through for us more than half a century ago. The last thing we need is to find ourselves under attack by our own kitchen appliances.