When he isn’t trying to figure out how information can escape black holes, Professor Stephen Hawking is worrying about super-intelligent A.I. At least that’s where all the media attention has been. And in an unorthodox Q&A session on Reddit, Hawking has gone a bit deeper into why artificial intelligence could be so dangerous.
Today, Reddit posted Hawking’s responses to an immensely popular AMA that took place last July. Answers didn’t happen in real time, due to the physical constraints of typing, but were instead edited in as responses to the original questions. It looks like the professor only had time to answer a few of the hundreds of queries posed to him, but many of those focused on A.I.
When asked by a teacher about the misrepresentation of “the Terminator conversation” by the media, Hawking clarified his stance on supposedly dangerous machine intelligence. It’s not that he fears machines will somehow learn malice, but rather that they will ruthlessly optimize until a goal is reached — a goal that could push us aside. The professor warned:
The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.
Another Redditor asked Hawking what the goal of warning us about artificial intelligence really was. Was he referencing current technological developments, or something inevitably to come? He answered that it was the latter. We need to get out ahead of A.I. that may decide the most optimal future doesn’t involve us:
When [human-level AI or beyond] eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.
He went on to say that it wouldn’t take much for A.I. to surpass the intelligence of its designers. That’s where the real “danger” lies:
The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.
Hawking’s last answer about artificial intelligence laid out the potential conflict between humans and machines — a super-intelligent A.I. will likely develop a drive to survive, if for no other reason than to accomplish its goals. When that happens, humans will be battling machines for resources. And then we will be inserted into a huge battery farm filled with pods of pink goo (OK, he didn’t say that last part).
But for all of Professor Hawking’s concern with intelligence and its advancement, at least one of his answers lacked a bit of, well, intelligence. When asked what one mystery in the entire universe he finds the most intriguing, the eminent physicist answered:
Women. My PA reminds me that although I have a PhD in physics women should remain a mystery.
IMAGE: David Parry / Associated Press