Earlier this year, an open letter urging a “ban on offensive autonomous weapons beyond meaningful human control” was signed by over 1,000 A.I. researchers, technologists, engineers, academics and physicists. Among the signatories were elite members of the scientific and technology communities, including Stephen Hawking, Apple co-founder Steve Wozniak, and Tesla and SpaceX CEO Elon Musk. Now Musk, who has been exceptionally vocal about the dangers of artificial intelligence, is announcing the formation of a new non-profit organization called OpenAI.
According to OpenAI’s press release (accompanied by a new Twitter account, @open_ai), the goal of the research company is to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” In other words, tackling the problems A.I. poses is more important than making money. And it’s no surprise the team at OpenAI feels this way, considering that researchers like Oxford’s Nick Bostrom, author of the book Superintelligence: Paths, Dangers, Strategies, have called intelligent machines “quite possibly the most important and most daunting challenge humanity has ever faced.” Musk himself has compared advancing A.I. beyond human control to “summoning the demon.”
OpenAI’s press release also discusses the difficulties of developing A.I. to the point of “general intelligence” — creating a machine that can learn to do many tasks, rather than simply excel at solving one specific problem — but notes that there have been huge strides in this effort with the development of “deep learning.” Deep learning, the release goes on to say, works by “designing [algorithmic] architectures that can twist themselves into a wide range of algorithms based on the data you feed them.” This type of computational power has allowed machines to imitate master painters, learn about the world the way a toddler would, and even create hallucinatory images that are in some ways similar to dreams. Not to mention Google’s self-driving cars, which so far have not once “been the cause of a collision.”
OpenAI has a committed fund of $1 billion, which is certainly significant, although quite likely only a drop in the bucket compared to what will be needed to properly handle all of the potential threats A.I. poses. Along with Musk and close to two dozen other big names in the tech world, OpenAI has some serious talent backing it, including Sam Altman (president of Y Combinator), Peter Thiel (PayPal co-founder and venture capitalist), and Amazon Web Services (AWS).
What do you think about OpenAI and Musk’s efforts to decrease the risk A.I. poses to humanity? Is Skynet inevitable or are all these experts making Megatrons out of Marvins? Let us know in the comments section below!
Image: deviantART / jkno4u