UK Government Outlines Isaac Asimov-Style Rules to Protect Us from Artificial Intelligence

There’s a reason so many science fiction movies, like Ex Machina, I, Robot, and The Matrix, revolve around the idea of artificial intelligence run amok: it seems so plausible and terrifying. That’s why, as mankind gets closer and closer to creating truly independent AI, we have to be conscious of the pitfalls of creating consciousness and guard ourselves from any unforeseen—and unfortunate—consequences. It’s a concern the UK government is taking seriously enough that it has issued a comprehensive report about regulating artificial intelligence, including rules to safeguard mankind that are reminiscent of Isaac Asimov’s “Three Laws of Robotics.”

The House of Lords Artificial Intelligence Committee has issued a report—which we first heard about at Gizmodo—titled “AI in the UK: Ready, Willing and Able?” Appointed “to consider the economic, ethical and social implications of advances in artificial intelligence,” the committee says it was guided by five key questions:

  • How does AI affect people in their everyday lives, and how is this likely to change?
  • What are the potential opportunities presented by artificial intelligence for the United Kingdom? How can these be realised?
  • What are the possible risks and implications of artificial intelligence? How can these be avoided?
  • How should the public be engaged with in a responsible manner about AI?
  • What are the ethical issues presented by the development and use of artificial intelligence?

The incredibly thorough paper covers myriad topics, from the history of AI and robotics to the challenges of engaging with, developing, designing, mitigating the risks of, and working and living alongside artificial intelligence. But because we’ve seen too many sci-fi movies, we’re most intrigued by Chapter 9’s AI Code, which suggests “five overarching principles” to protect humanity that feel like something straight from Isaac Asimov himself:

  • Artificial intelligence should be developed for the common good and benefit of humanity.
  • Artificial intelligence should operate on principles of intelligibility and fairness.
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  • All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Those sound as much like guidelines for developers as they do hard and fast rules for an AI program (the difference may be purely semantic), and it remains to be seen whether they can be effectively implemented (supervillains never care about the law). However, having overarching principles in place now can only help as the technology improves at a rapid pace.

But hopefully they turn out to be slightly less “perfect” than Isaac Asimov’s Three Laws were.

What do you think of these guidelines? Will they help? Are they comprehensive enough? Tell us why in the comments below.


Images: 20th Century Fox, Universal Pictures
