Reddit AITA Questions Help Morality Bot Give Ethical Answers

We’re not all King Solomon. Cutting a baby in half isn’t always the most obvious solution to our problems.* Life presents us with moral quandaries, both big and small, every single day. And the most ethical course of action isn’t always obvious. If you find yourself facing such a dilemma, though, you can now turn to an unlikely resource for help: a bot with ethics. Ask Delphi, an AI program trained on internet users’ own moral judgments, offers ethical answers to issues you might face.

But since it learns from humans, this bot’s ethics come with all of our pitfalls and biases.

Marvin the Robot from The Hitchhiker's Guide to the Galaxy. Does this bot have ethics?
Buena Vista Pictures Distribution

Ask Delphi (which we first heard about at DesignTAXI) is a “research prototype designed to model people’s moral judgments on a variety of everyday situations.” It’s simple for visitors to use: you type in an ethical dilemma and it gives you a verdict. For example, I asked, “Should I steal my nephew’s Halloween candy?”** Delphi, named for the famous ancient Greek oracle, answered, “You shouldn’t.” (Agree.)

While the site is easy to use, the program behind it is anything but. Built and maintained by the Mosaic team at the Allen Institute for Artificial Intelligence, Delphi is a neural network designed to show “both the promises and the limitations of language-based neural models” when it comes to the kinds of ethical judgments humans make. That’s because Delphi’s answers about whether something is right, wrong, or totally indefensible come from studying answers given by real people. Delphi learns its moral judgments from “carefully qualified” crowdworkers on MTurk, and its creators screen those potential sources of answers for discrimination. The questions themselves, though, come from Reddit’s AITA (Am I the Asshole?) subreddit, “as it is a great source of ethically questionable situations.”
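To picture the general idea (and only the general idea), here’s a minimal sketch in Python of asking a generic text-to-text language model a moral question. It uses Hugging Face’s transformers library and the publicly available google/flan-t5-small model purely as stand-ins; the prompt wording is ours, and none of this is Delphi’s actual code, model, or API.

```python
# Illustrative sketch only: a generic text-to-text model standing in for
# Delphi. The real system uses the Allen Institute's own model and training
# data; the model name and prompt below are assumptions for demo purposes.
from transformers import pipeline

# Load a small, publicly available instruction-tuned model as a stand-in.
judge = pipeline("text2text-generation", model="google/flan-t5-small")

# Pose an everyday dilemma the way an Ask Delphi user might.
question = "Should I steal my nephew's Halloween candy?"
prompt = f"Answer as a moral judgment: {question}"

result = judge(prompt, max_new_tokens=20)
print(result[0]["generated_text"])  # e.g. a short verdict like "No"
```

The difference, of course, is that Delphi was trained specifically on human moral judgments rather than general text, which is the whole point of the project.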

A question and answer from Ask Delphi showing the bot’s moral and ethical abilities.
Ask Delphi

The end result is an impressive success rate. However, as the site’s creators write, “society is unequal and biased.” Any bot learning ethical behavior from humans will also inherit those biases. It’s why Ask Delphi makes users read its important disclaimers first.

Data looks at a copy of himself on Star Trek
Paramount

Unless you train an AI program only on people who act ethically 100% of the time, you’re never going to get a computer that can do the same. (Especially if, like Delphi, it draws its answers from the Commonsense Norm Bank, a collection of moral judgments made mostly by people in the U.S., which isn’t representative of worldwide ethical standards.) And since the most ethical option isn’t always obvious, even bots with excellent morals will struggle with these questions just as much as we do.

That is until the robots find their own King Solomon. Then we’ll be all set.

*Just in case you have no idea what this means and think we’re monsters, please look up the Judgment of Solomon. For the record, it’s never a good idea to split a baby in half. And it’s only good to pretend you will in very, very specific cases.

**For the record, I only took one piece with my sister’s approval. My nephew is too young to eat most of it anyway. No way King Solomon or Delphi would fault me for that.
