A good April Fools' joke from a big company is usually a fun time, especially when there's more truth behind it than you'd expect. A famous example over the past couple of years has been Rick and Morty, which unexpectedly aired a new episode in 2017, and this year ran a bizarre, lower-quality Australian version of the show, which was still entertaining in a strange way, and not in the usual strange way. This year, the folks at MIT got in on the fun as well, unveiling what they call the "world's first psychopath AI" (via Geekologie).
"Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms."
When Norman was asked to caption inkblot images, its answers, compared to those of a more traditionally trained AI, were pretty dark. For example, while the normal AI saw "a group of birds sitting on top of a tree branch," Norman saw "a man is electrocuted and catches to death." Normal AI: "A person is holding an umbrella in the air." Norman: "Man is shot dead in front of his screaming wife."
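The mechanism MIT describes, the same learning algorithm producing wildly different output depending on its training data, can be sketched with a deliberately tiny toy model. Everything here is hypothetical: the feature names, the training captions, and the voting scheme are made up for illustration and have nothing to do with Norman's actual deep-learning captioning model.

```python
from collections import Counter, defaultdict

def train(examples):
    """Count which captions co-occur with each image feature in training."""
    model = defaultdict(Counter)
    for features, caption in examples:
        for f in features:
            model[f][caption] += 1
    return model

def caption(model, features):
    """Return the caption most strongly associated with the input features."""
    votes = Counter()
    for f in features:
        votes.update(model.get(f, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

# Same learner, two different (toy, invented) training corpora.
neutral_data = [
    ({"blot", "wings"}, "birds on a branch"),
    ({"blot", "canopy"}, "person holding an umbrella"),
]
dark_data = [
    ({"blot", "wings"}, "a man is electrocuted"),
    ({"blot", "canopy"}, "man is shot dead"),
]

normal = train(neutral_data)
norman = train(dark_data)

print(caption(normal, {"blot", "wings"}))  # identical input features...
print(caption(norman, {"blot", "wings"}))  # ...very different "perception"
```

The point of the sketch is that nothing in `train` or `caption` changes between the two models; only the data does, which is exactly the argument MIT is making about bias in real machine learning systems.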
Keep this AI away from me.
What do you think of the dark neural network? Would it be interesting to see a similarly trained happy and empathetic AI? Let us know what you think down in the comments!
Image: Warner Bros.
More tech news:
- Adam Savage built a 1,000 shot Nerf gun.
- Sanrio characters created by predictive text are...unusual.
- Flamethrower tractors could be better than pesticides.