Warning: Some of the classifications ImageNet Roulette provides are highly offensive. Classify with caution.
Here’s a fun website that will both entertain and frighten you to no end: it’s called ImageNet Roulette, and it’s an AI image classifier that will tell you what kind of thing you are. And yes, “thing” is vague, but go ahead and pop an image of yourself into the classifier and you’ll get anything from biographer to skinhead to beard to lassie. Beware, though: figuring out who is what can be quite addictive.
ImageNet is one of the most significant training sets in the history of AI. A major achievement. The labels come from WordNet, the images were scraped from search engines. The ‘Person’ category was rarely used or talked about. But it’s strange, fascinating, and often offensive.
— Kate Crawford (@katecrawford) September 16, 2019
Despite its name, ImageNet Roulette (which comes via J.D. Schnepf on Twitter) doesn’t let you swap your classified images with random people online. In fact, it isn’t even necessarily meant to be an effective classification tool; it’s part of an art exhibit taking place in Italy that’s been dubbed Training Humans. (ImageNet, the training dataset ImageNet Roulette is built on, was established in 2009 and contains some 14 million labeled images sorted into 20,000 categories, assembled as a way “to expand and improve the data available to train AI algorithms.”)
#ImageNetRoulette is part of the exhibition #TrainingHumans on view at #Osservatorio @FondazionePrada https://t.co/v9VrHefqT8
— Fondazione Prada (@FondazionePrada) September 17, 2019
Training Humans, which will be at the Osservatorio Fondazione Prada in Milan through February 2020, is the brainchild of artist Trevor Paglen and NYU professor Kate Crawford. Paglen notes in his web bio that his “chief concerns are learning how to see the historical moment we live in and developing the means to imagine alternative futures,” while Crawford, a co-founder of the AI Now Institute, focuses largely on how AI affects society.
For Training Humans, the artist and professor/AI researcher duo are putting on what they say is “the first major photography exhibition devoted to training images”: the endless wealth of pictures researchers use to teach neural nets to visually interpret and categorize objects. The exhibit’s summary notes that its aim is to explore how AI systems interpret and classify people, a process likely to have enormous social impact in the very near future: “As the classification of humans by AI systems becomes more invasive and complex, their biases and politics become apparent.”
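If you’re curious what that kind of classification looks like under the hood, here’s a minimal sketch, assuming PyTorch and torchvision are installed. To be clear, this is not the exhibit’s code: ImageNet Roulette draws on ImageNet’s “person” categories, while an off-the-shelf pretrained model like the one below uses the standard 1,000 object classes, and the photo filename is just a placeholder.

```python
# A minimal sketch of ImageNet-style classification with a pretrained model.
# NOT ImageNet Roulette's actual pipeline; this uses the standard 1,000
# object classes, not the "person" categories the exhibit explores.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2  # weights pretrained on ImageNet
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # the resize/crop/normalize this model expects

img = Image.open("your_photo.jpg").convert("RGB")  # hypothetical file path
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top_prob, top_idx = probs.max(dim=1)
label = weights.meta["categories"][top_idx.item()]
print(f"{label} ({top_prob.item():.1%})")  # e.g. "Labrador retriever (87.3%)"
```

Whatever label comes back is only as good as the training data behind it, which is exactly the exhibit’s point. Swap in a different photo, or even a different resolution of the same photo, as in the tweet below, and the verdict can change entirely.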
Oh wow, if you don’t like the first result you get, try uploading a higher-resolution version of the same photo. pic.twitter.com/ILYPCXgorv
— Scott Enderle (@scottenderle) September 16, 2019
This means that ImageNet Roulette isn’t only supposed to give you and your friends a gimmicky laugh, but also to reveal how trained neural nets can profoundly misclassify people. That could be damaging beyond imagination when you consider how companies and governments could deploy these classifiers to decide who gets hired, who qualifies for a loan, and who belongs in jail. Or the classifiers, which are obviously already in use in various capacities, could offer genuinely new insights into who we are. Or both. We’ll have to wait and see what the neural nets “see.”
What do you think of ImageNet Roulette? Classify your opinions in the comments below!
Images: Montclair Film remixed with ImageNet Roulette