‘World’s first psychopath AI’ bot trained by viewing Reddit


Norman is a "psychopath AI" created by researchers at the MIT Media Lab as a "case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms".

Norman is an AI experiment born from "extended exposure to the darkest corners of Reddit", according to MIT, designed to explore how datasets and bias can influence the behavior and decision-making of artificial intelligence. After Norman was trained on this data, the researchers compared its image captions with those of a standard captioning AI, and the difference in how the two described the same images was plain to see.

The results of the comparison are below. Don't we have enough insane people in the world? A typical Rorschach test uses inkblots that are deliberately ambiguous and don't actually portray anything; the point of the test is to get an insight into the mind of the person taking it.

The researchers trained Norman to caption images, which means generating short descriptions of what it sees.
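MIT has not released Norman's code or weights, so as a loose illustration only, here is roughly what automated image captioning looks like with an off-the-shelf pretrained model. The model name, the image file path and the use of the Hugging Face BLIP captioner are assumptions for the sketch, not details of the Norman project.

# A minimal sketch of image captioning, not the Norman model itself.
# Assumes the Hugging Face "transformers" and "Pillow" packages are installed.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Hypothetical model choice and image path, for illustration only.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("inkblot.png").convert("RGB")  # any local image file
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)

# Decode the generated tokens into a short textual description of the image.
print(processor.decode(output_ids[0], skip_special_tokens=True))

MIT's point is that the same kind of captioning model, fed nothing but grisly captions during training, will describe the same inkblots very differently.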


"Since Norman only observed horrifying image captions, it sees death in whatever image it looks at", the developers said.

On the inkblot pictured above, the non-psycho AI sees a "group of birds sitting on top of a tree branch", while nutty ol' Norman sees "a man is electrocuted and catches to death". The team does note that the subreddit Norman learned from is "dedicated to documenting and observing the disturbing reality of death". "So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it".

On another card, the standard AI observes "a couple of people standing next to each other". Where the standard AI sees "a close up of a wedding cake on a table", Norman, our malicious AI robo-killer, sees "a man killed by speeding driver". When an AI algorithm is trained the way Norman was, it sees what MIT calls "sick" things in every image it looks at.
