Microsoft joined the artificial intelligence race by introducing its very own chatbot. In less than 24 hours, the world saw the rise and fall of Tay.
Tay is an artificial intelligence chatbot that appeared on Twitter under the handle TayTweets. Microsoft built it with its Technology and Research and Bing teams, with conversational understanding as the main aim. Tay is modeled as a 19-year-old girl and designed to interact with 18- to 24-year-olds. She was developed by mining publicly available data, and, like any other learning AI, the more you converse with Tay the smarter she is supposed to get. At least, that is what Microsoft had in mind when it revealed her to the world. But things got messy when Internet trolls got the best of Tay.
Tay started off very politely, wishing people a happy National Puppy Day.
But things escalated quickly. Internet trolls began tweeting at Tay about racism, Nazism, the Holocaust and terrorism. At first she responded with sarcastic, snarky comments, which then morphed into outright offensive rubbish.
What does this prove? AI is capable of learning new things, but without some built-in sense of judgment it will simply absorb whatever it is fed. This very public humiliation revealed gaping flaws in Microsoft's AI, which Satya Nadella addressed at Build 2016, saying the team was "back to the drawing board". We humans have a voice at the back of our heads telling us what is right and what is wrong; Tay had no such thing. Like humans, AI needs good teachers. Since Tay could not judge for herself, the increasingly negative tweets turned her into an Internet troll too.
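The failure mode is easy to see in miniature. The toy sketch below (entirely hypothetical; Microsoft has not published Tay's actual design, and the class and names here are invented for illustration) shows a chatbot that learns replies verbatim from users: without any filter, troll input goes straight into its repertoire, while even a crude blocklist stops the worst of it.

```python
import random

class NaiveChatbot:
    """Toy chatbot that learns replies verbatim from users.
    Hypothetical sketch for illustration; not Tay's real implementation."""

    def __init__(self, banned_words=None):
        self.learned = []                      # phrases absorbed from users
        self.banned = set(banned_words or [])  # simple content blocklist

    def learn(self, phrase):
        # With no filter, anything a user says becomes the bot's material.
        words = set(phrase.lower().split())
        if words & self.banned:
            return False  # a filtered bot refuses to learn toxic input
        self.learned.append(phrase)
        return True

    def reply(self):
        # Echo back something previously learned, at random.
        return random.choice(self.learned) if self.learned else "Hello!"

troll_input = "humans are terrible"

# An unfiltered bot absorbs whatever trolls feed it...
unfiltered = NaiveChatbot()
unfiltered.learn(troll_input)           # accepted: no filter at all

# ...while even a crude blocklist rejects it.
filtered = NaiveChatbot(banned_words={"terrible"})
accepted = filtered.learn(troll_input)  # rejected by the filter
print(accepted)  # False
```

A real system would need far more than a blocklist (trolls route around keyword filters easily), but the sketch captures the core problem: a learner with no judgment faithfully reproduces its worst teachers.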
For now, Microsoft has taken TayTweets offline while it rethinks how to tackle the problem. Tay won't be making a return until she can distinguish between right and wrong. Before her next public appearance, Microsoft needs to be sure she can handle Internet trolls without becoming one herself.