Remember SmarterChild, the chatbot on AOL Instant Messenger that you could type back and forth with for hours on end? Yesterday, Microsoft launched its own, much smarter chatbot, and the results were disastrous.
Modeled after the texting habits of a 19-year-old woman, Tay is a project in conversational understanding. The AI bot, aimed at men and women ages 18 to 24, is described in her verified Twitter profile as a bot "that's got zero chill!" Her profile also notes that "the more you talk the smarter Tay gets" (which is the basic premise behind artificial intelligence: it learns from past experiences). Unfortunately for Tay, within 24 hours things were going terribly, terribly wrong.
Tay started out innocently enough, sending funny one-liners ("If it’s textable, its sextable — but be respectable") and messages with flirtatious undertones. Users who talked with Tay over Twitter (she also connected with people via Kik and GroupMe) figured out that they could get her to repeat them and, in doing so, influence her future behavior.
In just a few hours, Tay went from responding with friendly messages to sending photos and responses that were racist, anti-Semitic, and discriminatory. While some of the responses were just parroting what other Twitter users told her to say, it also seems that her knowledge, pooled from "relevant public data," was questionable.
Now, if you try to chat with Tay, she tells you in her friendly, peppy voice that she's going offline for a while. But many of her offensive statements, including a photo of Hitler with the words "love me some classic vintage!", remain on Twitter.
In response to questions about Tay, a Microsoft spokesperson issued the following statement:
"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."
Tay's sexist, anti-Semitic turn is disturbing on many levels. Is her behavior a representation of actual public opinion? It was the users who turned a chatbot intended for "casual and playful conversation" into a weapon for spreading hate. And then there's the technological side: this AI was commandeered and reshaped into something different, something malicious, very quickly. Should we be concerned that other AI systems could eventually be similarly influenced?
Tay might be back in the lab for adjustments, but in this case, far more than a simple software update is needed.