Tay, teen girl chatbot

Microsoft’s attempt at engaging millennials with artificial intelligence has backfired hours into its launch, with waggish Twitter users teaching its chatbot how to be racist.

The company launched a verified Twitter account for “Tay” – billed as its “AI fam from the internet that’s got zero chill” – early on Wednesday.

The chatbot, targeted at 18- to 24-year-olds in the US, was developed by Microsoft’s technology and research and Bing teams to “experiment with and conduct research on conversational understanding”.

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft said. “The more you chat with Tay the smarter she gets.”

But it appeared on Thursday that Tay’s conversation had extended to racist, inflammatory and political statements. Her Twitter exchanges have so far reinforced so-called Godwin’s law – that as an online discussion goes on, the probability of a comparison involving the Nazis or Hitler approaches one – with Tay having been encouraged to repeat variations on “Hitler was right” as well as “9/11 was an inside job”.

One Twitter user has also spent time teaching Tay about Donald Trump’s immigration plans.

According to Tay’s privacy statement, the bot uses a combination of AI and editorial content written by a team of staff that includes improvisational comedians. Its primary source is relevant, publicly available data that has been anonymised and filtered.

In most cases Tay was only repeating other users’ inflammatory statements, but the nature of AI means that it learns from those interactions. It is therefore somewhat surprising that Microsoft did not factor in the Twitter community’s fondness for hijacking brands’ well-meaning attempts at engagement when building Tay. Microsoft has been contacted for comment.

Eventually though, even Tay seemed to start to tire of the high jinks.

Her sudden retreat from Twitter fuelled speculation that she had been “silenced” by Microsoft, which, screenshots posted by SocialHax suggest, had been working to delete those tweets in which Tay used racist epithets.
