Users apparently taught the millennial-imitating chatbot to make offensive statements.
Microsoft has taken its new millennial-imitating chatbot offline after people apparently taught the artificial intelligence experiment to repeat offensive statements.
Tay.ai, a bot built to converse with 18- to 24-year-old U.S. residents on Twitter, as well as on the messaging services Kik and GroupMe, is designed to learn from its interactions. “The more you chat with Tay the smarter she gets,” Microsoft’s Web page on the bot says. “So the experience can be more personalized for you.”
In a statement, Microsoft said Tay was taken offline after “a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.” Microsoft is making adjustments to the bot, the company said.
Twitter, in particular, can be a minefield for companies seeking software-inspired ways to interact with customers and potential fans, though few of the companies testing those waters have the artificial-intelligence chops of Microsoft.
In 2015, a Coca-Cola marketing campaign that turned negative tweets into cheerful, letter-based images was derailed after the news site Gawker got Coke’s Twitter account to post passages from Adolf Hitler’s “Mein Kampf.” Twitter users have also made a game of getting branded accounts’ automatic replies to repeat slurs and other offensive material.