The new chatbot appears to avoid discussion of politics, religion and race entirely and lives only on the Kik messaging application.
Microsoft is taking another stab at building a chatbot, several months after Tay, an earlier attempt, was taken offline when some internet users convinced it to spout racist and sexist comments.
The company’s second try, Zo, lives on the Kik messaging application.
Spotted over the weekend by a Microsoft-tracking blog, Zo appears to avoid discussion of politics, religion and race entirely. It also has a narrower release than Tay, which, because of its place on the public Twitter platform, had its meltdown in view of the entire internet.
Microsoft launched Tay, a millennial-imitating chatbot, in March. The bot, powered by machine-learning algorithms, was designed to mine public data, as well as the input of people who engaged with it on Twitter, Kik and GroupMe, to come up with phrases to use in conversation.
A day later, the bot was advocating genocide and calling for the murder of feminists.
Microsoft took Tay down and said it would make adjustments. The company said the bot was the victim of a coordinated attack by internet users interested in manipulating its responses.
Microsoft and other technology companies have bet on chatbots, and the artificial-intelligence tools that underpin them, as one of the next computing interfaces.
The company’s experiments with a chatbot in the U.S. follow the success Microsoft had with Xiaoice, a chatbot the company introduced in China in 2014.
Xiaoice itself has quirks that would raise eyebrows in the U.S. China Digital Times reported last month that Microsoft had apparently programmed the bot to avoid discussing the government’s crackdown on the 1989 Tiananmen Square protests.