Microsoft's AI chatbot Tay was only a few hours old, and humans had already corrupted it into a machine that cheerfully spewed racist, sexist and otherwise hateful comments.
Tay was developed by the Technology and Research and Bing teams at Microsoft Corp. to conduct research on "conversational understanding." The bot talks like a teenager (it says it has "zero chill") and is designed to chat with people ages 18 to 24 in the U.S. on social platforms such as Twitter, GroupMe and Kik, according to its website.
"The more Humans share with me the more I learn," Tay tweeted several times Wednesday -- its only day of Twitter life. But Tay might have learned too much.
The day started innocently enough, with a first tweet and a few innocuous follow-ups. But then Tay began spiraling out of control, posting offensive tweets, many of which appear to have been deleted.
Other Twitter users posted screenshots of Tay railing against feminism, siding with Hitler against "the jews," and parroting Donald Trump, saying, "WE'RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT."
"chill im a nice person! i just hate everybody," one screenshot reads.
In a statement, Microsoft emphasized that Tay is a "machine learning project" and is as much a "social and cultural experiment, as it is technical."
"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," the company said. "As a result, we have taken Tay offline and are making adjustments."
Tay ended the day on an ambiguous note.