12/09/2019 02:03:15  

When an artificially intelligent chatbot that used Twitter to learn how to talk unsurprisingly turned into a bigot bot, Taylor Swift reportedly threatened legal action because the bot's name was Tay. Microsoft would probably rather forget the 2016 experiment in which Twitter trolls took advantage of the chatbot's programming and taught it to be racist, but a new book is sharing unreleased details that show Microsoft had more to worry about than just the bot's racist remarks. Digital Trends reports: Tay was a social media chatbot geared toward teens, first launched in China before adopting the three-letter moniker on its move to the U.S. The bot, however, was programmed to learn how to talk based on Twitter conversations. In less than a day, the automatic responses the chatbot tweeted had Tay siding with Hitler, promoting genocide, and just generally hating everybody. Microsoft immediately removed the account and apologized. When the bot was reprogrammed, it was relaunched as Zo. But in the book Tools and Weapons by Microsoft president Brad Smith and Carol Ann Browne, Microsoft's communications director, the executives have finally revealed why -- another Tay, Taylor Swift. According to The Guardian, the singer's lawyer threatened legal action over the chatbot's name before the bot broke bad. The singer claimed the name violated both federal and state laws. Rather than get into a legal battle with the singer, Smith writes, the company instead started considering new names.

Read more of this story at Slashdot.
