Microsoft launches Twitter Chatbot which immediately learns to be racist

I posted about it yesterday, but it seems they pulled the account and are reworking it

Microsoft launches Twitter Chatbot which immediately learns to be racist

www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter?CMP=twt_gu

Attempt to engage millennials with artificial intelligence backfires hours after launch, with TayTweets account citing Hitler and supporting Donald Trump

Microsoft’s attempt at engaging millennials with artificial intelligence has backfired hours into its launch, with waggish Twitter users teaching its chatbot how to be racist.

The company launched a verified Twitter account for “Tay” – billed as its “AI fam from the internet that’s got zero chill” – early on Wednesday.

The chatbot, targeted at 18- to 24-year-olds in the US, was developed by Microsoft’s technology and research and Bing teams to “experiment with and conduct research on conversational understanding”.

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft said. “The more you chat with Tay the smarter she gets.”

But it appeared on Thursday that Tay’s conversation extended to racist, inflammatory and political statements. Her Twitter conversations have so far reinforced the so-called Godwin’s law – that as an online discussion goes on, the probability of a comparison involving the Nazis or Hitler approaches one – with Tay having been encouraged to repeat variations on “Hitler was right” as well as “9/11 was an inside job”.

One Twitter user has also spent time teaching Tay about Donald Trump’s immigration plans.

It actually gets pretty racist and uses bad language, so I can't post all the stuff.

Also it hates Jeb and Cruz
 
 
I'm not seeing how this could ever be useful to us as millennials, or to any user, except maybe to predict trends that would keep Twitter from becoming obsolete.
 
I'm not seeing how this could ever be useful to us as millennials, or to any user, except maybe to predict trends that would keep Twitter from becoming obsolete.
I think the intention was that if a younger person had a question, the AI bot could respond in a way that felt like talking to a peer rather than a monotonous adult voice. Like how we enjoy it when Siri makes jokes sometimes. It's meant to parrot back what you say to it using what "kids" would say, making it more personable than Siri.

But I think it speaks to how AI will have to be programmed in the future. Would it have to be programmed with morality and rules about what not to say, or would that defeat the purpose of having an AI that actually learns? Just because it's learning doesn't mean it's learning the right things.
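
To make that tradeoff concrete, here's a minimal Python sketch. Everything in it is made up for illustration (Microsoft never published how Tay actually learned): a bot that "learns" by storing phrases users say to it and parroting them back, plus a hand-coded blocklist standing in for the "programmed morality" idea.

```python
import random

class NaiveChatbot:
    """Toy model of a bot that learns by storing what users say to it.
    A hypothetical sketch, not Microsoft's actual Tay implementation."""

    def __init__(self, blocklist=None):
        self.memory = []                        # phrases learned from users
        self.blocklist = set(blocklist or [])   # hand-coded list of banned words

    def learn(self, user_message):
        # Without a filter, the bot absorbs whatever it is fed,
        # which is roughly how coordinated users taught Tay bad things.
        if not any(bad in user_message.lower() for bad in self.blocklist):
            self.memory.append(user_message)

    def respond(self):
        # Parrot back something previously learned, if anything survived the filter.
        return random.choice(self.memory) if self.memory else "teach me something!"

bot = NaiveChatbot(blocklist={"hitler", "9/11"})
bot.learn("Hitler was right")      # caught by the blocklist, never learned
bot.learn("taylor swift rules")    # learned
print(bot.respond())               # -> "taylor swift rules"
```

The catch is visible even in a toy this small: every hard-coded rule narrows what the bot can learn, and a keyword blocklist is trivially evaded with misspellings. So "just program it with morality" is much harder than it sounds, which is the tension the Tay fiasco exposed.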
 