
steve2470

(37,457 posts)
Fri Mar 25, 2016, 06:27 PM

Microsoft Is Sorry for That Whole Racist Twitter Bot Thing

http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/

As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

I want to share what we learned and how we’re taking these lessons forward.

For context, Tay was not the first artificial intelligence application we released into the online social world. In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations. The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.

As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.


More at link. MS obviously doesn't understand Twitter and the internet.
17 replies

MADem

(135,425 posts)
1. Yeah, they don't understand that the internet is full of trolls and assholes, basically.
Fri Mar 25, 2016, 06:31 PM

It's a shame that they get more attention than they deserve. Not sure how that could be mitigated.

Hayabusa

(2,135 posts)
2. People make a chatbot
Fri Mar 25, 2016, 06:32 PM

the first thing people try to do is break it: make it say silly/disgusting things, etc.

 

Sen. Walter Sobchak

(8,692 posts)
16. Siri day one...
Fri Mar 25, 2016, 07:44 PM

All I saw anyone doing, male or female, young or old, was crudely propositioning their phone.

irisblue

(32,969 posts)
3. srsly...
Fri Mar 25, 2016, 06:41 PM

since so many computer systems came from basement tinkerers, how did they not recognize the potential issues?

Renew Deal

(81,856 posts)
10. I agree
Fri Mar 25, 2016, 07:06 PM

The best headline I saw was something like "Microsoft chatbot turns into racist, sex-loving Trump supporter."

ohnoyoudidnt

(1,858 posts)
8. Maybe instead of filtering Tay's responses,
Fri Mar 25, 2016, 07:00 PM

they should filter the questions people are allowed to ask it, dropping questions with certain keywords or phrases, at least until it can respond to those kinds of questions in a better way.
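
A minimal sketch of that kind of keyword gate, in Python. The blocklist contents, the canned deflection, and the generate_reply stub are all hypothetical stand-ins (nothing here is Tay's actual code), and a real moderation layer would need far more than substring matching:

```python
# Hypothetical sketch of the input-filtering idea above: drop or deflect
# questions containing blocked keywords before the bot ever learns from them.

BLOCKED_TERMS = {"hitler", "genocide"}  # illustrative placeholders only


def generate_reply(question: str) -> str:
    """Stand-in for the bot's real response model (hypothetical)."""
    return f"Interesting question: {question!r}"


def should_drop(question: str) -> bool:
    """True if the incoming question contains any blocked term."""
    lowered = question.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def handle_incoming(question: str) -> str:
    if should_drop(question):
        return "Let's talk about something else."  # canned deflection
    return generate_reply(question)
```

The obvious weakness is that trolls route around fixed keyword lists with misspellings and coded language, so a gate like this could only ever be a stopgap.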

0rganism

(23,944 posts)
12. i think they need to adjust the weighting model for learning, not add more filtering
Fri Mar 25, 2016, 07:19 PM

just give it some "opinions" of its own to start with, and during the learning process have it reduce the importance of tweets with, um... undesirable characteristics. that way it can learn from and adapt to whatever it finds in the media without being the proverbial blank slate.
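
A minimal sketch of that down-weighting idea, again in Python. Microsoft never published Tay's learning internals, so everything here is assumed: the toxicity() scorer, the phrase-level model, and the seed "opinions" are illustrative placeholders:

```python
# Hypothetical sketch: instead of filtering input outright, scale how much
# each tweet counts during learning by an "undesirability" score.

from collections import defaultdict


def toxicity(tweet: str) -> float:
    """Hypothetical scorer in [0, 1]: fraction of flagged words in the tweet."""
    flagged = {"hate", "slur"}  # placeholder vocabulary, not a real model
    words = tweet.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)


# Seed the model with weighted priors ("opinions") so it isn't a blank slate.
phrase_weights = defaultdict(float, {"be kind": 5.0, "humans are great": 5.0})


def learn(tweet: str) -> None:
    # Down-weight undesirable tweets rather than dropping them entirely.
    weight = 1.0 - toxicity(tweet)
    phrase_weights[tweet.lower()] += weight
```

With a scheme like this, a coordinated flood of toxic tweets still reaches the learner, but each one moves the model far less than ordinary conversation does.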

ohnoyoudidnt

(1,858 posts)
15. That could help. The 'opinions' could help simulate the human traits/instincts
Fri Mar 25, 2016, 07:42 PM

that humans who are not sociopaths have. I do think environment in early development also plays a role, though. There's a lot about the human mind and psychology that we still don't understand, and yet here we are trying to create AI.

cemaphonic

(4,138 posts)
14. I still can't believe that nobody at Microsoft saw this coming.
Fri Mar 25, 2016, 07:20 PM

Technical hacking challenge + epic trolling potential = catnip to the troglodytes at 4chan and the other seedier corners of the internet.
