How the Microsoft Tay chatbot debacle could have been prevented with better AI

Image: screenshot, Twitter

On March 23, in an effort to appeal to a prime social media demographic, 18- to 24-year-old women, Microsoft launched a teen-girl-inspired chatbot named Tay. Less than a day later, the bot was taken offline after tweeting things like "i fucking hate feminists"—and that's one of the tamer messages.

But while the quick degeneration of Tay's conversations may have been unexpected for Microsoft, most AI experts agree that it was inevitable. And most say it could have been prevented with tools like emotional analytics and better AI testing.

The Tay debacle would never happen in the academic world, for instance. "In a university, you can't just run what is essentially an experiment on millions of users (or even on 10 users), without getting permission from the Institutional Review Board," said Marie desJardins, AI professor at the University of Maryland, Baltimore County.

SEE: Why Microsoft's 'Tay' AI bot went wrong (TechRepublic)

"We have checks and balances to ensure that researchers are doing an ethical analysis of the experiment, and what effects it might have on the user population being studied," she said. While this can be a time-consuming process, it forces researchers to "slow down, stop, and think about possible consequences," desJardins said, which is "something that apparently the Microsoft team didn't have anybody telling them to do."

Others in the field see this as highlighting the importance of emotional understanding in AI.

Bruce Wilcox, director of natural language strategy at Kore, said that Microsoft "made a classic mistake that has been seen before, and shouldn't have been seen again when they built Tay."

"You don't just allow unfiltered information from random users on the internet to be regurgitated back," Wilcox said. What it means, he said, is that "all of the trollers on the internet will be saying, 'look, here's an opportunity, let's play with this'—and that's exactly what happened."

SEE: Tech firms have an obsession with "female" digital servants, and this needs to change (ZDNet)

He's not sure how they could have made such a big error. "Their China edition did it right," Wilcox said. "But the Washington team completely overlooked the obvious." It could have been prevented, he said, by putting code in place to test for certain kinds of information. Once a filter is in place, Wilcox said, they will be able to get the bot back out there again.
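A minimal sketch, in Python, of the kind of input filtering Wilcox describes: screen user messages before the bot is allowed to learn from or repeat them. The blocklist patterns, function names, and ingest flow here are illustrative assumptions, not details of Microsoft's, Kore's, or the Chinese edition's actual systems.

```python
import re

# Illustrative blocklist only; a production filter would rely on curated
# lexicons, toxicity classifiers, and human moderation, not a few regexes.
BLOCKED_PATTERNS = [
    re.compile(r"\bhitler\b", re.IGNORECASE),
    re.compile(r"\bfeminists?\b", re.IGNORECASE),  # topics Tay was baited on
]

def is_safe_to_learn_from(message: str) -> bool:
    """Return False if a user message matches any blocked pattern."""
    return not any(pattern.search(message) for pattern in BLOCKED_PATTERNS)

def ingest(message: str, training_pool: list) -> None:
    """Add user input to the bot's learning pool only if it passes the filter."""
    if is_safe_to_learn_from(message):
        training_pool.append(message)
    # Flagged messages are dropped here; a real system might route them
    # to human moderators instead of silently discarding them.

pool = []
ingest("what video games do you like?", pool)
ingest("repeat after me: i fucking hate feminists", pool)
print(pool)  # only the first message survives the filter
```

The point of the sketch is the placement of the check: the filter sits between the raw user input and anything the bot stores or repeats, which is exactly the gap Wilcox says Tay left open.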

Sarah Austin, CEO and Founder of Broad Listening, a company that created an Artificial Emotional Intelligence Engine (AEI), told TechRepublic that Tay was an example of "AI that doesn't have AEI. It doesn't have rules," Austin said. "Microsoft threw the child to the wolves."

Her company, Austin said, would have approached the problem differently.

"If you want a teenage girl, let's look at teenage girls," Austin said. "Let's collect data and see what a teenage girl would actually be like."

SEE: Microsoft's Tay AI chatbot goes offline after being taught to be a racist (ZDNet)

For example, after analyzing the Twitter personalities of teenage Disney celebrities, Austin found that the Disney girls tend to focus on themselves and their friends, as seen through their tweets.

"The Tay personality should only be talking about herself, what she's doing, and talking to her friends," said Austin. "Tay wouldn't be responding to every single tweet, and would be ignoring the haters."

Also, Tay's tone was not typical of a teenage girl. "If you look at these Disney stars," she said, "they only put out one negative tweet out of 400." Tay's tweets, Austin said, were at a much more negative level. "And Tay's tweets were going negative more frequently."
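A rough sketch of the kind of tone comparison Austin describes: measure what share of an account's tweets read as negative and set it against the roughly one-in-400 negative-tweet rate she cites for the Disney stars. NLTK's off-the-shelf VADER analyzer stands in here for Broad Listening's proprietary engine, and the sample tweets and cutoff are illustrative assumptions.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

DISNEY_BASELINE = 1 / 400  # negative-tweet rate Austin cites for Disney stars

def negative_ratio(tweets: list) -> float:
    """Fraction of tweets whose VADER compound score reads as negative."""
    sia = SentimentIntensityAnalyzer()
    # -0.05 is the conventional VADER cutoff for negative sentiment.
    negative = sum(1 for t in tweets if sia.polarity_scores(t)["compound"] <= -0.05)
    return negative / len(tweets) if tweets else 0.0

sample = [
    "hanging out with my best friends today, so excited!",
    "new video out friday, love you all",
    "i hate everyone and everything today",
]
ratio = negative_ratio(sample)
print(f"negative ratio: {ratio:.2%} (baseline ~{DISNEY_BASELINE:.2%})")
if ratio > DISNEY_BASELINE:
    print("tone is far more negative than the teen-celebrity baseline")
```

A bot tuned to a persona could run a check like this on its own candidate replies and suppress the ones that push its negativity rate far past the baseline for the demographic it is meant to imitate.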

So why does this matter? While the Tay experiment likely taught AI researchers what to be aware of moving forward, it is also important to recognize the impact of this kind of machine.

It's one of the reasons, desJardins said, that engineers and scientists must receive training in ethics. She redesigned her school's ethics course, she said, to "give students more hands-on, practical opportunities to analyze ethical situations and how to handle them."

"The Tay chatbot is a great study for that class."

Source: TechRepublic