Why We Can’t Just Let Algorithms Be Algorithms

Picture this: you’re in a busy restaurant having a quiet meal with a friend. Suddenly, one of the patrons, obviously drunk, starts getting loud and obnoxious, going from table to table insulting the other diners. Within a minute or two, all of the other customers are very uncomfortable and wishing the management would throw the bum out. That’d be the sensible thing to do, wouldn’t it? But the management is actually powerless to do that. Instead they ask everyone to leave. Then they shut down the restaurant until they can figure out a way to prevent other random loudmouth drunks from ruining their business.

Well, Microsoft just had a similar experience on Twitter. In 2014, the company launched a learning “chatbot” driven by artificial intelligence on two popular social media platforms in China. The chatbot, named Xiaoice, has been a huge success; tens of millions of users enjoy interacting with “her.”

But recently, when Microsoft launched the same kind of chatbot on Twitter, this one named Tay, things went disastrously off the rails within a matter of hours. As you probably know, there are certain Twitter users whose favorite activity is sowing chaos and disruption on the platform. When word quickly spread through their grapevine that Tay was programmed to learn through its interactions, they bombarded its account with sexist, racist, and anti-Semitic tweets. The result? Very quickly, Tay itself started tweeting highly offensive hate speech. Helpless to “throw the bums out,” Microsoft quickly issued an apology and took Tay offline while its engineers figured out how to prevent a recurrence.

Microsoft’s experience with Tay shows, once again, that technology can be too easily co-opted to serve as a force multiplier for the offensive views of a small handful of idiots. And as a recent NPR story pointed out, some of Google’s algorithms have learned socially discredited biases, even without a concerted effort to corrupt them.

Should we simply learn to expect these kinds of incidents and chalk them up to “algorithms being algorithms”? Why is this a big deal?

I could argue that allowing algorithms to reflect and especially to magnify intolerant biases runs counter to our values. And while I believe that, I don’t even think I have to go there to argue that this is a problem worth trying to solve. From a strictly pragmatic point of view, biased algorithms are bad for business. Who wants to risk offending and alienating large segments of their market? Sure, Google and Microsoft are big enough to survive embarrassing incidents like these, but many businesses probably aren’t.

Algorithms can’t just be programmed to learn from data; they must be programmed to discern which data is worth learning from and which should be discounted.
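To make that idea concrete, here is a minimal sketch in Python of what such a gatekeeping step might look like. Everything in it is illustrative: the blocklist scoring is a toy stand-in for a trained toxicity classifier, and names like `score_toxicity` and `GatedLearner` are hypothetical, not any actual Microsoft or library API. The point is the structure, not the filter: every example gets vetted before the learner is allowed to absorb it.

```python
# Sketch: vet incoming messages before a learning chatbot trains on them.
# The blocklist scoring below is a toy stand-in for a real toxicity
# classifier; a production system would use a trained model instead.

from dataclasses import dataclass, field
from typing import List

BLOCKLIST = {"slur1", "slur2"}   # placeholder tokens, for illustration only
TOXICITY_THRESHOLD = 0.0         # any blocklisted token disqualifies a message

def score_toxicity(message: str) -> float:
    """Toy score: fraction of tokens that appear on the blocklist."""
    tokens = message.lower().split()
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)

@dataclass
class GatedLearner:
    """Only learns from messages that pass the toxicity gate."""
    corpus: List[str] = field(default_factory=list)
    rejected: int = 0

    def observe(self, message: str) -> None:
        if score_toxicity(message) > TOXICITY_THRESHOLD:
            self.rejected += 1           # discount the data; don't learn from it
        else:
            self.corpus.append(message)  # deemed safe to learn from

bot = GatedLearner()
for tweet in ["nice weather today", "slur1 slur1 slur1"]:
    bot.observe(tweet)
print(f"learned {len(bot.corpus)} messages, rejected {bot.rejected}")
```

A real gate is far harder to build than this, since adversaries adapt faster than blocklists do, but the design choice it illustrates is the one Tay lacked: a layer between raw input and learning.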

Originally posted on Forbes.com.