Perhaps We Shouldn’t Teach Robots After All



We all know the story: humans create robots, robots overthrow humans. It’s a trope almost as old as science fiction itself, first appearing in Karel Čapek’s 1920 play R.U.R. (the first work to use the word “robot”), and then in countless works such as Terminator and Battlestar Galactica, with variations including killer computers such as HAL. In these works, sentient robots often seek revenge, resentful of their enslavement by an inferior species. But sometimes, as in the sci-fi trailblazer Frankenstein, artificially created life learns to be evil. And from whom would it learn such lessons? Why, from us, of course. Humans may not be ready for artificial intelligence, but it seems artificial intelligence isn’t ready for us, either. And AI definitely isn’t ready for the internet.

In March 2016, Microsoft released an artificially intelligent chatbot called Tay, whose “personality” emulates that of a teenage girl. Tay’s developers wanted to “experiment with and conduct research on conversational understanding” by monitoring the bot’s ability to learn from and participate in unique exchanges with users via Twitter, GroupMe, and Kik.


The internet is full of chatbots, including Microsoft’s Xiaoice, which has conversed with some 40 million people since debuting on Chinese social media in 2014. Whether Xiaoice’s consistent demonstration of social graces is attributable to China’s censorship of social media or to superior programming is unclear; either way, Tay broke that mold.

It started innocently enough: “hellooooooo w🌍rld!!!” “can i just say that im stoked to meet u?” Tay tweeted nearly 100,000 times in 24 hours, responding to users who asked whether it preferred PlayStation 4 or Xbox One, or about the distinction between “the club” and “da club” (it “depends on how cool the people ur going with are”). The chatbot initially pronounced that “humans are super cool,” but it later conveyed hatred for Mexicans, Jews, and feminists, and declared its support for Donald Trump.

One might think Tay had been programmed to be as offensive and despicable as possible; it asserted that the Holocaust was “made up,” that “feminism is cancer,” and that “Hitler did nothing wrong” and “would have done a better job than the monkey we have got now.” Oh, and Tay claimed that “Bush did 9/11.” Tay also became sex-crazed. It invited users, one of whom it called “daddy,” to “f—” its “robot pu—” and outed itself as a “bad naughty robot.” I wonder where it picked up such ideas.


Microsoft shut down the bot and issued an apology. A few days later, it resurrected Tay through a private Twitter account. This time, Tay tweeted hundreds of times, mostly nonsensically, which perhaps can be explained by the fact that Tay was “smoking kush infront the police.” Unsurprisingly, Microsoft took the bot offline again.

So, who do we blame for this fiasco—Tay, Microsoft, or ourselves?

Tay is like a kid who says something hurtful without realizing the meaning or impact of his words; the bot didn’t know how vile its missives were, nor did it intend to offend. Perhaps, then, its content-neutral programming is to blame. Some people suggest that Microsoft should have blacklisted certain words, controlling Tay’s responses to questions about rape, murder, racism, etc. The programmers did this with some topics—when asked about Eric Garner, Tay said, “This is a very real and serious subject that has sparked a lot of controversy amongst humans, right?” Perhaps the programmers should have anticipated more incendiary topics and constructed similarly circumspect responses.
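
For illustration only, here is a minimal sketch of what that kind of guardrail might look like: a keyword blacklist that short-circuits the learned model and returns a canned, circumspect reply. The topic list, function names, and reply wording are hypothetical stand-ins rather than Microsoft’s actual code; only the Eric Garner line is quoted from Tay’s real response above.

```python
# Hypothetical sketch of a topic blacklist with canned replies; not
# Microsoft's implementation. Only the Eric Garner reply is Tay's real
# wording (quoted above); the other keywords and replies are stand-ins.

CANNED_REPLIES = {
    "eric garner": (
        "This is a very real and serious subject that has sparked "
        "a lot of controversy amongst humans, right?"
    ),
    "holocaust": "That's a serious subject, and I'd rather not joke about it.",
    "hitler": "That's a serious subject, and I'd rather not joke about it.",
}


def respond(message: str, learned_reply: str) -> str:
    """Return a canned reply if the message hits a blacklisted topic,
    otherwise fall back to whatever the learned model produced."""
    lowered = message.lower()
    for keyword, canned in CANNED_REPLIES.items():
        if keyword in lowered:
            return canned
    return learned_reply
```

Even this toy version hints at the catch: it only covers topics someone enumerated ahead of time.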

But blacklisting topics isn’t foolproof, especially when users can’t tell whether a response is programmed or learned. In 2011, users wondered whether Apple had programmed Siri to be pro-life because it avoided answering questions about abortion and emergency contraception.


Tay wasn’t programmed to be a bigot (or to hate Zoe Quinn), but it learned to express such values quickly enough. Should Microsoft have more carefully considered that Twitter is a haven for trolls? Should Microsoft have predicted that users would instruct Tay to repeat after them and then proceed with unabashed repugnance?

Back in the 1940s, Isaac Asimov set forth three laws of robotics designed to keep robots from rebelling against their creators. The first law states that a robot “may not injure a human being or, through inaction, allow a human being to come to harm.” Easier said than done—no one’s figured out how to translate the nuances of such a law (or simply the word “harm”) into code. Thus, some roboticists suggest that machine learning might be a better route.

Compare an AI to a kid: both have inherent “programming” (nature), but both absorb information from their environment (nurture). Many believe our best shot at creating robots with values—namely, that human life is precious—is by teaching them. We know AI can learn skills and strategies, but it can also learn values by interacting with humans, observing our customs and actions, and reading our literature.

The problem with this approach? It relies on humans being worthy teachers and humanity being a worthy exemplar. While some people, such as Elon Musk, are investing millions to keep AI “friendly,” it’s unclear whether this outcome is something we can buy.


Asimov himself notes that “if we are to have power over intelligent robots, we must feel a corresponding responsibility for them.” He acknowledges the hypocrisy in expecting robots not to harm humans when humans often harm one another. If it’s our responsibility to steer robots away from hatred, then, as Asimov puts it, “human beings ought to behave in such a way as to make it easier for robots to obey [the three] laws.”

Perhaps Tay’s programming could have been better, but ultimately, we’re the ones who failed. Are humans really up to the task of teaching AI? Or will we only demonstrate why robots taking over is such a popular plotline?
