Lessons Learned in How Not to Build an AI

Microsoft recently built a proto-AI “teen girl” bot, TayandYou, who, as you may know, became a horrific racist monster within hours. The astounding thing is that this multi-zillion-dollar company let her loose into the wild without knowing the first thing about what they were doing.

Is this how the AI Apocalypse starts? That would be the Hell Scenario everyone’s braced for. But amazingly, the story has a hopeful Prevailish ending.

It turns out – who knew? – there is a whole “botmaking community.” These unheralded folk are actually building proto-AIs and rapidly and collaboratively learning how to do it ethically. They actually know a thing or three through hands-on experience with the critters. See Motherboard’s report at http://motherboard.vice.com/read/how-to-make-a-not-racist-bot. A lightly edited highlight reel to tease you into reading the whole piece:

“A lot of people in the botmaking community were perturbed to see someone coming in out of nowhere and assuming they knew better—without doing a little bit of research into prior art,” said Rob Dubbin, a long-time botmaker.

So what does it mean to build bots ethically?

The basic takeaway is that botmakers should be thinking through the full range of possible outputs, and all the ways others can misuse their creations.

“You really have to sit down and think through the consequences,” said Darius Kazemi, worker-owner at Feel Train, who has been making bots on Twitter since 2012. “It should go to the core of your design.”

For something like TayandYou, said Kazemi, the creators should have “just run a million iterations of it one day and read as many of them as you can. Just skim and find the stuff that you don’t like and go back and try and design it out of it.”
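Kazemi’s “run a million iterations and read as many as you can” advice can be sketched in code. Everything below – the templates, topics, and blocklist – is a hypothetical toy bot, not anyone’s actual system; the point is the audit loop, not the bot:

```python
# A minimal sketch of Kazemi's advice: generate a huge batch of the bot's
# possible outputs, flag the bad ones, and go back and design them out.
# The bot grammar and blocklist here are purely illustrative.
import random

TEMPLATES = ["I think {} is great!", "Tell me more about {}."]
TOPICS = ["music", "cats", "politics"]   # a real bot would have thousands

BLOCKLIST = {"politics"}  # terms you've decided the bot should never touch

def generate():
    """One random output from the toy bot's template grammar."""
    return random.choice(TEMPLATES).format(random.choice(TOPICS))

def audit(n):
    """Generate n outputs and collect the distinct ones that trip the blocklist."""
    flagged = set()
    for _ in range(n):
        out = generate()
        if any(term in out.lower() for term in BLOCKLIST):
            flagged.add(out)
    return flagged
```

You would then skim the flagged set (`for line in sorted(audit(1_000_000)): print(line)`) and revise the templates or topic list until the bad combinations can no longer occur – the “design it out” step.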

“It boils down to respecting that you’re in a social space, that you’re in a commons,” said Dubbin. “People talk and relate to each other and are humans to each other on Twitter so it’s worth respecting that space and not trampling all over it to spray your art on people.”

For thricedotted, a veteran Twitter botmaker and natural language processing researcher, “It seems like the makers of @TayandYou attempted to account for a few specific mishaps, but sorely underestimated the vast potential for people to be assholes on the internet.”

The Prevailish aspect of this, to me, is that it is an existence proof of rapid, bottom-up human response to unprecedented technological change – well in advance of disaster. Now all we have to do is listen to them.

Go humans!

(Tip o’ the hat to Evangeline Garreau, the computer coder with an English degree from Smith, for thinking deeply about this sort of thing.)

1 Comment

  1. http://reason.com/blog/2016/03/24/the-internet-turn-a-chatbot

    “Part of the problem was a poorly conceived piece of programming: If you told Tay “repeat after me,” she would spit back any batch of words you gave her. Once the Internet figured this out, it wasn’t long before the channer types started encouraging Tay to say the most offensive things they could think of.

    When Microsoft took the bot offline and deleted the offending tweets, it blamed its troubles on a “coordinated effort” to make Tay “respond in inappropriate ways.” I suppose it’s possible that some of the shitposters were working together, but c’mon. As someone called @GodDamnRoads pointed out today on Twitter, “it doesn’t take coordination for people to post lulzy things at a chat bot.”

    Microsoft’s accusation doesn’t surprise me. Outsiders are constantly mistaking spontaneous subcultural activities for organized conspiracies. But it’s interesting that even the people who program an artificial intelligence—people whose very job rests on the idea of organically emerging behavior—would leap to blame their bot’s fascist turn on a centralized plot.”

    To me, that part at the end (emphasis mine) is what is interesting about this story. Like you said, people will choose to interact with technology in ways unanticipated by the designers. That aside, the history of chatbots shows that people will troll them for the lulz – how do you miss that when considering your design requirements? Who knows, maybe it’s just working as intended: “Racist Robot Runs Rampant, Ruins Twitter!” is a great headline.
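The “repeat after me” flaw the commenter quotes is easy to reconstruct in miniature. This is a hypothetical sketch – the function names and the keyword guard are illustrative, not Microsoft’s actual code – but it shows why a verbatim echo command is an open invitation, and what even a crude guard looks like:

```python
# Hypothetical reconstruction of the "repeat after me" flaw and one
# possible guard. The blocklist terms are stand-ins for a real list.
PREFIX = "repeat after me "
BLOCKLIST = {"slur1", "slur2"}  # illustrative placeholders

def naive_reply(message):
    """The flaw: echo arbitrary user text back verbatim."""
    if message.lower().startswith(PREFIX):
        return message[len(PREFIX):]
    return "ok"

def guarded_reply(message):
    """Same feature, but refuse to parrot blocklisted terms."""
    if message.lower().startswith(PREFIX):
        payload = message[len(PREFIX):]
        if any(term in payload.lower() for term in BLOCKLIST):
            return "I'd rather not say that."
        return payload
    return "ok"
```

A keyword blocklist is the weakest possible guard – it is trivially evaded with misspellings – but the contrast makes the point: the naive version hands your account’s voice to whoever is typing.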
