According to a new Blizzard interview on Kotaku this week, Overwatch toxicity has a new enemy: AI. Jeff Kaplan says that Blizzard has “been experimenting with machine learning” in order to teach its “games what toxic language is.”
As Kotaku points out, Blizz has already touted its successes in the war on toxicity, having claimed back in January that it had boosted reporting by a fifth and reduced chat abuse by almost that much. But its next move is to teach its AI to detect abuse before it’s ever reported. That’s the real trick, of course, since teaching AI context – the genuine and abusive versions of “GG,” for example – is much harder than it sounds.
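To see why context is the hard part, here's a toy sketch (purely illustrative, nothing to do with Blizzard's actual system) of the kind of naive keyword filter that ML is meant to replace. The blocklist and messages are hypothetical; the point is that the same "GG" token can be sportsmanship or a taunt, and a context-free filter can't tell the difference.

```python
# Toy illustration: why context makes "GG" hard for a naive filter.
# The blocklist below is hypothetical, not any real moderation list.
ABUSIVE_KEYWORDS = {"trash", "uninstall"}

def naive_flag(message: str) -> bool:
    """Flag a message only if it contains a blocklisted keyword."""
    return any(word in ABUSIVE_KEYWORDS for word in message.lower().split())

friendly = "gg wp everyone"    # genuine "good game, well played"
taunting = "gg ez uninstall"   # same "gg", abusive intent

print(naive_flag(friendly))    # False: correctly passes
print(naive_flag(taunting))    # True: caught, but only via "uninstall"
print(naive_flag("gg ez"))     # False: the sarcastic taunt slips through
```

The sarcastic "gg ez" evades the filter entirely because the words themselves are innocuous; only surrounding context (match outcome, tone, history) reveals the intent, which is exactly what Blizzard is trying to get machine learning to pick up on.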
Furthermore, Kaplan tells the publication, Blizzard is eyeing a future in which it focuses as much on the "positive version of reporting" as on the negative.
We’re not quite to Minority Report, but it’s getting closer. As Massively OP’s Eliot put it in work chat this morning: Leave it to video game companies to treat human beings as an engineering problem.