You’d be forgiven for believing that reporting someone’s behavior through in-game tools simply sends that report into an unseen void that devs never look at; plenty of studios have tried to combat toxic behavior in-game, but nobody has landed on a suitable solution. Enter Dennis “Thresh” Fong, a longtime esports pro from the Doom and Quake days, who is building an AI program to try to do something about it.
The program is called GGWP, and Fong developed it alongside Crunchyroll founder Kun Gao and data and AI expert Dr. George Ng. GGWP uses a fully customizable API that aggregates player data to generate a community health score and break down the types of toxicity common to a game; it even assigns reputation scores to individual players, based on an AI-led analysis of reported matches and an understanding of the game’s player culture. These tools let devs address reports with a mix of automated and human responses.
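To make that aggregation concrete, here is a minimal sketch of the kind of pipeline the description implies: classify each report by toxicity type, then roll the results up into a single health score. Every name, field, and formula below is an assumption for illustration; GGWP has not published its actual API here.

```python
# Hypothetical aggregation sketch; none of these names are GGWP's real API.
from collections import Counter

def community_health(reports: list[dict], total_matches: int) -> dict:
    """Each report is assumed to carry a 'category' label from an
    upstream AI classifier, e.g. 'hate_speech', 'griefing', 'spam'."""
    breakdown = Counter(r["category"] for r in reports)
    # Assumed scoring: fewer toxic reports per match pushes the score toward 100.
    score = 100.0 * (1.0 - min(1.0, len(reports) / max(1, total_matches)))
    return {"health_score": round(score, 1), "breakdown": dict(breakdown)}
```

A real system would presumably weight categories differently and track trends over time; this only shows the shape of data a dev-facing dashboard might consume.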
According to Fong, installing GGWP’s API is “literally a line of code,” and he further reasons that because so many reports look alike, “the vast majority of this stuff is actually almost perfectly primed for AI to go tackle this problem.”
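One plausible reading of that one-line claim, sketched below: the game server forwards every in-game report to a hosted endpoint with a single call, and the heavy analysis happens on the service side. The URL, payload shape, and function name are invented for illustration and are not GGWP’s real interface.

```python
import requests

def on_player_report(report: dict) -> None:
    # The single integration line: hand the raw report to the moderation
    # service and let it do the classification. The endpoint is hypothetical.
    requests.post("https://moderation.example.com/v1/reports", json=report, timeout=5)
```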
GGWP isn’t only about straight punishment: it can reportedly let devs issue real-time warnings when a player does or says something unacceptable in chat, and it can also recognize helpful behavior, like sharing weapons or reviving teammates under adverse conditions, applying bonuses to that player’s reputation score as a result. Fong admits that deterrents of this type would take a little extra moderation work to implement, but GGWP could give MMO and multiplayer devs the means to do more, more easily. Assuming they want to in the first place.
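Those two response paths, real-time warnings and reputation bonuses, might look something like this inside a game’s event handlers. The thresholds, field names, and Player type are assumptions for the sketch, not GGWP specifics.

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    reputation: float = 100.0  # assumed neutral starting score

    def send_system_message(self, text: str) -> None:
        print(f"[to {self.name}] {text}")

def handle_chat_message(player: Player, toxicity: float) -> None:
    # 'toxicity' is assumed to come from an upstream AI classifier.
    if toxicity > 0.8:  # assumed warning threshold
        player.send_system_message("Heads up: that message breaks chat rules.")
        player.reputation -= 1.0

def handle_positive_event(player: Player, event: str) -> None:
    # e.g. event in {"revive_under_fire", "weapon_share"}
    player.reputation += 0.5
```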