A small, seemingly insignificant addition to Battle for Azeroth might have a positive effect on World of Warcraft’s modding community. Blizzard is adding a function that will allow reports on offensive behavior to be sent from within addons themselves, finally giving the modding community a way to police its previously lawless empire.
The “SendAddonMessageLogged” function won’t automatically be built into every addon; mod creators have to enable and integrate it themselves. Once it is functional, however, the tool lets players report toxic behavior taking place inside addons directly to Blizzard’s customer service department.
In other news, with artifact weapons heading out the door with the upcoming expansion, the question of the hour is how Blizzard will handle the removal of these legendary items. Players on the public test realm got a look at the artifact retirement questline that will come with next month’s Patch 8.0, and if you’re totally fine being spoiled, you can peek at what it will entail over at Icy Veins.
As RPS reported this week, Valve has taken the relatively unusual step of making your Dota 2 and CSGO report cards semi-public – that is, players can see reports made against their accounts, and the rationales given, even if Valve took no action on them. The author was bemused to find that he’d been reported for “intentional feeding” when in fact, he just sucked that match. Hey, it happens.
But I wonder whether reports like these actually help actioned toxic players learn where they went wrong; it’s certainly an idea League of Legends clung to for years. MOP reader TomTurtle recently suggested something similar in terms of forum moderation too. “I’d like to see how viable it’d be to have moderators give an infractor a chance to edit their post to be constructive in an attempt to have them learn why their initial language was against the rules” in the service of “informing players why they were infracted in the first place,” he wrote to us.
Even if we agree that moderators’ and gamemasters’ jobs should include not just protecting the community from toxicity but actually attempting to – as Raph Koster puts it in his new book – “reform bad apples,” I wonder whether it’s even worth the trouble, never mind the expense. Does knowing what they did wrong actually help toxic players become less toxic? Or does it just cause them to double down to save face? Is the industry just wasting time and money trying to reform players who aren’t just poorly socialized or clueless but willfully destructive?
MOP reader BulletTeeth pointed us to a piece on The Verge this week about an incident in online shooter Battalion 1944. A highly placed e-sports team member, SUSPC7, apparently went off on Discord about the studio’s slow rollout of skins meant as prizes, trollishly threatening to shoot up the studio. It got back to the devs, who decided to “teach [him] a lesson about comedy” by proposing to reskin his weapon, not with his earned prize but with a hand-drawn penis icon. Yeah, they pranked him.
“I thought you were kind of being a dick,” the studio rep tweeted, going on to tell the player he wanted him to become an “ambassador” for the game.
As The Verge writes, it’s an unusual tactic for a game studio to take against a toxic player in this day and age. While it might be nice to think that studios have the time, money, and resources to hand-hold every lost boy and talk him down to being an ally, it’s not particularly realistic, and it creates a perverse incentive system whereby toxic players mop up studio attention that ought to go to non-toxic players.
I thought it would be interesting to reflect on what we think studios ought to do when disciplining players. Does this sort of reverse-prank actually work, or would it be better for companies to just boot the problem children and move on?
Toxic players, beware: Hi-Rez may not be talking to its Hand of the Gods players, but it’s cracking way down on SMITE miscreants. The studio apparently banned or suspended over 2,000 people last week “based on player reports and in-game behavior,” just a fraction of the number punished this season alone.
“Over 20,000 players have been suspended or banned in SMITE during Season 5 so far. However, this latest action today represents a ramp-up in our suspension activities, especially on Xbox and PS4, where our tools and processes have improved the most. One of our top priorities is making sure the player experience is positive and fun, and we’ve done major work recently to help us handle in game toxicity. We’ve been working hard to improve our machine learning tools to better identify players that have shown trends of negative behavior, as well as ramp up the efforts of our internal team at Hi-Rez that checks player reports and chat logs.”
The company stresses that this is all part of its “initiative to promote positive player behavior and handle negativity in game,” with more on the way. It also requests that players keep the reports coming.
Are studios starting to wake up and take action against particularly odious instances of gaming toxicity in their products? Blizzard, at least, is working to police its precious Overwatch League, which certainly does not need more controversy or bad publicity in its first season.
The studio levied a three-game suspension, a $2,000 fine, and revoked the streaming privileges of Philadelphia Fusion’s Josh “Eqo” Corona after Corona made a racist face on one of his streams. Blizzard is reported to have tight control over the League’s players with its code of conduct, in which it wrote that no player or team could bring the League or studio into “disrepute” with their actions. (This is not the first fine the League has issued.)
Speaking of disrepute, the League’s Boston Uprising went ahead and suspended Jonathan “DreamKazper” Sanchez due to allegations that he, an adult, was pursuing a sexual relationship with a minor.
According to a new Blizzard interview on Kotaku this week, Overwatch toxicity has a new enemy: AI. Jeff Kaplan says that Blizzard has “been experimenting with machine learning” in order to teach its “games what toxic language is.”
As Kotaku points out, Blizz has already touted its successes in the war on toxicity, having claimed back in January that it had boosted reporting by a fifth and reduced chat abuse by almost that much. But its next move is to teach its AI to detect abuse before it’s ever reported. That’s the real trick, of course, since teaching AI context – the genuine and abusive versions of “GG,” for example – is much harder than it sounds.
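To see why context is the hard part, consider a toy bag-of-words classifier. This is purely an illustrative sketch, not Blizzard’s actual system, and the training phrases are invented examples: a word-frequency model can learn that “gg” alone is ambiguous and that the surrounding words (“ez noob” vs. “well played”) are what tip the balance.

```python
from collections import Counter
import math

# Hypothetical training phrases -- invented for illustration, not real chat logs.
ABUSIVE = ["gg ez noob", "uninstall you noob", "ez ez ez trash team"]
FRIENDLY = ["gg well played", "well played everyone", "good game thanks team"]

def train(docs):
    """Count word occurrences across a list of example messages."""
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts, sum(counts.values())

abuse_counts, abuse_total = train(ABUSIVE)
friendly_counts, friendly_total = train(FRIENDLY)
vocab = set(abuse_counts) | set(friendly_counts)

def score(text, counts, total):
    # Sum of log-probabilities with add-one (Laplace) smoothing,
    # so unseen words don't zero out the whole message.
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in text.split()
    )

def classify(text):
    a = score(text, abuse_counts, abuse_total)
    f = score(text, friendly_counts, friendly_total)
    return "abusive" if a > f else "friendly"
```

With this tiny model, `classify("gg ez")` leans abusive while `classify("gg well played")` leans friendly, even though both open with the same “gg” token. Real systems need vastly more data, plus handling for misspellings, leetspeak, and sarcasm, which is exactly why Kaplan calls context the hard problem.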
Furthermore, Kaplan tells the publication, Blizzard is eyeing a horizon where it’s focusing as much on the “positive version of reporting” as the negative.
Toxicity in online gaming just keeps popping up – specifically as it pertains to chat and commenting.
MOP reader Tanek pointed us to a thread about Standing Stone Games, which is apparently blocking specific words in LOTRO’s chat, including supposedly “political” words, leading some players to demand the company publish the full list to prove to said players they’re not “biased” (not gonna happen).
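For the curious, here’s a minimal sketch of why word blacklists like this are a blunt instrument. The blocklist below is entirely hypothetical (SSG hasn’t published LOTRO’s actual list): substring matching flags innocent words (the classic “Scunthorpe problem”), while whole-token matching is trivially evaded with spacing or character substitutions.

```python
import re

# Hypothetical blocklist for illustration only.
BLOCKED = {"slur", "badword"}

def naive_filter(message):
    # Substring matching: catches evasion-by-embedding, but also
    # flags innocent words that merely contain a blocked string.
    return any(b in message.lower() for b in BLOCKED)

def token_filter(message):
    # Whole-token matching: no false positives on innocent words,
    # but easily dodged by "s l u r" or "s1ur" style spellings.
    tokens = re.findall(r"[a-z]+", message.lower())
    return any(t in BLOCKED for t in tokens)
```

Here `naive_filter("slurp time")` fires on an innocent word, while `token_filter("s l u r")` misses an obvious evasion. Neither approach alone solves the problem, which helps explain both why players see “political” false positives and why filters miss the multilingual slurs mentioned below.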
Reader Stephen then linked us to the amusing story of a Norwegian site that’s developed a WordPress plugin that requires people to take a quiz on an article’s contents before being allowed to comment.
Finally, there’s Saga of Lucimia, which this week spent its Monday dev blog discussing the Fair Play Alliance and its own home-grown play nice policy – and the fact that it will take a zero-tolerance, insta-ban approach to dealing with racism (we’ll assume other bigotry too).
All of these are approaches to handling specific community problems that MMO players deal with in text-based chat and forums (vs other online games that are more focused on toxic voice chat or grief play). Do you think they’re effective? Do text-based games have a bigger problem than voice-based games? Are chat blacklists, intelligence vetting, and dire threats enough to thwart text toxicity, or is there another way?
Forget group-kicks: If you’re a tool in Sea of Thieves, your own shipmates might just opt to stuff you in the brig – “a holding cell located on the bottom of the ship that disruptive players can be sent to after a democratic vote is held by their shipmates,” explains Polygon in a piece last week. The idea is to give toxic or obnoxious players a chance to apologize or shape up, even roleplay their way out of the situation they created.
This kind of penalty isn’t entirely new to MMOs, whether we’re talking jail in Ultima Online or Age of Wushu, but it’s certainly creative, right? At least as long as the majority of your ship isn’t toxic and you’re the one being shoved into a cell.
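The core mechanic is simple enough to sketch. The function below is a hypothetical illustration of a majority-vote brig system in the spirit of Polygon’s description; the names and the majority threshold are assumptions, not Rare’s actual implementation.

```python
def brig_vote(crew, target, votes_for):
    """Return True if a majority of the target's shipmates vote to brig them.

    crew: all player names on the ship; target: the accused player;
    votes_for: names of players who voted yes. The target's own vote
    is ignored, so they can't skew their own trial.
    """
    eligible = [p for p in crew if p != target]
    needed = len(eligible) // 2 + 1  # strict majority of shipmates
    return len(set(votes_for) & set(eligible)) >= needed
```

On a four-person crew, two of the three other shipmates are enough to send someone below deck; notice that the last line also highlights the failure mode named above, since a toxic majority can just as easily brig an innocent player.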
What’s the most creative in-game way you’ve seen an online game studio thwart toxicity?
After years of trying to crack the serious issue of negative behavior and toxicity among their individual communities, 30 game studios and industry leaders are teaming up to see if their combined strength can win the day.
League of Legends’ Riot Games, World of Warcraft’s Blizzard Entertainment, EVE Online’s CCP, Fortnite’s Epic Games, and streaming platform Twitch are among the companies that have formed a “Fair Play Alliance” in an effort to combat bad player behavior. The coalition’s goal is to create a set of behavioral standards that will be shared across the whole industry and help up-and-coming developers as they try to break into the e-sports markets.
“As an industry and as a society online, we’re trying to find our way. Having to be a company that steps out and says ‘We’re gonna be the ones to do this’ is kinda scary. This is an opportunity for all of us to say ‘What if we walked together as an industry?’” said Riot Senior Technical Designer Kimberly Voll.
This week, The Ancient Gaming Noob posted up an image of RIFT Prime, where Trion asks people to… play nice. “Just a neighborly reminder that 1-29 chat is for RIFT chat, ideally things relevant to level 1-29 gameplay,” the in-game message reads. “Please be good to each other. We’ve muted some and shall mute again. Have a great evening!”
Meanwhile, over in Trion’s Trove, I’ve had to report-and-block dozens of fellow players just in the last few days for disgusting slurs in multiple languages, stuff the filter doesn’t catch. For a free-to-play game that’s also on console, yeah, I guess I expect no better from the playerbase. But but but RIFT Prime is subscription-based. Surely that means a strong community, where such polite warnings from developers aren’t necessary? Yeah, not so much, as anyone who played old-school MMORPGs can tell you. This is a problem even in games whose devs prioritize community and care a whole lot.
So this week, let’s talk about in-game chat. Do you use it? Do you watch it? Do you turn it off? Is it really terrible everywhere, or just in some games? Which one is the worst and the best, and what should developers do about chat specifically?
Last week, we covered an ESPN piece in which the author called out Blizzard for sitting on its hands after an Overwatch League player signed to the Dallas Fuel, Timo “Taimou” Kettunen, was caught openly using homophobic, racist, and ageist language toward other players, not the first time for the Fuel. It was just one more piece in a long series of incidents in Overwatch toxicity that’s now spilled over into the e-sports league itself.
Or is it? After initially reportedly dismissing the complaint back in January, Blizzard announced this weekend that it was fining Taimou $1000 for the slurs. It also fined an LA Valiant player $1000 for account sharing, issued a “formal warning” against a Houston Outlaws player who posted an offensive meme, and fined a fourth player, Félix “xQc” Lengyel from the Dallas Fuel, $4000 for having “repeatedly used an emote in a racially disparaging manner on the league’s stream and on social media, and used disparaging language against Overwatch League casters and fellow players on social media and on his personal stream.” In fact, we’ve covered Lengyel before when he was fined, suspended, and benched back in January for homophobic remarks to an openly gay fellow player.
What’s going on in the online video games business this week? Let’s dig in.
Steam, toxicity, and Kartridge
The Center for Investigative Reporting (via Motherboard) has a scathing piece out on Steam toxicity this week. Valve has traditionally maintained a hands-off approach with Steam groups, which means that the groups can easily become a toxic cesspit. The platform is accused of being loaded with hate groups, many of which support racist agendas or promote school shootings. Motherboard notes that Valve has refused to respond to questions on this topic since last October.
Meanwhile, Kongregate is launching Kartridge, a potential Steam competitor that says it will embrace indie “premium” titles and small-fry developers. “Our initial plan is that the first $10,000 in net revenue, one hundred percent will go to the developer,” Kongregate’s CEO says. “We’re not coming in just to build another store. No-one needs that. This is about building a platform that is focused on creating a very fair and supportive environment for indie developers” – as well as on social and community tools.
Ubisoft is sick of toxicity in its games, and to combat it, it’s whipping out the banhammer as a “first step” in getting the playerbase under control.
“Starting next week, we will be implementing an improvement on the system we have been using to ban players that use racial and homophobic slurs, or hate speech, in game,” the company told Rainbow Six Siege players on Reddit over the weekend. “The bans for this will fall within the following durations, depending on severity” – that’s everything from two days to a permanent ban. “Any language or content deemed illegal, dangerous, threatening, abusive, obscene, vulgar, defamatory, hateful, racist, sexist, ethically offensive or constituting harassment is forbidden.”
Moreover, toxicity-related bans will be broadcast via global message for all to see.