As RPS reported this week, Valve has taken the relatively unusual step of making your Dota 2 and CSGO report cards semi-public – that is, players can see reports made against their accounts, and the rationales given, even if Valve took no action on them. The author was bemused to find that he’d been reported for “intentional feeding” when, in fact, he’d just sucked that match. Hey, it happens.
But I wonder whether the reports are useful to actual toxic players who’ve been actioned to teach them where they went wrong; it’s certainly an idea League of Legends clung to for years. MOP reader TomTurtle recently suggested something similar in terms of forum moderation too. “I’d like to see how viable it’d be to have moderators give an infractor a chance to edit their post to be constructive in an attempt to have them learn why their initial language was against the rules” in the service of “informing players why they were infracted in the first place,” he wrote to us.
Even if we agree that moderators’ and gamemasters’ jobs should include not just protecting the community from toxicity but actually attempting to – as Raph Koster puts it in his new book – “reform bad apples,” I wonder whether it’s even worth the trouble, never mind the expense. Does knowing what they did wrong actually help toxic players become less toxic? Or does it just cause them to double down to save face? Is the industry just wasting time and money trying to reform players who aren’t just poorly socialized or clueless but willfully destructive?
Toxic players, beware: Hi-Rez may not be talking to its Hand of the Gods players, but it’s cracking way down on SMITE miscreants. The studio apparently banned or suspended over 2,000 people last week “based on player reports and in-game behavior,” just a fraction of the number punished this season alone.
“Over 20,000 players have been suspended or banned in SMITE during Season 5 so far. However, this latest action today represents a ramp-up in our suspension activities, especially on Xbox and PS4, where our tools and processes have improved the most. One of our top priorities is making sure the player experience is positive and fun, and we’ve done major work recently to help us handle in game toxicity. We’ve been working hard to improve our machine learning tools to better identify players that have shown trends of negative behavior, as well as ramp up the efforts of our internal team at Hi-Rez that checks player reports and chat logs.”
The company stresses that this is all part of its “initiative to promote positive player behavior and handle negativity in game,” with more on the way. It also requests that players keep the reports coming.
Toxicity in online gaming just keeps popping up – specifically as it pertains to chat and commenting.
MOP reader Tanek pointed us to a thread about Standing Stone Games, which is apparently blocking specific words in LOTRO’s chat, including supposedly “political” words, leading some players to demand the company publish the full list to prove to said players they’re not “biased” (not gonna happen).
Reader Stephen then linked us to the amusing story of a Norwegian site that’s developed a WordPress plugin that requires people to take a quiz on an article’s contents before being allowed to comment.
Finally, there’s Saga of Lucimia, which this week spent its Monday dev blog discussing the Fair Play Alliance and its own home-grown play nice policy – and the fact that it will take a zero-tolerance, insta-ban approach to dealing with racism (we’ll assume other bigotry too).
All of these are approaches to handling specific community problems that MMO players deal with in text-based chat and forums (vs other online games that are more focused on toxic voice chat or grief play). Do you think they’re effective? Do text-based games have a bigger problem than voice-based games? Are chat blacklists, intelligence vetting, and dire threats enough to thwart text toxicity, or is there another way?
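Of those approaches, the chat blacklist is the easiest to sketch – and the easiest to see the limits of. Naive word matching misses leetspeak, accents, and spacing tricks, which is part of why players keep running into slurs “the filter doesn’t catch,” while aggressive matching trips false positives. A minimal sketch of a normalizing filter (the function names and the tiny word list here are hypothetical, purely for illustration):

```python
import re
import unicodedata

# Hypothetical blocklist; a real one would be far larger and multilingual.
BLOCKED_WORDS = {"jerk", "moron"}

def normalize(text: str) -> str:
    """Fold accents, lowercase, and undo common leetspeak substitutions."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    text = text.lower()
    leet = str.maketrans({"0": "o", "1": "i", "3": "e",
                          "4": "a", "5": "s", "@": "a", "$": "s"})
    return text.translate(leet)

def contains_blocked(message: str) -> bool:
    """True if any blocked word appears, even via spacing or symbol tricks."""
    norm = normalize(message)
    words = set(re.findall(r"[a-z]+", norm))
    # Squashing out non-letters catches "j e r k" and "j.e.r.k"...
    squashed = re.sub(r"[^a-z]", "", norm)
    # ...but substring matching on the squashed text is also where the
    # classic false-positive ("Scunthorpe") problem comes from.
    return bool(words & BLOCKED_WORDS) or any(w in squashed for w in BLOCKED_WORDS)
```

The trade-off is visible right in the last line: the strict word match misses evasion, the loose substring match flags innocent words that happen to contain a blocked string – which is roughly why studios keep layering report tools and human review on top of filters rather than trusting the blacklist alone.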
Forget group-kicks: If you’re a tool in Sea of Thieves, your own shipmates might just opt to stuff you in the brig – “a holding cell located on the bottom of the ship that disruptive players can be sent to after a democratic vote is held by their shipmates,” explains Polygon in a piece last week. The idea is to give toxic or obnoxious players a chance to apologize or shape up, even roleplay their way out of the situation they created.
This kind of penalty isn’t entirely new to MMOs, whether we’re talking jail in Ultima Online or Age of Wushu, but it’s certainly creative, right? At least as long as the majority of your ship isn’t toxic and you’re the one being shoved into a cell.
What’s the most creative in-game way you’ve seen an online game studio thwart toxicity?
This week, The Ancient Gaming Noob posted up an image of RIFT Prime, where Trion asks people to… play nice. “Just a neighborly reminder that 1-29 chat is for RIFT chat, ideally things relevant to level 1-29 gameplay,” the UI HUD reads. “Please be good to each other. We’ve muted some and shall mute again. Have a great evening!”
Meanwhile, over in Trion’s Trove, I’ve had to report-and-block dozens of fellow players just in the last few days for disgusting slurs in multiple languages, stuff the filter doesn’t catch. For a free-to-play game that’s also on console, yeah, I guess I expect no better from the playerbase. But but but RIFT Prime is subscription-based. Surely that means a strong community, where such polite warnings from developers aren’t necessary? Yeah, not so much, as anyone who played old-school MMORPGs can tell you. This is a problem even in games whose devs prioritize community and care a whole lot.
So this week, let’s talk about in-game chat. Do you use it? Do you watch it? Do you turn it off? Is it really terrible everywhere, or just in some games? Which one is the worst and the best, and what should developers do about chat specifically?
Last week, we covered an ESPN piece in which the author called out Blizzard for sitting on its hands after an Overwatch League player signed to the Dallas Fuel, Timo “Taimou” Kettunen, was caught openly using homophobic, racist, and ageist language toward other players, not the first time for the Fuel. It was just one more piece in a long series of incidents in Overwatch toxicity that’s now spilled over into the e-sports league itself.
Or is it? After initially reportedly dismissing the complaint back in January, Blizzard announced this weekend that it was fining Taimou $1000 for the slurs. It also fined an LA Valiant player $1000 for account sharing, issued a “formal warning” against a Houston Outlaws player who posted an offensive meme, and fined a fourth player, Félix “xQc” Lengyel from the Dallas Fuel, $4000 for having “repeatedly used an emote in a racially disparaging manner on the league’s stream and on social media, and used disparaging language against Overwatch League casters and fellow players on social media and on his personal stream.” In fact, we’ve covered Lengyel before when he was fined, suspended, and benched back in January for homophobic remarks to an openly gay fellow player.
What’s going on in the online video games business this week? Let’s dig in.
Steam, toxicity, and Kartridge
The Center for Investigative Reporting (via Motherboard) has a scathing piece out on Steam toxicity this week. Valve has traditionally maintained a hands-off approach with Steam groups, which means that the groups can easily become a toxic cesspit. The platform is accused of being loaded with hate groups, many of which support racist agendas or promote school shootings. Motherboard notes that Valve has refused to respond to questions on this topic since last October.
Meanwhile, Kongregate is launching Kartridge, a potential Steam competitor that says it will embrace indie “premium” titles and small-fry developers. “Our initial plan is that the first $10,000 in net revenue, one hundred percent will go to the developer,” Kongregate’s CEO says. “We’re not coming in just to build another store. No-one needs that. This is about building a platform that is focused on creating a very fair and supportive environment for indie developers” – as well as on social and community tools.
Ubisoft is sick of toxicity in its games, and to combat it, it’s whipping out the banhammer as a “first step” in getting the playerbase under control.
“Starting next week, we will be implementing an improvement on the system we have been using to ban players that use racial and homophobic slurs, or hate speech, in game,” the company told Rainbow Six Siege players on Reddit over the weekend. “The bans for this will fall within the following durations, depending on severity” – that’s everything from two days to a permanent ban. “Any language or content deemed illegal, dangerous, threatening, abusive, obscene, vulgar, defamatory, hateful, racist, sexist, ethically offensive or constituting harassment is forbidden.”
Moreover, toxicity-related bans will be broadcast via global message for all to see.
This week’s dev-written Saga of Lucimia blog asks everybody over the age of 35 to think back to bygone days “when reputation used to mean something” and miscreants were blacklisted by the community.
“For the most part, there is little cooperative spirit in most modern-day MMORPGs, even on the so-called PvE servers,” the indie sandbox’s creative director Tim “Renfail” Anderson asserts. “Instead, it’s a free-for-all storm of mayhem where play-nice-policies are no longer enforced, and player toxicity is allowed to run rampant in favor of generating the most amount of money possible to satisfy investor needs.”
“In a group-based game where you couldn’t really solo anything, reputation was the most important currency anyone had. If you did something bad enough to justify your name being posted in the forums, you very quickly found that no one would group with you. If no one would group with you, your forward momentum was halted; you couldn’t progress through the game. The bad apples of the community were quickly rooted out, and either rage quit, changed to a new character, or learned how to play nice with others.”
Back in 2013, when Linda “Brasse” Carlson still fronted SOE’s community branch, she made headlines for making SOE’s anti-toxicity policies very clear. “If we know who you are and you’re abusing somebody on Twitter, we will ban your game account and we will not accept you as a customer ever again,” she told trolls. “It’s not always possible to identify people [in that way], but we take that seriously.” At the time, MMORPG players were divided on whether that was an overall plus for online game communities or a creepy invasion of privacy.
But it’s 2018 now. Times and sentiments have changed, and Blizzard is trying a similar approach in Overwatch, where toxicity has taken root and blossomed in spite of Blizzard’s apparent efforts to prune it.
In Overwatch’s latest developer update, Jeff Kaplan says fighting toxicity is still a “major initiative” for the studio and that recent additions – like console reporting and suspension warnings – have cut chat toxicity by 17%. Another effective tactic? They’re watching toxic players on social media, particularly in video.
When is it appropriate to send verbal abuse to someone you don’t know personally? When is it appropriate to tell someone that you hope they lose their job or suffer significant personal injury? The obvious answer to both questions is “never,” and yet a new article by small indie developer Morgan Jaffit points out that in the game industry, dealing with vicious targeted abuse is part of the cost of doing business. Development across the board is dealing with people who feel that there is a point when all of this is appropriate, even if they differ on the circumstances.
Needless to say, this has a pretty huge impact on development, and it spills over to related fields. (Is it appropriate to say awful things to a community manager over a feature you don’t like when the community manager is not a developer and had nothing to do with it?) The article cites the omnipresence of social media and the popularity of personalities who “tell it like it is” (read: spew invective and curses at top volume), and it’s the sort of thing that everyone who cares about the future of games should read and consider.
Among last year’s toxicity-in-gaming stories was the one that taught the internet an important lesson: how to spell homunculus. No, that wasn’t it. It was “don’t be a game dev who insults and jokes about your toxic players’ deaths,” or at least, don’t get caught, because at the end of it all, the toxic players will still be playing and you’ll be out of a lucrative job.
We’re talking here, of course, about Tyler1, who was banned by Riot Games from League of Legends back in 2016 for toxicity – in his case, specifically verbal abuse, harassment, and outright cheating. Even though he kept streaming, you probably forgot all about him until October 2017, when Riot’s Lead Experience Designer apparently drank a little too much joy and then called him a “humunculus” in public, remarking that it’d be “gucci” if Tyler1 were to “die from a coke overdose or testicular cancer from all the steroids.” Though Tyler1 (wisely) stated he wasn’t upset and had no hard feelings over the insults, Riot still fired the employee.
And while the whole ordeal did cause a noticeable spike in Google searches for the word homunculus, which continues to amuse me, it may have also influenced Riot’s decision to unban him, news that he announced on his Twitter account yesterday and which appears to have been confirmed obliquely by Riot.
Toxicity in online gaming is easily one of the biggest stories of the year, particularly in Overwatch, where Blizzard has been focusing its anti-toxicity efforts with such persistence that it’s almost become silly. And yet here we are, with the problem unsolved and a whole lot of people sure it’s unsolvable or content to direct victims to just “ignore” it.
So how bad is it? Eurogamer collected clips of female gamers and streamers being harassed via voice chat in Overwatch and toted them to BlizzCon, showing them to attendees who agreed to be interviewed about their reactions and their own experiences. Forewarning if you’re going to watch the video below: The clips are awful and will make you angry once you realize they aren’t parody. The worst part? Most of the men and women Eurogamer interviewed have that same stony look on their faces that I currently have on mine, because it’s par for the course – and that’s just the misogyny brand of toxicity. The video doesn’t even touch on racism or homophobia.