Lawful Neutral: Managing toxicity in the MMO industry

    
Technopathological.

When I’m playing games, I make a lot of use of the reporting function. If I see a moonkin zerg farming a spot in World of Warcraft, I’ll stop and report as many as I can. If I see toxic chat in an MMO or things I think would be offensive to me or others, I’ll report it. Some days I worry I’ll end up on some list because I spend so much of my in-game time reporting toxic players. Earlier this month in New World, I found a gaggle of people AFK running into a wall to avoid timeouts, and I spent a solid 10 minutes reporting each and every one of them (it took forever because the UI to interact with a player is infuriating, but that’s a rant for another time).

But I never know what happens with my reports. I rarely get a note back saying whether any action was taken. And while I do my best to report within the rules, I wonder how often my interpretation actually aligns with the thinking of whoever gets my report. How do developers decide what’s unacceptable behavior and what’s not? And then how do they enforce it?

In the first part of this Lawful Neutral miniseries, Defining toxicity in the MMO industry, we established that our own definition of toxicity requires three things: a behavior, an outcome, and intent. A behavior like trash talking that doesn’t result in a negative outcome for those involved isn’t toxicity. A behavior like corpse camping that has a negative outcome and the intent of a player to make that a negative experience is toxicity. But the starting point for all of this is “a behavior.”

So what governs behavior? Well, it turns out a lot. Way more than I can go into in a column here (plus, no one wants to read my take on social contract theory anyway). Instead, we’ll scope down to just games and consider what influences our behavior in them: the player community and the game studio. In particular, we’ll look at what studios themselves do to manage toxicity inside their game communities.

I’ve written before about how player communities influence behavior: The Theory of Planned Behavior says that people are more likely to do something if the group (or community) is accepting of the behavior. It may be a little reductionist, but in short, toxic people make more toxic people, and nice people make more nice people. So the community and how it acts actually changes the behavior of the people in it.

There’s also mounting evidence that assholes are, in fact, just assholes all the time.

Let me put a slightly positive spin on this: One of the things that I always love about FFXIV dungeons is that despite the fact that everyone has done Sastasha at least 100 times before, people generally start the dungeon with a friendly, pleasant greeting. There are other games, like WoW, where this isn’t the case and most attempts at a pleasant greeting are ignored or met with a “STFU.” The difference is the community and the impact of the Theory of Planned Behavior: It’s “accepted” in some games to ignore pleasantries, and so more people ignore pleasantries.

But the more interesting aspect here is how game developers impact how communities act and behave. Game developers influence or direct player behavior through a series of different but interrelated mechanisms.

Paws for impact.

Terms of service, EULAs, and user agreements

Most games come with a Terms of Service (TOS), an End User License Agreement (EULA), or a User Agreement. Sometimes the content of one is part of the other, resulting in just a single document. Generally speaking, a TOS is used in cases where you have a service, like an MMO, while a EULA is more often used for games that you download and can play without an ongoing subscription. There are, of course, exceptions to this, but broadly speaking this is how to differentiate the two. MassivelyOP, for example, has a Terms of Service, while Blizzard has a EULA that governs all of its games, not just WoW.

The key with these two documents is that they represent a legally binding agreement between you, the user, and the company providing the license or the service, and they tell you what you can and (more importantly) what you can’t do. Mostly, these documents can be thought of as “cover your ass” documents; they generally outline only things that are explicitly illegal in order to protect the company. They aim to prevent the developer from being sued, protect its intellectual property, and make allowances for how the company can use your personal information.

These documents can also include sections on conduct, which outline unacceptable user conduct on the platform. These conduct sections are often small, maybe a sentence or two when included in the TOS, and usually cover only things that could get the game developer in hot legal water if it didn’t explicitly say they weren’t allowed on the platform. For example, consider the “Conduct” section of RUST’s Terms of Service:

ix. Conduct. do or say anything unlawful, racist, harassing, threatening, abusive, hateful, xenophobic, sexist, discriminatory, abusive, defamatory, obscene, invasive of the privacy of another person or otherwise offensive. This includes in any chat or other communications with users. Facepunch reserves the right to monitor the content of any of your messages and prevent your use of any such chat or other communication systems for any reason.

That’s the most that we get from Facepunch on acceptable player conduct. Now this doesn’t seem that bad, and I can imagine many readers nodding along as they read that quote. But it’s buried in the middle of a dense legal document. How many people are actually aware that conduct guidelines for RUST exist, let alone where to find those guidelines and what they say? Judging by the behavior of a notoriously toxic community, I’d have to conclude that most people aren’t. That leads us to the next area where game studios influence player behavior: a code of conduct.

Codes of conduct

A code of conduct is a separate piece of documentation that game developers sometimes use to govern how players can behave on the platform by describing what they cannot do in the game. The code of conduct could be buried in the TOS or exist as a standalone document. Generally speaking, when it’s called out as its own document, it tends to have more content and be more detailed.

An explicit code of conduct tends to go beyond “this is illegal” or “this protects the company” into things that might not be illegal or hurt the company but that the game developer still doesn’t want in its games. For example, spamming in chat channels isn’t illegal and doesn’t directly harm the company, but provisions against spamming appear in most companies’ codes of conduct.

That said, while a code of conduct is usually more detailed than a section in the TOS, the content doesn’t vary much from game developer to game developer. For example, Guild Wars 2 (and NCSoft), Square Enix, Elder Scrolls Online, and Riot Games all prohibit “offensive language,” cheating, and doxxing. Even Facepunch has a general provision against offensive language. Some, like Guild Wars 2, even tell you in the code of conduct what happens if you break it. The punishments aren’t really surprising: warnings, temporary bans, permanent bans, and other standard fare. Riot spends time outlining specific gameplay behavior, such as “intent to lose” being a violation of the code of conduct. Others, like Blizzard’s in-game code of conduct, are decidedly vague:

Hate speech and discriminatory language is inappropriate, as is any obscene or disruptive language. Threatening or harassing another player is always unacceptable, regardless of language used. Violating any of these expectations will result in account restrictions. More serious and repeated violations will result in greater restrictions.

The paragraph uses lots of the right words, but it’s not at all clear what counts as obscene or disruptive. There are no descriptions and nothing to qualify what exactly Blizzard means here. Additionally, whether something is considered “obscene” or “disruptive” can vary from person to person, making these terms a moving target. Coupled with no clear examples of punishments beyond “account restrictions,” Blizzard’s code of conduct doesn’t really give players clear guidelines to work within.

Enforcing the rules

Some game developers do write novels’ worth of documentation about unacceptable behavior in their games, but without a way to enforce those novels, it’s all pretty meaningless. In most cases, that enforcement happens through various “reporting” functions. Let’s look at two of them: player reporting and machine learning.

Player reporting

Every game that we’ve talked about has an option to report a player (though it’s worth noting that there doesn’t seem to be an in-game option to report players in RUST). There’s very little variation here in how we report a player: Usually, we click a name, select an option from a very limited drop-down of offenses, and maybe get a space to write something. That report goes into the developer’s ticketing system, which is a black box to players. The journey for the reporting player generally ends here; in most cases, you will never know whether your report was acted on or made any actual difference. You fire off the report into a black hole and never hear about it again.

Developers are pretty quiet about what happens from their end too; few clearly articulate the process when a player is reported. Reporting a player could simply flag an account for higher visibility in case more reports come in, it could send the report to a queue for a support rep to review individually, or in some cases, it could automatically apply punishments if a certain number of reports come in over a short amount of time.
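
That last pattern is worth making concrete. Here’s a minimal sketch in Python of what a “so many reports in so short a window” rule might look like. This is a hypothetical illustration, not any studio’s actual system: the threshold, the window, and the function names are all made up, and real pipelines also weigh things like reporter reputation, report category, and human review.

```python
# Hypothetical sketch of threshold-based auto-action on player reports.
# Nothing here reflects a specific studio's implementation.
from collections import defaultdict, deque
from datetime import datetime, timedelta

REPORT_THRESHOLD = 5            # reports needed before automatic action (made up)
WINDOW = timedelta(minutes=30)  # rolling window the reports must fall within (made up)

# account_id -> timestamps of recent reports against that account
recent_reports = defaultdict(deque)

def handle_report(account_id: str, now: datetime) -> str:
    """Record a player report and decide what happens next."""
    reports = recent_reports[account_id]
    reports.append(now)

    # Drop reports that have aged out of the rolling window.
    while reports and now - reports[0] > WINDOW:
        reports.popleft()

    if len(reports) >= REPORT_THRESHOLD:
        # Enough reports in a short span: apply an automatic restriction
        # (say, a chat mute) and flag the account for human review.
        return "auto_restrict_and_queue_for_review"

    # Otherwise the account is just flagged with higher visibility
    # in case more reports come in later.
    return "flag_for_visibility"

# Example: the fifth report inside the window triggers the automatic action.
start = datetime(2022, 5, 1, 12, 0)
for minute in range(5):
    outcome = handle_report("player#1234", start + timedelta(minutes=minute))
print(outcome)  # -> auto_restrict_and_queue_for_review
```

The appeal of a rule like this for a studio is that it’s cheap and instant; the obvious downside is that it can be gamed by coordinated false reports, which is one reason a human usually sits somewhere in the loop.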

In some cases, studios will have specially trained support reps – Trust and Safety Analysts – whose job is to review reports and divvy out punishments specifically for violations of the code of conduct or other policies. They are supposed to be experts on all the policies, behaviors, and punishments for violations. They also generally have input on updates and changes to policies based on their experiences with players and player reports. They are specialists in the support world, but they’re also more expensive than a generalist support rep, so they aren’t very common at game companies.

As a reported player, you’ll generally get an automated message either warning you about bad behavior or citing a violation of the code of conduct, with a link for you to review. It’s uncommon for reported players to be told the exact reason they were warned or punished. In most cases, they have to open a support ticket to get more information about why they were punished.

Artificial intelligence and machine learning

Some companies, such as Riot and more recently Blizzard, have started using artificial intelligence and machine learning to identify toxic behavior and automatically take action against perpetrators. Theoretically, this means that a developer can provide feedback on violations moments after they happen and without any human intervention.

However, it’s not always clear why an action or behavior was flagged as a violation. There’s a real challenge in understanding how the machine learning model decides that a specific behavior is a violation, and naturally there’s a high probability of false positives. Machine learning models still frequently struggle to understand how context impacts whether something is toxic or not, and not just in games; machine learning in all sorts of applications struggles with context. In other words, this is a cheaper solution for a company, but a far less reliable one.
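
To illustrate why context is such a sticking point, here’s a minimal, hypothetical sketch of a text-based toxicity classifier built with scikit-learn. No studio has published its production model, and real systems are trained on far larger datasets with far bigger models, but even a toy version shows the problem: a bag-of-words model scores the words themselves, so friendly trash talk between guildmates can land the same score as genuine harassment.

```python
# Toy toxicity classifier: TF-IDF features + logistic regression.
# Purely illustrative -- the training data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled chat lines: 1 = toxic, 0 = fine.
training_messages = [
    ("uninstall the game, you are worthless", 1),
    ("reported, hope you get banned, idiot", 1),
    ("go back to the tutorial, trash", 1),
    ("good run everyone, thanks for the heals", 0),
    ("gg, close match", 0),
    ("o/ hello friends", 0),
]
texts, labels = zip(*training_messages)

# TF-IDF turns each message into weighted word counts; logistic regression
# learns which words correlate with the "toxic" label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The model only sees the words. It has no idea whether "trash" here is
# banter inside a premade group or a stranger harassing a new player --
# the same text gets the same score regardless of context.
new_messages = ["you are trash, uninstall", "thanks for the carry, you all rock"]
toxic_probability = model.predict_proba(new_messages)[:, 1]
print(dict(zip(new_messages, toxic_probability.round(2))))
```

That context blindness is the false-positive problem in a nutshell, and it’s a big part of why this approach ends up cheaper but far less reliable.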

How do we do better?

These are just some of the ways that MMOs attempt to influence the behavior of the community and manage toxicity from the top down. Anecdotally, we know some developer-led efforts are more successful than others. For example, games like Elder Scrolls Online, Guild Wars 2, and Final Fantasy XIV are known for having relatively positive in-game communities, compared to games like World of Warcraft, Overwatch, and League of Legends, which have unfortunate reputations for being largely toxic: communities where people exhibit behaviors intended to negatively impact others.

Regardless of the success of one game or another, there’s much more we can do across the board to make our communities less toxic; we’ll address just that in the next piece of this miniseries.

Every other week, Andy McAdams braves the swarms of buzzwords and esoteric legalese of the genre to bring you Massively OP’s Lawful Neutral column, an in-depth analysis of the legal and business issues facing MMOs. Have a topic you want to see covered? Shoot him an email!