The Daily Grind: Are blacklists, quizzes, and threats effective at reducing text toxicity in MMOs?

    

Toxicity in online gaming just keeps popping up – specifically as it pertains to chat and commenting.

MOP reader Tanek pointed us to a thread about Standing Stone Games, which is apparently blocking specific words in LOTRO’s chat, including supposedly “political” words, leading some players to demand the company publish the full list to prove to said players they’re not “biased” (not gonna happen).

Reader Stephen then linked us to the amusing story of a Norwegian site that’s developed a WordPress plugin that requires people to take a quiz on an article’s contents before being allowed to comment.

Finally, there’s Saga of Lucimia, which this week spent its Monday dev blog discussing the Fair Play Alliance and its own home-grown play-nice policy – and the fact that it will take a zero-tolerance, insta-ban approach to dealing with racism (we’ll assume other bigotry too).

All of these are approaches to handling specific community problems that MMO players deal with in text-based chat and forums (vs other online games that are more focused on toxic voice chat or grief play). Do you think they’re effective? Do text-based games have a bigger problem than voice-based games? Are chat blacklists, intelligence vetting, and dire threats enough to thwart text toxicity, or is there another way?

Every morning, the Massively Overpowered writers team up with mascot Mo to ask MMORPG players pointed questions about the massively multiplayer online roleplaying genre. Grab a mug of your preferred beverage and take a stab at answering the question posed in today’s Daily Grind!
Bryan Gregory

Ugh, seeing a lot of blanket statements in the comments.
Age has much less to do with how a person behaves than do their environment, role models, discipline, and education. There are plenty of mature young people and plenty of immature older people. I honestly believe that many adults find their bad behavior acceptable simply BECAUSE they are adults. And I’d say the same for minors even. I think everyone would be surprised to find the age of some of the people they interact with online. It’s not always what you think.

And despite the things I listed, it still comes down to a choice of whether a person wants to be toxic or not. Competition, for example, does not excuse toxicity or make it acceptable. There are rules against it, and companies and sports leagues are not afraid to ban or suspend people for it. If you can’t control your emotions during competition, you shouldn’t be a part of it, and everyone else shouldn’t have to put up with that kind of behavior. Hence the rules.

Anyway, I do actually believe that companies should take some responsibility for policing the social content of their games. As I said in one of the other articles on this topic, the reason there is as much toxicity as there is comes down to anonymity and a lack of consequences. And that’s why I believe we should start bringing in more real-world solutions. I’d be perfectly OK with fines and even jail time for the more serious offenses. Bring reality to the internet and you’ll see a huge drop in toxicity. Perhaps the Fair Play Alliance could share your credit card or payment information, so that when you get banned, that payment info gets banned across every enlisted company.
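A minimal sketch of how that kind of cross-company payment ban might work, assuming members share only a salted hash of the payment instrument rather than raw card numbers (entirely hypothetical; the Fair Play Alliance has announced nothing like this, and every name and value below is invented):

```python
import hashlib

SHARED_SALT = b"alliance-wide-secret"   # placeholder secret all members would share
ban_registry: set[str] = set()          # hashes contributed by every member company

def payment_fingerprint(card_number: str) -> str:
    """Derive a shareable, non-reversible identifier from payment info."""
    return hashlib.sha256(SHARED_SALT + card_number.encode()).hexdigest()

def ban(card_number: str) -> None:
    """Called by the banning company; only the hash enters the registry."""
    ban_registry.add(payment_fingerprint(card_number))

def is_banned(card_number: str) -> bool:
    """Checked by every member company at signup or payment time."""
    return payment_fingerprint(card_number) in ban_registry
```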

Also wanted to point out, someone mentioned communities like ESO and FFXIV having less toxicity than most games. Can’t speak for ESO, but FFXIV has an EXTREMELY unforgiving CS department – you pretty much only get one chance before you’re permanently banned on their forums. They take this stuff seriously. I honestly believe that in many games, reporting abuse and harassment and hate speech actually doesn’t do anything. They assure you that action will be taken if necessary with any report, but you best believe you’ll see the same person online the next day and every day after. I truly believe many companies would rather keep the abuser’s money than dispense any sort of justice. Ignoring/blacklisting is not a solution. Nor are temporary or even sometimes permanent bans. If you want to deal drastic damage to toxicity, you’re going to need drastic consequences. Because as of right now, there really are none.

On a related note, I wonder how many offenders are drunk or high while offending, and how long til you can get arrested for public intoxication on the internet. :D

Toy Clown

In today’s gaming, perpetrators have no fear of consequences for their actions. Combine that with mod teams’ reluctance to shut down toxic players, and I think that’s gone a long way toward making this cesspool larger than it ever should have been.

Players do it because they can get away with it. Threaten them with action or banning? They rear up and go full-blown vitriol, trying to make the ban as public as they can, stirring up support posts on the forums right under the mods’ eyes while thumbing their noses at them.

Another problem that adds to it is that gaming accounts are fairly cheap these days for many MMOs. If a player gets banned, they shell out $10 and there they go again.

Maybe the US should adopt the systems used in Asia that secure a gaming account with real identity information. Can you imagine getting a public record of bad acts, much like a credit report gives for not paying bills?

Bryan Gregory

YES please. This is a step in the right direction, IMO.

Briar Grey

Nothing will replace actual real people, with strong rules, moderating and monitoring and enforcing. Algorithms and such are great for helping to alert, but it’s better to use them that way (alert the GM only) and go from there.

Hamblepants

Well said, 100% agree

A Dad Supreme

As the Russian/Trump bot incidents have shown, there is no effective way to curb people’s behavior, and all it takes to spread toxicity is a few people. And even when you think you’ve gotten rid of those people, they come back like cockroaches in a project housing development.

In an age where every new game is F2P and dying for customers combined with VPNs that allow players to remain anonymous forever, toxicity has a straight mainline into gaming’s veins. You really can’t get rid of it.

“Ban me? No problem. Now I’ll be even more of a chatmonster while hiding my IP address. Ban me again? No worries. I’m a no-lifer. I can do this all day, every day. I’ll come on, spread rumors, take sides in order to create havoc, and hate in your game until it has a terrible reputation.”

Hamblepants

You can’t get rid of crime through police or courts.

You can’t get rid of health problems with health insurance or public systems.

You can’t get rid of enemies of the state with armies.

And yet, people have made pretty strong efforts to decrease or manage the first set with the second, and have been somewhat successful.

Again, when someone suggests the goal is complete elimination of any common unwanted human experience, I smell something fishy.

Schmidt.Capela

Again, when someone suggests the goal is complete elimination of any common unwanted human experience, I smell something fishy.

Reminds me of the movie “Thank You For Smoking”. One of the strategies the lobbyist uses for winning otherwise unwinnable debates is painting the opposing opinion as an absolute, goading the people defending it into accepting that absolute position, and then using the fact that absolute positions are always wrong to collapse the opposing argument – all while the distraction prevents others from attacking the vulnerable position he was defending.

A Dad Supreme

And yet, people have made pretty strong efforts to decrease or manage the first set with the second, and have been somewhat successful.

Let me know when you get rid of it and things are no longer “fishy”.

“Strong efforts” do not mean “strong results“.

I love reading the Pollyanna replies though. They are pretty entertaining… kind of like the “strong efforts” to get rid of gun violence… by arming teachers.

Hamblepants

Let you know when I get rid of what?

Edit: The fishiness is in you implying that “there’s no perfect solution, therefore don’t try.”

That’s not up to me to solve, that’s up to you.

Edit 2: In advance of any “that’s not what I meant” – the fact that we will never completely remove a common human behaviour is a point so obvious that I can’t believe it was the *only* point you were making.

Hamblepants

Also worth pointing out that the main tool humans have to tell us which experiences to avoid is emotional pain.

That means the worst of human experiences are defined by emotional pain.

So putting tools in place to lessen the likelihood of experiencing significant, real pain, especially power-based emotional pain, is a super good thing to do.

Hamblepants

Yes to automated features to filter some people acting awful.

Yes to moderation to filter out other people who get through the first filter.

Yes to ignore lists and social shaming to filter out people who get through the first two.

The number of times I’ve come across people significantly hurting other players on purpose or by accident (not oversensitive ones – genuine, legit hurt) is in the hundreds.

The number of times I’ve come across people over-policing toxicity is maybe 2-3.

There may be a line, on one side of which is over-zealous censoring, but we’re miles from it.

Bryan Correll

Blacklisting certain words never works; it just encourages people to invent new insults. It also tends to have bizarre/amusing side effects.

Case 1: SWTOR – Want to tell people you’re from the great state of Virginia? You’ll just confuse the people who have the filter on.

Case 2: Earth and Beyond – Wanted to ask in general chat, “Does anyone know of a good guide for mining?” Too bad, no one knew what you said. Since the letters f, a, and g appeared in order (even with spaces between them and the f and g as part of other words), this truly inoffensive question got filtered.
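A minimal sketch of the failure mode Bryan describes, assuming a naive filter that squashes out everything but letters before matching, so banned strings match even across word boundaries (the word list here is invented; no game publishes its real one):

```python
import re

# Hypothetical blacklist for illustration only.
BLACKLIST = ["virgin", "fag"]

def is_blocked(message: str) -> bool:
    """Return True if the naive filter would block this message."""
    # Strip everything but letters, so banned strings are caught even
    # when they span word boundaries -- which is exactly what produces
    # the false positives described above.
    squashed = re.sub(r"[^a-z]", "", message.lower())
    return any(word in squashed for word in BLACKLIST)

print(is_blocked("I'm from the great state of Virginia!"))         # True: contains "virgin"
print(is_blocked("Does anyone know of a good guide for mining?"))  # True: "oF A Good" -> f, a, g
```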

Hamblepants

That’s why you have a mod team that does something when someone is harming players using words that clearly mean something different than their on-paper definition.

I’m Jewish; if someone knows that and calls me a cike, or sike, it takes a pretty committed attempt to be unclear about what they’re doing.

None of the tools being presented work on their own.

But nobody who’s serious about fighting toxicity is suggesting using only one tool.

So any argument suggesting moderation won’t work because tool X doesn’t work on its own makes me suspicious.

Bryan Correll

if someone knows that and calls me a cike, or sike, it takes a pretty committed attempt to be unclear about what they’re doing.

That was sort of my point. Filter or not, this individual wouldn’t be impeded in their offensiveness without being reported.
PS – Since I didn’t mention it above, I’m fine with language filters as long as they can be turned off. If I get offended by someone in chat (it happens occasionally) enough to care, I’ll just /ignore them myself. But crude language often has its place.

Hamblepants

The discussion is not about crude language – the first kind of bad words (the f-word meaning sex, the s-word meaning poop, the a-word meaning bum, the d-word meaning send something to a very bad place, the h-word meaning a very bad place, etc.).

None of those first kind of words are as bad as the second kind: direct statements/assessments of someone’s value/worth as a human, deservingness-of-personal-power, or moral goodness.

As offended as someone might get by the first kind, the second kind are objectively worse, and anyone who disagrees is flat out wrong.

The kind of toxicity that is (obviously) being discussed is the second kind.

Bryan Correll

None of those first kind of words are as bad as the second kind:

But that first kind is effectively blocked by filters; the worse stuff, not so much.
And while actual living GMs monitoring chat might be effective against toxicity, I just can’t see many game companies being willing to part with enough cash to pay for that level of service.

Hamblepants

Right, so they have people who moderate based on the reports people file. And take reports seriously, and follow up, and enforce their ToS. All doable activities, if they care to do them.

Hamblepants

If the dev cares about this, then they’ll do the following (a rough sketch in code follows the list):

A) log all chat

B) create filters for direct use of bad words

C) create specialized systems for reporting the stuff that makes it through the filter

D) act vigorously on reports they get

E) give moderation and banning powers to players who regularly and ethically keep chat civil

And F) when possible, moderate chat directly via the devs.
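A rough, hypothetical sketch of how points A–F might hang together; every class name and threshold below is invented for illustration, not taken from any real game:

```python
from dataclasses import dataclass, field
from datetime import datetime
import re

BANNED = re.compile(r"\b(badword1|badword2)\b")  # (B) filter for direct use

@dataclass
class Report:  # (C) a specialized report for what slips past the filter
    reporter: str
    offender: str
    message: str
    filed_at: datetime = field(default_factory=datetime.now)

class ChatModeration:
    def __init__(self) -> None:
        self.chat_log: list[tuple[str, str]] = []  # (A) every line kept
        self.report_queue: list[Report] = []
        self.player_mods: set[str] = set()         # (E) trusted player moderators

    def post(self, player: str, message: str) -> bool:
        """(A) log everything, (B) suppress direct use of banned words."""
        self.chat_log.append((player, message))
        return not BANNED.search(message.lower())

    def file_report(self, reporter: str, offender: str, message: str) -> None:
        """(C) players report what the filter missed."""
        self.report_queue.append(Report(reporter, offender, message))

    def review_reports(self, reviewer: str) -> None:
        """(D)/(F) devs, or (E) empowered players, act on every report."""
        if reviewer != "dev" and reviewer not in self.player_mods:
            raise PermissionError(f"{reviewer} has no moderation powers")
        while self.report_queue:
            report = self.report_queue.pop(0)
            print(f"Reviewing {report.offender}: {report.message!r}")
```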

camren_rooke

Tera at launch had some weird setup where something like ‘too much’ triggered the filter.

No clue why. Of course, ‘t o o m u c h’ worked fine.

Hamblepants

Can’t understand why that would trip the filter. Weird.

Leiloni

Was some of that based on the fact that it came from the Korean client and maybe it hadn’t been fixed yet for English?

Hamblepants

Make better algorithms then? Was anything complicated and worth doing done perfectly the first time? No? No.

And yet they continued anyway, and good on them for doing so.

imayb1

One game tried to combat gold sellers, so you couldn’t use the word “gold” in chat … even though that was the standard currency and there were gold medals that were popularly traded. >_>

GW2’s forums have some interesting hiccups, too. They filter swearing (in English) into “kitten”. So if you type about an item called the “wizard’s hat”, you get “wizard’ kitten”, because “s hat” is read as swearing.

It’s always interesting to me to see how studios try to combat language and how people figure out ways to get around it.
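A guess at the mechanism behind that “kitten” mangling – GW2’s actual filter code isn’t public – assuming banned strings are matched even across spaces and apostrophes, with the whole matched span replaced:

```python
import re

BANNED = ["shat"]  # assumed to be on the list, per the example above

def kittenize(text: str) -> str:
    for word in BANNED:
        # Allow spaces/apostrophes between the banned word's letters,
        # so the match can straddle word boundaries.
        pattern = r"[\s']*".join(re.escape(ch) for ch in word)
        text = re.sub(pattern, "kitten", text, flags=re.IGNORECASE)
    return text

print(kittenize("Check out my wizard's hat"))  # -> "Check out my wizard'kitten"
```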

Roger Melly

I hope so, but time will tell. I have given up on several MMOs because their communities became too toxic for me to want to play the game anymore. It’s the reason I no longer play Elder Scrolls or Warcraft.

camren_rooke

:(

Don’t leave!

kjempff

I think you could limit the number of words or sentences a player can type in public chats without causing problems for normal players, or at least with only minor inconvenience.
Why? Have you noticed that those who, shall we say, get a kick out of expressing opinions, insults, and provocations often do it in a continuous stream, day after day? They are also often the instigators of a “discussion” chain that draws others in because those others feel provoked or want to trigger it further.
Stopping this before it escalates can help with some of it, which in turn helps build a culture of chatting.
How exactly to implement it so it strikes a reasonable balance is a matter of testing; you could limit players to five sentences within five minutes, or x words within five minutes, or use more complex systems, such as other players rating chat sentences for toxicity and thereby triggering temporary chat lockouts (in public chats).
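A minimal sketch of the throttle kjempff describes – at most a handful of messages per window in public channels – with placeholder numbers meant to be tuned in testing:

```python
import time
from collections import defaultdict, deque

MAX_MESSAGES = 5
WINDOW = 5 * 60  # seconds

history: dict[str, deque] = defaultdict(deque)  # per-player send timestamps

def may_post(player: str, now: float | None = None) -> bool:
    """Return True and record the message, or False if throttled."""
    now = time.time() if now is None else now
    sent = history[player]
    while sent and now - sent[0] > WINDOW:  # forget messages outside the window
        sent.popleft()
    if len(sent) >= MAX_MESSAGES:
        return False  # over the limit: public chat locked until the window clears
    sent.append(now)
    return True
```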

Leiloni

Systems that limit how often you can chat just end up being irritating for players legitimately trying to participate in friendly conversations in general chat channels. It might just require live moderation. Take one person off of live support chat or CS tickets and have them moderate a few chat channels. That would also lessen the number of tickets they receive about this and related incidents after the fact, since they could curb it at the time it’s occurring.

For something like that, I think temp (~10 min) silences would be a good start. It allows people to calm down and avoids escalating a conversation further, without permanent consequences. I would do that on both sides – both the instigator and those who responded too forcefully. Both sides in those situations are guilty of adding to the toxicity. This prevents the situation from getting worse while teaching people not to do it and how to calm down. If people become repeat offenders, you just increase the time they’re muted to the point where they’re temp banned from the game, and for serious offenses I suppose you’d eventually get a perma ban.
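A sketch of that escalation ladder; the durations are invented placeholders, not any real game’s policy:

```python
from datetime import timedelta

LADDER = [
    timedelta(minutes=10),  # first offense: short cool-down silence
    timedelta(hours=1),
    timedelta(days=1),
    timedelta(days=7),      # effectively a temp ban
]

def penalty(prior_offenses: int) -> timedelta | None:
    """Return the next mute duration, or None for a permanent ban."""
    if prior_offenses < len(LADDER):
        return LADDER[prior_offenses]
    return None  # off the ladder: perma ban, per the comment above
```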

But I would definitely suggest that they receive specific details as to why they were silenced, so people understand. Many players are not malicious and would do their best to avoid it in the future, while the handful of bad seeds would get appropriately caught in the system (and with live GM monitoring, those bad seeds would get “caught in the system” far faster than they do with standard CS tickets). Not explaining why people are silenced or banned only serves to anger them further if they don’t know they did something wrong.

It does require a change in how companies handle GMs, in-game moderation, and CS. It would require some companies to overhaul things a bit. But the upfront work will pay off in the end with fewer CS tickets when you can nip problems in the bud early. Ideally it would also create a better community.

Bottom line – If what these companies are doing now isn’t working, they need to change how they do things. Not add new features, but change the ones they already have.

kjempff

I get your point, but I am not sure those “friendly conversations” are so common or interesting that everyone should be an involuntary listener to them. Also, when someone has so much to say that they have to “spam” it, it is often either bordering on toxicity or “strong opinions” unlikely to spawn interesting replies.

As for GMs policing chat, it is rather expensive to have a GM staff monitoring several public chat channels on several servers, so I don’t find that to be a very realistic solution; the cost doesn’t seem worth it.

Which brings me back to the idea of players policing themselves. Who knows better what counts as toxicity than the players affected? If x number of players find someone’s comments to be spreading toxicity and mark them as such, and the system silences that player, I think that will solve this problem effectively and without much negative impact.

The system should be like a warning-point system where your points slowly disappear with time. Someone who just crossed the line or was involuntarily silenced will not suffer for long, because they generally behave, while repeat offenders will get longer and longer timeouts, eventually either learning to put a lid on it or being “perma” silenced – either way, problem solved. (A rough sketch of such a system follows this comment.)
On top of this, the “spam” detection idea could automatically apply warning points to overly loud-mouthed players – or leave that part out if it turns out you are right about it targeting the wrong players.
Of course there is a loophole for griefing if people gang up on someone and report them without reason, but that is what GMs/support are for.
And just to be clear, I am not talking about completely silencing any player; this is only for public chat channels.
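And a sketch of that decaying warning-point system, with every number a placeholder to be tuned:

```python
import time

DECAY_PER_DAY = 1.0        # points that evaporate per day of good behavior
FLAGS_NEEDED = 5           # x players must mark a comment as toxic
POINTS_PER_STRIKE = 3.0
SECONDS_PER_POINT = 600    # public-chat lockout grows with points held

class WarningRecord:
    def __init__(self) -> None:
        self.points = 0.0
        self.updated = time.time()

    def _decay(self) -> None:
        # Points slowly disappear with time, as described above.
        days = (time.time() - self.updated) / 86400
        self.points = max(0.0, self.points - days * DECAY_PER_DAY)
        self.updated = time.time()

    def flag(self, flag_count: int) -> float:
        """Apply a strike if enough players flagged; return lockout seconds."""
        self._decay()
        if flag_count >= FLAGS_NEEDED:
            self.points += POINTS_PER_STRIKE
        return self.points * SECONDS_PER_POINT
```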