MMO Mechanics: The Fair Play Alliance and mechanising fairness

News of over 30 gaming companies taking a united stand against unfairness and toxicity in online game communities sprang out of GDC 2018 a few days ago, with some rather surprising company names making the list of those involved. The issue of toxic behaviour is a tough nut to crack, and these companies believe that the best way to tackle it is by pooling research and resources to share knowledge of what works and what doesn’t. It’s an interesting and complex idea that has got me thinking, so I just had to dedicate an issue of MMO Mechanics to discussing the potential implications for MMOs and near-MMOs.

In this edition of MMO Mechanics, I’ll look at the mission of the Fair Play Alliance, discuss the ground it has covered so far, explore a case study of how toxicity affects one involved MMO developer, and then give my thoughts on mechanical rollouts that could be employed to help smash toxicity.

All about the Fair Play Alliance

The Fair Play Alliance has a simple mission statement: It is a coalition of companies committed to developing quality games that are free of harassment, abuse, and discrimination, and to fostering healthy communities that encourage fair play and self-expression. At GDC, the first Fair Play Summit took place on Wednesday 21 March, and I was thoroughly impressed with the breadth of the topics covered in this first round of the alliance’s work. Dr Kimberly Voll of Riot Games led the keynote and also gave an interview to Kotaku to discuss the rationale behind the movement in more detail, which is a great read if you’re curious about the roots of combating toxicity in online games. Voll is both humbled by the failures Riot has faced when dealing with toxicity and optimistic about the changes a collaborative effort could bring to the industry.

When I initially saw Riot, CCP Games, Blizzard Entertainment, and Epic Games on a list of companies interested in encouraging toxicity-free gaming, I have to be entirely honest and admit that my eyebrows hit the roof: Some of these companies have produced games that are among the worst offenders when it comes to toxicity, and the methods they’ve employed to deal with it haven’t historically been particularly effective. After my initial reaction, however, I got to grips with the fact that these are the very companies with the potential to make the most difference: Each has endured a large amount of public scrutiny because of its products’ reputations as toxic games, and each has produced statistics showing the knock-on effect toxic exchanges have on its community.

One of the most impressive goals of the Fair Play Alliance is to create a consistently applied set of community management and anti-toxicity rules that can be rolled out to all involved companies, and this is where I have taken an MMO Mechanics interest in the alliance’s plans. Aside from the more broad-spectrum keynote I’ve already mentioned, one particular talk stood out to me: Speakers from Two Hat Security, Kabam, Blizzard, Supercell, and Epic Games came together to deliver a thought-provoking presentation on Player Behaviour by Game Design that focused on the mechanics that influence and structure player behaviour in-game.

A case study in toxicity: EVE Online

Brendan Drain will forever tell the tale of EVE‘s historic entrenchment in toxic behaviour better than I can, so I urge you to read his article on the matter, but I’ll take some time to strip the story down to its bare bones and highlight how mechanical intervention could help CCP tackle the issue. In broad strokes, CCP has drastically changed how it deals with toxicity in its community and is far more willing to intervene than it was at EVE‘s launch in 2003. As the internet has become less about anonymity and griefing mechanics have become outmoded, the old excuses that in-game exchanges aren’t real and that griefing is just another part of the gameplay no longer hold, and companies have had to adapt their policies to match how players now think about the matter.

Ganking, scamming, stealing, and killing were all commonplace in EVE, and trolling was seen as totally acceptable, but internet culture has changed dramatically and dangerously since then: Mental health has suffered, and access to people’s real-world lives, including their home addresses, has become so much easier, so threats and trolling now have a tangible effect that goes far beyond in-game shenanigans and ribbing. When you insult an avatar, blow up its ships, and tell the player to go die if they don’t like it, we now understand fairly universally that you’re having an effect on an actual person behind the screen who internalises those words and actions. As our in-game worlds get more realistic and our mechanics become so sophisticated that in-game societies with their own rules and expectations form, it stands to reason that how we play and how we expect to be treated will also change.

People are infinitely more traceable outside the game space now that linking game and social accounts is so prevalent: In-game wars are no longer limited to the boundaries of the game space but spill out onto social media platforms, so our stance on toxicity must become stronger. Brendan’s article lists case upon case of players being deeply affected by abuse that began in the game and was sustained on social media, so CCP’s stance on when to intervene has been forced to change from a hands-off, “only in-game abuse will result in punishment” mentality to something more robust. Becoming part of this alliance is a massive step forward for CCP and an admission of sorts that the company has both learned from its history and is willing to make further changes to make its community a safer place to be.

Can mechanical overhauls really help?

While I will always maintain that there is no one 100%-effective cure for assholitis and general toxicity, I do believe that mechanical intervention can save players from overexposure to the worst offenders. If I could wave a magic wand and overhaul every online game ever made while totally ignoring budget limitations, I’d tuck the block button behind a more heavily featured report and feedback system. The apparent problem with online communities, to my mind, is that one person’s banter is another person’s abusive slander: I see this in Guild Chat submissions regularly and in my own online experiences daily. Collecting data on what a certain player thumbs up and thumbs down can help match them with the people they’ll likely synergise best with.
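
To make that thumbs-up/thumbs-down idea a little more concrete, here’s a rough sketch (in Python, with names I’ve invented purely for illustration) of how post-match votes could be rolled up into a per-pairing affinity score. No studio has described anything like this pipeline; it’s just one plausible way the data might be stored.

```python
from collections import defaultdict


class FeedbackStore:
    """Hypothetical store for post-match thumbs-up/thumbs-down votes."""

    def __init__(self):
        # (rater_id, rated_id) -> running tally of votes from that rater about that player
        self._tallies = defaultdict(lambda: {"up": 0, "down": 0})

    def record_vote(self, rater_id: str, rated_id: str, thumbs_up: bool) -> None:
        """Record one post-match vote from rater_id about rated_id."""
        self._tallies[(rater_id, rated_id)]["up" if thumbs_up else "down"] += 1

    def affinity(self, rater_id: str, rated_id: str) -> float:
        """Return a score in [-1, 1]: +1 is all thumbs up, -1 all thumbs down, 0 unknown."""
        tally = self._tallies.get((rater_id, rated_id))
        if not tally:
            return 0.0
        total = tally["up"] + tally["down"]
        return (tally["up"] - tally["down"]) / total
```

A real pipeline would aggregate far more signals and decay stale votes, but even this toy version captures the core problem: compatibility is relational, so a single global “toxicity score” per player will never be enough.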

I feel that only examining poor behaviour and focusing on punitive measures makes toxic players feel victimised by the system and vindicated for displaying poor behaviour in the first place. If matchmaking used the data collected to pair like personalities together as well as those with similar skills and experience, perhaps we’d see a net reduction in toxicity because the matchmaking prevents initial clashes of personality and the resultant escalation into hate-filled verbal diarrhoea. Riot has experienced the adverse effects of hardline bans on the worst offenders: They seem to simply grow in popularity because of the “edgy” and scandalous behaviours that caused them to be banned in the first place, and this just sets up a positive feedback loop that is worrisome for game communities.
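
If a matchmaker did have that kind of affinity data, folding it in alongside skill could look something like the sketch below, which builds on the FeedbackStore from the previous snippet. The weights, the rating scale, and the function names are all my own assumptions rather than anything any studio has described.

```python
from dataclasses import dataclass


@dataclass
class Player:
    id: str
    rating: float  # assumed numeric skill rating, e.g. an Elo-style number


def match_score(player: Player, candidate: Player, feedback: "FeedbackStore",
                skill_weight: float = 0.7, affinity_weight: float = 0.3) -> float:
    """Score a potential teammate: closer skill and positive past feedback rank higher.

    `FeedbackStore` is the hypothetical affinity store sketched above.
    """
    # Skill closeness in (0, 1]: identical ratings score 1, far-apart ratings approach 0.
    skill_closeness = 1.0 / (1.0 + abs(player.rating - candidate.rating) / 400.0)
    # Mutual affinity in [-1, 1] from past thumbs votes (0 if the pair has never met).
    mutual_affinity = (feedback.affinity(player.id, candidate.id) +
                       feedback.affinity(candidate.id, player.id)) / 2.0
    return skill_weight * skill_closeness + affinity_weight * mutual_affinity


def pick_teammates(player: Player, pool: list, feedback: "FeedbackStore",
                   team_size: int = 4) -> list:
    """Greedily take the best-scoring candidates from the matchmaking pool."""
    ranked = sorted(pool, key=lambda c: match_score(player, c, feedback), reverse=True)
    return ranked[:team_size]
```

The interesting design tension is in the weighting: lean too hard on affinity and queues fragment into cosy cliques with long wait times, but lean too hard on skill and you’re back to pure matchmaking ratings with all the personality clashes that entails.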

I’d also like to see a more robust and supportive studio response to those who receive bans rather than the severe lack of dialogue most studios engage in with offenders (Blizzard springs to mind here). Without some kind of initial discourse, clear next steps to demonstrate change, and open communication channels, banned players’ anger levels, and thus their future toxicity levels, are not dealt with in the slightest. When the ban is up, you’re left with a player who feels vindicated in being nasty because the game’s community leaders issued a punishment, without any dialogue, for an action that the player in all likelihood didn’t view as toxic in the first place.

AI to the rescue?

I would love to see a new first step implemented in online gaming: If I had a large budget to burn and abundant research to support my assertion (cough cough, Riot!), I would employ a team of community managers who intervene with early offenders and work with individuals on their communication skills. Some fault absolutely falls on the development teams behind games for making cortisol-raising experiences that can trigger explosive responses in those who are overly shielded from the reality of another living person sitting on the other side of the screen, so I would love those specially trained community managers to live-read the chat logs of flagged individuals and intervene before things escalate, addressing questionable remarks privately with the individual concerned while also flagging the victimised party for a follow-up if the abusive language was sustained or severe.
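
Live-reading everything is impossible, of course, so a realistic compromise is a triage queue: only players who have already been flagged get their chat scored, and only lines above a severity threshold land in front of a community manager. The sketch below is purely illustrative, and the crude keyword “classifier” is a stand-in for the genuinely hard part of the problem.

```python
import queue
import time

REVIEW_THRESHOLD = 0.6  # assumed cut-off for routing a line to a human reviewer
ABUSIVE_TERMS = {"go die", "uninstall", "worthless"}  # toy word list, illustrative only

flagged_players: set = set()               # players with prior incidents on record
review_queue: queue.Queue = queue.Queue()  # lines awaiting a community manager


def severity(message: str) -> float:
    """Stand-in for a real toxicity classifier: crude keyword hit rate in [0, 1]."""
    text = message.lower()
    hits = sum(1 for term in ABUSIVE_TERMS if term in text)
    return min(1.0, hits / 2.0)


def on_chat_message(player_id: str, message: str) -> None:
    """Called for every chat line; only already-flagged players get triaged."""
    if player_id not in flagged_players:
        return
    score = severity(message)
    if score >= REVIEW_THRESHOLD:
        review_queue.put({
            "player_id": player_id,
            "message": message,
            "score": score,
            "timestamp": time.time(),
        })
```

The threshold keeps the human in the loop only where it matters, and anything below it can still be logged so that a later follow-up with the targeted player has full context.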

I really love how Overwatch edits questionable remarks and injects humour, so maybe my idea could be made more realistic by these companies pooling resources to make an impressive intervention AI that can be rolled out to support those who are facing bans. An automated progress plan can be issued to offenders, who then have a clear idea of why their behaviours triggered action and what they can do to rectify the situation and avoid a ban or further action in future. Games such as League of Legends go some way towards this by letting people know why they are being punished via report cards after automated flagging for toxicity: We know that most people who got a report card do not re-offend, so it works. Immediate intervention on the scale needed can only be handled by humans in my dreams, so an AI response that immediately deals with the problem and doesn’t rely on heavy punishment would likely be effective.
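
If I were mocking up that automated progress plan, I’d model it as an escalation ladder in which every rung explains what triggered it and what happens next, and, crucially, in which good behaviour steps the player back down. To be clear, this is a sketch of my own suggestion, not how League’s report cards actually work; every name in it is invented.

```python
from dataclasses import dataclass

# Hypothetical escalation ladder: each repeat offence moves the player one rung up.
LADDER = [
    ("reminder",    "Automated in-client reminder pointing at the flagged message."),
    ("report_card", "Report card summarising the behaviour and the rule it broke."),
    ("chat_limit",  "Temporary chat restriction plus a concrete checklist to clear it."),
    ("suspension",  "Short suspension with a follow-up review before reinstatement."),
]


@dataclass
class ProgressPlan:
    player_id: str
    rung: int = 0  # current position on the ladder

    def escalate(self, flagged_message: str) -> str:
        """Issue the current rung's response and move up the ladder for next time."""
        step, description = LADDER[min(self.rung, len(LADDER) - 1)]
        self.rung += 1
        next_step = LADDER[min(self.rung, len(LADDER) - 1)][0]
        return (f"[{step}] {description}\n"
                f"Flagged message: {flagged_message!r}\n"
                f"Next step if repeated: {next_step}")

    def deescalate(self) -> None:
        """Reward a clean streak by stepping back down so progress is visible."""
        self.rung = max(0, self.rung - 1)
```

The de-escalation step matters as much as the escalation one: if good behaviour never moves the needle back, the player has no reason to change course before the ban lands.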

Rapid response paired with resources that help dysregulated people relearn how to human effectively is my dream solution: Corporations smaller than some game communities have entire HR departments, for crying out loud, so it stands to reason that game communities need access to such resources too. Research into how shockingly common ACEs (adverse childhood experiences) and developmental trauma are, and how deeply they affect our development and future outcomes, is underway, so there is huge scope for professionals in these fields to examine the correlation between poor behaviour in online communities and occurrences that limit people’s capacity to regulate themselves. The rapid response I mention helps build those cause-and-effect synapses, and less focus on punitive measures helps remove the feelings of shame that raise cortisol and trigger further outbursts, so I’d love to test whether this response could be helpful long-term. Maybe, just maybe, game developers uniting under the alliance can facilitate this, and if the inclusion of talks on the causes of poor behaviour is anything to go by, the companies believe this too.

Over to you!

Toxicity is a massive issue that spirals far outside the MMO space and there will never be a quick and easy solution to the problem that works for all cases. I’d love to hear your ideas on how best to tackle toxicity and what you think the alliance can do to best help. I’ll certainly be keeping a close eye on the Fair Play Alliance website for more details on its work.

MMOs are composed of many moving parts, but Massively’s Tina Lauro is willing to risk industrial injury so that you can enjoy her mechanical musings. MMO Mechanics explores the various workings behind our beloved MMOs. If there’s a specific topic you’d like to see dissected, drop Tina a comment or send an email to tina@massivelyop.com.