The EVE Online community is aflame this week after alliance leader gigX was permanently banned for making threats of real-life violence against another player following possibly the biggest betrayal in EVE history. Some players don’t want to accept that gigX crossed a serious line and deserves his ban, and others have been asking why The Mittani’s similar actions in 2012 resulted in only a temporary ban. CCP’s official stance is that its policies have become stricter since 2012, but it’s still not entirely clear exactly where the line is drawn.
Another side to the debate is that the internet itself has evolved over EVE’s 14-year lifespan, and a lot of toxic behaviour that was accepted or commonly overlooked on the early internet is now considered totally unacceptable. Many of us have grown from a bunch of anonymous actors playing roles in fantasy game worlds to real people sharing our lives and an online hobby with each other, and antisocial behaviour is an issue that all online games now need to take seriously. The lawless wild west of EVE’s early years is gone, and I don’t think it’s ever coming back.
So what’s the deal? Does EVE Online tolerate less toxic behaviour today, has the internet started to outgrow its lawless roots, and what does it mean for the future of sandboxes?
Even if you can overlook the expense, the current lack of games, the potential for nausea, and the annoyance of wearing a clamshell on your sweaty face, virtual reality has a looming problem: trolls.
Turns out that the same internet jerks who ruin online spaces and games via text and avatar show up to do the same in virtual reality too.
As MIT Technology Review wrote yesterday, part of the point of socializing in virtual worlds is to feel the “presence” of other people — but the very benefit that makes “virtual reality so compelling also makes awkward or hostile interactions with other people much more jarring,” such as when people invade your private space or try to touch your avatar without permission.
The publication highlights AltSpaceVR, a startup building tools to help people deal with trolls. The company has some of the basics already — like a way to make obnoxious people invisible with a block — but it’s also working on a “personal space bubble” to stop people from groping your virtual self without permission, which they would otherwise do because people are gross and have no shame.
Tonight’s Massively Overthinking aims to address a core problem facing the whole internet, not just games: antisocial behavior. Our question comes from Kickstarter donor Katie MacAlister, who wonders,
“What can be done to combat the ‘anonymity on the Internet breeds douchecravats’ mentality that pervades MMOs? Barrens chat, trade chat…for every ‘good’ soul, there’s a handful of twits. What can the MMO world do to fight this?”
I asked our writers about the best ways players and studios can overcome this ever-present problem.
We’ve all known trolls who love to dwell in dank forum posts and shadowy comment sections, ready to spread ill will and spark flame wars just to see the world burn. But what if a tool could be devised to identify such miscreants before they could do much damage?
That is one possibility that has arisen from a new Cornell University project in which researchers studied online communities (including IGN) and created an algorithm that can predict which posters had the highest likelihood of being banned in the future. The algorithm isn’t perfect (it misclassifies one out of five users), but the team claims that it is able to spot a troll in as few as 10 posts.
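At its core, what the Cornell team built is a text classifier trained on users’ early posts. As a loose illustration only (not the researchers’ actual model, which draws on much richer features and real moderation data), here is a minimal naive Bayes sketch in pure Python that flags a hypothetical user as ban-prone from a handful of posts; the training examples and function names are entirely invented:

```python
import math
from collections import Counter

def train(labeled_posts):
    """Count words per class ('banned' vs 'ok') from (text, label) pairs."""
    words = {"banned": Counter(), "ok": Counter()}
    classes = Counter()
    for text, label in labeled_posts:
        classes[label] += 1
        words[label].update(text.lower().split())
    return words, classes

def log_score(model, posts, label):
    """Log-probability of a user's posts under one class, Laplace-smoothed."""
    words, classes = model
    vocab = len(set(words["banned"]) | set(words["ok"]))
    total = sum(words[label].values())
    score = math.log(classes[label] / sum(classes.values()))
    for text in posts:
        for w in text.lower().split():
            score += math.log((words[label][w] + 1) / (total + vocab))
    return score

def likely_banned(model, first_posts):
    """Classify a new user from their earliest posts (the study used ~10)."""
    return log_score(model, first_posts, "banned") > log_score(model, first_posts, "ok")

# Toy training set -- entirely made up for illustration.
model = train([
    ("you are trash and your guild is trash", "banned"),
    ("worst players ever total garbage",      "banned"),
    ("great raid tonight thanks everyone",    "ok"),
    ("anyone selling ore in jita",            "ok"),
])
```

Like the real algorithm, a toy model of this kind will misclassify some users — which is exactly why the researchers frame their tool as an early-warning aid for human moderators rather than an automatic ban hammer.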