Even if you can overlook the expense, the current lack of games, the potential for nausea, and the annoyance of wearing a clamshell on your sweaty face, virtual reality has a looming problem: trolls.
It turns out that the same internet jerks who ruin online spaces and games via text and avatar show up to do the same in virtual reality, too.
As MIT Technology Review wrote yesterday, part of the point of socializing in virtual worlds is to feel the “presence” of other people — but the very benefit that makes “virtual reality so compelling also makes awkward or hostile interactions with other people much more jarring,” such as when people invade your private space or try to touch your avatar without permission.
The publication highlights AltspaceVR, a startup building tools to help people deal with trolls. The company has some of the basics already — like a block feature that makes obnoxious people invisible — but it’s also working on a “personal space bubble” to stop people from groping your virtual self without permission, which they would otherwise do because people are gross and have no shame.
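The mechanics of a feature like that are straightforward to picture. Here’s a toy sketch — every name and number is invented for illustration, not taken from AltspaceVR — that hides any avatar entering within a fixed radius of the local player:

```python
import math

# Hypothetical default radius for the bubble, in metres.
BUBBLE_RADIUS = 1.2

def distance(a, b):
    """Euclidean distance between two 3-D positions (x, y, z tuples)."""
    return math.sqrt(sum((ax - bx) ** 2 for ax, bx in zip(a, b)))

def visible_avatars(me, others, radius=BUBBLE_RADIUS):
    """Return only the avatars outside the personal-space bubble."""
    return [o for o in others if distance(me, o["pos"]) > radius]

others = [
    {"name": "friend", "pos": (3.0, 0.0, 0.0)},
    {"name": "space_invader", "pos": (0.5, 0.0, 0.0)},
]
print([o["name"] for o in visible_avatars((0.0, 0.0, 0.0), others)])
# → ['friend']
```

A real implementation would run this check every frame inside the rendering engine, but the core idea — a distance test that culls anyone inside the bubble — is this simple.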
Tonight’s Massively Overthinking aims to address a core problem facing the whole internet, not just games: antisocial behavior. Our question comes from Kickstarter donor Katie MacAlister, who wonders,
“What can be done to combat the ‘anonymity on the Internet breeds douchecravats’ mentality that pervades MMOs? Barrens chat, trade chat…for every ‘good’ soul, there’s a handful of twits. What can the MMO world do to fight this?”
I asked our writers about the best ways players and studios can overcome this ever-present problem.
We’ve all known trolls who love to dwell in dank forum posts and shadowy comment sections, ready to spread ill will and spark flame wars just to see the world burn. But what if a tool could be devised to identify such miscreants before they could do much damage?
That is one possibility that has arisen from a new Cornell University project in which researchers studied online communities (including IGN) and created an algorithm that can predict which posters are most likely to be banned in the future. The algorithm isn’t perfect (it misclassifies one out of five users), but the team claims that it can spot a troll in as few as 10 posts.
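To make the idea concrete, here’s a deliberately simplified illustration — not the Cornell team’s actual model — of what “spotting a troll in 10 posts” could look like: score each of a user’s early posts for hostility (the scores and threshold below are invented), then flag the user if the average crosses a line.

```python
def likely_troll(post_scores, window=10, threshold=0.6):
    """Flag a user as a likely future ban based on the average hostility
    score of their first `window` posts. Scores run from 0 (benign) to 1.
    All numbers here are hypothetical, chosen only for the sketch."""
    early = post_scores[:window]
    return sum(early) / len(early) > threshold

print(likely_troll([0.9, 0.8, 0.7, 0.9, 0.8]))  # consistently hostile → True
print(likely_troll([0.1, 0.2, 0.0, 0.1]))       # mostly benign → False
```

The real research used richer features (post quality, community reaction, moderator deletions) and a trained classifier rather than a fixed threshold, which is also why it still mislabels roughly one user in five.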