Twitch is changing its hateful conduct and harassment policy once again, this time allowing the company to act on problematic activity that occurs beyond the platform’s hallowed halls.
“We will now enforce against serious offenses that pose a substantial safety risk to the Twitch community, even if these actions occur entirely off Twitch,” the company announced in a blog post and email yesterday. Those offenses generally include terrorism, violent extremism, threats of mass violence or threats toward staff, hate group membership, rape, and child exploitation. Harassment is also mentioned in the FAQ. The company will be bringing in an unnamed “third party investigative partner” to help investigations scale.
“Our law enforcement response team is trained to substantiate any evidence related to user reports and will only take action in cases where we or our third-party law firm partners have been able to find a preponderance of evidence. We will also suspend accounts for submitting large amounts of frivolous reports, or encouraging others to submit fake reports.”
Twitch says it believes this is a “novel approach for both Twitch and the industry at large,” but of course MMO players know it is not: Off-platform misbehavior is a very old problem in MMO communities and games, and several MMO studios have issued similar policies in the past, including SOE before it became Daybreak (2013) and Blizzard (2018). Blizzard, for example, said it would be “proactively” looking for toxicity on YouTube and other social media to track down objectionable accounts even before they’d been reported, while five years before that, SOE declared that it would permanently ban the game accounts of players caught abusing other players on social media.
In fact, when CCP Games said it was uncomfortable with such a policy a few years ago, our staff and readers weighed in with a lengthy discussion, so we probably don’t need to have it again, but let’s call it novel and do it anyway. Everything old is new again on the internet if we just say it is.