Happy Thursday: Here’s your weekly argument over toxicity, moderation, and AI, courtesy of Discord, which has acquired a company you’ve probably heard of once and then immediately forgotten: Sentropy. The security company launched publicly last year with funding from backers ranging from Twitch and Riot Games to Twitter and Reddit founders. Its pitch at the time was a product called Sentropy Detect, which promised data-driven moderation tools to fight userbase toxicity at scale.
While existing customers of Sentropy’s products are being given only a couple of months to migrate elsewhere, Discord users might welcome the move, as Sentropy’s founders say they’re “joining Discord to continue fighting against hate and abuse on the internet.”
“Together, we can create belonging safely. As we have gotten to know Discord, we have been impressed by their deep commitment to safety — a commitment that extends beyond their own communities to the broader internet. One of the hard-and-fast requirements Sentropy had as we discussed working together was that our future home allows us to keep our focus on improving the internet as a whole, not just improving the Trust and Safety (T&S) capabilities of the company. Our team’s focus will be on helping Discord expand and evolve its T&S capabilities; we are also inspired by Discord’s commitment to knowledge-sharing and capability-building in the content moderation space, as exemplified by their Moderator Academy. We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities.”
Discord’s toxicity problem is well-publicized at this point, largely thanks to Discord’s regular transparency reports. The most recent report noted that user reports skyrocketed through the pandemic: general harassment and cybercrime were the most-reported problems, but spammers were deleted in the greatest numbers, on the order of over 3 million spam accounts in half a year. The company deleted over 27,000 servers in the same period, most often for cybercrime and “exploitative content”; it reported almost 7,000 to the National Center for Missing and Exploited Children; and it removed over 1,500 servers for violent extremism and 334 for QAnon extremism. Needless to say, Discord has a problem, but whether even the combination of Discord’s safety team and Sentropy’s AI-driven detection can stem the tide is an open question.