We’ve been covering the problem of harassment and hate raiding of marginalized casters on Twitch for a while now, from the #TwitchDoBetter awareness campaign to our thoughts on the day-long boycott of the platform to Twitch’s lawsuit against two hate raid organizers. In each case, the hope has been that Twitch would offer fewer platitudes and more tools to combat targeted harassment, and it looks as if Twitch is actually attempting to do so with some new verification features and chat settings.
First off, viewers will now be able to verify their accounts with a cell phone number (not a VOIP or landline number) in addition to the existing email verification option. Up to five accounts can be linked to one phone number, and an account only needs to be verified once. With these features in place, if any account tied to a phone number is banned site-wide, every other account linked to that number is banned site-wide as well; likewise, if a channel bans a phone-verified or email-verified account, all other accounts tied to that same phone number or email are also banned from chatting in that channel.
This new phone verification feature works hand-in-hand with a new phone-verified chat option that streamers and moderators can enable, requiring viewers to tie their accounts to a cell phone number before they can participate in chat. Exemptions can be granted to mods, VIPs, or subscribers, and the restriction can be applied to all viewers, to new viewers only, or based on how long a viewer’s account has existed or how long the viewer has followed the channel.
According to the announcement, this feature took a long time to roll out because it needed months of testing to ensure it worked at a global level. Twitch further promises that it will continue working to combat harassment of streamers, including a channel-level ban evasion tool arriving in the coming months.
“Our work to make Twitch safer will never be over, just as there’ll never be a single fix for harassment and hate online. But as long as toxic behavior can find ways into our communities, we must – and will – keep working on ways to make it harder to do so. From technology and tooling to policy and education, we’re committed to finding more and better ways to decrease harm, empower Creators, and share vital information on how users can stay safe.”
The post’s FAQ further warns that no feature can be made 100% bot-proof, but these new options and others should at least slow bad actors down.