Twitch has updated its Spam, Scams and Malicious Conduct Policy to prohibit what the company is calling “harmful misinformation actors.” As defined by the company, these are “misinformation superspreaders” who meet a very specific set of criteria; the policy targets false and disproven statements about COVID-19, election fraud, and misinformation that promotes violence, among other things.
Channels that get the ax must exhibit all three characteristics, which Twitch describes below:
Together, we’ve identified three characteristics that all of these actors share: their online presence – whether on or off Twitch – is dedicated to (1) persistently sharing (2) widely disproven and broadly shared (3) harmful misinformation topics, such as conspiracies that promote violence. We’ve selected these criteria because taken together they create the highest risk of harm including inciting real world harm. We will only enforce against actors who meet all three of these criteria, and our Off-Service investigations team will be conducting thorough reviews into each case.
Given this language, it’s questionable how effective the policy will be at removing anything other than explicitly labeled propaganda outlets, such as channels controlled by Russian state media. (Twitch frames the change as preemptive, and its blog post announcing the policy doesn’t mention any shuttered channels, instead assuring readers that “this update will likely not impact you or the streamers you love on Twitch.”)
Unfortunately, the real problem with disinformation is that it rarely comes from sources expressly titled Propaganda News Now; it comes from influencers with wide fanbases who are ostensibly talking about other things — a class of Twitch users explicitly protected by the policy’s statement that “it will not be applied to users based upon individual statements or discussions that occur on the channel.”