YouTube is launching a new feature to deter hostile commenters from posting offensive remarks: a prompt that appears before the comment is published, asking them to reconsider and edit it. The plan is outlined in a blog post published today.
In addition, YouTube is adding a filter for comments that have been automatically held for review. Creators will be able to hide, approve, or report those messages without being forced to read through the contents of a hateful or threatening comment.
"We are launching a new feature that will push commenters to reconsider hateful and offensive remarks before posting, as well as a filter that allows creators to review negative comments that had been automatically held for review." — Ryan Wyatt (@Fwiz), December 3, 2020
Users who are about to post offensive content will see a pop-up before the comment can be published, suggesting they stop and edit their post. The user can still hit "post anyway" to submit the comment, but the idea is to give them a moment to reconsider. This is in line with measures other social media platforms have recently taken, though it's unclear how effective such a mild deterrent is.
Android users browsing in English will see the new prompts starting today; it's unclear when, or if, the feature will roll out to other platforms and languages.
YouTube says it has ramped up internal efforts against hate speech, increasing the number of daily hate speech comment removals by 46x since early 2019. The company terminated 1.8 million channels in the last quarter, with over 54,000 of those terminations made for hate speech.
The company also intends to release a survey in 2021 asking creators to voluntarily share information about their race, ethnicity, sexual orientation, and gender, with the aim of using this data to examine how content from different groups is treated by its internal systems.