Content Control: More and More Popular

December 7, 2021

A couple of recent articles emphasize that at least some effort is being made to control harmful content on social media platforms. Are these examples of responsible behavior or of censorship? We are not sure. First up, a resource content creators may wish to bookmark: "5 Banned Content Topics You Can't Talk About on YouTube" from MakeUseOf. Writer Joy Okumoko goes into detail on banned topics, from spam and deception to several types of sensitive or dangerous content. Check it out if you are curious about what will get a YouTube video taken down or an account suspended.

We also note an article at Engadget, "Personalized Warnings Could Reduce Hate Speech on Twitter, Researchers Say." Researchers at NYU's Center for Social Media and Politics set up Twitter accounts and used them to warn certain users that their language could get them banned. Just a friendly caution from a fellow user. Their results suggest such warnings can actually reduce hateful language on the platform. The more polite the warning, the more likely users were to clean up their acts. Imagine that: civility begets civility. Reporter K. Bell writes:

“They looked for people who had used at least one word contained in ‘hateful language dictionaries’ over the previous week, who also followed at least one account that had recently been suspended after using such language. From there, the researchers created test accounts with personas such as ‘hate speech warner,’ and used the accounts to tweet warnings at these individuals. They tested out several variations, but all had roughly the same message: that using hate speech put them at risk of being suspended, and that it had already happened to someone they follow. … The researchers found that the warnings were effective, at least in the short term. ‘Our results show that only one warning tweet sent by an account with no more than 100 followers can decrease the ratio of tweets with hateful language by up to 10%,’ the authors write. Interestingly, they found that messages that were ‘more politely phrased’ led to even greater declines, with a decrease of up to 20 percent.”

The research paper suggests such warnings might be even more effective coming from Twitter itself or from another organization, rather than from the researchers' small accounts of no more than 100 followers. Still, lead researcher Mustafa Mikdat Yildirim suspects:

“The fact that their use of hate speech is seen by someone else could be the most important factor that led these people to decrease their hate speech.”

Perhaps?

Cynthia Murrell, December 7, 2021
