Research Suggests Better Way to Foil Hate Groups

September 9, 2019

It is no secret that internet search and social media companies have a tough time containing the spread of hate groups across their platforms. Now a study from George Washington University and the University of Miami posits why. Inverse reports, “‘Global Hate Highways’ Reveal How Online Hate Clusters Multiply and Thrive.” This is my favorite quote from the article—“In it, [researchers] observe that hate spreads online like a diseased flea, jumping from one body to the next.”

The study tracked certain hate “clusters” across international borders and through different languages as they hopped from one platform to another. Current strategies for limiting the spread of such groups include the “microscopic approach” of banning individual users and the “macroscopic approach” of banning whole ideologies. Not only does the latter approach often run afoul of free speech protections, as the article points out, it is also error-prone; algorithms have trouble distinguishing conversations about hate speech from those that are hate speech (especially where parody is involved). Besides, neither of these approaches has proven very effective. The study suggests another way; reporter Sarah Sloat writes:

“The mathematical mapping model used here showed that both these policing techniques can actually make matters worse. That’s because hate clusters thrive globally not on a micro or macro scale but in meso scale — this means clusters interconnect to form networks across platforms, countries, and languages and are quickly able to regroup or reshape after a single user is banned or after a group is banned from a single platform. They self-organize around a common interest and come together to remove trolls, bots, and adverse opinions. …

“A better way to curb the spread of hate, the researchers posit, would involve randomly banning a small fraction of individuals across platforms, which is more likely to cause global clusters to disconnect. They also advise platforms to send in groups of anti-hate advocates to bombard hate-filled spaces together with individual users to influence others to question their stance.

“The goal is to prevent hate-filled online pits that radicalize individuals like the Christchurch shooter, an Australian who attacked in New Zealand, covered his guns with the names of other violent white supremacists and citations of ancient European victories, and posted a 74-page racist manifesto on the website 8chan.”
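The meso-scale idea in the passage above can be sketched as a toy graph experiment. To be clear, this is an illustrative assumption of mine, not the researchers’ actual model: users are nodes, cross-platform links are edges, and banning a small random fraction of users can sever the bridges that hold separate clusters together.

```python
import random

def components(nodes, edges):
    """Count connected components via breadth-first search over an adjacency list."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, count = set(), 0
    for n in nodes:
        if n in seen:
            continue
        count += 1
        stack = [n]
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(adj[cur] - seen)
    return count

def ban_random_fraction(nodes, fraction, rng):
    """Remove a random fraction of users, mimicking scattered cross-platform bans."""
    banned = set(rng.sample(sorted(nodes), int(len(nodes) * fraction)))
    return nodes - banned

# Hypothetical toy network: two tight clusters bridged by a few "connector" users.
cluster_a = {f"a{i}" for i in range(10)}
cluster_b = {f"b{i}" for i in range(10)}
nodes = cluster_a | cluster_b
edges = [(f"a{i}", f"a{j}") for i in range(10) for j in range(i + 1, 10)]
edges += [(f"b{i}", f"b{j}") for i in range(10) for j in range(i + 1, 10)]
edges += [("a0", "b0"), ("a1", "b1")]  # the bridges between clusters

rng = random.Random(0)
print(components(nodes, edges))  # 1: the two clusters form one connected network

# Ban 20% of users at random; if the bans happen to hit the connector
# accounts, the network splits into disconnected pieces.
survivors = ban_random_fraction(nodes, 0.2, rng)
print(components(survivors, [e for e in edges
                             if e[0] in survivors and e[1] in survivors]))
```

The sketch shows why random scattered bans can outperform targeting one user or one platform: the clusters here survive any single deletion, but losing the few bridge nodes disconnects them entirely.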

The researchers’ approach does not require any data on individuals, nor does it rely on banning ideas wholesale. Instead, it is all about weakening the connections that keep online hate groups going. Can their concept help society dissipate hate?

Cynthia Murrell, September 9, 2019
