Facebook Defines Excellence: Also Participated?

April 5, 2022

Slick AI and content moderation functions are not all they are cracked up to be, sometimes with devastating results. SFGate provides one distressing example in, “‘Kill More’: Facebook Fails to Detect Hate Against Rohingya.” Rights group Global Witness recently put Facebook’s hate speech algorithms to the test, and the AI failed spectacularly. The group never actually published its hate-filled test ads, of course, but all eight received Facebook’s seal of approval. Ads with similar language targeting Myanmar’s Rohingya Muslim minority have made it onto the platform in the past, and those posts were found to have contributed to a vicious campaign of genocide against the group. Associated Press reporters Victoria Milko and Barbara Ortutay write:

“The army conducted what it called a clearance campaign in western Myanmar’s Rakhine state in 2017 after an attack by a Rohingya insurgent group. More than 700,000 Rohingya fled into neighboring Bangladesh and security forces were accused of mass rapes, killings and torching thousands of homes. … On Feb. 1 of last year, Myanmar’s military forcibly took control of the country, jailing democratically elected government officials. Rohingya refugees have condemned the military takeover and said it makes them more afraid to return to Myanmar. Experts say such ads have continued to appear and that despite its promises to do better and assurances that it has taken its role in the genocide seriously, Facebook still fails even the simplest of tests — ensuring that paid ads that run on its site do not contain hate speech calling for the killing of Rohingya Muslims.”

The language in these ads is not subtle—any hate-detection algorithm that understands Burmese should have flagged it. Yet Meta (now Facebook’s “parent” company) swears it is doing its best to contain the problem. In a recent statement sent to the AP, a company rep claimed:

“We’ve built a dedicated team of Burmese speakers, banned the Tatmadaw, disrupted networks manipulating public debate and taken action on harmful misinformation to help keep people safe. We’ve also invested in Burmese-language technology to reduce the prevalence of violating content.”

Despite such assurances, Facebook has a history of failing to allocate enough resources to block propaganda, with disastrous consequences for foreign populations. Perhaps taking more responsibility for their product’s impact in the world is too dull a topic for Zuck and company. They would much prefer to focus on the Metaverse, their latest shiny object, though that path is also fraught with collateral damage. Is Meta too big for anyone to hold it accountable?

Cynthia Murrell, April 5, 2022

