Facebook: The Fallacy of Rules in an Open-Ended Datasphere

December 29, 2018

I read “Inside Facebook’s Secret Rulebook for Global Political Speech.” Yogi Berra time: “It’s déjà vu all over again.”

Some history, gentle reader.

Years ago I met with a text analytics company. The purpose of the meeting was to discuss how to identify problematic content; for example, falsified reports related to a warfighting location.

I listened as the 20-somethings and a couple of MBA types bandied about ideas for creating a set of rules that would identify the ways in which information is falsified. There was the inevitable knowledge base, a taxonomy of terms and jargon, and rules. “If-then” stuff.

The big idea was to filter the content with a front end of old-school lookups and then feed the outputs into the company’s smart system. I listened and suggested that fiddling with rules would consume the available manpower, time, and money.
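To make the objection concrete, here is a minimal sketch of the lookup-plus-rules front end the whiz kids were describing. The taxonomy entries and if-then rules below are my hypothetical illustrations, not the vendor’s actual knowledge base; the point is that every term and every rule is something a human must write and keep current.

```python
# A minimal sketch of a taxonomy-plus-rules front-end filter.
# All taxonomy terms and rules here are hypothetical illustrations.

import re

# Hand-maintained taxonomy: every term, synonym, and bit of jargon
# has to be entered and updated by a human.
TAXONOMY = {
    "conflict_zone": ["theater", "area of operations", "AO"],
    "casualty_report": ["KIA", "WIA", "casualties"],
}

# "If-then" rules: each one is a pattern plus a verdict. Every new
# evasion or phrasing requires another rule, forever.
RULES = [
    (re.compile(r"\bunconfirmed\b.*\bcasualties\b", re.I),
     "flag: possible falsified report"),
    (re.compile(r"\bofficial sources\b.*\bdeny\b", re.I),
     "flag: conflicting account"),
]

def front_end_filter(text: str) -> list[str]:
    """Old-school lookup pass: tag taxonomy hits and fire if-then rules.

    Anything flagged here would be fed downstream to the "smart" system.
    """
    findings = []
    lowered = text.lower()
    for category, terms in TAXONOMY.items():
        if any(term.lower() in lowered for term in terms):
            findings.append(f"taxonomy hit: {category}")
    for pattern, verdict in RULES:
        if pattern.search(text):
            findings.append(verdict)
    return findings

if __name__ == "__main__":
    sample = "Official sources deny unconfirmed reports of casualties in the AO."
    print(front_end_filter(sample))
```

Two rules and a two-category taxonomy are trivial; the maintenance bill arrives when the list runs to thousands of rules that interact, contradict, and go stale the moment the language shifts.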

Ho, ho, ho was the response. Listen to the old goose from rural Kentucky.

Yeah, that was in 2005, and where is that system now? It’s being used as a utility for IBM’s staggering mountain of smart software and for finding items of interest for a handful of financial clients.

Ho, ho, ho. The joke is on the whiz kids and the investors, who are going to run out of patience when the light bulb goes on and says:

“Yo, folks, figuring out what’s fake, shaped, disinformationized, or reformationized content is what makes search difficult.”

I read a scoop from the New York Times. Yep, that’s the print newspaper which delivers information that is two or three days old to my door each day. I see most of the stories online in one form or another. Tip: 85 percent of news is triggered by AP or Reuters feeds.

The article reveals that Facebook’s really smart people cannot figure out how to deal with various types of speech: political and other types. The child porn content on WhatsApp is a challenge as well, I would add.

The write up says:

An examination of the files revealed numerous gaps, biases and outright errors. As Facebook employees grope for the right answers, they have allowed extremist language to flourish in some countries while censoring mainstream speech in others.

Yep, a scoop.

Facebook’s hubris, like that of the text processing company which dragged me into a series of bull sessions, allows the company to demonstrate that it cannot cope with filtering within a datasphere in which controls are going to be tough to enforce.

The fix is to create a for-fee country club. If a person does not meet the criteria, no membership for you. Then each member gets the equivalent of a US Social Security number, which is linked to the verified identity, the payment mechanism, and other data the system can link.
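For the curious, here is a minimal sketch of what such a linked member record might look like. All field names are my hypothetical inventions; the point is simply that one stable identifier ties identity, payment, and anything else the system can link.

```python
# A minimal sketch of the linked member record suggested above.
# All field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class MemberRecord:
    member_id: str            # stable identifier: the "Social Security number" analog
    verified_identity: dict   # e.g., results of a government ID check
    payment_method: str       # billing token for the for-fee membership
    linked_data: list = field(default_factory=list)  # other data the system can link

record = MemberRecord(
    member_id="M-000042",
    verified_identity={"name": "A. Member", "id_check": "passed"},
    payment_method="card_token_abc123",
)
record.linked_data.append("device_fingerprint_x")
print(record.member_id, record.linked_data)
```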

Amazon has this type of system available, but I am not sure the Facebookers are going to pay Amazon to use its policeware to create a clean, well-lit place. (Sorry, Ernest, not “lighted.”)

As a final point, may I suggest that rules-based systems where big data floweth are going to be tough to create, update, and pay for.

On the other hand, why not hire the New York Times to set up an old-school editorial board to do the work? News is not ringing the financial bell at the NYT, so maybe becoming the Facebook solution is a path forward. The cost may put Facebook in the dog house with investors, but the NYT regains its position as the arbiter of what’s in and what’s out.

Power again!

Stephen E Arnold, December 29, 2018

