Automated Censorship: What Could Go CENSORED with the CENSORED System?

March 28, 2022

Automated censorship: Silent, 24×7, no personnel hassles, no vacations, no breakdowns, and no worries.

Okay, a few may have worries, but these are very small, almost microscopic, worries. The reason? If one can’t find the information, then whatever the information discusses does not exist for many people. That’s the elegance of censorship. A void. No pushback. One does not know.

“How AI Is Creating a Safer Online World” does not worry about eliminating information. The argument is “more safety.” Who can disagree? Smart people understand that no information yields safety, right?

The write up states:

By using machine learning algorithms to identify and categorize content, companies can identify unsafe content as soon as it is created, instead of waiting hours or days for human review, thereby reducing the number of people exposed to unsafe content.

A net positive. The write up assumes that safe content is good. Smart software can recognize unsafe content. The AI can generate data voids which are safe.

The write up does acknowledge that there may be a tiny, probably almost insignificant issue. The article explains with brilliant prose:

Despite its promise, AI-based content moderation faces many challenges. One is that these systems often mistakenly flag safe content as unsafe, which can have serious consequences.
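As a toy sketch (mine, not the article’s) of why mistaken flagging happens: a naive keyword-style moderation filter treats any post containing a blocked term as unsafe, so harmless content trips the same wire as risky content. Real systems use trained classifiers, but the false-positive failure mode is the same; the term list and posts below are invented for illustration.

```python
# Hypothetical keyword-based moderation filter (illustration only).
# Any post containing a blocked term is flagged as "unsafe".
UNSAFE_TERMS = {"attack", "virus", "shot"}

def flag(post: str) -> bool:
    """Return True if the post contains any blocked term."""
    words = set(post.lower().split())
    return bool(words & UNSAFE_TERMS)

# A genuinely risky post is caught:
flag("how to attack a server")       # flagged
# ...but so is harmless content — the false positives the article admits:
flag("i got my flu shot today")      # flagged (false positive)
flag("nice shot of the sunset")      # flagged (false positive)
```

A term match carries no context, so “flu shot” and “network attack” look identical to the filter — which is how safe content ends up in the data void.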

Do we need examples? Sure, let’s point out that the old chestnuts about Covid and politics are presented to close the issue. How are those examples playing out?

How does the write up wrap up? With logic that would knock Stephen Toulmin for a loop? With a content marketing super play that will make the author and publisher drown in fame?

Nah, just jabber like this:

AI-assisted content moderation isn’t a perfect solution, but it’s a valuable tool that can help companies keep their platforms safe and free from harm. With the increasing use of AI, we can hope for a future where the online world is a safer place for all.

Does a “safer place” suggest I will be spared essays like this in the future? Oh, oh. Censorship practiced by a human: Ignoring content hoo hah. The original word I chose to characterize the analysis has been CENSORED.

Stephen E Arnold, March 28, 2022

