Automated Censorship: What Could Go CENSORED with the CENSORED System?

March 28, 2022

Automated censorship: Silent, 24×7, no personnel hassles, no vacations, no breakdowns, and no worries.

Okay, a few may have worries, but these are very small, almost microscopic, worries. The reason? If one can’t find the information, then whatever the information discusses does not exist for many people. That’s the elegance of censorship. A void. No pushback. One does not know.

“How AI Is Creating a Safer Online World” does not worry about eliminating information. The argument is “more safety.” Who can disagree? Smart people understand that no information yields safety, right?

The write up states:

By using machine learning algorithms to identify and categorize content, companies can identify unsafe content as soon as it is created, instead of waiting hours or days for human review, thereby reducing the number of people exposed to unsafe content.

A net positive. The write up assumes that safe content is good. Smart software can recognize unsafe content. The AI can generate data voids which are safe.
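As a toy illustration of the mechanism the article describes, classifying content as safe or unsafe the moment it is created rather than waiting for human review, here is a minimal sketch. The term list, weights, and threshold are invented stand-ins for a trained model.

```python
# Hypothetical automated moderation pipeline: score content at creation time
# instead of queuing it for human review. All terms and weights are invented.

UNSAFE_TERMS = {"scam": 0.6, "violence": 0.8, "exploit": 0.5}  # assumed weights

def moderate(text: str, threshold: float = 0.5) -> str:
    """Flag text whose cumulative term score crosses the threshold."""
    words = text.lower().split()
    score = sum(UNSAFE_TERMS.get(w, 0.0) for w in words)
    return "unsafe" if score >= threshold else "safe"

print(moderate("breaking news about local weather"))      # safe
print(moderate("click this exploit to commit violence"))  # unsafe
```

Note that the same mechanism that flags content instantly also mislabels it instantly, which is exactly the failure mode the article concedes below.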

The write up does acknowledge that there may be a tiny, probably almost insignificant issue. The article explains with brilliant prose:

Despite its promise, AI-based content moderation faces many challenges. One is that these systems often mistakenly flag safe content as unsafe, which can have serious consequences.

Do we need examples? Sure, let’s point out that the old chestnuts about Covid and politics are presented to close the issue. How are those examples playing out?

How does the write up conclude? With logic that would knock Stephen Toulmin for a loop? With a content marketing super play that will make the author and publisher drown in fame?

Nah, just jabber like this:

AI-assisted content moderation isn’t a perfect solution, but it’s a valuable tool that can help companies keep their platforms safe and free from harm. With the increasing use of AI, we can hope for a future where the online world is a safer place for all.

Does a “safer place” suggest I will be spared essays like this in the future? Oh, oh. Censorship practiced by a human: Ignoring content hoo hah. The original word I chose to characterize the analysis has been CENSORED.

Stephen E Arnold, March 28, 2022

Twitch to Ban Agents of False Information

March 18, 2022

Amazon, proud owner of streaming platform Twitch, wades into a swamp in which there are snakes and other dangerous creatures. IGN reports, “Twitch Introducing New Rules to Stop Misinformation Spreaders.” The brief write-up describes the new policy:

“The policy update will target streamers who consistently make false claims, on or off Twitch, regarding protected groups, health issues including COVID-19, public emergencies, and misinformation that promotes violence or diminishes civic systems such as election results. Angela Hession, Twitch’s vice president of trust and safety, said the website is ‘taking this precautionary step and updating our policies to ensure that these misinformation superspreaders won’t find a home on our service,’ per the New York Times. … all misinformation spreaders will be targeted by the new policy even if they don’t make false claims while streaming. Sharing the misinformation on other platforms such as Twitter is enough to warrant action against their Twitch account.”

We are guessing many users will object to this cross-platform policing. Vigorously. The company, however, must believe the threat of misinformation warrants the crackdown. The audience for streaming gamers has grown rapidly over the last several years, and the invasion of Ukraine has magnified the problem. Video games can look so real that some in-game footage has been presented as actual footage of the conflict. One developer is pleading with people not to use their software in this way. We are glad to see Amazon taking a stand even as it faces other Twitch-related problems.

Cynthia Murrell, March 18, 2022

The BBC Enacts the Ministry of Truth Playbook

February 22, 2022

The National Review calls out the BBC, comparing the organization to the Ministry of Truth from George Orwell’s 1984, in “The BBC Quietly Censors Its Own Archives.” Writer Charles C. W. Cooke informs us:

“The Daily Telegraph reported that ‘an anonymous Radio 4 Extra listener’ had ‘discovered the BBC had been quietly editing repeats of shows over the past few years to be more in keeping with social mores.’ To which the BBC said . . . well, yeah. In a statement addressing the charge, the institution confirmed that ‘on occasion we edit some episodes so they’re suitable for broadcast today, including removing racially offensive language and stereotypes from decades ago, as the vast majority of our audience would expect.’ Thus, in the absence of law or regulation, has the British establishment begun to excise material it finds inappropriate by today’s lights.”

See, the BBC was just trying to be helpful. There are just a few problems with that defense: Does the audience, which as British taxpayers effectively owns this content, really “expect” it to be unceremoniously altered? If so, why the secrecy? There is value in being able to see how vile prevailing attitudes used to be, after all. Then there is the fact that not all alterations simply removed racist, misogynistic, or other offensive tropes and language. We learn of a particularly self-serving set of alterations:

“Per the Telegraph, the BBC has ‘purged mentions of disgraced stars Jimmy Savile and Rolf Harris’ from its collections. And down the memory hole goes that.”

The memory hole, of course, being a 1984 reference. See the write-up for more Orwellian correlations. The piece continues:

“One might reasonably wonder where such a project might end. Whether one likes it or not, Jimmy Savile and Rolf Harris existed. They were real people, who had a real effect on the culture, and who appeared on a vast number of real radio and television shows that were produced and disseminated by the BBC. That they turned out to be extremely bad people is regrettable, but it does not alter material reality.”

Indeed. Cooke points out the BBC, which has been operating since 1922, has generated and collected a wealth of valuable historical information. One hundred years later, the de facto government agency should not be allowed to alter that content as it sees fit. Whatever its motives.

Cynthia Murrell, February 22, 2022

Dumpster Fire Has Been Replaced

February 20, 2022

Hats off to jkhendrickson for creating a useful way to describe an intentionally flawed system. The phrase lurks within one of the case examples of an interesting Google YouTube enforcement action.

Here’s the phrase:

the byzantine garbage fire

The phrase begs for an acronym, a form beloved by millennials, GenXers, and the military; therefore, we have:

BGF

What caused the BGF? Nothing much. Google YouTube unilaterally decided to delete videos.

Hey, free means outfits can do what they want when they want.

Nevertheless:

BGF

The “b” means byzantine garbage fire.

Stephen E Arnold, February 20, 2022

Who Is the Bigger Disruptor: A Twitch Streamer or a Ring Announcer?

February 17, 2022

One thing people can agree on is that there is a lot of misinformation surrounding COVID-19. What is considered “misinformation” depends on an individual’s beliefs. The fact remains, however, that COVID-19 is real, vaccines do not contain GPS chips, and the pandemic has been one big pain. Whenever it is declared that we are in a post-pandemic world, misinformation will be regarded as one of the biggest fallouts, with an ongoing ripple effect.

The Verge explores how one controversial misinformation spreader will be discussed for years to come: “The Joe Rogan Controversy Is What Happens When You Put Podcasts Behind A Wall.” Misinformation supporters, among them conspiracy theorists, used to be self-contained in their own corner of the globe, but they erupted out of their crazy zone like Vesuvius engulfing Pompeii. Rogan’s faux pas led Spotify to remove over seventy episodes of his show rather than deplatform him.

Other podcast platforms celebrated the demise of a popular Spotify show and attempted to sell more subscriptions for their own content. These platforms should not be celebrating, though. Spotify owns Rogan’s show, and his controversy has effectively ruined the platform, but the same thing could happen at any time to Spotify’s rivals. Rogan is not the only loose cannon with a podcast, and it does not take much for anything to be considered offensive, then canceled. The rival platforms might be raking in more dollars right now, but:

“We’re moving away from a world in which a podcast player functions as a search engine and toward one in which they act as creators and publishers of that content. This means more backlash and room for questions like: why are you paying Rogan $100 million to distribute what many consider to be harmful information? Fair questions!

This is the cost of high-profile deals and attempts to expand podcasting’s revenue. Both creators and platforms are implicated in whatever content’s distributed, hosted, and sold, and both need to think clearly about how they’ll handle inevitable controversy.”

There is probably an argument about the right to Freedom of Speech throughout this controversy, but there is also the need to protect people from harm. It is one large, gray zone with only a tightrope to walk across it.

So Amouranth or Mr. Rogan? Jury’s out.

Whitney Grace, February 17, 2022

The Metazuck Shuts Down Iranian Accounts Posing as Scottish Nationalists

February 17, 2022

Here we have an all-too-rare case of Facebook (Meta) taking action against imposter accounts. Yahoo Finance reports, “Facebook Takes Down Fake Iranian Accounts that Posed as Scottish Locals.” The network in question, however, had not been particularly effective at influencing its target audience. Though the eight Facebook and 126 Instagram accounts had 77,000 followers between them, the most popular one only garnered 4,000 followers, and only half of those were actually located in the UK. We suppose even small victories can be used for PR purposes.

We are told the fake Scots were firm supporters of Scottish independence and critical of the UK government. The creators of these false accounts may have expected more bang for their rial, for they put in an unusual amount of effort to make them seem real. Reporter Karissa Bell writes:

“In a call with reporters, Facebook’s Global IO Threat Intelligence Lead, Ben Nimmo, said that it’s not the first time the company has caught Iran-linked fake accounts targeting Scotland, but that the latest network stood out for its ‘artisanal’ approach to the fake personas. ‘What was unique about this case was the effort that the operators took to make their fakes look like real people,’ Nimmo said. He noted the accounts spent considerable time posting about their ‘side interests,’ like football, in an attempt to boost their credibility. Some of the accounts also lifted profile photos from real celebrities or media personalities, and regularly updated the images in order to appear more real. Other accounts used fake photos generated by AI programs.”

That is a lot of effort to foment a bit of unrest in a corner of the UK. We wonder what else these imposters are up to and what they have planned for the future.

Cynthia Murrell, February 17, 2022

School Book Bans Are on the Rise

February 16, 2022

If one thought we had progressed beyond censorship in this country, one should think again.

History is cyclical, after all. Axios reports, “Book Bans Are Back in Style.” Writer Russell Contreras informs us:

“School districts from Pennsylvania to Wyoming are bowing to pressure from some conservative groups to review — then purge from public school libraries — books about LGBTQ issues and people of color. … ‘I’ve worked for this office for 20 years, and we’ve never had this volume of challenges come in such a short time,’ Deborah Caldwell-Stone, director of the American Library Association’s Office for Intellectual Freedom, told Axios. ‘In my former district, we might have one big challenge like every two years,’ Carolyn Foote, a retired Texas librarian of 29 years, told Axios. ‘I have to say that what we’re seeing is really unprecedented.’”

So it has gotten even worse than it was three decades ago. Would that be more of an ellipse than a circle? A cone? Perhaps a slippery slope. See the article for some of the banned titles, most having to do with racism and sexuality, as well as the thought police’s excuses, er, reasoning. It should be noted that conservatives, while firmly in the lead, are not the only ones trying to suppress the written word. Less often, progressives have called for older books with content like racial epithets and the “white savior” trope to be pulled from syllabuses and shelves. We learn:

“Harper Lee’s ‘To Kill a Mockingbird’ and John Steinbeck’s ‘Of Mice and Men’ regularly appeared on the American Library Association’s annual list of banned books. The ALA’s Caldwell-Stone says such challenges are sporadic and nothing compared to the current conservative-backed efforts.”

Gee, if only there were someone to provide students context around the literature they are assigned. An educated, more experienced human available in every classroom to provide guidance. Hmmm.

Cynthia Murrell, February 16, 2022

Israeli Law Targets Palestinian Content Online

February 11, 2022

A piece of legislation that was too heavy-handed for even former Prime Minister Benjamin Netanyahu is now being revived. On his Politics for the People blog, journalist Ramzy Baroud tells us “How Israel’s ‘Facebook Law’ Plans to Control All Palestinian Content Online.” The law, introduced by now-justice minister and deputy prime minister Gideon Sa’ar, would allow courts to order the removal of content they consider inflammatory or a threat to security. Given how much Palestinian content is already removed as a matter of course, one might wonder why Sa’ar would even bother with the legislation. Baroud writes:

“According to a December 30 statement by the Palestinian Digital Rights Coalition (PDRC) and the Palestinian Human Rights Organizations Council (PHROC), Israeli censorship of Palestinian content online has deepened since 2016, when Sa’ar’s bill was first introduced. In their statement, the two organizations highlighted the fact that Israel’s so-called Cyber Unit had submitted 2,421 requests to social media companies to delete Palestinian content in 2016. That number has grown exponentially since, to the extent that the Cyber Unit alone has requested the removal of more than 20,000 Palestinian items. PDRC and PHROC suggest that the new legislation, which was already approved by the Ministerial Committee for Legislation on December 27, ‘would only strengthen the relationship between the Cyber Unit and social media companies.’ Unfortunately, that relationship is already strong, at least with Facebook, which routinely censors Palestinian content and has been heavily criticized by Human Rights Watch and other organizations.”

This censorship by Facebook is codified in an agreement the company made with Israel in 2016. This law, however, goes well beyond Facebook. We also learn:

“According to a Haaretz editorial published on December 29, the impact of this particular bill is far-reaching, as it will grant District Court judges throughout the country the power to remove posts, not only from Facebook and other social media outlets, ‘but from any website at all’.”

The write-up rightly positions this initiative as part of the country’s ramped-up efforts against the Palestinians. But we wonder—will this law really only mean the wanton removal of Palestinian content? If history is any indication, probably not. Baroud reminds us that measures Israel originally applied to that population, like facial recognition tech and Pegasus spyware, have found their way into widespread use. One cannot expect this one to be any different.

Cynthia Murrell, February 11, 2022

Government Content Removal Requests by Country, Visualized

February 8, 2022

We often hear about countries requesting tech companies remove certain content from their online platforms, especially Russia and China. It can be difficult, though, to discern and compare which types of content are verboten in which nations. Digg paints us a picture in, “The Countries that Ask Google to Remove the Most Content, Visualized.” Reporter Adwait writes:

“The most common reason for taking down content is ‘defamation’ according to a Surfshark analysis, which six out of the top ten leaders cite as a reason. Russia has a sizable lead with the most number of takedown requests, nearly ten-times more than second-placed Turkey. … Google receives thousands of requests every year from all levels of government to remove online content. From infringement of intellectual property rights to defamation, there are a number of reasons a removal request might be submitted. But where in the world asks Google to remove content the most? Historically, Russia is by far the most prolific content removal requester, submitting 123,606 requests in total over the past ten years. Turkey is next up with 14,231 requests which, although the second-highest figure, seems mere in comparison.”

Such a visual aid is a good idea, but it would be even better if it were well executed. Sadly, this illustration features ribbon graphs in muted colors with no numbers. Readers may want to navigate instead to the Surfshark User Data Surveillance Report from which this graphic was made. It includes several informative graphs, all of which contain easily discernible colors and actual numbers. It also includes data on requests made to Microsoft, Facebook, and Apple as well as Google. The cybersecurity firm behind the report, Surfshark, was founded in 2018 and is based in Tortola, British Virgin Islands.

Cynthia Murrell, February 8, 2022

Useful Concept: Algorithmic Censorship

January 28, 2022

This year I will be 78. Exciting. I create blog posts because it makes life in the warehouse for the soon-to-be-dead bustle along. That’s why it is difficult for me to get too excited about the quite insightful essay called “Censorship By Algorithm Does Far More Damage Than Conventional Censorship.”

Here’s the paragraph I found particularly important:

… far more consequential than overt censorship of individuals is censorship by algorithm. No individual being silenced does as much real-world damage to free expression and free thought as the way ideas and information which aren’t authorized by the powerful are being actively hidden from public view, while material which serves the interests of the powerful is the first thing they see in their search results. It ensures that public consciousness remains chained to the establishment narrative matrix.
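The quoted passage describes suppression by ranking: nothing is deleted, but unauthorized material sinks so far down the results list that, in practice, no one sees it. A minimal sketch of that mechanism follows; the source labels, relevance values, and boost weight are all invented for illustration.

```python
# Hypothetical ranking-as-censorship sketch: results from "approved" sources
# receive a large boost, so they surface first regardless of relevance.

AUTHORIZED_BOOST = 10.0  # assumed boost for approved sources

def rank(results, authorized):
    """Sort results so approved sources float to the top."""
    def score(item):
        boost = AUTHORIZED_BOOST if item["source"] in authorized else 0.0
        return item["relevance"] + boost
    return sorted(results, key=score, reverse=True)

results = [
    {"source": "indie-blog", "relevance": 9.0},
    {"source": "wire-service", "relevance": 2.0},
]
print(rank(results, authorized={"wire-service"}))
# The less relevant but approved source now appears first.
```

No individual account is silenced in this scheme, which is precisely why it is harder to see and harder to contest than conventional takedowns.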

I would like to add several observations:

  1. There is little regulatory or business incentive in the US and Western Europe to exert the mental effort necessary to work through content controls on the modern datasphere. Some countries have figured it out and are taking steps to use the flows of information to create a shaped information payload.
  2. The failure to curtail the actions of certain high-technology companies illustrates a failure in decision making. Examples range from information warfare, for purposes of money or ideology, allowed to operate unchecked, to the inability of government officials to respond to train robberies in California.
  3. The censorship-by-algorithm approach is new and difficult to understand in social and economic contexts. As a result, biases will be baked in because initial conditions evolve automatically, and it takes a great deal of work to figure out what is happening. Disintegrative deplatforming is not a concept most people find useful.

What’s the outlook? For me, no big deal. For many, digital constructs. What’s real? The clanking of the wheelchair in the next room. For some, cluelessness or finding a life in the world of zeros and ones.

Stephen E Arnold, January 28, 2022
