Israeli Law Targets Palestinian Content Online

February 11, 2022

A piece of legislation that was too heavy-handed for even former Prime Minister Benjamin Netanyahu is now being revived. On his Politics for the People blog, journalist Ramzy Baroud tells us “How Israel’s ‘Facebook Law’ Plans to Control All Palestinian Content Online.” The law, introduced by now-justice minister and deputy prime minister Gideon Sa’ar, would allow courts to order the removal of content they consider inflammatory or a threat to security. Given how much Palestinian content is already removed as a matter of course, one might wonder why Sa’ar would even bother with the legislation. Baroud writes:

“According to a December 30 statement by the Palestinian Digital Rights Coalition (PDRC) and the Palestinian Human Rights Organizations Council (PHROC), Israeli censorship of Palestinian content online has deepened since 2016, when Sa’ar’s bill was first introduced. In their statement, the two organizations highlighted the fact that Israel’s so-called Cyber Unit had submitted 2,421 requests to social media companies to delete Palestinian content in 2016. That number has grown exponentially since, to the extent that the Cyber Unit alone has requested the removal of more than 20,000 Palestinian items. PDRC and PHROC suggest that the new legislation, which was already approved by the Ministerial Committee for Legislation on December 27, ‘would only strengthen the relationship between the Cyber Unit and social media companies.’ Unfortunately, that relationship is already strong, at least with Facebook, which routinely censors Palestinian content and has been heavily criticized by Human Rights Watch and other organizations.”

This censorship by Facebook is codified in an agreement the company made with Israel in 2016. This law, however, goes well beyond Facebook. We also learn:

“According to a Haaretz editorial published on December 29, the impact of this particular bill is far-reaching, as it will grant District Court judges throughout the country the power to remove posts, not only from Facebook and other social media outlets, ‘but from any website at all’.”

The write-up rightly positions this initiative as part of the country’s ramped-up efforts against the Palestinians. But we wonder—will this law really only mean the wanton removal of Palestinian content? If history is any indication, probably not. Baroud reminds us that measures Israel originally applied to that population, like facial recognition tech and Pegasus spyware, have found their way into widespread use. One cannot expect this one to be any different.

Cynthia Murrell, February 11, 2022

Government Content Removal Requests by Country, Visualized

February 8, 2022

We often hear about countries, especially Russia and China, requesting that tech companies remove certain content from their online platforms. It can be difficult, though, to discern and compare which types of content are verboten in which nations. Digg paints us a picture in “The Countries that Ask Google to Remove the Most Content, Visualized.” Reporter Adwait writes:

“The most common reason for taking down content is ‘defamation’ according to a Surfshark analysis, which six out of the top ten leaders cite as a reason. Russia has a sizable lead with the most number of takedown requests, nearly ten-times more than second-placed Turkey. … Google receives thousands of requests every year from all levels of government to remove online content. From infringement of intellectual property rights to defamation, there are a number of reasons a removal request might be submitted. But where in the world asks Google to remove content the most? Historically, Russia is by far the most prolific content removal requester, submitting 123,606 requests in total over the past ten years. Turkey is next up with 14,231 requests which, although the second-highest figure, seems mere in comparison.”

Such a visual aid is a good idea, but it would be even better if it were well executed. Sadly, this illustration features ribbon graphs in muted colors with no numbers. Readers may want to navigate instead to the Surfshark User Data Surveillance Report from which the graphic was made. It includes several informative graphs, all with easily discernible colors and actual numbers, plus data on requests made of Microsoft, Facebook, and Apple as well as Google. Surfshark, the cybersecurity firm behind the report, was founded in 2018 and is based in Tortola, British Virgin Islands.
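To see how little it would take to add the missing numbers, here is a minimal plotting sketch using matplotlib. It is an illustration only, not a reconstruction of the Digg or Surfshark graphics; it uses nothing beyond the two figures quoted above, and the colors and file name are arbitrary.

    # Sketch: a labeled bar chart makes the comparison legible. Only the two
    # figures quoted in the article are used; colors and file name are arbitrary.
    import matplotlib.pyplot as plt

    countries = ["Russia", "Turkey"]
    requests = [123606, 14231]   # ten-year totals of removal requests to Google, per the article

    fig, ax = plt.subplots(figsize=(6, 3))
    bars = ax.bar(countries, requests, color=["#b2182b", "#2166ac"])
    ax.bar_label(bars, labels=[f"{v:,}" for v in requests])  # print the actual numbers on the bars
    ax.set_ylabel("Removal requests (10-year total)")
    ax.set_title("Government content removal requests to Google")
    fig.tight_layout()
    fig.savefig("takedown_requests.png")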

Cynthia Murrell, February 8, 2022

Useful Concept: Algorithmic Censorship

January 28, 2022

This year I will be 78. Exciting. I create blog posts because it makes life in the warehouse for the soon-to-be-dead bustle along. That’s why it is difficult for me to get too excited about the quite insightful essay called “Censorship By Algorithm Does Far More Damage Than Conventional Censorship.”

Here’s the paragraph I found particularly important:

… far more consequential than overt censorship of individuals is censorship by algorithm. No individual being silenced does as much real-world damage to free expression and free thought as the way ideas and information which aren’t authorized by the powerful are being actively hidden from public view, while material which serves the interests of the powerful is the first thing they see in their search results. It ensures that public consciousness remains chained to the establishment narrative matrix.

I would like to add several observations:

  1. There is little regulatory or business incentive in the US and Western Europe to exert the mental effort necessary to work through content controls on the modern datasphere. Some countries have figured it out and are taking steps to use the flows of information to create a shaped information payload.
  2. The failure to curtail the actions of certain high-technology companies illustrates a failure in decision making. Examples range from information warfare for money or ideology being allowed to operate unchecked to the inability of government officials to respond to train robberies in California.
  3. The censorship-by-algorithm approach is new and difficult to understand in social and economic contexts. As a result, biases will be baked in because initial conditions evolve automatically, and it takes a great deal of work to figure out what is happening. Disintegrative deplatforming is not a concept most people find useful.

What’s the outlook? For me, no big deal. For many, digital constructs. What’s real? The clanking of the wheelchair in the next room. For some, cluelessness or finding a life in the world of zeros and ones.

Stephen E Arnold, January 28, 2022

2022 Adds More Banished Words To The Lexicon

January 27, 2022

Every year since 1976, Lake Superior State University in Michigan has compiled a list of banished words to protect and uphold standards in language. The New Zealand Herald examines the list in the article “Banished Word List For 2022 Takes Aim At Some Kiwi Favorites.” New Zealanders should be upset, because their favorite phrase, “No worries,” made the list.

Many of the words made the list due to overuse. In 2020, COVID-related terms were high on the list; for 2021, colloquial phrases drew the criticism. Nominations came from the US, Australia, Canada, Scotland, England, Belgium, and Norway.

“ ‘Most people speak through informal discourse. Most people shouldn’t misspeak through informal discourse. That’s the distinction nominators far and wide made, and our judges agreed with them,’ the university’s executive director of marketing and communications Peter Szatmary said.

LSSU president Dr Rodney Hanley said every year submitters suggested what words and terms to banish by paying close attention to what humanity utters and writes. ‘Taking a deep dive at the end of the day and then circling back make perfect sense. Wait, what?’ he joked.”

Words that made the list were: supply chain, you’re on mute, new normal, deep dive, circle back, asking for a friend, that being said, at the end of the day, no worries, and wait, what?

Whitney Grace, January 27, 2022

Amazon Twitch Vs Star Amouranth

January 17, 2022

The stars of new media are creating a new twist on the Hollywood studio moguls’ fight to control the stars of the silver screen. Class Struggle in Hollywood, 1930-1950: Moguls, Mobsters, Stars, Reds, and Trade Unionists documents some of the power struggles. Yep, you can buy it from Amazon, an outfit which itself plays a moguls, mobsters, stars, reds, and trade unionists-type protagonist in today’s version of the story Gerald Horne documents.

The modern social media spin is that Amazon Twitch finds itself in an uncomfortable old-school Hollywood moment. Its “audience” is manufacturing or spawning “stars.” Unlike the digitally inhibited book publishers, Amazon Twitch is now finding it more difficult to corral and manage the streamers. These individuals, their fame blasting into orbit on unique insights and talent, allow Amazon Twitch to sell ads. And those ads? Pre-rolls demand attention before one knows if the Twitch creator is online, doing the BRB (be right back) pause, or just leaving the inflatable pool naked in the digital stream. The ad produces revenue, and the person wanting to form a digital bond with the star gets annoyed. Even Amouranth haters have to endure ads in order to post angry emojis and often hostile comments in the starlet’s live stream.

“Amouranth Calls for Twitch to Start Revealing Ban Reasons for Streamers” includes some interesting observations and statements, some attributed to the social media starlet Kaitlyn Siragusa aka Amouranth. (One of her talents is creating video hooks like chewing on a microphone whilst breathing, donning a swimsuit, and splashing in an inflatable kiddie pool. She is also pretty good at getting media attention and free publicity. Plus she allegedly owns a convenience store. Did you, gentle reader, own one when you were 29 years old?)

The write up includes this statement:

Amouranth has called for that to change, and soon — according to the Amazon platform’s top female streamer, Twitch must change their tune, start being clearer when it comes to explaining bans for suspended stars, and finally “accept accountability” for the site’s rules.

She is quoted in the write up as saying:

“They [Amazon Twitch] do it because they don’t want the accountability of telling you what you did wrong. They don’t want to be in charge of upholding their own policies.”

Before you scoff at her talent, consider the allegation: a large technology outfit which is believed by some to be a monopoly-type operation wants the money, wants the control, and does not want to be upfront about who gets punished and who does not.

If the assertion is accurate, the social media star’s “situation” could become a flywheel and bump into the flywheel inside the Amazon money machine. Think in terms of one Tour de France racer bumping into another racer’s machine.

Amouranth is a human, and the products which Amazon offers are not. Amazon’s services are not people either; they are bits and bytes. But Amouranth, the talent, is a human, and the human can create some waves. In a phone chat with one of my research team, we identified these momentum enhancers:

  1. Amouranth can claim discrimination and take her objection to a legal eagle or — heaven help the US government — one of the committees investigating the behavior of the largely unfettered tech giants. Wow, Amouranth testifying and then doing the talking-head circuit.
  2. Amouranth can enlist the support of other individuals who have allegedly been digitally and financially abused by the high school science club management methods in use at some of the other high-tech, “we do what we want” outfits. Are there unhappy YouTubers out there who would respond to a call for action from a fellow traveler?
  3. An outfit struggling for traction — maybe a BitBucket-like setup? — could make a play for these stars and use their followers to kick a video streaming service into gear. What happens if a somewhat rudderless operation in the streaming business embraces Amouranth and pulls off a Joe Rogan-type deal? Imagine Amouranth on ESPN as a commentator for a sports event which pulls an audience so small it is tough to measure. What about Amouranth on the Disney Channel? (Come on, Minnie. Join Amouranth in her inflatable pool. Let an Amazon accountant manage the money from the program’s ad revenue.)
  4. More interestingly, Amouranth might be the first Twitch personality to become a social media Adam Carolla in an inflatable kiddie pool.

Net net: Amouranth may be a starlet-type problem for Amazon Twitch. If not managed in an astute manner, a disruption of considerable size could result and travel at the speed of social media. Ad hoc censorship may have a hefty price tag.

Stephen E Arnold, January 17, 2022

Easier Targets for Letter Signers: Joe Rogan and Spotify

January 13, 2022

YouTube received a missive from fact checkers exhorting the online ad giant to do more to combat misinformation. Ah, would there were enough fact checkers. YouTube, despite having lots of money, is an easier target for government regulators. Poke Googzilla in the nose or pull its charming tail, and the beast does a few legal thrashes and then outputs money. France and Russia love this beast baiting. Fact checkers? Not exactly in the same horsepower class as the country with fancy chickens or hearty Siberians wearing hats made of furry creatures.

I noted “Scientists, Doctors Call on Spotify to Implement Misinformation Policy Over Claims on Joe Rogan Show.” Spotify is not yet a Google-type operation. Furthermore, the point of concern is a person who was a paid cheerleader for that outstanding and humane sporting activity, mixed martial arts. My recollection is that Mr. Rogan received some contractual inducements to provide content to the music service and cable TV wannabe. He allegedly has a nodding acquaintance with intravenous vitamin drips, creatine, and fish oil. You can purchase mugs from which one can guzzle quercetin liquid. Yum yum yum. Plus you can enjoy these Rogan-centric products wearing a Joe Rogan T-shirt. (Is that a mystic symbol or an insect on the likeness’s forehead?)

The write up states:

More than 260 doctors, nurses, scientists, health professionals and others have signed an open letter calling on the streaming media platform Spotify to “implement a misinformation policy” in the wake of controversy over podcaster Joe Rogan’s promotion of an anti-vaccine rally with discredited scientist Robert Malone in an episode published on December 31st. Rogan has repeatedly spread vaccine misinformation and discouraged vaccine use. The December episode attracted attention in part because Dr. Malone falsely claimed millions of people were “hypnotized” to believe certain facts about COVID-19, and that people standing in line to get tested as the omicron variant has driven record new cases of the virus was an example of “mass formation psychosis,” a phenomenon that does not exist.

Impressive. The hitch in the git along is that Mr. Rogan attracts more eyeballs and listeners than some mainstream news outlets. He is an entertainer, and one might make the case that he is a comedian, pulling the leg of guests and of some listeners. I think of him as an intellectual Adam Carolla. Note that I am aware of the academic credentials of both of these stars.

The larger issue is that these letters beef up the résumés of the publicists working on these missives. Arguments and discussions in online fora whip up eddies of concern.

There are a few problems:

  1. Misinformation, disinformation, and reformation of factual data are standard functions of the human.
  2. Identifying and offering counterarguments depends upon one’s point of view.
  3. Spotify receives content and makes it available. Conduits are not well positioned to modify what an entertainer does in near real time, before the entertainer entertains.

Why not tell Spotify to drop Mr. Rogan? Money, contracts, and the still functional freedom of speech thing.

Will more letters arrive this week? My hunch is that the French, Russian, et al. approach might ultimately be more pragmatic. Whom does the publicity for the control-Rogan letter benefit?

Maybe Mr. Rogan?

Stephen E Arnold, January 13, 2022

Some Want YouTube to Check Facts: A Fantastical Idea

January 12, 2022

I wanted to look up a function for the DaVinci Resolve "scripting" feature. I spotted a YouTube video about the subject. The information in the video was incorrect. Is Google responsible for this factual misstep? Is DaVinci's owner Blackmagic Design going to rush to the editing room to create an accurate programming video? Will DaVinci users revolt, hold a protest, burn a pile of Blackmagic video switchers? Nope.

“An Open Letter to YouTube’s CEO from the World’s Fact Checkers” states:

What we do not see is much effort by YouTube to implement policies that address the problem [Covid information]. On the contrary, YouTube is allowing its platform to be weaponized by unscrupulous actors to manipulate and exploit others, and to organize and fundraise themselves. Current measures are proving insufficient. That is why we urge you to take effective action against disinformation and misinformation, and to elaborate a roadmap of policy and product interventions to improve the information ecosystem – and to do so with the world’s independent, non-partisan fact-checking organizations.

Okay, facts about Covid. How are those “facts” about Covid weathering the often conflicting flow of data? Government officials and Covid experts descend into primary school playground arguments. I love the use of visual aids too. What about the factual errors in many videos on YouTube? Who exactly is able to identify an error and take or recommend a specific action?

This is a fantastical idea, and it is one that may lead to online discussions, legal kerfuffles, and some videos being removed.

The notion of free hosting and streaming of videos means that unless YouTube gates the uploads or starts charging for storage and streaming, the volume is likely to overwhelm the world’s fact checkers. My hunch is that there are more wannabe YouTube stars than fact checkers. Perhaps Google’s stellar content-diving machine will automate the process using close-enough-for-horseshoes methods? Perhaps Google will just hire an editorial team and operate in the manner of the late and much-needed traditional newspaper industry, despite the taint of yellow journalism, advertorials, hobby horses, and reportorial bias.

Net net: Nice letter but after a meeting the missive will be handed over to Google legal and PR. YouTube shall accept and stream as usual.

Stephen E Arnold, January 12, 2022

Russia May Not Contribute to the Tor Project in 2022

December 28, 2021

This is probably not a surprise to those involved with the Tor Project. We noted some evidence of Russia’s view of anonymized Internet browsing in “Russia Blocks Privacy Service Tor In Latest Move To Control Internet.” The article reports:

Russia’s media regulator has blocked the online anonymity service Tor in what is seen as the latest move by Moscow to bring the Internet in Russia under its control. Roskomnadzor announced it had blocked access to the popular service on December 8, cutting off users’ ability to thwart government surveillance by cloaking IP addresses.

The Tor Project responded with some tech tips for ways to get around the Putin partition. (Think Tor bridge. Some details are at this link.)
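For readers who want a sense of what those tips amount to, below is a minimal sketch, not the Tor Project’s own instructions, of starting a Tor client that reaches the network through an obfs4 bridge rather than a blocked public relay. It assumes the Python stem package plus tor and obfs4proxy are installed; the bridge address, fingerprint, certificate, and binary path are placeholders, and a working bridge line has to come from the Tor Project’s bridge distribution service.

    # Sketch only: launch a local Tor client configured to use an obfs4 bridge.
    # Requires the stem package plus tor and obfs4proxy installed locally.
    # The Bridge value below is a placeholder, not a working bridge line.
    import stem.process

    BRIDGE_LINE = ("obfs4 203.0.113.5:443 0123456789ABCDEF0123456789ABCDEF01234567 "
                   "cert=PLACEHOLDER iat-mode=0")

    tor_process = stem.process.launch_tor_with_config(
        config={
            "SocksPort": "9050",                       # local SOCKS proxy for applications
            "UseBridges": "1",                         # skip the public (blocked) relay directory
            "ClientTransportPlugin": "obfs4 exec /usr/bin/obfs4proxy",
            "Bridge": BRIDGE_LINE,                     # the unlisted entry point into the network
        },
        init_msg_handler=lambda line: print(line) if "Bootstrapped" in line else None,
    )

    print("Tor is up; point applications at socks5://127.0.0.1:9050")
    tor_process.kill()   # shut the client down when finished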

Does this mean that Russia has no interest in Tor? Nope. We think that some of Mr. Putin’s fellow travelers are hosting Tor relay servers, but that’s just something we heard from a person yapping about freedom.

What’s next? How about blocking any service originating in nation states not getting with Mr. Putin’s Ukrainian program? It is unlikely that Sergey Brin’s flight on a Russian rocket ship will become a reality in 2022. We also heard that the Google Cloud hosts some services that Mr. Putin thinks may erode the freedoms enjoyed by Russian citizens.

Stephen E Arnold, December 28, 2021

Content Control: More and More Popular

December 7, 2021

A couple of recent articles emphasize that there is at least some effort being made to control harmful content on social media platforms. Are these examples of responsible behavior or of censorship? We are not sure. First up, a resource content creators may wish to bookmark: “5 Banned Content Topics You Can’t Talk About on YouTube” from MakeUseOf. Writer Joy Okumoko goes into detail on banned topics, from spam and deception to different types of sensitive or dangerous content. Check it out if you are curious about what will get a YouTube video taken down or an account suspended.

We also note an article at Engadget, “Personalized Warnings Could Reduce Hate Speech on Twitter, Researchers Say.” Researchers at NYU’s Center for Social Media and Politics set up Twitter accounts and used them to warn certain users that their language could get them banned. Just a friendly caution from a fellow user. Their results suggest such warnings could actually reduce hateful language on the platform. The more polite the warnings, the more likely users were to clean up their acts. Imagine that: civility begets civility. Reporter K. Bell writes:

“They looked for people who had used at least one word contained in ‘hateful language dictionaries’ over the previous week, who also followed at least one account that had recently been suspended after using such language. From there, the researchers created test accounts with personas such as ‘hate speech warner,’ and used the accounts to tweet warnings at these individuals. They tested out several variations, but all had roughly the same message: that using hate speech put them at risk of being suspended, and that it had already happened to someone they follow. … The researchers found that the warnings were effective, at least in the short term. ‘Our results show that only one warning tweet sent by an account with no more than 100 followers can decrease the ratio of tweets with hateful language by up to 10%,’ the authors write. Interestingly, they found that messages that were ‘more politely phrased’ led to even greater declines, with a decrease of up to 20 percent.”
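To make the selection step concrete, here is a minimal sketch of the kind of dictionary-based screening the passage describes. It is an illustration, not the NYU team’s actual pipeline; the user records, the lexicon entries, and the set of suspended accounts are hypothetical stand-ins.

    # Illustration only: flag users who (a) used a lexicon word in the past week
    # and (b) follow at least one recently suspended account. All data below are
    # hypothetical stand-ins for the study's real inputs.
    from datetime import datetime, timedelta

    hate_lexicon = {"slurword1", "slurword2"}          # stand-in for a hateful-language dictionary
    recently_suspended = {"@suspended_account"}        # accounts banned during the observation window

    def used_lexicon_word(tweets, since):
        """True if any tweet posted after `since` contains a lexicon word."""
        return any(
            t["created_at"] >= since
            and any(word in t["text"].lower().split() for word in hate_lexicon)
            for t in tweets
        )

    def warning_candidates(users):
        """Yield handles that meet both selection criteria."""
        one_week_ago = datetime.utcnow() - timedelta(days=7)
        for user in users:
            follows_suspended = recently_suspended & set(user["follows"])
            if follows_suspended and used_lexicon_word(user["tweets"], one_week_ago):
                yield user["handle"]

    # Each candidate would then receive a politely phrased warning tweet.
    users = [{
        "handle": "@example_user",
        "follows": ["@suspended_account", "@someone_else"],
        "tweets": [{"created_at": datetime.utcnow(), "text": "post containing slurword1"}],
    }]
    print(list(warning_candidates(users)))   # -> ['@example_user']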

The research paper suggests such warnings might be even more effective if they came from Twitter itself or from another organization instead of the researchers’ small, 100-follower accounts. Still, lead researcher Mustafa Mikdat Yildirim suspects:

“The fact that their use of hate speech is seen by someone else could be the most important factor that led these people to decrease their hate speech.”

Perhaps?

Cynthia Murrell, December 7, 2021

MIT: Censorship and the New Approach to Learning

October 27, 2021

MIT is one of the top science and technology universities in the world. Like many universities in the United States, MIT has had its share of controversial issues related to cancel culture. The Atlantic discusses the most recent incident in the article, “Why The Latest Campus Cancellation Is Different.”

MIT invited geophysicist Dorian Abbot to deliver the yearly John Carlson Lecture about his new climate science research. When MIT students heard Abbot was invited to speak, they campaigned to disinvite him. MIT’s administration caved, and Abbot’s invitation was rescinded. Unlike other cancel culture incidents, this one did not stem from Abbot denying climate change or committing a crime. Instead, he had given his opinion about affirmative action and other ways minorities have advantages in college admissions.

Abbot criticized affirmative action as well as legacy and athletic admissions, which favor white applicants. He then compared these admission processes to 1930s Germany, and that is a big no-no:

“Abbot seemingly meant to highlight the dangers of thinking about individuals primarily in terms of their ethnic identity. But any comparison between today’s practices on American college campuses and the genocidal policies of the Nazi regime is facile and incendiary.

Even so, it is patently absurd to cancel a lecture on climate change because of Abbot’s article in Newsweek. If every cringe worthy analogy to the Third Reich were grounds for canceling talks, hundreds of professors—and thousands of op-ed columnists—would no longer be welcome on campus.”

Pew Research shows that a majority of Americans believe merit-based admissions or hiring is the best system. Even the liberal state of California voted to uphold its ban on affirmative action.

MIT’s termination of the Abbot lecture may be an example of how leading universities define learning, information, and discussion. People are no longer allowed to hold opposing or controversial beliefs if those beliefs offend someone. That harms not only the academic setting, especially at a research-heavy university like MIT, but all of society.

It is also funny that MIT was quick to cancel Abbot but happily accepted money from Jeffrey Epstein. Interesting.

Whitney Grace, October 27, 2021
