Russia: Getting Ready for Noose Snugging
June 23, 2021
Tens of thousands of Russian citizens have taken to the streets in protest and the government is cracking down. On social media platforms, that is. Seattle PI reports, “Russia Fines Facebook, Telegram Over Banned Content.” The succinct write-up specifies that Facebook was just fined 17 million rubles (about $236,000) and messaging app Telegram 10 million rubles ($139,000) by a Moscow court. Though it was unclear what specific content prompted these latest fines, this seems to be a trend. We learn:
“It was the second time both companies have been fined in recent weeks. On May 25, Facebook was ordered to pay 26 million rubles ($362,000) for not taking down content deemed unlawful by the Russian authorities. A month ago, Telegram was also ordered to pay 5 million rubles ($69,000) for not taking down calls to protest. Earlier this year, Russia’s state communications watchdog Roskomnadzor started slowing down Twitter and threatened it with a ban, also over its alleged failure to take down unlawful content. Officials maintained the platform failed to remove content encouraging suicide among children and containing information about drugs and child pornography. The crackdown unfolded after Russian authorities criticized social media platforms that have been used to bring tens of thousands of people into the streets across Russia this year to demand the release of jailed Russian opposition leader Alexei Navalny, President Vladimir Putin’s most well-known critic. The wave of demonstrations has been a major challenge to the Kremlin. Officials alleged that social media platforms failed to remove calls for children to join the protests.”
Yes, Putin would have us believe it is all about the children. He has expressed to the police his concern for young ones who are tempted into “illegal and unsanctioned street actions” by dastardly grown-ups on social media. His concern is touching.
Beyond Search thinks Mr. Putin’s actions are about control. An article in Russian titled “Sovereign DNS Is Already Here and You Haven’t Noticed” provides information that suggests Mr. Putin’s telecommunications authority has put the machinery in place to control Internet access within Russia.
Fines may be a precursor to more overt action against US companies and content the Russian authorities deem inappropriate.
Cynthia Murrell, June 23, 2021
China: More Than a Beloved Cuisine, Policies Are Getting Traction Too
June 16, 2021
As historical information continues to migrate from physical books to online archives, governments are given the chance to enact policies right out of Orwell’s 1984. And why limit those efforts to one’s own country? Quartz reports that “China’s Firewall Is Spreading Globally.” The crackdown on protesters in Tiananmen Square on June 4, 1989 is a sore spot for China. It would rather that those old enough to remember it forget, and that those too young to have seen it on the news never learn about it. The subject has been taboo within the country since it happened, but now China is harassing the rest of the world about it and other sensitive topics. Worse, the efforts appear to be working.
Writer Jane Li begins with the plight of activist group 2021 Hong Kong Charter, whose website is hosted by Wix. The site’s mission is to build support in the international community for democracy in Hong Kong. Though its authors now live in countries outside China and Wix is based in Israel, China succeeded in strong-arming Wix into taking the site down. The action did not stick—the provider apologized and reinstated the site after being called out in public. However, it is disturbing that it was disabled in the first place. Li writes:
“The incident appears to be a test case for the extraterritorial reach of the controversial national security law, which was implemented in Hong Kong one year ago. While Beijing has billed the law as a way to restore the city’s stability and prosperity, critics say it helps the authorities to curb dissent as it criminalizes a broad swathe of actions, and is written vaguely enough that any criticism of the Party could plausibly be deemed in violation of the law. In a word, the law is ‘asserting extraterritorial jurisdiction over every person on the planet,’ wrote Donald Clarke, a professor of law at George Washington University, last year. Already academics teaching about China at US or European universities are concerned they or their students could be exposed to greater legal risk—especially should they discuss Chinese politics online in sessions that could be recorded or joined by uninvited participants. By sending the request to Wix, the Hong Kong police are not only executing the expansive power granted to them by the security law, but also sending a signal to other foreign tech firms that they could be next to receive a request for hosting content offensive in the eyes of Beijing.”
One nation attempting to seize jurisdiction around the world may seem preposterous, but Wix is not the only tech company to take this law seriously. On the recent anniversary of the Tiananmen Square crackdown, searches for the event’s iconic photo “tank man” turned up empty on MS Bing. Microsoft blamed it on an “accidental human error.” Sure, that is believable coming from a company that is known to cooperate with Chinese censors within that country. Then there was the issue with Google-owned YouTube. The US-based group Humanitarian China hosted a ceremony on June 4 commemorating the 1989 event, but found the YouTube video of its live stream was unavailable for days. What a coincidence! When contacted, YouTube simply replied that there might be a technical issue, what with Covid and all. Of course, Google has its own relationship to censorship in China.
Not to be outdone, Facebook suspended the live feed of the group’s commemoration with the auto-notification that it “goes against our community standards on spam.” Right. Naturally, when chastised the platform apologized and called the move a technical error. We sense a pattern here. One more firm is to be mentioned, though to be fair some of these participants were physically in China: Last year, Zoom disabled Humanitarian China’s account mid-meeting after the group hosted its Covid-safe June 4th commemoration on the platform. At least that company did not blame the action on a glitch; it made plain it was at the direct request of Beijing. The honesty is refreshing.
Cynthia Murrell, June 16, 2021
The Addiction Analogy: The Cancellation of Influencer Rand Fishkin
May 5, 2021
Another short item. I read a series of tweets which you may be able to view at this link. The main idea is that an influencer was to give a talk about marketing. The unnamed organizer did not like Influencer Fishkin’s content. And what was that content? Information and observations critical of the outstanding commercial enterprises Facebook and Google. The apparent points of irritation were Influencer Fishkin’s statements to the effect that the two estimable outfits (Facebook and Google) were not “friendly, in-your-corner partners.” Interesting, but for me that was only part of the story.
Here’s what I surmised from the information provided by Influencer Fishkin:
- Manipulation is central to the way in which these two lighthouse firms operate in the dark world of online
- Both venerated companies function without consequences for their actions designed to generate revenue
- The treasured entities apply the model and pattern to “sector after sector.”
Beyond Search loves these revered companies.
But there is one word which casts a Beijing-in-a-sandstorm color over Influencer Fishkin’s remarks. And that word is?
Addiction
The idea is that these cherished organizations use their market position (which some have described as a monopoly set up) and specific content to make it difficult for a “user” of the “free” service to kick the habit.
My hunch is that neither of these esteemed commercial enterprises wants to be characterized as a purveyor of gateway drugs, digital opioids, or an artificer who puts large monkeys on “users’” backs.
That’s not a good look.
Hence, cancellation is a pragmatic fix, is it not?
Stephen E Arnold, May 5, 2021
Selective YouTube Upload Filtering or Erratic Smart Software?
May 4, 2021
I received some information about a YouTuber named Aquachigger. I watched this person’s eight-minute video in which Aquachigger explained that his videos had been downloaded from YouTube. Then an individual (whom I shall describe as an alleged bad actor) uploaded those Aquachigger videos with the alleged bad actor’s voice over. I think the technical term for this is a copyright violation taco.
I am not sure who did what in this quite unusual recycling of user content. What’s clear is that YouTube has a mechanism to determine whether an uploaded video violates Google rules (who really knows what these are other than the magic algorithms which operate like tireless, non-human Amazon warehouse workers). Allegedly Google’s YouTube digital third grade teacher software can spot copyright violations and give the bad actor a chance to rehabilitate an offending video.
According to Aquachigger, content was appropriated, and then, via logic which is crystalline to Googlers, YouTube notified Aquachigger that his channel would be terminated for copyright violation. Yep, the “creator” Aquachigger would be banned from YouTube, losing ad revenue and subscriber access, because an alleged bad actor took the Aquachigger content, slapped an audio track over it, and monetized that content. The alleged bad actor is generating revenue by unauthorized appropriation of another person’s content. The key is that the alleged bad actor generates more clicks than the “creator” Aquachigger.
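Why should YouTube’s matching software catch this sort of recycling? The actual YouTube matching system is not public, so the following is a toy sketch of my own, not Google’s method. It illustrates why a re-upload that keeps the video frames and swaps only the audio track is, in principle, easy to flag:

```python
# Toy illustration of frame-fingerprint matching. Real systems use
# perceptual hashes of decoded video; byte strings stand in for frames
# here. This is NOT YouTube's actual algorithm.
import hashlib

def fingerprints(frames):
    """Hash each frame to build a set of fingerprints for a video."""
    return {hashlib.sha256(f).hexdigest() for f in frames}

def match_ratio(original, upload):
    """Fraction of the upload's frames that also appear in the original."""
    orig, up = fingerprints(original), fingerprints(upload)
    return len(up & orig) / len(up) if up else 0.0

# The alleged bad actor keeps the video frames and swaps only the audio,
# so a frame-based matcher still sees a near-total match.
original_frames = [b"frame-1", b"frame-2", b"frame-3", b"frame-4"]
reupload_frames = [b"frame-1", b"frame-2", b"frame-3", b"frame-4"]

print(f"Match: {match_ratio(original_frames, reupload_frames):.0%}")  # 100%
```

If even this crude approach would flag the copy, the puzzle is why the penalty landed on the original creator rather than on the re-uploader.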
Following this?
I decided to test the YouTube embedded content filtering system. I inserted a 45-second segment from a Carnegie Mellon news release about one of its innovations. I hit the upload button and discovered that after the video was uploaded to YouTube, the Googley system informed me that the video with the Carnegie Mellon news snip required further processing. The Googley system labored for three hours. I decided to see what would happen if I uploaded the test segment to Facebook. Zippity-doo. Facebook accepted my test video.
What I learned from my statistically insignificant test is that I could formulate some tentative questions; for example:
- If YouTube could “block” my upload of the video PR snippet, would YouTube be able to block the Aquachigger bad actor’s recycled Aquachigger content?
- Why would YouTube block a snippet of a news release video from a university touting its technical innovation?
- Why would YouTube create the perception that Aquachigger would be “terminated”?
- Would YouTube allow the unauthorized use of Aquachigger content in order to derive more revenue from that content than from the much smaller Aquachigger follower base?
Interesting questions. I don’t have answers, but this Aquachigger incident and my test indicate that consistency is the hobgoblin of some smart software. That’s why I laughed when I navigated to Jigsaw, a Google service, and learned that Google is committed to “protecting voices in conversation.” Furthermore:
Online abuse and toxicity stops people from engaging in conversation and, in extreme cases, forces people offline. We’re finding new ways to reduce toxicity, and ensure everyone can safely participate in online conversations.
I also learned:
Much of the world’s internet users experience digital censorship that restricts access to news, information, and messaging apps. We’re [Google] building tools to help people access the global internet.
Like I said, “Consistency.” Ho ho ho.
Stephen E Arnold, May 4, 2021
Do Tech Monopolies Have to Create Enforcement Units?
April 26, 2021
Two online enforcement articles struck me as thought provoking.
The first was the Amazon announcement that it would kick creators (people who stream on the Twitch service) off the service for missteps off the platform. This is an interesting statement, and you can get some color in “Twitch to Boot Users for Transgressions Elsewhere.” In my discussion with my research team about final changes to my Amazon policeware lecture, I asked the group about Twitch banning individuals who create video streams and push them to the Twitch platform.
There were several points of view. Here’s a summary of the comments:
- Yep, definitely
- No, free country
- This has been an informal policy for a long time. (Example: SweetSaltyPeach, a streamer from South Africa who garnered attention by assembling toys whilst wearing interesting clothing. Note: She morphed into the more tractable persona RachelKay.)
There may be a problem for Twitch, and I am not certain Amazon can solve it. Possibly Amazon – even with its robust policeware technology – cannot control certain activities off the platform. A good example is the persona on Twitch presented as iBabyRainbow, a Twitch personality who provides baseball batting instruction to legions of fans by hitting eggs with her fans’ names on them.
There is an interesting persona on the site NewRecs. It too features a persona which seems very similar to that of the Amazon persona. The colors are similar; the makeup conventions are similar; and the unicorn representation appears in both images. Even the swimming pool featured on Twitch appears in the NewRecs’ representation of the persona BabyRainbow.
What is different is that on NewRecs, the content creator is named “BabyRainbow.” Exploration of the BabyRainbow persona reveals some online links which might raise some eyebrows in Okoboji, Iowa. One example is the link between BabyRainbow and the site Chaturbate.
My research team spotted the similarity quickly. Amazon, if it does know about the coincidence, has not taken action over the persona’s presence on Twitch versus NewRecs versus Chaturbate and some other “interesting” services which exist.
So Twitch enforcement appears to be ignoring certain behavior whilst punishing other types of behavior. Neither Amazon nor Twitch is talking much about iBabyRainbow or other parental or law enforcement-type actions.
The second development is the article “Will YouTube Ever Properly Deal with Its Abusive Stars?” The write up states:
YouTube has long had a problem with acknowledging and dealing with the behavior of the celebrities it helped to create… YouTube is but one of many major platforms eager to distance themselves from the responsibility of their position by claiming that their hands-off approach and frequent ignorance over what they host is a free speech issue. Even though sites like YouTube, Twitter, Substack, and so on have rules of conduct and claim to be tough on harassment, the evidence speaks to the contrary.
The essay points out that YouTube has taken action against certain individuals whose off-YouTube behavior was interesting, possibly inappropriate, and maybe in violation of certain government laws. But the essay asserts, about a YouTuber who pranked people and allegedly bullied people:
Dobrik’s channel was eventually demonetized by YouTube, but actions like this feel too little too late given how much wealth he’s accumulated over the years. Jake Paul is still pulling in big bucks from his channel. Charles was recently demonetized, but his follower count remains pretty unscathed. And that doesn’t even include all the right-wing creeps pulling in big bucks from YouTube. Like with any good teary apology video, the notion of true accountability seems unreachable.
To what do these two examples sum? The Big Tech companies may have to add law enforcement duties to their checklist of nation state behavior. When a government takes an action, there are individuals with whom one can speak. What rights does talent on an ad-based platform have? Generate money and get a free pass. Behave in a manner which might lead to a death penalty in some countries? Keep on truckin’? The online ad outfit struggles to make clear exactly what it is doing with censorship and other activities like changing the rules for APIs. It will be interesting to see what the GOOG tries to do.
Consider this: What if Mr. Dobrik and iBabyRainbow team up and do a podcast? Would Apple and Spotify bid for rights? How would the tech giants Amazon and Google respond? These are questions unthinkable prior to the unregulated, ethics free online world of 2021.
Stephen E Arnold, April 26, 2021
Preserving History? Twitter Bans Archived Trump Tweets
April 19, 2021
The National Archives of the United States archives social media accounts of politicians. Former President Donald Trump’s Twitter account is among them. One of the benefits of archived Twitter accounts is that users can read and interact with old tweets. Twitter, however, banned Trump in early 2021 because he was deemed a threat to public safety. Politico explains the trouble the National Archives and Records Administration currently has getting Trump’s old tweets back online, “National Archives Can’t Resurrect Trump’s Tweets, Twitter Says.”
Other former Trump administration officials have their old tweets active on Twitter. Many National Archives staff view Twitter’s refusal to reactivate the tweets as censorship. Trump’s controversial tweets are part of a growing battle between Washington and tech giants, where the latter censors conservatives. Supreme Court Justice Clarence Thomas lamented that the tech companies had so much control over communication.
Twitter is working with the National Archives on preserving Trump’s tweets but refuses to host them on its own platform, stating they glorified violence. The tweets will be available on the Donald J. Trump Presidential Library web site.
“Nevertheless, the process of preserving @realDonaldTrump’s tweets remains underway, NARA’s [James] Pritchett said, and since the account is banned from Twitter, federal archivists are ‘working to make the exported content available … as a download’ on the Trump Presidential Library website.
‘Twitter is solely responsible for the decision of what content is available on their platform,’ Pritchett said. ‘NARA works closely with Twitter and other social media platforms to maintain archived social accounts from each presidential administration, but ultimately the platform owners can decline to host these accounts. NARA preserves platform independent copies of social media records and is working to make that content available to the public.’”
Is it really censorship if Trump’s tweets are publicly available, just not through their first medium? Conservative politicians do have a valid argument. Big tech does control and influence communication, but does that give them the right to censor opinions? Future data archeologists may wonder about the gap.
Whitney Grace, April 19, 2021
Google Stop Words: Close Enough for the Mom and Pop Online Ad Vendor
April 15, 2021
I remember from a statistics lecture (given by a fellow named Dr. Peplow, maybe) that fuzziness is one of the main characteristics of statistics. The idea is that a percentage is not a real entity; for example, the average number of lions in a litter is three, give or take a couple of these magnets for hunters and poachers. Depending upon the data set, the “real” number may be 3.2 cubs in a litter. Who has ever seen a fractional lion? Certainly not me.
Why am I thinking fuzzy? Google is into data. The company collects, counts, and transforms “real” data into actions. Whip in some smart software, and the company has processes which match an advertiser’s need to reach eyeballs with some statistically validated interest in whatever the Mad Ave folks are trying to sell.
“Google Has a Secret Blocklist that Hides YouTube Hate Videos from Advertisers—But It’s Full of Holes” suggests that some of the Google procedures are fuzzy. The uncharitable might suggest that Google wants to get close enough to collect ad money. Horseshoe aficionados use the phrase “close enough for horseshoes” to indicate a toss which gets a point or blocks an opponent’s effort. That seems to be one possible message from The Markup article.
I noted this passage in the essay:
If you want to find YouTube videos related to “KKK” to advertise on, Google Ads will block you. But the company failed to block dozens of other hate and White nationalist terms and slogans, an investigation by The Markup has found. Using a list of 86 hate-related terms we compiled with the help of experts, we discovered that Google uses a blocklist to try to stop advertisers from building YouTube ad campaigns around hate terms. But less than a third of the terms on our list were blocked when we conducted our investigation.
What seems to be happening is that Google’s methods for taking a term and then “broadening” it so that related terms are identified are not working. The idea is that related terms with a higher “score” are more directly linked to the original term. Words and phrases with lower “scores” are not closely related. The article uses the example of the term KKK.
I learned:
Google Ads suggested millions upon millions of YouTube videos to advertisers purchasing ads related to the terms “White power,” the fascist slogan “blood and soil,” and the far-right call to violence “racial holy war.” The company even suggested videos for campaigns with terms that it clearly finds problematic, such as “great replacement.” YouTube slaps Wikipedia boxes on videos about the “the great replacement,” noting that it’s “a white nationalist far-right conspiracy theory.” Some of the hundreds of millions of videos that the company suggested for ad placements related to these hate terms contained overt racism and bigotry, including multiple videos featuring re-posted content from the neo-Nazi podcast The Daily Shoah, whose official channel was suspended by YouTube in 2019 for hate speech.
It seems to me that Google is filtering specific words and phrases on a stop word list. Then the company is not identifying related terms, particularly words which are synonyms for the word on the stop list.
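To make the distinction concrete, here is a minimal sketch of exact stop-list matching versus matching with a synonym expansion step. The blocklist and synonym table below are invented for illustration; they are not Google’s data:

```python
# Minimal sketch: exact blocklist matching versus synonym expansion.
# The terms and the synonym table are hypothetical, not Google's data.

BLOCKLIST = {"kkk"}

# The kind of related-term table a query-expansion step might consult.
SYNONYMS = {
    "kkk": {"ku klux klan", "white power", "blood and soil"},
}

def blocked_exact(query):
    """Exact string matching: only the literal blocklist term is caught."""
    return query.lower() in BLOCKLIST

def blocked_expanded(query):
    """Expansion: the blocklist term and its related terms are caught."""
    q = query.lower()
    return any(
        q == term or q in SYNONYMS.get(term, set())
        for term in BLOCKLIST
    )

if __name__ == "__main__":
    for q in ["KKK", "white power", "blood and soil"]:
        print(q, blocked_exact(q), blocked_expanded(q))
    # Exact matching blocks only "KKK"; the expansion step also catches
    # the related phrases The Markup found unblocked.
```

Without the expansion step, only the literal string is stopped, which is consistent with what The Markup reported.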
Is it possible that Google is controlling how it does fuzzification? In order to get clicks and advertising, does Google block specific terms but omit the term expansion and synonym identification settings which would eliminate the words and phrases identified by The Markup’s investigative team?
These references to synonym expansion and query expansion are likely to be unfamiliar to some people. Nevertheless, fuzzy is in the hands of those who set statistical thresholds.
Fuzzy is not real, but the search results are. Ad money is a powerful force in some situations. The article seems to have uncovered a couple of enlightening examples. String matching and synonym expansion seem to be out of step with each other. Some fuzzification may be helpful in the hate speech methods.
Stephen E Arnold, April 15, 2021
India May Use AI to Remove Objectionable Online Content
April 7, 2021
India’s Information Technology Act, 2000 provides for the removal of certain unlawful content online, like child pornography, private images of others, or false information. Of course, it is difficult, if not impossible, to keep up with identifying and removing such content using just human moderators. Now we learn from the Orissa Post that “Govt Mulls Using AI to Tackle Social Media Misuse.” The write-up states:
“This step was proposed after the government witnessed widespread public disorder because of the spread of rumours in mob lynching cases. The Ministry of Home Affairs has taken up the matter and is exploring ways to implement it. On the rise in sharing of fake news over social media platforms such as Facebook, Twitter and WhatsApp, Minister of Electronics and Information Technology Ravi Shankar Prasad had said in Lok Sabha that ‘With a borderless cyberspace coupled with the possibility of instant communication and anonymity, the potential for misuse of cyberspace and social media platforms for criminal activities is a global issue.’ Prasad explained that cyberspace is a complex environment of people, software, hardware and services on the internet. He said he is aware of the spread of misinformation. The Information Technology (IT) Act, 2000 has provisions for removal of objectionable content. Social media platforms are intermediaries as defined in the Act. Section 79 of the Act provides that intermediaries are required to disable/remove unlawful content on being notified by the appropriate government or its agency.”
The Ministry of Home Affairs has issued several advisories related to real-world consequences of online content since the Act passed, including one on the protection of cows, one on the prevention of cybercrime, and one on lynch mobs spurred on by false rumors of child kidnappings. The central government hopes the use of AI will help speed the removal of objectionable content and reduce its impact on its citizens. And cows.
Cynthia Murrell, April 7, 2021
Historical Revisionism: Twitter and Wikipedia
March 24, 2021
I wish I could recall the name of the slow-talking, wild-eyed professor who lectured about Mr. Stalin’s desire to have the history of the Soviet Union modified. The tendency was evident early in his career. Ioseb Besarionis dze Jughashvili became Stalin, so fiddling with received wisdom verified by Ivory Tower types should come as no surprise.
Now we have Google and the right to be forgotten. As awkward as deleting pointers to content may be, digital information invites “reeducation”.
I learned in “Twitter to Appoint Representative to Turkey” that the extremely positive social media outfit will interact with the country’s government. The idea is to make sure content is just A-Okay. Changing tweets for money is a pretty good idea. Even better is coordinating the filtering of information with a nation state. But Apple and China seem to be finding a path forward. Maybe Apple in Russia will be a similar success.
A much more interesting approach to shaping reality is alleged in “Non-English Editions of Wikipedia Have a Misinformation Problem.” Wikipedia has a stellar track record of providing fact-rich, neutral information, I believe. This “real news” story states:
The misinformation on Wikipedia reflects something larger going on in Japanese society. These WWII-era war crimes continue to affect Japan’s relationships with its neighbors. In recent years, as Japan has seen an increase in the rise of nationalism, then–Prime Minister Shinzo Abe argued that there was no evidence of Japanese government coercion in the comfort women system, while others tried to claim the Nanjing Massacre never happened.
I am interested in these examples because each provides some color to one of my information “laws”. I have dubbed these “Arnold’s Precepts of Online Information.” Here’s the specific law which provides a shade tree for these examples:
Online information invites revisionism.
Stated another way, when “facts” are online, these are malleable, shapeable, and subjective.
When one runs a query on swisscows.com and then the same query on bing.com, ask:
Are these services indexing the same content?
The answer for me is, “No.” Filters, decisions about what to index, and update calendars shape the reality depicted online. Primary sources are a fine idea, but when those sources are shaped as well, what does one do?
The answer is like one of those Borges stories. Deleting and shaping content is more environmentally friendly than burning written records. A Python script works with less smoke.
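Curious about the swisscows.com versus bing.com question above? A few lines of Python can quantify it. This is a minimal sketch; the URL lists are placeholders to be replaced with actual results captured for the same query:

```python
# Rough sketch: measure how much two engines' result lists overlap for
# one query. The URLs below are placeholders, not real captured results.

def overlap(results_a, results_b):
    """Jaccard similarity of two result sets: 0.0 (disjoint) to 1.0."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

swisscows_results = ["https://example.com/1", "https://example.com/2"]
bing_results = ["https://example.com/2", "https://example.com/3"]

print(f"Overlap: {overlap(swisscows_results, bing_results):.0%}")
# A low overlap suggests the two services are not indexing, or at
# least not surfacing, the same content for the query.
```

A low score does not say which engine is “right”; it only shows that the two shaped realities diverge.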
Stephen E Arnold, March 24, 2021
Social Audio Service Clubhouse Blocked in Oman
March 15, 2021
Just a quick note to document Oman’s blocking of the social audio service Clubhouse. The story “Oman Blocks Clubhouse, App Used for Free Debates in Mideast” appeared on March 15, 2021. The invitation-only service has hosted Silicon Valley luminaries and those who wrangled an invitation via connections or social engineering. The idea is similar to the CB radio chats popular with over-the-road truckers in the United States. There’s no motion picture dramatizing the hot service, but a “Smokey and the Bandit” remake starring the stars of the venture capital game and the digital movers and shakers could be in the works. Elon Musk’s character could be played by Brad Pitt. Instead of a Pontiac Firebird, the Tesla is the perfect vehicle for movers and shakers in the Clubhouse.
Stephen E Arnold, March 15, 2021