The Addiction Analogy: The Cancellation of Influencer Rand Fishkin

May 5, 2021

Another short item. I read a series of tweets which you may be able to view at this link. The main idea is that an influencer was to give a talk about marketing. The unnamed organizer did not like Influencer Fishkin’s content. And what was that content? Information and observations critical of the outstanding commercial enterprises Facebook and Google. The apparent points of irritation were Influencer Fishkin’s statements to the effect that the two estimable outfits (Facebook and Google) were not “friendly, in-your-corner partners.” Interesting, but for me that was only part of the story.

Here’s what I surmised from the information provided by Influencer Fishkin:

  1. Manipulation is central to the way in which these two lighthouse firms operate in the dark world of online
  2. Both venerated companies function without consequences for their actions designed to generate revenue
  3. The treasured entities apply the model and pattern to “sector after sector.”

Beyond Search loves these revered companies.

But there is one word which casts a Beijing-in-a-sandstorm color over Influencer Fishkin’s remarks. And that word is?

Addiction

The idea is that these cherished organizations use their market position (which some have described as a monopoly set up) and specific content to make it difficult for a “user” of the “free” service to kick the habit.

My hunch is that neither of these esteemed commercial enterprises wants to be characterized as a purveyor of gateway drugs, digital opioids, or artificers who put large monkeys on “users’” backs.

That’s not a good look.

Hence, cancellation is a pragmatic fix, is it not?

Stephen E Arnold, May 5, 2021

Selective YouTube Upload Filtering or Erratic Smart Software?

May 4, 2021

I received some information about a YouTuber named Aquachigger. I watched this person’s eight-minute video in which Aquachigger explained that his videos had been downloaded from YouTube. Then an individual (whom I shall describe as an alleged bad actor) uploaded those Aquachigger videos with the alleged bad actor’s voiceover. I think the technical term for this is a copyright violation taco.

I am not sure who did what in this quite unusual recycling of user content. What’s clear is that YouTube’s mechanism to determine if an uploaded video violates Google rules (who really knows what these are other than the magic algorithms, which operate like tireless, non-human Amazon warehouse workers) did not flag the recycled content. Allegedly Google’s YouTube digital third-grade teacher software can spot copyright violations and give the bad actor a chance to rehabilitate an offending video.

According to Aquachigger, content was appropriated, and then, via logic which is crystalline to Googlers, YouTube notified Aquachigger that his channel would be terminated for copyright violation. Yep, the “creator” Aquachigger would be banned from YouTube, losing ad revenue and subscriber access, because an alleged bad actor took the Aquachigger content, slapped an audio track over it, and monetized that content. The alleged bad actor is generating revenue by unauthorized appropriation of another person’s content. The key is that the alleged bad actor generates more clicks than the “creator” Aquachigger.

Following this?

I decided to test the YouTube embedded content filtering system. I inserted a 45 second segment from a Carnegie Mellon news release about one of its innovations. I hit the upload button and discovered that after the video was uploaded to YouTube, the Googley system informed me that the video with the Carnegie Mellon news snip required further processing. The Googley system labored for three hours. I decided to see what would happen if I uploaded the test segment to Facebook. Zippity-doo. Facebook accepted my test video.

What I learned from my statistically insignificant test was that I could formulate some tentative questions; for example:

  1. If YouTube could “block” my upload of the video PR snippet, would YouTube be able to block the Aquachigger bad actor’s recycled Aquachigger content?
  2. Why would YouTube block a snippet of a news release video from a university touting its technical innovation?
  3. Why would YouTube create the perception that Aquachigger would be “terminated”?
  4. Would YouTube allow the unauthorized use of Aquachigger content in order to derive more revenue from that content than from the much smaller Aquachigger follower base?

Interesting questions. I don’t have answers, but this Aquachigger incident and my test indicate that consistency is the hobgoblin of some smart software. That’s why I laughed when I navigated to Jigsaw, a Google service, and learned that Google is committed to “protecting voices in conversation.” Furthermore:

Online abuse and toxicity stops people from engaging in conversation and, in extreme cases, forces people offline. We’re finding new ways to reduce toxicity, and ensure everyone can safely participate in online conversations.

I also learned:

Much of the world’s internet users experience digital censorship that restricts access to news, information, and messaging apps. We’re [Google] building tools to help people access the global internet.

Like I said, “Consistency.” Ho ho ho.

Stephen E Arnold, May 4, 2021

Do Tech Monopolies Have to Create Enforcement Units?

April 26, 2021

Two online enforcement articles struck me as thought provoking.

The first was the Amazon announcement that it would kick creators (people who stream on the Twitch service) off the service for missteps off the platform. This is an interesting statement, and you can get some color in “Twitch to Boot Users for Transgressions Elsewhere.” In my discussion with my research team about final changes to my Amazon policeware lecture, I asked the group about Twitch banning individuals who create video streams and push them to the Twitch platform.

There were several points of view. Here’s a summary of the comments:

  • Yep, definitely
  • No, free country
  • This has been an informal policy for a long time. (Example: SweetSaltyPeach, a streamer from South Africa who garnered attention by assembling toys whilst wearing interesting clothing. Note: She morphed into the more tractable persona RachelKay.)

There may be a problem for Twitch, and I am not certain Amazon can solve it. Possibly Amazon – even with its robust policeware technology – cannot control certain activities off the platform. A good example is the persona presented on Twitch as iBabyRainbow. Here’s a snap of the Twitch personality providing baseball batting instructions to legions of fans by hitting eggs with her fans’ names on them:

[Image: iBabyRainbow batting eggs bearing fans’ names]

There is an interesting persona on the site NewRecs. It too seems very similar to the Amazon persona. The colors are similar; the makeup conventions are similar; and the unicorn representation appears in both images. Even the swimming pool featured on Twitch appears in the NewRecs representation of the persona BabyRainbow.

[Image: the BabyRainbow persona on NewRecs]

What is different is that on NewRecs, the content creator is named “BabyRainbow.” Exploration of the BabyRainbow persona reveals some online links which might raise some eyebrows in Okoboji, Iowa. One example is the link between BabyRainbow and the site Chaturbate.

My research team spotted the similarity quickly. Amazon, if it does know about the coincidence, has not taken action over the persona’s presence on Twitch versus NewRecs versus Chaturbate and some other “interesting” services which exist.

So Twitch enforcement is ignoring certain behavior whilst punishing other types of behavior. Neither Amazon nor Twitch is talking much about iBabyRainbow or other parental or law enforcement-type actions.

The second development is the article “Will YouTube Ever Properly Deal with Its Abusive Stars?” The write up states:

YouTube has long had a problem with acknowledging and dealing with the behavior of the celebrities it helped to create… YouTube is but one of many major platforms eager to distance themselves from the responsibility of their position by claiming that their hands-off approach and frequent ignorance over what they host is a free speech issue. Even though sites like YouTube, Twitter, Substack, and so on have rules of conduct and claim to be tough on harassment, the evidence speaks to the contrary.

The essay points out that YouTube has taken action against certain individuals whose off-YouTube behavior was interesting, possibly inappropriate, and maybe in violation of certain government laws. But the essay asserts, about a YouTuber who pranked people and allegedly bullied people:

Dobrik’s channel was eventually demonetized by YouTube, but actions like this feel too little too late given how much wealth he’s accumulated over the years. Jake Paul is still pulling in big bucks from his channel. Charles was recently demonetized, but his follower count remains pretty unscathed. And that doesn’t even include all the right-wing creeps pulling in big bucks from YouTube. Like with any good teary apology video, the notion of true accountability seems unreachable.

To what do these two examples sum? The Big Tech companies may have to add law enforcement duties to their checklist of nation-state behavior. When a government takes an action, there are individuals with whom one can speak. What rights does talent on an ad-based platform have? Generate money and get a free pass. Behave in a manner which might lead to a death penalty in some countries? Keep on truckin’? The online ad outfit struggles to make clear exactly what it is doing with censorship and other activities like changing the rules for APIs. It will be interesting to see what the GOOG tries to do.

Consider this: What if Mr. Dobrik and iBabyRainbow team up and do a podcast? Would Apple and Spotify bid for rights? How would the tech giants Amazon and Google respond? These are questions unthinkable prior to the unregulated, ethics free online world of 2021.

Stephen E Arnold, April 26, 2021

Preserving History? Twitter Bans Archived Trump Tweets

April 19, 2021

The National Archives of the United States archives social media accounts of politicians.  Former President Donald Trump’s Twitter account is among them.  One of the benefits of archived Twitter accounts is that users can read and interact with old tweets.  Twitter, however, banned Trump in early 2021 because he was deemed a threat to public safety.  Politico explains the trouble the National Archives and Records Administration currently has getting Trump’s old tweets back online, “National Archives Can’t Resurrect Trump’s Tweets, Twitter Says.”

Other former Trump administration officials have their old tweets active on Twitter.  Many National Archives staff view Twitter’s refusal to reactivate the tweets as censorship.  Trump’s controversial tweets are part of a growing battle between Washington and tech giants, where the latter censors conservatives.  Supreme Court Justice Clarence Thomas lamented that the tech companies had so much control over communication.

Twitter is working with the National Archives on preserving Trump’s tweets.  Twitter refuses to host any of Trump’s tweets, stating they glorified violence.  The tweets will be available on the Donald J. Trump Presidential Library web site.

“Nevertheless, the process of preserving @realDonaldTrump’s tweets remains underway, NARA’s [James] Pritchett said, and since the account is banned from Twitter, federal archivists are ‘working to make the exported content available … as a download’ on the Trump Presidential Library website.

‘Twitter is solely responsible for the decision of what content is available on their platform,’ Pritchett said. ‘NARA works closely with Twitter and other social media platforms to maintain archived social accounts from each presidential administration, but ultimately the platform owners can decline to host these accounts. NARA preserves platform-independent copies of social media records and is working to make that content available to the public.’”

Is it really censorship if Trump’s tweets are publicly available, just not through their original medium?  Conservative politicians do have a valid argument.  Big tech does control and influence communication, but does that give them the right to censor opinions?  Future data archeologists may wonder about the gap.

Whitney Grace, April 19, 2021

Google Stop Words: Close Enough for the Mom and Pop Online Ad Vendor

April 15, 2021

I remember from a statistics lecture given by a fellow named Dr. Peplow (maybe) that fuzzy is one of the main characteristics of statistics. The idea is that a percentage is not a real entity; for example, the average number of lions in a litter is three, give or take a couple of these magnets for hunters and poachers. Depending upon the data set, the “real” number may be 3.2 cubs in a litter. Who has ever seen a fractional lion? Certainly not me.

Why am I thinking fuzzy? Google is into data. The company collects, counts, and transforms “real” data into actions. Whip in some smart software, and the company has processes which transform an advertiser’s need to reach eyeballs with some statistically validated interest in whatever the Mad Ave folks are trying to sell.

“Google Has a Secret Blocklist that Hides YouTube Hate Videos from Advertisers—But It’s Full of Holes” suggests that some of the Google procedures are fuzzy. The uncharitable might suggest that Google wants to get close enough to collect ad money. Horseshoe aficionados use the phrase “close enough for horseshoes” to indicate a toss which gets a point or blocks an opponent’s effort. That seems to be one possible message from The Markup article.

I noted this passage in the essay:

If you want to find YouTube videos related to “KKK” to advertise on, Google Ads will block you. But the company failed to block dozens of other hate and White nationalist terms and slogans, an investigation by The Markup has found. Using a list of 86 hate-related terms we compiled with the help of experts, we discovered that Google uses a blocklist to try to stop advertisers from building YouTube ad campaigns around hate terms. But less than a third of the terms on our list were blocked when we conducted our investigation.

What seems to be happening is that Google’s methods for taking a term and then “broadening” it so that related terms are identified is not working. The idea is that related terms with a higher “score” are more directly linked to the original term. Words and phrases with lower “scores” are not closely related. The article uses the example of the term KKK.

I learned:

Google Ads suggested millions upon millions of YouTube videos to advertisers purchasing ads related to the terms “White power,” the fascist slogan “blood and soil,” and the far-right call to violence “racial holy war.” The company even suggested videos for campaigns with terms that it clearly finds problematic, such as “great replacement.” YouTube slaps Wikipedia boxes on videos about the “the great replacement,” noting that it’s “a white nationalist far-right conspiracy theory.” Some of the hundreds of millions of videos that the company suggested for ad placements related to these hate terms contained overt racism and bigotry, including multiple videos featuring re-posted content from the neo-Nazi podcast The Daily Shoah, whose official channel was suspended by YouTube in 2019 for hate speech.

It seems to me that Google is filtering specific words and phrases on a stop word list. Then the company is not identifying related terms, particularly words which are synonyms for the word on the stop list.
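The difference between matching a literal stop word and matching with synonym expansion can be sketched in a few lines of Python. (The blocklist, the synonym table, and the functions below are made-up illustrations, not Google’s actual data or method.)

```python
# Illustrative sketch: exact string matching blocks only the literal term,
# while a synonym-expansion step also catches related phrases.

BLOCKLIST = {"kkk"}  # hypothetical exact-match stop word list

# A hand-built related-term table an expansion step might consult.
RELATED_TERMS = {
    "kkk": {"ku klux klan", "white power"},
}

def blocked_exact(query: str) -> bool:
    """Exact matching: only literal blocklist entries are caught."""
    return query.lower().strip() in BLOCKLIST

def blocked_expanded(query: str) -> bool:
    """Blocklist plus synonym expansion: related terms are caught too."""
    q = query.lower().strip()
    if q in BLOCKLIST:
        return True
    # Check whether the query appears among any blocked term's synonyms.
    return any(q in synonyms for synonyms in RELATED_TERMS.values())

print(blocked_exact("KKK"))             # True: the literal term is blocked
print(blocked_exact("white power"))     # False: the synonym slips through
print(blocked_expanded("white power"))  # True: expansion closes the gap
```

The Markup’s findings read as though only the first function is in play: the literal term is stopped, and the related terms sail past.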

Is it possible that Google is controlling how it does fuzzification? In order to get clicks and advertising, Google blocks specific terms and omits the term expansion and synonym identification settings which would eliminate the words and phrases identified by The Markup’s investigative team?

These references to synonym expansion and query expansion are likely to be unfamiliar to some people. Nevertheless, fuzzy is in the hands of those who set statistical thresholds.

Fuzzy is not real, but the search results are. Ad money is a powerful force in some situations. The article seems to have uncovered a couple of enlightening examples. String matching coupled with synonym expansion seems to be out of step. Some fuzzification may be helpful in the hate speech methods.

Stephen E Arnold, April 15, 2021

India May Use AI to Remove Objectionable Online Content

April 7, 2021

India’s Information Technology Act, 2000 provides for the removal of certain unlawful content online, like child pornography, private images of others, or false information. Of course, it is difficult, if not impossible, to keep up with identifying and removing such content using just human moderators. Now we learn from the Orissa Post that the “Govt Mulls Using AI to Tackle Social Media Misuse.” The write-up states:

“This step was proposed after the government witnessed widespread public disorder because of the spread of rumours in mob lynching cases. The Ministry of Home Affairs has taken up the matter and is exploring ways to implement it. On the rise in sharing of fake news over social media platforms such as Facebook, Twitter and WhatsApp, Minister of Electronics and Information Technology Ravi Shankar Prasad had said in Lok Sabha that ‘With a borderless cyberspace coupled with the possibility of instant communication and anonymity, the potential for misuse of cyberspace and social media platforms for criminal activities is a global issue.’ Prasad explained that cyberspace is a complex environment of people, software, hardware and services on the internet. He said he is aware of the spread of misinformation. The Information Technology (IT) Act, 2000 has provisions for removal of objectionable content. Social media platforms are intermediaries as defined in the Act. Section 79 of the Act provides that intermediaries are required to disable/remove unlawful content on being notified by the appropriate government or its agency.”

The Ministry of Home Affairs has issued several advisories related to real-world consequences of online content since the Act passed, including one on the protection of cows, one on the prevention of cybercrime, and one on lynch mobs spurred on by false rumors of child kidnappings. The central government hopes the use of AI will help speed the removal of objectionable content and reduce its impact on its citizens. And cows.

Cynthia Murrell, April 7, 2021

Historical Revisionism: Twitter and Wikipedia

March 24, 2021

I wish I could recall the name of the slow talking wild-eyed professor who lectured about Mr. Stalin’s desire to have the history of the Soviet Union modified. The tendency was evident early in his career. Ioseb Besarionis dze Jughashvili became Stalin, so fiddling with received wisdom verified by Ivory Tower types should come as no surprise.

Now we have Google and the right to be forgotten. As awkward as deleting pointers to content may be, digital information invites “reeducation”.

I learned in “Twitter to Appoint Representative to Turkey” that the extremely positive social media outfit will interact with the country’s government. The idea is to make sure content is just A-Okay. Changing tweets for money is a pretty good idea. Even better is coordinating the filtering of information with a nation state. But Apple and China seem to be finding a path forward. Maybe Apple in Russia will be a similar success.

A much more interesting approach to shaping reality is alleged in “Non-English Editions of Wikipedia Have a Misinformation Problem.” Wikipedia has a stellar track record of providing fact-rich, neutral information, I believe. This “real news” story states:

The misinformation on Wikipedia reflects something larger going on in Japanese society. These WWII-era war crimes continue to affect Japan’s relationships with its neighbors. In recent years, as Japan has seen an increase in the rise of nationalism, then­–Prime Minister Shinzo Abe argued that there was no evidence of Japanese government coercion in the comfort women system, while others tried to claim the Nanjing Massacre never happened.

I am interested in these examples because each provides some color to one of my information “laws”. I have dubbed these “Arnold’s Precepts of Online Information.” Here’s the specific law which provides a shade tree for these examples:

Online information invites revisionism.

Stated another way, when “facts” are online, these are malleable, shapeable, and subjective.

When one runs a query on swisscows.com and then the same query on bing.com, ask:

Are these services indexing the same content?

The answer for me is, “No.” Filters, decisions about what to index, and update calendars shape the reality depicted online. Primary sources are a fine idea, but when those sources are shaped as well, what does one do?
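The effect of indexing decisions can be shown with a toy sketch. (The corpora and query below are invented for illustration; real engines are vastly more complex, but the principle is the same: what never enters the index never appears in the results.)

```python
# Toy illustration: the same query run against two engines that made
# different indexing decisions returns different pictures of "reality".

CORPUS_A = {
    "doc1": "solar panel subsidies expanded",
    "doc2": "solar panel recall announced",
}
# Engine B never indexed the recall story.
CORPUS_B = {
    "doc1": "solar panel subsidies expanded",
}

def search(corpus: dict, query: str) -> list:
    """Naive keyword search: return ids of docs containing every query word."""
    words = query.lower().split()
    return [doc_id for doc_id, text in corpus.items()
            if all(w in text.lower() for w in words)]

print(search(CORPUS_A, "solar panel"))  # both stories surface
print(search(CORPUS_B, "solar panel"))  # half the story is simply absent
```

Neither engine is “lying”; each faithfully reports its own index. The user sees only the reality that survived the filtering.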

The answer is like one of those Borges stories. Deleting and shaping content is more environmentally friendly than burning written records. A Python script works with less smoke.

Stephen E Arnold, March 24, 2021

Social Audio Service Clubhouse Blocked in Oman

March 15, 2021

Just a quick note to document Oman’s blocking of the social audio service Clubhouse. The story “Oman Blocks Clubhouse, App Used for Free Debates in Mideast” appeared on March 15, 2021. The invitation-only service has hosted Silicon Valley luminaries and those who wrangled an invitation via connections or social engineering. The idea is similar to the CB radio chats popular with over-the-road truckers in the United States. There’s no motion picture dramatizing the hot service, but a “Smokey and the Bandit” remake starring the hot stars in the venture capital game and the digital movers and shakers could be in the works. Elon Musk’s character could be played by Brad Pitt. Instead of a Pontiac Firebird, the Tesla is the perfect vehicle for movers and shakers in the Clubhouse.

Stephen E Arnold, March 15, 2021

DarkCyber for February 23, 2021 Is Now Available

February 23, 2021

DarkCyber, Series 3, Number 4 includes five stories. The first summarizes the value of an electronic game’s software. Think millions. The second explains that Lokinet is now operating under the brand Oxen. The idea is that the secure services’ offerings are “beefier.” The third story provides an example of how smaller cyber security startups can make valuable contributions in the post-SolarWinds’ era. The fourth story highlights a story about the US government’s getting close to an important security implementation, only to lose track of the mission. And the final story provides some drone dope about the use of unmanned aerial systems on Super Bowl Sunday as FBI agents monitored an FAA imposed no fly zone. You could download the video at this url after we uploaded it to YouTube.

But…

YouTube notified Stephen E Arnold that his interview with Robert David Steele, a former CIA professional, was removed from YouTube. The reason was “bullying.” Mr. Arnold is 76 or 77, and he talked with Mr. Steele about the Jeffrey Epstein allegations. Mr. Epstein was on the radar of Mr. Steele because the legal allegations were of interest to an international tribunal about human trafficking and child sex crime. Mr. Steele is a director of that tribunal. Bullying about a deceased person allegedly involved in decades-long criminal activity? What?

What’s even more interesting is that the DarkCyber videos, which appear every 14 days, focus on law enforcement, intelligence, and cyber crime issues. One law enforcement professional told Mr. Arnold after his Dark Web lecture at the National Cyber Crime Conference in 2020, “You make it clear that investigators have to embrace new technology and not wait for budgets to accommodate more specialists.”

Mr. Arnold told me that he did not click the bright red button wanting Google / YouTube to entertain an appeal. I am not certain about his reasoning, but I assume that Mr. Arnold, who was an advisor to the world’s largest online search system, was indifferent to the censorship. My perception is that Mr. Arnold recognizes that Alphabet, Google, and YouTube are overwhelmed with management challenges, struggling to figure out how to deal with copyright violations, hate content, and sexually related information. Furthermore, Alphabet, Google, and YouTube face persistent legal challenges, employee outcries about discrimination, and ageing systems and methods.

What does this mean? In early March 2021, we will announce other video services which will make the DarkCyber video programs available.

The DarkCyber team is composed of individuals who are not bullies. If anything, the group is more accurately characterized as researchers and analysts who prefer the libraries of days gone by to the zip zip world of thumbtypers, smart software, and censorship of content related to law enforcement and intelligence professionals.

Mr. Arnold will be discussing online click fraud at lunch next week. Would that make an interesting subject for a DarkCyber story? With two firms controlling more than two-thirds of the online advertising market, click fraud is a hot potato topic. How does it happen? What’s done to prevent it? What’s the cost to the advertisers? What are the legal consequences of the activity?

Kenny Toth, February 23, 2021

YouTube Censors a Government Hearing in Ohio

February 2, 2021

It is a strange world we live in. Google’s efforts to curb misinformation on YouTube have led it to take down footage of legislative testimony in Ohio. Cincinnati’s WLWT5 News reports, “YouTube Removes Ohio Committee Video, Citing Misinformation.” We are not surprised the misinformation at hand relates to COVID-19. Digital editor Brian Wiechert writes:

“The video showed Thomas Renz, an attorney for Ohio Stands Up, a citizen group, make the opening testimony during a House committee hearing on a bill that would allow lawmakers to vote down public health orders during the pandemic. In the more than 30-minute testimony, Renz made a number of debunked or baseless claims, including that no Ohioans under the age of 19 have died from COVID-19 – a claim that has been debunked by state data. … The removal, first reported by Ohio Capital Journal, comes days after the Republican lawmakers in the Senate passed a bill that would establish ‘checks and balances’ on fellow GOP Gov. Mike DeWine’s ability to issue and keep in place executive action during the coronavirus pandemic. Proponents of the bills in the House and Senate believe DeWine and the state health department have issued orders during the last 11 months of the pandemic that have remained enacted for longer than necessary and, as a result, have unduly damaged small businesses and the state’s economy. Opponents called it unconstitutional and warned it would decentralize the state’s response during an emergency and cost lives in the process.”

Checks and balances on lifesaving measures during a pandemic—I am sure this is not what our founders had in mind. Good move, Google. Ohio is a flyover state, so maybe it is devalued because it is not as intellectually capable as the Left and Right coasts of the USA? If residents of the state disagree with that assessment, they may wish to do something about the current occupants of their Senate chamber.

Can we blame it on the Google artificial intelligence software?

Cynthia Murrell, February 2, 2021
