Wiki People: One Cannot Find Online Information If It Is Censored

September 2, 2021

Women have borne the brunt of erasure from history, but thanks to web sites like Wikipedia, their stories are shared more than ever. There is a problem with Wikipedia though, says CBC in the article: “Canadian Nobel Scientist’s Deletion From Wikipedia Points To Wider Bias, Study Finds.” Wikipedia is the largest, most comprehensive, and most collaborative encyclopedia in human history. It is maintained by thousands of volunteer editors, who curate the content, verify information, and delete entries.

There are different types of Wikipedia editors. One type is the “inclusionist,” an editor who takes a broad view of what belongs in Wikipedia. The other is the “deletionist,” who holds entries to stricter standards. American sociologist Francesca Tripodi researched the pages editors deleted and discovered that women’s pages are deleted more often than men’s. Tripodi found that women’s biographies accounted for 25% of the pages nominated for deletion, even though they make up only 19% of the profiles.

Experts say it is either gender bias or a notability problem. Notability is the gauge Wikipedia editors use to determine whether a topic deserves a page, and they weigh it against coverage in reliable sources. The way notability is measured, Tripodi explained, leads to gender bias, because less published information exists about women. It also does not help that most editors are men, though there are attempts to add more women:

“Over the years, women have tried to fix the gender imbalance on Wikipedia, running edit-a-thons to change that ratio. Tripodi said these efforts to add notable women to the website have moved the needle — but have also run into roadblocks. ‘They’re welcoming new people who’ve never edited Wikipedia, and they’re editing at these events,’ she said. ‘But then after all of that’s done, after these pages are finally added, they have to double back and do even more work to make sure that the article doesn’t get deleted after being added.’”

Unfortunately, women editors complain they need to do extra work to make sure their profiles are verifiable and stay published. The Wikimedia Foundation acknowledges the lack of pages about women, saying it reflects the gender biases of the wider world. The foundation, however, is committed to increasing the number of pages about women and the number of women editors; the latter has grown by more than 30% in the past year.

That is the problem when there is a lack of verifiable data about women or anyone else erased from history by bias. If there is no information about them, they cannot be found, even by trained research librarians like me. Slick method, right?

Whitney Grace, September 2, 2021

Thailand Does Not Want Frightening Content

August 6, 2021

The prime minister of Thailand is Prayut Chan-o-cha. He is a retired Royal Thai Army officer, and he is not into scary content. What’s the fix? “PM Orders Internet Blocked For Anyone Spreading Info That Might Frighten People” reported:

Prime Minister Prayut Chan-o-cha has ordered internet service providers to immediately block the internet access of anyone who propagates information that may frighten people. The order, issued under the emergency situation decree, was published in the Royal Gazette on Thursday night and takes effect on Friday. It prohibits anyone from “reporting news or disseminating information that may frighten people or intentionally distorting information to cause a misunderstanding about the emergency situation, which may eventually affect state security, order or good morality of the people.”

So what’s “frightening?” I, for one, find the idea of having Internet access blocked frightening. Why not just put the creator of frightening content in one of Thailand’s exemplary and humane prisons? These, as I understand the situation, feature ample space, generous prisoner care services, and healthful food. With an occupancy level of 300 percent, what’s not to like?

Frightening, so take PrisonStudies.org offline, I guess.

Stephen E Arnold, August 6, 2021

Facebook Lets Group Admins Designate Experts. Okay!

August 2, 2021

Facebook once again enlists the aid of humans to impede the spread of misinformation, only this time it has found a way to avoid paying anyone for the service. Tech Times reports, “Facebook Adds Feature to Let Admin in Groups Chose ‘Experts’ to Curb Misinformation.” The move also has the handy benefit of shifting responsibility for bad info away from the company. We wonder—what happened to that smart Facebook software? The article does not say. Citing an article from Business Insider, writer Alec G. does tell us:

“The people who run the communities on Facebook now have the authority to promote individuals within its group to gain the title of ‘expert.’ Then, the individuals dubbed as experts can be the voices of which the public can then base their questions and concerns. This is to prevent misinformation plaguing online communities for a while now.”

But will leaving the designation of “expert” up to admins make the problem worse instead of better? The write-up continues:

“The social platform now empowers specific individuals inside groups who are devoted to solely spreading misinformation-related topics. The ‘Stop the Steal’ group, for example, was created in November 2020 with over 365,000 members. They were convinced that the election for the presidency was a fraud. If Facebook didn’t remove the group two days later, it would continue to have negative effects. Facebook explained that the organization talked about ‘the delegitimization of the election process,’ and called for violence, as reported by the BBC. Even before that, other groups within Facebook promoted violence and calls to action that would harm the civility of the governments.”

Very true. We are reminded of the company’s outsourced Oversight Board created in 2018, a similar shift-the-blame approach that has not worked out so well. Facebook’s continued efforts to transfer responsibility for bad content to others fail to shield it from blame. They also do little to solve the problem and may even make it worse. Perhaps it is time for a different (real) solution.

Cynthia Murrell, August 2, 2021

Putin Has a Kill Switch

July 26, 2021

“Russia Disconnected Itself from the Global Internet in Tests” shares an intriguing factoid. Mr. Putin can disconnect the country from the potato fields near Estonia to the fecund lands where gulags once bloomed. The write up reports:

State communications regulator Roskomnadzor said the tests were aimed at improving the integrity, stability and security of Russia’s Internet infrastructure…

If a pesky cyber gang shuts down the Moscow subway from Liechtenstein, it’s pull-the-plug time. The idea is that Russia will not have to look outside its territory to locate the malefactors. If an outfit like Twitter refuses to conform to Russian law, the socially responsible company may lose some of its Russian content creators.

What other countries will be interested in emulating Russia’s action or licensing the technology? I can think of a few. The Splinter Net is starting to gain momentum. Those ideals about information wanting to be free and the value of distributed systems seem out of step with Mr. Putin’s kill switch.

Stephen E Arnold, July 26, 2021

Russia: Getting Ready for Noose Snugging

June 23, 2021

Tens of thousands of Russian citizens have taken to the streets in protest and the government is cracking down. On social media platforms, that is. Seattle PI reports, “Russia Fines Facebook, Telegram Over Banned Content.” The succinct write-up specifies that Facebook was just fined 17 million rubles (about $236,000) and messaging app Telegram 10 million rubles ($139,000) by a Moscow court. Though it was unclear what specific content prompted these latest fines, this seems to be a trend. We learn:

“It was the second time both companies have been fined in recent weeks. On May 25, Facebook was ordered to pay 26 million rubles ($362,000) for not taking down content deemed unlawful by the Russian authorities. A month ago, Telegram was also ordered to pay 5 million rubles ($69,000) for not taking down calls to protest. Earlier this year, Russia’s state communications watchdog Roskomnadzor started slowing down Twitter and threatened it with a ban, also over its alleged failure to take down unlawful content. Officials maintained the platform failed to remove content encouraging suicide among children and containing information about drugs and child pornography. The crackdown unfolded after Russian authorities criticized social media platforms that have been used to bring tens of thousands of people into the streets across Russia this year to demand the release of jailed Russian opposition leader Alexei Navalny, President Vladimir Putin’s most well-known critic. The wave of demonstrations has been a major challenge to the Kremlin. Officials alleged that social media platforms failed to remove calls for children to join the protests.”

Yes, Putin would have us believe it is all about the children. He has expressed to the police his concern for young ones who are tempted into “illegal and unsanctioned street actions” by dastardly grown-ups on social media. His concern is touching.

Beyond Search thinks Mr. Putin’s actions are about control. An article in Russian named “Sovereign DNS Is Already Here and You Haven’t Noticed” provides information that suggests Mr. Putin’s telecommunications authority has put the machinery in place to control Internet access within Russia.

Fines may be a precursor to more overt action against US companies and content the Russian authorities deem inappropriate.

Cynthia Murrell, June 23, 2021

China: More Than a Beloved Cuisine, Policies Are Getting Traction Too

June 16, 2021

As historical information continues to migrate from physical books to online archives, governments are given the chance to enact policies right out of Orwell’s 1984. And why limit those efforts to one’s own country? Quartz reports that “China’s Firewall Is Spreading Globally.” The crackdown on protesters in Tiananmen Square on June 4, 1989 is a sore spot for China. It would rather that those old enough to remember it forget, and that those too young to have seen it on the news never learn about it. The subject has been taboo within the country since it happened, but now China is harassing the rest of the world about it and other sensitive topics. Worse, the efforts appear to be working.

Writer Jane Li begins with the plight of activist group 2021 Hong Kong Charter, whose website is hosted by Wix. The site’s mission is to build support in the international community for democracy in Hong Kong. Though its authors now live in countries outside China and Wix is based in Israel, China succeeded in strong-arming Wix into taking the site down. The action did not stick—the provider apologized and reinstated the site after being called out in public. However, it is disturbing that it was disabled in the first place. Li writes:

“The incident appears to be a test case for the extraterritorial reach of the controversial national security law, which was implemented in Hong Kong one year ago. While Beijing has billed the law as a way to restore the city’s stability and prosperity, critics say it helps the authorities to curb dissent as it criminalizes a broad swathe of actions, and is written vaguely enough that any criticism of the Party could plausibly be deemed in violation of the law. In a word, the law is ‘asserting extraterritorial jurisdiction over every person on the planet,’ wrote Donald Clarke, a professor of law at George Washington University, last year. Already academics teaching about China at US or European universities are concerned they or their students could be exposed to greater legal risk—especially should they discuss Chinese politics online in sessions that could be recorded or joined by uninvited participants. By sending the request to Wix, the Hong Kong police are not only executing the expansive power granted to them by the security law, but also sending a signal to other foreign tech firms that they could be next to receive a request for hosting content offensive in the eyes of Beijing.”

One nation attempting to seize jurisdiction around the world may seem preposterous, but Wix is not the only tech company to take this law seriously. On the recent anniversary of the Tiananmen Square crackdown, searches for the event’s iconic photo “tank man” turned up empty on MS Bing. Microsoft blamed it on an “accidental human error.” Sure, that is believable coming from a company that is known to cooperate with Chinese censors within that country. Then there was the issue with Google-owned YouTube. The US-based group Humanitarian China hosted a ceremony on June 4 commemorating the 1989 event, but found the YouTube video of its live stream was unavailable for days. What a coincidence! When contacted, YouTube simply replied there may be a possible technical issue, what with Covid and all. Of course, Google has its own relationship to censorship in China.

Not to be outdone, Facebook suspended the live feed of the group’s commemoration with the auto-notification that it “goes against our community standards on spam.” Right. Naturally, when chastised, the platform apologized and called the move a technical error. We sense a pattern here. One more firm deserves mention, though to be fair some of these participants were physically in China: Last year, Zoom disabled Humanitarian China’s account mid-meeting after the group hosted its Covid-safe June 4th commemoration on the platform. At least that company did not blame the action on a glitch; it made plain it was acting at the direct request of Beijing. The honesty is refreshing.

Cynthia Murrell, June 16, 2021

The Addiction Analogy: The Cancellation of Influencer Rand Fishkin

May 5, 2021

Another short item. I read a series of tweets which you may be able to view at this link. The main idea is that an influencer was to give a talk about marketing. The unnamed organizer did not like Influencer Fishkin’s content. And what was that content? Information and observations critical of the outstanding commercial enterprises Facebook and Google. The apparent points of irritation were Influencer Fishkin’s statements to the effect that the two estimable outfits (Facebook and Google) were not “friendly, in-your-corner partners.” Interesting, but for me that was only part of the story.

Here’s what I surmised from the information provided by Influencer Fishkin:

  1. Manipulation is central to the way in which these two lighthouse firms operate in the dark world of online
  2. Both venerated companies function without consequences for their actions designed to generate revenue
  3. The treasured entities apply the model and pattern to “sector after sector.”

Beyond Search loves these revered companies.

But there is one word which casts a Beijing-in-a-sandstorm color over Influencer Fishkin’s remarks. And that word is?

Addiction

The idea is that these cherished organizations use their market position (which some have described as a monopoly set up) and specific content to make it difficult for a “user” of the “free” service to kick the habit.

My hunch is that neither of these esteemed commercial enterprises wants to be characterized as a purveyor of gateway drugs, digital opioids, or artificers who put large monkeys on “users’” backs.

That’s not a good look.

Hence, cancellation is a pragmatic fix, is it not?

Stephen E Arnold, May 5, 2021

Selective YouTube Upload Filtering or Erratic Smart Software?

May 4, 2021

I received some information about a YouTuber named Aquachigger. I watched this person’s eight-minute video in which Aquachigger explained that his videos had been downloaded from YouTube. Then an individual (whom I shall describe as an alleged bad actor) uploaded those Aquachigger videos with the alleged bad actor’s voice over. I think the technical term for this is a copyright violation taco.

I am not sure who did what in this quite unusual recycling of user content. What’s clear is that YouTube has a mechanism to determine whether an uploaded video violates Google rules (who really knows what these are, other than the magic algorithms which operate like tireless, non-human Amazon warehouse workers). Allegedly Google’s YouTube digital third-grade-teacher software can spot copyright violations and give the bad actor a chance to rehabilitate an offending video.

According to Aquachigger, his content was appropriated, and then, via logic which is crystalline to Googlers, YouTube notified Aquachigger that his channel would be terminated for copyright violation. Yep, the “creator” Aquachigger would be banned from YouTube, losing ad revenue and subscriber access, because an alleged bad actor took the Aquachigger content, slapped an audio track over it, and monetized that content. The alleged bad actor is generating revenue by unauthorized appropriation of another person’s content. The key is that the alleged bad actor generates more clicks than the “creator” Aquachigger.
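For what it is worth, here is a minimal sketch of how frame-level fingerprint matching could flag a re-uploaded video even when the audio track has been swapped. This is purely illustrative: YouTube’s Content ID internals are not public, and every function name and threshold below is invented for the example.

```python
# Hypothetical sketch of duplicate-video detection via frame fingerprints.
# Not YouTube's actual system; names and thresholds are invented.

def average_hash(frame):
    """Reduce one grayscale frame (a list of 0-255 pixel values) to bits:
    each pixel becomes 1 if it is brighter than the frame's average."""
    avg = sum(frame) / len(frame)
    return [1 if p > avg else 0 for p in frame]

def hamming(a, b):
    """Count positions where two equal-length bit lists differ."""
    return sum(x != y for x, y in zip(a, b))

def looks_like_reupload(original_frames, upload_frames, max_bit_diff=3):
    """Flag the upload if most frame hashes nearly match the original's.
    Swapping the audio leaves these visual fingerprints untouched."""
    matches = sum(
        hamming(average_hash(o), average_hash(u)) <= max_bit_diff
        for o, u in zip(original_frames, upload_frames)
    )
    return matches / len(original_frames) >= 0.8  # assumed match threshold

# Toy data: 16-pixel "frames"; the re-upload is the original with mild noise.
original = [[10, 200, 30, 180] * 4, [250, 20, 240, 15] * 4]
reupload = [[12, 198, 33, 179] * 4, [248, 25, 239, 18] * 4]
print(looks_like_reupload(original, reupload))  # True
```

If anything resembling this visual matching is running, a voice-over alone should not hide a re-upload, which makes the reported outcome (the original creator, not the copier, facing termination) all the more puzzling.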

Following this?

I decided to test the YouTube embedded content filtering system. I inserted a 45-second segment from a Carnegie Mellon news release about one of its innovations. I hit the upload button and discovered that after the video was uploaded to YouTube, the Googley system informed me that the video with the Carnegie Mellon news snip required further processing. The Googley system labored for three hours. I decided to see what would happen if I uploaded the test segment to Facebook. Zippity-doo. Facebook accepted my test video.

What I learned from my statistically insignificant test is that I could formulate some tentative questions; for example:

  1. If YouTube could “block” my upload of the video PR snippet, would YouTube be able to block the alleged bad actor’s recycled Aquachigger content?
  2. Why would YouTube block a snippet of a news release video from a university touting its technical innovation?
  3. Why would YouTube create the perception that Aquachigger would be “terminated”?
  4. Would YouTube allow the unauthorized use of Aquachigger content in order to derive more revenue from that content than it could from the much smaller Aquachigger follower base?

Interesting questions. I don’t have answers, but this Aquachigger incident and my test indicate that consistency is the hobgoblin of some smart software. That’s why I laughed when I navigated to Jigsaw, a Google service, and learned that Google is committed to “protecting voices in conversation.” Furthermore:

Online abuse and toxicity stops people from engaging in conversation and, in extreme cases, forces people offline. We’re finding new ways to reduce toxicity, and ensure everyone can safely participate in online conversations.

I also learned:

Much of the world’s internet users experience digital censorship that restricts access to news, information, and messaging apps. We’re [Google] building tools to help people access the global internet.

Like I said, “Consistency.” Ho ho ho.

Stephen E Arnold, May 4, 2021

Do Tech Monopolies Have to Create Enforcement Units?

April 26, 2021

Two online enforcement articles struck me as thought provoking.

The first was the Amazon announcement that it would kick creators (people who stream on the Twitch service) off the service for missteps off the platform. This is an interesting statement, and you can get some color in “Twitch to Boot Users for Transgressions Elsewhere.” In my discussion with my research team about final changes to my Amazon policeware lecture, I asked the group about Twitch banning individuals who create video streams and push them to the Twitch platform.

There were several points of view. Here’s a summary of the comments:

  • Yep, definitely
  • No, free country
  • This has been an informal policy for a long time. (Example: SweetSaltyPeach, a streamer from South Africa who garnered attention by assembling toys whilst wearing interesting clothing. Note: She morphed into the more tractable persona RachelKay.)

There may be a problem for Twitch, and I am not certain Amazon can solve it. Possibly Amazon – even with its robust policeware technology – cannot control certain activities off the platform. A good example is the persona presented on Twitch as iBabyRainbow. Here’s a snap of the Twitch personality providing baseball batting instructions to legions of fans by hitting eggs with her fans’ names on them:

[Image: iBabyRainbow batting eggs labeled with her fans’ names]

There is an interesting persona on the site NewRecs which seems very similar to the Amazon persona. The colors are similar; the makeup conventions are similar; and the unicorn representation appears in both images. Even the swimming pool featured on Twitch appears in the NewRecs representation of the persona BabyRainbow.

[Image: the BabyRainbow persona as presented on NewRecs]

What is different is that on NewRecs, the content creator is named “BabyRainbow.” Exploration of the BabyRainbow persona reveals some online links which might raise some eyebrows in Okoboji, Iowa. One example is the link between BabyRainbow and the site Chaturbate.

My research team spotted the similarity quickly. Amazon, if it knows about the coincidence, has not taken action over the persona’s presence on Twitch versus NewRecs versus Chaturbate and some other “interesting” services which exist.

So Twitch enforcement appears to ignore certain behavior whilst punishing other types of behavior. Neither Amazon nor Twitch is talking much about iBabyRainbow or other parental or law enforcement-type actions.

The second development is the article “Will YouTube Ever Properly Deal with Its Abusive Stars?” The write up states:

YouTube has long had a problem with acknowledging and dealing with the behavior of the celebrities it helped to create… YouTube is but one of many major platforms eager to distance themselves from the responsibility of their position by claiming that their hands-off approach and frequent ignorance over what they host is a free speech issue. Even though sites like YouTube, Twitter, Substack, and so on have rules of conduct and claim to be tough on harassment, the evidence speaks to the contrary.

The essay points out that YouTube has taken action against certain individuals whose off-YouTube behavior was interesting, possibly inappropriate, and maybe in violation of certain government laws. But the essay asserts, about a YouTuber who pranked people and allegedly bullied people:

Dobrik’s channel was eventually demonetized by YouTube, but actions like this feel too little too late given how much wealth he’s accumulated over the years. Jake Paul is still pulling in big bucks from his channel. Charles was recently demonetized, but his follower count remains pretty unscathed. And that doesn’t even include all the right-wing creeps pulling in big bucks from YouTube. Like with any good teary apology video, the notion of true accountability seems unreachable.

To what do these two examples sum? The Big Tech companies may have to add law enforcement duties to their checklist of nation state behavior. When a government takes an action, there are individuals with whom one can speak. What rights does talent on an ad-based platform have? Generate money and get a free pass? Behave in a manner which might lead to a death penalty in some countries? Keep on truckin’? The online ad outfit struggles to make clear exactly what it is doing with censorship and other activities like changing the rules for APIs. It will be interesting to see what the GOOG tries to do.

Consider this: What if Mr. Dobrik and iBabyRainbow team up and do a podcast? Would Apple and Spotify bid for rights? How would the tech giants Amazon and Google respond? These are questions unthinkable prior to the unregulated, ethics-free online world of 2021.

Stephen E Arnold, April 26, 2021

Preserving History? Twitter Bans Archived Trump Tweets

April 19, 2021

The National Archives of the United States archives social media accounts of politicians.  Former President Donald Trump’s Twitter account is among them.  One of the benefits of archived Twitter accounts is that users can read and interact with old tweets.  Twitter, however, banned Trump in early 2021 because he was deemed a threat to public safety.  Politico explains the trouble the National Archives and Records Administration currently has getting Trump’s old tweets back online, “National Archives Can’t Resurrect Trump’s Tweets, Twitter Says.”

Other former Trump administration officials have their old tweets active on Twitter. Many National Archives staff view Twitter’s refusal to reactivate the tweets as censorship. Trump’s controversial tweets are part of a growing battle between Washington and the tech giants, which conservatives accuse of censoring them. Supreme Court Justice Clarence Thomas lamented that the tech companies have so much control over communication.

Twitter is working with the National Archives on preserving Trump’s tweets, but it refuses to host them, stating they glorified violence. The tweets will instead be available on the Donald J. Trump Presidential Library web site.

“Nevertheless, the process of preserving @realDonaldTrump’s tweets remains underway, NARA’s [James] Pritchett said, and since the account is banned from Twitter, federal archivists are ‘working to make the exported content available … as a download’ on the Trump Presidential Library website.

‘Twitter is solely responsible for the decision of what content is available on their platform,’ Pritchett said. ‘NARA works closely with Twitter and other social media platforms to maintain archived social accounts from each presidential administration, but ultimately the platform owners can decline to host these accounts. NARA preserves platform independent copies of social media records and is working to make that content available to the public.’”

Is it really censorship if Trump’s tweets are publicly available, just not through their original medium? Conservative politicians do have a valid argument. Big tech does control and influence communication, but does that give them the right to censor opinions? Future data archeologists may wonder about the gap.

Whitney Grace, April 19, 2021
