China: Control and Common Sense. Common Sense?

November 25, 2020

I must admit that I saw some darned troubling things when I last visited China and Hong Kong. However, I spotted an allegedly accurate factoid in “China Bans Spending by Teens in New Curbs on Livestreaming.” In one of my lectures about the Dark Web I pointed out livestreaming sites which permitted gambling, purchases of merchandise (now called by the somewhat jarring term “merch”), and buying “time” with an individual offering “private sessions.” I pointed out examples on Amazon’s Twitch and on a service called ManyVids, an online outfit operating from Canada. (Yep, dull, maple-crazed Canada.)

Here’s the passage of particular significance in my opinion:

Livestreaming platforms now must limit the amount of money a user can give hosts as a tip. Users must register their real names to buy the virtual gifts, in addition to the ban on teens giving such gifts. The administration also asked the platforms to strengthen training for employees who screen content and encouraged the companies to hire more censors, who also will need to register with regulators. The media regulator will create a blacklist of hosts who frequently violate the rules, and ban them from hosting livestreaming programs on any platform. [Emphasis added by Beyond Search]

Okay, spending controls will force buyers (sometimes known as “followers”) to be more creative when buying “time.”

But the killer point is “real names.”

No doubt there are online consumers who will bristle at censorship, registration, and blacklisting. Nevertheless, “real names” might be a useful concept for online services not under the watchful eye of party faithful grandmas in a digital hutong. What a quaint idea for outfits like Facebook, Twitter, YouTube, and other online content outputters to consider.

Stephen E Arnold, November 25, 2020

Facebook: Slipslidin’ Away from the Filterin’ Thing

May 28, 2020

Censorship, flagged tweets, and technology companies trying to be a nervous parent? Sound familiar? DarkCyber finds the discussion interesting. One of the DarkCyber team spotted “Facebook’s Mark Zuckerberg Says Platform Policing Should Be Limited To Avoiding Imminent Harm.” The main point of the write up contains this statement:

… the platform’s criteria for removing content remains “imminent harm” — not harm “down the line.”

The article provides some training wheels for the DarkCyber researcher:

Zuckerberg said several times that, in the balance, he thinks of himself “as being on the side of giving people a voice and pushing back on censorship.”

Some of the companies powering the digital economy appear to be willing to make decisions about what the product (those who use the services) or the customers (advertisers) can access.

The article provides a context for Facebook’s “imminent harm”; for example:

Facebook’s 2.6 billion users give it unprecedented reach, noted Susan Perez, a portfolio manager at Harrington Investments, who brought up the issue of political interference and fraudulent content on the platform. “Society’s risk is also the company’s risk,” she said.

The article includes a “Yes, but…”; to wit:

Nick Clegg, Facebook’s president of global affairs and communications, said during a question and answer session that the company doesn’t think a private tech company “should be in the position of vetting what politicians say. We think people should be allowed to hear what politicians say so they can make up their own mind and hold the politician to account.”

As censorship becomes an issue in the datasphere, is Facebook “slip sliding away”? Is the senior management of Facebook climbing a rock face using an almost invisible path, a path that other digital climbers have not discerned?

But wait. Didn’t that pop song say:

You know the nearer your destination
The more you’re slip slidin’ away

Sure, but what if Facebook’s slip slidin’ is movin’ closer?

Stephen E Arnold, May 28, 2020

Big Tech: Adulting Arrives But A Global Challenge Proved Stronger Than Silicon Shirkers

March 29, 2020

Interesting item from NBC News: “Coronavirus Misinformation Makes Neutrality a Distant Memory for Tech Companies.” DarkCyber thinks the write up should have used the phrase “finally adulting,” but, alas, the real news story states:

Most major consumer technology platforms embraced the idea that they were neutral players, leaving the flow of information up to users. Now, facing the prospect that hoaxes or misinformation could worsen a global pandemic, tech platforms are taking control of the information ecosystem like never before. It’s a shift that may finally dispose of the idea that Big Tech provides a “neutral platform” where the most-liked idea wins, even if it’s a conspiracy theory.

The recursive nature of the click loops creates some interesting phenomena. Among the outcomes are the myth of the Silicon Valley bros, the mantra “Ask for forgiveness, not permission,” and the duplicity of executives explaining how their ad-fueled money systems have chopped through the fabric of society like a laser cutter in an offshore running shoe factory.

The write up includes some good quotes; for example:

“Neutrality — there’s no such thing as that, because taking a neutral stance on an issue of public health consequence isn’t neutral,” said Whitney Phillips, a professor of communication at Syracuse University who researches online harassment and disinformation. “Choosing to be neutral is a position,” she said. “It’s to say, ‘I am not getting involved because I do not believe it is worth getting involved.’ It is internally inconsistent. It is illogical. It doesn’t work as an idea.” “So these tech platforms can claim neutrality all they want, but they have never been neutral from the very outset,” she added.

Okay, interesting. One question:

Why has it taken a real news outfit such a long time to focus on the problem?

We wanted a free mouse pad.

The problem is that undoing the digital damage may be a more difficult job than some anticipate.

Adulting permits a number of behaviors. Example: Falling off the responsibility wagon. Perhaps a recovery program is needed for dataholics?

Stephen E Arnold, March 29, 2020

Russia: Ever the Innovator for Internal Controls

March 12, 2020

DarkCyber tries to ignore Russia. The Fancy Bears, the hackers, and the secretive university research facilities—these give the team a headache. We spotted a headline which caused us to lift our gaze from more interesting innovations in Herzliya and Tel Aviv to read “Russia Seeks to Block ‘Darknet’ Technologies, Including Telegram’s Blockchain.” According to the story:

A Russian government agency has requested contractor bids to find ways to block censorship-resistant internet technologies, like mesh networks. The list includes messaging app company Telegram’s yet-to-be-launched blockchain.

The technologies which Russia deems problematic include:

mesh networks, Internet of Things (IoT) protocols and protocols allowing anonymous browsing, including Invisible Internet Project (I2P), The Onion Router (TOR), Freenet, Zeronet, anoNet – and one blockchain, the Telegram Open Network (TON).

Other countries are likely to have similar concerns. Client states are likely to benefit from any Russian innovations which blunt these digital tools.

DarkCyber has a slightly different view:

  1. The technologies needed to deal with these systems will be developed. How quickly is anyone’s guess. But progress will be made.
  2. Turnover within research entities and Russia’s dynamic and quite interesting commercial sector is ongoing.
  3. Certain entrepreneurs apply innovations to what some people might describe as “extra legal” activities. If these individuals and their corporate constructs enjoy the benefit of positive support from some Russian officials, the innovations will find their way into a gray market.

Net net: Censorship is part of the government agenda. The new tools will have an impact outside the Russian nation state. Censorship and monitoring go hand in hand in some countries.

Stephen E Arnold, March 12, 2020

YouTube: Tidying Up Script Kiddie Crumbs

October 15, 2019

An interesting series of comments flowed on Reddit (Monday, October 14, 2019). You may be able to access the original post and the comments at this link. No guarantees, however. The subject: Alleged Google censorship. The topic: Methods for penetrating other people’s computers.

Is Google actively removing videos which violate the Jello-like terms of service?

DarkCyber hopes so.

YouTube is TV for hundreds of millions around the world.

There is some interesting material available on YouTube.

The post includes links. DarkCyber suggests you do some clicking and form your own conclusions. Google often lacks consistency, so it is difficult to know where the Googley ball is bouncing.

Stephen E Arnold, October 15, 2019

Robots: Not Animals?

October 9, 2019

The prevailing belief is that if Google declares something to be true, then it is considered a fact. The same can be said about YouTube: if someone sees it on YouTube, then, of course, it must be real. YouTube already has trouble determining what truly is questionable content. For example, YouTube has failed to flag white supremacy and related videos for removal. Another curious YouTube incident involving flagged content concerns robot “abuse”: “YouTube Concedes Robot Fight Videos Are Not Actually Animal Cruelty After Removing Them By Mistake” from Gizmodo.

YouTube rules state that videos displaying animals suffering, such as dog and cock fights, cannot be posted on the streaming service. For some reason, videos and channels centered on robot fighting were cited and content was removed.

“…the takedowns were first noted by YouTube channel Maker’s Muse and affected several channels run by Battle Bots contenders, including Jamison Go of Team SawBlaze (who had nine videos taken down) and Sarah Pohorecky of Team Uppercut. Pohorecky told Motherboard she estimated some 10 to 15 builders had been affected, with some having had multiple videos removed. There didn’t appear to be any pattern in the titles of either the videos or the robots themselves, beyond some of the robots being named after animals, she added.”

YouTube’s algorithms make mistakes, and footage of robots knocking the gears and circuit boards out of each other was deemed violent, along the lines of “inflicting suffering.” YouTubers can appeal removal citations so that flagged content can be reviewed again.

Google humans doing human deciding. Interesting.

Whitney Grace, October 9, 2019

Mauritania Shuts Down Internet During Elections

July 12, 2019

Africa was shafted by colonial powers, and now the continent is shafting itself with corruption in its numerous countries. Africa remains home to some of the poorest nations on Earth, and according to Quartz, many of these countries habitually shut down the Internet: “Mauritania Blocked The Internet Over Protests Though Just One In Five People Are Online.” Countries that have shut off the Internet include Liberia, Benin, the Democratic Republic of Congo, Chad, and Algeria. More recently, Sudan cut its lines when protesters demanded president Omar al-Bashir leave office and wanted an end to military rule. Ethiopia cut its surfing power to curb cheating on exams and when there were rumors of a coup. The African Internet gets turned off mostly for political reasons: elections, government protests, and referenda.

Mauritania took its turn to shut down the Internet amid its contested election. People hoped the election would be the first peaceful transfer of power since the country gained its independence in 1960. When the results were tallied, the ruling party won with 52%, but the opposition challenged the results. The government suspended mobile and fixed Internet lines. The move points to a government afraid of any opposing force and willing to use extreme measures to maintain control. Most African governments do not offer explanations, but some explain shutdowns away as limiting hate speech, fake news, and violence.

Mauritania is indicative of the problems around the entire continent:

“Campaigners say the shutdown in Mauritania is only exacerbating the situation and preventing journalists, human rights defenders, and opposition groups from freely accessing and exchanging information. Mauritanian television also broadcast foreigners from neighboring countries confessing to fomenting trouble following the polls—a “toxic and highly problematic” issue, activists say, in a country still battling racial discrimination and the vestiges of slavery.”

Freedom of information and communication is key to a democratic society and gives power to the people. Heavy-handedness may have its place in times of war, but during elections in a country that is supposed to be democratic, it is a troubling sign.

Whitney Grace, July 11, 2019

Criticizing the Digital Czarina of Silicon Valley

May 31, 2019

DarkCyber would not criticize Kara Swisher. We think that her method of talking over those whom she interviews is just an outstanding way to deliver understandable audio. We find her summaries of her stellar career in journalism necessary because some of the DarkCyber team (like me) have lousy memories for crucial information. We consider her interactions with the kind, patient, and deeply informed author of The Algebra of Happiness a remarkable opportunity to learn how life is to be lived in the 21st century.


But TechDirt has a different point of view, expressed clearly in “Dear Kara Swisher: Don’t Let Your Hatred of Facebook Destroy Free Speech Online.” See, that’s what a brave person, steeped in the law, will share about a digital czarina of Silicon Valley.

We noted this statement in the 1,362-word epistle:

This is wrong on so many levels that it makes me wonder where Swisher is getting her information from.

The “wrong” refers to Ms. Swisher’s posture toward Facebook censorship.

We also circled in blue, this statement:

…her analysis is simply incorrect.

Yikes. An error in analysis. The “incorrect” refers to Section 230 and other legal matters.

We also underlined this passage:

For quite some time now, we’ve been talking about the “impossibility” of doing content moderation at scale well. There are always going to be disagreements. But Section 230 is what allows for experimentation. People can (and should) criticize Facebook when they think the company made the wrong call, but to blithely toss Section 230 under the bus as the reason for Facebook failing to meet her own exacting standards, Swisher is actually throwing the open internet and free speech under the bus instead. It’s a horrifically bad take, and one that Swisher should know better about.

There it is. Ms. Swisher is not fully informed. (My mother used to tell me “You should know better.” I assume this phrasing is part of the adulting movement.)

To wrap up, my hunch is that two important people in the world of Silicon Valley may exchange further communications.

Will the Czarina respond directly, or will a colleague or former colleague (of which there appear to be many) pick up the gauntlet and slap TechDirt in the head in order to knock some sense and appreciation into it?

Worth watching. There’s nothing like a lawyer and czarina dust up to reveal why Silicon Valley is held in such high regard by millions of people. DarkCyber will watch from a safe distance, of course. When elephants fight, only the grass suffers.

Stephen E Arnold, May 31, 2019

Making, Not Filtering, Disinformation

April 8, 2019

I spotted a link to this article on Sunday (April 7, 2019). The title of the “real news” report was “Facebook Is Asking to Be Regulated but Wants to Choose How.” The write up ostensibly was about Facebook’s realization that regulation would be good for everyone. Mark Zuckerberg wants to be able to do his good work within a legal framework.

I noted this passage in the article:

Facebook has been in the vanguard of creating ways in which both harmful content can be generated and easily sent to anyone in the world, and it has given rise to whole new categories of election meddling. Asking for government regulation of “harmful content” is an interesting proposition in terms of the American constitution, which straight-up forbids Congress from passing any law that interferes with speech under the first amendment.

I also circled this statement:

Facebook went to the extraordinary lengths of taking out “native advertising” in the Daily Telegraph. In other words, it ran a month of paid-for articles demonstrating the sunnier side of tech, and framing Facebook’s efforts to curb nefarious activities on its own platform. There is nothing wrong with Facebook buying native advertising – indeed, it ran a similar campaign in the Guardian a couple of years ago – but this was the first time that the PR talking points adopted by the company have been used in such a way.

From Mr. Zuckerberg’s point of view, he is sharing his ideas.

From the Guardian’s point of view, he is acting in a slippery manner.

From the newspapers reporting about his activities and, in the case of the Washington Post, providing him with an editorial forum, news is news.

But what’s the view from Harrod’s Creek? Let me share a handful of observations:

  1. If a person pays money to a PR firm to get information in a newspaper, that information is “news” even if it sets forth an agenda.
  2. Identifying disinformation or weaponized information is difficult, it seems, for humans involved in creating “real news”. No wonder software struggles. Money may cloud judgment.
  3. Information disseminated from seemingly “authoritative” sources is not much different from the info rocks from a digital slingshot. Disgruntled tweeters and unhappy Instagramers can make people duck and respond.

For me, disinformation, reformation, misinformation, and probably regular old run-of-the-mill information is unlikely to be objective. Therefore, efforts to identify and filter these payloads, whatever the motivation, are likely to be very difficult.

Stephen E Arnold, April 8, 2019

The Function of Filters

April 4, 2019

Filters block access to words, sites, or other items identifiable via modern computation; for example, a pattern of relationships or the addresses of certain businesses or people. The online publication Abacus reports an item of information which makes clear that it is important to be in charge of the filters. “Chinese Browsers Block Protest against China’s 996 Overtime Work Culture” asserts:

A number of Chinese browsers, including Tencent’s QQ Browser, Qihoo’s 360 Browser and the native browser on Xiaomi smartphones, have restricted user access to the repository on GitHub.
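The kind of pattern-based blocking Abacus describes can be sketched in a few lines. The snippet below is a hypothetical illustration, not the actual mechanism used by QQ Browser, 360 Browser, or the Xiaomi browser; the blocklist patterns and URLs are invented for the example:

```python
import re

# Hypothetical blocklist: regular expressions matched against requested URLs.
# Real browser filters are far more elaborate, but the principle is the same.
BLOCKED_PATTERNS = [
    re.compile(r"996\.icu", re.IGNORECASE),            # a specific domain
    re.compile(r"github\.com/\S*996", re.IGNORECASE),  # repo paths mentioning "996"
]

def is_blocked(url: str) -> bool:
    """Return True if the URL matches any blocklist pattern."""
    return any(pattern.search(url) for pattern in BLOCKED_PATTERNS)

print(is_blocked("https://996.icu/"))                   # True
print(is_blocked("https://github.com/996icu/996.ICU"))  # True
print(is_blocked("https://example.com/news"))           # False
```

The point of the sketch: whoever maintains `BLOCKED_PATTERNS` decides what users can see, which is exactly why being in charge of the filters matters.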

Maybe the only way to get unfiltered information is to work in the agency examining content to figure out what one should not see? What if Bing, Google, and Yandex were blocking access to content and no one except those working in the censorship department knew? Interesting to consider.

Stephen E Arnold, April 4, 2019
