China and Facebook: Coincidence, Trend, the Future?

May 23, 2017

I read “China Clamps Down on Online News With New Security Rules.” The main idea is that China is taking steps to make sure the right news reaches the happy Internet consumers in the Middle Kingdom. Forget the artificial intelligence approach. China may be heading down a more traditional water buffalo path. Human herders will keep those bovines in line. Bad bovines become Chinese beef with broccoli. The Great Firewall is, it seems, not so magnificent. VPNs are on the hit list too. Monitoring is the next big thing in making sure 1.2 billion Chinese are fully informed. The question is, “Didn’t the previous online intercept and filtering mechanism work?” Who knew?


I also noted “Facebook Is Hiring a Small Army to Block Murder and Suicide Videos.” The point of the write up is that the vaunted revolution in artificial intelligence is not so vaunted. To find and censor nasty videos, Facebook is embracing an old-fashioned approach—humans. The term for these digital “fast food” type workers is moderators. The moderators will be part of Facebook’s “community operations team.” If the “real journalism” outfit is correct, Facebook’s COT has a cadre of 4,500 people. For those lucky enough to work at the Taco Bell of deciding what’s “good,” “appropriate,” or “Facebooky,” I learned:

Facebook says the people hired to review Facebook content for the company will receive psychological support…

I would imagine that it might be easier to hire individuals who don’t worry about free speech or about figuring out the answer to such questions as, “Exactly what is Facebooky?” Tom Aquinas, John Locke, Socrates, Bertrand Russell, and Descartes are not available to provide their input.

More intriguing is that Google is adding “workshops” for humans. Presumably, Google has cracked the problem of figuring out what’s in and what’s out under the US Constitution’s First Amendment. The high-power Google smart software is getting a spring tune up. But humanoids will be working on identifying hate speech if the information in “Google Search Changes Tackle Fake News and Hate Speech” is accurate.

For a moment, I thought there was some similarity among the China, Facebook, and Google approaches. I realized that China is a real country and it is engaged in information control. Facebook and Google are “sort of countries”? Each is engaged in a process of deciding what’s okay and what’s not okay?

Am I worried? Not really. I think that nation states make decisions so their citizens are fully informed. I think that US monopolies operate for the benefit of their users.

The one issue which gives me a moment’s pause is the revolution in big thinking. China, Facebook, and Google have obviously resolved the thorny problem of censorship.

Those losers like Socrates deserved to die. Tom Aquinas had the right idea: Stay inside and focus on a greater being. Descartes was better at math than the “I think and therefore I am” silliness. Perhaps the spirit of John Locke has been revivified, and it is guiding the rationalists in China, Facebook, and Google in their quest to decide what’s good and what’s bad.

Three outfits have “Russell-ed” up answers to tough philosophical questions. Trivial, right?

Stephen E Arnold, May 23, 2017

Advocacy Groups Back Google over Right to Be Forgotten Conflict

January 31, 2017

Does a European’s  “right to be forgotten” extend around the globe? (And if not, is one really “forgotten”?) Can one nation decide what the rest of the world is allowed to see about its citizens? Thorny questions are at the heart of the issue MediaPost examines in, “Google Draws Support in Showdown Over ‘Right to Be Forgotten’.”

Privacy-protection rights, established by European judges, demand that Google remove, at the subject’s request, search-result links that could embarrass a European citizen (barring any public interest in the subject, of course). French regulators want Google to extend this censorship on its citizens’ behalf around the world, rather than restrict access just within that country’s borders. No way, says Google, and it has some noteworthy support—the Center for Democracy & Technology, Electronic Frontier Foundation, Human Rights Watch, and other organizations agree that what France is attempting sets a dangerous precedent. Writer Wendy Davis elaborates:

Google argues that it can comply with the ruling by preventing links from appearing in the results pages of search engines aimed at specific countries, like Google.fr, for French residents. But the French authorities say Google must delete the links from all of its search engines, including Google.com in the U.S. Earlier this year, France’s CNIL [Commission Nationale de l’Informatique et des Libertés] rejected Google’s position and fined the company $112,000. Google is now appealing that ruling, and the Center for Democracy & Technology and others are backing Google’s position.

The CDT argues in a blog post that authorities in one country shouldn’t be able to decide whether particular search results are available in other countries—especially given that authorities in some parts of the world often object to material that’s perfectly legal in many nations. For instance, Pakistan authorities recently asked Google (unsuccessfully) to take down videos that satirized politicians, while Thai authorities unsuccessfully asked Google to remove YouTube clips that allegedly insulted the royal family.

Google itself has argued that no one country should be able to censor the Web internationally. ‘In the end, the Internet would only be as free as the world’s least free place,’ global privacy counsel Peter Fleischer wrote on the company’s blog last year.

Indeed. As someone whose (most) foolish years occurred before the Web was a thing, I sympathize with folks who want to scrub the Internet of their embarrassing moments. However, trying to restrict what citizens of other countries can access simply goes too far.
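
The technical disagreement is narrower than the rhetoric suggests: suppress a delisted link only for queries arriving at a country-specific front end, or suppress it everywhere. Here is a minimal Python sketch of the two behaviors, using an invented URL and delisting table rather than anything from Google’s actual implementation:

# Made-up delisting table: URL -> set of countries where it must be suppressed.
DELISTED = {"http://example.com/old-story": {"FR"}}

def filter_results(results, user_country, global_delisting=False):
    """Drop delisted URLs either for one country's users or for everyone."""
    kept = []
    for url in results:
        countries = DELISTED.get(url, set())
        if global_delisting and countries:
            continue   # the French position: remove everywhere
        if user_country in countries:
            continue   # Google's position: remove only on the local front end
        kept.append(url)
    return kept

results = ["http://example.com/old-story", "http://example.com/other"]
print(filter_results(results, "FR"))        # link suppressed for French users
print(filter_results(results, "US"))        # link still visible outside France
print(filter_results(results, "US", True))  # global delisting removes it for everyone

Under Google’s reading, the link disappears for French users and no one else; under the CNIL’s reading, the global_delisting switch (an invented parameter here) is always on.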

Cynthia Murrell, January 31, 2017

Google Needs a Time-Out for Censorship, But Who Will Enforce Regulations?

January 26, 2017

The article in U.S. News and World Report titled “The New Censorship” offers a list of the ways in which Google is censoring its content, and builds a compelling argument for increased regulation of Google. Certain items on the list, such as pro-life music videos being removed from YouTube, might have you rolling your eyes, but the larger point is that Google simply has too much power over what people see, hear, and know. The most obvious problem is Google’s ability to squash a business simply by changing its search algorithm, but the myriad ways it has censored content are really shocking. The article states,

No one company, which is accountable to its shareholders but not to the general public, should have the power to instantly put another company out of business or block access to any website in the world. How frequently Google acts irresponsibly is beside the point; it has the ability to do so, which means that in a matter of seconds any of Google’s 37,000 employees with the right passwords or skills could laser a business or political candidate into oblivion…

At times the article sounds like a sad conservative annoyed that the most influential company in the world tends toward liberal viewpoints. Hearing white male conservatives complain about discrimination is always a little off-putting, especially when you have politicians like Rand Paul still defending the right of businesses to refuse service based on skin color. But from a liberal standpoint, just because Google often supports left-wing causes like gun control or the pro-choice movement doesn’t mean that it deserves a free ticket to decide what people are exposed to. Additionally, the article points out that the supposed “moral stands” made by Google are often revealed to be moneymaking or anticompetitive schemes. Absolute power corrupts no matter who wields it, and companies must be scrutinized to protect the interests of the people.

Chelsea Kerwin, January 26, 2017

Chinese Censorship Agency Declares All News Online Must Be Verified

January 12, 2017

The heavy hand of Chinese censorship has just gotten heavier. The South China Morning Post reports, “All News Stories Must Be Verified, China’s Internet Censor Decrees as it Tightens Grip on Online Media.” The censorship agency now warns websites not to publish news without “proper verification.” Of course, to hear the government tell it, it just wants to cut down on fake news and false information. Reporter Choi Chi-yuk writes:

The instruction, issued by the Cyberspace Administration of China, came only a few days after Xu Lin, formerly the deputy head of the organisation, replaced his boss, Lu Wei, as the top gatekeeper of Chinese internet affairs. Xu is regarded as one of President Xi Jinping’s key supporters.

The cyberspace watchdog said online media could not report any news taken from social media websites without approval. ‘All websites should bear the key responsibility to further streamline the course of reporting and publishing of news, and set up a sound internal monitoring mechanism among all mobile news portals [and the social media chat websites] Weibo or WeChat,’ Xinhua reported the directive as saying. ‘It is forbidden to use hearsay to create news or use conjecture and imagination to distort the facts,’ it said.

We’re told the central agency has directed regional offices to aggressively monitor content and “severely” punish those who post what they consider false news. They also insist that sources be named within posts. Apparently, several popular news portals have been rebuked under the policy, including Sina.com, Ifeng.com, Caijing.com.cn, Qq.com and 163.com.

Cynthia Murrell, January 12, 2017

Instance of the LinkedIn Blue Pencil

September 15, 2016

I love LinkedIn. I love the wonky email inducements to pay. I love the quirky information posted by people who are looking for jobs, consulting gigs, or a digital water cooler.

But what I love most is learning about alleged instances of bowdlerization, restrictions, information blackouts, and what might be labeled “censorship.”

Let me be clear. The example comes from an individual with whom I have worked for 12, maybe 15 years. I am reporting this alleged suppression of information to shine some light on what seems to be one more step in restricting factoids and opinions. As I said, I love LinkedIn, which I have described as the social Clippy now that Microsoft will embrace the system in its services. Eager am I. I loved Bob too.

I learned from a person who was a US Marine officer and also a former Central Intelligence Agency professional that a post about the Democratic candidate for the presidency was deleted. The author was put in LinkedIn’s dunce cap. You can read the original “Owl” post at this link.

Here’s what I learned. Note that this information came to me from Robert David Steele Vivas, the person who was summarily sent to sit in the corner of the LinkedIn virtual professional meet up on September 13, 2016.

Steele says:

Yesterday I was censored by LinkedIn when I tried to post a story on “The Madness of Queen Hillary.” Coming as it does in the aftermath of Google manipulating both search and spam results in favor of Hillary Clinton, Facebook blocking YouTubes from Alex Jones, and Twitter censoring trending results associated with Hillary Clinton’s health, I have realized that the major social media enterprises have become part of a police state where the opinions of we “unredeemable deplorables” are easily censored.

Intrigued, I asked Steele what happened next. He says:

My three attempts to post were blocked, and then I found that my profile was restricted from posting. I immediately deleted the account. LinkedIn, while efficient at censoring, is inefficient at elective deletions, so it will take a few days.

How were you told about this action? Steele states:

I was neither warned nor notified. I discovered the censorship when I found that I had lost functionality.

In a time when smart software promotes false news stories, I wanted to know if Steele could tell whether the action was taken by a human or an artificially smart chunk of code. Steele replies:

Presumably this was a software-driven trigger that closes down commentaries using negative words in association with Hillary Clinton. However I have also noticed that both the Clinton camp and the Israeli lobby have perfected the use of spam reports to silence critics — there is no court of appeals if you are maliciously labeled a spammer. I suspect the censorship resulted from a mix of the two anti-thought measures.

Why, I asked myself, would LinkedIn censor a member’s essay about a campaign that is dominating the news cycle in just about every form of media I check out? I asked Steele this question, and he writes:

Eric Schmidt is on record as saying that he has the right and the ability to control “hate speech” online. The “digital innovators” in the White House are all committed to Hillary Clinton in part so they can keep their jobs and continue to play with new means of manipulating the information environment. This happened because the White House ignored my 1994 letter calling for major investments in the integrity and security of the cyber domain (and actually allowed NSA to gut what security existed, with the complicity of IT CEOs, for the convenience of our mass surveillance program), and because in the absence of legitimate oversight in the public interest, social media enterprises will trend toward the abuse of their power, much as banks and corporations have in the material world.

Living in rural Kentucky, I am not certain that I am qualified to comment about the actions of smart software and even smarter executives. I have several thoughts I want to capture before I leave this vallis lacrimarum:

  1. LinkedIn has some content which strikes me as subpar. If the outfit is editing and blocking content, the process seems a bit hit and miss. I prefer some substantive, thought-provoking information, not recycled marketing jargon.
  2. What other content has been blocked? Is there a Web site or social media stream where instances of censorship are captured and commented upon? I checked several pastesites and drew a blank.
  3. I assume that LinkedIn operates like a mall; that is, the mall owner can run the mall any old way he or she wishes. But how does one evaluate a professional who may be qualified for a job or a consulting gig if the information that professional supplies to LinkedIn is blocked? Doesn’t this distort the picture of the potential hire? What about a felon who creates an identity on LinkedIn and then is revealed by another LinkedIn user? Will LinkedIn block the revelatory information and allow the felon to cruise along with a false background?

As I said, the LinkedIn system is a fave at Beyond Search. I think it is difficult to make an informed decision without having access to information created by a LinkedIn member. What else is missing from the LinkedIn data pool?

Stephen E Arnold, September 15, 2016

Improving Information for Everyone

August 14, 2016

I love it when Facebook and Google take steps to improve information quality for everyone.

I noted “Facebook’s News Feed to Show Fewer Clickbait Headlines.” I thought the Facebook news feed was 100 percent beef. I learned:

The company receives thousands of complaints a day about clickbait, headlines that intentionally withhold information or mislead users to get people to click on them…

Thousands. I am impressed. Facebook is going to do some filtering to help its many happy users avoid clickbait, a concept which puzzles me. I noted:

Facebook created a system that identifies and classifies such headlines. It can then determine which pages or web domains post large amounts of clickbait and rank them lower in News Feed. Facebook routinely updates its algorithm for News Feed, the place most people see postings on the site, to show users what they are most interested in and encourage them to spend even more time on the site.

Clustering methods are readily available. I ask myself, “Why did Facebook provide streams of clickbait in the first place?”
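
Facebook has not published the mechanics, but the behavior described in the quote is easy to sketch. The following Python is a minimal illustration, not Facebook’s system: a made-up phrase list flags headlines, and any domain whose share of flagged headlines passes an arbitrary threshold gets a lower ranking multiplier.

from collections import defaultdict

# Hypothetical phrases that "withhold information or mislead" (illustration only).
CLICKBAIT_PHRASES = ["you won't believe", "what happened next", "this one trick"]

def is_clickbait(headline: str) -> bool:
    """Flag a headline containing any phrase from the (invented) list."""
    text = headline.lower()
    return any(phrase in text for phrase in CLICKBAIT_PHRASES)

def domain_demotion(posts, threshold=0.3):
    """Compute a per-domain ranking multiplier from its share of flagged headlines."""
    flagged, total = defaultdict(int), defaultdict(int)
    for domain, headline in posts:
        total[domain] += 1
        if is_clickbait(headline):
            flagged[domain] += 1
    # Domains posting a large share of clickbait get ranked lower in the feed.
    return {d: 0.5 if flagged[d] / total[d] > threshold else 1.0 for d in total}

posts = [("example.com", "You won't believe what happened next"),
         ("news.org", "City council approves new budget")]
print(domain_demotion(posts))  # {'example.com': 0.5, 'news.org': 1.0}

A production system would presumably rely on a trained classifier rather than a static phrase list, which is the point of the aside about clustering methods above.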

On a related note, the Google released exclusive information to Time Warner, which once owned AOL and now owns a chunk of Hulu. Google’s wizards have identified bad bits, which Google calls “unwanted software.” The Googlers converted the phrase into UwS and then into the snappy term “ooze.”

Fortune informed me:

people bump into 60 million browser warnings for download attempts of unwanted software at unsafe Web pages every week.

Quite a surprise I assume. Google will definitely talk about “a really big problem.” Alas, Fortune was not able to provide information about what Mother Google will do to protect its users. Obviously the “real” journalists at Fortune did not find the question, “What are you going to do about this?” germane.

It is reassuring to know that Facebook and Google are improving the quality of the information each provides. Analytics and user feedback are important.

Stephen E Arnold, August 13, 2016

Facebook Algorithms: Doing What Users Expect Maybe

August 9, 2016

I read an AOL-Yahoo post titled “Inside Facebook Algorithms.” With the excitement of algorithms tingeing the air, explanations of smart software make the day so much better.

I learned:

if you understand the rules, you can play them by doing the same thing over and over again

Good point. But how many Facebook users are sufficiently attentive to correlate a particular action with an outcome which may not be visible to the user?

Censorship confusing? It doesn’t need to be. I learned:

Mr. Abbasi [a person whose Facebook post was censored] used several words which would likely flag his post as hate speech, which is against Facebook’s community guidelines. It is also possible that the number of the words flagged would rank it on a scale of “possibly offensive” to “inciting violence”, and the moderators reviewing these posts would allocate most of their resources to posts closer to the former, and automatically delete those in the latter category. So far, this tool continues to work as intended.

There is nothing like a word look-up list containing words which will result in censorship. We love word lists. Non-public word lists are not much fun for some.

Now what about algorithms? The examples in the write up are standard procedures for performing brute force actions. Algorithms, as presented in the AOL Yahoo article, seem to be collections of arbitrary rules. Straightforward for those who know the rules.
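
To make the “collections of arbitrary rules” idea concrete, here is a hedged Python sketch of the word-list routing described in the quoted passage: count flagged terms, then send the post to a human review queue or to automatic deletion. The terms and cutoffs are placeholders, not Facebook’s actual rules.

# Placeholder term list and cutoffs; the actual lists and thresholds are not public.
FLAGGED_TERMS = {"badword1", "badword2", "threatword"}

def classify_post(text: str) -> str:
    """Route a post based on how many flagged terms it contains."""
    hits = sum(1 for word in text.lower().split() if word in FLAGGED_TERMS)
    if hits == 0:
        return "allow"
    elif hits <= 2:
        return "possibly offensive"   # queued for a human moderator in this sketch
    return "inciting violence"        # deleted automatically in this sketch

print(classify_post("a perfectly ordinary post"))   # allow
print(classify_post("badword1 aimed at someone"))   # possibly offensive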

A “real” newspaper tackled the issue of algorithms and bias. The angle, which may be exciting to some, is “racism.” Navigate to “Is an Algorithm Any Less Racist Than a Human?” Since algorithms are often generated by humans, my hunch is that bias is indeed possible. The write up tells me:

any algorithm can – and often does – simply reproduce the biases inherent in its creator, in the data it’s using, or in society at large. For example, Google is more likely to advertise executive-level salaried positions to search engine users if it thinks the user is male, according to a Carnegie Mellon study. While Harvard researchers found that ads about arrest records were much more likely to appear alongside searches for names thought to belong to a black person versus a white person.

Don’t know the inside rules? Too bad, gentle reader. Perhaps you can search for an answer using Facebook’s search systems or the Wow.com service. Better yet. Ask a person who constructs algorithms for a living.

Stephen E Arnold, August 9, 2016

Revisionism: Hit That Delete Key for Happiness

August 14, 2015

The Jive Aces are into happy songs. Reddit has figured out how to make some folks who love revisionism happy. I wonder if the Jive Aces have a tune for removing content to create digital happiness.

Navigate to “Reddit Responds after Being Threatened, Banned and Unbanned by the Russian Government.” The main point is that one cannot find information if it is not in the index. Magic. Better and cheaper than reprinting history books in certain countries.

The write up says:

One thing that is clear is that Russia doesn’t play around though when it comes to speech encouraging drug use online. In 2013, Roskomnadzor blacklisted Wikipedia in its entirety for a single article on “Cannabis smoking.” Reddit doesn’t address concerns over restrictions of free speech from the Russian government in this statement, but instead seems to say that whatever the situation, wherever it’s posted, Reddit and Reddit alone has the final call. It sounds like a lot of redditors are perplexed by Reddit’s right to “restrict content,” but in the time being it seems that stability, rather than free speech, is Reddit’s main priority.

Cue the music:

Even when the darkest clouds are in the sky
You mustn’t sigh and you mustn’t cry
Spread a little happiness as you go by

That’s it. Reddit is spreading a little happiness. Are Russian content mavens smiling? I assume they are having a “golden shoes day.” When information is disappeared, that makes someone happy.

Stephen E Arnold, August 14, 2015

Search the Snowden Documents

July 16, 2015

This cat has long since forgotten what the inside of the bag looked like. Have you perused the documents that were released by Edward Snowden, beginning in 2013? A website simply titled “Snowden Doc Search” will let you do just that through a user-friendly search system. The project’s Description page states:

“The search is based upon the most complete archive of Snowden documents to date. It is meant to encourage users to explore the documents through its extensive filtering capabilities. While users are able to search specifically by title, description, document, document date, and release date, categories also allow filtering by agency, codeword, document topic, countries mentioned, SIGADS, classification, and countries shared with. Results contain not only full document text, pdf, and description, but also links to relevant articles and basic document data, such as codewords used and countries mentioned within the document.”

The result of teamwork between the Courage Foundation and Transparency Toolkit, the searchable site is built upon the document/news story archive maintained by the Edward Snowden Defense Fund. The site’s Description page also supplies links to the raw dataset and to Transparency Toolkit’s Github page, for anyone who would care to take a look. Just remember, “going incognito doesn’t hide your browsing from your employer, your internet service provider, or the websites you visit.” (Chrome)
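
The filtering described in the quote is, at bottom, faceted search over document metadata. Here is a minimal Python sketch with hypothetical records and field names; the real site exposes similar fields as interactive filters, not as this sort of function call.

# Hypothetical records; the real site offers fields like these as search facets.
documents = [
    {"title": "Document A", "agency": "NSA", "codewords": ["ALPHA"], "release_date": "2013-06-05"},
    {"title": "Document B", "agency": "GCHQ", "codewords": ["BRAVO"], "release_date": "2014-02-18"},
]

def filter_docs(docs, **criteria):
    """Keep documents whose metadata matches every supplied criterion."""
    def matches(doc):
        for field, wanted in criteria.items():
            value = doc.get(field)
            if isinstance(value, list):
                if wanted not in value:   # list-valued fields, e.g. codewords
                    return False
            elif value != wanted:
                return False
        return True
    return [d for d in docs if matches(d)]

print(filter_docs(documents, agency="NSA"))        # Document A only
print(filter_docs(documents, codewords="BRAVO"))   # Document B only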

Cynthia Murrell, July 16, 2015


You Cannot Search It If It Is Not There: The Wayback in Russia

June 26, 2015

I know that many people believe that a search reveals “all” and “everything” about a topic. Nothing is further from the truth. There are forces at work which wish to ensure that only certain information is available to a person with an Internet connection.

Navigate to “Russia Bans the Internet Archive’s ‘Wayback Machine’.” The Wayback Machine once had tie-ups with outfits as different as the Library of Congress and Amazon. I found it useful when working as an expert witness to be able to refresh my memory on certain Web sites’ presentation of information. I am confident there are other uses of “old information.”

According to the write up, Russia is not too keen on the notion of old information. Kenneth Waggner, one of my high school teachers, had Russian language textbooks from the Stalin era. He had marked passages included in one book and excluded from another. If he was correct, the tradition of filtering has a reasonable track record in Russia. Keep in mind that other countries, companies, and individuals have the same goal: Present only what a smarter, more informed person thinks I should be able to access.

The article states:

By banning access to the Internet Archive, the government is denying Russian Internet users a powerful tool—one that is particularly useful in an environment where websites often disappear behind a state-operated blacklist, as is increasingly true in Russia today.

Governments are like horse races. No one is sure of the winner unless the race is rigged.

Stephen E Arnold, June 26, 2015
