Making, Not Filtering, Disinformation

April 8, 2019

I spotted a link to this article on Sunday (April 7, 2019). The title of the “real news” report was “Facebook Is Asking to Be Regulated but Wants to Choose How.” The write-up ostensibly was about Facebook’s realization that regulation would be good for everyone. Mark Zuckerberg wants to be able to do his good work within a legal framework.

I noted this passage in the article:

Facebook has been in the vanguard of creating ways in which both harmful content can be generated and easily sent to anyone in the world, and it has given rise to whole new categories of election meddling. Asking for government regulation of “harmful content” is an interesting proposition in terms of the American constitution, which straight-up forbids Congress from passing any law that interferes with speech under the first amendment.

I also circled this statement:

Facebook went to the extraordinary lengths of taking out “native advertising” in the Daily Telegraph. In other words ran a month of paid-for articles demonstrating the sunnier side of tech, and framing Facebook’s efforts to curb nefarious activities on its own platform. There is nothing wrong with Facebook buying native advertising – indeed, it ran a similar campaign in the Guardian a couple of years ago – but this was the first time that the PR talking points adopted by the company have been used in such a way.

From Mr. Zuckerberg’s point of view, he is sharing his ideas.

From the Guardian’s point of view, he is acting in a slippery manner.

From the point of view of the newspapers reporting about his activities and, in the case of the Washington Post, providing him with an editorial forum, news is news.

But what’s the view from Harrod’s Creek? Let me share a handful of observations:

  1. If a person pays money to a PR firm to get information in a newspaper, that information is “news” even if it sets forth an agenda.
  2. Identifying disinformation or weaponized information is difficult, it seems, for humans involved in creating “real news”. No wonder software struggles. Money may cloud judgment.
  3. Information disseminated from seemingly “authoritative” sources is not much different from the info rocks launched from a digital slingshot. Disgruntled tweeters and unhappy Instagramers can make people duck and respond.

For me, disinformation, reformation, misinformation, and probably regular old run-of-the-mill information are unlikely to be objective. Therefore, efforts to identify and filter these payloads are likely to be very difficult.

Stephen E Arnold, April 8, 2019

The Function of Filters

April 4, 2019

Filters block access to words, sites, or other items identifiable via modern computation; for example, a pattern of relationships and addresses of certain businesses or people. The online publication Abacus reports an item of information which makes clear that it is important to be in charge of the filters. “Chinese Browsers Block Protest against China’s 996 Overtime Work Culture” asserts:

A number of Chinese browsers, including Tencent’s QQ Browser, Qihoo’s 360 Browser and the native browser on Xiaomi smartphones, have restricted user access to the 996.icu repository on GitHub.
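As a toy illustration of how such a browser-side filter might work, a few lines of Python suffice. The blocklist entries below are hypothetical stand-ins modeled on the reported blocking, not any vendor’s actual rules:

```python
from urllib.parse import urlparse

# Hypothetical filter entries; real browser blocklists are opaque
# and far larger.
BLOCKED_HOSTS = {"996.icu"}
BLOCKED_PATHS = {("github.com", "/996icu")}  # (host, path-prefix) pairs

def is_blocked(url: str) -> bool:
    # Refuse navigation when the hostname, or a host plus path prefix,
    # matches a filter entry.
    parts = urlparse(url)
    host = (parts.hostname or "").lower()
    if host in BLOCKED_HOSTS:
        return True
    return any(host == h and parts.path.startswith(p)
               for h, p in BLOCKED_PATHS)
```

With these entries, `is_blocked("https://996.icu/")` returns `True`, while an unlisted site sails through. The user never learns whether the page failed to load or was filtered.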

Maybe the only way to get unfiltered information is to work in the agency examining content to figure out what one should not see? What if Bing, Google, and Yandex were blocking access to content and no one except those working in the censorship department knew? Interesting to consider.

Stephen E Arnold, April 4, 2019

Apple Conforms. No Wonder Certain US Government Officials Are Agitated with the Cupertino Elite

April 3, 2019

Apple’s attitude toward certain government officials has legs. San Bernardino, foot dragging, and China supplication: not the best way to win friends and influence people in DC. The information in “Apple Censoring the News” may not be 100 percent accurate. But the description of how Apple has engineered a way to dress in a government regulation uniform is interesting.

The write up states:

To accomplish this censorship Apple is using a form of location fingerprinting that is not available to normal applications on iOS. It works like this: despite the fact that your phone uses a SIM from a US carrier it must connect to a Chinese cellular network. Apple is using private APIs to identify that you are in mainland China based on the name of the underlying cellular network and blocking access to the News app. This information is not available via public APIs in iOS specifically to improve privacy for users.

Why the razzle dazzle? To make certain that a mobile with a non-Chinese SIM cannot access blocked online services. Apple is taking a page from Burger King’s approach. Certain customers can indeed have it their way. An express window for some customers, and another line for “other” people where some news is only $10 per month.
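The gating mechanism the write-up describes, keying a feature off the serving cellular network rather than the SIM, can be modeled in a few lines. This is a hypothetical sketch, not Apple’s code; the only hard fact used is that mobile country code (MCC) 460 denotes mainland China:

```python
# Toy policy table; MCC 460 is mainland China.
BLOCKED_SERVING_MCCS = {"460"}

def news_app_available(sim_mcc: str, serving_mcc: str) -> bool:
    # Only the serving network's MCC is consulted, so a US SIM
    # (MCC 310) roaming on a Chinese network is still blocked.
    return serving_mcc not in BLOCKED_SERVING_MCCS
```

The design point is that the check ignores `sim_mcc` entirely, which is why swapping in a non-Chinese SIM does not restore access.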

Stephen E Arnold, April 3, 2019

Deep Fakes: A Tough Nut to Crack

February 8, 2019

If you are in the media or intelligence business, you undoubtedly already know about the potential of deep fakes, or “deepfake” videos: clips that use AI to create realistic yet completely fake footage from existing video. The catch is that they are getting more and more convincing…and that’s not good, as we discovered in a recent Phys.org article, “Misinformation Woes Could Multiply with Deepfake Videos.”

According to the story:

“As the technology advances, worries are growing about how deepfakes can be used for nefarious purposes by hackers or state actors. ‘A well-timed and thoughtfully scripted deepfake or series of deepfakes could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy’s supposed atrocities, or exacerbate political divisions in a society.’”

What’s “true” and what’s “false” is an issue which may not lend itself to zeros and ones. Google asserts that it is developing software that helps spot deepfakes. Does Google have a solution?

Does anyone?

If an artifact is created and someone labels it “false,” smart software has to decide. Humans, history suggests, struggle with defining the truth.

The problem is likely to be difficult to resolve. Censorship anyone?

Patrick Roland, February 8, 2019

Censorship: An Interesting View

December 7, 2018

I read “Former ‘Guardian’ Editor On Snowden, WikiLeaks And Remaking Journalism.”

I noted this passage:

In the modern world, it is very difficult to prevent good information (and sadly, bad information) … from being published, because it’s like water, and you can’t you can’t control it in the way that you could even 50 years ago. [emphasis added]

That 50-year date means that censorship was easy and presumably widely practiced in 1968.

Interesting.

How did I come to know about Prague Spring, the murder of Martin Luther King, the assassination of Senator Robert Kennedy, anti-Vietnam protests, Surveyor 7, the moon landing, the strike in Paris, the Pueblo (remember Mogen David and the Grapes of Wrath), and my getting encouragement in my quest to index Latin sermons?

Telepathy? What did I miss?

Stephen E Arnold, December 7, 2018

Censorship: Deleted and Blocked Content Popular

November 7, 2018

The Internet is a tool, and companies harness the Internet to offer services such as social media, search, news, and commerce. These companies act as portals for users to post their information and content. Section 230 of the Communications Decency Act shields companies from being held liable for their users’ actions. This means that companies cannot be sued or prosecuted for what their users share. This could all change.

Inc. takes a look at how this could change in the article, “Facebook, Google, And Twitter Must Censor The Web, Demand Investors.” Why would this change? It would change because bad actors use social media and other services for illegal activities. The law that could change this liability shield is the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), under which Web sites would be held liable for content posted on them. Content posted on, say, Facebook, Twitter, Google, etc. that results in illegal activity could expose the platforms’ operators to prosecution.

“FOSTA creates a legal precedent to hold Internet providers responsible for user-created content that drives other behaviors. Hate speech might lead to murder and terrorism, for instance. Therefore, it’s easy to imagine that the US government will pass laws similar to FOSTA holding Internet providers legally liable for that content. Other examples of user-content that might face FOSTA-style laws include sexual harassment, racism, fake news, and election interference.”

Investors are not happy about this inevitability and at future shareholder meetings they will demand these companies clean up their acts. Since nobody wants to see CEOs and other employees arrested, investors are pushing for censorship of user-generated content.

This would mean the end of free speech on the Internet, because everyone finds everything and anything offensive. It also violates the First Amendment. The backlash is going to be huge, and we cannot wait to see how 4chan, YouTube, and Reddit react.

Whitney Grace, November 7, 2018

Google and Its Smart Software: Stupid?

October 16, 2018

I received an email from the owner of a Web site focused on providing consumers with automobile information. The individual shared with me an email sent to his company by the Google smart entity “publisher-policy-noreply.com”.

The letter was an AdSense Publisher Policy Violation Report. In short, Google’s smart software spotted an offensive article. The Google document said:

  • New violations were detected. As a result, ad serving has been restricted or disabled on pages where these violations of the AdSense Program Policies were found. To resolve the issues, you can either remove the violating content and request a review, or remove the ad code from the violating pages.

Translating the Google speak: “You are showing ads on a page which contains pornography, contraband, hate speech, etc. Make this right, or no AdSense money for you.”

Okay, I was intrigued. How can information about cars be about porn, contraband, hate speech, etc.?

The offensive item, my colleagues and I determined, was a review of a 2004 Saab 9-3 Arc Convertible, published about 14 years ago. The offense was that the review contained words of a sexual nature.

[Image: 2004 Saab label]

Does this vehicle and the height of its trunk or boot offend you? If it does, you are not Googley.

I read the review and noted that the author of the review does indeed focus on an automobile. The problem is that the review is a long tail news story. That means that old content rarely gets clicks. So what’s Google doing? Processing historical data in order to locate porn, contraband, and hate speech? Must be. This suggests that the company is playing catch up. I thought Google was on top of offensive content and had been for more than a decade. Google forbidden word lists have been kicking around for years.

[Image: 2004 Saab convertible rear seat]

I find this extremely suggestive. Perhaps that is why the reviewer described the tiny rear seating area as needful of a way to “ease rear seat access.” I am not sure my French bulldog would fit in the back seat of this Saab, nor could he engage in hanky panky.

I noted that the Saab convertible has a “high rear.” Looking at the picture, it appears the mechanical engineers did increase the height of the trunk or boot in order to accommodate the folding hard top for this model Saab. I am not sure I would have thought the phrase “high rear” was sexual, because I was reading about how the solid convertible top had been accommodated by the engineering team. Who reads about trunk lids or boots as a sexual reference?

But wait. There’s more lingo about the car described about 14 years ago. Check out this passage:

While the convertible’s interior is similar to the sedan’s, with a semi-wraparound cockpit-style instrument panel, it has unique and very comfortable front seats, with the shoulder straps anchored to the seat frame to ease rear-seat access.

Can you spot the offensive language? Well, there’s the cockpit, which I assume could be interpreted in a way different from where the driver sits to drive the vehicle. Then there is “rear seat access.” My goodness. That is offensive. Imagine buying a convertible in which a person could sit in the back seat. Obviously “rear seat” is a trigger phrase. When combined with “cockpit,” the Google smart software becomes, what is the word? Oh, right. Stupid.
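This failure mode is the classic “Scunthorpe problem”: a context-free substring match flags innocuous text. A toy Python filter makes the point; the trigger phrases below are invented for illustration, since Google’s actual lists are not public:

```python
# Hypothetical trigger phrases; Google's real blocklists are not public.
TRIGGER_PHRASES = ["rear seat access", "cockpit"]

def naive_flag(text: str) -> list[str]:
    # Context-free substring matching: the filter cannot tell an
    # automotive "cockpit" from an offensive use of the word.
    lowered = text.lower().replace("-", " ")
    return [p for p in TRIGGER_PHRASES if p in lowered]

review = ("...a semi-wraparound cockpit-style instrument panel, with the "
          "shoulder straps anchored to the seat frame to ease rear-seat access.")
```

Here `naive_flag(review)` returns both phrases, which is roughly how a 14-year-old car review ends up classified as offensive.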

Let’s step back. Some observations:

  • Google positions itself as having a whiz bang system for preventing offensive content from reaching its “customers.” I must say that the system seems to be doing a less than brilliant job. (See? I did not use the word stupid again.) In my DarkCyber video news program for October 23, 2018, I point out that YouTube offers videos which explain to teens how to buy drugs on the Dark Web. The smart filters, I assume, think these vids are A Okay.
  • At the same time Google’s smart software is deciding that car reviews are filthy and offensive, the company is telling elected officials it does not know what it will do about its possible China search system. But today I noted “Sundar Pichai Spoke about Google’s China Plans for the First Time and It Doesn’t Look Like He’s Backing Down.” So Google is thinking more about assisting a government with its censorship effort when it cannot figure out that a car review is not pornographic? Stupid is not the word. Maybe mendacious?
  • The company seems to be expending resources to reprocess content which it had already identified, copied, parsed, and indexed. This Saab story was indexed and available 14 years ago. I wonder if Google realized that its index and Web archives are digital time bombs. Could the content become evidence in the event Google was subjected to a thorough investigation by European or US regulators? House cleaning before visitors arrive? Interesting because the smart software may be tweaked to be overzealous, not stupid at all.

Our view from Harrod’s Creek is simple. We think Google is a smart company. These minor, trivial, inconsequential filter failures are anomalies. In fact, the offensive auto reviews must go. What else must go? Another interesting question.

Google is great. Very intelligent.

I suppose one could pop the boot in the high rear and go for some rear seat access. I think there is a vernacular bound phrase for this sentiment.

Stephen E Arnold, October 16, 2018

Google Censorship Related Document

October 10, 2018

I am not sure this is a real Google document with the name “Google Leak.” If the link goes dead, you are on your own. Plus it is a long one, chock full of quotes, images, and crunchy statements. Some Googlers like crunchy statements.

An entity named Allum Bokhari uploaded the document.

For me the main point is that Google can embrace censorship. Makes sense I suppose.

The images of the slides in a PowerPoint-type presentation could have been created by Google, a third party, or some combination of thinkers with a design firm added for visual spice.

The group through whose hands the artifact passed was Breitbart, a semi-famous outfit. I know this because the name Breitbart is overlaid in orange on each of the pages of the document. The document also contains the Google logo and the branding “Insights Lab.”

I know there is an Insights Lab in Colorado, but it is tough to figure out who created the document from what appears to be hours spent running queries against the Google search engine and fiddling with a PowerPoint-type presentation system.

But who exactly is responsible for the document? Anonymity is popular with the outputs of the New York Times, Bloomberg, and online postings like this one.

The who is a bit of a mystery.

To get the document from Scribd, yep, the service with the pop ups, pleas for sign ups, etc., you have to sign up with Facebook or Google. Makes sense.

Plus, the document contains more than 80 pages, and it takes some time to dig through the lingo, the images, and the gestalt of the construct.

Here’s an image, which explains that the least free countries are China and Syria. The most free countries are Estonia and Iceland. Estonia and Iceland are good places to be free. The downside of Estonia is the tension between Estonians and Russians, who are, if the chart is accurate, not into living without censorship. Plus, the border between Russia and Estonia is not formidable. It is a bit like a potato field in places. Iceland is super, particularly if one enjoys low cost data center services, fishing, hiking, and brisk winters.

[Image: chart of most and least free countries]

The future, it seems, is censorship. I noted the phrase “well ordered spaces for safety and civility.”

The document is worth a look if you can tolerate the fact that one registers via Facebook and Google to view the alleged Google document. Viewing the document for now does not require registration. Downloading may invite endless appeals for cash.

Stephen E Arnold, October 10, 2018

Surf with Freedom: China, Iran, Russia, and Other Countries May Not Notice

October 5, 2018

How does this sound to you?

Intra included the following feature list:

• Free access to websites and apps blocked by DNS manipulation
• No limits on data usage and it won’t slow down your internet connection
• Open source
• Keep your information private – Intra doesn’t track the apps you use or websites you visit
• Customize your DNS server provider — use your own or pick from popular providers

You can get the scoop by reading “On Protected: Your Connection Is Protected from DNS Attacks.”

The service is provided by Jigsaw, an outfit under the wing of Google.

The article explains:

With Intra, they’ve created an app that protects against DNS manipulation. This is an app for the world to access the entire internet without, for example, government censorship.
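Intra’s core idea is DNS-over-HTTPS (RFC 8484): the DNS lookup rides inside a TLS connection, so an on-path party cannot rewrite the answer the way plaintext port-53 queries can be rewritten. A minimal sketch of building such a query in Python follows; the resolver URL is Google’s public DoH endpoint, used here purely for illustration:

```python
import base64
import struct

def dns_query(name: str) -> bytes:
    """Build a minimal DNS wire-format query for an A record."""
    # Header: ID 0, flags 0x0100 (recursion desired), one question.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(struct.pack("B", len(label)) + label.encode()
                     for label in name.split("."))
    # QTYPE=A (1), QCLASS=IN (1).
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

def doh_url(name: str, resolver: str = "https://dns.google/dns-query") -> str:
    """Encode the query as an RFC 8484 GET URL (base64url, no padding)."""
    payload = base64.urlsafe_b64encode(dns_query(name)).rstrip(b"=")
    return f"{resolver}?dns={payload.decode()}"
```

Fetching `doh_url("example.com")` over HTTPS returns the DNS answer inside the encrypted channel; a network that only tampers with port-53 traffic never sees the lookup at all.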

For now this is an Android app; as a mobile phone operating system, Android may be less of a hurdle for some surveillance activities. Of course, authorities in China, Iran, and Russia will remain unaware of this Google-centric app. I wonder if anyone in the US will notice?

Nah, probably not. I like the warnings issued to me by my browsers about unsafe sites, and I think the outcomes of DNS manipulations are interesting.

Stephen E Arnold, October 5, 2018

Content Filtering Seeps Into Mainstream

August 27, 2018

Content filtering is a new trend. For those fearing fake news, or simply tired of bad news, Google is trying to brighten their day. Google Assistant will deliver just good news if you ask it, but is there a dark underbelly to such actions? We started wrestling with this topic after a Digital Trends story, “By Request, Google Assistant Makes it Easy to Find Good News.”

According to the story:

“Google sources the positive difference stories from the Solutions Journalism Network (SJN). The nonprofit, nonpartisan organization focuses on publishing stories about how people can make the world a better place — the practice is called “solutions journalism.” SJN gathers and summaries articles from a large and diverse range of media sources.”

While this seems like a cute news snippet, it is potentially dangerous. Take, for example, the news of an EU official who penned an op-ed about the dangers of filtering copyrighted works. Of course, a bot simply filed a complaint of copyright infringement and got the story wiped from the Internet. Google’s good news filter is far from this kind of deviousness, but it’s also not so far that one day we could all have important, yet unpleasant, news stripped from our world.

Patrick Roland, August 27, 2018
