Mauritania Shuts Down Internet During Elections

July 12, 2019

Africa was shafted by colonial powers, and now the continent is shafting itself with corruption across its numerous countries. Africa remains home to some of the poorest nations on Earth, and according to Quartz, many of these countries habitually shut down the Internet, as reported in “Mauritania Blocked The Internet Over Protests Though Just One In Five People Are Online.” Countries that have shut off the Internet include Liberia, Benin, the Democratic Republic of Congo, Chad, and Algeria. More recently, Sudan shut off lines when protesters demanded President Omar al-Bashir leave office and an end to military rule. Ethiopia cut its surfing power to curb cheating on exams and when there were rumors of a coup. The African Internet gets turned off for numerous reasons, mostly political: elections, government protests, and political referenda.

Mauritania took its turn to shut down the Internet amid its contested election. People hoped the election would be the first peaceful transfer of power since the country gained its independence in 1960. When the results were tallied, the ruling party won with 52%, but the opposition challenged the results, and the government suspended mobile and fixed Internet lines. It points to a government afraid of any opposing force and willing to use extreme measures to maintain control. Most African governments do not offer explanations, but some explain shutdowns away as limiting hate speech, fake news, and violence.

Mauritania is indicative of the problems around the entire continent:

“Campaigners say the shutdown in Mauritania is only exacerbating the situation and preventing journalists, human rights defenders, and opposition groups from freely accessing and exchanging information. Mauritanian television also broadcast foreigners from neighboring countries confessing to ferment trouble following the polls—a “toxic and highly problematic” issue, activists say, in a country still battling racial discrimination and the vestiges of slavery.”

Freedom of information and communication is key to a democratic society and gives power to the people. Heavy-handedness might have its place in times of war, but during elections in a country that is supposed to be democratic, it is a troubling sign.

Whitney Grace, July 11, 2019

Criticizing the Digital Czarina of Silicon Valley

May 31, 2019

DarkCyber would not criticize Kara Swisher. We think that her method of talking over those whom she interviews is just an outstanding way to deliver understandable audio. We find her summaries of her stellar career in journalism necessary because some of the DarkCyber team (like me) have a lousy memory for crucial information. We consider her interactions with the kind, patient, and deeply informed author of The Algebra of Happiness a remarkable opportunity to learn how life is to be lived in the 21st century.


But TechDirt has a different point of view, expressed clearly in “Dear Kara Swisher: Don’t Let Your Hatred of Facebook Destroy Free Speech Online.” See, that’s what a brave person, steeped in the law, will share about a digital czarina of Silicon Valley.

We noted this statement in the 1,362-word epistle:

This is wrong on so many levels that it makes me wonder where Swisher is getting her information from.

The “wrong” refers to Ms. Swisher’s posture toward Facebook censorship.

We also circled in blue, this statement:

…her analysis is simply incorrect.

Yikes. An error in analysis. The “incorrect” refers to Section 230 and other legal matters.

We also underlined this passage:

For quite some time now, we’ve been talking about the “impossibility” of doing content moderation at scale well. There are always going to be disagreements. But Section 230 is what allows for experimentation. People can (and should) criticize Facebook when they think the company made the wrong call, but to blithely toss Section 230 under the bus as the reason for Facebook failing to meet her own exacting standards, Swisher is actually throwing the open internet and free speech under the bus instead. It’s a horrifically bad take, and one that Swisher should know better about.

There it is. Ms. Swisher is not fully informed. (My mother used to tell me, “You should know better.” I assume this phrasing is part of the adulting movement.)

To wrap up, my hunch is that two important people in the world of Silicon Valley may exchange further communications.

Will the Czarina respond directly, or will a colleague or former colleague (of which there appear to be many) pick up the gauntlet and slap TechDirt in the head in order to knock some sense and appreciation into it?

Worth watching. There’s nothing like a lawyer and czarina dust-up to reveal why Silicon Valley is held in such high regard by millions of people. DarkCyber will watch from a safe distance, of course. When elephants fight, only the grass suffers.

Stephen E Arnold, May 31, 2019

Making, Not Filtering, Disinformation

April 8, 2019

I spotted a link to this article on Sunday (April 7, 2019). The title of the “real news” report was “Facebook Is Asking to Be Regulated but Wants to Choose How.” The write-up ostensibly was about Facebook’s realization that regulation would be good for everyone. Mark Zuckerberg wants to be able to do his good work within a legal framework.

I noted this passage in the article:

Facebook has been in the vanguard of creating ways in which both harmful content can be generated and easily sent to anyone in the world, and it has given rise to whole new categories of election meddling. Asking for government regulation of “harmful content” is an interesting proposition in terms of the American constitution, which straight-up forbids Congress from passing any law that interferes with speech under the first amendment.

I also circled this statement:

Facebook went to the extraordinary lengths of taking out “native advertising” in the Daily Telegraph. In other words ran a month of paid-for articles demonstrating the sunnier side of tech, and framing Facebook’s efforts to curb nefarious activities on its own platform. There is nothing wrong with Facebook buying native advertising – indeed, it ran a similar campaign in the Guardian a couple of years ago – but this was the first time that the PR talking points adopted by the company have been used in such a way.

From Mr. Zuckerberg’s point of view, he is sharing his ideas.

From the Guardian’s point of view, he is acting in a slippery manner.

From the newspapers reporting about his activities and, in the case of the Washington Post, providing him with an editorial forum, news is news.

But what’s the view from Harrod’s Creek? Let me share a handful of observations:

  1. If a person pays money to a PR firm to get information in a newspaper, that information is “news” even if it sets forth an agenda.
  2. Identifying disinformation or weaponized information is difficult, it seems, for humans involved in creating “real news”. No wonder software struggles. Money may cloud judgment.
  3. Information disseminated from seemingly “authoritative” sources is not much different from the info rocks from a digital slingshot. Disgruntled tweeters and unhappy Instagramers can make people duck and respond.

For me, disinformation, reformation, misinformation, and probably regular old run-of-the-mill information is unlikely to be objective. Therefore, efforts to identify and filter these payloads are likely to be very difficult.

Stephen E Arnold, April 8, 2019

The Function of Filters

April 4, 2019

Filters block access to words, sites, or other items identifiable via modern computation; for example, a pattern of relationships and addresses of certain businesses or people. An online publication Abacus reports an item of information which makes clear that it is important to be in charge of filters. “Chinese Browsers Block Protest against China’s 996 Overtime Work Culture” asserts:

A number of Chinese browsers, including Tencent’s QQ Browser, Qihoo’s 360 Browser and the native browser on Xiaomi smartphones, have restricted user access to the 996.icu repository on GitHub.

Maybe the only way to get unfiltered information is to work in the agency examining content to figure out what one should not see? What if Bing, Google, and Yandex were blocking access to content and no one except those working in the censorship department knew? Interesting to consider.
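The blocking Abacus describes can be as crude as matching the requested URL against a vendor-maintained blocklist. Here is a minimal illustrative sketch; the blocklist contents and function names are invented for illustration, not any browser vendor’s actual mechanism:

```python
# Sketch of a URL blocklist check of the kind a browser vendor could
# apply before loading a page. The entries are hypothetical examples.
from urllib.parse import urlparse

# (hostname, path prefix) pairs to block -- illustrative only
BLOCKED = {("github.com", "/996icu/996.ICU")}

def is_blocked(url: str) -> bool:
    """Return True if the URL matches a blocklisted host and path prefix."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return any(host == h and parsed.path.startswith(p) for h, p in BLOCKED)

print(is_blocked("https://github.com/996icu/996.ICU"))  # True
print(is_blocked("https://github.com/torvalds/linux"))  # False
```

Note that this kind of path-level blocking lets a vendor censor one repository while leaving the rest of GitHub reachable, which matches what users of the affected browsers reported.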

Stephen E Arnold, April 4, 2019

Apple Conforms. No Wonder Certain US Government Officials Are Agitated with the Cupertino Elite

April 3, 2019

Apple’s attitude toward certain government officials has legs. San Bernardino, foot dragging, and China supplication: not the best way to win friends and influence people in DC. The information in “Apple Censoring the News” may not be 100 percent accurate. But the description of how Apple has engineered a way to dress in a government regulation uniform is interesting.

The write up states:

To accomplish this censorship Apple is using a form of location fingerprinting that is not available to normal applications on iOS. It works like this: despite the fact that your phone uses a SIM from a US carrier it must connect to a Chinese cellular network. Apple is using private APIs to identify that you are in mainland China based on the name of the underlying cellular network and blocking access to the News app. This information is not available via public APIs in iOS specifically to improve privacy for users.

Why the razzle dazzle? To make certain that a mobile with a non-Chinese SIM cannot access blocked online services. Apple is taking a page from Burger King’s approach. Certain customers can indeed have it their way. An express window for some customers, and another line for “other” people where some news is only $10 per month.
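The logic the write-up describes, gating by the serving network rather than the SIM, can be sketched in a few lines. This is an illustrative model only; the function names and carrier list are invented, and Apple’s private APIs are of course not public:

```python
# Hypothetical sketch of region gating keyed to the serving cellular
# network name instead of the SIM's home country. All names here are
# invented for illustration.

MAINLAND_CHINA_NETWORKS = {"China Mobile", "China Unicom", "China Telecom"}

def news_app_available(sim_country: str, serving_network: str) -> bool:
    """Block the News app whenever the phone is attached to a mainland
    Chinese network, regardless of where the SIM was issued."""
    if serving_network in MAINLAND_CHINA_NETWORKS:
        return False
    # Outside mainland networks, availability follows the SIM region.
    return sim_country != "CN"

# A US SIM roaming onto a Chinese network is still blocked.
print(news_app_available("US", "China Mobile"))  # False
print(news_app_available("US", "Verizon"))       # True
```

The design point is that the SIM check alone would let travelers with foreign SIMs bypass the block; keying on the serving network closes that gap, which is exactly the behavior the write-up complains about.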

Stephen E Arnold, April 3, 2019

Deep Fakes: A Tough Nut to Crack

February 8, 2019

If you are in the media or intelligence business, you undoubtedly already know about the potential of deep fake or “deepfake” videos: clips that use AI technology to create realistic, completely fabricated videos from existing footage. The catch is that they are getting more and more convincing, and that’s not good, as we discovered in a recent Phys.org article, “Misinformation Woes Could Multiply with Deepfake Videos.”

According to the story:

“As the technology advances, worries are growing about how deepfakes can be used for nefarious purposes by hackers or state actors. ‘A well-timed and thoughtfully scripted deepfake or series of deepfakes could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy’s supposed atrocities, or exacerbate political divisions in a society.’”

What’s “true” and what’s “false” is an issue which may not lend itself to zeros and ones. Google asserts that it is developing software that helps spot deepfakes. Does Google have a solution?

Does anyone?

If an artifact is created and someone labels it “false,” smart software has to decide. Humans, history suggests, struggle with defining the truth.

The problem is likely to be difficult to resolve. Censorship anyone?

Patrick Roland, February 8, 2019

Censorship: An Interesting View

December 7, 2018

I read “Former ‘Guardian’ Editor On Snowden, WikiLeaks And Remaking Journalism.”

I noted this passage:

In the modern world, it is very difficult to prevent good information (and sadly, bad information) … from being published, because it’s like water, and you can’t you can’t control it in the way that you could even 50 years ago. [emphasis added]

That 50-year date means that censorship was easy and presumably widely practiced in 1968.

Interesting.

How did I come to know about Prague Spring, the murder of Martin Luther King, the assassination of Senator Robert Kennedy, anti-Vietnam protests, Surveyor 7, Apollo 8’s trip around the moon, the strike in Paris, the Pueblo (remember Mogen David and the Grapes of Wrath), and my getting encouragement in my quest to index Latin sermons?

Telepathy? What did I miss?

Stephen E Arnold, December 7, 2018

Censorship: Deleted and Blocked Content Popular

November 7, 2018

The Internet is a tool, and companies harness it to offer services such as social media, search, news, and commerce. These companies act as portals for users to post their information and content. Section 230 of the Communications Decency Act protects companies from being held liable for their users’ actions. This means that companies generally cannot be sued or prosecuted for what their users share. This could all change.

Inc. takes a look at how this could change in the article “Facebook, Google, And Twitter Must Censor The Web, Demand Investors.” Why would this change? Because bad actors use social media and other services for illegal activities. The law that has already narrowed this liability shield is the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), under which Web sites can be held liable for content posted on them. Any content posted on, say, Facebook, Twitter, or Google that results in illegal activities could expose the Internet providers to prosecution.

“FOSTA creates a legal precedent to hold Internet providers responsible for user-created content that drives other behaviors. Hate speech might lead to murder and terrorism, for instance. Therefore, it’s easy to imagine that the US government will pass laws similar to FOSTA holding Internet providers legally liable for that content. Other examples of user-content that might face FOSTA-style laws include sexual harassment, racism, fake news, and election interference.”

Investors are not happy about this inevitability and at future shareholder meetings they will demand these companies clean up their acts. Since nobody wants to see CEOs and other employees arrested, investors are pushing for censorship of user-generated content.

This would mean the end of free speech on the Internet, because everyone finds something offensive. It also arguably violates the First Amendment. The backlash is going to be huge, and we cannot wait to see how 4chan, YouTube, and Reddit react.

Whitney Grace, November 7, 2018

Google and Its Smart Software: Stupid?

October 16, 2018

I received an email from the owner of a Web site focused on providing consumers with automobile information. The individual shared with me an email sent to his company by the Google smart entity “publisher-policy-noreply.com”.

The letter was an AdSense Publisher Policy Violation Report. In short, Google’s smart software spotted an offensive article. The Google document said:

  • New violations were detected. As a result, ad serving has been restricted or disabled on pages where these violations of the AdSense Program Policies were found. To resolve the issues, you can either remove the violating content and request a review, or remove the ad code from the violating pages.

Translating the Google speak: “You are showing ads on a page which contains pornography, contraband, hate speech, etc. Make this right, or no AdSense money for you.”

Okay, I was intrigued. How can information about cars be about porn, contraband, hate speech, etc.?

The offensive item, my colleagues and I determined, was a review of a 2004 Saab 9-3 Arc Convertible, published about 14 years ago. The offense was that the review contained words of a sexual nature.

[Image: 2004 Saab label]

Does this vehicle and the height of its trunk or boot offend you? If it does, you are not Googley.

I read the review and noted that the author of the review does indeed focus on an automobile. The problem is that the review is a long tail news story. That means that old content rarely gets clicks. So what’s Google doing? Processing historical data in order to locate porn, contraband, and hate speech? Must be. This suggests that the company is playing catch up. I thought Google was on top of offensive content and had been for more than a decade. Google forbidden word lists have been kicking around for years.


I find this extremely suggestive. Perhaps that is why the reviewer described the tiny rear seating area as needful of a way to “ease rear seat access.” I am not sure my French bulldog would fit in the back seat of this Saab, nor could he engage in hanky panky.

I noted that the Saab convertible has a “high rear.” Looking at the picture, it appears the mechanical engineers increased the height of the trunk or boot to accommodate the folding hard top for this model Saab. I am not sure I would have thought the phrase “high rear” was sexual, because I was reading about how the solid convertible top had been accommodated by the engineering team. Who reads about trunk lids or boots as a sexual reference?

But wait. There’s more lingo about the car described about 14 years ago. Check out this passage:

While the convertible’s interior is similar to the sedan’s, with a semi-wraparound cockpit- style instrument panel, it has unique and very comfortable front seats, with the shoulder straps anchored to the seat frame to ease rear-seat access.

Can you spot the offensive language? Well, there’s the cockpit, which I assume could be interpreted in a way different from where the driver sits to drive the vehicle. Then there is “rear seat access.” My goodness. That is offensive. Imagine buying a convertible in which a person could sit in the back seat. Obviously “rear seat” is a trigger phrase. When combined with “cockpit,” the Google smart software becomes. What is the word. Oh, right. Stupid.
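The failure mode suggested above, a context-blind phrase list flagging an innocuous car review, is easy to reproduce. Here is a minimal sketch; the phrase list is invented for illustration, since Google’s actual classifier is not public:

```python
# Sketch of a naive phrase-list content filter, the kind of approach
# the write-up suspects flagged the Saab review. The phrase list is
# hypothetical.

FLAGGED_PHRASES = ["cockpit", "rear seat access", "high rear"]

def flag_content(text: str) -> list[str]:
    """Return every flagged phrase found in the text, ignoring case
    and context entirely -- which is exactly the problem."""
    lowered = text.lower()
    return [p for p in FLAGGED_PHRASES if p in lowered]

review = ("...a semi-wraparound cockpit-style instrument panel... "
          "shoulder straps anchored to the seat frame to ease "
          "rear-seat access.")
print(flag_content(review))  # ['cockpit']
```

Note the brittleness cuts both ways: the innocuous “cockpit” is flagged, while the hyphenated “rear-seat access” slips past this invented list entirely. Context-free matching produces both false positives and false negatives, which is roughly the behavior the write-up complains about.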

Let’s step back. Some observations:

  • Google positions itself as having a whiz-bang system for preventing offensive content from reaching its “customers.” I must say that the system seems to be doing a less than brilliant job. (See. I did not use the word stupid again.) In my DarkCyber video news program for October 23, 2018, I point out that YouTube offers videos which explain to teens how to buy drugs on the Dark Web. The smart filters, I assume, think these vids are A-okay.
  • At the same time Google’s smart software is deciding that car reviews are filthy and offensive, the company is telling elected officials it does not know what it will do about its possible China search system. But today I noted “Sundar Pichai Spoke about Google’s China Plans for the First Time and It Doesn’t Look Like He’s Backing Down.” So Google is thinking more about assisting a government with its censorship effort when it cannot figure out that a car review is not pornographic? Stupid is not the word. Maybe mendacious?
  • The company seems to be expending resources to reprocess content which it had already identified, copied, parsed, and indexed. This Saab story was indexed and available 14 years ago. I wonder if Google realized that its index and Web archives are digital time bombs. Could the content become evidence in the event Google was subjected to a thorough investigation by European or US regulators? House cleaning before visitors arrive? Interesting because the smart software may be tweaked to be overzealous, not stupid at all.

Our view from Harrod’s Creek is simple. We think Google is a smart company. These minor, trivial, inconsequential filter failures are anomalies. In fact, the offensive auto reviews must go. What else must go? Another interesting question.

Google is great. Very intelligent.

I suppose one could pop the boot in the high rear and go for some rear seat access. I think there is a vernacular bound phrase for this sentiment.

Stephen E Arnold, October 16, 2018

Google Censorship Related Document

October 10, 2018

I am not sure this is a real Google document with the name “Google Leak.” If the link goes dead, you are on your own. Plus, it is a long one, chock-full of quotes, images, and crunchy statements. Some Googlers like crunchy statements.

An entity named Allum Bokhari uploaded the document.

For me the main point is that Google can embrace censorship. Makes sense I suppose.

The images of the slides in a PowerPoint-type presentation could have been created by Google, a third party, or some combination of thinkers with a design firm added for visual spice.

The group through whose hands the artifact passed was Breitbart, a semi-famous outfit. I know this because the name Breitbart is overlaid in orange on each page of the document. The document also contains the Google logo and the branding “Insights Lab.”

I know there is an Insights Lab in Colorado, but it is tough to figure out who created the document from what appears to be hours spent running queries against the Google search engine and fiddling with a PowerPoint-type presentation system.

But who exactly is responsible for the document? Anonymity is popular with the outputs of the New York Times, Bloomberg, and online postings like this one.

The who is a bit of a mystery.

To get the document from Scribd, yep, the service with the pop ups, pleas for sign ups, etc., you have to sign up with Facebook or Google. Makes sense.

Plus, the document contains more than 80 pages, and it takes some time to dig through the lingo, the images, and the gestalt of the construct.

Here’s an image, which explains that the least free countries are China and Syria, and the most free are Estonia and Iceland. Estonia and Iceland are good places to be free. The downside of Estonia is the tension between Estonians and Russians, who are, if the chart is accurate, not into living without censorship. Plus, the border between Russia and Estonia is not formidable; it is a bit like a potato field in places. Iceland is super, particularly if one enjoys low cost data center services, fishing, hiking, and brisk winters.

[Image: chart ranking countries by Internet freedom]

The future, it seems, is censorship. I noted the phrase “well ordered spaces for safety and civility.”

The document is worth a look if you can tolerate the fact that one registers via Facebook and Google to view the alleged Google document. Viewing the document for now does not require registration. Downloading may invite endless appeals for cash.

Stephen E Arnold, October 10, 2018
