After Decades of the Online Revolution: The Real Revolution Is Explained

October 9, 2020

Years ago I worked at a fancy, blue chip consulting firm. One of the keys to success in generating the verbiage needed to reassure clients was reading the Economist. The publication, positioned as a newspaper, sure looked like a magazine. I wondered about that marketing angle, and I was usually puzzled by the “insights” about a range of topics. Then an idea struck me: The magazine was a summarizer of data and verbiage for those in the “knowledge” business. I worked through the write ups, tried to recall the mellifluous turns of phrase, and stuffed my “Data to Recycle” folder with clips from the publication.

I read “Faith in Government Declines When Mobile Internet Arrives: A New Study Finds That Incumbent Parties Lose Votes after Their Citizens Get Online.” [A paywall or an institutional subscription may be required to read about this obvious “insight.”] Readers of the esteemed publication will be launching Keynote or its equivalent and generating slide decks. These decks will often remain unfindable by an organization’s enterprise search system or by ineffectual online search systems. That may not be a bad thing.

The “new study” remains deliciously vague: No statistical niceties like who, when, how, etc. Just data and a killer insight:

A central (and disconcerting) implication is that governments that censor offline media could maintain public trust better if they restricted the internet too. But effective digital censorship requires technical expertise that many regimes lack.

The statements raise some interesting questions for experts to explain; for example, “Dictatorships may restore faith in governments.” That’s a topic for a Zoom meeting among one percenters.

Several observations seem to beg for dot pointing:

  1. The “online revolution” began about 50 years ago with a NASA program. What was the impact of those sluggy and buggy online systems like SDC’s? The answer is that information gatekeepers were eviscerated, slowly at first and then hasta la vista.
  2. Gatekeepers provided useful functions. One of these was filtering information and providing some aggregation functions. Recipients of information from the early-days online systems gained some speed up in information access, but not enough to eliminate the need for old fashioned research and analysis. Real time is, by definition, not friendly to gatekeepers.
  3. With the development of commercial online infrastructure and commercial providers, the hunger or addiction to ever quicker online systems was evident. The “need for speed” seemed to be hard wired into those who worked in knowledge businesses. At least one online vendor reduces the past to a pattern and then looks at the “now” data to trigger conclusions. So much for time consuming deliberation of verifiable information.
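The “reduce the past to a pattern, trigger on the now” approach in point 3 can be sketched as a simple baseline-and-threshold alert. This is a generic illustration of the idea, not any particular vendor’s method; the data and the three-deviation threshold are invented:

```python
# Reduce history to a pattern (here, a mean and standard deviation),
# then flag "now" readings that deviate sharply from that baseline.
from statistics import mean, stdev

history = [98, 102, 101, 99, 100, 103, 97]  # past observations (invented)
baseline, spread = mean(history), stdev(history)

def triggers_conclusion(now: float, k: float = 3.0) -> bool:
    """Fire when the current value sits more than k deviations from the baseline."""
    return abs(now - baseline) > k * spread

print(triggers_conclusion(100))  # a typical value: no alert
print(triggers_conclusion(150))  # a sharp deviation: alert
```

The speed comes from skipping deliberation: the pattern is precomputed, so each “now” datum yields an instant conclusion, verifiable or not.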

The article cited above has discovered downstream consequences of behaviors (social and economic) which have been part of the online experience for many years.

The secondary consequences of online extend far beyond mobile devices. TikTok exists for a reason, and that service may be one of the better examples of “knowledge work” today.

One more question: How can institutions, old fashioned knowledge, and prudent decision making survive in today’s datasphere? With Elon Musk’s implants, who will need a mobile phone?

Perhaps the next Economist write up will document that change, hopefully in a more timely manner.

Stephen E Arnold, October 9, 2020

TikTok: Maybe Some Useful Information?

September 19, 2020

US President Donald Trump banned Americans from using TikTok, because of potential information leaks to China. In an ironic twist, The Intercept explains “Leaked Documents Reveal What TikTok Shares With Authorities—In The U.S.” It is not a secret in the United States that social media platforms from TikTok to Facebook collect user data as ways to spy and sell products.

While the US monitors its citizens, it does not take the same censorship measures as China does with its people. The amount of data TikTok gathers for the Chinese is alarming, but leaked documents show that the US also accesses that data. Data privacy has been a controversial topic for years within the United States, and experts argue that TikTok collects the same type of information as Google, Amazon, and Facebook. The documents reveal that ByteDance, TikTok’s parent company, the FBI, and the Department of Homeland Security monitored the platform.

Law enforcement officials use TikTok as a means to monitor social unrest related to the death of George Floyd. Floyd suffocated when a police officer cut off his oxygen attempting to restrain him during arrest. TikTok users post videos about Black Lives Matter, police protests, tips for disarming law enforcement, and even jokes about the US’s current upheaval. TikTok’s user agreement says it collects information and will share it with third parties. The third parties include law enforcement if TikTok feels there is an imminent danger.

TikTok, however, also censors videos, particularly those the Chinese government dislikes. These videos include political views, the Hong Kong protests, Uyghur internment camps, and people considered poor, disabled, or ugly.

Trump might try to make the US appear as the better country, but:

“The common concern, whether we’re talking about TikTok or Huawei, isn’t the intentions of that company necessarily but the framework within which it operates,” said Elsa Kania, an expert on Chinese technology at the Center for a New American Security. “You could criticize American companies for having an opaque relationship to the U.S. government, but there definitely is a different character to the ecosystem.” At the same time, she added, the Trump administration’s actions, including a handling of Portland protests that brought to mind the police crackdown in Hong Kong, have undercut official critiques of Chinese practices: “At a moment when we’re seeing attempts by the administration to draw a contrast in terms of values and ideology with China, these eerie parallels that keep recurring do really undermine that.”

The issue is contentious. Information does not have to be used at the time of collection. The actions of youth can be used to exert pressure at a future time. That may be the larger risk.

Whitney Grace, September 19, 2020

Amazon and Halliburton: A Tie Up to Watch? Yep

September 11, 2020

DarkCyber noted “Explor, Halliburton, AWS Collaborate to Achieve Breakthrough with Seismic Data Processing in the Cloud.” The write up explains that crunching massive seismic data sets works. Among the benchmarks reported by the online bookstore and the environmentally-aware engineering and services companies are:

  • An 85% decrease in CDP sort order times: Tested by sorting 308 million traces comprising 1.72 TB from shot domain to CDP domain, completing the flow in an hour.
  • An 88% decrease in CDP FK Filtering times: Tested with a 57 million-trace subset of the data comprising 318 GB, completing the flow in less than 6 minutes.
  • An 82% decrease in pre-stack time migration times: Tested on the full 165 million-trace dataset comprising 922 GB, completing the flow in 54 minutes.
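For readers who want the bullets above as speedup factors rather than percentage decreases: an X% decrease in runtime implies a 1 / (1 - X/100) speedup. A minimal sketch (the percentages come from the benchmarks above; the conversion is plain arithmetic):

```python
# Convert a reported "X% decrease in runtime" into a speedup factor.
# An 85% decrease leaves 15% of the original runtime, i.e. roughly 6.7x faster.

def speedup_from_decrease(percent_decrease: float) -> float:
    """Speedup factor implied by a percentage decrease in runtime."""
    remaining = 1.0 - percent_decrease / 100.0
    return 1.0 / remaining

benchmarks = {"CDP sort order": 85, "CDP FK filtering": 88, "pre-stack time migration": 82}
for flow, pct in benchmarks.items():
    print(f"{flow}: {pct}% decrease = {speedup_from_decrease(pct):.1f}x faster")
```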

What do these data suggest? Better, faster, and cheaper processing?

We noted this paragraph in the write up:

“The collaboration with AWS and Explor demonstrates the power of digital investments that Halliburton is making, in this instance to bring high-density surveys to market faster and more economically than ever before.  By working with industry thought leaders like Explor and AWS, we have been able to demonstrate that digital transformation can deliver step-change improvements in the seismic processing market.” – Philip Norlund, Geophysics Domain Manager, Halliburton, Landmark

Keep in mind that these data are slightly more difficult to manipulate than a couple hundred thousand tweets.

Stephen E Arnold, September 11, 2020

The Ideal Internet: Point of View Is Important

September 11, 2020

I read “Now the Impact of Regulation on the Internet Can Be Gauged.” Interesting but fanciful, the article lays out what the Internet should be. The main points appear to exist in a mental construct removed from political turmoil, the Rona, and financial challenges.

The write up explains that the Internet Society has crafted an Internet Impact Assessment Toolkit. I learned that the:

Internet Way of Networking (IWN): Defining the Critical Properties of the Internet, … explains how the Internet’s unique foundation is responsible for its strength and success. It also identifies the critical properties that must be protected to enable the Internet to reach its full potential….The Internet Impact Assessment Toolkit is a guide to help ensure regulation, technology trends and decisions don’t harm the infrastructure of the Internet.

Here are the key elements of the IWN:

  1. An accessible infrastructure with a common protocol – A ‘common language’ enabling global connectivity and unrestricted access to the Internet.
  2. An open architecture of interoperable and reusable building blocks – Open infrastructure with a set of standards enabling permission-free innovation.
  3. Decentralized management and a single distributed routing system – Distributed routing enabling local networks to grow, while maintaining worldwide connectivity.
  4. Common global identifiers – A single common identifier allowing computers and devices around the world to communicate with each other.
  5. A technology-neutral, general-purpose network – A simple and adaptable dynamic environment cultivating infinite opportunities for innovation.

Quite idealistic, and the statements do not address the reality of corrosive social networks and the emergence of corporate nation states. And there’s China. Oh, right, China.

Stephen E Arnold, September 11, 2020

Mobile Data Costs Around the World

September 7, 2020

Sometimes it takes looking at the cost of certain services in other countries before we decide whether our situation is acceptable. No, I am not talking about healthcare—Cable.co.uk has published “Worldwide Mobile Data Pricing: The Cost of 1GB of Mobile Data in 228 Countries.” The interactive map makes it clear that the US is making it difficult for some to afford acceptable Internet access.

Anyone who cares to compare should navigate to the map, where one can hover over each country to see highest, lowest, and average prices. The creators have also assigned a rank to each country and note how many plans were sampled and when. Tabs at the top take the curious to “highlights” of the study, regional data, and researcher comments. The description tells us:

“Countries are color-coded by the average price of one gigabyte (1GB) of mobile data. As you can see, this paints an interesting picture, with a lot of the countries where mobile data is cheapest in and around the former USSR, and with some of the most expensive in North America, Africa and Western Europe. …

“Why some countries are missing data: Unlike our measurements of worldwide broadband speed and worldwide broadband pricing, where lack of fixed-line infrastructure meant significant gaps, mobile data provision is near-ubiquitous. However, there are still some countries or territories where either no provision exists, there exists only 2G infrastructure, providing only calls and/or SMS texts, or the data simply isn’t available. And there are countries and regions where problems with the currency do not allow for useful comparison.”

We particularly took note of three enlightening cost comparisons—The US average (in US dollars) of $12.55/GB versus $3.91 in Japan, $1.39 in the UK, and $0.81 in France. Hmm.
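The gap is starker expressed as ratios. A quick sketch using the averages quoted above (the prices are the study figures as reported here):

```python
# Average price of 1 GB of mobile data in USD, as quoted in the study.
prices = {"US": 12.55, "Japan": 3.91, "UK": 1.39, "France": 0.81}

us_price = prices["US"]
for country, price in prices.items():
    if country != "US":
        print(f"The US average is {us_price / price:.1f}x the {country} average")
```

The US figure works out to roughly fifteen times the French average per gigabyte.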

Cynthia Murrell, September 07, 2020

Monoculture and Monopoly Law: Attraction to a Single Point Occurs and Persists

September 2, 2020

Did you hear the alarm clock ring? “Zoom Is Now Critical Infrastructure. That’s a Concern” makes it clear that even the deep sleepers can wake up. What’s the tune on these wizards’ mobile phone? Maybe a fabulous fake of “Still Drowsy after All These Years.” (Sorry, Mr. Simon.)

The write up makes clear that the Brookings community and scholars have been told the following:

  • Zoom is the information superhighway for education
  • Zoom content is visible to Zoom
  • Zoom is fending off the likes of Apple FaceTime, Google Meet and Hangouts, and Microsoft Teams (Skype shoved its hands in the barbeque briquettes, thus making that service less interesting.)
  • Zoom goes down, thus wreaking havoc.

The write up does not suggest that Zoom is up to fancy dancing with authorities from another nation state. The write up does not delve into the tale of the stunning Alex Stamos, a human Swiss Army Knife of security. The write up does not articulate this Arnold Law:

A monoculture and a monopoly manifest attraction to a single online point.

A corollary is:

That single point persists.

In the absence of meaningful oversight, the risk, according to the write up, is this:

By contrast, a successful cyber attack targeting Zoom could bring education and an enormous amount of business activity to a complete halt.

And what about the Zoom data? Useful to some perhaps?

Stephen E Arnold, September 2, 2020

Deepfakes and Other AI Threats

August 19, 2020

As AI technology matures it has greater and greater potential to facilitate bad actors. Now, researchers at the University College London have concluded that falsified audio and video content poses the greatest danger. The university announces its results on its news page in, “‘Deepfakes’ Ranked as Most Serious AI Crime Threat.” The post relates:

“The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern – based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out and how difficult they would be to stop. Authors said fake content would be difficult to detect and stop, and that it could have a variety of aims – from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call. Such content, they said, may lead to a widespread distrust of audio and visual evidence, which itself would be a societal harm.”

Is the public ready to take audio and video evidence with a grain of salt? And what happens when we do? It is not as though first-hand witnesses are more reliable. The rest of the list presents five more frightening possibilities: using driverless vehicles as weapons; crafting more specifically tailored phishing messages; disrupting AI-controlled systems (like power grids, we imagine); large-scale blackmail facilitated by raking in data from the Web; and one of our favorites, realistic AI-generated fake news. The post also lists some crimes of medium- and low-concern. For example, small “burglar bots” could be thwarted by measures as simple as a letterbox cage. The write-up describes the study’s methodology:

“Researchers compiled the 20 AI-enabled crimes from academic papers, news and current affairs reports, and fiction and popular culture. They then gathered 31 people with an expertise in AI for two days of discussions to rank the severity of the potential crimes. The participants were drawn from academia, the private sector, the police, the government and state security agencies.”
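The ranking method described (score each candidate crime on harm, profit, achievability, and difficulty of defeat, then order by aggregate concern) can be sketched roughly as follows. The criterion names follow the quoted passage, but the scores and the simple averaging are illustrative assumptions, not the study’s actual procedure:

```python
# Rank AI-enabled crime threats by averaging expert scores on the four
# criteria the study names: harm, criminal profit, how achievable the
# crime is, and how difficult it is to defeat. Scores are invented.
threats = {
    "audio/video deepfakes": {"harm": 5, "profit": 4, "achievability": 4, "defeat_difficulty": 5},
    "tailored phishing":     {"harm": 3, "profit": 4, "achievability": 5, "defeat_difficulty": 3},
    "burglar bots":          {"harm": 2, "profit": 2, "achievability": 3, "defeat_difficulty": 1},
}

def concern(scores: dict) -> float:
    """Aggregate concern as the mean of the criterion scores."""
    return sum(scores.values()) / len(scores)

ranked = sorted(threats, key=lambda name: concern(threats[name]), reverse=True)
print(ranked)  # deepfakes rank first under these invented scores
```

The real study used two days of structured expert discussion rather than a mechanical average, but the intuition is the same: high harm plus difficulty of defeat is what pushes deepfakes to the top.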

Dawes Centre Director Shane Johnson notes that, as technology evolves, we must anticipate potential threats so policy makers and others can keep up. Yes, that would be nice. Johnson promises more reports are in the organization’s future. Stay tuned.

Cynthia Murrell, August 19, 2020

Free Content: Like Technology, Now a Political Issue

August 3, 2020

Free content is interesting. It seems to represent a loss when compared to content that costs money. But are these two options the only ones? Nope, digital information has a negative cost. I think that’s a fair characterization of the knowledge road many are walking.

For seven years, I have produced “content” and made it available without charge to law enforcement and intelligence professionals in the US and to US allies. When I embarked on this approach, I met with skepticism and questions like “What’s the catch?”

I learned quickly that “free” means hook, trick, or sales ploy. Intrigued by the reaction, I persisted. Over time, my approach was — to a small number of people — somewhat helpful. In a few weeks, I will be 77, and I don’t plan on changing what I do, terminating the researchers who assist me, or telling those who want me to give a talk or write up a profile about one of the companies I follow to get lost.

I thought about my approach when I read “The Truth Is Paywalled But The Lies Are Free.” The title annoyed me because what I do is free. I could identify an interesting organization which has recently availed itself of one of my free reports. My team and I tried to assemble hard-to-find and little known information and package it into a format that was easy-to-understand. Yep, the document was free, and it has found its way into several groups focused on chasing down bad actors.

The write up in Current Affairs, now an online information service, states:

This means that a lot of the most vital information will end up locked behind the paywall… The lie is more accessible than its refutation.

I think I understand. The majority of free content has spin. For-fee content is, therefore, delivered with less spin or without spin.

Is this true?

The reports I prepare describe specific characteristics of a particular technology. In my opinion and that of the researchers who assist me, we make an effort to identify consistent statements, present information for which there is a document like a technical specification, and use cases that are verifiable.

I suppose the fact that I maintain profiles of companies of little interest to most “real” journalists and pundits creates an exception. What I do can be set aside with the comment, “Yeah, but who really cares about the Polish company Datawalk?”

The write up states:

More reason to have publications funded by the centralized free-information library rather than through subscriptions or corporate sponsorship. Creators must be compensated well. But at the same time we have to try to keep things that are important and profound from getting locked away where few people will see them. The truth needs to be free and universal.

I think I see the point. However, my model is different. The content I produce is a side product of what I do. If someone pays me to produce a product or service, I use that money to keep my research group working.

Money can be generated and a portion of it can be designated to an information task. The challenge is finding a way to produce money and then allocating the funds in a responsible way. Done correctly, there is no need to beg for dollars, snarl at Adam Carolla for selling a “monthly nut,” or criticize information monopolies.

These toll booths for information are a result of choice, a failure of regulatory authorities, the friction of established institutions that want “things the way they have to be” thinking, and selfish agendas.

In short, the lack of high value “free” information is distinctly human. I want to point out that even with information paywalls, there are several ways to obtain useful information:

  1. Talking to people, preferably in person, but email works okay
  2. Obtaining publicly accessible documents; for example, patent applications
  3. Comments posted in discussion groups; for instance, the worker at a large tech company who lets slip a useful factoid
  4. Information recycled by wonky news services; for example, the GoCurrent outfit.

The real issue is that “free” generally triggers fear, doubt, and uncertainty. Paying for something means reliable, factual, and true.

Put my approach aside. Step back from the “create a universal knowledge bank which anyone can access” idea. Forget the paying-the-author angle.

High-value information exists in the flows of data. Knowledge can be extracted from deltas; that is, changes in the data themselves. The trick is point of view and noticing. The more one tries to survive by creating information, the more likely it is that generating cash will be difficult if not impossible.
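The “knowledge from deltas” idea can be sketched as comparing two snapshots of the same feed and keeping only what changed. A generic illustration with invented data:

```python
# Compare two snapshots of the same feed and keep only what changed.
# The delta, not either snapshot, carries the new information.
yesterday = {"acme": 120, "globex": 75, "initech": 40}
today = {"acme": 120, "globex": 90, "initech": 40, "hooli": 10}

deltas = {
    key: today[key] - yesterday.get(key, 0)
    for key in today
    if today[key] != yesterday.get(key, 0)
}
print(deltas)  # only the entries that moved or appeared
```

The point of the sketch: most of each snapshot is noise that was already known; the changes are where the noticing happens.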

Therefore, high value content can be the result of doing other types of a knowledge work. Get paid for that product or service, then generate information and give it away.

That’s what I have been doing, and it seems to work okay for me. For radicals, whiners, monopolists fearful of missing a revenue forecast — do some thinking, then come up with a solution.

What’s going on now seems to be a dead end to me. Ebsco and LexisNexis live in fear of losing a “big customer.” Therefore, prices go up. Fewer people can afford the products. The knowledge these companies have becomes more and more exclusive. I get it.

But these firms, and to some extent government agencies which charge for data assembled and paid for with taxpayer dollars, are accelerating intellectual loss.

The problem is a human and societal one. I am going to keep chugging along, making my content free. I think the knowledge economy seems to be one more signal that the datasphere is not a zero sum game. Think in terms of a negative number. We now have a positive (charging for information), free (accessing information for nothing), and what I call the “data negative” or D-neg (the loss of information and by extension being “informed”).

In my experience, D-neg accelerates stupidity. That’s a bigger problem than many want to think about. Arguing about the wrong thing seems to be the status quo; that is, generating negatives.

Stephen E Arnold, August 3, 2020

More about India App Banning

July 23, 2020

India and China are not likely to hold a fiesta to celebrate the digital revolution in the next month or two. “Government Said to Ask Makers of 59 Banned Chinese Apps to Ensure Strict Compliance” explains that India has some firm ideas about the potential risks of Chinese-centric and Chinese-developed mobile applications. The risks include actions “prejudicial to sovereignty, integrity and security of the country.”

The write up states:

If any app in the banned list is found to be made available by the company through any means for use within India, directly or indirectly, it would be construed as a violation of the government orders…

It is not clear what action the Indian government can take, but obviously the issue is perceived as important; specifically, the accusation relates to the:

stealing and surreptitiously transmitting users’ data in an unauthorized manner to servers which have locations outside India.

Among the nearly 60 banned apps are:

  • Club Factory
  • TikTok
  • UC Browser
  • WeChat
  • Xiaomi

Plus, some less high profile services:

  • Bigo Live
  • CamScanner
  • Helo
  • Likee
  • Shein

There will be workarounds, of course. It is not clear what the consequences will be if a citizen persists in using a Xiaomi phone and its baked in apps (some of which route interesting information through data centers in Singapore).

Censorship of the Internet is thriving and becoming an active measure in India and other countries. Why? Because Internet, of course.

Stephen E Arnold, July 23, 2020

Digital Fire Hoses: Destructive and Must Be Controlled by Gatekeepers

July 16, 2020

Let’s see how many individualistic thinkers I have offended with my headline. I apologize, but I am thinking about the blast of stories about the most recent Twitter “glitch”: “Apple, Biden, Musk and Other High-Profile Twitter Accounts Hacked in Crypto Scam.”

Are you among the individuals whom I am offending in this essay?

First, we have the individuals who did not believe my observations made in my ASIS Eagleton Lecture 40 years ago. Flows of digital information are destructive. The flows erode structures like societal norms, logical constructs, and organizational systems. Yep, these are things. Unfettered flows of information cut them down, efficiently and steadily. In some cases, a single datum can set off something like a nuclear chain reaction.

Those chain reactions are energetic in some cases.

Second, individuals who want to do any darn thing they want. These individuals form a cohort—either real or virtual—and have at it. I have characterized this behavior in my metaphor of the high school science club. The idea is that anyone “smart” thinks that his or her approach to a problem is an intelligent one. Sufficiently intelligent individuals will recognize the wisdom of the idea and jump aboard. High school science clubs can be a useful metaphor for understanding the cute and orthogonal behavior of some high technology firms. It also describes the behavior of a group of high school students who use social media to poke fun or “frame” a target. Some nation states direct their energies at buttons which will ignite social unrest or create confusion. Thus, successful small science clubs can grow larger and be governed — if that’s the right word — by high school science club management methods. That’s why students at MIT put weird objects on buildings or perform cool pranks. Really cool, right?

Third, individuals who do not want gatekeepers. I use the phrase “adulting” to refer to individuals able to act in an informed, responsible, and ethical manner when deciding what content becomes widely available and what does not. I used to work for an outfit which published newspapers, ran TV stations, and built commercial databases. The company at that time had the “adulting” approach well in hand. Those who decry informed human controls will not like my conclusion: it is time to put thumbs in digital dikes.


