Calls for AI Pause Futile at This Late Date

August 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Well, the nuclear sub has left the base. A group of technology experts recently called for a six-month pause on AI rollouts to avoid the “loss of control of our civilization” to algorithms that their open letter warns about. That might be a good idea—if it had a snowball’s chance of happening. As it stands, observes ComputerWorld‘s Rob Enderle, “Pausing AI Development Is a Foolish Idea.” We think foolish is not a sufficiently strong word. Perhaps regulation could have been established before the proverbial horse left the barn, but by now there are more than 500 AI startups, according to Jason Calacanis, noted entrepreneur and promoter.


A sad sailor watches the submarine to which he was assigned leave the dock without him. Thanks, MidJourney. No messages from Mother MJ on this image.

Enderle opines as a premier pundit:

“Once a technology takes off, it’s impossible to hold back, largely because there’s no strong central authority with the power to institute a global pause — and no enforcement entity to ensure the pause directive is followed. The right approach would be to create such an authority beforehand, so there’s some way to assure the intended outcome. I tend to agree with former Microsoft CEO Bill Gates that the focus should be on assuring AI reliability, not trying to pause everything. … There simply is no global mechanism to enforce a pause in any technological advance that has already reached the market.”

We are reminded that even work on human cloning, which is illegal in most of the world, continues apace. The only thing bans seem to have accomplished there is to obliterate transparency around cloning projects. There is simply no way to rein in all the world’s scientists. Not yet. Enderle offers a grain of hope on artificial intelligence, however. He notes it is not too late to do for general-purpose AI what we failed to do for generative AI:

“General AI is believed to be more than a decade in the future, giving us time to devise a solution that’s likely closer to a regulatory and oversight body than a pause. In fact, what should have been proposed in that open letter was the creation of just such a body. Regardless of any pause, the need is to ensure that AI won’t be harmful, making oversight and enforcement paramount. Given that AI is being used in weapons, what countries would allow adequate third-party oversight? The answer is likely none — at least until the related threat rivals that of nuclear weapons.”

So we have that to look forward to. And clones, apparently. The write-up points to initiatives already in the works to protect against “hostile” AI. Perhaps they will even be effective.

Cynthia Murrell, August 16, 2023

The Secret Cultural Erosion Of Public Libraries: Who Knew?

August 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It appears the biggest problem public and school libraries are dealing with is demands to ban controversial gay and trans titles. While some libraries are facing closures or complete withdrawals of funding, they mostly appear to be in decent standing. Karawynn Long unfortunately discovered that is not the case. She spills the printer’s ink in her Substack post “The Coming [Cultural Erosion] Of Public Libraries,” with the cleverly deplorable subtitle “global investment vampires have positioned themselves to suck our libraries dry.”

Before she details how a greedy corporation is bleeding libraries like a leech, Long explains that there is a looming cultural erosion brought on by capitalism. A capitalist economic system is not inherently evil, but bad actors exploit it. Long uses a more colorful word to describe libraries’ cultural erosion. In essence, the colorful word means something good deteriorating into crap.

A great example is when corporations use a platform, e.g., Facebook, Twitter, or Amazon, to pit buyers and sellers against each other while those at the top run away with heaps of cash.

This ties back to public libraries because they use a digital library app called OverDrive. Library patrons use OverDrive to access copies of digital books, videos, audiobooks, magazines, and other media. It is the only app available to public libraries to manage digital media. Patrons can access OverDrive via an app called Libby or a Web site portal. In May 2023, the Web site portal deleted a feature that allowed patrons to recommend new titles to their libraries.

OverDrive wants to force users to adopt its Libby app. The Libby app has a “notify me” option that alerts users when their library acquires an item. OverDrive’s overlords also want to collect sellable user data, as other companies do. Notably, OverDrive is owned by the global investment firm KKR, Kohlberg Kravis Roberts.

KKR is one of the vilest investment capital companies, dubbed a “vampire capitalist” firm, and it has a fanged hold on the US’s public libraries. OverDrive flaunts its B corporation status, but that does not mask the villain lurking behind the curtain:

“ As one library industry publication warned in advance of the sale to KKR, ‘This time, the acquisition of OverDrive is a ‘financial investment,’ in which the buyer, usually a private equity firm or other financial sponsor, expects to increase the value of the company over the short term, typically five to seven years.’ We are now three years into that five-to-seven, making it likely that KKR’s timeframe for completing maximum profit extraction is two to four more years. Typically this is accomplished by levying enormous annual “management fees” on the purchased company, while also forcing it (through Board of Director mandates) to make changes to its operations that will result in short-term profit gains regardless of long-term instability. When they believe the short-term gains are maxed out, the investment firm sells off the company again, leaving it with a giant pile of unsustainable debt from the leveraged buyout and often sending it into bankruptcy.”

OverDrive likely plans to sell user data, then bleed the public libraries dry until local and federal governments shout, “Uncle!” Between book bans and rising inflation, public libraries will see a reckoning with their budgets before 2030.

Whitney Grace, August 25, 2023

India, Where Regulators Actually Try or Seem to Try

August 22, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Data Act Will Make Digital Companies Handle Info under Legal Obligation.” The article reports that India’s regulators are beavering away in an attempt to construct a dam to stop certain flows of data. The write up states:

Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar on Thursday [August 17, 2023] said the Digital Personal Data Protection Act (DPDP Act) passed by Parliament recently will make digital companies handle the data of Indian citizens under absolute legal obligation.

What about certain high-technology companies operating with somewhat flexible methods? The article uses the phrase “punitive consequences of high penalty and even blocking them from operating in India.”


US companies’ legal eagles take off. Destination? India. MidJourney captures 1950s grade school textbook art quite well.

This passage caught my attention because nothing quite like it has progressed in the US:

The DPDP [Digital Personal Data Protection] Bill is aimed at giving Indian citizens a right to have his or her data protected and casts obligations on all companies, all platforms be it foreign or Indian, small or big, to ensure that the personal data of Indian citizens is handled with absolute (legal) obligation…

Will this proposed bill become law? Will certain US high-technology companies comply? I am not sure of the answer, but I have a hunch that a dust up may be coming.

Stephen E Arnold, August 22, 2023

Thought Leader Thinking: AI Both Good and Bad. Now That Is an Analysis of Note

August 17, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read what I consider a “thought piece.” This type of essay discusses a topic and attempts to place it in a context of significance. The “context” is important. A blue chip consulting firm may draft a thought piece about forever chemicals. Another expert can draft a thought piece about these chemicals in order to support the companies producing them. When thought pieces collide, there is a possible conference opportunity, definitely some consulting work to be had, and today maybe a ponderous online webinar. (Ugh.)


A modern Don Quixote and thought leader essay writer lines up a windmill and charges. The bold 2023 Don shouts, “Vile and evil windmill, you pretend to grind grain, but you are a mechanical monster destroying the fair land. Yield, I say.” The mechanical marvel just keeps on turning, and the modern Don is ignored until a blade of the windmill knocks the knight to the ground. Thanks, MidJourney. It only took three tries to get close to what I described. Outstanding evidence of degradation of function.

“The AI Power Paradox: Can States Learn to Govern Artificial Intelligence—Before It’s Too Late?” considers the “problem” of smart software. My recollection is that artificial intelligence and machine learning have been around for decades. I have a vivid recollection of a person named, I believe, Marvin Weinberger. This gentleman made an impassioned statement at an Information Industry Association meeting about the need for those in attendance to amp up their work with smart software. The year, as I recall, was 1981.

The thought piece does not dwell on the long history of smart software. The interest is in what the thought piece presents as its context; that is:

And generative AI is only the tip of the iceberg. Its arrival marks a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies.

The excitement about smart software is sufficiently robust to magnetize those who write thought pieces. Is the outlook happy or sad? You judge. The essay asserts:

In May 2023, the G-7 launched the “Hiroshima AI process,” a forum devoted to harmonizing AI governance. In June, the European Parliament passed a draft of the EU’s AI Act, the first comprehensive attempt by the European Union to erect safeguards around the AI industry. And in July, UN Secretary-General Antonio Guterres called for the establishment of a global AI regulatory watchdog.

I like the reference to Hiroshima.

The thought piece points out that AI is “different.”

It does not just pose policy challenges; its hyper-evolutionary nature also makes solving those challenges progressively harder. That is the AI power paradox. The pace of progress is staggering.

The thought piece points out that AI or any other technology is “dual use”; that is, one can make a smart microwave or one can make a smart army of robots.

Where is the essay heading? Let’s try to get a hint. Consider this passage:

The overarching goal of any global AI regulatory architecture should be to identify and mitigate risks to global stability without choking off AI innovation and the opportunities that flow from it.

From my point of view, we have a thought piece which recycles a problem similar to squaring the circle.

The fix, according to the thought piece, is to create a “minimum of three AI governance regimes, each with different mandates, levers, and participants.”

To sum up, we have consulting opportunities, we have webinars, and we have global regulatory “entities.” How will that work out? Have you tried to get someone in a government agency, a non-governmental organization, or a federation of conflicting interests to answer a direct question?

While one waits for the smart customer service system to provide an answer, the decades-old technology will zip along, leaving thought piece ideas in the dust. Talk global; fail local.

Stephen E Arnold, August 17, 2023

AI and Non-State Actors

June 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“AI Weapons Need a Safe Back Door for Human Control” contains a couple of interesting statements.

The first is a quote from Hugh Durrant-Whyte, director of the Centre for Translational Data Science at the University of Sydney. He allegedly said:

China is investing arguably twice as much as everyone else put together. We need to recognize that it genuinely has gone to town. If you look at the payments, if you look at the number of publications, if you look at the companies that are involved, it is quite significant. And yet, it’s important to point out that the US is still dominant in this area.

For me, the important point is the investment gap. Perhaps the US should be more aggressive in identifying and funding promising smart software companies?

The second statement which caught my attention was:

James Black, assistant director of defense and security research group RAND Europe, warned that non-state actors could lead in the proliferation of AI-enhanced weapons systems. “A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware…

Several observations:

  1. Smart software ups the ante in modern warfare, intelligence, and law enforcement activities
  2. The smart software technology has been released into the wild. As a result, bad actors have access to advanced tools
  3. The investment gap is important but the need for skilled smart software engineers, mathematicians, and support personnel is critical in the US. University research departments are, in my opinion, less and less productive. The concentration of research in the hands of a few large publicly traded companies suggests that military, intelligence, and law enforcement priorities will be ignored.

Net net: China, personnel, and institution biases require attention by senior officials. These issues are not fooling around with Twitter scale. More is at stake. Urgent action is needed, which may be uncomfortable for fans of TikTok and expensive dinners in Washington, DC.

Stephen E Arnold, June 16, 2023

AI Legislation: Can the US Regulate What It Does Not Understand Like a Dull Normal Student?

April 20, 2023

I read an essay by publishing and technology luminary Tim O’Reilly. If you don’t know the individual, you may recognize the distinctive art used on many of his books. Here’s what I call the parrot book’s cover:


You can get a copy at this link.

The essay to which I referred in the first sentence of this post is “You Can’t Regulate What You Don’t Understand.” The subtitle of the write up is “Or, Why AI Regulations Should Begin with Mandated Disclosures.” The idea is an interesting one.

Here’s a passage I found worth circling:

But if we are to create GAAP for AI, there is a lesson to be learned from the evolution of GAAP itself. The systems of accounting that we take for granted today and use to hold companies accountable were originally developed by medieval merchants for their own use. They were not imposed from without, but were adopted because they allowed merchants to track and manage their own trading ventures. They are universally used by businesses today for the same reason.

The idea is that those without firsthand knowledge of something cannot make effective regulations.

The essay makes it clear that government regulators may be better off:

formalizing and requiring detailed disclosure about the measurement and control methods already used by those developing and operating advanced AI systems. [Emphasis in the original.]

The essay states:

Companies creating advanced AI should work together to formulate a comprehensive set of operating metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.
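What might such “operating metrics” look like in practice? Here is a minimal, purely hypothetical sketch, assuming a structured report an AI developer files with a regulator on a fixed schedule. Every field name, value, and the AIDisclosureReport class itself are invented for illustration and are not drawn from the O’Reilly essay:

```python
# Hypothetical sketch of a periodic AI "operating metrics" disclosure record.
# Nothing here is a real reporting standard; the fields are illustrative only.

from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class AIDisclosureReport:
    organization: str
    model_name: str
    reporting_period_end: date
    training_compute_flops: float   # total compute used in training (assumed metric)
    eval_benchmarks: dict           # benchmark name -> score (assumed metric)
    red_team_findings_open: int     # unresolved safety findings (assumed metric)
    incident_count: int             # user-facing failures this period (assumed metric)

report = AIDisclosureReport(
    organization="Example AI Lab",
    model_name="example-model-1",
    reporting_period_end=date(2023, 6, 30),
    training_compute_flops=3.1e24,
    eval_benchmarks={"toxicity_eval": 0.02, "factuality_eval": 0.81},
    red_team_findings_open=4,
    incident_count=1,
)

# Serialize the record the way a filing portal might expect it.
print(json.dumps(asdict(report), default=str, indent=2))
```

The point of the toy record is simply that “disclosure first” means agreeing on fields like these before arguing about pauses or bans.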

The conclusion is warranted by the arguments offered in the essay:

We shouldn’t wait to regulate these systems until they have run amok. But nor should regulators overreact to AI alarmism in the press. Regulations should first focus on disclosure of current monitoring and best practices. In that way, companies, regulators, and guardians of the public interest can learn together how these systems work, how best they can be managed, and what the systemic risks really might be.

My thought is that it may be useful to look at what generalities and self-regulation deliver in real life. As examples, I would point out:

  1. The report “Independent Oversight of the Auditing Professionals: Lessons from US History.” To keep it short and sweet: Self regulation has failed. I will leave you to work through the somewhat academic argument. I have burrowed through the document and largely agree with the conclusion.
  2. The US Securities & Exchange Commission’s decision to accept $1.1 billion in penalties as a result of 16 Wall Street firms’ failure to comply with record keeping requirements.
  3. The hollowness of the points set forth in “The Role of Self-Regulation in the Cryptocurrency Industry: Where Do We Go from Here?” in the wake of the Sam Bankman Fried FTX problem.
  4. The MBA-infused “ethical compass” of outfits operating with a McKinsey-type of pivot point?

My view is that the potential payoff from pushing forward with smart software is sufficient incentive to create a Wild West, anything-goes environment. Those companies with the most to gain and the resources to win at any cost can overwhelm US government professionals with flights of legal eagles.

With innovations in smart software arriving quickly, possibly as quickly as new Web pages in the early days of the Internet, firms that don’t move fast, act expediently, and push toward autonomous artificial intelligence will be unable to catch up with firms that move with alacrity.

Net net: No regulation, imposed or self-generated, will alter the rocket launch of new services. The US economy is not set up to encourage snail-speed innovation. The objective is met by generating money. Money, not guard rails, common sense, or actions which harm a company’s self-interest, makes the system work… for some. Losers are the exhaust from an economic machine. One doesn’t drive a Model T Ford today; those who can, drive a Tesla Plaid or a McLaren. The “pet” is a French bulldog, not a parrot.

Stephen E Arnold, April 20, 2023

Google, Does Quantum Supremacy Imply That Former Staff Grouse in Public?

April 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am not sure if this story is spot on. I am writing about “Report: A Google AI Researcher Resigned after Learning Google’s Bard Uses Data from ChatGPT.” I am skeptical because today is All Fools’ Day. Being careful is sometimes a useful policy. An exception might be when a certain online advertising company is losing bigly to the marketing tactics of [a] Microsoft, the AI in Word and Azure Security outfit, [b] OpenAI and its little language model that could, and [c] Midjourney, which just rolled out its own camera with a chip called Bionzicle. (Is this perhaps pronounced “bio-cycle” like washing machine cycle or “bion zickle” like bio pickle? I go with the pickle sound; it seems appropriate.)

The cited article reports as actual factual real news:

ChatGPT AI is often accused of leveraging “stolen” data from websites and artists to build its AI models, but this is the first time another AI firm has been accused of stealing from ChatGPT.  ChatGPT is powering Bing Chat search features, owing to an exclusive contract between Microsoft and OpenAI. It’s something of a major coup, given that Bing leap-frogged long-time search powerhouse Google in adding AI to its setup first, leading to a dip in Google’s share price.

This is im port’ANT as the word is pronounced on a certain podcast.

More interesting to me is that recycled Silicon Valley type real news verifies this remarkable assertion as the knowledge output of a PROM’ inANT researcher, allegedly named Jacob Devlin. Mr. Devil has found his future at – wait for it – OpenAI. Wasn’t OpenAI the company that wanted to do good and save the planet and then discovered Microsoft backing, thirsty trapped AI investors, and the American way of wealth?

Net net: I wish I could say, April’s fool, but I can’t. I have an unsubstantiated hunch that Google’s governance relies on the whims of high school science club members arguing about what pizza topping to order after winning the local math competition. Did the team cheat? My goodness no. The team has an ethical compass modeled on the triangulations of William McCloundy or I.O.U. O’Brian, the fellow who sold the Brooklyn Bridge in the early 20th century.

Stephen E Arnold, April 5, 2023

FAA Software: Good Enough?

January 11, 2023

Is today’s software good enough? For many, the answer is, “Absolutely.” I read “The FAA Grounded Every Single Domestic Flight in the U.S. While It Fixed Its Computers.” The article states what many people in affected airports know:

The FAA alerted the public to a problem with the system at 6:29 a.m. ET on Twitter and announced that it had grounded flights at 7:19 a.m. ET. While the agency didn’t provide details on what had gone wrong with the system, known as NOTAM, Reuters reported that it had apparently stopped processing updated information. As explained by the FAA, pilots use the NOTAM system before they take off to learn about “closed runways, equipment outages, and other potential hazards along a flight route or at a location that could affect the flight.” As of 8:05 a.m. ET, there were 3,578 delays within, out, and into the U.S., according to flight-tracking website FlightAware.

NOTAM, for those not into government speak, means “Notice to Air Missions.”

Let’s go back in history. In the 1990s I think I was on the Board of the National Technical Information Service. One of our meetings was in a facility shared with the FAA. I wanted to move my rental car from the direct sunlight to a portion of the parking lot which would be shaded. I left the NTIS meeting, moved my vehicle, and entered through a side door. Guess what? I still remember my surprise when I was not asked for my admission key card. The door just opened and I was in an area which housed some FAA computer systems. I opened one of those doors and poked my nose in and saw no one. I shut the door, made sure it was locked, and returned to the NTIS meeting.

I recall thinking, “I hope these folks do software better than they do security.”

Today’s (January 11, 2023) FAA story reminded me that security procedures provide a glimpse of such technical aspects of a government agency as software. I had an engagement for the blue chip consulting firm for which I worked in the 1970s and early 1980s to observe air traffic control procedures and systems at one of the busy US airports. I noticed that incoming aircraft were monitored by printing out tail numbers and details of the flight, using a rubber band to affix these data to wooden blocks which were stacked in a holder on the air traffic control tower’s wall. A controller knew the next flight to handle by taking the bottommost block, using the data, and putting the emptied block back in a box on a table near the bowl of antacid tablets.

I recall that discussions were held about upgrading certain US government systems; for example, the IRS and the FAA computer systems. I am not sure if these systems were upgraded. My hunch is that legacy machines are still chugging along in facilities which hopefully are more secure than the door to the building referenced above.

My point is that “good enough” or “close enough for government work” is not a new concept. Many administrations have tried to address legacy systems and their propensity to [a] fail, like the Social Security Administration’s mainframe-to-Web system, [b] not work as advertised; that is, output data that just doesn’t jibe with other records of certain activities (sorry, I am not comfortable naming that agency), or [c] become unstable because funds for training staff, money for qualified contractors, or investments in infrastructure to keep the as-is systems working in an acceptable manner are lacking.

I think someone other than a 78 year old should be thinking about the issue of technology infrastructure that does not fail the way Southwest Airlines’ systems or the FAA’s system did.

Why are these core systems failing? Here’s my list of thoughts. Note: Some of these will make anyone between 45 and 23 unhappy. Here goes:

  1. The people running agencies and their technology units don’t know what to do
  2. The consultants hired to do the work agency personnel should do don’t deliver top quality work. The objective may be a scope change or a new contract, not a healthy system
  3. The programmers don’t know what to do with IBM-type mainframe systems or other legacy hardware. These are not zippy mobile phones which run apps. These are specialized systems whose quirks and characteristics often have to be learned with hands on interaction. YouTube videos or a TikTok instructional video won’t do the job.

Net net: Failures are baked into commercial and government systems. The simultaneous failure of several core systems will generate more than annoyed airline passengers. Time to shift from “good enough” to “do the job right the first time.” See. I told you I would annoy some people with my observations. Well, reality is different from assuming smart software will write itself.

Stephen E Arnold, January 11, 2023

Ah, Emergent Behavior: Tough to Predict, Right?

December 28, 2022

Super manager Jeff (I manage people well) Dean and a gam of Googlers published “Emergent Abilities of Large Language Models.” The idea is that those smart software systems informed by ingesting large volumes of content demonstrate behaviors the developers did not expect. Surprise!

Also, Google published a slightly less turgid discussion of the paper, which has 16 authors, in a blog post called “Characterizing Emergent Phenomena in Large Language Models.” This post went live in November 2022, but the time required to grind through the 30-page “technical” excursion was not available to me until this weekend. (Hey, being retired and working on my new lectures for 2023 is time-consuming. Plus, disentangling Google’s techy content marketing from the often tough-to-figure-out text and tiny graphs is not easy for my 78 year old eyes.)


Helpful, right? Source: https://openreview.net/pdf?id=yzkSU5zdwD

In a nutshell, the smart software does things the wizards had not anticipated. According to the blog post:

The existence of emergent abilities has a range of implications. For example, because emergent few-shot prompted abilities and strategies are not explicitly encoded in pre-training, researchers may not know the full scope of few-shot prompted abilities of current language models. Moreover, the emergence of new abilities as a function of model scale raises the question of whether further scaling will potentially endow even larger models with new emergent abilities. Identifying emergent abilities in large language models is a first step in understanding such phenomena and their potential impact on future model capabilities. Why does scaling unlock emergent abilities? Because computational resources are expensive, can emergent abilities be unlocked via other methods without increased scaling (e.g., better model architectures or training techniques)? Will new real-world applications of language models become unlocked when certain abilities emerge? Analyzing and understanding the behaviors of language models, including emergent behaviors that arise from scaling, is an important research question as the field of NLP continues to grow.
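For readers who want a concrete picture of what “emergent” means in this context, here is a minimal sketch, assuming the characterization the Googlers use: performance on a task stays near random chance for smaller models and then jumps once scale crosses some threshold. The function name, the thresholds, and the benchmark numbers below are all invented for illustration:

```python
# Toy illustration of "emergence": accuracy is flat near chance for small
# models, then jumps abruptly at large scale. All numbers are hypothetical.

from typing import List, Tuple

def is_emergent(results: List[Tuple[float, float]],
                chance: float = 0.25,
                margin: float = 0.10) -> bool:
    """results: (parameters_in_billions, accuracy) pairs, sorted by scale.

    Flags an ability as 'emergent' if the smaller models score near chance
    while the largest model clearly exceeds it -- a crude stand-in for the
    scale-dependent jumps described in the paper.
    """
    small = [acc for _, acc in results[:-1]]
    largest_acc = results[-1][1]
    near_chance = all(abs(acc - chance) <= margin for acc in small)
    return near_chance and largest_acc > chance + 2 * margin

# Hypothetical multiple-choice benchmark scores (chance = 0.25):
scores = [(0.4, 0.24), (8.0, 0.26), (62.0, 0.27), (540.0, 0.61)]
print(is_emergent(scores))  # True: flat near chance, then a jump at scale
```

The point of the toy check is that the ability is invisible at small scale and appears abruptly at large scale, which is exactly why the authors say researchers may not know the full scope of a model’s few-shot abilities.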

The write up emulates other Googlers’ technical write ups. I noted several facets of the topic not included in the OpenReview.net version of the paper. (Note: Snag this document now because many Google papers, particularly research papers, have a tendency to become unfindable for the casual online search expert.)

First, emergent behavior means humans were able to observe unexpected outputs or actions. The question is, “What less obvious emergent behaviors are operating within the code edifice?” Is it possible the wizards are blind to more substantive but subtle processes? Could some of these processes be negative? If so, which are they, and how does the observer identify them before an undesirable or harmful outcome is discovered?

Second, emergent behavior, in my view of bio-emulating systems, evokes the metaphor of cancer. If we assume the emergent behavior is cancerous, what’s the mechanism for communicating these behaviors to others working in the field in a responsible way? Writing a 30 page technical paper takes time, even for super duper Googlers. Perhaps the “emergent” angle requires a bit more pedal to the metal?

Third, how does the emergent behavior fit into the Google plan to make its approach to smart software the de facto standard? There is big money at stake because more and more organizations will want smart software. But will these outfits sign up with a system that demonstrates what might be called “off the reservation” behavior? One example is the use of Google methods for war fighting. Will smart software write a sympathy note to those affected by an emergent behavior or just a plain incorrect answer buried in a subsystem?

Net net: I discuss emergent behavior in my lecture about shadow online services. I cover what the software does and what use humans make of these little understood yet rapidly diffusing methods.

Stephen E Arnold, December 28, 2022

Fried Dorsey: Soggy, Not Crispy

December 15, 2022

I noted an odd shift in Big Tech acceptance of responsibility. For now, I will call this the Fried Dorsey Anomaly.

First, CNBC reported on a letter the MIT graduate and top dog at FTX wrote to employees. The article has the snappy title “Here’s the Apology Letter Sam Bankman-Fried Sent to FTX Employees: When Sh—y Things Happen to Us, We All Tend to Make Irrational Decisions.” The logic in this victim argument and the use of a categorical affirmative are probably interesting to someone who loved Psychology 101. Here’s the sentence which caught my eye:

“I lost track of the most important things in the commotion of company growth. I care deeply about you all, and you were my family, and I’m sorry…”

This is the “Fried” side of making or not making certain decisions. Then there’s the apology.

Now let’s shift to the Dorsey facet of the anomaly. The estimable Wall Street Journal published “Dorsey Calls Twitter Controls Too Great.” The write up appeared in the December 15, 2022, dead tree version of the Murdoch output. The online, paywalled article is at this link.  Here’s the statement I noted:

If you want to blame, direct it at me and my actions.

These quotes are somewhat different from the “Senator, thank you for the question” and “We will improve…” statements from what we can think of as the pre-Covid era of Big Tech.

Now we have individuals accepting blame and demonstrating a soupçon of remorse, regret, or some related mental posture.

Thus, the post-Covid era of Big Tech is now into mea culpa suggestions and acceptance of blame.

Will the Fried Dorsey Anomaly persist? Will the tactic work as the penitents anticipate? Wow, I am convinced already.

Stephen E Arnold, December 15, 2022
