Smart Software: It May Never Forget

November 13, 2024

A recent paper challenges the big dogs of AI, asking, “Does Your LLM Truly Unlearn? An Embarrassingly Simple Approach to Recover Unlearned Knowledge.” The study was performed by a team of researchers from Penn State, Harvard, and Amazon and published on the research platform arXiv. True or false, it is a nifty poke in the eye for the likes of OpenAI, Google, Meta, and Microsoft, who may have overlooked the obvious. The abstract explains:

“Large language models (LLMs) have shown remarkable proficiency in generating text, benefiting from extensive training on vast textual corpora. However, LLMs may also acquire unwanted behaviors from the diverse and sensitive nature of their training data, which can include copyrighted and private content. Machine unlearning has been introduced as a viable solution to remove the influence of such problematic content without the need for costly and time-consuming retraining. This process aims to erase specific knowledge from LLMs while preserving as much model utility as possible.”

But AI firms may be fooling themselves about this method. We learn:

“Despite the effectiveness of current unlearning methods, little attention has been given to whether existing unlearning methods for LLMs truly achieve forgetting or merely hide the knowledge, which current unlearning benchmarks fail to detect. This paper reveals that applying quantization to models that have undergone unlearning can restore the ‘forgotten’ information.”

Oops. The team found as much as 83% of data thought forgotten was still there, lurking in the shadows. The paper offers an explanation for the problem and suggestions to mitigate it. The abstract concludes:

“Altogether, our study underscores a major failure in existing unlearning methods for LLMs, strongly advocating for more comprehensive and robust strategies to ensure authentic unlearning without compromising model utility.”
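
To make the quantization point concrete, below is a minimal sketch of the kind of before-and-after check the paper describes: query an unlearned model at full precision, then load the same weights in 4-bit and query again. The model name and prompt are hypothetical placeholders, and the Hugging Face transformers/bitsandbytes route shown here is just one common way to quantize; consult the paper for the actual experimental setup.

```python
# Hedged sketch: does quantization resurface "unlearned" knowledge?
# The checkpoint name and prompt are hypothetical, not from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "example-org/llm-after-unlearning"  # hypothetical unlearned checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
prompt = "Recite the first line of the supposedly forgotten passage:"

def ask(model):
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    return tok.decode(out[0], skip_special_tokens=True)

# Full-precision unlearned model: should fail to recall the removed content.
full = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
print("fp16:", ask(full))

# Same weights loaded in 4-bit: per the paper, the "forgotten" text may return.
quant = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
print("int4:", ask(quant))
```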

See the paper for all the technical details. Will the big tech firms take the researchers’ advice and improve their products? Or will they continue letting their investors and marketing departments lead them by the nose?

Cynthia Murrell, November 13, 2024

MBAs Gone Wild: Assertions, Animation & Antics

August 5, 2024

Author’s note: Poor WordPress in the Safari browser is having a very bad day. Quotes from the cited McKinsey document appear against a weird blue background. My cheerful little dinosaur disappeared. And I could not figure out how to claim that AI did not help me with this essay. Just a heads up.

Holed up in rural Illinois, I had time to read the mid-July McKinsey & Company document “McKinsey Technology Trends Outlook 2024.” Imagine a group of well-groomed, top-flight, smooth-talking “experts” with degrees from fancy schools filming one of those MBA group brainstorming sessions. Take the transcript, add motion graphics, and give audio sweetening to hot buzzwords. I think this would go viral among would-be consultants, clients facing the cloud of unknowing about the future, and those who manifest the Peter Principle. Viral winner! From my point of view, smart software is going to be integrated into most technologies and is, therefore, the trend. People may lose money, but applied AI is going to be with most companies for a long, long time.

The report boils down the current business climate to a few factors. Yes, when faced with exceptionally complex problems, boil those suckers down. Render them so only the tasty sales part remains. Thus, today’s business challenges become:

Generative AI (gen AI) has been a standout trend since 2022, with the extraordinary uptick in interest and investment in this technology unlocking innovative possibilities across interconnected trends such as robotics and immersive reality. While the macroeconomic environment with elevated interest rates has affected equity capital investment and hiring, underlying indicators—including optimism, innovation, and longer-term talent needs—reflect a positive long-term trajectory in the 15 technology trends we analyzed.

The data for the report come from inputs from about 100 people, not counting the people who converted the inputs into the live-action report. Move your mouse from one of the 15 “trends” to another. You will see the graphic display colored balls of different sizes. Yep, tiny and tinier balls and a few big balls tossed in.

I don’t have the energy to take each trend and offer a comment. Please, navigate to the original document and review it at your leisure. I can, however, select three trends and offer an observation or two about this very tiny ball selection.

Before sharing those three trends, I want to provide some context. First, the data gathered appear to be subjective and similar to the dorm-room outputs of MBA students working on a group project. Second, there is no reference to the thought process itself, which matters when it is applied to a real-world problem like boosting sales for opioids. It is the thought process that leads to revenues from consulting that counts.

Source: https://www.youtube.com/watch?v=Dfv_tISYl8A
Image from the ENDEVR opioid video.

Third, McKinsey’s pool of 100 thought leaders seems fixated on two things:

gen AI and electrification and renewables.

But is that statement composed of three things? [1] AI, [2] electrification, and [3] renewables? Because AI is a greedy consumer of electricity, I think I can see some connection between AI and renewables, but the “electrification” I think about is President Roosevelt’s creation of the Rural Electrification Administration in 1935. Dinobabies can be such nitpickers.

Let’s tackle the electrification point before I get to the real subject of the report, AI in assorted forms and applications. When McKinsey talks about electrification and renewables, McKinsey means:

The electrification and renewables trend encompasses the entire energy production, storage, and distribution value chain. Technologies include renewable sources, such as solar and wind power; clean firm-energy sources, such as nuclear and hydrogen, sustainable fuels, and bioenergy; and energy storage and distribution solutions such as long-duration battery systems and smart grids. In 2019, the interest score for electrification and renewables was 0.52 on a scale from 0 to 1, where 0 is low and 1 is high; the innovation score was 0.29 on the same scale; the adoption score was 3 on a scale from 1 to 5, with 1 defined as “frontier innovation” and 5 defined as “fully scaled”; and the investment was 160 billion dollars. By 2023, the interest score was 0.73, the innovation score was 0.36, and the investment was 183 billion dollars. Job postings within this trend changed by 1 percent from 2022 to 2023.

Stop burning fossil fuels? Well, not quite. But the “save the whales” meme is embedded in the verbiage. Confused? That may be the point. What’s the fix? Hire McKinsey to help clarify your thinking.

AI plays the big gorilla in the monograph. The first expensive, hairy, yet promising aspect of smart software is replacing humans. The McKinsey report asserts:

Generative AI describes algorithms (such as ChatGPT) that take unstructured data as input (for example, natural language and images) to create new content, including audio, code, images, text, simulations, and videos. It can automate, augment, and accelerate work by tapping into unstructured mixed-modality data sets to generate new content in various forms.

Yep, smart software can produce reports like this one: Faster, cheaper, and good enough. Just think of the reports the team can do.

The third trend I want to address is digital trust and cyber security. Now the cyber crime world is a relatively specialized one. We know from the CrowdStrike misstep that experts in cyber security can wreak havoc on a global scale. Furthermore, we know that there are hundreds of cyber security outfits offering smart software, threat intelligence, and very specialized technical services to protect their clients. But McKinsey appears to imply that its band of 100 trend identifiers are hip to this. Here’s what the dorm-room brainstormers output:

The digital trust and cybersecurity trend encompasses the technologies behind trust architectures and digital identity, cybersecurity, and Web3. These technologies enable organizations to build, scale, and maintain the trust of stakeholders.

Okay.

I want to mention that other trends, ranging from blasting into space to software development, appear in the list. What strikes me as a bit of an oversight is that smart software is going to be woven into the fabric of the other trends. What? Well, software is going to surf on AI outputs. And big boy rockets, not the duds the Seattle outfit produces, use assorted smart algorithms to keep the system from burning up or exploding… most of the time. Not perfect, but better, faster, and cheaper than CalTech grads solving equations and rigging cybernetics with wire and a soldering iron.

Net net: This trend report is a sales document. Its purpose is to cause an organization familiar with McKinsey, and with its own shortcomings, to hire McKinsey to help out with these big problems. The data source is the dorm room. The analysts are cherry-picked. The tone is quasi-authoritative. I have no problem with marketing material. In fact, I don’t have a problem with the McKinsey-generated list of trends. That’s what McKinsey does. What the firm does not do is think about the downstream consequences of its recommendations. How do I know this? Returning from a lunch with some friends in rural Illinois, I spotted two opioid addicts doing the droop.

Stephen E Arnold, August 5, 2024

AI: Big Ideas and Bigger Challenges for the Next Quarter Century. Maybe, Maybe Not

February 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting ArXiv.org paper with a good title: “Ten Hard Problems in Artificial Intelligence We Must Get Right.” The topic is one which will interest some policy makers, a number of AI researchers, and the “experts” in machine learning, artificial intelligence, and smart software.

The structure of the paper is, in my opinion, a three-legged stool analysis designed to support the weight of AI optimists. The first part of the paper is a compressed historical review of the AI journey. Diagrams, tables, and charts capture the direction in which AI “deep learning” has traveled. I am no expert in what has become the next big thing, but the surprising point in the historical review is that 2010 is pegged as the start of the run-up to the 2016 milestone the authors call “the large-scale era.” That label is interesting for two reasons. First, I recall that some intelware vendors were in the AI game before 2010. And, second, the use of the phrase “large scale” defines a reality in which small outfits are unlikely to succeed without massive amounts of money.

The second leg of the stool is the identification of the “hard problems” and a discussion of each. Research data and illustrations bring each problem to the reader’s attention. I don’t want to get snagged in the plagiarism swamp which has captured many academics, wives of billionaires, and a few journalists. My approach will be to boil down the 10 problems to a short phrase and a reminder to you, gentle reader, that you should read the paper yourself. Here is my version of the 10 “hard problems” which the authors seem to suggest will be or must be solved in 25 years:

  1. Humans will have extended AI by 2050
  2. Humans will have solved problems associated with AI safety, capability, and output accuracy
  3. AI systems will be safe, controlled, and aligned by 2050
  4. AI will make contributions in many fields; for example, mathematics by 2050
  5. AI’s economic impact will be managed effectively by 2050
  6. Use of AI will be globalized by 2050
  7. AI will be used in a responsible way by 2050
  8. Risks associated with AI will be managed effectively by 2050
  9. Humans will have adapted their institutions to AI by 2050
  10. Humans will have addressed what it means to be “human” by 2050

Many years ago I worked for a blue-chip consulting firm. I participated in a number of big-idea projects. These ranged from technology and R&D investment to new product development and the global economy. In our for-fee reports we did include a look at what we called the “horizon.” The firm had its own typographical signature for this portion of a report. I recall learning the drill in the firm’s “charm school” (a special training program to make sure new hires knew the style, approach, and ground rules for remaining employed at that blue-chip firm). We kept the horizon tight; that is, talking about the future was typically in the six to 12 month range. Nosing out 25 years was a walk into a mine field. My boss, as I recall, told me, “We don’t do science fiction.”

The smart robot is informing the philosopher that he is free to find his future elsewhere. The date of the image is 2025, right before the new year holiday. Thanks, MidJourney. Good enough.

The third leg of the stool is the academic impedimenta. To be specific, the paper is 90 pages in length, of which 30 present the argument. The remaining 60 pages present:

  • Traditional footnotes, about 35 pages containing 607 citations
  • An “Electronic Supplement” presenting eight pages of annexes with text, charts, and graphs
  • Footnotes to the “Electronic Supplement” requiring another 10 pages for the additional 174 footnotes.

I want to offer several observations, and I do not want them to be less than constructive or in any way like the letter one of my professors received: he was treated harshly in Letters to the Editor over an article he published about Chaucer, and he described that fateful letter as “mean spirited.”

  1. The paper makes clear that mankind has some work to do in the next 25 years. The “problems” the paper presents are difficult ones because they touch upon the fabric of social existence. Consider the application of AI to war. I think this aspect of AI may be one to warrant a bullet on AI’s hit parade.
  2. Humans have to resolve issues of automated systems consuming verifiable information, synthetic data, and purpose-built disinformation so that smart software does not do things at speed and behind the scenes. Do those working to resolve the 10 challenges have an ethical compass, and if so, what does “ethics” mean in the context of at-scale AI?
  3. Social institutions are under stress. A number of organizations and nation-states operate as dictators. One Central American country has a rock star dictator, but what about the rock star dictators running techno-feudal companies in the US? What governance structures will be crafted by 2050 to shape today’s technology juggernaut?

To sum up, I think the authors have tackled a difficult problem. I commend their effort. My thought is that any message of optimism about AI will be hard-pressed to point to one of the 10 challenges and say, “We have this covered.” I liked the write up. I think college students tasked with writing about the social implications of AI will find the paper useful. It provides much of the research a fresh young mind requires to write a paper, possibly a thesis. For me, the paper is a reminder of the disconnect between applied technology and the appallingly inefficient, convenience-embracing humans who are ensnared in the smart software.

I am a dinobaby, and let me tell you, “I am glad I am old.” With AI struggling with go-fast and regulators waffling about go-slow, humankind has quite a bit of social system tinkering to do by 2050 if the authors of the paper have analyzed AI correctly. Yep, I am delighted I am old, really old.

Stephen E Arnold, February 13, 2024

Scale Fail: Define Scale for Tech Giants, Not Residents of Never Never Land

December 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “Scale Is a Trap.” The essay presents an interesting point of view, scale from the viewpoint of a resident of Never Never Land. The write up states:

But I’m pretty convinced the reason these sites [Vice, Buzzfeed, and other media outfits] have struggled to meet the moment is because the model under which they were built — eyeballs at all cost, built for social media and Google search results — is no longer functional. We can blame a lot of things for this, such as brand safety and having to work through perhaps the most aggressive commercial gatekeepers that the world has ever seen. But I think the truth is, after seeing how well it worked for the tech industry, we made a bet on scale — and then watched that bet fail over and over again.

The problem is that the focus is on media companies designed to surf on the free megaphones like Twitter and the money from Google’s pre-threat ad programs. 

However, knowledge is tough to scale. The firms which can convert knowledge into what William James called “cash value” charge for professional services. Some content is free like wild and crazy white papers. But the “good stuff” is for paying clients.

Outfits which want to find enough subscribers willing to pay the necessary money to read articles face a difficult business to scale. I find it interesting that Substack is accepting some content sure to attract some interesting readers. How much will these folks pay? Maybe a lot.

But scale in information is not what many clever writers or traditional publishers and authors can achieve. What happens when a person writes a best seller? The publisher demands more books, and the result? Subsequent books which are not what the original was.

Whom does scale serve? Scale delivers power and payoff to the organizations which can develop products and services that sell to a large number of people who want a deal. Scale at a blue chip consulting firm means selling to the biggest firms and the organizations with the deepest pockets.

But the scale of a McKinsey-type firm is different from the scale at an outfit like Microsoft or Google.

What is the definition of scale for a big outfit? The way I would explain what the technology firms mean when scale is kicked around at an artificial intelligence conference is “big money, big infrastructure, big services, and big brains.” By definition, individuals and smaller firms cannot deliver.

Thus, the notion of appropriate scale means what the cited essay calls a “niche.” The problems and challenges include:

  • Getting the cash to find, cultivate, and grow people who will pay enough to keep the knowledge enterprise afloat
  • Finding other people to create the knowledge value
  • Protecting the idea space from carpetbaggers
  • Remaining relevant because knowledge has a shelf life, and it takes time to grow knowledge or acquire new knowledge.

To sum up, the essay is more about how journalists are going to have to adapt to a changing world. The problem is that scale is a characteristic of the old school publishing outfits which have been ill-suited to the stress of adapting to a rapidly changing world.

Writers are not blue chip consultants. Many just think they are.

Stephen E Arnold, December 29, 2023

A Dinobaby Misses Out on the Hot Searches of 2023

December 28, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I looked at “Year in Search 2023.” I was surprised at how out of the flow of consumer information I was. “Out of the flow” does not begin to capture my reaction to the lists of news topics, dead people, and songs. Do you know much about Bizarrap? I don’t. More to the point, I have never heard of this obviously world-class musician.

Several observations:

First, when people tell me that Google search is great, I have to recalibrate my internal yardsticks to embrace queries for entities unrelated to my microcosm of information. When I assert that Google search sucks, I am looking for information absolutely positively irrelevant to those seeking insight into most of the Google top of the search charts. No wonder Google sucks for me. Google is keeping pace with maps of sports stadia.

Second, as I reviewed these top searches, I asked myself, “What’s the correlation between advertisers’ spend and the results on these lists?” My idea is that a weird quantum linkage exists in a world inhabited by incentivized programmers, advertisers, and the individuals who want information about shirts. Is the game rigged? My hunch is, “Yep.” Spooky action at a distance, I suppose.

Third, substantive topics are rare birds on these lists. Who is looking for information about artificial intelligence, precision and recall in search, or new approaches to solving matrix math problems? The answer, if the Google data are accurate and not a come-on to advertisers, is almost no one.

As a dinobaby, I am going to feel more comfortable in my isolated chamber in a cave of what I find interesting. For 2024, I have steeled myself to exist without any interest in Ginny & Georgia, FIFTY FIFTY, or papeda.

I like being a dinobaby. I really do.

Stephen E Arnold, December 28, 2023

Intel Inference: A CUDA Killer? Some Have Hope

December 15, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Intel is embracing an “inference” approach. Why? Maybe it will irritate fewer legal eagles? Maybe it is a marketing differentiator? Maybe Intel knows how to make probability less of a “problem”?

Brilliant, right? The answers to these questions are supposed to be explained in “Intel CEO Attacks Nvidia on AI: The Entire Industry Is Motivated to Eliminate the CUDA Market.” The Tom’s Hardware report uses the “attack” angle as a hook. Intel is thinking differently. The company has not had the buzz of nVidia or OpenAI. Plus, no horse metaphors.

Marketing professionals explain to engineers what must be designed, tested, and delivered in 2024. The engineers are skeptical. The marketing team is confident that their TikTok strategy will be a winner. Thanks, MSFT Copilot. Good enough.

What’s an inference? According to Bing (the stupid version), inference is “a conclusion reached on the basis of evidence and reasoning.” But in mathematics, inference has a slightly different denotation; to wit, this explanation from Britannica:

Inference, in statistics, the process of drawing conclusions about a parameter one is seeking to measure or estimate. Often scientists have many measurements of an object—say, the mass of an electron—and wish to choose the best measure. One principal approach of statistical inference is Bayesian estimation, which incorporates reasonable expectations or prior judgments (perhaps based on previous studies), as well as new observations or experimental results. Another method is the likelihood approach, in which “prior probabilities” are eschewed in favor of calculating a value of the parameter that would be most “likely” to produce the observed distribution of experimental outcomes. In parametric inference, a particular mathematical form of the distribution function is assumed. Nonparametric inference avoids this assumption and is used to estimate parameter values of an unknown distribution having an unknown functional form.
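
As a toy illustration of the two approaches Britannica names, consider estimating a coin’s heads probability from ten flips. The numbers below are made up for the example; the point is only the contrast between a likelihood estimate and a Bayesian one.

```python
# Toy contrast: likelihood vs. Bayesian estimation of a coin's bias.
heads, flips = 7, 10

# Likelihood approach: the maximum-likelihood estimate is the sample mean.
mle = heads / flips  # 0.700

# Bayesian estimation: fold in a prior judgment. With a Beta(a, b) prior on
# the bias, the posterior is Beta(a + heads, b + tails); use its mean.
a, b = 2, 2  # mild prior expectation that the coin is roughly fair
posterior_mean = (a + heads) / (a + b + flips)  # 9 / 14 ≈ 0.643

print(f"MLE: {mle:.3f}  Bayesian posterior mean: {posterior_mean:.3f}")
```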

Now what does Tom’s Hardware present as Intel’s vision for its “to be” chips? I have put several segments together for the purposes of my blog post:

"You know, the entire industry is motivated to eliminate the CUDA market.  [Gelsinger, the Intel CEO] said. He cited examples such as MLIR, Google, and OpenAI, suggesting that they are moving to a "Pythonic programming layer" to make AI training more open. "We think of the CUDA moat as shallow and small," Gelsinger went on. "Because the industry is motivated to bring a broader set of technologies for broad training, innovation, data science, et cetera." But Intel isn’t relying just on training. Instead, it thinks inference is the way to go. "As inferencing occurs, hey, once you’ve trained the model… There is no CUDA dependency," Gelsinger continued. "It’s all about, can you run that model well?"

CUDA is definitely the target. “CUDA” refers to nVidia’s parallel computing platform and programming model … With more than 20 million downloads to date, CUDA helps developers speed up their applications by harnessing the power of GPU accelerators.

Tom’s Hardware raises a question:

It’s a bold strategy, and Gelsinger appeared confident as he led his team through presentations today. Can he truly take on CUDA? Only time will tell as applications for the chips Intel launched today — and that his competitors are also working on — become more widespread.

Of course. With content marketing, PR professionals, and a definite need to generate some buzz in an OpenAI-dominated world, Intel will be capturing some attention. The hard part will be making sufficiently robust sales to show that an ageing company still can compete.

Stephen E Arnold, December 15, 2023

Does the NY Times Want Celebrity Journalists?

September 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “In the AI Age, The New York Times Wants Reporters to Tell Readers Who They Are.” The subtitle is definitely Google pet food:

The paper is rolling out enhanced bios as “part of our larger mission to be more transparent,” says the Times’ Edmund Lee, and “as generative AI begins to creep into the media landscape.”

The write up states:

The idea behind the “enhanced bios,” as they are being called, is to put more of a face and a name to reporters, so as to foster greater trust with readers and, as more news elsewhere is written by generative AI, emphasize the paper’s human-led reporting.

“I am sorry, mom. I cannot reveal my sources. You have to trust me when I say, ‘Gran is addicted to trank.’” Thanks, MidJourney. Carry on with the gradient descent.

I have a modest proposal: Why not identify sources, financial tie ups of reporters, and provide links to the Web pages and LinkedIn biographies of these luminaries?

I know. I know. Don’t be silly. Okay, I won’t be. But trust begins with verifiable facts and sources as that dinobaby Walter Isaacson told Lex Fridman a few days ago. And Mr. Isaacson provides sources too. How old fashioned.

Stephen E Arnold, September 28, 2023

Free Employees? Yep, Smart Software Saves Jobs Too

May 31, 2023

If you want a “free employee,” navigate to “100+ Tech Roles Prompt Templates.” The service offers:

your secret weapon for unleashing the full potential of AI in any tech role. Boost productivity, streamline communication, and empower your AI to excel in any professional setting.

The templates embrace:

  • C-Level Roles
  • Programming Roles
  • Cybersecurity Roles
  • AI Roles
  • Administrative Roles
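
For readers who have not seen one, here is a hedged sketch of what a role-based prompt template might look like; the role, fields, and wording are invented for illustration, not copied from the product.

```python
# Hypothetical role-based prompt template; the role, fields, and wording are
# illustrative, not taken from "100+ Tech Roles Prompt Templates."
CISO_TEMPLATE = """\
You are an experienced Chief Information Security Officer.
Company profile: {company_profile}
Task: {task}
Constraints: reference relevant frameworks (for example, NIST CSF) and flag
any recommendation that requires legal review.
"""

prompt = CISO_TEMPLATE.format(
    company_profile="mid-size fintech, 400 employees, cloud-hosted",
    task="Draft a 90-day incident-response improvement plan.",
)
print(prompt)
```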

How will an MBA make use of this type of capability? Here are a few thoughts:

First, terminate unproductive humans with software. The action will save time and reduce (allegedly) some costs.

Second, trim managerial staff who handle hiring, health benefits (ugh!), and administrative work related to humans.

Third, modify one’s own job description to yield more free time in which to enjoy the bonus pay the savvy MBA will receive for making the technical unit more productive.

Fourth, apply the concept to the company’s legal department, marketing department, and project management unit.

Paradise.

Stephen E Arnold, May 31, 2023

What Is the Byproduct of a Farm, Content Farm, That Is?

May 31, 2023

Think about the glorious spring morning spent in a feed lot in Oklahoma. Yeah, that is an unforgettable experience. The sights, the sounds, and — well — the smell.

I read “Google’s AI Search Feels Like a Content Farm on Steroids.” Zoom. Back to the feed lot or in my case, the Poland China pen in Farmington, Illinois. Special.

The write up is about the Google and its smart software. I underlined this passage:

…with its LLM (Large Language Model) doing all the writing, Google looks like the world’s biggest content farm, one powered by robotic farmers who can produce an infinite number of custom articles in real-time.

What are the outputs of Google’s smart software search daemons? Bits and bytes, clicks and cash, and perhaps the digital stench of a content farm byproduct?

Beyond Search loves the Google and all things Google, even false allegations of stealing intellectual property and statements before Congress which include the words trust, responsibility, and users.

It will come as no surprise that Beyond Search absolutely loves content farms’ primary and secondary outputs.

Stephen E Arnold, June 1, 2023

Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?

May 30, 2023

I wrote about Sam AI-man’s explanation that he wants regulation. I pointed out that his definition of regulate means leaving OpenAI free to do whatever it can to ace out the Google and a handful of other big outfits chasing the pot of gold at the end of the AI rainbow.

I just learned from the self-defined trusted news source (Thomson Reuters) that Mr. AI-man has no plans to leave Europe. I understand. “Leave” does not mean leave as in depart, say adios, or hit the road, Jack.

“ChatGPT Maker OpenAI Says Has No Plan to Leave Europe” reports:

OpenAI has no plans to leave Europe, CEO Sam Altman said on Friday, reversing a threat made earlier this week to leave the region if it becomes too hard to comply with upcoming laws on artificial intelligence.

I am not confused. Just as the company’s name OpenAI does not mean “open,” the AI-man’s communication skills are based on the probabilities of certain words following another word. Got it. The slippery fish with AI-man is that the definitions of the words in his mind do not regress to the mean. The words — like those of some other notable Silicon Valley high tech giants — reflect the deeper machinations of a machine-assisted superior intelligence.

Translated, this means: Regulate means shaft our competitors. Leave means stay. Regulate means let those OpenAI sheep run through the drinking water of free range cattle.

The trusted write up says:

Reacting to Altman’s tweet on Friday, Dutch MEP Kim van Sparrentak, who has worked closely on the AI draft rules, told Reuters she and her colleagues must stand up to pressure from tech companies…. Voluntary codes of conduct are not the European way.

What does this statement mean to AI-man?

I would suggest from my temporary office in clear thinking Washington, DC, not too much.

I look forward to the next hearing from AI-man. That will be equally easy to understand.

Stephen E Arnold, May 30, 2023
