AI: Meh.
March 19, 2025
It seems consumers can see right through the AI hype. TechRadar reports, “New Survey Suggests the Vast Majority of iPhone and Samsung Galaxy Users Find AI Useless—and I’m Not Surprised.” Both Apple and Samsung have been pushing AI onto their smartphone users. But, according to a recent survey, 73% of iPhone users and 87% of Galaxy users respond to the innovations with a resounding “meh.” Even more would refuse to pay for continued access to the AI tools. Furthermore, very few would switch platforms to get better AI features: just 16.8% of iPhone users and 9.7% of Galaxy users. In fact, notes writer Jamie Richards, fewer than half of users report even trying the AI features. He writes:
“I have some theories about what could be driving this apathy. The first centers on ethical concerns about AI. It’s no secret that AI is an environmental catastrophe in motion, consuming massive amounts of water and emitting huge levels of CO2, so greener folks may opt to give it a miss. There’s also the issue of AI and human creativity – TechRadar’s Editorial Associate Rowan Davies recently wrote of a nascent ‘cultural genocide’ as a result of generative AI, which I think is a compelling reason to avoid it. … Ultimately, though, I think AI just isn’t interesting to the everyday person. Even as someone who’s making a career of being excited about phones, I’ve yet to see an AI feature announced that doesn’t look like a chore to use or an overbearing generative tool. I don’t use any AI features day-to-day, and as such I don’t expect much more excitement from the general public.”
No, neither do we. If only investors would catch on. The research was performed by phone-reselling marketplace SellCell, which surveyed over 2,000 smartphone users.
Cynthia Murrell, March 19, 2025
What Sells Books? Publicity, Sizzle, and Mouth-Watering Titbits
March 18, 2025
Editor note: This post was written on March 13, 2025. Availability of the articles and the book cited may change when this appears in Mr. Arnold’s public blog.
I have heard that books are making a comeback. In rural Kentucky, where I labor in an underground nook, books are good for getting a fire started. The closest bookstore is filled with toys and odd stuff one places on a desk. I am rarely motivated to read a whatchamacallit like a book. I must admit that I read one of those emergence books from a geezer named Stuart A. Kauffman at the Santa Fe Institute, and it was pretty good. Not much next to the jazzy world of social media, but it was a good use of my time.
I now have another book I want to read. I think it is a slice of reality TV encapsulated in a form of communication less popular than TikTok- or Telegram Messenger-type media. The bundle of information is called Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism. Many journalists and pundits have grabbed the story of a dispute between everyone’s favorite social media company and an authoress named Sarah Wynn-Williams.
There is nothing like some good old legal action, a former employee, and a very defensive company.
The main idea is that a memoir published on March 11, 2025, and available via Amazon at https://shorturl.at/Q077l is not supposed to be sold. Like any good dinobaby who actually read a dead tree thing this year, I bought the book. I have no idea if it has been delivered to my Kindle. I know one thing. Good old Amazon will be able to reach out and kill that puppy when the news reaches the equally sensitive leadership at that outstanding online service.
A festive group ready to cook dinner over a small fire of burning books. Thanks, You.com. Good enough.
According to The Verge, CNBC, and the Emergency International Arbitral Tribunal, an arbitrator (Nicholas Gowen) decided that the book has to be put in the information freezer. According to the Economic Times:
“… violated her contract… In addition to halting book promotions and sales, Wynn-Williams must refrain from engaging in or ‘amplifying any further disparaging, critical or otherwise detrimental comments’… She also must retract all previous disparaging comments ‘to the extent within her control.’”
My favorite green poohbah publication The Verge offered:
…it’s unclear how much authority the arbitrator has to do so.
Such a bold statement: It’s unclear, we say.
The Verge added:
In the decision, the arbitrator said Wynn-Williams must stop making disparaging remarks against Meta and its employees and, to the extent that she can control, cease further promoting the book, further publishing the book, and further repetition of previous disparaging remarks. The decision also says she must retract disparaging remarks from where they have appeared.
Now I have written a number of books and monographs. These have been published by outfits no longer in business. I had a publisher in Scandinavia. I had a publisher in the UK. I had a publisher in the United States. A couple of these actually made revenue and one of them snagged a positive review in a British newspaper.
But in all honesty, no one really cared about my Google, search and retrieval, and electronic publishing work.
Why?
I did not have a giant company chasing me to the Emergency International Arbitral Tribunal and making headlines for the prestigious outfit CNBC.
Well, in my opinion Sarah Wynn-Williams has hit a book publicity home run. Imagine, non-readers like me buying a book about a firm to which I pay very little attention. Instead of writing about the Zuckbook, I am finishing a book (gasp!) about Telegram Messenger and that sporty baby maker Pavel Durov. Will his “core” engineering team chase me down? I wish. Sarah Wynn-Williams is in the news.
Will Ms. Wynn-Williams “win” a guest spot on the Joe Rogan podcast or possibly the MeidasTouch network? I assume that her publisher, agent, and she have their fingers crossed. I heard somewhere that any publicity is good publicity.
I hope Mr. Beast picks up this story. Imagine what he would do with forced arbitration and possibly a million-dollar payoff for the PR firm that can top the publicity that Meta has apparently delivered to Ms. Wynn-Williams.
Net net: Win, Wynn!
Stephen E Arnold, March 18, 2025
Management Insights Circa Spring 2025
March 18, 2025
Another dinobaby blog post. Eight decades and still thrilled when I point out foibles.
On a call today, one of the people asked, “Did you see that excellent leadership comes from ambivalence?” No, sorry. After my years at the blue chip consulting firm, I ignore those insights. Ambivalence. The motivated leader cares about money, the lawyers, the vacations, the big customer, and money. I think I have these in the correct order.
Imagine my surprise when I read another management breakthrough. Navigate to “Why Your ‘Harmonious’ Team Is Actually Failing.” The insight is that happy teams are in coffee shop mode. If one is not motivated by one of the factors I identified in the first paragraph of this essay, life will be like a drive-through smoothie shop. Kick back, let someone else do the work, and lap up that banana and tangerine goodie.
The write up reports a management concept: one should strive for a roughie, maybe with a dollop of chocolate and some salted nuts. Get that blood pressure rising. Here’s a passage I noted:
… real psychological safety isn’t about avoiding conflict. It’s about creating an environment where challenging ideas makes the team stronger, not weaker.
The idea is interesting. I have learned that many workers, like helicopter parents, want to watch and avoid unnecessary conflicts, interactions, and dust-ups. The write up slaps some psychobabble on this management insight. That’s perfect for academics on the tenure track and for talking to quite sensitive, big-spending clients. But often a more dynamic approach is necessary. If it is absent, there is a problem with the company. Hello, General Motors, Intel, and Boeing.
Stifle much?
The write up adds:
I’ve seen plenty of “nice” teams where everyone was polite, nobody rocked the boat, and meetings were painless. And almost all of those teams produced ok work. Why? Because critical thinking requires friction. Those teams weren’t actually harmonious—they were conflict-avoidant. The disagreements still existed; they just went underground. Engineers would nod in meetings then go back to their desks and code something completely different. Design flaws that everyone privately recognized would sail through reviews untouched. The real dysfunction wasn’t the lack of conflict—it was the lack of honest communication. Those teams weren’t failing because they disagreed too little; they were failing because they couldn’t disagree productively.
Who knew? Hello, General Motors, Intel, and Boeing.
Here’s the insight:
Here’s the weird thing I’ve found: teams that feel safe enough to hash things out actually have less nasty conflict over time. When small disagreements can be addressed head-on, they don’t turn into silent resentment or passive-aggressive BS. My best engineering teams were never the quiet ones—they were the ones where technical debates got spirited, where different perspectives were welcomed, and where we could disagree while still respecting each other.
The challenge is to avoid creating complacency.
Stephen E Arnold, March 18, 2025
AI May Be Discovering Kurt Gödel Just as Einstein and von Neumann Did
March 17, 2025
This blog post is the work of a humanoid dino baby. If you don’t know what a dinobaby is, you are not missing anything.
AI re-thinking is becoming more widespread. I published a snippet of an essay about AI and its impact in socialist societies on March 10, 2025. I noticed “A Bear Case: My Predictions Regarding AI Progress.” The write up is interesting, and I think it represents thinking which is becoming more prevalent among individuals who have racked up what I call AI mileage.
The main theme of the write up is a modern day application of Kurt Gödel’s annoying incompleteness theorem. I am no mathematician like my great uncle Vladimir Arnold, who worked for years with the somewhat quirky Dr. Kolmogorov. (Family tip: Going winter camping with the wizard Dr. Kolmogorov was not a good idea unless… Well, you know…)
The main idea is that a formal axiomatic system satisfying certain technical conditions cannot decide the truth value of all statements about natural numbers. In a nutshell, a system cannot settle every truth about itself from inside itself. Smart software is not able to go outside of its training boundaries as far as I know.
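For readers who want the theorem behind the analogy, here is a compact textbook statement of Gödel’s first incompleteness theorem. This is the standard form, not taken from the cited essay:

```latex
% Gödel's first incompleteness theorem (1931), textbook form.
% F: any consistent, effectively axiomatized formal system that
%    can express elementary arithmetic.
% G_F: the "Gödel sentence" that F constructs about itself.
\[
  F \nvdash G_F
  \qquad \text{and} \qquad
  F \nvdash \neg G_F
\]
% G_F is true in the standard model of arithmetic, yet F can
% neither prove nor refute it: the system cannot decide every
% statement about the natural numbers from inside its own axioms.
```

The loose parallel the essay draws is that a model trained on a fixed corpus plays the role of the fixed axiom system.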
Back to the essay: the author points out that AI does something useful:
There will be a ton of innovative applications of Deep Learning, perhaps chiefly in the field of biotech, see GPT-4b and Evo 2. Those are, I must stress, human-made innovative applications of the paradigm of automated continuous program search. Not AI models autonomously producing innovations.
The essay does contain a question I found interesting:
Because what else are they [AI companies and developers] to do? If they admit to themselves they’re not closing their fingers around godhood after all, what will they have left?
Let me offer several general thoughts. I admit that I am not able to answer the question, but some ideas crossed my mind when I was thinking about the sporty Kolmogorov, my uncle’s advice about camping in the winter, and this essay:
- Something else will come along. There is a myth that technology progresses. I think technology is like the fictional tribble on Star Trek. The products and services are destined to produce more products and services. As the Santa Fe Institute crowd would say, order emerges. Will the next big thing be AI? Probably AI will be in the DNA of the next big thing. So one answer to the question is, “Something will emerge.” Money will flow, and the next big thing cycle begins again.
- The innovators and the AI companies will pivot. This is a fancy way of saying, “Try to come up with something else.” Even in the age of monopolies and oligopolies, change is relentless. Some of the changes will be recognized as the next big thing or at least the thing a person can do to survive. Does this mean Sam AI-Man will manage the robots at the local McDonald’s? Probably not, but he will come up with something.
- The AI hot pot will cool. Life will regress to the mean or to a behavior that is not hell bent on becoming a super human like the guy who gets transfusions from his kid, the wonky “have my baby” thinking of a couple of high profile technologists, or the money lust of some 25 year old financial geniuses on Wall Street. A digitized organization man living out the theory of the leisure class will return. (Tip: Buy a dark grey suit. Lose the T shirt.)
As an 80 year old dinobaby, I find the angst of AI interesting. If Kurt Gödel were alive, he might agree to comment, “Sonny, you can’t get outside the set.” My uncle would probably say, “Those costs. Are they crazy?”
Stephen E Arnold, March 17, 2025
Wizard Snarks Amazon: Does Amazon Care? Ho Ho No
March 13, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
I read a wonderful essay from a fellow who created a number of high-value solutions. Remember the Oxford English Dictionary SGML project or the Open Text Index? The person deeply involved in both of those projects is Tim Bray. He wrote a pretty good essay called “Bye, Prime.” On the surface it is a chatty explanation of why a former Amazon officer dropped the “Prime” membership. Thinking about the comments in the write up, I believe Dr. Bray’s article underscores some deeper issues.
In my opinion, the significant points include:
First, 21st century capitalism lacks “ethics stuff.” The decisions benefit the stakeholders.
Second, in a major metropolitan area, local outlets provide equivalent products at competitive prices. This suggests a bit of price exploitation occurs in giant online retail operations.
Third, American companies are daubed with tar as a result of certain national postures.
Fourth, a crassness is evident in some US online services.
Is the article about Amazon? I would suggest that it is, but the implications are broader. I recommend the write up. I believe attending to the explicit and implicit messages in the essay would be useful.
I think the processes identified by Dr. Bray are unlikely to slow. Going back is difficult, perhaps impossible.
PS. I think the failure to fix up the security of AWS buckets, clean up the third-party reseller scams, and return basic functionality to the Kindle interface is an indication that Amazon has gotten lost in one of its warehouses because smart Alexa is really dumb.
Stephen E Arnold, March 13, 2025
NSO Group, the PR of Intelware, Captures Headlines … Yet Again
March 13, 2025
Our reading and research have led us to this basic rule: Unless measures are taken to keep something secret, diffusion is inevitable. Knowledge about systems, methods, and tools to access data is widespread. Case in point—Today’s General Counsel tells us, "Pegasus Spyware Is Showing Up on Corporate Execs’ Cell Phones." The brief write-up cites reporting by The Record’s Suzanne Smalley, who got her information from security firm iVerify. The data show a steep climb in Pegasus-infected devices over the second half of last year. We learn:
"The number of reported infected phones among iVerify corporate clients was eleven out of 18,000 devices tested in December last year. In May 2024, when iVerify first began offering the spyware testing service, a study found seven spyware infections out of 3,000 phones tested. ‘The world remains totally unprepared to deal with this from a security perspective,’ says iVerify co-founder and former National Security Agency analyst Rocky Cole, who was interviewed for the article. ‘This stuff is way more prevalent than people think.’ The article notes that business executives are now proving to be vulnerable, including individuals with access to proprietary plans and financial data, as well as those who frequently communicate with other influential leaders in the private sector. These leaders engage in sensitive work out of the public eye, including deals that have the potential to impact financial markets."
But how could this happen? Pegasus-maker NSO Group vows it only sells spyware to whitelisted governments for counterterrorism and fighting crime. It does do that. And also other things, reportedly. So we are unsurprised to find business executives among those allegedly targeted. We think it best to assume anything digital can be accessed by anyone at any moment. Is it time to bring back communications via pen and paper? At least someone must get out from behind a desk to intercept snail mail or dead drops.
Cynthia Murrell, March 13, 2025
AI and Jobs: Tell These Folks AI Will Not Impact Their Work
March 12, 2025
The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.
I have a friend who does some translation work. She’s chugging along because of her reputation for excellent work. However, one of the people who worked with me on a project requiring Russian language skills has not been so fortunate. The young person lacks the reputation and the contacts with a base of clients. The older person can be as busy as she wants to be.
What’s the future of translating from one language to another for money? For the established person, smart software appears to have had zero impact. The younger person seems to be finding that smart software is getting the translation work.
I will offer my take in a moment. First, let’s look at “Turkey’s Translators Are Training the AI Tools That Will Replace Them.”
I noted this statement in the cited article:
Turkey’s sophisticated translators are moonlighting as trainers of artificial intelligence models, even as their profession shrinks with the rise of machine translations. As the models improve, these training jobs, too, may disappear.
What’s interesting is that the skilled translators are providing information to AI models. These models are definitely going to replace the humans. The trajectory is easy to project. Machines will work faster and cheaper. The humans will abandon the discipline. Then prices will go up. Those requiring translations will find themselves spending more and having fewer options. Eventually the old hands will wither. Excellent translations which capture nuance will become a type of endangered species. The snow leopard of knowledge work is with us.
I noted this statement in the article:
Book publishing, too, is transforming. Turkish publisher Dedalus announced in 2023 that it had machine-translated nine books. In 2022, Agora Books, helmed by translator Osman Akınhay, released a Turkish edition of Jean-Dominique Brierre’s Milan Kundera, une vie d’écrivain, a biography of the Czech-French novelist Milan Kundera. Akınhay, who does not know French, used Google Translate to help him in the translation, to much criticism from the industry.
What’s this mean?
- Jobs will be lost, and the professionals with specialist skills are going to be the buggy whip makers in a world of automobiles.
- The downstream impact of smart software is going to kill off companies. The Chegg legal matter illustrates how a monopoly can mindlessly erode a company. This is like a speeding semi-truck smashing love bugs on a Florida highway. The bugs don’t know what hit them, and the semi-truck is unaware and the driver is uncaring. Dead bugs? So what? See “Chegg Sues Google for Hurting Traffic with AI As It Considers Strategic Alternatives.”
- Data from different sources suggesting that AI will just create jobs is either misleading, public relations, or dead wrong. The Bureau of Labor Statistics data are spawning articles like “AI and Its Impact on Software Development Jobs.”
Net net: What’s emerging is one of those classic failure scenarios. Nothing big seems to go wrong. Then a collapse occurs. That’s what’s beginning to appear. Just little changes. Heed the signals? Of course not. I can hear someone saying, “That won’t happen to me.” Of course not, but cheaper and faster are good enough at this time.
Stephen E Arnold, March 12, 2025
Microsoft Sends a Signal: AI, AIn’t Working
March 11, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
The problems with Microsoft’s AI push were evident from its start in 2023. The company thought it had identified the next big thing and had the big fish on the line. Now the work was easy: just reel in the dough.
Has it worked out for Microsoft? We know that big companies often have difficulty innovating. The enervating whiteboard sessions which seek to answer the question, “Do we build it or buy it?” usually give way to: [a] Let’s lock it up somehow, or [b] Let’s steal it because it won’t take our folks too long to knock out a me-too.
Microsoft sent a fairly loud beep-beep-beep when it began to cut back on its dependence on OpenAI. Not long ago, Microsoft trimmed some of its crazy spending for AI. Now we have the allegedly accurate information in “Microsoft Is Reportedly Plotting a Future without OpenAI.”
The write up states:
Microsoft has poured over $13 billion into the AI firm since 2019, but now it wants more control over its own models and costs. Simple enough in theory—build in-house alternatives, cut expenses, and call the shots.
Is this a surprise? No, I think it is just one more beep added to the already emitted beep-beep-beep.
Here’s my take:
- Narrowly focused smart software adds some useful capabilities to what I would call workflow enhancement. The narrow focus for an AI system reduces some of the wonkiness of the output. Therefore, certain tasks benefit; for example, grinding through data for a chemistry application or providing a call center operation with a good enough solution to rising costs. Broad use cases are more problematic.
- Humans who rely on information for a living don’t want to be caught out. This means that smart software is an assist or a supplement. This is like an older person using a cane when walking on a senior citizens’ adventure tour.
- Productizing a broad use case for smart software is expensive and prone to the sort of failure rate associated with a new product or service. A good example is a self-driving auto with collision avoidance. Would you stand in front of such a vehicle, confident in the smart software’s ability to not run over you? I wouldn’t.
What’s happening at Microsoft is a reasonably predictable and understandable approach. The company wants to hedge its bets since big bucks are flowing out, not in. The firm thinks it has enough smarts to do a better job even though in my opinion this is unlikely. Remember Bob, Clippy, and Windows updates? I do.
Also, small teams believe their approach will be a winner. Big companies believe their people can row that boat faster than anyone else. I know from personal experience and observation that this is not true. But the appearance of effort and the illusion of high value work encourages the approach.
Plus, the idea that a “leadership team” can manage innovation is a powerful one. Microsoft’s leadership believes in its leadership. That’s why the company is a leader. (I love this logic.)
Net net: My hunch is that Microsoft’s AI push is a disappointment. Now the company can shift into SWAT team mode and overwhelm the problem: AI that does not pay for itself.
Will this approach work? Nope, the outcome will be good enough. That is a bit more than one can say about Apple Intelligence: Seriously out of step with the Softies.
Stephen E Arnold, March 11, 2025
AI and Two Villages: A Challenge in Some Large Countries
March 10, 2025
This blog post is the work of a humanoid dino baby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you? We used AI to translate the original Russian into semi-English and to create the illustration. Hasta la vista to a human Russian translator and a human artist. That’s how AI works in real life.
My team and I are wrapping up our Telegram monograph. As part of the drill, we have been monitoring some information sources in Russia. We spotted the essay “AI and Capitalism.” (Note: I am not sure the link will resolve, but you can locate it via Yandex by searching for PCNews. I apologize, but some content is tricky to locate using consumer tools.)
The “white-collar village” and the “blue collar village” generated by You.com. Good enough.
I mention the article because it makes clear how smart software is affecting one technical professional working in a Russian government-owned telecommunications company. The author’s day-to-day work requires programming. One description of the value of smart software appears in this passage:
I work as a manager in a telecom and since last year I have been actively modifying the product line, adding AI components to each product. And I am not the only one there – the movement is going on in principle throughout the IT industry, of which we are a part… Where we have seen the payoff is replacing tree navigation with a text search bar, helping to generate text on a specific topic taking into account the concept cloud of the subject area, aggregating information from sources with different data structures, extracting a sequence of semantic actions of a person while working on a laptop, simultaneous translation with imitation of any voice, etc. The goal of all these events, as before, is to increase labor productivity. Previously, a person dug with his hands, then with a shovel, now with an excavator. Indeed, now it’s easier to ask the model for an example of code than to spend hours searching on Stack Overflow. This seriously speeds things up.
The author then identifies three consequences of the use of AI:
- Training will change because “you will need to retrain for another narrow specialty several times”
- Education will become more expensive, but who will pay? Possibly as important: who will be able to learn?
- Society will change, which is a way of saying “social turmoil” ahead, in my opinion.
Here’s an okay translation of the essay’s final paragraph:
…in the medium term, the target architecture of our society will inevitably see a critical stratification into workers and educated people. Blue and white collar castes. The fence between them will be so high that films about a possible future will become a fairly accurate forecast. I really want to end up in a white-collar village in the role of a white collar worker. Scary.
What’s interesting about this person’s point of view is that AI is already changing work in the Russian Federation. The challenge will be that an allegedly “flat” social structure will be split into those who can implement smart software and those who cannot. The chatter about smart software is usually focused on which company will find a way to generate revenue from the massive investments required to create solutions that consumers and companies will buy.
What gets less attention is the apparent impact of the technology on countries which purport to make life “better” via a different system. If the author is correct, some large nation states are likely to face some significant social challenges. Not everyone can work in “a white-collar village.”
Stephen E Arnold, March 10, 2025
A French Outfit Points Out Some Issues with Starlink-Type Companies
March 10, 2025
Another one from the dinobaby. No smart software. I spotted a story on the Thales Web site, but when I went back to check a detail, it had disappeared. After a bit of poking I found a recycled version called “Thales Warns Governments Over Reliance on Starlink-Type Systems.” The story must be accurate because it is from the “real” news outfit that wants my belief in their assertion of trust. Well, what do you know about trust?
Thales, as none of the people in Harrod’s Creek knows, is a French defence, intelligence, and go-to military hardware type of outfit. Thales and Dassault Systèmes are among the world leaders in a number of cutting-edge technology sectors. As a person who did some small work in France, I heard the Thales name mentioned a number of times. Thales has a core competency in electronics, military communications, and related fields.
The cited article reports:
Thales CEO Patrice Caine questioned the business model of Starlink, which he said involved frequent renewal of satellites and question marks over profitability. Without further naming Starlink, he went on to describe risks of relying on outside services for government links. “Government actors need reliability, visibility and stability,” Caine told reporters. “A player that – as we have seen from time to time – mixes up economic rationale and political motivation is not the kind that would reassure certain clients.”
I am certainly no expert in the lingo of a native French speaker using English words. I do know that the French language has a number of nuances which are difficult for a dinobaby like me to understand without saying, “Pourriez-vous répéter, s’il vous plaît?”
I noticed several things; specifically:
- The phrase “satellite renewal.” The idea is that the useful life of a Starlink-type device is shorter than that of some other technologies, such as those from Thales-type companies. Under the surface is the French attitude toward “fast fashion.” The idea is that cheap products are wasteful; well-made products, like a well-made suit, last a long time. Longer than a black baseball cap is how I interpreted the reference to “renewal.” I may be wrong, but this is a quite serious point underscoring the issue of engineering excellence.
- The reference to “profitability” seems to echo news reports that Starlink itself may be on the receiving end of preferential contract awards. If those types of cozy deals go away, will the Starlink-type business generate sufficient revenue to sustain innovation, higher quality, and longer life spans? Based on my limited knowledge of things French, this is a fairly direct way of pointing out the weak business model of the Starlink-type of service.
- The use of the words “reliability” and “stability” struck me as directing two criticisms at the Starlink-type of company. On one level the issue of corporate stability is obvious. However, “stability” applies to engineering methods as well as mental setup. Henri Bergson observed, “Think like a man of action, act like a man of thought.” I am not sure what M. Bergson would have thought about a professional wielding a chainsaw during a formal presentation.
- The direct reference to “mixing up” reiterates the mental stability and corporate stability referents. But the killer comment is that the merging of “economic rationale and political motivation” flashes bright warning lights to some French professionals and probably would resonate with other Europeans. I wonder what Austrian government officials thought about the chainsaw performance.
Net net: Some of the actions of a Starlink-type of company have been disruptive. In game theory, “keep people guessing” is a proven tactic. Will it work in France? Unlikely. Chainsaws will not be permitted in most meetings with Thales or French agencies. The baseball cap? Probably not.
Stephen E Arnold, March 10, 2025