AI Will Not Have a Negative Impact on Jobs. Knock Off the Negativity Now

September 2, 2025

No AI. Just a dinobaby working the old-fashioned way.

The word from Goldman Sachs is parental, and well it should be. After all, Goldman Sachs is the big dog. PC Week’s story “Goldman Sachs: AI’s Job Hit Will Be Brief as Productivity Rises” makes this crystal clear, or almost. In an era of PR and smart software, I am never sure who is creating what.

The write up says:

AI will cause significant, but ultimately temporary, disruption. The headline figure from the report is that widespread adoption of AI could displace 6-7% of the US workforce. While that number sounds alarming, the firm’s economists, Joseph Briggs and Sarah Dong, argue against the narrative of a permanent “jobpocalypse.” They remain “skeptical that AI will lead to large employment reductions over the next decade.”

Knock off the complaining already. College graduates with zero job offers. Just do the van life thing for a decade or become an influencer.

The write up explains history just like the good old days:

“Predictions that technology will reduce the need for human labor have a long history but a poor track record,” they write. The report highlights a stunning fact: Approximately 60% of US workers today are employed in occupations that didn’t even exist in 1940. This suggests that over 85% of all employment growth in the last 80 years has been fueled by the creation of new jobs driven by new technologies. From the steam engine to the internet, innovation has consistently eliminated some roles while creating entirely new industries and professions.

Technology and brilliant management like that at Goldman Sachs make the economy hum along. The write up proves it, and I quote:

Goldman Sachs expects AI to follow this pattern.

For those posting TikTok- and YouTube-type videos revealing that jobs are hard to obtain, or the fathers whining about sending 200 job applications each month for six months: knock it off. The sun will come up tomorrow. The financial engines will churn and charge a service fee, of course. The flowers will bloom because that baloney about global warming is dead wrong. The birds will sing (well, maybe not in Manhattan) but elsewhere because windmills creating power are going to be shut down so the birds won’t be decapitated any more.

Everything is great. Goldman Sachs says this. In Goldman we trust or is it Goldman wants your trust… fund that is.

Stephen E Arnold, September 2, 2025

Swinging for the Data Centers: You May Strike Out, Casey

September 2, 2025

Home to a sparse population of humans, the Cowboy State is about to generate an immense amount of electricity. Tech Radar Pro reports, “A Massive Wyoming Data Center Will Soon Use 5x More Power than the State’s Human Occupants—But No One Knows Who Is Using It.” Really? We think we can guess. The Cheyenne facility is to be powered by a bespoke combination of natural gas and renewables. Writer Efosa Udinmwen writes:

“The proposed facility, a collaboration between energy company Tallgrass and data center developer Crusoe, is expected to start at 1.8 gigawatts and could scale to an immense 10 gigawatts. For context, this is over five times more electricity than what all households in Wyoming currently use.”

Who could need so much juice? Could it be OpenAI? So far, Crusoe neither confirms nor denies that suspicion. The write-up, however, notes Crusoe worked with OpenAI to build the world’s “largest data center” in Texas as part of the OpenAI-led “Stargate” initiative. (Yes, named for the portals in the 1994 movie and subsequent TV show. So clever.) Udinmwen observes:

“At the core of such AI-focused data centers lies the demand for extremely high-performance hardware. Industry experts expect it to house the fastest CPUs available, possibly in dense, rack-mounted workstation configurations optimized for deep learning and model training. These systems are power-hungry by design, with each server node capable of handling massive workloads that demand sustained cooling and uninterrupted energy. Wyoming state officials have embraced the project as a boost to local industries, particularly natural gas; however, some experts warn of broader implications. Even with a self-sufficient power model, a data center of this scale alters regional power dynamics. There are concerns that residents of Wyoming and its environs could face higher utility costs, particularly if local supply chains or pricing models are indirectly affected. Also, Wyoming’s identity as a major energy exporter could be tested if more such facilities emerge.”

The financial blind spot is explained in Futurism’s article “There’s a Stunning Financial Problem With AI Data Centers.” The main idea is that today’s investment will require future spending for upgrades, power, water, and communications. The result is that most of these “home run” swings will produce lousy batting averages, and some swingers may end up as hot dog vendors at the ballpark adjacent to the humming, hot structures.

Cynthia Murrell, September 2, 2025

Picking on the Zuck: Now It Is the AI Vision

September 1, 2025

No AI. Just a dinobaby working the old-fashioned way.

Hey, the fellow just wanted to meet girls on campus. Now his life work has become a negative. Let’s cut some slack for the Zuck. He is a thinking, caring family man. Imagine my shock when I read “Mark Zuckerberg’s Unbelievably Bleak AI Vision: We Were Promised Flying Cars. We Got Instagram Brain Rot.”

A person choosing to use a product the Zuck just bought conflates brain rot with a mass affliction. That’s outstanding reasoning.

The write up says:

In an Instagram video (of course) posted last week, Zuck explains that Meta’s goal is to develop “personal superintelligence for everyone,” accessed through devices like “glasses that can see what we see, hear what we hear, and interact with us throughout the day.” “A lot has been written about the scientific and economic advances that AI can bring,” he noted. “And I’m really optimistic about this.” But his vision is “different from others in the industry who want to direct AI at automating all of the valuable work”: “I think an even more meaningful impact in our lives is going to come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, be a better friend, and grow to become the person that you aspire to be.”

A person wearing the Zuck glasses will not be a “glasshole.” That individual will be a better human. Imagine taking the Zuck qualities and amplifying them like a high school sound system on the fritz. That’s what smart software will do.

The write up I saw is dated August 6, 2025, and it is hopelessly out of date. The Zuck has reorganized his firm’s smart software unit. He has frozen hiring except for a few quick strikes at competitors. And he is bringing more order to a quite well organized, efficiently run enterprise.

The big question is, “How can a write up dated August 6, 2025, become so mismatched with what the Zuck is currently doing?” I don’t think I can rely on a write up with an assertion like this one:

I’ve seen the best digital minds of my generation wasted on Reels.

I have never seen a Reels, but it is obvious I am in the minority. That means that I am ill-equipped to understand this:

the AI systems his team is building are not meant to automate work but to provide a Meta-governed layer between individual human beings and the world outside of them.

This sounds great.

I would like to share three thoughts I had whilst reading this essay:

  1. Ephemeral writing becomes weirdly unrelated to the reality of the current online market in the United States
  2. The Zuck’s statements and his subsequent reorganization suggest that alignment at Facebook is a bit like a grade school student trying to fit puzzle pieces into the wrong puzzle
  3. Goggles, glasses, implants — The fact that Facebook does not have a device has created a desire for a vehicle with a long hood and a big motor. Compensation comes in many forms.

Net net: One of the risks in the Silicon Valley world is that “real” is slippery. Do the outputs of “leadership” correlate with the reality of the organization?

Nope. Do this. Do that. See what works. Modern leadership. Will someone turn off those stupid flashing red and yellow alarm lights? I can see the floundering without the glasses, buzzing, and flashing.

Stephen E Arnold, September 1, 2025

More about AI and Peasants from a Xoogler Too

September 1, 2025

A former Googler predicts a rough ride ahead for workers. And would-be workers. Yahoo News shares “Ex-Google Exec’s Shocking Warning: AI Will Create 15 Years of ‘Hell’—Starting Sooner than We Think.” Only 15 years? Seems optimistic. Mo Gawdat issued his prophecy on the “Diary of a CEO” podcast. He expects “the end of white-collar work” to begin by the end of this decade. Indeed, the job losses have already begun. But the cascading effects could go well beyond high unemployment. Reporter Ariel Zilber writes:

“Without proper government oversight, AI technology will channel unprecedented wealth and influence to those who own or control these systems, while leaving millions of workers struggling to find their place in the new economy, according to Gawdat. Beyond economic concerns, Gawdat anticipates serious social consequences from this rapid transformation. Gawdat said AI will trigger significant ‘social unrest’ as people grapple with losing their livelihoods and sense of purpose — resulting in rising rates of mental health problems, increased loneliness and deepening social divisions. ‘Unless you’re in the top 0.1%, you’re a peasant,’ Gawdat said. ‘There is no middle class.’”

That is ominous. But, to hear Gawdat tell it, there is a bright future on the other side of those hellish 15 years. He believes those who survive past 2040 can look forward to a “utopian” era free from tedious, mundane tasks. This will free us up to focus on “love, community, and spiritual development.” Sure. But to get there, he warns, we must take certain steps:

“Gawdat said that it is incumbent on governments, individuals and businesses to take proactive measures such as the adoption of universal basic income to help people navigate the transition. ‘We are headed into a short-term dystopia, but we can still decide what comes after that,’ Gawdat told the podcast, emphasizing that the future remains malleable based on choices society makes today. He argued that outcomes will depend heavily on decisions regarding regulation, equitable access to technology, and what he calls the ‘moral programming’ of AI algorithms.”

We are sure government and Big Tech will get right on that. Totally doable in our current political and business climates. Meanwhile, Mo Gawdat is working on an “AI love coach.” I am not sure Mr. Gawdat is connected to the bureaucratic and management ethos of 2025. Is that why he is a Xoogler?

Cynthia Murrell, September 1, 2025

Faux Boeuf Delivers Zero Calories Plus a Non-Human Toxin

August 29, 2025

No AI. Just a dinobaby working the old-fashioned way.

That sizzling rib AI called boeuf à la Margaux Blanchard is a treat. I learned about this recipe for creating filling, substantive, calorie-laden content in “Wired and Business Insider Remove Articles by AI-Generated Freelancer.” I can visualize the meeting in which the decision was taken to hire Margaux Blanchard. I can also run, in my mental VHS, the meeting at which the issue was discovered. In my version, the group agreed to blame it on a contractor and the lousy job human resource professionals do these days.

What’s the “real” story? Let’s go to the Guardian write up:

On Thursday [August 22, 2025], Press Gazette reported that at least six publications, including Wired and Business Insider, have removed articles from their websites in recent months after it was discovered that the stories – written under the name of Margaux Blanchard – were AI-generated.

I frequently use the phrase “ordained officiant” in my dinobaby musings. Doesn’t everyone with some journalism experience?

The write up said:

Wired’s management acknowledged the faux pas, saying: “If anyone should be able to catch an AI scammer, it’s Wired. In fact we do, all the time … Unfortunately, one got through. We made errors here: This story did not go through a proper fact-check process or get a top edit from a more senior editor … We acted quickly once we discovered the ruse, and we’ve taken steps to ensure this doesn’t happen again. In this new era, every newsroom should be prepared to do the same.”

Yeah, unfortunately and quickly. Yeah.

I liked this paragraph in the story:

This incident of false AI-generated reporting follows a May error when the Chicago Sun-Times’ Sunday paper ran a syndicated section with a fake reading list created by AI. Marco Buscaglia, a journalist who was working for King Features Syndicate, turned to AI to help generate the list, saying: “Stupidly, and 100% on me, I just kind of republished this list that [an AI program] spit out … Usually, it’s something I wouldn’t do … Even if I’m not writing something, I’m at least making sure that I correctly source it and vet it and make sure it’s all legitimate. And I definitely failed in that task.” Meanwhile, in June, the Utah court of appeals sanctioned a lawyer after he was discovered to have used ChatGPT for a filing he made in which he referenced a nonexistent court case.

Hey, that AI is great. It builds trust. It is intellectually satisfying, just like some time in the kitchen with Margaux Blanchard, a hot laptop, and some spicy prompts. Yum yum yum.

Stephen E Arnold, August 29, 2025

Computer Science Grad Job Crisis: Root Cause Revealed

August 29, 2025

No AI. Just a dinobaby working the old-fashioned way.

I read a short item called “A Popular College Major Has One of The Highest Unemployment Rates.” The article contains old news, but it also reveals one of the underlying causes of the issue.

First, here’s the set up for the “no jobs for you” write up:

Computer science ranked seventh amongst undergraduate majors with the highest unemployment at 6.1 percent, according to the Federal Reserve Bank of New York. “Every kid with a laptop thinks they’re the next Zuckerberg, but most can’t debug their way out of a paper bag,” one expert told Newsweek.

Now, let’s look at the passage that points to an underlying cause:

HR consultant Bryan Driscoll told Newsweek: “Computer science majors have long been sold a dream that doesn’t match reality.”

And a bit of supporting input:

Michael Ryan, a finance expert and the founder of MichaelRyanMoney.com, told Newsweek: … “We created a gold rush mentality around coding right as the gold ran out. Companies are cutting engineering budgets by 40 percent while CS enrollment hits record highs. It’s basic economics. Flood the market, crater the wages.”

My take is that this is another example of “think it and it will become real” patterning in the US, and probably elsewhere too. Colleges and universities wanted to “sell” student loans. Computer science was nothing more than the bait on the hook of employment-for-life for the mark.

The idea that visualizing a world and making it real corresponds to how life unspools strikes me as crazy. In my career I have met a few people who said, “I knew I wanted to be an X, so I just did it.” The majority of those with whom I have interacted in my 60-plus-year work career say something like this: “Yeah, I majored in X, but an opportunity arose, and I took it. Now I do Y. Go figure.”

The “think it into reality” approach seems to deliver low-probability results. Situational decisions have several upsides. First, sometimes one doesn’t have a choice. Second, surprises happen. And third, as one moves through life (the unspooling idea), perceptions, interests, and even intelligence change.

My hunch is that today (it happens to be August 21, 2025) we are living in a world in which “think it and it will happen” thought processes are everywhere. Is Mark Zuckerberg suddenly concerned about an AI bubble? Will Microsoft launch Excel Copilot with a warning label that says, “This will output errors”? Will you trust your child’s medical treatment to a smart robot?

I like to think about dialing more “real” world into everyday life. Unemployment for computer science graduates won’t change too much in the “up” direction. But at least the carnival-culture approach to selling a college education, an AI start-up idea to a 20-something MBA “managing director”, and the “do it for 10,000 hours and become an expert” pitch may loosen their grip. These are some pretty wacky ideas.

Stephen E Arnold, August 29, 2025

Misunderstanding the Google: A Hot Wok

August 29, 2025

No AI. Just a dinobaby working the old-fashioned way.

I am no longer certain how many people read blog posts. Bing, Google, and Yandex seem to be crawling in a more focused way; that is, comprehensiveness is not part of the game plan. I want to do my small part by recommending that you scan (preferably study) “Google Is Killing the Open Web.”

The premise of the essay is clear: Google has been working steadily and in a relatively low PR voltage mode to control the standards for the Web. I commented on this in my Google Legacy, Google Version 2.0, and other Google writings as early as 2003. How did I identify this strategic vision? Easy. A Googler told me. This individual liked it when I called Google a “calculating predator.” This person made an effort (a lame one because he worked at Google) to hear my lectures about Google’s Web search.

Now, 22 years later, an individual has put the pieces together and concluded, rightly, that Google is killing the open Web. The essay states:

Google is managing to achieve what Microsoft couldn’t: killing the open web. The efforts of tech giants to gain control of and enclose the commons for extractive purposes have been clear to anyone who has been following the history of the Internet for at least the last decade, and the adopted strategies are varied in technique as they are in success, from Embrace, Extend, Extinguish (EEE) to monopolization and lock-in.

Several observations:

  1. The visible efforts to monopolize have been search, ads, and the mobile plays. The lower profile technical standards are going to be more important as new technologies emerge. The early Googlers’ instincts were accurate. People (namely Wok) are just figuring it out. Unfortunately, it is too late.
  2. Because online services have a tendency to become monopolies, the world of “online” has become increasingly centralized. The “myth” of decentralization is a great one, but so was the “Epic of Gilgamesh.” There may be some pony in there, but the reality is that it is better to centralize and then decide what to move out there.
  3. The big tech outfits reside in a “country,” but the reality is that these are borderless. There is no traditional there there. Consequently governments struggle to regulate what these outfits do. Australia levies a fine on Google. So what? Google just keeps being Googley. Live with it.

One cannot undo decades of methodical, strategic thinking, and deft tactical moves quickly. My view is that changing Google will occur within Google. The management thinking is becoming increasingly like that of an AT&T type company. Chop it up and it will just glue itself back together.

I know the Wok is hot. Time to cool off and learn to thrive in the walled garden. Getting out is going to be more difficult than many other tasks. Google controls lots of technology, including the button that opens the gate to the walled garden.

Stephen E Arnold, August 26, 2025

Think It. The *It* Becomes Real. Think Again?

August 27, 2025

No AI. Just a dinobaby working the old-fashioned way.

Fortune Magazine — once the gem for a now spinning-in-his-grave publisher — posted “MIT Report: 95% of Generative AI Pilots at Companies Are Failing.” I take a skeptical view of MIT. Why? The esteemed university found Jeffrey Epstein a swell person.

The thrust of the story is that people stick smart software into an organization, allow it time to steep, cook up a use case, and find the result unpalatable. Research is useful. When it evokes a “Duh!”, I don’t get too excited.

But there was a phrase in the write up which caught my attention: learning gap. AI or smart software is a “belief.” The idea of the next big thing creates an opportunity to move money. Flow, churn, motion: these are positive values in some business circles.

AI fits the bill. The technology demonstrates interesting capabilities. Use cases exist. Companies like Microsoft have put money into the idea. Moving money is proof that “something” is happening. And today that something is smart software. AI is the “it” for the next big thing.

Learning gap, however, is the issue. The hurdle is not Sam Altman’s fears about the end of humanity or his casual observation that trillions of dollars are needed to make AI progress. We have a learning gap.

But the driving vision for Internet-era innovation is do something big, change the world, reinvent society. I think this idea goes back to the sales-oriented philosophy of visualizing a goal and aligning one’s actions to achieve that goal. A fellow or persona named Napoleon Hill pulled together some ideas and crafted “Think and Grow Rich.” Today one just promotes the “next big thing,” gets some cash moving, and an innovation like smart software will revolutionize, remake, or redo the world.

The “it” seems to be stuck in the learning gap. Here’s the proof, and I quote:

But for 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained. The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.

Consider this question: What if smart software mostly works but makes humans uncomfortable in ways difficult for the user to articulate? What if humans lack the mental equipment to conceptualize what a smart system does? What if the smart software cannot answer certain user questions?

I find information about costs, failed use cases, hallucinations, and benefits plentiful. I don’t see much information about the “learning gap.” What causes a learning gap? Spell check makes sense. A click that produces a complete report on a complex topic is different. But in what way? What is the impact on the user?

I think the “learning gap” is a key phrase. I think there is money to be made in addressing it. I am not confident that visualizing a better AI is going to solve the problem which is similar to a bonfire of cash. The learning gap might be tough to fill with burning dollar bills.

Stephen E Arnold, August 27, 2025

Apple and Meta: The After Market Route

August 26, 2025

No AI. Just a dinobaby working the old-fashioned way.

Two big outfits are emulating the creative motif for an American television series titled “Pimp My Ride.” The show was hosted by rapper Xzibit, who has a new album called “Kingmaker” in the works. He became the “meme” of the television program with his signature phrase, “Yo, dawg, I heard you like.”


A DVD of season one is available for sale at www.bol.com.

Each episode, a “lucky person” would be approached and told that his or her vehicle would be given a makeover. Some of the makeovers were memorable. Examples included the “Yellow Shag Disaster,” which featured yellow paint and yellow shag carpeting. The team removed a rat living in the 1976 Pacer. Another was the “Drive In Theater Car.” It included a pop-up champagne dispenser and a TV screen installed under the hood for a viewing experience when people gathered outside the vehicle.

The idea was to take something that mostly worked and then add on extras. Did the approach work? It made Xzibit even more famous, and it contributed the phrase “Yo, dawg, I heard you like” to US popular culture between 2004 and 2007.

I think the “Pimp My Ride” concept has returned for Apple and Meta. Let me share my thoughts with you.

First, I noted that Bloomberg reports Apple is exploring the use of Google Gemini AI to power the long-suffering Siri. You can read the paywalled story at this link. Apple knows that Google’s payments are worth real money. The idea of adding more Google and getting paid for the decision probably makes sense to the estimable Apple. Will the elephants mate and produce more money, or will the grass get trampled? I don’t know. It will be interesting to see what the creative wizards at both companies produce. There is no date for the release of the first episode. I will be watching.

Second, the story presented in fragments on X.com appears at this X.com page. The key item of information is the alleged tie up between Meta and MidJourney:

Today we’re proud to announce a partnership with @midjourney , to license their aesthetic technology for our future models and products, bringing beauty to billions.

Meta, like Apple, is partnering with an AI success in the arts-and-crafts sector of smart software. The idea seems to focus on “aesthetic excellence.” How will these outfits enhance Meta? Here’s what the X.com comment offers:

To ensure Meta is able to deliver the best possible products for people it will require taking an all-of-the-above approach. This means world-class talent, ambitious compute roadmap, and working with the best players across the industry.

Will these add-on approaches to AI deliver something useful to millions, or will the respective organizations produce the equivalent of the “Pimp My Ride” Hot Tub Limousine? This after-market confection added a water-filled hot tub to a limousine. The owner of the vehicle could relax in the hot tub while the driver ferried the proud owner to the bank.

I assume the creations of the Apple, Google, Meta, and MidJourney teams will be captured on video and distributed on TikTok-type services as well as billions of computing devices. My hope is that Xzibit is asked to host the roll outs for the newly redone services. I would buy a hat, a T-shirt, and a poster for the “winner” of this new AI-enhanced effort.

Yo, dawg, I heard you like AI, right?

Stephen E Arnold, August 26, 2025

And the Problem for Enterprise AI Is … Essentially Unsolved

August 26, 2025

No AI. Just a dinobaby working the old-fashioned way.

I try not to let my blood pressure go up when I read “our system processes all your organization’s information.” Not only is this statement wildly incorrect, it is probably some combination of [a] illegal, [b] too expensive, and [c] too time consuming.

Nevertheless, vendors either repeat the mantra or imply it. When I talk with representatives of these firms, over time, fewer and fewer recognize the craziness of the assertion. Apparently the reality of trying to process documents related to a legal matter, medical information, salary data, government-mandated secrecy cloaks, data on a work-from-home contractor’s laptop which contains information about payoffs in a certain country to win a contract, and similar information is not part of this Fantasyland.

I read “Immature Data Strategies Threaten Enterprise AI Plans.” The write up is a hoot. The information is presented in a way that avoids describing certain ideas as insane or impossible. Let’s take a look at a couple of examples. I will, in italics, offer my interpretation of what the online publication is trying to coat with sugar and stick inside a Godiva chocolate.

Here’s the first snippet:

Even as senior decision-makers hold their data strategies in high regard, enterprises face a multitude of challenges. Nearly 90% of data pros reported difficulty with scaling and complexity, and more than 4 in 5 pointed to governance and compliance issues. Organizations also grapple with access and security risks, as well as data quality, trust and skills gaps.

My interpretation: Executives (particularly leadership types) perceive their organizations as more buttoned up than they are in reality. Ask another employee, and you will probably hear something like “overall we do very well.” The fact of the matter is that leadership and satisfied employees have zero clue about what is required to address a problem. Looking too closely is not a popular way to get that promotion or to keep the Board of Directors and stakeholders happy. When you have to identify an error, use a word like “governance” or “regulations.”

Here’s the second snippet:

To address the litany of obstacles, organizations are prioritizing data governance. More than half of those surveyed expect strengthened governance to significantly improve AI implementation, data quality and trust in business decisions.

My interpretation: Let’s talk about governance, not how poorly procurement is handled and the weird system problems that just persist. What is “governance”? Organizations are unsure how they continue to operate. The purpose of many organizations is — believe it or not — lost. Make money is the yardstick. Do what’s necessary to keep going. That’s why in certain organizations an employee from 30 years ago could return and go to a meeting. Why? No change. Same procedures, same thought processes, just different people. Incrementalism and momentum power the organization.

So what? Organizations are deciding to give AI a whirl or third parties are telling them to do AI. Guess what? Major change is difficult. Systems-related activities repeat the same cycle. Here’s one example: “We want to use Vendor X to create an enterprise knowledge base.” Then the time, cost, and risks are slowly explained. The project gets scaled back because there is not enough time, money, or employee cooperation (and the attorneys are totally addled) to make organization-spanning knowledge available to smart software.

The pitch sounds great. It has for more than 60 years. It is still a difficult deliverable, but it is much easier to market today. Data strategies are one thing; reality is another.

Stephen E Arnold, August 26, 2025
