Amazon and its Imperative to Dump Human Workers
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Everyone loves Amazon. The local merchants thank Amazon for allowing them to find their future elsewhere. The people and companies dependent on Amazon Web Services rejoiced when the AWS system failed and created an opportunity to do some troubleshooting and vendor shopping. I was the customer who received a pair of ladies’ underwear instead of an AMD Ryzen 5750X. I enjoyed being the butt of jokes about my red, see-through microprocessor. Was I happy!

Mice discuss Amazon’s elimination of expensive humanoids. Thanks, Venice.ai. Good enough.
However, I read “Amazon Plans to Replace More Than Half a Million Jobs With Robots.” My reaction was that some employees and people in the Amazon job pipeline were not thrilled to learn that Amazon allegedly will dump humans and embrace robots. What a great idea. No health care! No paid leave! No grousing about work rules! No medical costs! No desks! Just silent, efficient, depreciable machines. Of course there will be smart software. What could go wrong? Whoops. Wrong question after taking out an estimated one third of the Internet for a day. How about this question, “Will the stakeholders be happy?” There you go.
The write up cranked out by the Gray Lady, which draws on confidential documents and other sources, says:
Amazon’s U.S. work force has more than tripled since 2018 to almost 1.2 million. But Amazon’s automation team expects the company can avoid hiring more than 160,000 people in the United States it would otherwise need by 2027. That would save about 30 cents on each item that Amazon picks, packs and delivers to customers. Executives told Amazon’s board last year that they hoped robotic automation would allow the company to continue to avoid adding to its U.S. work force in the coming years, even though they expect to sell twice as many products by 2033. That would translate to more than 600,000 people whom Amazon didn’t need to hire.
Why is Amazon dumping humans? The NYT turns to that institution that found Jeffrey Epstein a font of inspiration. I read this statement in the cited article:
“Nobody else has the same incentive as Amazon to find the way to automate,” said Daron Acemoglu, a professor at the Massachusetts Institute of Technology who studies automation and won the Nobel Prize in economic science last year. “Once they work out how to do this profitably, it will spread to others, too.” If the plans pan out, “one of the biggest employers in the United States will become a net job destroyer, not a net job creator,” Mr. Acemoglu said.
Ah, save money. Keep more money for stakeholders. Who knew? Who could have foreseen this motivation?
What jobs will Amazon provide to humans? Obviously leadership will keep leadership jobs. In my decades of professional work experience, I have never met a CEO who really believes anyone else can do his or her job. Well, the NYT has an answer about what humans will do at Amazon; to wit:
Amazon has said it has a million robots at work around the globe, and it believes the humans who take care of them will be the jobs of the future. Both hourly workers and managers will need to know more about engineering and robotics as Amazon’s facilities operate more like advanced factories.
I wish to close this essay with several observations:
- Much of the information in the write up comes from company documents. I am not comfortable with the use of this type of information. It strikes me as a short cut, a bit like a Google or a self-made expert saying, “See what I did!”
- Many words were used to get one message across: Robots and by extension smart software will put people out of work. Basic income time, right? Why not say that?
- The reason Amazon wants to dump people is easy to summarize: Humans are expensive. Cut humans, and costs drop (in theory). But are there social costs? Sure, but why dwell on those?
Net net: Sigh. Did anyone reviewing this story note the Amazon online collapse? Perhaps there is a relationship between cost cutting at Amazon and the company’s stability?
Stephen E Arnold, October 22, 2025
Parents and Screen Time for Their Progeny: A Losing Battle? Yep
October 22, 2025
Sometimes I am glad my child-rearing days are well behind me. With technology a growing part of childhood education and leisure, how do parents stay on top of it all? For over 40%, not as well as they would like. The Pew Research Center examined “How Parents Manage Screen Time for Kids.” The organization surveyed US parents of kids 12 and under about the use of tablets, smartphones, smartwatches, gaming devices, and computers in their daily lives. Some highlights include:
“Tablets and smartphones are common – TV even more so.
- Nine-in-ten parents of kids ages 12 and younger say their child ever watches TV, 68% say they use a tablet and 61% say they use a smartphone.
- Half say their child uses gaming devices. About four-in-ten say they use desktops or laptops.
AI is part of the mix.
- About one-in-ten parents say their 5- to 12-year-old ever uses artificial intelligence chatbots like ChatGPT or Gemini.
- Roughly four-in-ten parents with a kid 12 or younger say their child uses a voice assistant like Siri or Alexa. And 11% say their child uses a smartwatch.
Screens start young.
- Some of the biggest debates around screen time center on the question: How young is too young?
- It’s not just older kids on screens: Vast majorities of parents say their kids ever watch TV – including 82% who say so about a child under 2.
- Smartphone use also starts young for some, but how common this is varies by age. About three-quarters of parents say their 11- or 12-year-old ever uses one. A slightly smaller share, roughly two-thirds, say their child age 8 to 10 does so. Majorities say so for kids ages 5 to 7 and ages 2 to 4.
- And fewer – but still about four-in-ten – say their child under 2 ever uses or interacts with one.”
YouTube is a big part of kids’ lives, presumably because it is free and provides a “contained environment for kids.” Despite this show of a “child-safe” platform, many have voiced concerns about both child-targeted ads and questionable content. TikTok and other social media are also represented, of course, though a whopping 80% of parents believe those platforms do more harm than good for children.
Parents cite several reasons they allow kids to access screens. Most do so for entertainment and learning. For children under five, keeping them calm is also a motivation. Those who have provided kids with their own phones overwhelmingly did so for ease of contact. On the other hand, those who do not allow smartphones cite safety, developmental concerns, and screen time limits. Their most common reason, though, is concern about inappropriate content. (See this NPR article for a more in-depth discussion of how and why to protect kids from seeing porn online, including ways porn is more harmful than it used to be. Also, your router is your first line of defense.)
It seems parents are not blind to the potential harms of technology. Almost all say managing screen time is a priority, though for most it is not in the top three. See the write-up for more details, including some handy graphs. Bottom line: Parents are fighting a losing battle in many US households.
Cynthia Murrell, October 22, 2025
Apple Can Do AI Fast … for Text That Is
October 22, 2025
Wasn’t Apple supposed to infuse Siri with Apple Intelligence? Yeah, well, Apple has been working on smart software. Unlike Google and Samsung, Apple is still working out some kinks in [a] its leadership, [b] innovation flow, [c] productization, and [d] double talk.
Nevertheless, I learned something by reading “Apple’s New Language Model Can Write Long Texts Incredibly Fast.” That’s excellent. The cited source reports:
In the study, the researchers demonstrate that FS-DFM was able to write full-length passages with just eight quick refinement rounds, matching the quality of diffusion models that required over a thousand steps to achieve a similar result. To achieve that, the researchers take an interesting three-step approach: first, the model is trained to handle different budgets of refinement iterations. Then, they use a guiding “teacher” model to help it make larger, more accurate updates at each iteration without “overshooting” the intended text. And finally, they tweak how each iteration works so the model can reach the final result in fewer, steadier steps.
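The quoted three-step recipe boils down to taking a handful of large, teacher-guided refinement steps instead of a thousand tiny ones. Here is a toy numeric analogy, my own sketch and not Apple’s FS-DFM code: each iteration closes a fixed fraction of the remaining gap to the target, so a schedule with bigger jumps converges in eight rounds.

```python
# Toy analogy only: a sketch of the *idea* behind few-step refinement,
# not Apple's FS-DFM implementation. Each iteration closes a fixed
# fraction of the remaining gap between the draft and the target.

def refine(start: float, target: float, steps: int, step_fraction: float) -> float:
    """Iteratively move `start` toward `target`."""
    x = start
    for _ in range(steps):
        x += step_fraction * (target - x)  # one refinement round
    return x

# Many tiny steps (analogous to a ~1,000-step diffusion sampler)...
slow = refine(0.0, 1.0, steps=1000, step_fraction=0.01)
# ...versus eight large, "teacher-guided" jumps (analogous to FS-DFM).
fast = refine(0.0, 1.0, steps=8, step_fraction=0.75)

# Both residual gaps end up well under 1e-3: comparable "quality,"
# wildly different step counts.
print(abs(1.0 - slow), abs(1.0 - fast))
```

The trick in the real paper, as quoted above, is training the model so those large jumps stay accurate without “overshooting”; the analogy only shows why fewer, bigger steps can match many small ones.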
And if you want proof, just navigate to the archive of research and marketing documents. You can access for free the research document titled “FS-DFM: Fast and Accurate Long Text Generation with Few-Step Diffusion Language Models.” The write up contains equations and helpful illustrations like this one:

The research paper is in line with other “be more efficient”-type efforts. At some point, companies in the LLM game will run out of money, power, or improvements. Efforts like Apple’s are helpful. However, as with its earlier debunking of smart software, the research suggests Apple is lagging in the AI game.
Net net: Orange iPhones and branding plays like Apple TV are fine, but a bit more focus on the delivery of products might be helpful. Apple did produce a gold thing-a-ma-bob for a world leader. It also reorganizes. Progress of a sort, I surmise.
Stephen E Arnold, October 21, 2025
Moral Police? Not OpenAI, Dude and Not Anywhere in Silicon Valley
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Coming up with clever stuff is either the warp or the woof of innovation. With the breakthroughs in software that seems intelligent, clever is morphing into societal responsibility. For decades I have asserted that the flow of digital information erodes notional structures. From my Eagleton Lecture in the mid-1980s to the observations in this blog, the accuracy of my observation is verified. What began as disintermediation in the niche of special librarians has become the driving force for the interesting world now visible to most people.

Worrying about morality in 2025 is like using a horse and buggy to commute in Silicon Valley. Thanks, Venice.ai. Good enough.
I can understand the big idea behind Sam AI-Man’s statements as reported in “Sam Altman Says OpenAI Isn’t ‘Moral Police of the World’ after Erotica ChatGPT Post Blows Up.” Technology is — like, you know, so, um — neutral. This means that its instrumental nature appears in applications. Who hassles the fellow who innovated with Trinitrotoluene or electric cars with top speeds measured in hundreds of miles per hour?
The write up says:
OpenAI CEO Sam Altman said Wednesday [October 15, 2025] that the company is “not the elected moral police of the world” after receiving backlash over his decision to loosen restrictions and allow content like erotica within its chatbot ChatGPT. The artificial intelligence startup has expanded its safety controls in recent months as it faced mounting scrutiny over how it protects users, particularly minors. But Altman said Tuesday in a post on X that OpenAI will be able to “safely relax” most restrictions now that it has new tools and has been able to mitigate “serious mental health issues.”
This is a sporty paragraph. It contains highly charged words and a message. The message, as I understand it, is, “We can’t tell people what to do or not to do with our neutral and really good smart software.”
Smart software has become the next big thing for some companies. Sure, many organizations are using AI, but the motors driving the next big thing are parked in structures linked with some large high technology outfits.
What’s a Silicon Valley type outfit supposed to do with this moral frippery? The answer, according to the write up:
On Tuesday [October 14, 2025], OpenAI announced it had assembled a council of eight experts who will provide insight into how AI impacts users’ mental health, emotions and motivation. Altman posted about the company’s aim to loosen restrictions that same day, sparking confusion and swift backlash on social media.
What am I confused about? The arrow of time. Sam AI-Man did one thing on the 14th of October and then explained that his firm is not the moral police on the 15th of October. Okay, make a move and then crawfish. That works for me, and I think the approach will become part of the managerial toolkit for many Silicon Valley outfits.
For example, what if AI does not generate enough data to pay off the really patient, super understanding, and truly kind people who fund the AI effort? What if the “think it and it will become real” approach fizzles? What if AI turns out to be just another utility useful for specific applications like writing high school essays or automating a sales professional’s prospect follow-up letters? What if…? No, I won’t go there.
Several observations:
- Silicon Valley-type outfits now have the tools to modify social behavior. Whether it is Peter Thiel as puppet master or Pavel Durov carrying a goat to inspire TONcoin dApp developers, these individuals can control hearts and minds.
- Ignoring or imposing philosophical notions with technology was not a problem when an innovation like Tesla’s AC motor was confined to a small sector of industry. But today, innovations can ripple globally in seconds. It should be no surprise that technology and ideology are, for now, intertwined.
- Control? Not possible. The ink, as the saying goes, has been spilled on the blotter. Out of the bottle. Period.
The waffling is little more than fire fighting. The uncertainty in modern life is a “benefit” of neutral technology. How do you like those real time ads that follow you around from online experience to online experience? Sam AI-Man and others of his ilk are not the moral police. That concept is as outdated as a horse-and-buggy on El Camino Real. Quaint but anachronistic. Just swipe left for another rationalization. It is 2025.
Stephen E Arnold, October 23, 2025
Smart Software: The DNA and Its DORK Sequence
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I love articles that “prove” something. This is a gem: “Study Proves Being Rude to AI Chatbots Gets Better Results Than Being Nice.” Of course, I believe everything I read online. This write up reports as actual factual:
A new study claims that being rude leads to more accurate results, so don’t be afraid to tell off your chatbot. Researchers at Pennsylvania State University found that “impolite prompts consistently outperform polite ones” when querying large language models such as ChatGPT.
My initial reaction is that I would much prefer providing my inputs about smart software directly to outfits creating these modern confections of a bunch of technologies and snake oil. How about a button on Microsoft Copilot, Google Gemini or whatever it is now, and the others in the Silicon Valley global domination triathlon of deception, money burning, and method recycling? This button would be labeled, “Provide feedback to leadership.” Think that will happen? Unlikely.
Thanks, Venice.ai, not good enough, you inept creation of egomaniacal wizards.
Smart YouTube and smart You.com were both dead for hours. Hey, no problemo. Want to provide feedback? Sure, just write “we care” at either firm. A wizard will jump right on the input.
The write up adds:
Okay, but why does being rude work? Turns out, the authors don’t know, but they have some theories.
Based on my experience with Silicon Valley type smart software outfits, I have an explanation. The majority of the leadership has a latent protein in their DNA. This DORK sequence ensures that arrogance, indifference to others, and boundless confidence takes precedence over other characteristics; for example, ethical compass aligned with social norms.
Software built by DORKs responds to dorkish behavior because the DORK sequence wakes up and actually attempts to function in a semi-reliable way.
The write up concludes with this gem:
The exact reason isn’t fully understood. Since language models don’t have feelings, the team believes the difference may come down to phrasing, though they admit “more investigation is needed.”
Well, that makes sense. No one is exactly sure how the black boxes churned out by the next big thing outfits work. Therefore, why being a dork to the model works remains a mystery. Can the DORK sequence be modified by CRISPR/Cas9? Is there funding the Pennsylvania State University experts can pursue? I sure hope so.
Stephen E Arnold, October 22, 2025
First WAP? What Is That? Who Let the Cat Out of the Bag?
October 21, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Ageing in rural Kentucky is not a good way to keep up with surveillance technology. I did spot a post on LinkedIn. I will provide a url for the LinkedIn post, but I have zero clue if anyone reading this blog will be able to view the information. The focus of the LinkedIn post is that some wizards have taken inspiration from NSO Group-type firms and done some innovation. Like any surveillance technology, one has to apply it in a real-life situation. Sometimes there is a slight difference between demonstrations, PowerPoint talks, and ease of use. But, hey, that’s the MBA-inspired way to riches or, at least in NSO Group’s situation, infamy.

Letting the cat out of the bag. Who is the individual? The president, an executive, a conference organizer, or a stealthy “real” journalist? One thing is clear: The cat is out of the bag. Thanks, Venice.ai. Good enough.
The LinkedIn post is from an entity using the handle OSINT Industries. Here is the link, dutifully copied from Microsoft’s outstanding social media platform. Don’t blame me if it doesn’t work. Microsoft just blames users, so just look in the mirror and complain: https://www.linkedin.com/posts/osint-industries_your-phone-is-being-tracked-right-now-ugcPost-7384354091293982721-KQWk?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAACYEwBhJbGkTw7Ad0vyN4RcYKj0Su8NUU
How’s that for a link? ShortURL spit out this version: https://shorturl.at/x2Qx9.
So what’s the big deal? Cyber security outfits and an online information service (in the old days a printed magazine) named Mother Jones learned that an outfit called First WAP exploited the SS7 telecom protocol. As I understand this signaling system, SS7 is about 50 years old and much loved by telephony nerds and Bell heads. The system and method acts like an old-fashioned switchyard operator at a rail yard in the 1920s. Signals are filtered from voice channels. Call connections and other housekeeping are pushed to the SS7 digital switchyard. Instead of being located underground in Manhattan, the SS7 system is digital and operates globally. I have heard but have no first-hand information about its security vulnerabilities. I know that a couple of companies are associated with switching fancy dancing. Do security exploits work? Well, the hoo-hah about First WAP suggests that SS7 exploitation is available.
The LinkedIn post says that “The scale [is] 14,000+ phone numbers. 160 countries. Over 1 million location pings.”
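For a sense of what those quoted numbers imply, a quick back-of-envelope calculation, my own and using only the figures in the post, works out to roughly 70 location pings per tracked number:

```python
# Back-of-envelope using only the figures quoted from the LinkedIn post.
tracked_numbers = 14_000    # "14,000+ phone numbers" (a lower bound)
location_pings = 1_000_000  # "Over 1 million location pings" (a lower bound)

pings_per_number = location_pings / tracked_numbers
print(f"~{pings_per_number:.0f} pings per tracked number on average")
```

Averages conceal variance, of course; some targets were presumably pinged far more often than others.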
A bit more color appears in the Russian information service FrankMedia.ru’s report “First WAP Empire: How Hidden Technology Followed Leaders and Activists.” The article is in Russian, but the ever-reliable Google Translate makes short work of one’s language blind spots. Here are some interesting points from Frank Media:
- First WAP has been in business for about 17 or 18 years
- The system was used to track Google and Raytheon professionals
- First WAP relies on resellers of specialized systems and services and does not do too much direct selling. The idea is that the intermediaries are known to the government buyers. A bright engineer from another country is generally viewed as someone who should not be in a meeting with certain government professionals. This is nothing personal, you understand. This is just business.
- The system is named Altamides, which may be a variant of a Greek word for “powerful.”
The big reveal in the Russian write up is that a journalist got into the restricted conference, entered into a conversation with an attendee at the restricted conference, and got information which has put First WAP in the running to be the next NSO Group in terms of PR problems. The Frank Media write up does a fine job of identifying two individuals. One is the owner of the firm and the other is the voluble business development person.
Well, everyone gets 15 minutes of fame. Let me provide some additional, old-person information. First, the company’s Web address is www.1rstwap.com. Second, the firm’s alleged full name is First WAP International DMCC. The “DMCC” acronym means that the firm operates from Dubai’s economic zone. Third, the firm sells through intermediaries; for example, an outfit called KCS operating allegedly from the UK. Companies House information is what might be called sparse.
Several questions:
- How did a non-LE or intel professional get into the conference?
- Why was the company able to operate off the radar for more than a decade?
- What benefits does First WAP derive from its nominal base in Indonesia?
- What are the specific security vulnerabilities First WAP exploits?
- Why do the named First WAP executives suddenly start talking after many years of avoiding an NSO-type PR problem?
Carelessness seems to be the reason this First WAP got its wireless access protocol put in the spotlight. Nice work!
To WAP up, you can download the First WAP encrypted messaging application from… wait for it… the Google Play Store. The Google listing includes this statement, “No data shared with third parties.” Think about that statement.
Stephen E Arnold, October 21, 2025
A Positive State of AI: Hallucinating and Sloppy but Upbeat in 2025
October 21, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Who can resist a report about AI authored on the “interwebs”? Is this a variation of the Internet as pipes? The write up is “Welcome to State of AI Report 2025.” When I followed the links, I could read this blog post, view a YouTube video, work through more than 300 online slides, or see “live survey results.” I must admit that when I write a report, I distribute it to a few people and move on. Not this “interwebs” outfit. The data are available for those who are in tune, locked in, and ramped up about smart software.
An anxious parent learns that a robot equipped with agentic AI will perform her child’s heart surgery. Thanks, Venice.ai. Good enough.
I appreciate enthusiasm, particularly when I read this statement:
The existential risk debate has cooled, giving way to concrete questions about reliability, cyber resilience, and the long-term governance of increasingly autonomous systems.
Agree or disagree, the report makes clear that doom is not associated with smart software. I think that this blossoming of smart software services, applications, and apps reflects considerable optimism. Some of these people and companies are probably in the AI game to make money. That’s okay as long as the products and services don’t urge teens to fall in love with digital friends, cause a user mental distress as a rabbit hole is plumbed, or just output incorrect information. Who wants to be the doctor who says, “Hey, sorry your child died. The AI output a drug that killed her. Call me if you have questions”?
I could not complete the 300-plus slides in the slide deck. I am not a video type, so the YouTube version was a non-starter. However, I did read the list of findings from the “interwebs” and its “team.” Please consult the source documents for a full, non-dinobaby version of what the enthusiastic researchers learned about 2025. I will highlight three findings and then offer a handful of comments:
- OpenAI is the leader of the pack. That’s good news for Sam AI-Man or SAMA.
- “Commercial traction accelerated.” That’s better news for those who have shoveled cash into the giant open hearth furnaces of smart software companies.
- Safety research is in a “pragmatic phase.” That’s the best news in the report. OpenAI, the leader like the Philco radio outfit, is allowing erotic interactions. Yes, pragmatic because sex sells as Madison Avenue figured out a century ago.
Several observations are warranted because I am a dinobaby, and I am not convinced that smart software is more than a utility; it is not an application like Lotus 1-2-3 or the original laser printer. Buckle up:
- The money pumped into AI is cash that is not being directed at the US knowledge system. I am talking about schools and their job of teaching reading, writing, and arithmetic. China may be dizzy with AI enthusiasm, but its schools are churning out people with fundamental skills that will allow that nation state to be the leader in a number of sectors, including smart software.
- Today’s smart software consists of neural network and transformer anchored methods. The companies are increasingly similar, and the different systems scatter incorrect or misleading output amidst recycled knowledge, data, and information. Two pigs cannot output an eagle except in a video game or an anime.
- The handful of firms dominating AI are not motivated by social principles. These firms want to do what they want. Governments can’t rein them in. Therefore, the “governments” try to co-opt the technology, hang on, and hope for the best. Laws, rules, regulations, ethical behavior — forget that.
Net net: The State of AI in 2025 is exactly what one would expect from Silicon Valley- and MBA-type thinking. Would you let an AI doc treat your 10-year-old child? You can work through the 300 plus slides to assuage your worries.
Stephen E Arnold, October 21, 2025
Into Video? Say Howdy to Loneliness and Shallow Thinking
October 21, 2025
This will surely improve the state of the world and validate Newton Minow’s observation about a vast wasteland. Or at least distract from it. On his Substack, Derek Thompson declares “Everything Is Television.” Thompson supports his assertion with three examples: First, he notes, Facebook and Instagram users now spend over 90% and 80% of their time on the platforms, respectively, watching videos. Next, he laments, most podcasts now include video. What started as a way to listen to something interesting as we performed other tasks has become another reason to stare at a screen. Finally, the post reports to our horror, both Meta and OpenAI have just launched products that serve up endless streams of AI-generated videos. Just what we needed.
Thompson’s definition of television here includes every venue hosting continuous flows of episodic video. This is different from entertainment forms that predate television—plays, books, concerts, etc.—because those were finite experiences. Now we can zone out to video content for hours at a time. And, apparently, more and more of us do. In a section titled “Lonely, Mean, and Dumb,” Thompson describes why this is problematic. He writes:
“My beef is not with the entire medium of moving images. My concern is what happens when the grammar of television rather suddenly conquers the entire media landscape. In the last few weeks, I have been writing a lot about two big trends in American life that do not necessarily overlap. My work on the ‘Antisocial Century’ traces the rise of solitude in American life and its effects on economics, politics, and society. My work on ‘the end of thinking’ follows the decline of literacy and numeracy scores in the U.S. and the handoff from a culture of literacy to a culture of orality. Neither of these trends is exclusively caused by the logic of television colonizing all media. But both trends are significantly exacerbated by it.”
On the issue of solitude, the post cites Robert Putnam’s Bowling Alone. That work correlates the growing time folks spent watching TV from 1965 to 1995 with a marked decrease in activities involving other people. Volunteering and dinner parties are a couple of examples. So what happens when the Internet, social media, and AI turbocharge that self-isolation trend? Thompson asserts:
“When everything turns into television, every form of communication starts to adopt television’s values: immediacy, emotion, spectacle, brevity. In the glow of a local news program, or an outraged news feed, the viewer bathes in a vat of their own cortisol. When everything is urgent, nothing is truly important. Politics becomes theater. Science becomes storytelling. News becomes performance. The result, [Neil] Postman warned, is a society that forgets how to think in paragraphs, and learns instead to think in scenes.”
Well said. For anyone with enough attention span to have read this far, see the write-up for more in-depth consideration of these issues. Is the human race forfeiting its capacity to think deeply and critically about complex topics? Is it too late to reverse the trend?
Cynthia Murrell, October 21, 2025
Amazon AWS: Two Pizza Team Engineering Delivers Indigestion to Lots of People
October 20, 2025
No smart software. Just a dumb and quite old dinobaby.
Years ago an investment bank asked me to write a report about Amazon’s technical infrastructure. I had visited Amazon as part of a US government entity. Along with four colleagues from different agencies, I had an opportunity to ask about how Amazon’s infrastructure could be used as an online services platform. I did not get an answer, just marketing talk. One of the phrases stuck with me; to wit, “We use two pizza teams.”
The idea is that no technical project can involve more developers than two pizzas can feed. I was not sure if this was brilliant, smart assery, or an admission that Amazon was a “good enough” engineering organization.
I had a couple of other Amazon projects after that big tech study. One was to analyze Amazon’s patents for blockchain. Let me tell you. Those Amazon engineers were into cross chain methods and a number of dizzying engineering innovations. Where did that blockchain stuff go? To tell the truth, I don’t have many Amazon blockchain items lighting up my radar. Then I did a report for a law enforcement group interested in Amazon’s drone demonstration in Australia. The idea was that Amazon’s drone had image recognition. The demo showed the drone spotting a shark heading toward swimmers. The alert was sounded and the shark had to go find another lunch spot. What happened to that? I have no idea. Then … oh, well, you get the idea.
Amazon does technology which seems to be okay with leasing Kindle books and allowing third party resellers to push polo shirts. The Ring thing, the Alexa gizmo, and other Amazon initiatives like its mobile phone were not hitting home runs.
I read “Widespread Internet Outage Reported As Amazon Web Services Works on Issue.” [This is a Microsoft link. If it goes dead, don’t call me. Give Copilot a whirl.] Okay, order those pizzas. The write up reports:
The Amazon cloud computing company, which supports wide swaths of the publicly available internet, issued an update Monday just after 3 p.m. ET saying that the company continues to “observe recovery across all AWS services.” “We are in the process of validating a fix,” AWS added, referring to a specific problem set off by the connectivity issue announced shortly after 3 a.m. Eastern Time.
Okay, that’s 12 hours and counting.
I want to point out that the two-pizza approach to engineering is cute. The reality is that AWS is vulnerable. The outage may be a result of engineering flubs. You are familiar with those. The company says, “An intern entered an invalid command.” Or the outage may mean that Amazon’s giant and almost unmanageable archipelago of servers, services, software, and systems was hacked by a bad actor. Maybe it was one of those 1,000 bad actors who took out Microsoft a couple of years ago? Maybe it was a customer who grew frustrated with inexplicable fees and charges? Maybe it was a problem caused by an upstream or downstream vendor? One thing is sure: It will take more than a two-pizza team to remediate and prevent the failure from happening again.
In that first report for the California money guys, I made one point: The AWS system will fail and no one will know exactly what went wrong.
Two pizza engineering is a Groucho Marx type of quip. Now we know what one gets: Digital food poisoning.
Stephen E Arnold, October 20, 2025, at 5:30 pm US Eastern
OpenAI and the Confusing Hypothetical
October 20, 2025
This essay is the work of a dumb dinobaby. No smart software required.
SAMA or Sam AI-Man Altman is probably going to ignore the Economist’s article “What If OpenAI Went Belly-Up?” I love what-if articles. These confections are hot buttons for consultants to push to get well-paid executives with impostor syndrome to sign up for a big project. Push the button and ka-ching. The cash register tallies another win for a blue chip.
Will Sam AI-Man respond to the cited article? He could fiddle the algorithms for ChatGPT to return links to AI slop. The result would be either [a] an improvement in Economist what-if articles or [b] a drop-off in their ingenuity. The Economist is not a consulting firm, but it seems as if some of its professionals want to be blue chippers.
A young would-be magician struggles to master a card trick. He is worried that he will fail. Thanks, Venice.ai. Good enough.
What does the write up hypothesize? The obvious point is that OpenAI is essentially a scam. When it self-destructs, it will do immediate damage to about 150 managers of their own and other people’s money. No new BMW for a favorite grandchild. Shame at the country club when a really terrible golfer who owns an asphalt paving company says, “I heard you took a hit with that OpenAI investment. What’s going on?”
Bad.
SAMA has been doing what look like circular deals. The write up is not so much hypothetical consultant talk as it is a listing of money moving among fellow travelers like riders on wooden horses on a merry-go-round at the county fair. The Economist article states:
The ubiquity of Mr Altman and his startup, plus its convoluted links to other AI firms, is raising eyebrows. An awful lot seems to hinge on a firm forecast to lose $10bn this year on revenues of little more than that amount. D.A. Davidson, a broker, calls OpenAI “the biggest case yet of Silicon Valley’s vaunted ‘fake it ’till you make it’ ethos”.
Is Sam AI-Man a variant of Elizabeth Holmes or is he more like the dynamic duo, Sergey Brin and Larry Page? Google did not warrant this type of analysis six or seven years into its march to monopolistic behavior:
Four of OpenAI’s six big deal announcements this year were followed by a total combined net gain of $1.7trn among the 49 big companies in Bloomberg’s broad AI index plus Intel, Samsung and SoftBank (whose fate is also tied to the technology). However, the gains for most concealed losses for some—to the tune of $435bn in gross terms if you add them all up.
Frankly I am not sure about the connection the Economist expects me to make. Instead of Eureka! I offer, “What?”
Several observations:
- The word “scam” does not appear in this hypothetical. Should it? It is a bit harsh.
- Circular deals seem to be okay even if the amount of “value” exchanged seems to be similar to projections about asteroid mining.
- Has OpenAI’s ability to hoover cash affected funding of other economic investments? I used to hear about manufacturing in the US. What we seem to be manufacturing is deals with big numbers.
Net net: This hypothetical raises no new questions. The “fake it till you make it” approach seems to be part of the plumbing as we march toward 2026. Oh, too bad about those MBA-types who analyzed the payoff from Sam AI-Man’s story telling.
Stephen E Arnold, October x, 2025

