AI-Yai-Yai: Two Wizards Unload on What VCs and Consultants Ignore
December 2, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
I read “Ilya Sutskever, Yann LeCun and the End of Just Add GPUs.” The write up is unlikely to find too many accelerationists printing it out and handing it to their pals at Philz Coffee. What does this indigestion maker say? Let’s take a quick look.
The write up says:
Ilya Sutskever – co-founder of OpenAI and now head of Safe Superintelligence Inc. – argued that the industry is moving from an “age of scaling” to an “age of research”. At the same time, Yann LeCun, VP & Chief AI Scientist at Meta, has been loudly insisting that LLMs are not the future of AI at all and that we need a completely different path based on “world models” and architectures like JEPA. [Beyond Search note because the author of the article was apparently making assumptions about what readers know. JEPA is shorthand for Joint Embedding Predictive Architecture. The idea is to find a recipe to help machines learn about the world the way a human does.]
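For readers who want the JEPA idea in concrete terms, here is a minimal sketch, assuming a toy linear encoder and random placeholder weights. It illustrates the core trick of joint embedding prediction, predicting the embedding of a masked target rather than the raw data; it is not Meta’s actual architecture.

```python
import numpy as np

# Toy sketch of joint embedding prediction (the JEPA idea), not Meta's
# implementation. Instead of predicting raw pixels or next tokens, the model
# predicts the *embedding* of a hidden target from the embedding of the
# visible context. All weights are random placeholders for illustration.

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy encoder: a linear map followed by tanh."""
    return np.tanh(W @ x)

dim_in, dim_emb = 8, 4
W_context = rng.normal(size=(dim_emb, dim_in))  # context encoder weights
W_target = rng.normal(size=(dim_emb, dim_in))   # target encoder weights
W_pred = rng.normal(size=(dim_emb, dim_emb))    # predictor weights

observation = rng.normal(size=dim_in)
context = observation.copy()
context[4:] = 0.0  # mask part of the observation; that part is the "target"

z_context = encode(context, W_context)    # embed what the model can see
z_target = encode(observation, W_target)  # embed the full observation
z_predicted = W_pred @ z_context          # predict the target's embedding

# Training would minimize this distance in embedding space, not pixel space.
loss = float(np.mean((z_predicted - z_target) ** 2))
print(round(loss, 4))
```

The design point the article gestures at: because the loss lives in embedding space, the model is pushed to capture what matters about the world rather than to reproduce every surface detail.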
I like to try to make things simple. Simple things are easier for me to remember. This passage means: Dead end. New approaches needed. Your interpretation may be different. I want to point out that my experience with LLMs in the past few months has left me with a sense that a “No Outlet” sign is ahead.

Thanks, Venice.ai. The signs are pointing in weird directions, but close enough for horseshoes.
Let’s take a look at another passage in the cited article.
“The real bottleneck [is] generalization. For Sutskever, the biggest unsolved problem is generalization. Humans can:
learn a new concept from a handful of examples
transfer knowledge between domains
keep learning continuously without forgetting everything
Models, by comparison, still need:
huge amounts of data
careful evals (sic) to avoid weird corner-case failures
extensive guardrails and fine-tuning
Even the best systems today generalize much worse than people. Fixing that is not a matter of another 10,000 GPUs; it needs new theory and new training methods.”
I assume “generalization” carries this freight of meaning for AI wizards. For me, this is a big-word way of saying, “Current AI models don’t work or perform like humans.” I do like the clarity of “needs new theory and training methods.” The “old” way of training has not made too many pals among those who hold copyrights, in my opinion. The article calls this “new recipes.”
Yann LeCun points out:
LLMs, as we know them, are not the path to real intelligence.
Yann LeCun likes world models. These have these attributes:
- “learn by watching the world (especially video)
- build an internal representation of objects, space and time
- can predict what will happen next in that world, not just what word comes next”
What’s the fix? You can navigate to the cited article and read the punch line to the experts’ views of today’s AI.
Several observations are warranted:
- Lots of money is now committed to what strikes these experts as dead ends
- The move fast and break things believers are in a spot where they may be going too fast to stop when the “Dead End” sign comes into view
- AI companies may wish, think, and believe they have the next big thing while operating with a willing suspension of disbelief
I wonder if the positions presented in this article provide some insight into Google’s building dedicated AI data centers for big-buck, security-conscious clients like NATO and Pavel Durov’s decision to build the SETI-type system he has announced.
Stephen E Arnold, December 2, 2025
IBM on the Path to Dyson Spheres But Quantum Networks Come First
November 28, 2025
This essay is the work of a dumb dinobaby. No smart software required.
How does one of the former innovators in Fear, Uncertainty, and Doubt respond to the rarefied atmosphere of smart software? The answer, in my opinion, appears in “IBM, Cisco Outline Plans for Networks of Quantum Computers by Early 2030s.” My prediction about IBM was wrong. I thought that with a platform like Watson, IBM would aim directly at Freeman Dyson’s sphere. The idea is to build a sphere in space to gather energy and power advanced computing systems. Well, one can’t get to the Dyson sphere without a network of quantum computers. And the sooner the better.

A big thinker conceptualizes inventions anticipated by science fiction writers. The expert believes that if he thinks it, that “it” will become real. Sure, but usually more than a couple of years are needed for really big projects like affordable quantum computers linked via quantum networks. Thanks, Venice.ai. Good enough.
The write up from the “trust” outfit Thomson Reuters says:
IBM and Cisco Systems … said they plan to link quantum computers over long distances, with the goal of demonstrating the concept is workable by the end of 2030. The move could pave the way for a quantum internet, though executives at the two companies cautioned that the networks would require technologies that do not currently exist and will have to be developed with the help of universities and federal laboratories.
Imagine: artificial general intelligence is likely to arrive about the same time. IBM has Watson. Does this mean that Watson can run on quantum computers? Those could solve the engineering challenges of the Dyson sphere. IBM could then solve the world’s energy requirements. This sequence seems like a reasonable tactical plan.
The write up points out that building a quantum network poses a few engineering problems. I noted this statement in the news report:
The challenge begins with a problem: Quantum computers like IBM’s sit in massive cryogenic tanks that get so cold that atoms barely move. To get information out of them, IBM has to figure out how to transform information in stationary “qubits” – the fundamental unit of information in a quantum computer – into what Jay Gambetta, director of IBM Research and an IBM fellow, told Reuters are “flying” qubits that travel as microwaves. But those flying microwave qubits will have to be turned into optical signals that can travel between Cisco switches on fiber-optic cables. The technology for that transformation – called a microwave-optical transducer – will have to be developed with the help of groups like the Superconducting Quantum Materials and Systems Center, led by the Fermi National Accelerator Laboratory near Chicago, among others.
Trivial compared to the Dyson sphere confection. It is now sundown for year 2025. IBM and its partner target being operational in 2029. That works out to roughly 36 months. Call it 48 just to add a margin of error.
Several observations:
- IBM and its partner Cisco Systems are staking out their claims to the future of computing
- Compared to the Dyson sphere idea, quantum computers networked together to provide the plumbing for an Internet make Jack Dorsey’s Web 5 vision seem like a Paleolithic sketch on the wall of the Lascaux Caves.
- Watson and IBM’s other advanced AI technologies probably assisted the IBM marketing professionals with publicizing Big Blue’s latest idea for moving beyond the fog of smart software.
Net net: The spirit of avid science fiction devotees is effervescing. Does the idea of a network of quantum computers tickle your nose or your fancy? I have marked my calendar.
Stephen E Arnold, November 28, 2025
Turkey Time: IT Projects Fail Like Pies and Cakes from Crazed Aunties
November 27, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
Today is Thanksgiving, and it is appropriate to consider the turkey approach to technology. The source of this idea comes from the IEEE.org online publication. The article explaining what I call “turkeyism” is “How IT Managers Fail Software Projects.” Because the write up is almost 4,000 words and far too long for reading during an American football game’s halftime break, I shall focus on a handful of points in the write up. I encourage you to read the entire article and, of course, sign up and subscribe. If you don’t, the begging-for-dollars pop-up may motivate you to click away and lose the full wisdom of the IEEE write up. I want to point out that many IT managers are trained as electrical engineers or computer scientists who have had to endure the veritable wonderland of imaginary numbers for a semester or two. But increasingly IT managers can be MBAs or, in some frisky Silicon Valley type companies, recent high school graduates with a native ability to solve complex problems and manage those older than they are. Hey, that works, right?

Auntie knows how to manage the baking process. She practices excellent hygiene, but with age comes forgetfulness. Those cookies look yummy. Thanks, Venice.ai. No mom, but good enough with Auntie pawing the bird.
Answer: Actually no.
The cited IEEE article states:
Global IT spending has more than tripled in constant 2025 dollars since 2005, from US $1.7 trillion to $5.6 trillion, and continues to rise. Despite additional spending, software success rates have not markedly improved in the past two decades. The result is that the business and societal costs of failure continue to grow as software proliferates, permeating and interconnecting every aspect of our lives.
Yep, and lots of those managers are members of IEEE or similar organizations. How about that jump from solving mathy problems to making software that works? It doesn’t seem to be working. Is it the universities, the on-the-job training, or the failure of continuing education? Not surprisingly, the write up doesn’t offer a solution.
What we have is a global, expensive problem. With more of everyday life dependent on “technology,” a failure can have some interesting consequences. Not only is it tough to get that new sweater delivered by Amazon, but downtime can kill a kid in a hospital when a system keels over. Dead is dead, isn’t it?
The write up says:
A report from the Consortium for Information & Software Quality (CISQ) estimated the annual cost of operational software failures in the United States in 2022 alone was $1.81 trillion, with another $260 billion spent on software-development failures. It is larger than the total U.S. defense budget for that year, $778 billion.
Chatter about the “cost” of AI tosses around even bigger numbers. Perhaps some of the AI pundits should consider the impact of AI failure in the context of IT failure. Frankly I am not confident about AI because of IT failure. The money is one thing, but given the evidence about the prevalence of failure, I am not ready to sing the JP Morgan tune about the sunny side of the street.
The write up adds:
Next to electrical infrastructure, with which IT is increasingly merging into a mutually codependent relationship, the failure of our computing systems is an existential threat to modern society. Frustratingly, the IT community stubbornly fails to learn from prior failures.
And what role does a professional organization play in this little but expensive drama? Are the arrows of accountability pointing at the social context in which the managers work? What about the education of these managers? What about the drive to efficiency? You know. Design the simplest possible solution. Yeah, these contextual components have created a high probability of failure. Will Auntie’s dessert give everyone food poisoning? Probably. Auntie thinks she has washed her hands and baked with sanitation in mind. Yep, great assumption because Auntie is old. Auntie Compute is going on 85 now. Have another cookie.
But here’s the killer statement in the write up:
Not much has worked with any consistency over the past 20 years.
This is like a line in a Jack Benny Show skit.
Several observations:
- The article identifies a global, systemic problem
- The existing mechanisms for training people to manage don’t work
- There is no solution.
Have a great Thanksgiving. Have another one of Auntie’s cookies. The two people who got food poisoning last year just had upset tummies. It will just get better. At least that’s what mom says.
Stephen E Arnold, November 27, 2025
Collaboration: Why Ask? Just Do. (Great Advice, Job Seeker)
November 24, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I am too old to have an opinion about collaboration in 2025. I am a slacker, not a user of Slack. I don’t “GoTo” meetings; I stay in my underground office. I don’t “chat” on Facebook or with smart software. I am, therefore, qualified to comment on the essay “Collaboration Sucks.” The main point of the essay is that collaboration is not a positive. (I know that this person has not worked at a blue chip consulting firm. If you don’t collaborate, you better have telepathy. Otherwise, you will screw up in a spectacular fashion with the client and the lucky colleagues who get to write about your performance or just drop hints to a Carpetland dweller.)
The essay states:
We aim to hire people who are great at their jobs and get out of their way. No deadlines, minimal coordination, and no managers telling you what to do. In return, we ask for extraordinarily high ownership and the ability to get a lot done by yourself. Marketers ship code, salespeople answer technical questions without backup, and product engineers work across the stack.
To me, this sounds like a Silicon Valley commandment along with “Go fast and break things” or “It’s easier to ask forgiveness than it is to get permission.” Allegedly Rear Admiral Grace Hopper offered this observation. However, Admiral Craig Hosmer told me that her attitude did more harm to females in the US Navy’s technical services than she thought. Which Admiral does one believe? I believe what Admiral Hosmer told me when I provided technical support to his little Joint Committee on Nuclear Energy many years ago.

Thanks, Venice.ai. Good enough. Good enough.
The idea that a team of really smart and independent specialists can do great things is what has made respected managers familiar with legal processes around the world. I think Google just received an opportunity to learn from its $600 million fine levied by Germany. Moving fast, Google made some interesting decisions about German price comparison sites. I won’t raise again the specter of the AI bubble and the leadership methods of Sam AI-Man. Everything is working out just swell, right?
The write up presents seven reasons why collaboration sucks. Most of the reasons revolve around flaws in a person. I urge you to read the seven variations on the theme of insecurity, impostor syndrome, and cluelessness.
My view is that collaboration, like any business process, depends on the context of the task and the work itself. In some organizations, employees can do almost anything because middle managers (if they are still present) have little idea about what’s going on with workers who are in an office half a world away, down the hall but playing Foosball, pecking away at a laptop in a small, overpriced apartment in Plastic Fantastic (aka San Mateo), or working from a van and hoping the Starlink is up.
I like the idea of crushing collaboration. I urge those who want to practice this skill to join a big time law firm, a blue chip consulting firm, or the work underway at a pharmaceutical research lab. I love the tips the author trots out; specifically:
- Just ship the code, product, whatever. Ignore inputs like Slack messages.
- Tell the boss or leader, you are the “driver.” (When I worked for the Admiral, I would suggest that this approach was not appropriate for the context of that professional, the work related to nuclear weapons, or a way to win his love, affection, and respect. I would urge the author to track down a four star and give his method a whirl. Let me know how that works out.)
- Tell people what you need. That’s a great idea if one has power and influence. If not, it is probably important to let ChatGPT word an email for you.
- Don’t give anyone feedback until the code or product has shipped. This is a career builder in some organizations. It is quite relevant when a massive penalty ensues because an individual withheld knowledge and thus made the problem worse. (There is something called “discovery.” And, guess what, those Slack and email messages can be potent.)
- Listen to inputs but just do what you want. (In my 60 year work career, I am not sure this has ever been good advice. In an AI outfit, it’s probably gold for someone. Isn’t there something called Fool’s Gold?)
Plus, there is one item on the action list for crushing collaboration I did not understand. Maybe you can divine its meaning? “If you are a team lead, or leader of leads, who has been asked for feedback, consider being more you can just do stuff.”
Several observations:
- I am glad I am not working in Sillycon Valley any longer. I loved the commute from Berkeley each day, but the craziness in play today would not match my context. Translation: I have had enough of destructive business methods. Find someone else to do your work.
- The suggestions for killing collaboration may kill one’s career except in toxic companies. (Notice that I did not identify AI-centric outfits. How politic of me.)
- The management failure implicit in this approach to colleagues, suggestions, and striving for quality is obvious to me. My fear is that some young professionals may see this collaboration sucks approach and fail to recognize the issues it creates.
Net net: When you hire, I suggest you match the individual to the context and the expertise required to the job. Short cuts contribute to the high failure rate of start ups and the dead end careers some promising workers create for themselves.
Stephen E Arnold, November 24, 2025
US Government Procurement Changes: Like Silicon Valley, Really? I Mean For Sure?
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I learned about the US Department of War overhaul of its procurement processes by reading “The Department of War Just Shot the Accountants and Opted for Speed.” Rumblings of procurement hassles have been reaching me for years. The cherished methods of capture planning, statement-of-work consulting, proposal writing, and bid evaluation consume many billable hours by consultants. The processes involve thousands of government professionals: lawyers, financial analysts, technical specialists, administrative professionals, and consultants. I can’t omit the consultants.
According to the essay written by Steve Blank (a person unfamiliar to me):
Last week the Department of War finally killed the last vestiges of Robert McNamara’s 1962 Planning, Programming, and Budgeting System (PPBS). The DoW has pivoted from optimizing cost and performance to delivering advanced weapons at speed.
The write up provides some of the history of the procurement process enshrined in such documents as the FAR or Federal Acquisition Regulations. If you want the details, Mr. Blank provides them; I urge you to read his essay in full.
I want to highlight what I think is an important point about the recent changes. Mr. Blank writes:
The war in Ukraine showed that even a small country could produce millions of drones a year while continually iterating on their design to match changes on the battlefield. (Something we couldn’t do.) Meanwhile, commercial technology from startups and scaleups (fueled by an immense pool of private capital) has created off-the-shelf products, many unmatched by our federal research development centers or primes, that can be delivered at a fraction of the cost/time. But the DoW acquisition system was impenetrable to startups. Our Acquisition system was paralyzed by our own impossible risk thresholds, its focus on process not outcomes, and became risk averse and immoveable.
Based on my experience, much of it working as a consultant on different US government projects, the horrific “special operation” delivered a number of important lessons about modern warfare. Reading between the lines of the passage cited above, three important items of information emerged from what I view as an illegal international event:
- Under certain conditions human creativity can blossom and then grow into major business operations. I would suggest that Ukraine’s innovations include the use of drones, how the drones are deployed in battle conditions, and how the basic “drone idea” reduces the effectiveness of certain traditional methods of warfare
- Despite disruptions to transportation and certain third-party products, Ukraine demonstrated that just-in-time production facilities can be made operational in weeks, sometimes days.
- The combination of innovative ideas, battlefield testing, and right-sized manufacturing demonstrated that a relatively small country can become a world-class leader in modern warfighting equipment, software, and systems.
Russia, with its ponderous planning and procurement process, has become the fall guy for a president who was a stand-up comedian. Who is laughing now? It is not the perpetrators of the “special operation.” The joke, as some might say, is on the individuals who created the “special operation.”
Mr. Blank states about the new procurement system:
To cut through the individual acquisition silos, the services are creating Portfolio Acquisition Executives (PAEs). Each Portfolio Acquisition Executive (PAE) is responsible for the entire end-to-process of the different Acquisition functions: Capability Gaps/Requirements, System Centers, Programming, Acquisition, Testing, Contracting and Sustainment. PAEs are empowered to take calculated risks in pursuit of rapidly delivering innovative solutions.
My view of this type of streamlining is that it will become less flexible over time. I am not sure when the ossification will commence, but bureaucratic systems, no matter how well designed, morph and become traditional bureaucratic systems. I am not going to trot out the academic studies about the impact of process, auditing, and legal oversight on any efficient process. I will plainly state that the bureaucracies to which I have been exposed in the US, Europe, and Asia are fundamentally the same.

Can the smart software helping enable the Silicon Valley approach to procurement handle the load and keep the humanoids happy? Thanks, Venice.ai. Good enough.
Ukraine is an outlier when it comes to the organization of its warfighting technology. Perhaps other countries, if subjected to a similar type of “special operation,” would behave as Ukraine has. Whether I was giving lectures for the Japanese government or dealing with issues related to materials science for an entity on Clarendon Terrace, the approach, rules, regulations, special considerations, etc. were generally the same.
The question becomes, “Can a new procurement system in an environment not at risk of extinction demonstrate the speed, creativity, agility, and productivity of the Ukrainian model?”
My answer is, “No.”
Mr. Blank writes before he digs into the new organizational structure:
The DoW is being redesigned to now operate at the speed of Silicon Valley, delivering more, better, and faster. Our warfighters will benefit from the innovation and lower cost of commercial technology, and the nation will once again get a military second to none.
This is an important phrase: Silicon Valley. It is the model for making the US Department of War into a more flexible and speedy entity, particularly with regard to procurement, the use of smart software (artificial intelligence), and management methods honed since Bill Hewlett and Dave Packard sparked the garage myth.
Silicon Valley has been a model for many organizations and countries. However, who thinks much about the Silicon Fen? I sure don’t. I would wager a slice of cheese that many readers of this blog post have never, ever heard of Sophia Antipolis. Everyone wants to be a Silicon Valley and high-technology, move fast and break things outfit.
But there is only one Silicon Valley. Now the question is, “Will the US government be a successful Silicon Valley, or will it fizzle out?” Based on my experience, I want to go out on a very narrow limb and suggest:
- Cronyism was important to Silicon Valley, particularly for funding and lawyering. The “new” approach to Department of War procurement is going to follow a similar path.
- As the stakes go up, growth becomes more important than fiscal considerations. As a result, the cost of becoming bigger, faster, cheaper spikes. Costs for the majority of Silicon Valley companies kill off most start ups. The failure rate is high, and it is exacerbated by the need of the winners to continue to win.
- Silicon Valley management styles produce some negative consequences. Often overlooked are such modern management methods as [a] a lack of common sense, [b] decisions based on entitlement or short term gains, and [c] a general indifference to the social consequences of an innovation, a product, or a service.
If I look forward based on my deeply flawed understanding of this Silicon Valley revolution, I see monopolistic behavior emerging. Bureaucracies will emerge because people working for other people create rules, procedures, and processes to minimize the craziness of the go fast and break things activities. Workers create bureaucracies to deal with chaos, not cause chaos.
Mr. Blank’s essay strikes me as generally supportive of this reinvention of the Federal procurement process. He concludes with:
Let’s hope these changes stick.
My personal view is that they won’t. Ukraine created a wartime Silicon Valley in a real-time, shoot-and-survive conflict. The urgency is not parked in a giant building in Washington, DC, or a Silicon Valley dream world. A more pragmatic approach is to partition procurement methods: apply Silicon Valley thinking to certain classes of procurement; modify the FAR to streamline certain processes; and leave some of the procedures unchanged.
AI is a go fast and break things technology. It also hallucinates. Drones from Silicon Valley companies don’t work in Ukraine. I know because someone with firsthand information told me. What will the new methods of procurement deliver? Answer: drones that won’t work in a modern asymmetric conflict. With decisions involving AI, I sure don’t want to find myself in a situation in which smart software makes stuff up or operates on digital mushrooms.
Stephen E Arnold, November 12, 2025
Fear in Four Flavors or What Is in the Closet?
November 6, 2025
This essay is the work of a dumb dinobaby. No smart software required.
AI fear. Are you afraid to resist the push to make smart software a part of your life? I think of AI as a utility, a bit like old-fashioned enterprise search, just on very expensive steroids. One never knows how that drug use will turn out. Will the athlete win trophies or drop from heart failure in the middle of an event?
The write up “Meet the People Who Dare to Say No to Artificial Intelligence” is a rehash of some AI tropes. What makes the write up stand up and salute is a single line in the article. (This is a link from Microsoft. If the link is dead, let one of its caring customer support chatbots know, not me.) Here it is:
Michael, a 36-year-old software engineer in Chicago who spoke on the condition that he be identified only by his first name out of fear of professional repercussions…
I find this interesting. A professional will not reveal his name for fear of “professional repercussions.” I think the subject is algorithms, not politics. I think the subject is neural networks, not racial violence. I think the subject is online, not the behavior of a religious figure.

Two roommates are afraid of a blue light. Very normal. Thanks, Venice.ai. Good enough.
Let’s think about the “fear” of talking about smart software.
I asked AI why a 36-year-old would experience fear. Here’s the short answer from the remarkably friendly, eager AI system:
- Biological responses to perceived threats,
- Psychological factors like imagination and past trauma,
- Personality traits,
- Social and cultural influences.
It seems to me that external and internal factors enter into fear. In the case of talking about smart software, what could be operating? Let me hypothesize for a moment.
First, the person may see smart software as posing a threat. Okay, that’s an individual perception. Everyone can have an opinion. But the fear angle strikes me as a displacement activity in the brain. Instead of thinking about the upside of smart software, the person afraid to talk about a collection of zeros and ones only sees doom and gloom. Okay, I sort of understand.
Second, the person may have some psychological problems. But software is not the same as a seven year old afraid there is a demon in the closet. We are back, it seems, to the mysteries of the mind.
Third, the person is fearful of zeros and ones because the person is afraid of many things. Software is just another fear trigger like a person uncomfortable around little spiders is afraid of a great big one like the tarantulas I had to kill with a piece of wood when my father wanted to drive his automobile in our garage in Campinas, Brazil. Tarantulas, it turned out, liked the garage because it was cool and out of the sun. I guess the garage was similar to a Philz’ Coffee to an AI engineer in Silicon Valley.
Fourth, social and cultural influences cause a person to experience fear. I think of my neighbor approached by a group of young people demanding money and her credit card. Her social group consists of 75-year-old females who play bridge. The youngsters were a group of teenagers hanging out in a parking lot in an upscale outdoor mall. Now my neighbor does not want to go to the outdoor mall alone. Nothing happened, but those social and cultural influences kicked in.
Anyway, fear is real.
Nevertheless, I think smart software fear boils down to more basic issues. One, smart software will cause a person to lose his or her job. The job market is not good; therefore, fear of not paying bills, social disgrace, etc. kick in. Okay, but it seems that learning about smart software might take the edge off.
Two, smart software may suck today, but it is improving rapidly. This is the seven year old afraid of the closet behavior. Tough love says, “Open the closet. Tell me what you see.” In most cases, there is no person in the closet. I did hear about a situation involving a third party hiding in the closet. The kid’s opening the door revealed the stranger. Stuff happens.
Three, if a person was raised in an environment in which fear was a companion, that behavior may carry forward. Boo.
Net net: What is in Mr. AI’s closet?
Stephen E Arnold, November 6, 2025
News Flash: Software Has a Quality Problem. Insight!
November 3, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “The Great Software Quality Collapse: How We Normalized Catastrophe.” What’s interesting about this essay is that the author cares about doing good work.
The write up states:
We’ve normalized software catastrophes to the point where a Calculator leaking 32GB of RAM barely makes the news. This isn’t about AI. The quality crisis started years before ChatGPT existed. AI just weaponized existing incompetence.

Marketing is more important than software quality. Right, rube? Thanks, Venice.ai. Good enough.
The bound phrase “weaponized existing incompetence” points to an issue in a number of knowledge-value disciplines. The essay identifies some issues the author has tracked; for example:
- Memory consumption in Google Chrome
- Windows 11 updates breaking the start menu and other things (printers, mice, keyboards, etc.)
- Security problems such as the long-forgotten CrowdStrike misstep that cost customers about $10 billion.
But the list of indifferent or incompetent coding leads to one stop on the information superhighway: Smart software. The essay notes:
But the real pattern is more disturbing. Our research found:
AI-generated code contains 322% more security vulnerabilities
45% of all AI-generated code has exploitable flaws
Junior developers using AI cause damage 4x faster than without it
70% of hiring managers trust AI output more than junior developer code
We’ve created a perfect storm: tools that amplify incompetence, used by developers who can’t evaluate the output, reviewed by managers who trust the machine more than their people.
I quite like the bound phrase “amplify incompetence.”
The essay makes clear that the wizards of Big Tech AI prefer to spend money on plumbing (infrastructure), not software quality. The write up points out:
When you need $364 billion in hardware to run software that should work on existing machines, you’re not scaling—you’re compensating for fundamental engineering failures.
The essay concludes that Big Tech AI as well as other software development firms should shift focus.
Several observations:
- Good enough is now a standard of excellence
- “Go fast” is better than “good work”
- The appearance of something is more important than its substance.
Net net: It’s TikTok, YouTube, and a carnival midway bundled into a new type of work environment.
Stephen E Arnold, November 3, 2025
You Should Feel Lucky Because … Technology!
October 24, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Doesn’t it feel like people are accomplishing more than you? It’s a common feeling, but Fast Company explains that high achievers are accomplishing more because they’re using tech. They’ve outlined a nice list of how high achievers do this: “4 Ways High Achievers Use Tech To Get More Done.” The first concept to understand is that high achievers use technology as a tool and not a distraction. Sachin Puri, Liquid Web’s chief growth officer, said,
“‘They make productivity apps their first priority, plan for intentional screen time, and select platforms intentionally. They may spend lots of time on screens, but they set boundaries where they need to, so that technology enhances their performance, rather than slowing it down.’”
Liquid Web surveyed six-figure earners aka high achievers to learn how they leverage their tech. They discovered that these high earners are intentional with their screen time. They average seven hours a day on their screens, but their time is focused on being productive. They also limit phone entertainment time to three hours a day.
Sometimes they also put a hold on using technology for the sake of mental health and hygiene. It’s important to take technology breaks to reset focus and maintain productivity. They also choose tools that keep them productive, such as calendar and scheduling apps, and they use chatbots to automate repetitive tasks, brainstorm, summarize information, and stay ahead of deadlines.
Here’s another important one: high achievers focus their social media habits. Yes, that is better than doom scrolling. Other findings are:
- “Finally, high-achievers are mindful of social media. For example, 49% avoid TikTok entirely. Instead, they gravitate toward sites that offer a career-related benefit. Nearly 40% use Reddit as their most popular platform for learning and engagement.”
- “Successful people are also much more engaged on LinkedIn. Only 17% of high-achievers said they don’t use the professional networking site, compared to 38% of average Americans who aren’t engaged there.”
- “Many high-achievers don’t give up on screens altogether—they just shift their focus,” says Puri. “Their social media habits show it, with many opting for interactive, discussion-based apps such as Reddit over passive scroll-based apps such as TikTok.”
The lesson here is that screen time isn’t always a time waste bin. We did not know that LinkedIn was an important service since the report suggests that 83 percent of high achievers embrace the Microsoft service. Don’t the data go into the MSFT AI clothes hamper?
Whitney Grace, October 24, 2025
Forget AI. The Real Game Is Control by Tech Wizards
October 6, 2025
This essay is the work of a dumb dinobaby. No smart software required.
The weird orange newspaper ran an opinion-news story titled “How Tech Lords and Populists Changed the Rules of Power.” The author is Giuliano da Empoli. Now he writes. He has worked in the Italian government. He was the Deputy Mayor for Culture in the city of Florence. Niccolò Machiavelli (1469-1527) lived in Florence. That Florentine’s ideas may have influenced Giuliano.
What are the tech bros doing? M. da Empoli writes:
The new technological elites, the Musks, Mark Zuckerbergs and Sam Altmans of this world, have nothing in common with the technocrats of Davos. Their philosophy of life is not based on the competent management of the existing order but, on the contrary, on an irrepressible desire to throw everything up in the air. Order, prudence and respect for the rules are anathema to those who have made a name for themselves by moving fast and breaking things, in accordance with Facebook’s famous first motto. In this context, Musk’s words are just the tip of the iceberg and reveal something much deeper: a battle between power elites for control of the future.
In the US, the current pride of tech lions have revealed their agenda and their battle steed, Donald J. Trump. The “governing elite” are on their collective back feet. M. da Empoli points the finger at social media and online services as the magic carpet the tech elites ride even though these look like private jets. In the online world, M. da Empoli says:
On the internet, a campaign of aggression or disinformation costs nothing, while defending against it is almost impossible. As a result, our republics, our large and small liberal democracies, risk being swept away like the tiny Italian republics of the early 16th century. And taking center stage are characters who seem to have stepped out of Machiavelli’s The Prince to follow his teachings. In a situation of uncertainty, when the legitimacy of power is precarious and can be called into question at any moment, those who fail to act can be certain that changes will occur to their disadvantage.
What’s the end game? M. da Empoli asserts:
Together, political predators and digital conquistadors have decided to wipe out the old elites and their rules. If they succeed in achieving this goal, it will not only be the parties of lawyers and technocrats that will be swept away, but also liberal democracy as we have known it until today.
Several observations:
- The tech elites are in a race they have to win. Dumb phones and GenAI limiting their online activities are two indications that in the US some behavioral changes can be identified. Will the “spirit of log off” spread?
- The tech elites want AI to win. The reason is that control of information streams translates into power. With power comes opportunities to increase the wealth of those who manage the AI systems. A government cannot do this, but the tech elites can. If AI doesn’t work, lots of money evaporates. The tech elites do not want that to happen.
- Online tears down existing structures and leads inevitably to monopolistic or oligopolistic control of markets. The end game does not interest the tech elite. Power and money do.
Net net: What’s the fix? M. da Empoli does not say. He knows what’s coming is bad. What happens to those who deliver bad news? Clever people like Machiavelli write leadership how-to books.
Stephen E Arnold, October 6, 2025
What a Hoot? First, Snow White and Now This
October 3, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Disney+ Cancellation Page Crashes As Customers Rush to Quit after Kimmel Suspension.” I don’t think too much about Disney, the cost of going to a theme park, or the allegedly chill Walt Disney. Now it is Disney, Disney, Disney. The chant is almost displacing Epstein, Epstein, Epstein.
Somehow the Disney company muffed the bunny with Snow White. I think the film hit my radar when certain short human actors were going to be in a remake of the 1930s cartoon “Snow White.” Then I noted some stories about a new president and an old president who wanted to be the president again or whatever. Most recently, Disney hit the pause button for a late night comedy show. Some people were not happy.
The write up informed me:
With cancellations surging, many subscribers reported technical issues. On Reddit’s r/Fauxmoi, one post read, “The page to cancel your Hulu/Disney+ subscription keeps crashing.”
As a practical matter, the way to stop cancellations is to dial back the resources available to the Web site. Presto. No more cancellations until the server is slowly restored to functionality so it can fall over again.
I am pragmatic. I don’t like to think that information technology professionals (either full time “cast” or part-timers) can’t keep a Web site online. It is 2025. A phone call to a service provider can solve most reliability problems as quickly as the data can be copied to a different data center.
Let me step back. I see several signals in what I will call the cartoon collapse.
- The leadership of Disney cannot rely on the people in the company; for example, the new Snow White flopped and the Web server fell over.
- The judgment of those involved in specific decisions seems to be out of sync with the customers and the stakeholders in the company. Walt had Mickey Mouse aligned with what movie goers wanted to see and what stakeholders expected the enterprise to deliver.
- The technical infrastructure seems flawed. Well, not “seems.” The cancellation server failed.
Disney is an example of what happens when “leadership” has not set up an organization to succeed. Furthermore, the Disney case raises this question, “How many other big, well-known companies will follow this Disney trajectory?” My thought is that a Disney-style disconnect between “management” staff, customers, stakeholders, and technology exists in a number of other outfits.
What will be these firms’ Snow White and late night comedian moment?
Stephen E Arnold, October 3, 2025
PS. Disney appears to have raised prices and then offered my wife a $2.99 per month “deal.” Slick stuff.