We Must Admire a $5 Trillion Outfit

November 5, 2025

The title of this piece refers to the old adage about not putting all of your eggs in one basket. It’s a popular phrase used by investors and translates to: diversify, diversify, diversify! Nvidia really needs to take that to heart, because despite record-breaking sales in the last quarter, its top customer base is limited to three buyers. Tom’s Hardware reports, “More Than 50% Of Nvidia’s Data Center Revenue Comes From Three Customers — $21.9 Billion In Sales Recorded From The Unnamed Companies.”

Business publication Sherwood reported that 53% of Nvidia’s data center sales come from three anonymous customers and total $21.9 billion. Here’s where the old adage about eggs enters:

“This might not sound like a problem — after all, why complain if three different entities are handing you piles and piles of money — but concentrating the majority of your sales to just a handful of clients could cause a sudden, unexpected issue. For example, the company’s entire second-quarter revenue is around $46 billion, which means that Customer A makes up more than 20% of its sales. If this company were to suddenly vanish (say it decided to build its own chips, go with AMD, or a scandal forces it to cease operations), then it would have a massive impact on Nvidia’s cash flow and operations.”
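The arithmetic behind that warning is easy to check. Here is a minimal back-of-the-envelope sketch; only the combined 53% / $21.9 billion figures and the "more than 20%" claim come from the quote, and the rounding is ours:

```python
# Back-of-the-envelope check of the concentration figures quoted above.
# Only the combined $21.9B and "more than 20%" numbers are disclosed;
# everything else here is simple arithmetic on those figures.

total_q2_revenue = 46.0   # Nvidia's approximate Q2 revenue, $ billions
top_three_sales = 21.9    # sales to the three unnamed customers, $ billions

share_of_total = top_three_sales / total_q2_revenue
print(f"Top three as a share of TOTAL revenue: {share_of_total:.1%}")
# -> about 48%; the quoted 53% applies to the data center segment,
#    which is smaller than total company revenue.

customer_a_floor = 0.20 * total_q2_revenue
print(f"Customer A at 'more than 20%' implies at least ${customer_a_floor:.1f}B")
# -> roughly $9B+ from a single buyer; hence the single-basket worry.
```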

The article then hypothesizes that the mysterious customers include Elon Musk’s xAI, OpenAI, Oracle, and Meta. The company did lose sales in China because of President Trump’s actions, so the customers aren’t from Asia. Nvidia needs to diversify its client portfolio if it doesn’t want to sink when and if these customers head to greener pastures. With a $5 trillion value, how many green pastures await Nvidia? Just think of them and they will manifest themselves. That works.

Whitney Grace, November 5, 2025

Transformers May Face a Choice: The Junk Pile or Pizza Hut

November 4, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

I read a marketing collateral-type of write up in Venture Beat. The puffy delight carries this title “The Beginning of the End of the Transformer Era? Neuro-Symbolic AI Startup AUI Announces New Funding at $750M Valuation.” The transformer is a Googley thing. Obviously, with many users of Google’s Googley AI, Google perceives itself as the Big Dog in smart software. Sorry, Sam AI-Man, Google really, really believes it is the leader; otherwise, why would Apple turn to Google for help with its AI challenges? Ah, you don’t know? Too bad, Sam, I feel for you.


Thanks, MidJourney. Good enough.

This write up makes clear that someone has $750 million reasons to fund a different approach to smart software. Contrarian brilliance or dumb move? I don’t know. The write up says:

AUI is the company behind Apollo-1, a new foundation model built for task-oriented dialog, which it describes as the "economic half" of conversational AI — distinct from the open-ended dialog handled by LLMs like ChatGPT and Gemini. The firm argues that existing LLMs lack the determinism, policy enforcement, and operational certainty required by enterprises, especially in regulated sectors.

But there’s more:

Apollo-1’s core innovation is its neuro-symbolic architecture, which separates linguistic fluency from task reasoning. Instead of using the most common technology underpinning most LLMs and conversational AI systems today — the vaunted transformer architecture described in the seminal 2017 Google paper "Attention Is All You Need" — AUI’s system integrates two layers:

  • Neural modules, powered by LLMs, handle perception: encoding user inputs and generating natural language responses.

  • A symbolic reasoning engine, developed over several years, interprets structured task elements such as intents, entities, and parameters. This symbolic state engine determines the appropriate next actions using deterministic logic.

This hybrid architecture allows Apollo-1 to maintain state continuity, enforce organizational policies, and reliably trigger tool or API calls — capabilities that transformer-only agents lack.
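AUI has not published Apollo-1’s internals, so the following is a minimal sketch of the general neuro-symbolic pattern the quote describes, with every name invented for illustration: a stubbed neural layer converts text into structured intents, and a deterministic symbolic engine owns the task state and picks the next action.

```python
# Toy neuro-symbolic dialog loop (hypothetical; not AUI's actual code).
# Neural module (stubbed): free text -> structured intent.
# Symbolic engine: deterministic logic over explicit task state.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaskState:
    intent: Optional[str] = None
    params: dict = field(default_factory=dict)

def neural_parse(utterance: str) -> dict:
    """Stand-in for the LLM 'perception' layer."""
    if "refund" in utterance.lower():
        return {"intent": "refund"}  # a real module would also extract entities
    return {"intent": "unknown"}

def next_action(state: TaskState) -> str:
    """Deterministic state engine: identical state, identical action."""
    if state.intent == "refund":
        if "order_id" not in state.params:
            return "ask_for_order_id"      # fill required slots first
        if not state.params.get("policy_ok"):
            return "check_refund_policy"   # policy enforcement is explicit
        return "call_refund_api"           # reliable tool/API trigger
    return "clarify_request"

state = TaskState(intent=neural_parse("I want a refund")["intent"])
print(next_action(state))  # -> ask_for_order_id, every single time
```

The fluent-but-probabilistic part never decides what happens next; the state machine does. That is the determinism pitch in a nutshell.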

What’s important is that interest in an alternative to the Googley approach is growing. The idea is that maybe — just maybe — Google’s transformer is burning cash and not getting much smarter with each billion-dollar campfire. Consequently, individuals with a different approach warrant a closer look.

The marketing-oriented write up ends this way:

While LLMs have advanced general-purpose dialog and creativity, they remain probabilistic — a barrier to enterprise deployment in finance, healthcare, and customer service. Apollo-1 targets this gap by offering a system where policy adherence and deterministic task completion are first-class design goals.

Researchers around the world are working overtime to find a way to deliver smart software without the Mad Magazine economics of power, CPUs, and litigation associated with the Googley approach. When a practical breakthrough takes place, outfits mired in Googley methods may find themselves working at jobs their mothers did not envision for their progeny.

Stephen E Arnold, November 4, 2025

Medical Fraud Meets AI. DRG Codes Meet AI. Enjoy

November 4, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I have heard that some large medical outfits make use of DRG “chains” or “coding sequences.” I picked up this information when my team and I worked on what is called a “subrogation project.” I am not going to explain how subrogation works or how its mechanisms operate. These are back office or invisible services that accompany something that seems straightforward. One doesn’t just buy stock from a financial advisor; there is plumbing, and there are plumbing companies that do this work. The hospital sends you a bill; there is plumbing, and there are plumbing companies providing systems and services. To sum up, a hospital bill is often large, confusing, opaque, and similar to a secret language. Mistakes happen, of course. But often inflated medical bills do more to benefit the institution and its professionals than the person with the bill in his or her hand. (If you run into me at an online fraud conference, I will explain how the “chain” of codes works. It is slick and not well understood by many of the professionals who care for the patient. It is a toss up whether Miami or Nashville is the Florence of medical fancy dancing. I won’t argue for either city, but I would add that Houston and LA should be in the running for the most creative center of certain activities.)


“Grieving Family Uses AI Chatbot to Cut Hospital Bill from $195,000 to $33,000 — Family Says Claude Highlighted Duplicative Charges, Improper Coding, and Other Violations” contains some information that will be [a] good news for medical fraud investigators and [b] less welcome news for some health care providers and individual medical specialists in their practices. The person with the big bill had to joust with the provider to get a detailed, line-item breakdown of certain charges. Once that anti-social institution provided the detail, it was time for AI.

The write up says:

Claude [Anthropic, the AI outfit hooked up with Google] proved to be a dogged, forensic ally. The biggest catch was that it uncovered duplications in billing. It turns out that the hospital had billed for both a master procedure and all its components. That shaved off, in principle, around $100,000 in charges that would have been rejected by Medicare. “So the hospital had billed us for the master procedure and then again for every component of it,” wrote an exasperated nthmonkey. Furthermore, Claude unpicked the hospital’s improper use of inpatient vs emergency codes. Another big catch was an issue where ventilator services are billed on the same day as an emergency admission, a practice that would be considered a regulatory violation in some circumstances.

Claude, the smart software, clawed through the data, identified certain items that required closer inspection, and helped its human user get the health care provider to adjust the bill.
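The article does not say how Claude structured its analysis, but the headline catch (a master procedure billed alongside each of its own components) is mechanical to flag once the line items are in hand. A minimal sketch with invented codes and amounts:

```python
# Hypothetical sketch of the duplicate-billing check the article describes.
# Codes, amounts, and the master -> components mapping are invented.

line_items = [
    {"code": "MASTER-SURG-01", "amount": 95_000},
    {"code": "COMP-ANESTH",    "amount": 22_000},
    {"code": "COMP-OR-TIME",   "amount": 51_000},
    {"code": "COMP-RECOVERY",  "amount": 27_000},
]

# A bundled "master" procedure already includes these component charges.
bundles = {"MASTER-SURG-01": {"COMP-ANESTH", "COMP-OR-TIME", "COMP-RECOVERY"}}

billed = {item["code"] for item in line_items}
for master, components in bundles.items():
    double_billed = components & billed if master in billed else set()
    if double_billed:
        overcharge = sum(i["amount"] for i in line_items if i["code"] in double_billed)
        print(f"{master} billed with its own components {sorted(double_billed)}: "
              f"${overcharge:,} flagged for review")
```

Nothing exotic is happening here. The hard part, as the family discovered, was prying a line-item bill out of the hospital in the first place.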

Why did the hospital make billing errors? Was it [a] intentional fraud programmed into the medical billing system; [b] an intentional chain of DRG codes tuned to bill as many items, actions, and services as possible within reason and applicable rules; or [c] a computer error? If you picked item c, you are correct. The write up says:

Once a satisfactory level of transparency was achieved (the hospital blamed ‘upgraded computers’), Claude AI stepped in and analyzed the standard charging codes that had been revealed.

Absolutely blame the problem on the technology people. Who issued the instructions to the technology people? Innocent MBAs and financial whiz kids who want to maximize their returns are not part of this story. Should they be? Of course not. Computer-related topics are for other people.

Stephen E Arnold, November 4, 2025

Google Is Really Cute: Push Your Content into the Jaws of Googzilla

November 4, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Google has a new, helpful, clever, and cute service just for everyone with a business Web site. “Google Labs’ Free New Experiment Creates AI-Generated Ads for Your Small Business” lays out the basics of Pomelli. (I think this word means knobs or handles.)


A Googley business process designed to extract money and data from certain customers. Thanks, Venice.ai. Good enough.

The cited article states:

Pomelli uses AI to create campaigns that are unique to your business; all you need to do is upload your business website to begin. Google says Pomelli uses your business URL to create a “Business DNA” that analyzes your website images to identify brand identity. The Business DNA profile includes tone of voice, color palettes, fonts, and pictures. Pomelli can also generate logos, taglines, and brand values.

Just imagine Google processing your Web site, its content, images, links, and entities like email addresses, phone numbers, etc. Then using its smart software to create an advertising campaign, ads, and suggestions for the amount of money you should / will / must spend via Google’s own advertising system. What a cute idea!
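Google has not documented what goes into the “Business DNA,” but the ingredients the quote lists (colors, fonts, copy) sit in the page source of any site. A rough sketch of that kind of extraction, assuming the requests and BeautifulSoup libraries and standing in for whatever Google actually does:

```python
# Rough sketch of "Business DNA"-style extraction from a site's own markup.
# This guesses at the general technique; it is not Google's Pomelli pipeline.

import re

import requests
from bs4 import BeautifulSoup

def business_dna(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Color palette: hex codes appearing in inline styles and <style> blocks
    colors = set(re.findall(r"#[0-9a-fA-F]{6}\b", html))

    # Fonts: font-family declarations in any embedded CSS
    fonts = {f.strip() for f in re.findall(r"font-family:\s*([^;}]+)", html)}

    # "Tone of voice" raw material: the copy an LLM would score for style
    headlines = [h.get_text(strip=True) for h in soup.find_all(["h1", "h2"])]

    return {"colors": sorted(colors), "fonts": sorted(fonts), "headlines": headlines}

print(business_dna("https://example.com"))
```

The interesting part is not the scraping; it is that the business owner volunteers the upload.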

The write up points out:

Google says this feature eliminates the laborious process of brainstorming unique ad campaigns. If users have their own campaign ideas, they can enter them into Pomelli as a prompt. Finally, Pomelli will generate marketing assets for social media, websites, and advertisements. These assets can be edited, allowing users to change images, headers, fonts, color palettes, descriptions, and create a call to action.

How will those tireless search engine optimization consultants and Google certified ad reselling outfits react to this new and still “experimental” service? I am confident that [a] some will rationalize the wonderfulness of this service and sell advisory services about the automated replacement for marketing and creative agencies; [b] some will not understand that it is time to think about a substantive side gig because Google is automating basic business functions and plugging into the customer’s wallet with no pesky intermediary to shave off some bucks; and [c] others will watch as their own sales efforts become less and less productive and then go out of business because adaptation is hard.

Is Google’s idea original? No, Adobe has something called AI Found, according to the write up. Google is not into innovation. Need I remind you that Google advertising has some roots in the Yahoo garden in bins marked GoTo.com and Overture.com. Also, there is a bank account with some Google money from a settlement about certain intellectual property rights that Yahoo believed Google used as a source of business process inspiration.

As Google moves into automating hooks, it accrues several significant benefits, which stand out in Google’s push to help its users:

  1. Crawling costs may be reduced. The users will push content to Google. This may or may not be a significant factor, but the user who updates his or her site provides Google with timely information.
  2. The uploaded or pushed content can be piped into the Google AI system and used to inform the advertising and marketing confection Pomelli. Training data and ad prospects in one go.
  3. The automation of a core business function allows Google to penetrate more deeply into a business. What if that business uses Microsoft products? It strikes me that the Googlers will say, “Hey, switch to Google and you get advertising bonus bucks that can be used to reduce your overall costs.”
  4. The advertising process is a knob that Google can use to pull the user and his cash directly into the Google business process automation scheme.

As I said, cute and also clever. We love you, Google. Keep on being Googley. Pull those users’ knobs, okay.

Stephen E Arnold, November 4, 2025

News Flash: Software Has a Quality Problem. Insight!

November 3, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read “The Great Software Quality Collapse: How We Normalized Catastrophe.” What’s interesting about this essay is that the author cares about doing good work.

The write up states:

We’ve normalized software catastrophes to the point where a Calculator leaking 32GB of RAM barely makes the news. This isn’t about AI. The quality crisis started years before ChatGPT existed. AI just weaponized existing incompetence.


Marketing is more important than software quality. Right, rube? Thanks, Venice.ai. Good enough.

The bound phrase “weaponized existing incompetence” points to an issue in a number of knowledge-value disciplines. The essay identifies some issues the author has tracked; for example:

  • Memory consumption in Google Chrome
  • Windows 11 updates breaking the start menu and other things (printers, mice, keyboards, etc.)
  • Security problems such as the long-forgotten CrowdStrike misstep that cost customers about $10 billion.

But the list of indifferent or incompetent coding leads to one stop on the information superhighway: Smart software. The essay notes:

But the real pattern is more disturbing. Our research found:

  • AI-generated code contains 322% more security vulnerabilities
  • 45% of all AI-generated code has exploitable flaws
  • Junior developers using AI cause damage 4x faster than without it
  • 70% of hiring managers trust AI output more than junior developer code

We’ve created a perfect storm: tools that amplify incompetence, used by developers who can’t evaluate the output, reviewed by managers who trust the machine more than their people.

I quite like the bound phrase “amplify incompetence.”

The essay makes clear that the wizards of Big Tech AI prefer to spend money on plumbing (infrastructure), not software quality. The write up points out:

When you need $364 billion in hardware to run software that should work on existing machines, you’re not scaling—you’re compensating for fundamental engineering failures.

The essay concludes that Big Tech AI, as well as other software development firms, should shift focus.

Several observations:

  1. Good enough is now a standard of excellence.
  2. “Go fast” is better than “good work.”
  3. The appearance of something is more important than its substance.

Net net: It’s TikTok, YouTube, and a carnival midway bundled into a new type of work environment.

Stephen E Arnold, November 3, 2025

Don Quixote Takes on AI in Research Integrity Battle. A La Vista!

November 3, 2025

Scientific publisher Frontiers asserts its new AI platform is the key to making the most of valuable research data. ScienceDaily crows, “90% of Science is Lost. This New AI Just Found It.” Wow, 90%. Now who is hallucinating? Turns out that percentage only applies if one is looking at new research submitted within Frontiers’ new system. Cutting out past and outside research really narrows the perspective. The press release explains:

“Out of every 100 datasets produced, about 80 stay within the lab, 20 are shared but seldom reused, fewer than two meet FAIR standards, and only one typically leads to new findings. … To change this, [Frontiers’ FAIR² Data Management Service] is designed to make data both reusable and properly credited by combining all essential steps — curation, compliance checks, AI-ready formatting, peer review, an interactive portal, certification, and permanent hosting — into one seamless process. The goal is to ensure that today’s research investments translate into faster advances in health, sustainability, and technology. FAIR² builds on the FAIR principles (Findable, Accessible, Interoperable and Reusable) with an expanded open framework that guarantees every dataset is AI-compatible and ethically reusable by both humans and machines.”

That does sound like quite the time- and hassle-saver. And we cannot argue with making it easier to enact the FAIR principles. But the system will only achieve its lofty goals with wide buy-in from the academic community. Will Frontiers get it? The write-up describes what participating researchers can expect:

“Researchers who submit their data receive four integrated outputs: a certified Data Package, a peer-reviewed and citable Data Article, an Interactive Data Portal featuring visualizations and AI chat, and a FAIR² Certificate. Each element includes quality controls and clear summaries that make the data easier to understand for general users and more compatible across research disciplines.”
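The FAIR principles are more concrete than the acronym suggests: each letter maps to specific metadata a dataset must carry. A minimal illustration follows; this is a generic FAIR-style record, not Frontiers’ actual FAIR² schema.

```python
# Generic FAIR-style metadata record (illustrative; not the FAIR2 format).
dataset_record = {
    # Findable: persistent identifier plus rich, indexed metadata
    "doi": "10.0000/example.dataset.1",
    "title": "Example assay measurements",
    "keywords": ["assay", "replication"],
    # Accessible: retrievable over a standard, open protocol
    "access_url": "https://repository.example.org/datasets/1",
    "access_protocol": "HTTPS",
    # Interoperable: community formats and shared vocabularies
    "format": "text/csv",
    "vocabulary": "schema.org/Dataset",
    # Reusable: clear license, provenance, and credit
    "license": "CC-BY-4.0",
    "creator": "A. Researcher",
    "provenance": "Collected 2024-06; processed with pipeline v2.1",
}
```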

The publisher asserts its system ensures data preservation, validation, and accessibility while giving researchers proper recognition. The press release describes four example datasets created with the system as well as glowing reviews from select researchers. See the post for those details.

Cynthia Murrell, November 3, 2025

Hollywood Has to Learn to Love AI. You Too, Mr. Beast

October 31, 2025

green-dino_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Russia’s leadership is good at talking, stalling, and doing what it wants. Is OpenAI copying this tactic? “OpenAI Cracks Down on Sora 2 Deepfakes after Pressure from Bryan Cranston, SAG-AFTRA” reports:

OpenAI announced on Monday [October 20, 2025] in a joint statement that it will be working with Bryan Cranston, SAG-AFTRA, and other actor unions to protect against deepfakes on its artificial intelligence video creation app Sora.

Talking, stalling or “negotiating,” and then doing what it wants may be within the scope of this sentence.

The write up adds via a quote from OpenAI leadership:

“OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness,” Altman said in a statement. “We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers.”

This sounds good. I am not sure it will impress teens as much as Mr. Altman’s posture on erotic chats, but the statement sounds good. If I knew Russian, it would be interesting to translate the statement. Then one could compare the statement with some of those emitted by the Kremlin.

image

Producing a big budget commercial film or a Mr. Beast-type video will look very different in 18 to 24 months. Thanks, Venice.ai. Good enough.

Several observations:

  1. Mr. Altman has to generate cash or the appearance of cash. At some point investors will become pushy. Pushy investors can be problematic.
  2. OpenAI’s approach to model behavior does not give me confidence that the company can figure out how to engineer guard rails and then enforce them. Young men and women fiddling with OpenAI can be quite ingenious.
  3. The BBC ran a news program with the news reader as a deep fake. What does this suggest about a Hollywood producer facing financial pressure working out a deal with an AI entrepreneur facing even greater financial pressure? I think it means that humanoids are expendable, first a little bit and then for the entire digital production. Gamification will be too delicious.

Net net: I think I know how this interaction will play out. Sam Altman, the big-name stars, and the AI outfits know. The lawyers know. Who doesn’t know? Frankly, everyone knows how digital disintermediation works. Just ask a recent college grad with a degree in art history.

Stephen E Arnold, October 31, 2025

Will AMD Deal Make OpenAI Less Deal Crazed? Not a Chance

October 31, 2025

Why does this deal sound a bit like moving money from dad’s coin jar to mom’s spare change box? AP News reports, “OpenAI and Chipmaker AMD Sign Chip Supply Partnership for AI Infrastructure.” We learn AMD will supply OpenAI with hardware so cutting edge it won’t even hit the market until next year. The agreement will also allow OpenAI to buy up about 10% of AMD’s common stock. The day the partnership was announced, AMD’s shares went up almost 24%, while rival chipmaker Nvidia’s went down 1%. The write-up observes:

“The deal is a boost for Santa Clara, Calif.-based AMD, which has been left behind by rival Nvidia. But it also hints at OpenAI’s desire to diversify its supply chain away from Nvidia’s dominance. The AI boom has fueled demand for Nvidia’s graphics processing chips, sending its shares soaring and making it the world’s most valuable company. Last month, OpenAI and Nvidia announced a $100 billion partnership that will add at least 10 gigawatts of data center computing power. OpenAI and its partners have already installed hundreds of Nvidia’s GB200, a tall computing rack that contains dozens of specialized AI chips within it, at the flagship Stargate data center campus under construction in Abilene, Texas. Barclays analysts said in a note to investors Monday that OpenAI’s AMD deal is less about taking share away from Nvidia than it is a sign of how much computing is needed to meet AI demand.”

No doubt. We are sure OpenAI will buy up all the high-powered graphics chips it can get. But after it and other AI firms acquire their chips, will there be any left for regular consumers? If so, expect their costs to remain sky-high. Just one more resource AI firms are devouring with little to no regard for the impact on others.

Cynthia Murrell, October 31, 2025

AI Will Kill, and People Will Grow Accustomed to That … Smile

October 30, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I spotted a story in SFGate, which I think was or is part of a dead tree newspaper. What struck me was the photograph (allegedly not a deep fake) of two people looking not just happy; I sensed a bit of self-satisfaction and confidence. Regardless, both people grace “Society Will Accept a Death Caused by a Robotaxi, Waymo Co-CEO Says.” Death, as far back as I can recall as an 81-year-old dinobaby, has never made me happy, but I just accepted the way life works. Part of me says that my vibrating waves will continue. I think Blaise Pascal suggested that one should believe in God because what’s the downside. Go, Blaise, a guy who did not get to experience an accident involving a self-driving smart vehicle.


A traffic jam in a major metro area. The cause? A self-driving smart vehicle struck a school bus. But everyone is accustomed to this type of trivial problem. Thanks, MidJourney. Good enough, like some high-tech outfits’ smart software.

But Waymo is a Google confection dating from 2010, if my memory is on the money. Google is a reasonably big company. It brokers, sells, and creates a market for its online advertising business. The cash spun from that revolving door is used to fund great ideas and moon shots. Messrs. Brin, Page, and assorted wizards had some time to kill as they sat in their automobiles creeping up and down Highway 101. A self-driving car that would allow a very intelligent, multi-tasking driver to do something more productive than become a semi-sentient meat blob sparked an idea: rig a car to creep along Highway 101. Cool. That insight spawned what is now known as Waymo.

An estimable Google Waymo expert found himself involved in litigation related to Google’s intellectual property. I had ignored Waymo until Anthony Levandowski founded a company, sold it to Uber, and then ended up in a legal matter that lasted from 2017 to 2019. Publicity, I have heard, whether positive or negative, is good. I knew about Waymo: a Google project, intellectual property, and litigation. Way to go, Waymo.

For me, Waymo appears in some social media posts (allegedly actual factual) when Waymo vehicles get trapped in a dead end in Cow Town. Sometimes the Waymos don’t get out of the way of traffic barriers and sit purring and beeping. I have heard that some residents of San Francisco have [a] kicked Waymos, [b] sprayed graffiti on them, and/or [c] put traffic cones in certain roads to befuddle the smart Google software-powered vehicles. From a distance, these scenes look a bit like something from a Mad Max motion picture.

My personal view is that I would never stand in front of a rolling Waymo. I know that [a] Google search results are not particularly useful, [b] Google’s AI outputs crazy information like using glue to keep cheese on pizza, and [c] Waymos have been involved in traffic incidents, which causes me to stay away from them.

The cited article says that the Googler, in response to a question about a hypothetical Waymo killing a person, said:

“I think that society will,” Mawakana answered, slowly, before positioning the question as an industry wide issue. “I think the challenge for us is making sure that society has a high enough bar on safety that companies are held to.” She said that companies should be transparent about their records by publishing data about how many crashes they’re involved in, and she pointed to the “hub” of safety information on Waymo’s website. Self-driving cars will dramatically reduce crashes, Mawakana said, but not by 100%: “We have to be in this open and honest dialogue about the fact that we know it’s not perfection.” [Emphasis added by Beyond Search]

My reactions to this allegedly true and accurate statement from a Googler are:

  1. I am not confident that Google can be “transparent.” Google, according to one US court is a monopoly. Google has been fined by the European Union for saying one thing and doing another. The only reason I know about these court decisions is because legal processes released information. Google did not provide the information as part of its commitment to transparency.
  2. Waymos create problems because the Google smart software cannot handle the demands of driving in the real world. The software is good enough, but not good enough to figure out dead ends, actions by human drivers, and potentially dangerous situations. I am aware of fender benders and collisions with fixed objects that have surfaced in Waymo’s 15-year history.
  3. Self-driving cars, specifically Waymos, will injure or kill people. But Waymo cars are safe. So some level of killing humans is okay with Google, regulators, and society in general. What about the family of the person who is killed by good enough Google software? The answer: The lawyers will blame something other than Google. Then they will fight in court because Google has oodles of cash from its estimable online advertising business.

The cited article quotes the Waymo Googler as saying:

“If you are not being transparent, then it is my view that you are not doing what is necessary in order to actually earn the right to make the roads safer,” Mawakana said. [Emphasis added by Beyond Search]

Of course, I believe everything Google says. Why not believe that Waymos will make self driving vehicle caused deaths acceptable? Why not believe Google is transparent? Why not believe that Google will make roads safer? Why not?

But I like the idea that people will accept an AI vehicle killing people. Stuff happens, right?

Stephen E Arnold, October 30, 2025

Is It Unfair to Blame AI for Layoffs? Sure

October 30, 2025

When AI exploded onto the scene, we were promised the tech would help workers, not replace them. Then that story began to shift, with companies revealing they do plan to slash expenses by substituting software for humans. But some are skeptical of this narrative, and for good reason. Techspot asks, “Is AI Really Behind Layoffs, or Just a Convenient Excuse for Companies?” Reporter Rob Thubron writes:

“Several large organizations, including Accenture, Salesforce, Klarna, Microsoft, and Duolingo, have said they are reducing staff numbers as AI helps streamline operations, reduce costs, and increase efficiency. But Fabian Stephany, Assistant Professor of AI & Work at the Oxford Internet Institute, told CNBC that companies are ‘scapegoating’ the technology.”

Stephany notes many companies are still trying to expel the extra humans they hired during the pandemic. Apparently, return-to-office mandates have not driven out as many workers as hoped. The write-up continues:

“Blaming AI for layoffs also has its advantages. Multibillion- and trillion-dollar companies can not only push the narrative that the changes must be made in order to stay competitive, but doing so also makes them appear more cutting-edge, tech-savvy, and efficient in the eyes of potential investors. Interestingly, a study by the Yale Budget Lab a few weeks ago showed there is little evidence that AI has displaced workers more severely than earlier innovations such as computers or the internet. Meanwhile, Goldman Sachs Research has estimated that AI could ultimately displace 6 to 7 percent of the US workforce, though it concluded the effect would likely be temporary.”

The write-up includes a graph Anthropic made in 2023 that compares gaps between actual and expected AI usage by occupation. A few fields overshot the expectation, most notably computer and mathematical jobs. Most, though, fell short. So are workers really losing their jobs to AI? Or is that just a high-tech scapegoat?

Cynthia Murrell, October 30, 2025
