How Frisky Will AI Become? Users Like Frisky… a Lot
November 7, 2025
OpenAI promised to create technology that would benefit humanity, much like Google and other Big Tech companies. We know how that has gone. Much to the worry of its team, OpenAI released a TikTok-like app powered by AI. What could go wrong? Well, we’re still waiting to see the fallout, but TechCrunch explores the possibilities in the story: “OpenAI Staff Grapples With The Company’s Social Media Push.”
OpenAI is headed into social media because that is where the money is. The push comes from OpenAI’s bigwigs. The new TikTok-like app is called Sora 2, and it has an AI-based feed. Past and present employees are concerned about how Sora 2 will benefit humanity. They worry that Sora 2 will serve consumers more AI slop, the equivalent of digital brain junk food, instead of benefitting humanity. Even OpenAI’s CEO Sam Altman is astounded by the amount of money allotted to AI social media projects:
“‘We do mostly need the capital for build [sic] AI that can do science, and for sure we are focused on AGI with almost all of our research effort,’ said Altman. ‘It is also nice to show people cool new tech/products along the way, make them smile, and hopefully make some money given all that compute need.’ ‘When we launched chatgpt there was a lot of “who needs this and where is AGI,”’ Altman continued. ‘[R]eality is nuanced when it comes to optimal trajectories for a company.’”
Here’s another quote about the negative effects of AI:
“One of the big mistakes of the social media era was [that] the feed algorithms had a bunch of unintended, negative consequences on society as a whole, and maybe even individual users. Although they were doing the thing that a user wanted — or someone thought users wanted — in the moment, which is [to] get them to, like, keep spending time on the site.”
Let’s start taking bets on how long it will take the bad actors to transform Sora 2 into a quite frisky service.
Whitney Grace, November 7, 2025
Copilot in Excel: Brenda Has Another Problem
November 6, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
Simon Willison posted an interesting snippet from a person whom I don’t know. The handle is @belligerentbarbies, a member of TikTok. You can find the post “Brenda” on Simon Willison’s Weblog. The main idea in the write up is that a person in accounting or finance assembles an Excel worksheet. In many large outfits, the worksheets are templates or set up to allow the enthusiastic MBA to plug in a few numbers. Once the numbers are “in,” the bright overachiever hits Shift+F9 to recalculate the single worksheet. If it looks okay, the MBA mashes F9 and updates the linked spreadsheets. Bingo! A financial services firm has produced the numbers needed to slap into a public or private document. But, and here’s the best part…

Thanks, Venice.ai. Good enough.
Before the document leaves the office, a senior professional who has not used Excel checks the spreadsheet. Experience dictates to look at certain important cells of data. If those pass the smell test, then the private document is moved to the next stage of its life. It goes into production so that the high net worth individual, the clued in business reporter, the big customers, and people in the CEO’s bridge group get the document.
Because those “reports” can move a stock up or down or provide useful information about a deal that is not put into a number context, most outfits protect Excel spreadsheets. Heck, even the fill-in-the-blank templates are big time secrets. Each of the investment firms for which I worked over the years followed the same process. Each used its own custom-tailored, carefully structured set of formulas to produce the quite significant reports, opinions, and marketing documents.
Brenda knows Excel. Most Big Dogs know some Excel, but as these corporate animals fight their way to Carpetland, those Excel skills atrophy. Now Simon Willison’s post enters and references Copilot. The post is insightful because it highlights a process gap. Specifically, if Copilot is involved in an Excel spreadsheet, Copilot might, just might in this hypothetical, make a change. The Big Dog in Carpetland does not catch the change. The Big Dog just sniffs a few spots in the forest or jungle of numbers.
Before Copilot, Brenda or a similar professional was involved. Copilot may make it possible to ignore Brenda and push the report out. If the financial whales make money, life is good. But what happens if the Copilot-tweaked worksheet is hallucinating? I am not talking about a few disco biscuits but about mind-warping errors whipped up because AI is essentially operating at “good enough” levels of excellence.
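For what it is worth, a shop worried about silent formula edits could run a crude “Brenda check” before the report ships: diff the formulas in the filled-in workbook against the approved template. The sketch below is a minimal, hypothetical illustration; the cell addresses and formulas are invented, and a real workflow would pull formulas out of the .xlsx file (for example, with openpyxl) rather than hand-built dictionaries.

```python
# Hypothetical "Brenda check": compare the formulas in a filled-in
# worksheet against the approved template before the report leaves
# the office. Cells and formulas here are invented for illustration.

def formula_diff(template, report):
    """Return (cell, template_formula, report_formula) for every cell
    whose formula differs from the template."""
    changed = []
    for cell in sorted(set(template) | set(report)):
        t_f = template.get(cell)
        r_f = report.get(cell)
        if t_f != r_f:
            changed.append((cell, t_f, r_f))
    return changed

# The template says C10 sums the whole revenue column...
template = {"C10": "=SUM(C2:C9)", "D10": "=C10*0.15"}
# ...but the "helped" report quietly drops a row from the range.
report   = {"C10": "=SUM(C2:C8)", "D10": "=C10*0.15"}

print(formula_diff(template, report))
# [('C10', '=SUM(C2:C9)', '=SUM(C2:C8)')]
```

The point is not the ten lines of Python; it is that the check is mechanical and cheap, which makes skipping it (or skipping Brenda) a choice, not a necessity.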
Bad things transpire. As interesting as this problem is to contemplate, there’s another angle that the Simon Willison post did not address. What if Copilot is phoning home? The idea is that user interaction with a cloud-based service is designed to process data and add those data to its training process. The AI wizards have some jargon for this “learn as you go” approach.
The issue is, however, what happens if that proprietary spreadsheet or the “numbers” about a particular company find their way into a competitor’s smart output? What if financial firm A does not know this “process” has compromised the confidentiality of a worksheet? What if financial firm B spots the information and uses it to advantage firm B?
Where’s Brenda in this process? Who? She’s been RIFed. What about the Big Dog in Carpetland? That professional is clueless until someone spots the leak and the information ruins what was a calm day with no fires to fight. Now a burning Piper Cub is in the office. Not good, is it?
I know that Microsoft Copilot will be or is positioned as super secure. I know that hypotheticals are just that: Made up thought donuts.
But I think the potential for some knowledge leaking may exist. After all, Copilot, although marvelous, is not Brenda. Clueless leaders in Carpetland are not interested in fairy tales; they are interested in making money, reducing headcount, and enjoying days without a fierce fire ruining a perfectly good Louis XIV desk.
Net net: Copilot, how are you and Brenda communicating? What’s that? Brenda is not answering her company provided mobile. Wow. Bummer.
Stephen E Arnold, November 6, 2025
Iran and Crypto: A Short Cut Might Not Be Working
November 6, 2025
One factor about cryptocurrency mining (and AI) that is glossed over by news outlets is the amount of energy required to keep the servers running. In short, it’s a lot! The Cool Down reports how one Middle Eastern country is dealing with a cryptocurrency crisis: “Stunning Report Reveals Government-Linked Crypto Crisis: ‘Serious And Unimaginable’”.
What is very interesting (and not surprising) about the cryptocurrency mining is who is doing it: the Iranian government. Iran is dealing with an energy crisis, and its citizens are dismayed. Lakes are drying up, and there are abundant power outages. Iran is facing one of the worst droughts in its modern history.
Iran’s people have protested, but it’s like pushing a boulder uphill: no one is listening. Iran is home to a large saltwater lake, Lake Urmia, which has transformed into a marsh.
Here’s what one expert said:
“An Iranian engineer cited by The Observer alleged that cryptocurrency mining by the state is consuming up to 5% of electricity, contributing to water and power depletion. "We are in a serious and unimaginable crisis," Iran President Masoud Pezeshkian said as he urged action during a recent cabinet meeting.”
The Iranian government has temporarily closed offices and is rationing resources, but that likely won’t be enough to curb the power demanded by crypto mining.
Iran could demolish its authoritarian and fundamentalist religious government, invest in a mixed economy, liberate women, and invest in education and technology to prepare for a better future. That likely won’t happen.
Whitney Grace, November 6, 2025
We Must Admire a $5 Trillion Outfit
November 5, 2025
The title of this piece brings to mind the old adage about not putting all of your eggs in one basket. It’s a popular phrase used by investors and translates to: diversify, diversify, diversify! Nvidia really needs to take that to heart because, despite record-breaking sales in the last quarter, its top customer base is limited to three companies. Tom’s Hardware reports, “More Than 50% Of Nvidia’s Data Center Revenue Comes From Three Customers — $21.9 Billion In Sales Recorded From The Unnamed Companies.”
Business publication Sherwood reported that 53% of Nvidia’s sales came from three anonymous customers totaling $21.9 billion. Here’s where the old adage about eggs enters:
“This might not sound like a problem — after all, why complain if three different entities are handing you piles and piles of money — but concentrating the majority of your sales to just a handful of clients could cause a sudden, unexpected issue. For example, the company’s entire second-quarter revenue is around $46 billion, which means that Customer A makes up more than 20% of its sales. If this company were to suddenly vanish (say it decided to build its own chips, go with AMD, or a scandal forces it to cease operations), then it would have a massive impact on Nvidia’s cash flow and operations.”
The article then hypothesizes that the mysterious customers are Elon Musk’s xAI, OpenAI, Oracle, and Meta. The company did lose sales in China because of President Trump’s actions, so the customers aren’t from Asia. Nvidia needs to diversify its client portfolio if it doesn’t want to sink when and if these customers head to greener pastures. With a $5 trillion valuation, how many green pastures await Nvidia? Just think of them and they will manifest themselves. That works.
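A quick back-of-envelope check, using only the figures quoted above, shows why the percentages look different at first glance: the 53% in the headline applies to data center revenue, not to the roughly $46 billion total quarter. A small sketch of the arithmetic:

```python
# Back-of-envelope check of the concentration figures cited above.
# The split among the three customers is not disclosed, so only the
# totals reported in the article are used.
top_three = 21.9e9   # sales attributed to the three unnamed customers
total_q2  = 46e9     # approximate total second-quarter revenue

# Share of TOTAL revenue: noticeably below the headline 53%.
share_of_total = top_three / total_q2
print(f"{share_of_total:.0%} of total revenue")  # ~48%

# The 53% figure is a share of DATA CENTER revenue, which implies a
# smaller base than total revenue:
data_center = top_three / 0.53
print(f"implied data-center revenue: ${data_center / 1e9:.1f}B")  # ~$41.3B
```

Either way one slices it, roughly half of the money is coming from three wallets, which is the concentration risk the article is worried about.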
Whitney Grace, November 5, 2025
Medical Fraud Meets AI. DRG Codes Meet AI. Enjoy
November 4, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I have heard that some large medical outfits make use of DRG “chains” or “coding sequences.” I picked up this information when my team and I worked on what is called a “subrogation project.” I am not going to explain how subrogation works or how its mechanisms operate. These are back office or invisible services that accompany something that seems straightforward. One doesn’t buy stock directly from a financial advisor; there is plumbing, and plumbing companies do this work. The hospital sends you a bill; there is plumbing, and plumbing companies provide the systems and services. To sum up, a hospital bill is often large, confusing, opaque, and similar to a secret language. Mistakes happen, of course. But often inflated medical bills do more to benefit the institution and its professionals than the person with the bill in his or her hand. (If you run into me at an online fraud conference, I will explain how the “chain” of codes works. It is slick and not well understood by many of the professionals who care for the patient. It is a toss-up whether Miami or Nashville is the Florence of medical fancy dancing. I won’t argue for either city, but I would add that Houston and LA should be in the running for the most creative center of certain activities.)

“Grieving Family Uses AI Chatbot to Cut Hospital Bill from $195,000 to $33,000 — Family Says Claude Highlighted Duplicative Charges, Improper Coding, and Other Violations” contains some information that will be [a] good news for medical fraud investigators and [b] bad news for some health care providers and individual medical specialists in their practices. The person with the big bill had to joust with the provider to get a detailed, line-item breakdown of certain charges. Once that anti-social institution provided the detail, it was time for AI.
The write up says:
Claude [Anthropic, the AI outfit hooked up with Google] proved to be a dogged, forensic ally. The biggest catch was that it uncovered duplications in billing. It turns out that the hospital had billed for both a master procedure and all its components. That shaved off, in principle, around $100,000 in charges that would have been rejected by Medicare. “So the hospital had billed us for the master procedure and then again for every component of it,” wrote an exasperated nthmonkey. Furthermore, Claude unpicked the hospital’s improper use of inpatient vs emergency codes. Another big catch was an issue where ventilator services are billed on the same day as an emergency admission, a practice that would be considered a regulatory violation in some circumstances.
Claude, the smart software, clawed through the data. The smart software identified certain items that required closer inspection. The AI helped the human using Claude to get the health care provider to adjust the bill.
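To make the “master procedure plus every component” duplication concrete, here is a toy sketch of the kind of check an AI (or a patient human auditor) performs. The billing codes and the bundling map are invented for illustration; real DRG and CPT bundling rules are far more intricate than one dictionary.

```python
# Toy duplicate-billing check: flag component charges billed on top of
# a "master" procedure that already includes them. Codes are invented.

BUNDLES = {
    # master procedure code -> component codes it already covers
    "MASTER-100": {"COMP-101", "COMP-102", "COMP-103"},
}

def flag_double_billing(line_items):
    """line_items: list of (code, amount). Return suspect component
    charges that appear alongside their master procedure."""
    billed = {code for code, _ in line_items}
    suspects = []
    for master, components in BUNDLES.items():
        if master in billed:
            suspects += [(c, amt) for c, amt in line_items if c in components]
    return suspects

bill = [("MASTER-100", 42000.0), ("COMP-101", 9000.0), ("ROOM-7", 1200.0)]
print(flag_double_billing(bill))  # [('COMP-101', 9000.0)]
```

The hard part, of course, is not the loop; it is knowing the bundling rules, which is exactly the expertise the hospital assumed the patient did not have.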
Why did the hospital make billing errors? Was it [a] intentional fraud programmed into the medical billing system; [b] an intentional chain of DRG codes tuned to bill as many items, actions, and services as possible within reason and applicable rules; or [c] a computer error? If you picked item c, you are correct. The write up says:
Once a satisfactory level of transparency was achieved (the hospital blamed ‘upgraded computers’), Claude AI stepped in and analyzed the standard charging codes that had been revealed.
Absolutely blame the problem on the technology people. Who issued the instructions to the technology people? Innocent MBAs and financial whiz kids who want to maximize their returns are not part of this story. Should they be? Of course not. Computer-related topics are for other people.
Stephen E Arnold, November 4, 2025
Google Is Really Cute: Push Your Content into the Jaws of Googzilla
November 4, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Google has a new, helpful, clever, and cute service just for everyone with a business Web site. “Google Labs’ Free New Experiment Creates AI-Generated Ads for Your Small Business” lays out the basics of Pomelli. (I think this word means knobs or handles.)

A Googley business process designed to extract money and data from certain customers. Thanks, Venice.ai. Good enough.
The cited article states:
Pomelli uses AI to create campaigns that are unique to your business; all you need to do is upload your business website to begin. Google says Pomelli uses your business URL to create a “Business DNA” that analyzes your website images to identify brand identity. The Business DNA profile includes tone of voice, color palettes, fonts, and pictures. Pomelli can also generate logos, taglines, and brand values.
Just imagine Google processing your Web site, its content, images, links, and entities like email addresses, phone numbers, etc. Then using its smart software to create an advertising campaign, ads, and suggestions for the amount of money you should / will / must spend via Google’s own advertising system. What a cute idea!
The write up points out:
Google says this feature eliminates the laborious process of brainstorming unique ad campaigns. If users have their own campaign ideas, they can enter them into Pomelli as a prompt. Finally, Pomelli will generate marketing assets for social media, websites, and advertisements. These assets can be edited, allowing users to change images, headers, fonts, color palettes, descriptions, and create a call to action.
How will those tireless search engine optimization consultants and Google certified ad reselling outfits react to this new and still “experimental” service? I am confident that [a] some will rationalize the wonderfulness of this service and sell advisory services about the automated replacement for marketing and creative agencies; [b] some will not understand that it is time to think about a substantive side gig because Google is automating basic business functions and plugging into the customer’s wallet with no pesky intermediary to shave off some bucks; and [c] others will watch as their own sales efforts become less and less productive and then go out of business because adaptation is hard.
Is Google’s idea original? No, Adobe has something called AI Found, according to the write up. Google is not into innovation. Need I remind you that Google advertising has some roots in the Yahoo garden in bins marked GoTo.com and Overture.com. Also, there is a bank account with some Google money from a settlement about certain intellectual property rights that Yahoo believed Google used as a source of business process inspiration.
As Google moves into automating hooks, it accrues several significant benefits tucked into its push to help its users:
- Crawling costs may be reduced. The users will push content to Google. This may or may not be a significant factor, but a user who updates his or her site provides Google with timely information.
- The uploaded or pushed content can be piped into the Google AI system and used to inform the advertising and marketing confection Pomelli. Training data and ad prospects in one go.
- The automation of a core business function allows Google to penetrate more deeply into a business. What if that business uses Microsoft products? It strikes me that the Googlers will say, “Hey, switch to Google and you get advertising bonus bucks that can be used to reduce your overall costs.”
- The advertising process is a knob that Google can use to pull the user and his cash directly into the Google business process automation scheme.
As I said, cute and also clever. We love you, Google. Keep on being Googley. Pull those users’ knobs, okay.
Stephen E Arnold, November 4, 2025
Don Quixote Takes on AI in Research Integrity Battle. A La Vista!
November 3, 2025
Scientific publisher Frontiers asserts its new AI platform is the key to making the most of valuable research data. ScienceDaily crows, “90% of Science is Lost. This New AI Just Found It.” Wow, 90%. Now who is hallucinating? Turns out that percentage only applies if one is looking at new research submitted within Frontiers’ new system. Cutting out past and outside research really narrows the perspective. The press release explains:
“Out of every 100 datasets produced, about 80 stay within the lab, 20 are shared but seldom reused, fewer than two meet FAIR standards, and only one typically leads to new findings. … To change this, [Frontiers’ FAIR² Data Management Service] is designed to make data both reusable and properly credited by combining all essential steps — curation, compliance checks, AI-ready formatting, peer review, an interactive portal, certification, and permanent hosting — into one seamless process. The goal is to ensure that today’s research investments translate into faster advances in health, sustainability, and technology. FAIR² builds on the FAIR principles (Findable, Accessible, Interoperable and Reusable) with an expanded open framework that guarantees every dataset is AI-compatible and ethically reusable by both humans and machines.”
That does sound like quite the time- and hassle-saver. And we cannot argue with making it easier to enact the FAIR principles. But the system will only achieve its lofty goals with wide buy-in from the academic community. Will Frontiers get it? The write-up describes what participating researchers can expect:
“Researchers who submit their data receive four integrated outputs: a certified Data Package, a peer-reviewed and citable Data Article, an Interactive Data Portal featuring visualizations and AI chat, and a FAIR² Certificate. Each element includes quality controls and clear summaries that make the data easier to understand for general users and more compatible across research disciplines.”
The publisher asserts its system ensures data preservation, validation, and accessibility while giving researchers proper recognition. The press release describes four example datasets created with the system as well as glowing reviews from select researchers. See the post for those details.
Cynthia Murrell, November 3, 2025
Hollywood Has to Learn to Love AI. You Too, Mr. Beast
October 31, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Russia’s leadership is good at talking, stalling, and doing what it wants. Is OpenAI copying this tactic? “OpenAI Cracks Down on Sora 2 Deepfakes after Pressure from Bryan Cranston, SAG-AFTRA” reports:
OpenAI announced on Monday [October 20, 2025] in a joint statement that it will be working with Bryan Cranston, SAG-AFTRA, and other actor unions to protect against deepfakes on its artificial intelligence video creation app Sora.
Talking, stalling or “negotiating,” and then doing what it wants may be within the scope of this sentence.
The write up adds via a quote from OpenAI leadership:
“OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness,” Altman said in a statement. “We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers.”
This sounds good. I am not sure it will impress teens as much as Mr. Altman’s posture on erotic chats, but the statement sounds good. If I knew Russian, it would be interesting to translate the statement. Then one could compare the statement with some of those emitted by the Kremlin.

Producing a big budget commercial film or a Mr. Beast-type video will look very different in 18 to 24 months. Thanks, Venice.ai. Good enough.
Several observations:
- Mr. Altman has to generate cash or the appearance of cash. At some point investors will become pushy. Pushy investors can be problematic.
- OpenAI’s approach to model behavior does not give me confidence that the company can figure out how to engineer guard rails and then enforce them. Young men and women fiddling with OpenAI can be quite ingenious.
- The BBC ran a news program with the news reader as a deep fake. What does this suggest about a Hollywood producer facing financial pressure working out a deal with an AI entrepreneur facing even greater financial pressure? I think it means that humanoids are expendable, first a little bit and then for the entire digital production. Gamification will be too delicious.
Net net: I think I know how this interaction will play out. Sam Altman, the big name stars, and the AI outfits know. The lawyers know. Who doesn’t know? Frankly, everyone knows how digital disintermediation works. Just ask a recent college grad with a degree in art history.
Stephen E Arnold, October 31, 2025
Will AMD Deal Make OpenAI Less Deal Crazed? Not a Chance
October 31, 2025
Why does this deal sound a bit like moving money from dad’s coin jar to mom’s spare change box? AP News reports, “OpenAI and Chipmaker AMD Sign Chip Supply Partnership for AI Infrastructure.” We learn AMD will supply OpenAI with hardware so cutting edge it won’t even hit the market until next year. The agreement will also allow OpenAI to buy up about 10% of AMD’s common stock. The day the partnership was announced, AMD’s shares went up almost 24%, while rival chipmaker Nvidia’s went down 1%. The write-up observes:
“The deal is a boost for Santa Clara, Calif.-based AMD, which has been left behind by rival Nvidia. But it also hints at OpenAI’s desire to diversify its supply chain away from Nvidia’s dominance. The AI boom has fueled demand for Nvidia’s graphics processing chips, sending its shares soaring and making it the world’s most valuable company. Last month, OpenAI and Nvidia announced a $100 billion partnership that will add at least 10 gigawatts of data center computing power. OpenAI and its partners have already installed hundreds of Nvidia’s GB200, a tall computing rack that contains dozens of specialized AI chips within it, at the flagship Stargate data center campus under construction in Abilene, Texas. Barclays analysts said in a note to investors Monday that OpenAI’s AMD deal is less about taking share away from Nvidia than it is a sign of how much computing is needed to meet AI demand.”
No doubt. We are sure OpenAI will buy up all the high-powered graphics chips it can get. But after it and other AI firms acquire their chips, will there be any left for regular consumers? If so, expect their costs to remain sky high. Just one more resource AI firms are devouring with little to no regard for the impact on others.
Cynthia Murrell, October 31, 2025
AI Will Kill, and People Will Grow Accustomed to That … Smile
October 30, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I spotted a story in SFGate, which I think was or is part of a dead tree newspaper. What struck me was the photograph (allegedly not a deep fake) of two people looking not just happy; I sensed a bit of self-satisfaction and confidence. Regardless, both people grace “Society Will Accept a Death Caused by a Robotaxi, Waymo Co-CEO Says.” Death, as far back as I can recall as an 81-year-old dinobaby, has never made me happy, but I just accepted the way life works. Part of me says that my vibrating waves will continue. I think Blaise Pascal suggested that one should believe in God because what’s the downside? Go, Blaise, a guy who did not get to experience an accident involving a self-driving smart vehicle.

A traffic jam in a major metro area. The cause? A self-driving smart vehicle struck a school bus. But everyone is accustomed to this type of trivial problem. Thanks, MidJourney. Good enough like some high-tech outfits’ smart software.
But Waymo is a Google confection dating from 2010 if my memory is on the money. Google is a reasonably big company. It brokers, sells, and creates a market for its online advertising business. The cash spun from that revolving door is used to fund great ideas and moon shots. Messrs. Brin, Page, and assorted wizards had some time to kill as they sat in their automobiles creeping up and down Highway 101. The idea of a self-driving car that would allow a very intelligent, multi-tasking driver to do something more productive than become a semi-sentient meat blob sparked an idea: We can rig a car to creep along Highway 101. Cool. That insight spawned what is now known as Waymo.
An estimable Google Waymo expert found himself involved in litigation related to Google’s intellectual property. I had ignored Waymo until Anthony Levandowski founded a company, sold it to Uber, and then ended up in a legal matter that lasted from 2017 to 2019. Publicity, I have heard, whether positive or negative, is good. I knew about Waymo: a Google project, intellectual property, and litigation. Way to go, Waymo.
For me, Waymo appears in some social media posts (allegedly actual factual) when Waymo vehicles get trapped in a dead end in Cow Town. Sometimes the Waymos don’t get out of the way of traffic barriers and sit purring and beeping. I have heard that some residents of San Francisco have [a] kicked, [b] sprayed graffiti on Waymos, and/or [c] put traffic cones in certain roads to befuddle the smart Google software-powered vehicles. From a distance, these look a bit like something from a Mad Max motion picture.
My personal view is that I would never stand in front of a rolling Waymo. I know that [a] Google search results are not particularly useful, [b] Google’s AI outputs crazy information like gluing cheese on pizza, and [c] Waymos have been involved in traffic incidents, which causes me to stay away from them.
The cited article says that the Googler said in response to a question about a Waymo hypothetical killing of a person:
“I think that society will,” Mawakana answered, slowly, before positioning the question as an industry wide issue. “I think the challenge for us is making sure that society has a high enough bar on safety that companies are held to.” She said that companies should be transparent about their records by publishing data about how many crashes they’re involved in, and she pointed to the “hub” of safety information on Waymo’s website. Self-driving cars will dramatically reduce crashes, Mawakana said, but not by 100%: “We have to be in this open and honest dialogue about the fact that we know it’s not perfection.” [Emphasis added by Beyond Search]
My reactions to this allegedly true and accurate statement from a Googler are:
- I am not confident that Google can be “transparent.” Google, according to one US court, is a monopoly. Google has been fined by the European Union for saying one thing and doing another. The only reason I know about these court decisions is because legal processes released the information. Google did not provide the information as part of its commitment to transparency.
- Waymos create problems because the Google smart software cannot handle the demands of driving in the real world. The software is good enough, but not good enough to figure out dead ends, actions by human drivers, and potentially dangerous situations. I am aware of fender benders and collisions with fixed objects that have surfaced in Waymo’s 15-year history.
- Self-driving cars, specifically Waymos, will injure or kill people. But Waymo cars are safe. So some level of killing humans is okay with Google, regulators, and society in general. What about the family of the person who is killed by good enough Google software? The answer: The lawyers will blame something other than Google. Then they will fight in court because Google has oodles of cash from its estimable online advertising business.
The cited article quotes the Waymo Googler as saying:
“If you are not being transparent, then it is my view that you are not doing what is necessary in order to actually earn the right to make the roads safer,” Mawakana said. [Emphasis added by Beyond Search]
Of course, I believe everything Google says. Why not believe that Waymos will make self driving vehicle caused deaths acceptable? Why not believe Google is transparent? Why not believe that Google will make roads safer? Why not?
But I like the idea that people will accept an AI vehicle killing people. Stuff happens, right?
Stephen E Arnold, October 30, 2025