Derailing Smart Software with Invisible Prompts
September 3, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.
The write up states:
Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.
The write up includes examples like these:
… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….
Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.
Stephen E Arnold, September 3, 2025
Bending Reality or Creating a Question of Ownership and Responsibility for Errors
September 3, 2025
No AI. Just a dinobaby working the old-fashioned way.
The Google has may busy digital beavers working in the superbly managed organization. The BBC, however, seems to be agitated about what may be a truly insignificant matter: Ownership of substantially altered content and responsibility for errors introduced into digital content.
“YouTube secretly used AI to Edit People’s Videos. The Results Could Bend Reality” reports:
In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission.
The BBC ignores a couple of issues that struck me as significant if — please, note the “if” — the assertion about YouTube altering content belonging to another entity. I will address these after some more BBC goodness.
I noted this statement:
the company [Google] has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.
Okay, the Google digital beavers are beavering away.
I also noted this passage attributed to Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh:
“You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos…. “People are already distrustful of content that they encounter on social media. What happens if people know that companies are editing content from the top down, without even telling the content creators themselves?”
What about those issues I thought about after reading the BBC’s write up:
- If Google’s changes (improvements, enhancements, AI additions, whatever), will Google “own” the resulting content? My thought is that if Google can make more money by using AI to create a “fair use” argument, it will. How long will it take a court (assuming these are still functioning) to figure out if Google’s right or the individual content creator is the copyright holder?
- When, not if, Google’s AI introduces some type of error, is Google responsible or is it the creator’s problem? My hunch is that Google’s attorneys will argue that it provides a content creator with a free service. See the Terms of Service for YouTube and stop complaining.
- What if a content creator hits a home run and Google’s AI “learns” then outputs content via its assorted AI processes? Will Google be able to deplatform the original creator and just use it as a way to make money without paying the home-run hitting YouTube creator?
Perhaps the BBC would like to consider how these tiny “experiments” can expand until they shift the monetization methods further in favor of the Google. Maybe one reason is that BBC doesn’t think these types of thoughts. The Google, based on my experience, is indeed thinking these types of “what if” talks in a sterile room with whiteboards and brilliant Googlers playing with their mobile devices or snacking on goodies.
Stephen E Arnold, September 3, 2025
Deadbots. Many Use Cases, Including Advertising
September 2, 2025
No AI. Just a dinobaby working the old-fashioned way.
I like the idea of deadbots, a concept explained by the ever-authoritative NPR in “AI Deadbots Are Persuasive — and Researchers Say, They’re Primed for Monetization.” The write up reports in what I imagine as a resonant, somewhat breathy voice:
AI avatars of deceased people – or “deadbots” – are showing up in new and unexpected contexts, including ones where they have the power to persuade.
Here’s a passage I thought was interesting:
Researchers are now warning that commercial use is the next frontier for deadbots. “Of course it will be monetized,” said Lindenwood University AI researcher James Hutson. Hutson co-authored several studies about deadbots, including one exploring the ethics of using AI to reanimate the dead. Hutson’s work, along with other recent studies such as one from Cambridge University, which explores the likelihood of companies using deadbots to advertise products to users, point to the potential harms of such uses. “The problem is if it is perceived as exploitative, right?” Hutson said.
Not surprisingly, some sticks in the mud see a downside to deadbots:
Quinn [a wizard a Authetic Interactions Inc.] said companies are going to try to make as much money out of AI avatars of both the dead and the living as possible, and he acknowledges there could be some bad actors. “Companies are already testing things out internally for these use cases,” Quinn said, with reference to such uses cases as endorsements featuring living celebrities created with generative AI that people can interactive with. “We just haven’t seen a lot of the implementations yet.”
I wonder if any philosophical types will consider how an interaction with a dead person’s avatar can be an “authetic interaction.”
I started thinking of deadbots I would enjoy coming to life on my digital devices; for example:
- My first boss at a blue chip consulting firm who encouraged rumors that his previous wives accidently met with boating accidents
- My high school English teacher who took me to the assistant principal’s office for writing a poem about the spirit of nature who looked to me like a Playboy bunny
- The union steward who told me that I was working too fast and making other workers look like they were not working hard
- The airline professional who told me our flight would be delayed when a passenger died during push back from the gate. (The fellow was sitting next to me. Airport food did it I think.)
- The owner of an enterprise search company who insisted, “Our enterprise information retrieval puts all your company’s information at an employee’s fingertips.”
You may have other ideas for deadbots. How would you monetize a deadbot, Google- and Meta-type companies? Will Hollywood do deadbot motion pictures? (I know the answer to that question.)
Stephen E Arnold, September 2, 2025
AI Will Not Have a Negative Impact on Jobs. Knock Off the Negativity Now
September 2, 2025
No AI. Just a dinobaby working the old-fashioned way.
The word from Goldman Sachs is parental and well it should be. After all, Goldman Sachs is the big dog. PC Week’s story “Goldman Sachs: AI’s Job Hit Will Be Brief as Productivity Rises” makes this crystal clear or almost. In an era of PR and smart software, I am never sure who is creating what.
The write up says:
AI will cause significant, but ultimately temporary, disruption. The headline figure from the report is that widespread adoption of AI could displace 6-7% of the US workforce. While that number sounds alarming, the firm’s economists, Joseph Briggs and Sarah Dong, argue against the narrative of a permanent “jobpocalypse.” They remain “skeptical that AI will lead to large employment reductions over the next decade.”
Knock of the complaining already. College graduates with zero job offers. Just do the van life thing for a decade or become an influencer.
The write up explains history just like the good old days:
“Predictions that technology will reduce the need for human labor have a long history but a poor track record,” they write. The report highlights a stunning fact: Approximately 60% of US workers today are employed in occupations that didn’t even exist in 1940. This suggests that over 85% of all employment growth in the last 80 years has been fueled by the creation of new jobs driven by new technologies. From the steam engine to the internet, innovation has consistently eliminated some roles while creating entirely new industries and professions.
Technology and brilliant management like that at Goldman Sachs makes the economy hum along. And the write up proves it, and I quote:
Goldman Sachs expects AI to follow this pattern.
For those TikTok- and YouTube-type videos revealing that jobs are hard to obtain or the fathers whining about sending 200 job applications each month for six months, knock it off. The sun will come up tomorrow. The financial engines will churn and charge a service fee, of course. The flowers will bloom because that baloney about global warming is dead wrong. The birds will sing (well, maybe not in Manhattan) but elsewhere because windmills creating power are going to be shut down so the birds won’t be decapitated any more.
Everything is great. Goldman Sachs says this. In Goldman we trust or is it Goldman wants your trust… fund that is.
Stephen E Arnold, September 2, 2025
More about AI and Peasants from a Xoogler Too
September 1, 2025
A former Googler predicts a rough ride ahead for workers. And would-be workers. Yahoo News shares “Ex-Google Exec’s Shocking Warning: AI Will Create 15 Years of ‘Hell’—Starting Sooner than We Think.” Only 15 years? Seems optimistic. Mo Gawdat issued his prophesy on the “Diary of a CEO” podcast. He expects “the end of white-collar work” to begin by the end of this decade. Indeed, the job losses have already begun. But the cascading effects could go well beyond high unemployment. Reporter Ariel Zilber writes:
“Without proper government oversight, AI technology will channel unprecedented wealth and influence to those who own or control these systems, while leaving millions of workers struggling to find their place in the new economy, according to Gawdat. Beyond economic concerns, Gawdat anticipates serious social consequences from this rapid transformation. Gawdat said AI will trigger significant ‘social unrest’ as people grapple with losing their livelihoods and sense of purpose — resulting in rising rates of mental health problems, increased loneliness and deepening social divisions. ‘Unless you’re in the top 0.1%, you’re a peasant,’ Gawdat said. ‘There is no middle class.’”
That is ominous. But, to hear Gawdat tell it, there is a bright future on the other side of those hellish 15 years. He believes those who survive past 2040 can look forward to a “utopian” era free from tedious, mundane tasks. This will free us up to focus on “love, community, and spiritual development.” Sure. But to get there, he warns, we must take certain steps:
“Gawdat said that it is incumbent on governments, individuals and businesses to take proactive measures such as the adoption of universal basic income to help people navigate the transition. ‘We are headed into a short-term dystopia, but we can still decide what comes after that,’ Gawdat told the podcast, emphasizing that the future remains malleable based on choices society makes today. He argued that outcomes will depend heavily on decisions regarding regulation, equitable access to technology, and what he calls the ‘moral programming’ of AI algorithms.”
We are sure government and Big Tech will get right on that. Totally doable in our current political and business climates. Meanwhile, Mo Gawdat is working on an “AI love coach.” I am not sure Mr. Gawdat is connected to the bureaucratic and management ethos of 2025. Is that why he is a Xoogler?
Cynthia Murrell, September 1, 2025
Faux Boeuf Delivers Zero Calories Plus a Non-Human Toxin
August 29, 2025
No AI. Just a dinobaby working the old-fashioned way.
That sizzling rib AI called boeuf à la Margaux Blanchard is a treat. I learned about this recipe for creating filling, substantive, calorie laden content in “Wired and Business Insider Remove Articles by AI-Generated Freelancer.” I can visualize the meeting in which the decision was taken to hire Margaux Blanchard. I can also run in my mental VHS, the meeting when the issue was discovered. In my version, the group agreed to blame it on a contractor and the lousy job human resource professionals do these days.
What’s the “real” story? Let go to the Guardian write up:
On Thursday [August 22, 2025], Press Gazette reported that at least six publications, including Wired and Business Insider, have removed articles from their websites in recent months after it was discovered that the stories – written under the name of Margaux Blanchard – were AI-generated.
I frequently use the phrase “ordained officiant” in my dinobaby musings. Doesn’t everyone with some journalism experience?
The write u p said:
Wired’s management acknowledged the faux pas, saying: “If anyone should be able to catch an AI scammer, it’s Wired. In fact we do, all the time … Unfortunately, one got through. We made errors here: This story did not go through a proper fact-check process or get a top edit from a more senior editor … We acted quickly once we discovered the ruse, and we’ve taken steps to ensure this doesn’t happen again. In this new era, every newsroom should be prepared to do the same.”
Yeah, unfortunately and quickly. Yeah.
I liked this paragraph in the story:
This incident of false AI-generated reporting follows a May error when the Chicago Sun-Times’ Sunday paper ran a syndicated section with a fake reading list created by AI. Marco Buscaglia, a journalist who was working for King Features Syndicate, turned to AI to help generate the list, saying: “Stupidly, and 100% on me, I just kind of republished this list that [an AI program] spit out … Usually, it’s something I wouldn’t do … Even if I’m not writing something, I’m at least making sure that I correctly source it and vet it and make sure it’s all legitimate. And I definitely failed in that task.” Meanwhile, in June, the Utah court of appeals sanctioned a lawyer after he was discovered to have used ChatGPT for a filing he made in which he referenced a nonexistent court case.
Hey, that AI is great. It builds trust. It is intellectually satisfying just like some time in the kitchen with Margot Blanchard, a hot laptop, and some spicy prompts. Yum yum yum.
Stephen E Arnold, August 29, 2025
Google Uses a Blue Light Special for the US Government (Sorry K-Meta You Lose)
August 27, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read an interesting news item in Artificial Intelligence News, a publication unknown to me. Like most of the AI information I read online I believe every single word. AI radiates accuracy, trust, and factual information. Let’s treat this “real” news story as actual factual. To process the information, you will want to reflect on the sales tactics behind Filene’s Basement, K-Mart’s blue light specials, and the ShamWow guy.
“The US Federal Government Secures a Massive Google Gemini AI Deal at $0.47 per Agency” reports:
Google Gemini will soon power federal operations across the United States government following a sweeping new agreement between the General Services Administration (GSA) and Google that delivers comprehensive AI capabilities at unprecedented pricing.
I regret I don’t have Microsoft government sales professional or a Palantir forward deployed engineer to call and get their view of this deal. Oh, well, that’s what happens when one gets old. (Remember. For a LinkedIn audience NEVER reveal your age. Okay, too bad LinkedIn, I am 81.)
It so happens I was involved in Year 2000 in some meetings at which Google pitched its search-and-retrieval system for US government wide search. For a number of reasons, the Google did not win that procurement bake off. It took a formal protest and some more meetings to explain the concept of conforming to a Statement of Work and the bid analysis process used by the US government 25 years ago. Google took it on the snout.
Not this time.
By golly, Google figured out how to deal with RFPs, SOWs, the Q&A process, and the pricing dance. The write up says:
The “Gemini for Government” offering, announced by GSA, represents one of the most significant government AI procurement deals to date. Under the OneGov agreement extending through 2026, federal agencies will gain access to Google’s full artificial intelligence stack for just US$0.47 per agency—a pricing structure that industry observers note is remarkably aggressive for enterprise-level AI services.
What does the US government receive? According to the write up:
Google CEO Sundar Pichai characterized the partnership as building on existing relationships: “Building on our Workspace offer for federal employees, ‘Gemini for Government’ gives federal agencies access to our full stack approach to AI innovation, including tools like NotebookLM and Veo powered by our latest models and our secure cloud infrastructure.”
Yo, Microsoft. Yo, Palantir. Are you paying attention? This explanation suggests that a clever government professional can do what your firms do. But — get this — at a price that may be “unsustainable.” (Of course, I know that em dashes signal smart software. Believe me. I use em dashes all by myself. No AI needed.)
I also noted this statement in the write up:
The $0.47 per agency pricing model raises immediate concerns about market distortion and the sustainability of such aggressive government contracting. Industry analysts question whether this represents genuine cost efficiency or a loss-leader strategy designed to lock agencies into Google’s ecosystem before prices inevitably rise after 2026. Moreover, the deal’s sweeping scope—encompassing everything from basic productivity tools to custom AI agent development—may create dangerous vendor concentration risks. Should technical issues, security breaches, or contract disputes arise, the federal government could find itself heavily dependent on a single commercial provider for critical operational capabilities. The announcement notably lacks specific metrics for measuring success, implementation timelines, or safeguards against vendor lock-in—details that will ultimately determine whether this represents genuine modernization or expensive experimentation with taxpayer resources.
Several observations are warranted:
- Google has figured out that making AI too cheap to resist appeals to certain government procurement professionals. A deal is a deal, of course. Scope changes, engineering services, and government budget schedules may add some jerked chicken spice to the bargain meal.
- The existing government-wide incumbent types are probably going to be holding some meetings to discuss what “this deal” means to existing and new projects involving smart software.
- The budget issues about AI investments are significant. Adding more expense for what can be a very demanding client is likely to have a direct impact on advertisers who fund the Google fun bus. How much will that YouTube subscription go up? Would Google raise rates to fund this competitive strike at Microsoft and Palantir? Of course not, you silly goose.
I wish I were at liberty to share some of the Google-related outputs from the Year 2000 procurement. But, alas, I cannot. Let me close by saying, “Google has figured out some basics of dealing with the US government.” Hey, it only took a quarter century, not bad for an ageing Googzilla.
Stephen E Arnold, August 27, 2025
Think It. The * It * Becomes Real. Think Again?
August 27, 2025
No AI. Just a dinobaby working the old-fashioned way.
Fortune Magazine — once the gem for a now spinning-in-his-grave publisher —- posted “MIT Report: 95% of Generative AI Pilots at Companies Are Failing.” I take a skeptical view of MIT. Why? The esteemed university found Jeffrey Epstein a swell person.
The thrust of the story is that people stick smart software into an organization, allow it time to steep, cook up a use case, and find the result unpalatable. Research is useful. When it evokes a “Duh!”, I don’t get too excited.
But there was a phrase in the write up which caught my attention: Learning gap. AI or smart software is a “belief.” The idea of the next big thing creates an opportunity to move money. Flow, churn, motion — These are positive values in some business circles.
AI fits the bill. The technology demonstrates interesting capabilities. Use cases exist. Companies like Microsoft have put money into the idea. Moving money is proof that “something” is happening. And today that something is smart software. AI is the “it” for the next big thing.
Learning gap, however, is the issue. The hurdle is not Sam Altman’s fears about the end of humanity or his casual observation that trillions of dollars are needed to make AI progress. We have a learning gap.
But the driving vision for Internet era innovation is do something big, change the world, reinvent society. I think this idea goes back to the sales-oriented philosophy of visualizing a goal and aligning one’s actions to achieve that goal. I a fellow or persona named Napoleon Hill pulled together some ideas and crafted “Think and Grow Rich.” Today one just promotes the “next big thing,” gets some cash moving, and an innovation like smart software will revolutionize, remake, or redo the world.
The “it” seems to be stuck in the learning gap. Here’s the proof, and I quote:
But for 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained. The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.
Consider this question: What if smart software mostly works but makes humans uncomfortable in ways difficult for the user to articulate? What if humans lack the mental equipment to conceptualize what a smart system does? What if the smart software cannot answer certain user questions?
I find information about costs, failed use cases, hallucinations, and benefits plentiful. I don’t see much information about the “learning gap.” What causes a learning gap? Spell check makes sense. A click that produces a complete report on a complex topic is different. But in what way? What is the impact on the user?
I think the “learning gap” is a key phrase. I think there is money to be made in addressing it. I am not confident that visualizing a better AI is going to solve the problem which is similar to a bonfire of cash. The learning gap might be tough to fill with burning dollar bills.
Stephen E Arnold, August 27, 2025
Apple and Meta: The After Market Route
August 26, 2025
No AI. Just a dinobaby working the old-fashioned way.
Two big outfits are emulating the creative motif for an American television series titled “Pimp My Ride.” The show was hosted by rapper Xzibit, who has a new album called “Kingmaker” in the works. He became the “meme” of the television program with his signature phrase, “Yo, dawg, I heard you like.”
A DVD of season one, is available for sale at www.bol.com.
Each episode a “lucky person” would be approached and told that his or her vehicle would be given a make over. Some of the make overs were memorable. Examples included the “Yellow Shag Disaster,” which featured yellow paint and yellow shag carpeting. The team removed a rat living in the 1976 Pacer. Another was the “Drive In Theater Car.” It included a pop up champagne dispenser and a TV screen installed under the hood for a viewing experience when people gathered outside the vehicle.
The idea was to take something that mostly worked and then add-on extras. Did the approach work? It made Xzibit even more famous and it contributed the phrase “Yo, dawg, I heard you like” to the US popular culture between 2004 and 2007.
I think the “Pimp My Ride” concept has returned for Apple and Meta. Let me share my thoughts with you.
First, I noted that Bloomberg is exploring the use of Google Gemini AI to Power the long suffering Siri. You can read the paywalled story at this link. Apple knows that Google’s payments are worth real money. The idea of adding more Google and getting paid for the decision probably makes sense to the estimable Apple. Will the elephants mate and produce more money or will the grass get trampled. I don’t know. It will be interesting to see what the creative wizards at both companies produce. There is no date for the release of the first episode. I will be watching.
Second, the story presented in fragments on X.com appears at this X.com page. The key item of information is the alleged tie up between Meta and MidJourney:
Today we’re proud to announce a partnership with @midjourney , to license their aesthetic technology for our future models and products, bringing beauty to billions.
Meta, like Apple, is partnering with an AI success in the arts and crafts sector of smart software. The idea seems to focus on “aesthetic excellence.” How will these outfits enhance Meta. Here’s what the X.com comment offers:
To ensure Meta is able to deliver the best possible products for people it will require taking an all-of-the-above approach. This means world-class talent, ambitious compute roadmap, and working with the best players across the industry.
Will these add-one approaches to AI deliver something useful to millions or will the respective organizations produce the equivalent of the “Pimp My Ride” Hot Tub Limousine. This after-market confection added a hot tub filled with water to a limousine. The owner of the vehicle could relax in the hot tub while the driver ferried the proud owner to the bank.
I assume the creations of the Apple, Google, Meta, and MidJourney teams will be captured on video and distributed on TikTok-type services as well as billions of computing devices. My hope is that Xzibit is asked to host the roll outs for the newly redone services. I would buy a hat, a T shirt, and a poster for the “winner” of this new AI enhanced effort.
Yo, dawg, I heard you like AI, right?
Stephen E Arnold, August 26, 2025
Deal Breakers in Medical AI
August 26, 2025
No AI. Just a dinobaby working the old-fashioned way.
My newsfeed thing spit out a link to “Why Radiology AI Didn’t Work and What Comes Next.” I have zero interest in radiology. I don’t get too excited about smart software. So what did I do? Answer: I read the article. I was delighted to uncover a couple of points that, in my opinion, warrant capturing in my digital notebook.
The set up is that a wizard worked at a start up trying to get AI to make sense of the consistently fuzzy, murky, and baffling images cranked out by radiology gizmos. Tip: Follow the instructions and don’t wear certain items of jewelry. The start up fizzled. AI was part of the problem, but the Jaws-type sharp lurking in the murky image explains this type of AI implosion.
Let’s run though the points that struck me.
First, let’s look at this passage:
Unlike coding or mathematics, medicine rarely deals in absolutes. Clinical documentation, especially in radiology, is filled with hedge language — phrases like “cannot rule out,” “may represent,” or “follow-up recommended for correlation.” These aren’t careless ambiguities; they’re defensive signals, shaped by decades of legal precedent and diagnostic uncertainty.
Okay, lawyers play a significant role in establishing thought processes and normalizing ideas that appear to be purpose-built to vaporize like one of those nifty tattoo removing gadgets the smart system. I would have pegged insurance companies, then lawyers, but the write up directed my attention of the legal eagles’ role: Hedge language. Do I have disease X? The doctor responds, “Maybe, maybe not. Let’s wait 30 days and run more tests.” Fuzzy lingo, fuzzy images, perfect.
Second, the write up asks two questions:
- How do we improve model coverage at the tail without incurring prohibitive annotation costs?
- Can we combine automated systems with human-in-the-loop supervision to address the rare but dangerous edge cases?
The answers seem to be: You cannot afford to have humans do indexing and annotation. That’s why certain legal online services charge a lot for annotations. And, the second question, no, you cannot pull off automation with humans for events rarely covered in the training data. Why? Cost and finding enough humans who will do this work in a consistent way in a timely manner.
Here’s the third snippet:
Without direct billing mechanisms or CPT reimbursement codes, it was difficult to monetize the outcomes these tools enabled. Selling software alone meant capturing only a fraction of the value AI actually created. Ultimately, we were offering tools, not outcomes. And hospitals, rightly, were unwilling to pay for potential unless it came bundled with performance.
Finally, insurance procedures. Hospitals aren’t buying AI; they are buying ways to deliver “service” and “bill.” AI at this time does not sell what hospitals want to buy: A way to keep high rates and slash costs wherever possible.
Unlikely but perhaps some savvy AI outfit will create a system that can crack the issues the article identifies. Until then, no money, no AI.
Stephen E Arnold, August 26, 2025


