Supermarket Snitches: Old-Time Methods Are Back
September 5, 2025
So much for AI and fancy cyber-security systems. One UK grocery chain has found a more efficient way to deal with petty theft—pay people to rat out others. BBC reports, “Iceland Offers £1 Reward for Reporting Shoplifters.” (Not to be confused with the country, this Iceland is a British supermarket chain.) Business reporter Charlotte Edwards tells us shoplifting is a growing problem for grocery stores and pharmacies. She writes:
“Victims minister Alex Davies-Jones told BBC Radio 4’s Today programme on Monday that shoplifting had ‘got out of hand’ in the UK. … According to the Office for National Statistics, police recorded 530,643 shoplifting offences in the year to March 2025. That is a 20% increase from 444,022 in the previous year, and the highest figure since current recording practices began in 2002-03.”
Amazing what economic uncertainty will do. In response, the government plans to put thousands more police officers on neighborhood patrols by next spring. Perhaps encouraging shoppers to keep their eyes peeled will help. We learn:
“Supermarket chain Iceland will financially reward customers who report incidents of shoplifting, as part of efforts to tackle rising levels of retail theft. The firm’s executive chairman, Richard Walker, said that shoppers who alert staff to a theft in progress will receive a £1 credit on their Iceland Bonus Card. The company estimates that shoplifting costs its business around £20m each year. Mr Walker said this figure not only impacts the company’s bottom line but also limits its ability to reduce prices and reinvest in staff wages. Iceland told the BBC that the shoplifters do not necessarily need to be apprehended for customers to receive the £1 reward but will need to be reported and verified.”
How, exactly, they will be verified is left unexplained. Perhaps that is the role for advanced security systems. Totally worth it. Walker emphasizes customers should not try to apprehend shoplifters, just report them. Surely no one will get that twisted. But with one pound sterling equal to $1.35 USD, we wonder: is that enough incentive to pull the phone out of one’s pocket?
Technology is less effective than snitching.
Cynthia Murrell, September 5, 2025
Grousing Employees Can Be Fun. Credible? You Decide
September 4, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read “Former Employee Accuses Meta of Inflating Ad Metrics and Sidestepping Rules.” Now former employees saying things that cast aspersions on a former employer are best processed with care. I did that, and I want to share the snippets snagging my attention. I try not to think about Meta. I am finishing my monograph about Telegram, and I have to stick to my lane. But I found this write up a hoot.
The first passage I circled says:
Questions are mounting about the reliability of Meta’s advertising metrics and data practices after new claims surfaced at a London employment tribunal this week. A former Meta product manager alleged that the social media giant inflated key metrics and sidestepped strict privacy controls set by Apple, raising concerns among advertisers and regulators about transparency in the industry.
Imagine. Meta coming up at a tribunal. Does that remind anyone of the Cambridge Analytica excitement? Do you recall the rumors that fiddling with Facebook pushed Brexit over the finish line? Whatever happened to those oh-so-clever CA people?
I found this tribunal claim interesting:
… Meta bypassed Apple’s App Tracking Transparency (ATT) rules, which require user consent before tracking their activity across iPhone apps. After Apple introduced ATT in 2021, most users opted out of tracking, leading to a significant reduction in Meta’s ability to gather information for targeted advertising. Company investors were told this would trim revenues by about $10 billion in 2022.
I thought Apple had their system buttoned up. Who knew?
Did Meta have a response? Absolutely. The write up reports:
“We are actively defending these proceedings …” a Meta spokesperson told The Financial Times. “Allegations related to the integrity of our advertising practices are without merit and we have full confidence in our performance review processes.”
True or false? Well….
Stephen E Arnold, September 4, 2025
Spotify Does Messaging: Is That Good or Bad?
September 4, 2025
No AI. Just a dinobaby working the old-fashioned way.
My team and I have difficulty keeping up with the messaging apps that seem to be like mating gerbils. I noted that Spotify, the semi-controversial music app, is going to add messaging. “Spotify Adds In-App Messaging Feature to Let Users Share Music and Podcasts Directly” says:
According to the company, the update is designed “to give users what they want and make those moments of connection more seamless and streamlined in the Spotify app.” Users will be able to message people they have interacted with on Spotify before, such as through Jams, Blends and Collaborative Playlists, or those who share a Family or Duo plan.
The messaging app is no Telegram. The interesting question for me is, “Will Spotify emulate Telegram’s features as Meta’s WhatsApp has?”
Telegram, despite its somewhat negative press, has found a way to monetize user clicks, supplement subscription revenue with crypto service charges, and alleged special arrangement now being adjudicated by the French judiciary.
New messaging platforms get a look from bad actors. How will Spotify police the content? Avid music people often find ways to circumvent different rules and regulations to follow their passion.
Will Spotify cooperate with regulators or will it emulate some of the Dark Web messaging outfits or Telegram, a firm with a template for making money appear when necessary?
Stephen E Arnold, September 4, 2025
Fabulous Fakes Pollute Publishing: That AI Stuff Is Fatuous
September 4, 2025
New York Times best selling author David Baldacci testified before the US Congress about regulating AI. Medical professionals are worried about false information infiltrating medical knowledge like the scandal involving Med-Gemini and an imaginary body part. It’s getting worse says ZME Science: “A Massive Fraud Ring Is Publishing Thousands of Fake Studies and the Problem is Exploding. ‘These Networks Are Essentially Criminal Organizations.’”
Bad actors in scientific publishing used to be a small group, but now it’s a big posse:
“What we are seeing is large networks of editors and authors cooperating to publish fraudulent research at scale. They are exploiting cracks in the system to launder reputations, secure funding, and climb academic ranks. This isn’t just about the occasional plagiarized paragraph or data fudged to fool reviewers. This is about a vast and resilient system that, in some cases, mimics organized crime. And it’s infiltrating the very core of science.”
Luís Amaral discovered in a study he conducted that analyzed five million papers across 70,000 scientific journals that there is a fraudulent paper mill for publishing. You’ve heard of paper mill colleges where students can buy so-called degrees. This is similar except the products are authorship slots and journal placements from artificial research and compromised editors.
Outstanding, AI champions!
This is a way for bad actors to pad their resumes and gain undeserved creditability.
Fake science has always been a problem but it’s outpacing fact-based science. It’s cheaper to produce fake science than legitimate truth. The article then waxes poetic about the need for respectability, the dangerous consequences of false science, and how the current tools aren’t enough. It’s devastating but the expected cultural shift needed to be more respectful of truth and hard facts is not equipped to deal with the new world. Thanks, AI.
Whitney Grace, September 4, 2025
Derailing Smart Software with Invisible Prompts
September 3, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.
The write up states:
Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.
The write up includes examples like these:
… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….
Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.
Stephen E Arnold, September 3, 2025
AI Words Are the Surface: The Deeper Thought Embedding Is the Problem with AI
September 3, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Humans are biased. Content generated by humans reflects these mental patterns. Smart software is probabilistic. So what?
Select the content to train smart software. The more broadly the content base, the greater range of biases will be baked into the Fancy Dan software. Then toss in the human developers who make decisions about thresholds, weights, and rounding. Mix in the wrapper code that does the guardrails which are created by humans with some of those biases, attitudes, and idiosyncratic mental equipment.
Then provide a system to students and people eager to get more done with less effort and what do you get? A partial and important glimpse of the consequences of about 2.5 years of AI as the next big thing are presented in “On-Screen and Now IRL: FSU Researchers Find Evidence of ChatGPT Buzzwords Turning Up in Everyday Speech.”
The write up reports:
“The changes we are seeing in spoken language are pretty remarkable, especially when compared to historical trends,” Juzek said. “What stands out is the breadth of change: so many words are showing notable increases over a relatively short period. Given that these are all words typically overused by AI, it seems plausible to conjecture a link.”
Conjecture. That’s a weasel word. Once words are embedded they dragged a hard sided carry on with them.
The write up adds:
“Our research highlights many important ethical questions,” Galpin said. “With the ability of LLMs to influence human language comes larger questions about how model biases and misalignment, or differences in behavior in LLMs, may begin to influence human behaviors.”
As more research data become available, I project that several factoids will become points of discussion:
- What happens when AI outputs are weaponized for political, personal, or financial gain?
- How will people consuming AI outputs recognize that their vocabulary and the attendant “value baggage” is along for the life journey?
- What type of mental remapping can be accomplished with shaped AI output?
For now, students are happy to let AI think for them. In the future, will that warm, fuzzy feeling persist. If ignorance is bliss, I say, “Hello, happy.”
Stephen E Arnold, September 3, 2025
Bending Reality or Creating a Question of Ownership and Responsibility for Errors
September 3, 2025
No AI. Just a dinobaby working the old-fashioned way.
The Google has may busy digital beavers working in the superbly managed organization. The BBC, however, seems to be agitated about what may be a truly insignificant matter: Ownership of substantially altered content and responsibility for errors introduced into digital content.
“YouTube secretly used AI to Edit People’s Videos. The Results Could Bend Reality” reports:
In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission.
The BBC ignores a couple of issues that struck me as significant if — please, note the “if” — the assertion about YouTube altering content belonging to another entity. I will address these after some more BBC goodness.
I noted this statement:
the company [Google] has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.
Okay, the Google digital beavers are beavering away.
I also noted this passage attributed to Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh:
“You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos…. “People are already distrustful of content that they encounter on social media. What happens if people know that companies are editing content from the top down, without even telling the content creators themselves?”
What about those issues I thought about after reading the BBC’s write up:
- If Google’s changes (improvements, enhancements, AI additions, whatever), will Google “own” the resulting content? My thought is that if Google can make more money by using AI to create a “fair use” argument, it will. How long will it take a court (assuming these are still functioning) to figure out if Google’s right or the individual content creator is the copyright holder?
- When, not if, Google’s AI introduces some type of error, is Google responsible or is it the creator’s problem? My hunch is that Google’s attorneys will argue that it provides a content creator with a free service. See the Terms of Service for YouTube and stop complaining.
- What if a content creator hits a home run and Google’s AI “learns” then outputs content via its assorted AI processes? Will Google be able to deplatform the original creator and just use it as a way to make money without paying the home-run hitting YouTube creator?
Perhaps the BBC would like to consider how these tiny “experiments” can expand until they shift the monetization methods further in favor of the Google. Maybe one reason is that BBC doesn’t think these types of thoughts. The Google, based on my experience, is indeed thinking these types of “what if” talks in a sterile room with whiteboards and brilliant Googlers playing with their mobile devices or snacking on goodies.
Stephen E Arnold, September 3, 2025
Deadbots. Many Use Cases, Including Advertising
September 2, 2025
No AI. Just a dinobaby working the old-fashioned way.
I like the idea of deadbots, a concept explained by the ever-authoritative NPR in “AI Deadbots Are Persuasive — and Researchers Say, They’re Primed for Monetization.” The write up reports in what I imagine as a resonant, somewhat breathy voice:
AI avatars of deceased people – or “deadbots” – are showing up in new and unexpected contexts, including ones where they have the power to persuade.
Here’s a passage I thought was interesting:
Researchers are now warning that commercial use is the next frontier for deadbots. “Of course it will be monetized,” said Lindenwood University AI researcher James Hutson. Hutson co-authored several studies about deadbots, including one exploring the ethics of using AI to reanimate the dead. Hutson’s work, along with other recent studies such as one from Cambridge University, which explores the likelihood of companies using deadbots to advertise products to users, point to the potential harms of such uses. “The problem is if it is perceived as exploitative, right?” Hutson said.
Not surprisingly, some sticks in the mud see a downside to deadbots:
Quinn [a wizard a Authetic Interactions Inc.] said companies are going to try to make as much money out of AI avatars of both the dead and the living as possible, and he acknowledges there could be some bad actors. “Companies are already testing things out internally for these use cases,” Quinn said, with reference to such uses cases as endorsements featuring living celebrities created with generative AI that people can interactive with. “We just haven’t seen a lot of the implementations yet.”
I wonder if any philosophical types will consider how an interaction with a dead person’s avatar can be an “authetic interaction.”
I started thinking of deadbots I would enjoy coming to life on my digital devices; for example:
- My first boss at a blue chip consulting firm who encouraged rumors that his previous wives accidently met with boating accidents
- My high school English teacher who took me to the assistant principal’s office for writing a poem about the spirit of nature who looked to me like a Playboy bunny
- The union steward who told me that I was working too fast and making other workers look like they were not working hard
- The airline professional who told me our flight would be delayed when a passenger died during push back from the gate. (The fellow was sitting next to me. Airport food did it I think.)
- The owner of an enterprise search company who insisted, “Our enterprise information retrieval puts all your company’s information at an employee’s fingertips.”
You may have other ideas for deadbots. How would you monetize a deadbot, Google- and Meta-type companies? Will Hollywood do deadbot motion pictures? (I know the answer to that question.)
Stephen E Arnold, September 2, 2025
AI Will Not Have a Negative Impact on Jobs. Knock Off the Negativity Now
September 2, 2025
No AI. Just a dinobaby working the old-fashioned way.
The word from Goldman Sachs is parental and well it should be. After all, Goldman Sachs is the big dog. PC Week’s story “Goldman Sachs: AI’s Job Hit Will Be Brief as Productivity Rises” makes this crystal clear or almost. In an era of PR and smart software, I am never sure who is creating what.
The write up says:
AI will cause significant, but ultimately temporary, disruption. The headline figure from the report is that widespread adoption of AI could displace 6-7% of the US workforce. While that number sounds alarming, the firm’s economists, Joseph Briggs and Sarah Dong, argue against the narrative of a permanent “jobpocalypse.” They remain “skeptical that AI will lead to large employment reductions over the next decade.”
Knock of the complaining already. College graduates with zero job offers. Just do the van life thing for a decade or become an influencer.
The write up explains history just like the good old days:
“Predictions that technology will reduce the need for human labor have a long history but a poor track record,” they write. The report highlights a stunning fact: Approximately 60% of US workers today are employed in occupations that didn’t even exist in 1940. This suggests that over 85% of all employment growth in the last 80 years has been fueled by the creation of new jobs driven by new technologies. From the steam engine to the internet, innovation has consistently eliminated some roles while creating entirely new industries and professions.
Technology and brilliant management like that at Goldman Sachs makes the economy hum along. And the write up proves it, and I quote:
Goldman Sachs expects AI to follow this pattern.
For those TikTok- and YouTube-type videos revealing that jobs are hard to obtain or the fathers whining about sending 200 job applications each month for six months, knock it off. The sun will come up tomorrow. The financial engines will churn and charge a service fee, of course. The flowers will bloom because that baloney about global warming is dead wrong. The birds will sing (well, maybe not in Manhattan) but elsewhere because windmills creating power are going to be shut down so the birds won’t be decapitated any more.
Everything is great. Goldman Sachs says this. In Goldman we trust or is it Goldman wants your trust… fund that is.
Stephen E Arnold, September 2, 2025
Swinging for the Data Centers: You May Strike Out, Casey
September 2, 2025
Home to a sparse population of humans, the Cowboy State is about to generate an immense amount of electricity. Tech Radar Pro reports, “A Massive Wyoming Data Center Will Soon Use 5x More Power than the State’s Human Occupants—But No One Knows Who Is Using It.” Really? We think we can guess. The Cheyenne facility is to be powered by a bespoke combination of natural gas and renewables. Writer Efosa Udinmwen writes:
“The proposed facility, a collaboration between energy company Tallgrass and data center developer Crusoe, is expected to start at 1.8 gigawatts and could scale to an immense 10 gigawatts. For context, this is over five times more electricity than what all households in Wyoming currently use.”
Who could need so much juice? Could it be OpenAI? So far, Crusoe neither confirms nor denies that suspicion. The write-up, however, notes Crusoe worked with OpenAI to build the world’s “largest data center” in Texas as part of the OpenAI-led “Stargate” initiative. (Yes, named for the portals in the 1994 movie and subsequent TV show. So clever.) Udinmwen observes:
“At the core of such AI-focused data centers lies the demand for extremely high-performance hardware. Industry experts expect it to house the fastest CPUs available, possibly in dense, rack-mounted workstation configurations optimized for deep learning and model training. These systems are power-hungry by design, with each server node capable of handling massive workloads that demand sustained cooling and uninterrupted energy. Wyoming state officials have embraced the project as a boost to local industries, particularly natural gas; however, some experts warn of broader implications. Even with a self-sufficient power model, a data center of this scale alters regional power dynamics. There are concerns that residents of Wyoming and its environs could face higher utility costs, particularly if local supply chains or pricing models are indirectly affected. Also, Wyoming’s identity as a major energy exporter could be tested if more such facilities emerge.”
The financial blind spot is explained in Futurism’s article “There’s a Stunning Financial Problem With AI Data Centers.” The main idea is that today’s investment will require future spending for upgrades, power, water, and communications. The result is that most of these “home run” swings will result in lousy batting averages and maybe become a hot dog vendor at the ball park adjacent the humming, hot structures.
Cynthia Murrell, September 2, 2025

