Spotify Does Messaging: Is That Good or Bad?

September 4, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

My team and I have difficulty keeping up with the messaging apps that seem to be like mating gerbils. I noted that Spotify, the semi-controversial music app, is going to add messaging. “Spotify Adds In-App Messaging Feature to Let Users Share Music and Podcasts Directly” says:

According to the company, the update is designed “to give users what they want and make those moments of connection more seamless and streamlined in the Spotify app.” Users will be able to message people they have interacted with on Spotify before, such as through Jams, Blends and Collaborative Playlists, or those who share a Family or Duo plan.

The messaging app is no Telegram. The interesting question for me is, “Will Spotify emulate Telegram’s features as Meta’s WhatsApp has?”

Telegram, despite its somewhat negative press, has found a way to monetize user clicks, supplement subscription revenue with crypto service charges, and alleged special arrangement now being adjudicated by the French judiciary.

New messaging platforms get a look from bad actors. How will Spotify police the content? Avid music people often find ways to circumvent different rules and regulations to follow their passion.

Will Spotify cooperate with regulators or will it emulate some of the Dark Web messaging outfits or Telegram, a firm with a template for making money appear when necessary?

Stephen E Arnold, September 4, 2025

Fabulous Fakes Pollute Publishing: That AI Stuff Is Fatuous

September 4, 2025

New York Times best selling author David Baldacci testified before the US Congress about regulating AI. Medical professionals are worried about false information infiltrating medical knowledge like the scandal involving Med-Gemini and an imaginary body part. It’s getting worse says ZME Science: “A Massive Fraud Ring Is Publishing Thousands of Fake Studies and the Problem is Exploding. ‘These Networks Are Essentially Criminal Organizations.’”

Bad actors in scientific publishing used to be a small group, but now it’s a big posse:

“What we are seeing is large networks of editors and authors cooperating to publish fraudulent research at scale. They are exploiting cracks in the system to launder reputations, secure funding, and climb academic ranks. This isn’t just about the occasional plagiarized paragraph or data fudged to fool reviewers. This is about a vast and resilient system that, in some cases, mimics organized crime. And it’s infiltrating the very core of science.”

Luís Amaral discovered in a study he conducted that analyzed five million papers across 70,000 scientific journals that there is a fraudulent paper mill for publishing. You’ve heard of paper mill colleges where students can buy so-called degrees. This is similar except the products are authorship slots and journal placements from artificial research and compromised editors.

Outstanding, AI champions!

This is a way for bad actors to pad their resumes and gain undeserved creditability.

Fake science has always been a problem but it’s outpacing fact-based science. It’s cheaper to produce fake science than legitimate truth. The article then waxes poetic about the need for respectability, the dangerous consequences of false science, and how the current tools aren’t enough. It’s devastating but the expected cultural shift needed to be more respectful of truth and hard facts is not equipped to deal with the new world. Thanks, AI.

Whitney Grace, September 4, 2025

Derailing Smart Software with Invisible Prompts

September 3, 2025

Dino 5 18 25Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.

The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.

The write up states:

Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.

The write up includes examples like these:

… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….

Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.

Stephen E Arnold, September 3, 2025

AI Words Are the Surface: The Deeper Thought Embedding Is the Problem with AI

September 3, 2025

Dino 5 18 25This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker. 

Humans are biased. Content generated by humans reflects these mental patterns. Smart software is probabilistic. So what?

Select the content to train smart software. The more broadly the content base, the greater range of biases will be baked into the Fancy Dan software. Then toss in the human developers who make decisions about thresholds, weights, and rounding. Mix in the wrapper code that does the guardrails which are created by humans with some of those biases, attitudes, and idiosyncratic mental equipment.

Then provide a system to students and people eager to get more done with less effort and what do you get? A partial and important glimpse of the consequences of about 2.5 years of AI as the next big thing are presented in “On-Screen and Now IRL: FSU Researchers Find Evidence of ChatGPT Buzzwords Turning Up in Everyday Speech.”

The write up reports:

“The changes we are seeing in spoken language are pretty remarkable, especially when compared to historical trends,” Juzek said. “What stands out is the breadth of change: so many words are showing notable increases over a relatively short period. Given that these are all words typically overused by AI, it seems plausible to conjecture a link.”

Conjecture. That’s a weasel word. Once words are embedded they dragged a hard sided carry on with them.

The write up adds:

“Our research highlights many important ethical questions,” Galpin said. “With the ability of LLMs to influence human language comes larger questions about how model biases and misalignment, or differences in behavior in LLMs, may begin to influence human behaviors.”

As more research data become available, I project that several factoids will become points of discussion:

  1. What happens when AI outputs are weaponized for political, personal, or financial gain?
  2. How will people consuming AI outputs recognize that their vocabulary and the attendant “value baggage” is along for the life journey?
  3. What type of mental remapping can be accomplished with shaped AI output?

For now, students are happy to let AI think for them. In the future, will that warm, fuzzy feeling persist. If ignorance is bliss, I say, “Hello, happy.”

Stephen E  Arnold, September 3, 2025

Bending Reality or Creating a Question of Ownership and Responsibility for Errors

September 3, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

The Google has may busy digital beavers working in the superbly managed organization. The BBC, however, seems to be agitated about what may be a truly insignificant matter: Ownership of substantially altered content and responsibility for errors introduced into digital content.

YouTube secretly used AI to Edit People’s Videos. The Results Could Bend Reality” reports:

In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission.

The BBC ignores a couple of issues that struck me as significant if — please, note the “if” — the assertion about YouTube altering content belonging to another entity. I will address these after some more BBC goodness.

I noted this statement:

the company [Google] has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.

Okay, the Google digital beavers are beavering away.

I also noted this passage attributed to Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh:

“You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos…. “People are already distrustful of content that they encounter on social media. What happens if people know that companies are editing content from the top down, without even telling the content creators themselves?”

What about those issues I thought about after reading the BBC’s write up:

  1. If Google’s changes (improvements, enhancements, AI additions, whatever), will Google “own” the resulting content? My thought is that if Google can make more money by using AI to create a “fair use” argument, it will. How long will it take a court (assuming these are still functioning) to figure out if Google’s right or the individual content creator is the copyright holder?
  2. When, not if, Google’s AI introduces some type of error, is Google responsible or is it the creator’s problem? My hunch is that Google’s attorneys will argue that it provides a content creator with a free service. See the Terms of Service for YouTube and stop complaining.
  3. What if a content creator hits a home run and Google’s AI “learns” then outputs content via its assorted AI processes? Will Google be able to deplatform the original creator and just use it as a way to make money without paying the home-run hitting YouTube creator?

Perhaps the BBC would like to consider how these tiny “experiments” can expand until they shift the monetization methods further in favor of the Google. Maybe one reason is that BBC doesn’t think these types of thoughts. The Google, based on my experience, is indeed thinking these types of “what if” talks in a sterile room with whiteboards and brilliant Googlers playing with their mobile devices or snacking on goodies.

Stephen E Arnold, September 3, 2025

Deadbots. Many Use Cases, Including Advertising

September 2, 2025

Dino 5 18 25_thumbNo AI. Just a dinobaby working the old-fashioned way.

I like the idea of deadbots, a concept explained by the ever-authoritative NPR in “AI Deadbots Are Persuasive — and Researchers Say, They’re Primed for Monetization.” The write up reports in what I imagine as a resonant, somewhat breathy voice:

AI avatars of deceased people – or “deadbots” – are showing up in new and unexpected contexts, including ones where they have the power to persuade.

Here’s a passage I thought was interesting:

Researchers are now warning that commercial use is the next frontier for deadbots. “Of course it will be monetized,” said Lindenwood University AI researcher James Hutson. Hutson co-authored several studies about deadbots, including one exploring the ethics of using AI to reanimate the dead. Hutson’s work, along with other recent studies such as one from Cambridge University, which explores the likelihood of companies using deadbots to advertise products to users, point to the potential harms of such uses. “The problem is if it is perceived as exploitative, right?” Hutson said.

Not surprisingly, some sticks in the mud see a downside to deadbots:

Quinn [a wizard a Authetic Interactions Inc.] said companies are going to try to make as much money out of AI avatars of both the dead and the living as possible, and he acknowledges there could be some bad actors. “Companies are already testing things out internally for these use cases,” Quinn said, with reference to such uses cases as endorsements featuring living celebrities created with generative AI that people can interactive with. “We just haven’t seen a lot of the implementations yet.”

I wonder if any philosophical types will consider how an interaction with a dead person’s avatar can be an “authetic interaction.”

I started thinking of deadbots I would enjoy coming to life on my digital devices; for example:

  • My first boss at a blue chip consulting firm who encouraged rumors that his previous wives accidently met with boating accidents
  • My high school English teacher who took me to the assistant principal’s office for writing a poem about the spirit of nature who looked to me like a Playboy bunny
  • The union steward who told me that I was working too fast and making other workers look like they were not working hard
  • The airline professional who told me our flight would be delayed when a passenger died during push back from the gate. (The fellow was sitting next to me. Airport food did it I think.)
  • The owner of an enterprise search company who insisted, “Our enterprise information retrieval puts all your company’s information at an employee’s fingertips.”

You may have other ideas for deadbots. How would you monetize a deadbot, Google- and Meta-type companies? Will Hollywood do deadbot motion pictures? (I know the answer to that question.)

Stephen E Arnold, September 2, 2025

AI Will Not Have a Negative Impact on Jobs. Knock Off the Negativity Now

September 2, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

The word from Goldman Sachs is parental and well it should be. After all, Goldman Sachs is the big dog. PC Week’s story “Goldman Sachs: AI’s Job Hit Will Be Brief as Productivity Rises” makes this crystal clear or almost. In an era of PR and smart software, I am never sure who is creating what.

The write up says:

AI will cause significant, but ultimately temporary, disruption. The headline figure from the report is that widespread adoption of AI could displace 6-7% of the US workforce. While that number sounds alarming, the firm’s economists, Joseph Briggs and Sarah Dong, argue against the narrative of a permanent “jobpocalypse.” They remain “skeptical that AI will lead to large employment reductions over the next decade.”

Knock of the complaining already. College graduates with zero job offers. Just do the van life thing for a decade or become an influencer.

The write up explains history just like the good old days:

“Predictions that technology will reduce the need for human labor have a long history but a poor track record,” they write. The report highlights a stunning fact: Approximately 60% of US workers today are employed in occupations that didn’t even exist in 1940. This suggests that over 85% of all employment growth in the last 80 years has been fueled by the creation of new jobs driven by new technologies. From the steam engine to the internet, innovation has consistently eliminated some roles while creating entirely new industries and professions.

Technology and brilliant management like that at Goldman Sachs makes the economy hum along. And the write up proves it, and I quote:

Goldman Sachs expects AI to follow this pattern.

For those TikTok- and YouTube-type videos revealing that jobs are hard to obtain or the fathers whining about sending 200 job applications each month for six months, knock it off. The sun will come up tomorrow. The financial engines will churn and charge a service fee, of course. The flowers will bloom because that baloney about global warming is dead wrong. The birds will sing (well, maybe not in Manhattan) but elsewhere because windmills creating power are going to be shut down so the birds won’t be decapitated any more.

Everything is great. Goldman Sachs says this. In Goldman we trust or is it Goldman wants your trust… fund that is.

Stephen E Arnold, September 2, 2025

Swinging for the Data Centers: You May Strike Out, Casey

September 2, 2025

Home to a sparse population of humans, the Cowboy State is about to generate an immense amount of electricity. Tech Radar Pro reports, “A Massive Wyoming Data Center Will Soon Use 5x More Power than the State’s Human Occupants—But No One Knows Who Is Using It.” Really? We think we can guess. The Cheyenne facility is to be powered by a bespoke combination of natural gas and renewables. Writer Efosa Udinmwen writes:

“The proposed facility, a collaboration between energy company Tallgrass and data center developer Crusoe, is expected to start at 1.8 gigawatts and could scale to an immense 10 gigawatts. For context, this is over five times more electricity than what all households in Wyoming currently use.”

Who could need so much juice? Could it be OpenAI? So far, Crusoe neither confirms nor denies that suspicion. The write-up, however, notes Crusoe worked with OpenAI to build the world’s “largest data center” in Texas as part of the OpenAI-led “Stargate” initiative. (Yes, named for the portals in the 1994 movie and subsequent TV show. So clever.) Udinmwen observes:

“At the core of such AI-focused data centers lies the demand for extremely high-performance hardware. Industry experts expect it to house the fastest CPUs available, possibly in dense, rack-mounted workstation configurations optimized for deep learning and model training. These systems are power-hungry by design, with each server node capable of handling massive workloads that demand sustained cooling and uninterrupted energy. Wyoming state officials have embraced the project as a boost to local industries, particularly natural gas; however, some experts warn of broader implications. Even with a self-sufficient power model, a data center of this scale alters regional power dynamics. There are concerns that residents of Wyoming and its environs could face higher utility costs, particularly if local supply chains or pricing models are indirectly affected. Also, Wyoming’s identity as a major energy exporter could be tested if more such facilities emerge.”

The financial blind spot is explained in Futurism’s article “There’s a Stunning Financial Problem With AI Data Centers.” The main idea is that today’s investment will require future spending for upgrades, power, water, and communications. The result is that most of these “home run” swings will result in lousy batting averages and maybe become a hot dog vendor at the ball park adjacent the humming, hot structures.

Cynthia Murrell, September 2, 2025

Picking on the Zuck: Now It Is the AI Vision

September 1, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

Hey, the fellow just wanted to meet girls on campus. Now his life work has become a negative. Let’s cut some slack for the Zuck. He is a thinking, caring family man. Imagine my shock when I read “Mark Zuckerberg’s Unbelievably Bleak AI Vision: We Were Promised Flying Cars. We Got Instagram Brain Rot.”

A person choosing to use a product the Zuck just bought conflates brain rot with a mass affliction. That’s outstanding reasoning.

The write up says:

In an Instagram video (of course) posted last week, Zuck explains that Meta’s goal is to develop “personal superintelligence for everyone,” accessed through devices like “glasses that can see what we see, hear what we hear, and interact with us throughout the day.” “A lot has been written about the scientific and economic advances that AI can bring,” he noted. “And I’m really optimistic about this.” But his vision is “different from others in the industry who want to direct AI at automating all of the valuable work”: “I think an even more meaningful impact in our lives is going to come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, be a better friend, and grow to become the person that you aspire to be.”

A person wearing the Zuck glasses will not be a “glasshole.” That individual will be a better human. Imagine taking the Zuck qualities and amplifying them like a high school sound system on the fritz. That’s what smart software will do.

The write up I saw is dated August 6, 2025, and it is hopelessly out of date. the Zuck has reorganized his firm’s smart software unit. He has frozen hiring except for a few quick strikes at competitors. And he is bringing more order to a quite well organized, efficiently run enterprise.

The big question is, “How can a write up dated August 6, 2025, become so mismatched with what the Zuck is currently doing? I don’t think I can rely on a write up with an assertion like this one:

I’ve seen the best digital minds of my generation wasted on Reels.

I have never seen a Reels, but it is obvious I am in the minority. That means that I am ill-equipped to understand this:

the AI systems his team is building are not meant to automate work but to provide a Meta-governed layer between individual human beings and the world outside of them.

This sounds great.

I would like to share three thoughts I had whilst reading this essay:

  1. Ephemeral writing becomes weirdly unrelated to the reality of the current online market in the United States
  2. The Zuck’s statements and his subsequent reorganization suggest that alignment at Facebook is a bit like a grade school student trying to fit puzzle pieces into the wrong puzzle
  3. Googles, glasses, implants — The fact that Facebook does not have a device has created a desire for a vehicle with a long hood and a big motor. Compensation comes in many forms.

Net net: One of the risks in the Silicon Valley world is that “real” is slippery. Do the outputs of “leadership” correlate with the reality of the organization?

Nope. Do this. Do that. See what works. Modern leadership. Will someone turn off those stupid flashing red and yellow alarm lights? I can see the floundering without the glasses, buzzing, and flashing.

Stephen E Arnold, September 1, 2025

More about AI and Peasants from a Xoogler Too

September 1, 2025

A former Googler predicts a rough ride ahead for workers. And would-be workers. Yahoo News shares “Ex-Google Exec’s Shocking Warning: AI Will Create 15 Years of ‘Hell’—Starting Sooner than We Think.” Only 15 years? Seems optimistic. Mo Gawdat issued his prophesy on the “Diary of a CEO” podcast. He expects “the end of white-collar work” to begin by the end of this decade. Indeed, the job losses have already begun. But the cascading effects could go well beyond high unemployment. Reporter Ariel Zilber writes:

“Without proper government oversight, AI technology will channel unprecedented wealth and influence to those who own or control these systems, while leaving millions of workers struggling to find their place in the new economy, according to Gawdat. Beyond economic concerns, Gawdat anticipates serious social consequences from this rapid transformation. Gawdat said AI will trigger significant ‘social unrest’ as people grapple with losing their livelihoods and sense of purpose — resulting in rising rates of mental health problems, increased loneliness and deepening social divisions. ‘Unless you’re in the top 0.1%, you’re a peasant,’ Gawdat said. ‘There is no middle class.’”

That is ominous. But, to hear Gawdat tell it, there is a bright future on the other side of those hellish 15 years. He believes those who survive past 2040 can look forward to a “utopian” era free from tedious, mundane tasks. This will free us up to focus on “love, community, and spiritual development.” Sure. But to get there, he warns, we must take certain steps:

“Gawdat said that it is incumbent on governments, individuals and businesses to take proactive measures such as the adoption of universal basic income to help people navigate the transition. ‘We are headed into a short-term dystopia, but we can still decide what comes after that,’ Gawdat told the podcast, emphasizing that the future remains malleable based on choices society makes today. He argued that outcomes will depend heavily on decisions regarding regulation, equitable access to technology, and what he calls the ‘moral programming’ of AI algorithms.”

We are sure government and Big Tech will get right on that. Totally doable in our current political and business climates. Meanwhile, Mo Gawdat is working on an “AI love coach.” I am not sure Mr. Gawdat is connected to the bureaucratic and management ethos of 2025. Is that why he is a Xoogler?

Cynthia Murrell, September 1, 2025

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta