Real News Outfit Finds a Study Proving That AI Has No Impact in the Workplace

May 27, 2025

Just the dinobaby operating without Copilot or its ilk.

The “real news” outfit is the wonderful business magazine Fortune, now only $1 a month. Subscribe now!

The title of the write up that caught my attention was “Study Looking at AI Chatbots in 7,000 Workplaces Finds ‘No Significant Impact on Earnings or Recorded Hours in Any Occupation.’” Out of the blocks this story caused me to say to myself, “This is another you-can’t-fire-human-writers proxy.”

Was I correct? Here are three snips. I urge you not only to subscribe to Fortune but also to read the original article and form your own opinion. Another option is to feed the story into an LLM that can access Web content and ask the software to tell you about it. If you are reading my essay, you know that a dinobaby plucks the examples, no smart software required, although as I creep toward 81, I probably should let a free AI do the thinking for me.

Here’s the first snip I captured:

Their [smart software or large language models] popularity has created and destroyed entire job descriptions and sent company valuations into the stratosphere—then back down to earth. And yet, one of the first studies to look at AI use in conjunction with employment data finds the technology’s effect on time and money to be negligible.

You thought you could destroy humans, you high technology snake oil peddlers (not the contraband Snake Oil popular in Hong Kong at this time). Think old-time carnival barkers.

Here’s the second snip about the sample:

focusing on occupations believed to be susceptible to disruption by AI

Okay, “believed” is the operative word. Who does the believing? A University of Chicago assistant professor of economics (Yay, Adam Smith. Yay, yay, Friedrich Hayek) and a graduate student. Yep, a time-honored method: a graduate student.

Now the third snip which presents the rock solid proof:

On average, users of AI at work had a time savings of 3%, the researchers found. Some saved more time, but didn’t see better pay, with just 3%-7% of productivity gains being passed on to paychecks. In other words, while they found no mass displacement of human workers, neither did they see transformed productivity or hefty raises for AI-wielding super workers.

Okay, not much payoff from time savings. Okay, not much of a financial reward for the users. Okay, nobody got fired. I thought it was hard to terminate workers in some European countries.
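Out of curiosity, here is a minimal back-of-the-envelope sketch of what those percentages imply in dollars. The $60,000 salary is my assumption for illustration; the study supplies only the 3% time savings and the 3%-7% pass-through figures quoted above.

  # Back-of-the-envelope: what a 3% time savings with a 3%-7% pass-through
  # to paychecks might mean in dollars (the salary figure is assumed, not from the study)
  annual_salary = 60_000                              # assumed illustrative salary
  time_savings = 0.03                                 # 3% time savings reported in the study
  pass_through_low, pass_through_high = 0.03, 0.07    # share of the gain reaching paychecks

  value_of_time_saved = annual_salary * time_savings
  raise_low = value_of_time_saved * pass_through_low
  raise_high = value_of_time_saved * pass_through_high

  print(f"Value of time saved: ${value_of_time_saved:,.0f} per year")
  print(f"Implied raise: ${raise_low:,.0f} to ${raise_high:,.0f} per year")
  # Under these assumptions: $1,800 of time saved, a raise of $54 to $126 a year.

Under those assumptions the AI dividend works out to roughly a dollar or two a week. Not exactly a hefty raise.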

After reading the article, I like the penultimate paragraph’s reminder that outfits like Duolingo and Shopify have begun rethinking the use of chatbots. Translation: You cannot get rid of human writers and real journalists.

Net net: A temporary reprieve will not stop the push to shift away from expensive humans who want health care and vacations. That’s the news.

Stephen E Arnold, May 27, 2025

Ten Directories of AI Tools

May 26, 2025

Just the dinobaby operating without Copilot or its ilk.

I scan DailyHunt, an India-based news summarizer powered, I think, by AI. The link I followed landed me on a story titled “Best 10 AI Directories to Promote.” I looked for a primary source, an author, and links to each service. Zippo. Therefore, I assembled the list below with my dinobaby paws and claws and provided links. Enjoy or ignore. I am weary of AI, but many others are not. I am not sure why, but that is our current reality, replete with alternative facts, cheating college professors, and oodles of crypto activity. Remember: the list is not my “best of”; I am simply presenting incomplete information in a slightly more useful format.

AIxploria https://www.aixploria.com/en/ [An actual directory. Its promotional language says “largest list”. Yeah, I believe that]

AllAITool.ai at https://allaitool.ai/

FamousAITools.ai https://famousaitools.ai/ [Another marketing outfit sucking up AI tool submissions]

Futurepedia.io https://www.futurepedia.io/ 

TheMangoAI.co https://themangoai.co/ [Not a directory, an advertisement of sorts for an AI-powered marketing firm]

NeonRev https://www.neonrev.com/ [Another actual directory. It looks like a number of Telegram bot directories]

Spiff Store https://spiff.store/ [Another directory. I have no idea how many tools are included]

StackViv https://stackviv.ai/ [An actual directory with 10,000 tools. No, I did not count them. Are you kidding me?]

TheresanAIforThat https://theresanaiforthat.com/ [You have to register to look at the listings. A turn-off for me]

Toolify.ai https://www.toolify.ai/ [An actual listing of more than 25,000 AI tools organized into categories probably by AI, not a professional indexing specialist]

When I looked at each of these “directories”, it was clear that marketing is something the AI crowd finds important. A bit more effort in the naming of some of these services might help. Just a thought. Enjoy.

Stephen E Arnold, May 26, 2025

Censorship Gains Traction at an Individual Point

May 23, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

I read a somewhat sad biographical essay titled “The Great Displacement Is Already Well Underway: It’s Not a Hypothetical, I’ve Already Lost My Job to AI For The Last Year.” The essay explains that a 40-something software engineer lost his job. Despite what strike me as heroic efforts, no offers ensued. I urge you to take a look at this essay because the push to remove humans from “work” is accelerating. I think with my 80-year-old neuro-structures that the lack of “work” will create some tricky social problems.

I spotted one passage in the essay which struck me as significant. The idea of censorship is a popular topic in central Kentucky. Quite a few groups and individuals have quite specific ideas about what books should be available for students and others to read. Here is the quote about censorship from the cited “Great Displacement” essay:

I [the author of the essay] have gone back and deleted 95% of those articles and vlogs, because although many of the ideas they presented were very forward-thinking and insightful at the time, they may now be viewed as pedestrian to AI insiders merely months later due to the pace of AI progress. I don’t want the wrong person with a job lead to see a take like that as their first exposure to me and think that I’m behind the last 24 hours of advancements on my AI takes.

Self-censorship was used to create a more timely version of the author. I have been writing articles with titles like “The Red Light on the Green Board” for years. This particular gem points out that public school teachers sell themselves and their ideas out. The prostitution analogy was intentional. I caught a bit of criticism from an educator in the public high school in which I “taught” for 18 months. Now people just ignore what I write. Thankfully my lectures about online fraud evoke a tiny bit of praise because the law enforcement, crime analysts, and cyber attorneys don’t throw conference snacks at me when I offer one of my personal observations about bad actors.

The cited essay presents a person who is deleting content in order to present an “improved” or “shaped” version of himself. I think it is important to have essays, poems, technical reports, and fiction — indeed, any human-produced artifact — available in original form. These materials, I think, will provide future students and researchers with useful material to mine for insights and knowledge.

Deletion means that information is lost. I am not sure that is a good thing. What’s notable is that the censorship is being carried out by the author for the express purpose of erasing the past and shaping an impression of the present individual. Will that work? Based on the information in the essay, it had not worked when I read the write up.

Censorship may be one facet of what the author calls a “displacement.” I am not too keen on censorship regardless of the decider or the rationalization. But I am a real dinobaby, not a 40-something dinobaby like the author of the essay.

Stephen E Arnold, May 23, 2025

AI: Improving Spam Quality, Reach, and Effectiveness

May 22, 2025

It is time to update our hoax detectors. The Register warns, “Generative AI Makes Fraud Fluent—from Phishing Lures to Fake Lovers.” What a great phrase: “fluent fraud.” We can see it on a line of hats and t-shirts. Reporter Iain Thomson consulted security pros Chester Wisniewski of Sophos and Kevin Brown at NCC Group. We learn:

“One of the red flags that traditionally identified spam, including phishing attempts, was poor spelling and syntax, but the use of generative AI has changed that by taking humans out of the loop. … AI has also widened the geographical scope of spam and phishing. When humans were the primary crafters of such content, the crooks stuck to common languages to target the largest audience with the least amount of work. But, Wisniewski explained, AI makes it much easier to craft emails in different languages.”

For example, residents of Quebec used to spot spam by its use of European French instead of the Québécois dialect. Similarly, folks in Portugal learned to dismiss messages written in Brazilian Portuguese. Now, though, AI makes it easy to replicate regional dialects. Perhaps more eerily, it also makes it easier to replicate human empathy. Thomson writes:

“AI chatbots have proven highly effective at seducing victims into thinking they are being wooed by an attractive partner, at least during the initial phases. Wisniewski said that AI chatbots can easily handle the opening phases of the scams, registering interest and appearing to be empathetic. Then a human operator takes over and begins removing funds from the mark by asking for financial help, or encouraging them to invest in Ponzi schemes.”

Great. To make matters worse, much of this is now taking place with realistic audio fakes. For example:

“Scammers might call everybody on the support team with an AI-generated voice that duplicates somebody in the IT department, asking for a password until one victim succumbs.”

Chances are good someone eventually will. Whether video bots are a threat (yet) is up for debate. Wisniewski, for one, believes convincing, real-time video deepfakes are not quite there. But Brown reports the experienced pros at his firm have successfully created them for specific use cases. Both believe it is only a matter of time before video deepfakes become not only possible but easy to create and deploy. It seems we must soon learn to approach every interaction that is not in-person with great vigilance and suspicion. How refreshing.

Cynthia Murrell, May 22, 2025

IBM CEO Replaces Human HR Workers with AskHR AI

May 21, 2025

An IBM professional asks the smart AI system, “Have I been terminated?” What if the smart software hallucinates? Yeah, surprise!

Which employees are the best to replace with AI? For IBM, ironically, it is the ones with “Human” in their title. Entrepreneur reports, “IBM Replaced Hundreds of HR Workers with AI, According to Its CEO.” But not to worry, the firm actually hired workers in other areas. We learn:

“IBM CEO Arvind Krishna told The Wall Street Journal … that the tech giant had tapped into AI to take over the work of several hundred human resources employees. However, IBM’s workforce expanded instead of shrinking—the company used the resources freed up by the layoffs to hire more programmers and salespeople. ‘Our total employment has actually gone up, because what [AI] does is it gives you more investment to put into other areas,’ Krishna told The Journal. Krishna specified that those ‘other areas’ included software engineering, marketing, and sales or roles focused on ‘critical thinking,’ where employees ‘face up or against other humans, as opposed to just doing rote process work.’”

Yes, the tech giant decided to dump those touchy-feely types in personnel. Who needs human sensitivity with issues like vacations, medical benefits, discrimination claims, or potential lawsuits? That is all just rote process work, right? The AskHR agent can handle it.

According to Wedbush analyst Dan Ives, IBM is just getting started on its metamorphosis into an AI company. What does that mean for humans in other departments? Will their jobs begin to go the way of their former colleagues’ in HR? If so, who would they complain to? Watson, are you on the job?

Cynthia Murrell, May 21, 2025

Microsoft: What Is a Brand Name?

May 20, 2025

Just the dinobaby operating without Copilot or its ilk.

I know that Palantir Technologies, a firm founded in 2003, used the moniker “Foundry” to describe its platform for government use. My understanding is that Palantir Foundry was a complement to Palantir Gotham. How different were these “platforms”? My recollection is that Palantir used home-brew software and open source to provide the raw materials from which the company shaped its different marketing packages. I view Palantir as a consulting services company with software, including artificial intelligence. The idea is that Palantir can now perform like Harris’ Analyst Notebook as well as deliver semi-custom, industrial-strength systems that offer unified solutions to thorny information challenges. I like to think of Palantir’s present product and service lineup as a Distributed Common Ground Information Service that generally works. About a year ago, Microsoft and Palantir teamed up to market Microsoft – Palantir solutions to governments via “bootcamps.” These combine training with “here’s what you too can deploy” programs designed to teach and sell the dream of on-time, on-target information for a range of government applications.

I read “Microsoft Is Now Hosting xAI’s Grok 3 Models” and noted this subtitle:

Grok 3 and Grok 3 mini are both coming to Microsoft’s Azure AI Foundry service.

Microsoft’s Foundry service. Is that Palantir’s Foundry, a mash-up of Microsoft and Palantir, or something else entirely? The name confuses me, and I wonder if government procurement professionals will be knocked off center as well. The “dream” of smart software is a way to close deals in some countries’ government agencies. However, keeping the branding straight is also important.


What does one call a Foundry with a Grok? Shakespeare suggested that it would smell as sweet no matter what the system was named. Thanks, OpenAI? Good enough.

The write up says:

At Microsoft’s Build developer conference today, the company confirmed it’s expanding its Azure AI Foundry models list to include Grok 3 and Grok 3 mini from xAI.

It is not clear if Microsoft will offer Grok as just another large language model or whether [a] Palantir will be able to integrate Grok into its Foundry product, [b] Microsoft Foundry is Microsoft’s own spin on Palantir’s service, which is deprecated to some degree, or [c] this is a way to give Palantir direct, immediate access to the Grok smart software. There are other possibilities as well; for example, Foundry is a snappy name in some government circles. Use whatever helps close deals with end-of-year money or revs up requests for new funds earmarked for smart software.
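For readers who wonder what “hosting Grok 3 on Azure AI Foundry” means in practice, here is a minimal sketch that assumes the model is exposed through the standard Azure AI Inference client for Python. The endpoint URL, API key, and the “grok-3” model name are placeholders of mine, not details confirmed in the write up.

  # Minimal sketch: calling a model hosted behind an Azure AI Foundry endpoint.
  # The endpoint, key, and model name below are illustrative placeholders.
  from azure.ai.inference import ChatCompletionsClient
  from azure.ai.inference.models import SystemMessage, UserMessage
  from azure.core.credentials import AzureKeyCredential

  client = ChatCompletionsClient(
      endpoint="https://<your-foundry-resource>.services.ai.azure.com/models",  # placeholder
      credential=AzureKeyCredential("<your-api-key>"),                          # placeholder
  )

  response = client.complete(
      model="grok-3",  # assumed model name; check the Foundry model catalog
      messages=[
          SystemMessage(content="You are a helpful assistant."),
          UserMessage(content="What does the name Foundry refer to here?"),
      ],
  )
  print(response.choices[0].message.content)

Whichever Foundry this turns out to be, the point is that the model appears as one more item on a shared shelf, selected by name at call time.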

The write up points out that Sam AI-Man may be annoyed with the addition of Grok to the Microsoft toolkit. OpenAI and Grok share some history. Maybe Microsoft is positioning itself in the role of the great mediator, a digital Henry Clay of sorts?

A handful of companies are significant influencers of smart software in some countries’ Microsoft-centric approach to platform technology. Microsoft’s software and systems are so prevalent that Israel did some verbal gymnastics to make clear that Microsoft technology was not used in the Gaza conflict. This is an assertion that I find somewhat difficult to accept.

What is going on with large language models at Microsoft? My take is:

  1. Microsoft wants to offer a store shelf stocked with LLMs so that consulting service revenue provides evergreen subscription revenue
  2. Customers who want something different, hot, or new can make a mark on the procurement shopping list and Microsoft will do its version of home delivery, not quite same day but convenient
  3. Users are not likely to know what smart software is fixing up their Miltonic prose or centering a graphic on a PowerPoint slide.

What about the brand or product name “Foundry”? Answer: Use what helps close deals perhaps? Does Palantir get a payoff? Yep.

Stephen E Arnold, May 20, 2025

Salesforce CEO Criticizes Microsoft, Predicts Split with OpenAI

May 20, 2025

Salesforce CEO Marc Benioff is very unhappy with Microsoft. Windows Central reports, “Salesforce CEO Says Microsoft Did ‘Pretty Nasty’ Things to Slack and Its OpenAI Partnership May Be a Recipe for Disaster.” Writer Kevin Okemwa reminds us Benioff recently dubbed Microsoft an “OpenAI reseller” and labeled Copilot the new Clippy. Harsh words. Then Okemwa heard Benioff criticizing Microsoft on a recent SaaStr podcast. He tells us:

“According to Salesforce CEO Marc Benioff: ‘You can see the horrible things that Microsoft did to Slack before we bought it. That was pretty bad and they were running their playbook and did a lot of dark stuff. And it’s all gotten written up in an EU complaint that Slack made before we bought them.’ Microsoft has a long-standing rivalry with Slack. The messaging platform accused Microsoft of using anti-competitive techniques to maintain its dominance across organizations, including bundling Teams into its Microsoft Office 365 suite.”

But, as readers may have noticed, Teams is no longer bundled into Office 365. Score one for Salesforce. The write-up continues:

“Marc Benioff further indicated that Microsoft’s treatment of Slack was ‘pretty nasty.’ He claimed that the company often employs a similar playbook to gain a competitive advantage over its rivals while referencing ‘browser wars’ with Netscape and Internet Explorer in the late 1990s.”

How did that one work out? Not well for the once-dominant Netscape. Benioff is likely referring to Microsoft’s dirty trick of making IE 1.0 free with Windows. This does seem to be a pattern for the software giant. In the same podcast, the CEO predicts a split between Microsoft and OpenAI. It is a recent theme of his. Okemwa writes:

“Over the past few months, multiple reports and speculations have surfaced online suggesting that Microsoft’s multi-billion-dollar partnership with OpenAI might be fraying. It all started when OpenAI unveiled its $500 billion Stargate project alongside SoftBank, designed to facilitate the construction of data centers across the United States. The ChatGPT maker had previously been spotted complaining that Microsoft doesn’t meet its cloud computing needs, shifting blame to the tech giant if one of its rivals hit the AGI benchmark first. Consequently, Microsoft lost its exclusive cloud provider status but retains the right of refusal to OpenAI’s projects.”

Who knows how long that right of refusal will last? Microsoft itself seems to be preparing for a future without its frenemy. Will Benioff crow when the partnership is completely destroyed? What will he do if OpenAI buys Chrome and pushes forward with its “everything” app?

Cynthia Murrell, May 20, 2025

Behind Microsoft’s Dogged Copilot Push

May 20, 2025

Writer Simon Batt at XDA foresees a lot of annoyance in Windows users’ future. “Microsoft Will Only Get More Persistent Now that Copilot has Plateaued,” he predicts. Yes, Microsoft has failed to attract as many users to Copilot as it had hoped. It is as if users see through the AI hype. According to Batt, the company famous for doubling down on unpopular ideas will now pester us like never before. This can already be seen in the new way Microsoft harasses Windows 10 users. While it used to suggest every now and then that such users purchase a Windows 11-capable device, it now specifically touts Copilot+ machines.

Batt suspects Microsoft will also relentlessly push other products to boost revenue. Especially anything it can bill monthly. Though Windows is ubiquitous, he notes, users can go years between purchases. Many of us, we would add, put off buying a new version until left with little choice. (Any XP users still out there?) He writes:

“When ChatGPT began to take off, I can imagine Microsoft seeing dollar signs when looking at its own assistant, Copilot. They could make special Copilot-enhanced devices (which make them money) that run Copilot locally and encourage people to upgrade to Copilot Pro (which makes them money) and perhaps then pay extra for the Office integration (which makes them money). But now that golden egg hasn’t panned out like Microsoft wants, and now it needs to find a way to help prop up the income while it tries to get Copilot off the ground. This means more ads for the Microsoft Store, more ads for its game store, and more ads for Microsoft 365. Oh, and let’s not forget the ads within Copilot itself. If you thought things were bad now, I have a nasty feeling we’re only just getting started with the ads.”

And they won’t stop, he expects, until most users have embraced Copilot. Microsoft may be creeping toward some painful financial realities.

Cynthia Murrell, May 20, 2025

Grok and the Dog Which Ate the Homework

May 16, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

I remember the Tesla full self-driving service. Is that available? I remember the big SpaceX rocket ship. Are those blowing up after launch? I now have to remember an “unauthorized modification” to xAI’s smart software Grok. Wow. So many items to tuck into my 80-year-old brain.

I read “xAI Blames Grok’s Obsession with White Genocide on an Unauthorized Modification.” Do I believe this assertion? Of course, I believe everything I read on the sad, ad-choked, AI content bedeviled Internet.

Let’s look at the gems of truth in the report.

First, what is an unauthorized modification of a complex software system humming along happily in Silicon Valley and — of all places — Memphis, a lovely town indeed? The unauthorized modification — whatever that is — caused a “bug in its AI-powered Grok chatbot.” If I understand this, a savvy person changed something he, she, or it was not supposed to modify. That change then caused a “bug.” I thought Grace Hopper nailed the idea of a “bug” when she pulled an insect from one of the dinobaby’s favorite systems, the Harvard Mark II. Are there insects at the X shops? Are these unauthorized insects interacting with unauthorized entities making changes that propagate more bugs? Yes.

Second, the malfunction occurs when “@grok” is used as a tag. I believe this because the “unauthorized modification” fiddled with the user mappings and jiggled scripts to allow the “white genocide” content to appear. This is definitely not hallucination; it is an “unauthorized modification.” (Did you know that the version of Grok available via x.com cannot return information from X.com (formerly Twitter) content?) Strange? Of course not.

Third, I know that Grok, xAI, and the other X entities have “internal policies and core values.” Violating these is improper. The company — like other self-regulated entities — “conducted a thorough investigation.” Absolutely. Coders at X are well equipped to perform investigations. That’s why X.com personnel are in such demand as advisors to law enforcement and cyber fraud agencies.

Finally, xAI is going to publish system prompts on Microsoft GitHub. Yes, that will definitely curtail the unauthorized modifications and bugs at X entities. What a bold solution.

The cited write up is definitely not on the same page as this dinobaby. The article reports:

A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.

This negative report may be expanded to make the case that an exploding rocket or a wonky full self-driving vehicle is not safe. Everyone must believe X outfits. The company is a paragon of veracity, excellent engineering, and delivering exactly what it says it will provide. That is the way you must respond.

Stephen E Arnold, May 16, 2025

Google Advertises Itself

May 16, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

With search traffic zipping right along, one would think that Google would be able to use its own advertising system to get its AI message out, wouldn’t one? Answer: Nope. Google is advertising its smart software on Techmeme, a semi-useful headline aggregator. I spotted the advertisement on May 9, 2025.

The link in the advertisement points to a sponsored post that wants the user to log in. Whatever happened to that single sign-on, Google? Also, the headline is “Meet Gemini, Your Personal AI Assistant.” I thought that Google had “won” the AI marketing wars. If that assertion were true, why is Google advertising its service on a news headline outfit? Perhaps the advertisement is a tacit admission that Eddy Cue’s “traffic is down” comment and the somewhat surprising revelations by Cloudflare’s Big Dog in “Bernard L. Schwartz Annual Lecture With Matthew Prince of Cloudflare” contain tiny nuggets of useful information; namely, traditional Google search is losing traction. In parallel, Google’s Gemini 2.0 Flash (quite a moniker) is losing the consumer sector to OpenAI and Sam AI-Man. If true, the Google may face some headwinds in the last half of 2025. There are the legal hassles and the EU’s ka-ching method for extracting cash from the Google. Now an ominous cloud is in the sky: Google has to advertise its Gemini 2.0 Flash on a news aggregation site, presumably to get traffic. Plus, the Google wants to know if the ad on Techmeme is working. I thought Google’s advertising analytics system had hard data about the magnetism of specific sites. That’s part of the mysterious “quality” score I described more than a decade ago in my The Google Legacy. Taking my simplistic, uninformed, dinobaby view of Google’s advertising effort, I would suggest:

  1. The signals about declining search traffic warrant attention. SEO wizards, Google’s ad partners, and its own ad wizards depend on what once was limitless search traffic. If that erodes, those infrastructure costs will become a bit of a challenge. Profits and jobs depend on mindless queries.
  2. Google’s reaction to these signals indicates that the company’s “leadership” knows that there is trouble in paradise. The terse statement responding to the Cue comment about a decline in Apple-to-Google search traffic and this itty-bitty ad are not accidents of fate. The Google once controlled fate. Now the fabled company is in a sticky spot like Sisyphus.
  3. The irony of Google’s problem stems from its own Transformer innovation, released to open source. Google may be learning that its uphill battle is of its own creation. Nice work, “leadership.”

Net net: In 2025, we have the makings of a Greek tragedy. Will a 21st-century Aeschylus capture the rise and fall of god-like entities? Probably not, but we will have tiny tombstone ads and Cue quips.

Stephen E Arnold, May 16, 2025
