AI and Work: Just the Ticket for Monday Morning

May 20, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Well, here’s a cheerful essay for the average worker in a knowledge industry. “If Your Work’s Average, You’re Screwed It’s Over for You” is the ideal essay to kick off a new work week. The source of the write up is Digital Camera World. I thought traditional and digital cameras were yesterday’s news. Therefore, I surmise the author of the write up misses the good old days of Kodak film, chemicals, and really expensive retouching.


How many US government professionals will find themselves victims of good enough AI? Answer: More than the professional photographers. Thanks, MSFT Copilot. Good enough, a standard your security systems seem to struggle to achieve.

What does the camera-focused (yeah, lame pun) essay report? Consider this passage:

there’s one thing that only humans can do…

Okay, one thing. I give up. What’s that? Create other humans? Write poetry? Take fentanyl and lose the ability to stand up for hours? Captain a boat near orcas who will do what they can to sink the vessel? Oh, well. What’s that one thing?

But I think the thing that AI is going to have an impossible job of achieving is that last 1% that stands between everything [else] and what’s great. I think that that last 1%, only a human can impart that.

AI does the mediocre. Humans, I think, do the exceptional. The logic seems to be that only someone in the top tier of humans will have a job. Everyone else will be standing in line for basic income checks, pursuing crime, or reading books. Strike that. Scrolling social media. No doom required. Those not in the elite will know doom first hand.

Here’s another passage to bring some zip to a Monday morning:

What it’s [smart software] going to do is, if your work’s average, you’re screwed. It’s [having a job] over for you. Be great, because AI is going to have a really hard time being great itself.

Observations? Just that cost cutting may be Job One.

Stephen E Arnold, May 20, 2024

Flawed AI Will Still Take Jobs

May 16, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Shocker. Organizations are using smart software which [a] operates in a way its creators cannot explain, [b] makes up information, and [c] appears to be dominated by a handful of “above the law” outfits. Does this characterization seem unfair? If so, stop reading. If it seems anchored in reality, you may find my comments about jobs for GenX, GenY or GenWhy?, millennials, and Alphas (I think this is what marketers call wee lads and lasses) somewhat in line with the IMF’s view of AI.


The answer is, “Your daughter should be very, very intelligent and very, very good at an in-demand skill. If she is not, then it is doom scrolling for sure.” Thanks, MSFT Copilot. Do your part for the good of mankind today.

“Artificial Intelligence Hitting Labour Forces Like a Tsunami – IMF Chief” screws up the metaphor. A tsunami builds, travels, and dissipates. I am not sure what the headline writer thinks will dissipate in AI land. Jobs, for sure. But AI seems to have some sticking power.

What does the IMF say? Here’s a bit of insight:

Artificial intelligence is likely to impact 60% of jobs in advanced economies and 40% of jobs around the world in the next two years…

So what? The IMF Big Dog adds:

“It could bring tremendous increase in productivity if we manage it well, but it can also lead to more misinformation and, of course, more inequality in our society.”

Could. I think it will, but only for those who know their way around AI and sit in the tippy top of the smart people pile. ATM users, TikTok consumers, and those who think school is stupid may not emerge as winners.

I find it interesting to consider how a two-tier society in the US and Western Europe will manifest. What will the people who do not have jobs do? Volunteer at the local animal shelter, pick up trash, or just kick back? Yeah, that’s fun.

What if one looks back over the last 50 years? When I grew up, my father had a job. My mother worked at home. I went to school. The text books were passed along year to year. The teachers grouped students by ability and segregated some students into an “advanced” track. My free time was spent outside “playing” or inside reading. When I was 15, I worked as a car hop. No mobile phones. No computer. Just radio, a record player, and a crappy black-and-white television which displayed fuzzy programs. The neighbors knew me and the other “kids.” From my eighth grade class, everyone went to college after high school. In my high school class of 1962, everyone was thinking about an advanced degree. Social was something a church sponsored. Its main feature was ice cream. After getting an advanced degree in 1965, I believe, I got a job because someone heard me give a talk about indexing Latin sermons and said, “We need you.” Easy.

A half century later, what is the landscape? AI is eliminating jobs. Many of these are intermediating jobs like doing email spam for a PR firm’s client or doing legal research. In the future, knowledge work will move up the Great Chain of Being. Most won’t be able to climb to a rung with decent pay, some reasonable challenges, and a bit of power.

Let’s go back to the somewhat off-the-mark tsunami metaphor. AI is going to become more reliable. The improvements will continue. Think about what an IBM PC looked like in the 1980s. Now think about the MacBook Air you or your colleague has. They are similar but not equivalent. What happens when AI systems and methods keep improving? That’s tough to predict. What’s obvious is that the improvements and innovations in smart software are not a tsunami.

I liken it more to the continuous pressure in a petroleum cracking facility. Work is placed in contact with smart software, and stuff vaporizes. The first component to be consumed is human jobs. Next, the smart software will transform “work” itself. Most work is busy work; smart software wants “real” work. As long as the electricity stays on, the impact of AI will be on-going. AI will transform. A tsunami crashes, makes a mess, and then is entropified. AI is a different and much hardier development.

The IMF is on the right track; it is just not making clear how much change is now underway.

Stephen E Arnold, May 16, 2024

AdVon: Why So Much Traction and Angst?

May 14, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

AdVon. AdVon. AdVon. Okay, the company is in the news. Consider this write up: “Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry.” So why meet AdVon? The subtitle explains:

Remember that AI company behind Sports Illustrated’s fake writers? We did some digging — and it’s got tendrils into other surprisingly prominent publications.

Let’s consider the question: Why is AdVon getting traction among “prominent publications” or any other outfit wanting content? The answer is not far to seek: cutting costs, doing more with less, getting more clicks, getting more money. This is not a multiple choice test in a junior college business class. This is common sense. Smart software makes it possible for those with some skill in the alleged art of prompt crafting and automation to sell “stories” to publishers for less than those publishers can produce the stories themselves.


The future continues to arrive. Here smart software is saying “Hasta la vista” to the human information generator. The humanoid looks very sad. Neither the AI software nor its owner cares. Revenue and profit are more important as long as the top dogs get paid big bucks. Thanks, MSFT Copilot. Working on your security systems or polishing the AI today?

Let’s look at the cited article’s peregrination to the obvious: AI can reduce costs of “publishing”. Plus, as AI gets more refined, the publications themselves can be replaced with scripts.

The write up says:

Basically, AdVon engages in what Google calls “site reputation abuse”: it strikes deals with publishers in which it provides huge numbers of extremely low-quality product reviews — often for surprisingly prominent publications — intended to pull in traffic from people Googling things like “best ab roller.” The idea seems to be that these visitors will be fooled into thinking the recommendations were made by the publication’s actual journalists and click one of the articles’ affiliate links, kicking back a little money if they make a purchase. It’s a practice that blurs the line between journalism and advertising to the breaking point, makes the web worse for everybody, and renders basic questions like “is this writer a real person?” fuzzier and fuzzier.

Okay. So what?

In spite of the article being labeled as “AI” in AdVon’s CMS, the Outside Inc spokesperson said the company had no knowledge of the use of AI by AdVon — seemingly contradicting AdVon’s claim that automation was only used with publishers’ knowledge.

Okay, corner cutting is part of AdVon’s business model. What about the “minimum viable product” or “good enough” approach to everything from self-driving auto baloney to Boeing aircraft doors? Is AI use somehow exempt from what is now standard business practice? Major academic figures take short cuts. Now an outfit with some AI skills is supposed to operate like a hybrid of Joan of Arc and Mother Theresa? Sure.

The write up states:

In fact, it seems that many products only appear in AdVon’s reviews in the first place because their sellers paid AdVon for the publicity. That’s because the founding duo behind AdVon, CEO Ben Faw and president Eric Spurling, also quietly operate another company called SellerRocket, which charges the sellers of Amazon products for coverage in the same publications where AdVon publishes product reviews.

To me, AdVon is using a variant of the Google type of online advertising concept. The bar room door swings both ways. The customer pays to enter, and the customer pays to leave. Am I surprised? Nope. Should anyone be? How about a government consumer protection watch dog? Tip: Don’t hold your breath. New York City tested a chatbot that provided information that violated city laws.

The write up concludes:

At its worst, AI lets unscrupulous profiteers pollute the internet with low-quality work produced at unprecedented scale. It’s a phenomenon which — if platforms like Google and Facebook can’t figure out how to separate the wheat from the chaff — threatens to flood the whole web in an unstoppable deluge of spam. In other words, it’s not surprising to see a company like AdVon turn to AI as a mechanism to churn out lousy content while cutting loose actual writers. But watching trusted publications help distribute that chum is a unique tragedy of the AI era.

The kicker is that the company owning the publication “exposing” AdVon used AdVon.

Let me offer several observations:

  1. The research reveals what will become an increasingly widespread business practice. But the practice of using AI to generate baloney and spam variants is not the future. It is now.
  2. The demand for what appears to be old fashioned information generation is high. The cost of producing this type of information is going to force those who want to generate information to take short cuts. (How do I know? How about the president of Stanford University who took short cuts. That’s how. When a university president muddles forward for years and gets caught by accident, what are students learning? My answer: Cheat better than that.)
  3. AI diffusion is like gerbils. First, you have a couple of cute gerbils in your room. As a nine year old, you think those gerbils are cute. Then you have more gerbils. What do you do? You get rid of the gerbils in your house. What about the gerbils? Yeah, they are still out there. One can see gerbils; it is more difficult to see the AI gerbils. The fix is not the plastic bag filled with gerbils in the garbage can. The AI gerbils are relentless.

Net net: Adapt and accept that AI is here, reproducing rapidly, and evolving. The future means “adapt.” One suggestion: Hire McKinsey & Co. to help your firm make tough decisions. That sometimes works.

Stephen E Arnold, May 14, 2024

Big Tech and Their Software: The Tent Pole Problem

May 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I remember a Boy Scout camping trip. I was a Wolf Scout at the time, and my “pack” had the task of setting up our tent for the night. The scout master was Mr. Johnson, and he left it to us. The weather did not cooperate; the tent pegs pulled out in the wind. The center tent pole broke. We stood in the rain. We knew the badge for camping was gone, just like a dry place to sleep. Failure. Whom could we blame? I suggested, “McKinsey & Co.” I had learned that third-parties were usually fall guys. No one knew what I was talking about.


Okay, ChatGPT, good enough.

I thought about the tent pole failure, the miserable camping experience, and the need to blame McKinsey or at least an entity other than ourselves. The memory surfaced as I read “Laws of Software Evolution.” The write up sets forth some ideas which may not be firm guidelines like those articulated by the World Court, but they are about as enforceable.

Let’s look at the laws explicated in the essay.

The first law is that software exists to support a real-world task. As a result (a corollary maybe?), the software has to evolve. That is the old chestnut “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” The problem is change, which consumes money and time. As a result, original software is wrapped, peppered with calls to snappy new modules designed to fix up or extend the original software.
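The wrap-and-extend pattern described above can be sketched in a few lines of Python. The function names and the auditing requirement below are hypothetical, invented only to illustrate the layering:

```python
# A legacy routine nobody dares to rewrite.
def legacy_price_report(items):
    # Original logic: sum prices, return a plain string.
    total = sum(item["price"] for item in items)
    return f"TOTAL: {total:.2f}"

# Years later, a "snappy new module" is bolted on instead of a redesign.
def audited_price_report(items, audit_log):
    # The wrapper adds new behavior (auditing) around the untouched original.
    report = legacy_price_report(items)
    audit_log.append(report)  # new requirement satisfied by wrapping
    return report

log = []
print(audited_price_report([{"price": 10.0}, {"price": 2.5}], log))
```

Each new requirement tends to add another wrapper. The core never changes, and the complexity quietly accrues.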

The second law is that when changes are made, the software construct becomes more complex. Complexity is what humans do. A true master makes certain processes simple. Software has artists, poets, and engineers with vision. Simple may not be a key component of the world the programmer wants to create. Thus, increasing complexity creates surprises like unknown dependencies, sluggish performance, and a giant black hole of costs.

The third law is not explicitly called out like Laws One and Two. Here’s my interpretation of the “lurking law,” as I have termed it:

Code can be shaped and built upon.

My reaction to this essay is positive, but the link to evolution eludes me. The one issue I want to raise is that once software is built, deployed, and fiddled with, it is like a river pier built by Roman engineers. Moving the pier or fixing it so it will persist is a very, very difficult task. At some point, even the Roman concrete will weather away. The bridge or structure will fall down. Gravity wins. I am okay with software devolution.

The future, therefore, will be stuffed with software breakdowns. The essay makes a logical statement:

… we should embrace the malleability of code and avoid redesign processes at all costs!

Sorry. Won’t happen. Woulda, shoulda, and coulda cannot do the job.

Stephen E Arnold, May 1, 2024

AI Versus People? That Is Easy. AI

April 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I don’t like to include management information in Beyond Search. I have noticed more stories about management decisions related to information technology. Here’s an example of my breaking my own editorial policies. Navigate to “SF Exec Defends Brutal Tech Trend: Lay Off Workers to Free Up Cash for AI.” I noted this passage:

Executives want fatter pockets for investing in artificial intelligence.


Okay, Mr. Efficiency and mobile phone betting addict, you have reached a logical decision. Why are there no pictures of friends, family, and achievements in your window office? Oh, that’s MSFT Copilot’s work. What’s that say?

I think this means that “people resources” can be dumped in order to free up cash to place bets on smart software. The write up explains the management decision making this way:

Dropbox’s layoff was largely aimed at freeing up cash to hire more engineers who are skilled in AI.

How expensive is AI for the big technology companies? The write up provides this factoid which comes from the masterful management bastion:

Google AI leader Demis Hassabis said the company would likely spend more than $100 billion developing AI.

Smart software is the next big thing. Big outfits like Amazon, Google, Facebook, and Microsoft believe it. Venture firms appear to be into AI. Software development outfits are beavering away with smart technology to make their already stellar “good enough” products even better.

Money buys innovation until it doesn’t. The reason is that the time from roll out to saturation can be difficult to predict. Look how long it has taken smart phones to become marketing exercises, not technology demonstrations. How significant is saturation? Look at the machinations at Apple or at CPUs that are increasingly difficult to differentiate for a person who wants to use a laptop for business.

There are benefits. These include:

  • Those getting fired can say, “AI RIF’ed me.”
  • Investments in AI can perk up investors.
  • Jargon-savvy consultants can land new clients.
  • Leadership teams can rise above termination because these wise professionals are the deciders.

A few downsides can be identified despite the immaturity of the sector:

  • Outputs can be incorrect, leading to what might be called poor decisions. (Sorry, Ms. Smith, your child died because the smart dosage system malfunctioned.)
  • A large, no-man’s land is opening between the fast moving start ups who surf on cloud AI services and the behemoths providing access to expensive infrastructure. Who wants to operate in no-man’s land?
  • The lack of controls on smart software guarantees that bad actors will have ample tools with which to innovate.
  • Knock-on effects are difficult to predict.

Net net: AI may be diffusing more quickly and in ways some experts choose to ignore… until they are RIF’ed.

Stephen E Arnold, April 25, 2024

Kicking Cans Down the Street Is Not Violence. Is It a Type of Fraud Perhaps?

April 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Ah, spring, when young men’s fancies turn to thoughts of violence. Forget the Iran-Israel dust up. Forget the Russia special operation. Think about this Bloomberg headline:

Tech’s Cash Crunch Sees Creditors Turn ‘Violent’ With One Another


Thanks, ChatGPT. Good enough.

Will this be drones? Perhaps a missile or two? No. I think it will be marketing hoo hah. News releases may not inflict mortal injury (although someone probably has died from bad publicity), but the rhetorical tone seems — how should we phrase it — over the top maybe?

The write up says:

Software and services companies are in the spotlight after issuing almost $30 billion of debt that’s classed as distressed, according to data compiled by Bloomberg, the most in any industry apart from real estate.

How do wizards of finance react to this “risk”? Answer:

“These two phenomena, coupled with the covenant-lite nature of leveraged loans today, have been the primary drivers of the creditor-on-creditor violence we’re seeing,” he [Jason Mudrick, founder of distressed credit investor Mudrick Capital] said.

Shades of the Sydney slashings or vehicle fires in Paris.

Here’s an example:

One increasingly popular maneuver these days, known as non-pro rata uptiering, sees companies cut a deal with a small group of creditors who provide new money to the borrower, pushing others further back in the line to be repaid. In return, they often partake in a bond exchange in which they receive a better swap price than other creditors.

Does this sound like “Let’s kick the can down the road”? Not articulated is the idea, “Let’s see what happens. If we fail, our management team is free to bail out.”
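A toy model of the maneuver, with invented creditor names and a grossly simplified notion of seniority (real uptiering turns on contractual detail this sketch ignores):

```python
# Creditors start pari passu: equal seniority, equal claim on repayment.
creditors = [
    {"name": "Fund A", "seniority": 1},
    {"name": "Fund B", "seniority": 1},
    {"name": "Fund C", "seniority": 1},
]

def uptier(creditors, favored):
    # The favored group provides new money and jumps the queue (seniority 0);
    # everyone else is pushed further back in line (seniority 2).
    for c in creditors:
        c["seniority"] = 0 if c["name"] in favored else 2

uptier(creditors, favored={"Fund A"})
repayment_order = [c["name"] for c in sorted(creditors, key=lambda c: c["seniority"])]
print(repayment_order)  # Fund A is now first in line to be repaid
```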

Nifty, right?

Financial engineering is a no harm, no foul game for some. Those who lose money? Yeah, too bad.

Stephen E Arnold, April 25, 2024

Paranoia or Is it Parano-AI? Yes

April 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I get a kick out of the information about the future impact of smart software. If those writing about the downstream consequences of artificial intelligence were on the beam, those folks would be camping out in one of those salubrious Las Vegas casinos. They are not. Thus, the prognostications provide more insight into the authors’ fears in my opinion.


OpenAI produced this good enough image of a Top Dog reading reports about AI’s taking jobs from senior executives. Quite a messy desk, which is an indicator of an inferior executive mindset.

Here’s an example: “Even the Boss Is Worried! Hundreds of Chief Executives Fear AI Could Steal Their Jobs Too.” The write up is based on a study conducted by Censuswide for AND Digital. Here we go, fear lovers:

  1. A “jobs apocalypse”: “AI experts have predicted a 50-50 chance machines could take over all our jobs within a century.”
  2. Scared yet? “Nearly half – 43 per cent – of bosses polled admitted they too were worried AI could steal their job.”
  3. Ignorance is bliss: “44 per cent of global CEOs did not think their staff were ready to handle AI.”
  4. Die now? “A survey of over 2,700 AI researchers in January meanwhile suggested AI could well be ‘better and cheaper’ than humans in every profession by 2116.”

My view is that the diffusion of certain types of smart software will occur over time. If the technology proves it can cut costs and be good enough, then it will be applied where the benefits are easy to identify and monitor. When something goes off the rails, the smart software will suffer a setback. Changes will be made, and the “Let’s try again” approach will kick in. Can motivated individuals adapt? Sure. The top folks will adjust and continue to perform. The laggards will get an “Also Participated” ribbon and collect money by busking, cleaning houses, or painting houses. The good old Darwinian principles don’t change. A digital panther can kill you just as dead as a real panther.

Exciting? Not for a surviving dinobaby.

Stephen E Arnold, April 22, 2024

AI RIFing Financial Analysts (Juniors Only for Now). And Tomorrow?

April 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Bill Gates Worries AI Will Take His Job, Says, ‘Bill, Go Play Pickleball, I’ve Got Malaria Eradication’.” Mr. Gates is apparently about to become a farmer. He is busy buying land. He took time out from his billionaire work today to point out that AI will nuke lots of jobs. What type of jobs will be most at risk? Amazon seems to be focused on using robots and smart software to clear out expensive, unreliable humans.

But the profession facing what might be called an interesting future is the financial analyst. “AI Is Coming for Wall Street: Banks Are Reportedly Weighing Cutting Analyst Hiring by Two-Thirds” asserts:

Incoming classes of junior investment-banking analysts could end up being cut by as much as two-thirds, some of the people suggested, while those brought on board could fetch lower salaries, on account of their work being assisted by artificial intelligence.

Okay, it is other people’s money, so no big deal if the smart software hallucinates as long as there is churn and percentage scrapes. But what happens when the “senior” analysts leave or get fired? Will smart software replace them, or is the idea that junior analysts who are “smart” will move up and add value “smart” software cannot?


Thanks, OpenAI. This is a good depiction of the “best of the best” at a major Wall Street financial institution after learning their future was elsewhere.

The article points out:

The consulting firm Accenture has an even more extreme outlook for industry disruption, forecasting that AI could end up replacing or supplementing nearly 75% of all working hours in the banking sector.

Let’s look at the financial sector’s focus on analysts. What other industrial sectors use analysts? Here are several that my team and I track:

  1. Intelligence (business and military)
  2. Law enforcement
  3. Law
  4. Medical subrogation
  5. Consulting firms (niche, general, and technical)
  6. Publishing.

If the great trimming at McKinsey and the big New York banks deliver profits, how quickly will AI-anchored software and systems diffuse across organizations?

The answer to the question is, “Fast.”

Stephen E Arnold, April 19, 2024

Google Gem: Arresting People Management

April 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have worked for some well-managed outfits: Halliburton, Booz Allen, Ziff Communications, and others in my 55-year career. The idea that employees at Halliburton Nuclear (my assignment) would occupy the offices of a senior officer like Eugene Saltarelli was inconceivable. (Mr. Saltarelli sported a facial scar. When asked about the disfigurement, he would stare at the interlocutor and ask, “What scar?” Do you want to “take over” his office?) Another of my superiors at a firm in New York had a special method of shaping employee behavior. This professional did nothing to suppress rumors that two of his wives drowned during “storms” after falling off his sail boat. Did I entertain taking over his many-windowed office in Manhattan? Answer: Are you sure you internalized the anecdote?


Another Google management gem glitters in the public spot light.

But at the Google, life seems to be different, maybe a little more frisky absent psychological behavior controls. I read “Nine Google Workers Get Arrested After Sit-In Protest over $1.2B Cloud Deal with Israel.” The main idea seems to be that someone at Google sold cloud services to the Israeli government. Employees apparently viewed the contract as bad, wrong, stupid, or some combination of attributes. The fix involved a 1960s-style sit-in. After a period of time elapsed, someone at Google called the police. The employee-protesters were arrested.

I recall hearing years ago that Google faced a similar push back about a contract with the US government. To be honest, Google has generated so many human resource moments, I have a tough time recalling each. A few are Mt. Everests of excellence; for example, the termination of Dr. Timnit Gebru. This Googler had the nerve to question the bias of Google’s smart software. She departed. I assume she enjoyed the images of biased signers of documents related to America’s independence and multi-ethnic soldiers in the World War II German army. Bias? Google thinks not, I guess.

The protest occurs as the Google tries to cope with increased market pressure and the tough-to-control costs of smart software. The quick fix is to nuke or RIF employees. “Google Lays Off Workers As Part of Pretty Large-Scale Restructuring” reports by citing Business Insider:

Ruth Porat, Google’s chief financial officer, sent an email to employees announcing that the company would create “growth hubs” in India, Mexico and Ireland. The unspecified number of layoffs will affect teams in the company’s finance department, including its treasury, business services and revenue cash operations units.

That looks like off-shoring to me. The idea was a cookie cutter solution spun up by blue chip consulting companies 20, maybe 30 years ago. On paper, the math is more enticing than a new Land Rover and about as reliable. A state-side worker costs X fully loaded with G&A, benefits, etc. An off-shore worker costs X minus Y. If the delta means cost savings, go for it. What’s not to like?
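The consultants’ cookie cutter arithmetic reduces to a few lines. The overhead rates and the savings figure below are invented for illustration only:

```python
# Fully loaded cost of a worker: salary plus G&A, benefits, etc.
def fully_loaded(salary, ga_rate=0.25, benefits_rate=0.30):
    # Hypothetical overhead rates; real G&A and benefits vary by firm.
    return salary * (1 + ga_rate + benefits_rate)

stateside = fully_loaded(100_000)   # X, the state-side cost
offshore = stateside - 80_000       # X minus Y, the off-shore cost
delta = stateside - offshore        # if the delta is positive, "go for it"

print(stateside, offshore, delta)
```

On paper the delta always looks good; the model conveniently omits coordination costs, quality risk, and the Land Rover’s repair bills.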

According to a source cited in the New York Post:

“As we’ve said, we’re responsibly investing in our company’s biggest priorities and the significant opportunities ahead… To best position us for these opportunities, throughout the second half of 2023 and into 2024, a number of our teams made changes to become more efficient and work better, remove layers and align their resources to their biggest product priorities.”

Yep, align. That senior management team has a way with words.

Will those who are in fear of their jobs join in the increasingly routine Google employee protests? Will disgruntled staff sandbag products and code? Will those who are terminated write tell-alls about their experiences at an outfit operating under Code Red for more than a year?

Several observations:

  1. Microsoft’s quite effective push of its AI products and services continues. In certain key markets like New York City and the US government, Google is on the defensive. Hint: Microsoft has the advantage, and the Google is struggling to catch up.
  2. Google’s management of its personnel seems to create the wrong type of news. Example: staff arrests. Is that part of Peter Drucker’s management advice?
  3. The Google leadership team appears to lack the ability to do its job in a quiet, effective, positive, and measured way.

Net net: The online ad money machine keeps running. But if the investigations into Google’s business practices get traction, Google will have additional challenges to face. The Sundar & Prabhakar Comedy team should make a TikTok-type how-to video about human resource management. I would prefer a short video about the origin story for the online advertising method which allowed Google to become a fascinating outfit.

Stephen E Arnold, April 18, 2024

AI Will Take Jobs for Sure: Money Talks, Humans Walk

April 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Report Shows Managers Eager to Replace or Devalue Workers with AI Tools

Bosses have had it with the worker-favorable labor market that emerged from the pandemic. Fortunately for them, there is a new option that is happy to be exploited. We learn from TechSpot that a recent “Survey Reveals Almost Half of All Managers Aim to Replace Workers with AI, Could Use It to Lower Wages.” The report comes from a firm that did its best to spin the results as a trend toward collaboration, not pink slips. Nevertheless, the numbers seem to back up worker concerns. Writer Rog Thubron summarizes:

“A report by [the vendor], which makes AI-powered presentation software, surveyed over 3,000 managers about AI tools in the workplace, how they’re being implemented, and what impact they believe these technologies will have. The headline takeaway is that 41% of managers said they are hoping that they can replace employees with cheaper AI tools in 2024. … The rest of the survey’s results are just as depressing for worried workers: 48% of managers said their businesses would benefit financially if they could replace a large number of employees with AI tools; 40% said they believe multiple employees could be replaced by AI tools and the team would operate well without them; 45% said they view AI as an opportunity to lower salaries of employees because less human-powered work is needed; and 12% said they are using AI in hopes to downsize and save money on worker salaries. It’s no surprise that 62% of managers said that their employees fear that AI tools will eventually cost them their jobs. Furthermore, 66% of managers said their employees fear that AI tools will make them less valuable at work in 2024.”

Managers themselves are not immune to the threat: half of them said they worry their pay will decrease, and 64% believe AI tools do their jobs better than experienced humans do. At least they are realistic. The survey’s sponsor stresses another statistic: 60% of respondents who are already using AI tools see them as augmenting, not threatening, jobs. The firm also emphasizes that the number of managers who hope to replace employees with AI decreased “significantly” since last year’s survey. Progress?

Cynthia Murrell, April 12, 2024
