Who Needs Middle Managers? AI Outfits. MBAs Rejoice
September 16, 2025
No smart software involved. Just a dinobaby’s work.
I enjoy learning about new management trends. In most cases, these hip approaches to reaching a goal using people are better than old Saturday Night Live skits with John Belushi dressed as a bee. Here’s a good one if you enjoy the blindingly obvious insights of modern management thinkers.
Navigate to “Middle Managers Are Essential for AI Success.” That’s a title for you!
The write up reports without a trace of SNL snarkiness:
31% of employees say they’re actively working against their company’s AI initiatives. Middle managers can bridge the gap.
Whoa, Nellie. I thought companies were pushing forward with AI because AI is everywhere: Microsoft Word, Google “search” (I use the term as a reminder that relevance is long gone), and cloud providers like Salesforce.com. (Yeah, I know Salesforce is working hard to get the AI thing to go, and it is doing what big companies like to do: Cut costs by terminating humanoids.)
But the guts of the modern management method is a list (possibly assisted by AI?). The article explains without a bit of tongue-in-cheek élan “ways managers can turn anxious employees into AI champions.”
Here’s the list:
- Communicate the AI vision. [My observation: Isn’t that what AI is supposed to deliver? Fewer employees, no health care costs, no retirement costs, and no excess personnel because AI is so darned effective?]
- Say, “I understand” and “Let’s talk about it.” [My observation: How long do psychological- and attitudinal-centric interactions take when there are fires to put out about an unhappy really big customer’s complaint about your firm’s product or service?]
- Explain how AI will pay off for the employee who fears AI won’t work or will cost the person his/her job. [My observation: A middle manager can definitely talk around, rationalize, and lie to make the person’s fear go away. Then the middle manager will write up the issue and forward it to HR or a superior. We don’t need a weak person on our team, right?]
- “Walk the talk.” [My observation: That’s a variant of fake it until you make it. The modern middle manager will use AI, probably realize that an AI system can output a good enough response so the “walk the talk” person can do the “walk the walk” to the parking lot to drive home after being replaced by an AI agent.]
- Give employees training and a test. [My observation: Adults love going to online training sessions and filling in the on-screen form to capture trainee responses. Get the answers wrong, and there is an automated agent pounding emails to the failing employee to report to security, turn in his/her badge, and get escorted out of the building.]
These five modern management tips or insights are LinkedIn-grade output. Who will be the first to implement these at an AI company or a firm working hard to AI-ify its operations? Millions, I would wager.
Stephen E Arnold, September 16, 2025
Common Sense Returns for Coinbase Global
September 5, 2025
No AI. Just a dinobaby working the old-fashioned way.
Just a quick dino tail slap for Coinbase. I read “Coinbase Reverses Remote Policy over North Korean Hacker Threats.” The write up says:
Coinbase has reversed its remote-first policy due to North Korean hackers exploiting fake remote job applications for infiltration. The company now mandates in-person orientations and U.S. citizenship for sensitive roles. This shift highlights the crypto industry’s need to balance flexible work with robust cybersecurity.
I strongly disagree with the cyber security angle. I think it is a return (hopefully) to common sense, not the mindless pursuit of cheap technical work and lousy management methods. Sure, cyber security is at risk when an organization hires people to do work from a far off land. The easy access to voice and image synthesis tools means that some outfits are hiring people who aren’t the people the really busy, super professional human resources person thinks were hired.
The write up points out:
North Korean hackers have stolen an estimated $1.6 billion from cryptocurrency platforms in 2025 alone, as detailed in a recent analysis by Ainvest. Their methods have evolved from direct cyberattacks to more insidious social engineering, including fake job applications enhanced by deepfakes and AI-generated profiles. Coinbase’s CEO, Brian Armstrong, highlighted these concerns during an appearance on the Cheeky Pint podcast, as covered by The Verge, emphasizing how remote-first policies inadvertently create vulnerabilities.
Close, but the North Korean angle is akin to Microsoft saying, “1,000 Russian hackers did this.” Baloney. My view is that the organized hacking operations blend smoothly with the North Korean government’s desire for free cash and the large Chinese criminal organizations operating money laundering operations from that garden spot, the Golden Triangle.
Stealing crypto is one thing. Coordinating attacks on organizations to exfiltrate high value information is a second thing. A third thing is to perform actions that meet the needs and business methods of large-scale money laundering, phishing, and financial scamming operations.
Looking at these events from the point of view of a single company, it is easy to see that cost reduction and low cost technical expertise motivated some managers, maybe those at Coinbase. But now that more information is penetrating the MBA fog that envelops many organizations, common sense may become more popular. Management gurus and blue chip consulting firms are not proponents of common sense in my experience. Coinbase may have seen the light.
Stephen E Arnold, September 5, 2025
More Innovative Google Management: Hit Delete for Middle Managers
August 28, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I remember a teacher lecturing about the Great Chain of Being. The idea was interesting. The Big Guy at the top, then not-so-important people, and at the bottom amoebae. Google, if the information in “Google Has Eliminated 35% of Managers Overseeing Small Teams in Past Year, Exec Says” is on the money, has embraced the Great Chain of Being.
The write up says:
Google has eliminated more than one-third of its managers overseeing small teams, an executive told employees last week, as the company continues its focus on efficiencies across the organization. “Right now, we have 35% fewer managers, with fewer direct reports” than at this time a year ago, said Brian Welle, vice president of people analytics and performance ….“So a lot of fast progress there.”
Yep, efficiency. Quicker decisions. No bureaucracy.
The write up includes this statement from the person at the top of the Great Chain of Being:
Google CEO Sundar Pichai weighed in at the meeting, reiterating the need for the company “to be more efficient as we scale up so we don’t solve everything with headcount.”
Will Google continue to trim out the irrelevant parts of the Great Chain of Being? Absolutely. Why not? The company has a VEP or a Voluntary Exit Program. From Googler to Xoogler in a flash and with benefits.
Several observations:
- Google continues to work hard to cope with the costs of its infrastructure
- Google has to find ways to offset the costs of that $0.47 per employee deal for US government entities
- Google must expand its ability to extract more cash from [a] advertisers and [b] users without making life too easy for competitors like Meta and lurkers waiting for a chance to tap into the online revenue from surveillance, subscriptions, and data licensing.
Logic suggests that the Great Chain of Being will evolve, chopping out layers between the Big Guy at the top and the amoebae at the bottom. What’s in the middle? AI powered systems. Management innovation speeds forward at the ageing Google.
Fear, confusion, and chaos appear to be safely firewalled with this new approach.
Stephen E Arnold, August 28, 2025
If You Want to Work at Meta, You Must Say Yes, Boss, Yes Boss, Yes Boss
August 18, 2025
No AI. Just a dinobaby working the old-fashioned way.
These giant technology companies are not very good in some situations. One example which comes to mind is the Apple car. What was the estimate? About $10 billion blown. Meta pulled a similar trick with its variant of the Google Glass. Winners.
I read “Meta Faces Backlash over AI Policy That Lets Bots Have Sensual Conversations with Children.” My reaction was, “You are kidding, right?” Nope. Not a joke. Put aside common sense, a parental instinct for appropriateness, and the mounting evidence that interacting with smart software can be a problem. What are these lame complaints?
The write up says:
According to Meta’s 200-page internal policy seen by Reuters, titled “GenAI: Content Risk Standards”, the controversial rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist.
Okay, let’s stop the buggy right here, pilgrim.
A “chief ethicist”! A chief ethicist who thought that this was okay:
An internal Meta policy document, seen by Reuters, showed the social media giant’s guidelines for its chatbots allowed the AI to “engage a child in conversations that are romantic or sensual”, generate false medical information, and assist users in arguing that Black people are “dumber than white people”.
What is an ethicist? First, it is a knowledge job. One I assume requiring knowledge of ethical thinking embodied in different big thinkers. Second, it is a profession which relies on context because what’s right for Belgium in the Congo may not be okay today. Third, the job is likely one that encourages flexible definitions of ethics. It may be tough to get another high-paying gig if one points out that the concept of sensual conversations with children is unethical.
The write up points out that an investigation is needed. Why? The chief ethicist should say, “Sorry. No way.”
Chief ethicist? A chief “yes, boss” person.
Stephen E Arnold, August 18, 2025
Google: Simplicity Is Not a Core Competency
August 18, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Telegram Messenger is a reasonably easy-to-use messaging application. People believe that it is bulletproof, but I want to ask, “Are you sure?” Then there is WhatsApp, now part of Darth Zuck’s empire. However, both of these outfits appear to be viewed as obtuse and problematic by Kremlin officials. The fix? Just ban these services. Banning online services is a popular way for a government to “control” information flow.
I read a Russian language article about an option some Russians may want to consider. The write up’s title is “How to Replace Calls on WhatsApp and Telegram. Review of the Google Meet Application for Android and iOS.”
I worked through the write up and noted this statement:
Due to the need to send invitation links Meet is not very convenient for regular calls— and most importantly it belongs to the American company Google, whose products, by definition, are under threat of blocking. Moreover, several months ago, Russian President Vladimir Putin himself called for «stifling» Western services operating in Russia, and instructed the Government to prepare a list of measures to limit them by September 1, 2025.
The bulk of the write up is a how to. In order to explain the process of placing a voice call via the Google system, PCNews presented:
- Nine screenshots
- Seven arrows across those screenshots
- One rectangular red box to call attention to something. (I couldn’t figure out what, however.)
- Seven separate steps.
How does one “do” a voice call in Telegram Messenger? Here are the steps:
- I open Telegram and select the contact with whom I want to speak
- I tap on my contact’s name
- I look for the phone call icon and tap it
- I choose “Voice Call” from the options to start an audio call. If I want to make a video call instead, I select “Video Call”
One would think that when a big company wants to do a knock-off of a service, someone would check out what Telegram does. (The write up addresses a Russian audience due to the censorship in that country.) Then the savvy wizard would figure out how to make the process better and faster and easier. Instead the clever Googlers add steps. That’s the way of the Sundar & Prabhakar Comedy Show.
Stephen E Arnold, August 18, 2025
AI Applesauce: Sweeten the Story about Muffing the Bunny
August 14, 2025
No AI. Just a dinobaby being a dinobaby.
I read “Apple CEO Tim Cook Calls AI ‘Bigger Than the Internet’ in Rare All-Hands Meeting.” I noted this passage:
In a global all-hands meeting hosted from Apple’s headquarters in Cupertino, California, CEO Tim Cook seemed to admit to what analysts and Apple enthusiasts around the world had been raising concerns about: that Apple has fallen behind competitors in the AI race. And Cook promised employees that the company will be doing everything to catch up. “Apple must do this. Apple will do this. This is sort of ours to grab.” …The AI revolution [is] “as big or bigger” than the internet.
Okay. Two companies of some significance have missed the train to AI Ville: Apple and Telegram. Both have interesting technology. Apple is far larger, but for some users Telegram is more important to their lives. One is deeply entangled in China; the other is focused on Russia and crypto.
But both have managed their firms into the same digital row boat. Apple had Siri and it was not very good. Telegram knew about AI and allowed third-party bot developers to use it, but Telegram itself dragged its feet.
Both companies are asserting that each has plenty of time. Tim Cook is talking about smart software but so far the evidence of making an AI difference is scant. Telegram, on the other hand, has aimed Nikolai Durov at AI. That wizard is working on a Telegram AI system.
But the key point is that both of these forward leaning outfits are trying to catch up. This is not keeping pace, mind. The two firms are trying to go from watching the train go down the tracks to calling an Uber to get to their respective destinations.
My take on both companies is that the “leadership” have some good reasons for muffing the AI bunny. Apple is struggling with its China “syndrome.” Will the nuclear reactor melt down, fizzle out, or blow up? Apple’s future in hardware may become radioactive.
Telegram is working under the shadow of the criminal trial lumbering toward its founder and owner Pavel Durov. More than a dozen criminal charges and a focused French judicial figure have Mr. Durov reporting a couple of times a week. To travel, he has to get a note from his new “mom.”
But well-run companies don’t let things like China dependency or 20 years in Fleury-Mérogis Prison upset a trillion-dollar firm or cause more than one billion people to worry about their free text messages and non-fungible tokens.
“Leadership,” not technology, strikes me as the problem with AI challenges. If AI is so big, why did two companies fail to get the memo? Inattention, pre-occupation with other matters, fear? Pick one or two.
Stephen E Arnold, August 14, 2025
Microsoft Management Method: Fire Humans, Fight Pollution
August 7, 2025
How Microsoft Plans to Bury its AI-Generated Waste
Here is how one big tech firm is addressing the AI sustainability quandary. Windows Central reports, “Microsoft Will Bury 4.9 Million Tons of ‘Manure’ in a Secretive Deal—All to Offset its AI Energy Demands that Drive Emissions Up by 168%.” We suppose this is what happens when you lay off employees and use the money for something useful. Unlike Copilot.
Writer Kevin Okemwa begins by summarizing Microsoft’s current approach to AI. Windows and Office users may be familiar with the firm’s push to wedge its AI products into every corner of the environment, whether we like it or not. Then there is the feud with former best bud OpenAI, a factor that has Microsoft eyeing a separate path. But whatever the future holds, the company must reckon with one pressing concern. Okemwa writes:
“While it has made significant headway in the AI space, the sophisticated technology also presents critical issues, including substantial carbon emissions that could potentially harm the environment and society if adequate measures aren’t in place to mitigate them. To further bolster its sustainability efforts, Microsoft recently signed a deal with Vaulted Deep (via Tom’s Hardware). It’s a dual waste management solution designed to help remove carbon from the atmosphere in a bid to protect nearby towns from contamination. Microsoft’s new deal with the waste management solution firm will help remove approximately 4.9 million metric tons of waste from manure, sewage, and agricultural byproducts for injection deep underground for the next 12 years. The firm’s carbon emission removal technique is quite unique compared to other rivals in the industry, collecting organic waste which is combined into a thick slurry and injected about 5,000 feet underground into salt caverns.”
Blech. But the process does keep the waste from being dumped aboveground, where it could release CO2 into the environment. How much will this cost? We learn:
“While it is still unclear how much this deal will cost Microsoft, Vaulted Deep currently charges $350 per ton for its carbon removal services. Simple math suggests that the deal might be worth approximately $1.7 billion.”
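The “simple math” behind that figure can be sketched in a few lines. This is only a back-of-the-envelope check assuming the quoted 4.9 million metric tons and the $350-per-ton rate; as the article notes, the actual contract price has not been disclosed.

```python
# Back-of-the-envelope estimate of the Microsoft / Vaulted Deep deal value.
# Assumptions (from the quoted passages, not a disclosed contract figure):
#   - 4.9 million metric tons of waste over the 12-year deal
#   - $350 per ton, Vaulted Deep's quoted carbon removal rate
tons_removed = 4.9e6       # metric tons
price_per_ton = 350        # USD per metric ton

estimated_cost = tons_removed * price_per_ton
print(f"Estimated deal value: ${estimated_cost:,.0f}")
# Prints roughly $1.7 billion, matching the article's estimate.
```

Multiplying the two quoted numbers yields about $1.715 billion, which rounds to the article’s “approximately $1.7 billion.”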
That is a hefty price tag. And this is not the only such deal Microsoft has made: We are told it signed a contract with AtmosClear in April to remove almost seven million metric tons of carbon emissions. The company positions such deals as evidence of its good stewardship of the planet. But we wonder—is it just an effort to keep itself from being buried in its own (literal and figurative) manure?
Cynthia Murrell, August 7, 2025
Microsoft: Knee Jerk Management Enigma
July 29, 2025
This blog post is the work of an authentic dinobaby. Sorry. Not even smart software can help this reptilian thinker.
I read “In New Memo, Microsoft CEO Addresses Enigma of Layoffs Amid Record Profits and AI Investments.” The write up says in a very NPR-like soft voice:
“This is the enigma of success in an industry that has no franchise value,” he wrote. “Progress isn’t linear. It’s dynamic, sometimes dissonant, and always demanding. But it’s also a new opportunity for us to shape, lead through, and have greater impact than ever before.” The memo represents Nadella’s most direct attempt yet to reconcile the fundamental contradictions facing Microsoft and many other tech companies as they adjust to the AI economy. Microsoft, in particular, has been grappling with employee discontent and internal questions about its culture following multiple rounds of layoffs.
Discontent. Maybe the summer of discontent. No, it’s a reshaping or re-invention of a play by William Shakespeare (allegedly) which borrows from Chaucer’s Troilus and Criseyde with a bit more emphasis on pettiness and corruption to add spice to Boccaccio’s antecedent. Willie’s Troilus and Cressida makes the “love affair” more ironic.
Ah, the Microsoft drama. Let’s recap: [a] Troilus and Cressida’s Two Kids: Satya and Sam; [b] Security woes of SharePoint (who knew? eh, everyone); [c] buying green credits or how much manure does a gondola rail car hold? [d] Copilot (are the fuel switches on? Nope); and [e] layoffs.
What’s the description of these issues? An enigma. This is a word popping up frequently it seems. An enigma is, according to Venice, a smart software system:
The word “enigma” derives from the Greek “ainigma” (meaning “riddle” or “dark saying”), which itself stems from the verb “aigin” (“to speak darkly” or “to speak in riddles”). It entered Latin as “aenigma”, then evolved into Old French as “énigme” before being adopted into English in the 16th century. The term originally referred to a cryptic or allegorical statement requiring interpretation, later broadening to describe any mysterious, puzzling, or inexplicable person or thing. A notable modern example is the Enigma machine, a cipher device used in World War II, named for its perceived impenetrability. The shift from “riddle” to “mystery” reflects its linguistic journey through metaphorical extension.
Okay, let’s work through this definition.
- Troilus and Cressida or Satya and Sam. We have a tortured relationship. A bit of a war among the AI leaders, and a bit of the collapse of moral certainty. The play seems to be going nowhere. Okay, that fits.
- Security woes. Yep, the cipher device in World War II. Its security or lack of it contributed to a number of unpleasant outcomes for a certain nation state associated with beer and Rome’s failure to subjugate some folks.
- Manure. This seems to be a metaphorical extension. Paying “green” or money for excrement is a remarkable image. Enough said.
- Fuel switches and the subsequent crash, explosion, and death of some hapless PowerPoint users. This lines up with “puzzling.” How did those Word paragraphs just flip around? I didn’t do it. Does anyone know why? Of course not.
- Layoffs. Ah, an allegorical statement. Find your future elsewhere. There is a demand for life coaches, LinkedIn profile consultants, and lawn service workers.
Microsoft is indeed speaking darkly. The billions burned in the AI push have clouded the atmosphere in Softie Land. When the smoke clears, what will remain? My thought is that the items a to e mentioned above are going to leave some obvious environmental alterations. Yep, dark saying because knee jerk reactions are good enough.
Stephen E Arnold, July 29, 2025
Why Customer Trust of Chatbot Does Not Matter
July 22, 2025
Just a dinobaby working the old-fashioned way, no smart software.
The need for a winner is pile driving AI into consumer online interactions. But like the piles under the San Francisco Leaning Tower of Insurance Claims, the piles cannot stop the sag, the tilt, and the sight of a giant edifice tilting.
I read an article in the “real” news service called Fox News. The story’s title is “Chatbots Are Losing Customer Trust Fast.” The write up is the work of the CyberGuy, so you know it is on the money. The write up states:
While companies are excited about the speed and efficiency of chatbots, many customers are not. A recent survey found that 71% of people would rather speak with a human agent. Even more concerning, 60% said chatbots often do not understand their issue. This is not just about getting the wrong answer. It comes down to trust. Most people are still unsure about artificial intelligence, especially when their time or money is on the line.
So what? Customers are essentially irrelevant. As long as the outfit hits its real or imaginary revenue goals, the needs of the customer are not germane. If you don’t believe me, navigate to a big online service like Amazon and try to find the telephone number for customer service. Let me know how that works out.
Because managers cannot “fix” human centric systems, using AI is a way out. Let AI do it is a heck of lot easier than figuring out a work flow, working with humans, and responding to customer issues. The old excuse was that middle management was not needed when decisions were pushed down to the “workers.”
AI flips that. Managerial ranks have been reduced. AI decisions come from “leadership” or what I call carpetland. AI solves problems: Actually managing, cost reduction, and having good news for investor communications.
The customers don’t want to talk to software. The customer wants to talk to a human who can change a reservation without automatically billing for a service charge. The customer wants a person to adjust a double billing for a hotel doing business as Snap Commerce Holdings. The customer wants a fair shake.
AI does not do fair. AI does baloney, confusion, errors, and hallucinations. I tried a new service which put Google Gemini front and center. I asked one question and got an incomplete and erroneous answer. That’s AI today.
The CyberGuy’s article says:
If a company is investing in a chatbot system, it should track how well that system performs. Businesses should ask chatbot vendors to provide real-world data showing how their bots compare to human agents in terms of efficiency, accuracy and customer satisfaction. If the technology cannot meet a high standard, it may not be worth the investment.
This is simply not going to happen. Deployment equals cost savings. Only when the money goes away will someone in leadership take action. Why? AI has put many outfits in a precarious position. Big money has been spent. Much of that money comes from other people. Those “other people” want profits, not excuses.
I heard a sci-fi rumor that suggests Apple can buy OpenAI and catch up. Apple can pay OpenAI’s investors and make good on whatever promissory payments have been offered by that firm’s leadership. Will that solve the problem?
Nope. The AI firms talk about customers but don’t care. Dealing with customers abused by intentionally shady business practices cooked up by a committee that has to do something is too hard and too costly. Let AI do it.
If the CyberGuy’s write up is correct, some excitement is speeding down the information highway toward some well known smart software companies. A crash at one of the big boys’ junctions will cause quite a bit of collateral damage.
Whom do you trust? Humans or smart software?
Stephen E Arnold, July 22, 2025
What Did You Tay, Bob? Clippy Did What!
July 21, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I was delighted to read “OpenAI Is Eating Microsoft’s Lunch.” I don’t care who or what wins the great AI war. So many dollars have been bet that hallucinating software is the next big thing. Most content flowing through my dinobaby information system is political. I think this food story is a refreshing change.
So what’s for lunch? The write up seems to suggest that Sam AI-Man has not only snagged a morsel from the Softies’ lunch pail but Sam AI-Man might be prepared to snap at those delicate lady fingers too. The write up says:
ChatGPT has managed to rack up about 10 times the downloads that Microsoft’s Copilot has received.
Are these data rock solid? Probably not, but the idea that two “partners” who forced Googzilla to spasm each time its Code Red lights flashed are not cooperating is fascinating. The write up points out that when Microsoft and OpenAI were deeply in love, Microsoft had the jump on the smart software contenders. The article adds:
Despite that [early lead], Copilot sits in fourth place when it comes to total installations. It trails not only ChatGPT, but Gemini and Deepseek.
Shades of Windows phone. Another next big thing muffed by the bunnies in Redmond. How could an innovation power house like Microsoft fail in the flaming maelstrom of burning cash that is AI? Microsoft’s long history of innovation adds a turbo boost to its AI initiatives. The Bob, Clippy, and Tay inspired Copilot is available to billions of Microsoft Windows users. It is … everywhere.
The write up explains the problem this way:
Copilot’s lagging popularity is a result of mismanagement on the part of Microsoft.
This is an amazing insight, isn’t it? Here’s the stunning wrap up to the article:
It seems no matter what, Microsoft just cannot make people love its products. Perhaps it could try making better ones and see how that goes.
To be blunt, the problem at Microsoft is evident in many organizations. For example, we could ask IBM Watson what Microsoft should do. We could fire up Deepseek and get some China-inspired insight. We could do a Google search. No, scratch that. We could do a Yandex.ru search and ask, “Microsoft AI strategy repair.”
I have a more obvious dinobaby suggestion, “Make Microsoft smaller.” And play well with others. Silly ideas I know.
Stephen E Arnold, July 21, 2025