Microsoft Demonstrates a Combo: PR and HR Management Skill in One Decision
June 2, 2025
How skilled are modern managers? I spotted an example of managerial excellence in action. “Microsoft fires Employee Who Interrupted CEO’s Speech to Protest AI Tech for Israel” reports something that is allegedly spot on; to wit:
“Microsoft has fired an employee who interrupted a speech by CEO Satya Nadella to protest the company’s work supplying the Israeli military with technology used for the war in Gaza.”
Microsoft investigated similar accusations and learned that its technology was not used to harm citizens / residents / enemies in Gaza. I believe that a person investigating himself or herself does a very good job. Law enforcement is usually not needed to investigate a suspected bad actor when the alleged malefactor says: “Yo, I did not commit that crime.” I think most law enforcement professionals smile, shake the hand of the alleged malefactor, and say, “Thank you so much for your rigorous investigation.”
Isn’t that enough? Obviously it is. More than enough. Therefore, when someone outputs fabrications and unsupported allegations against a large, ethical, and well-informed company, management of that company has a right and a duty to choke off doubt.
The write up says:
“Microsoft has previously fired employees who protested company events over its work in Israel, including at its 50th anniversary party in April [2025].”
The statement is, in my opinion, evidence of consistency leading up to this most recent HR / PR home run. I note this statement in the cited article:
“The advocacy group No Azure for Apartheid, led by employees and ex-employees, says Lopez received a termination letter after his Monday protest but couldn’t open it. The group also says the company has blocked internal emails that mention words including “Palestine” and “Gaza.””
Company of the year nominee for sure.
Stephen E Arnold, June 2, 2025
Copilot Disappointments: You Are to Blame
May 30, 2025
No AI, just a dinobaby and his itty bitty computer.
Another interesting Microsoft story from a pro-Microsoft online information service. Windows Central published “Microsoft Won’t Take Bigger Copilot Risks — Due to ‘a Post-Traumatic Stress Disorder from Embarrassments,’ Tracing Back to Clippy.” Why not invoke Bob, the US government suggesting Microsoft security was needy, or the software of the Surface Duo?
The write up reports:
Microsoft claims Copilot and ChatGPT are synonymous, but three-quarters of its AI division pay out of pocket for OpenAI’s superior offering because the Redmond giant won’t allow them to expense it.
Is Microsoft saving money, or is Microsoft’s cultural momentum maintaining the velocity of Steve Ballmer allegedly taking an Apple iPhone from an employee and stomping on the device? That episode helped make Microsoft’s management approach clear to some observers.
The Windows Central article adds:
… a separate report suggested that the top complaint about Copilot to Microsoft’s AI division is that “Copilot isn’t as good as ChatGPT.” Microsoft dismissed the claim, attributing it to poor prompt engineering skills.
This statement suggests that Microsoft is blaming users for the alleged negative reaction to Copilot. Those pesky users again. Users, not Microsoft, are at fault. But what about the Microsoft employees who seem to prefer ChatGPT?
Windows Central stated:
According to some Microsoft insiders, the report details that Satya Nadella’s vision for Microsoft Copilot wasn’t clear. Following the hype surrounding ChatGPT’s launch, Microsoft wanted to hop on the AI train, too.
I thought the problem was the users and their flawed prompts. Could the issue be Microsoft’s management “vision”? I have an idea. Why not delegate product decisions to Copilot? That will show the users that Microsoft has the right approach to smart software: cutting back on data centers, acquiring other smart software and AI visionaries, and putting Copilot in Notepad.
Stephen E Arnold, May 30, 2025
Microsoft: Did It Really Fork This Fellow?
May 26, 2025
Just the dinobaby operating without Copilot or its ilk.
Forked doesn’t quite communicate the exact level of frustration Philip Laine experienced while working on a Microsoft project. He details the incident in his post, “Getting Forked By Microsoft.” Laine invented a solution for image scalability that has no stateful component and needs minimal operational oversight. He dubbed his project Spegel, made it open source, and was contacted by Microsoft.
Microsoft was pleased with Spegel. Laine worked with Microsoft engineers to implement Spegel into its architecture. Everything went well until Microsoft stopped working with him. He figured they had moved on to other projects. Microsoft did move on, but its engineers developed their own version of Spegel. They did have the grace to thank Laine in a README file. It gets worse:
"While looking into Peerd, my enthusiasm for understanding different approaches in this problem space quickly diminished. I saw function signatures and comments that looked very familiar, as if I had written them myself. Digging deeper I found test cases referencing Spegel and my previous employer, test cases that have been taken directly from my project. References that are still present to this day. The project is a forked version of Spegel, maintained by Microsoft, but under Microsoft’s MIT license.”
Microsoft plagiarized…no…downright stole Spegel’s codebase from Laine. Laine, however, had published Spegel under an MIT license. The MIT license means:
“Software released under an MIT license allows for forking and modifications, without any requirement to contribute these changes back. I default to using the MIT license as it is simple and permissive.”
It does require this:
“The license does not allow removing the original license and purport that the code was created by someone else. It looks as if large parts of the project were copied directly from Spegel without any mention of the original source.”
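As a practical illustration of what that retention requirement looks like, here is a minimal, hypothetical sketch in Go. The file, package, and copyright-holder names are invented; the point is simply that under the MIT license a fork keeps the original copyright and permission notice alongside anything the new maintainer adds, rather than replacing it.

```go
// Hypothetical example: a source file copied into a fork of an
// MIT-licensed project. The original notice below must be retained
// in all copies or substantial portions of the software; the fork's
// maintainer may add its own line for new contributions but may not
// remove or replace the original one.
//
// Copyright (c) 2023 Original Author (the upstream project)
// Copyright (c) 2025 Forking Company (changes made in this fork)
//
// Permission is hereby granted, free of charge, to any person obtaining
// a copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction... (the full
// MIT permission notice and warranty disclaimer continue here).

package mirror

// MirrorImage is a placeholder standing in for the kind of code that
// travels with the notice above when a project is forked.
func MirrorImage(ref string) string {
	return "mirrored:" + ref
}
```

Laine’s complaint, as quoted above, is that large parts of the copied code allegedly surfaced without this kind of carried-over attribution.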
Laine wanted to work with Microsoft and have its engineers contribute to his open source project. He has dedicated his energy, time, and resources to Spegel and continues to do so without much in return other than GitHub sponsorships and the thanks of its users. Laine is considering changing Spegel’s licensing because it is the only way he has to throw a stone at Microsoft.
If true, the pulsing AI machine is a forker.
Whitney Grace, May 26, 2025
Microsoft: What Is a Brand Name?
May 20, 2025
Just the dinobaby operating without Copilot or its ilk.
I know that Palantir Technologies, a firm founded in 2003, used the moniker “Foundry” to describe its platform for government use. My understanding is that Palantir Foundry was a complement to Palantir Gotham. How different were these “platforms”? My recollection is that Palantir used home-brew software and open source to provide the raw materials from which the company shaped its different marketing packages. I view Palantir as a consulting services company with software, including artificial intelligence. The idea is that Palantir can now perform like Harris’ Analyst’s Notebook as well as deliver semi-custom, industrial-strength systems that address thorny information challenges in a unified way. I like to think of Palantir’s present product and service lineup as a Distributed Common Ground Information Service that generally works. About a year ago, Microsoft and Palantir teamed up to market Microsoft – Palantir solutions to governments via “bootcamps.” These combine training with “here’s what you too can deploy” programs designed to teach and sell the dream of on-time, on-target information for a range of government applications.
I read “Microsoft Is Now Hosting xAI’s Grok 3 Models” and noted this subtitle:
Grok 3 and Grok 3 mini are both coming to Microsoft’s Azure AI Foundry service.
Microsoft’s Foundry service. Is that Palantir’s Foundry, a mash up of Microsoft and Palantir, or something else entirely? The name confuses me, and I wonder if government procurement professionals will be knocked off center as well. The “dream” of smart software is a way to close deals in some countries’ government agencies. However, keeping the branding straight is also important.
What does one call a Foundry with a Grok? Shakespeare suggested that it would smell as sweet no matter what the system was named. Thanks, OpenAI. Good enough.
The write up says:
At Microsoft’s Build developer conference today, the company confirmed it’s expanding its Azure AI Foundry models list to include Grok 3 and Grok 3 mini from xAI.
It is not clear if Microsoft will offer Grok as just another large language model or whether [a] Palantir will be able to integrate Grok into its Foundry product, [b] Microsoft Foundry is Microsoft’s own spin on Palantir’s service, which is deprecated to some degree, or [c] the arrangement is a way to give Palantir direct, immediate access to the Grok smart software. There are other possibilities as well; for example, Foundry is a snappy name in some government circles. Use whatever helps close deals with end-of-year money or helps rev up requests for new funds earmarked for smart software.
The write up points out that Sam AI-Man may be annoyed with the addition of Grok to the Microsoft toolkit. OpenAI and xAI have some history. Maybe Microsoft is positioning itself in the role of the great mediator, a digital Henry Clay of sorts?
A handful of companies are significant influencers of smart software in countries that take a Microsoft-centric approach to platform technology. Microsoft’s software and systems are so prevalent that Israel did some verbal gymnastics to make clear that Microsoft technology was not used in the Gaza conflict. This is an assertion that I find somewhat difficult to accept.
What is going on with large language models at Microsoft? My take is:
- Microsoft wants to offer a store shelf stocked with LLMs so that consulting services generate evergreen subscription revenue
- Customers who want something different, hot, or new can make a mark on the procurement shopping list and Microsoft will do its version of home delivery, not quite same day but convenient
- Users are not likely to know which smart software is fixing up their Miltonic prose or centering a graphic on a PowerPoint slide.
What about the brand or product name “Foundry”? Answer: Use what helps close deals perhaps? Does Palantir get a payoff? Yep.
Stephen E Arnold, May 20, 2025
Salesforce CEO Criticizes Microsoft, Predicts Split with OpenAI
May 20, 2025
Salesforce CEO Marc Benioff is very unhappy with Microsoft. Windows Central reports, “Salesforce CEO Says Microsoft Did ‘Pretty Nasty’ Things to Slack and Its OpenAI Partnership May Be a Recipe for Disaster.” Writer Kevin Okemwa reminds us Benioff recently dubbed Microsoft an “OpenAI reseller” and labeled Copilot the new Clippy. Harsh words. Then Okemwa heard Benioff criticizing Microsoft on a recent SaaStr podcast. He tells us:
“According to Salesforce CEO Marc Benioff: ‘You can see the horrible things that Microsoft did to Slack before we bought it. That was pretty bad and they were running their playbook and did a lot of dark stuff. And it’s all gotten written up in an EU complaint that Slack made before we bought them.’ Microsoft has a long-standing rivalry with Slack. The messaging platform accused Microsoft of using anti-competitive techniques to maintain its dominance across organizations, including bundling Teams into its Microsoft Office 365 suite.”
But, as readers may have noticed, Teams is no longer bundled into Office 365. Score one for Salesforce. The write-up continues:
“Marc Benioff further indicated that Microsoft’s treatment of Slack was ‘pretty nasty.’ He claimed that the company often employs a similar playbook to gain a competitive advantage over its rivals while referencing ‘browser wars’ with Netscape and Internet Explorer in the late 1990s.”
How did that one work out? Not well for the once-dominant Netscape. Benioff is likely referring to Microsoft’s dirty trick of bundling Internet Explorer with Windows at no charge. This does seem to be a pattern for the software giant. In the same podcast, the CEO predicts a split between Microsoft and OpenAI. It is a recent theme of his. Okemwa writes:
“Over the past few months, multiple reports and speculations have surfaced online suggesting that Microsoft’s multi-billion-dollar partnership with OpenAI might be fraying. It all started when OpenAI unveiled its $500 billion Stargate project alongside SoftBank, designed to facilitate the construction of data centers across the United States. The ChatGPT maker had previously been spotted complaining that Microsoft doesn’t meet its cloud computing needs, shifting blame to the tech giant if one of its rivals hit the AGI benchmark first. Consequently, Microsoft lost its exclusive cloud provider status but retains the right of refusal to OpenAI’s projects.”
Who knows how long that right of refusal will last? Microsoft itself seems to be preparing for a future without its frenemy. Will Benioff crow when the partnership is completely destroyed? What will he do if OpenAI buys Chrome and pushes forward with its “everything” app?
Cynthia Murrell, May 20, 2025
Microsoft Explains that Its AI Leads to Smart Software Capacity Gap Closing
May 7, 2025
No AI, just a dinobaby watching the world respond to the tech bros.
I read a content marketing write up with two interesting features: [1] New jargon about smart software and [2] a direct response to Google’s increasingly urgent suggestions that Googzilla has won the AI war. The article appears in Venture Beat with the title “Microsoft Just Launched Powerful AI ‘Agents’ That Could Completely Transform Your Workday — And Challenge Google’s Workplace Dominance.” The title suggests that Google is the leader in smart software in the lucrative enterprise market. But isn’t Microsoft’s “flavor” of smart software in products from the much-loved Teams to the lowly Notepad application? Aren’t Word and Excel at the top of the heap when it comes to usage in the enterprise?
I will ignore these questions and focus on the lingo in the article. It is different and illustrates what college graduates with a B.A. in modern fiction can craft when assisted by a sprinkling of social science majors and a former journalist or two.
Here are the terms I circled:
product name: Microsoft 365 Copilot Wave 2 Spring release (wow, snappy)
integral collaborator (another bound phrase which means agent)
intelligence abundance (something everyone is talking about)
frontier firm (forward leaning synonym)
‘human-led, agent-operated’ workplaces (yes, humans lead; they are not completely eliminated)
agent store (yes, another online store. You buy agents; you don’t buy people)
browser for AI
brand-compliant images
capacity gap (I have no idea what this represents)
agent boss (Is this a Copilot thing?)
work charts (not images, plans I think)
Copilot control system (Is this the agent boss thing?)
So what does the write up say? In my dinobaby mind, the answer is, “Everything a member of leadership could want: Fewer employees, more productivity from those who remain on the payroll, software middle managers who don’t complain or demand emotional support from their bosses, and a narrowing of the capacity gap (whatever that is).”
The question is, “Can Google, Microsoft, or OpenAI deliver this type of grand vision?” Answer: Probably the vision can be explained and made magnetic via marketing, PR, and language weaponization, but the current AI technology still has a few hurdles to get over without tearing the competitors’ gym shorts:
- Hallucinations and making stuff up
- Copyright issues related to training and slapping the circle C, trademarks, and patents on outputs from these agent bosses and robot workers
- Working without creating a larger attack surface for bad actors armed with AI to exploit (Remember, security, not AI, is supposed to be Job No. 1 at Microsoft. You remember that, right? Right?)
- Killing dolphins, bleaching coral, and choking humans on power plant outputs
- Getting the billions pumped into smart software back in the form of sustainable and growing revenues. (Yes, there is a Santa Claus too.)
Net net: Wow. Your turn, Google. Tell us you have won, cured disease, and crushed another game player. Oh, you will have to use another word for “dominance.” Tip: Let OpenAI suggest some synonyms.
Stephen E Arnold, May 7, 2025
Israel Military: An Alleged Lapse via the Cloud
April 23, 2025
No AI, just a dinobaby watching the world respond to the tech bros.
Israel is one of the countries producing a range of intelware and policeware products. These have been adopted in a number of countries. Security-related issues involving software and systems in the country are on my radar. I noted the write up “Israeli Air Force Pilots Exposed Classified Information, Including Preparations for Striking Iran.” I do not know if the write up is accurate. My attempts to verify did not produce results which made me confident about the accuracy of the Haaretz article. Based on the write up, the key points seem to be:
- Another security lapse, possibly more severe than that which contributed to the October 2023 matter
- Classified information was uploaded to a cloud service, possibly Click Portal, associated with Microsoft’s Azure and the SharePoint content management system. Haaretz asserts: “… it [MSFT Azure SharePoint Click Portal] enables users to hold video calls and chats, create documents using Office applications, and share files.”
- Documents were possibly scanned using CamScanner, a Chinese mobile app rolled out in 2010. The app is available from the Russian version of the Apple App Store. A CamScanner app is available from the Google Play Store; however, I elected not to download the app.
Modern interfaces can confuse users. A lack of training rigor and confusing dashboards can create security problems for many users. Thanks, OpenAI, good enough.
Haaretz’s story presents this information:
Officials from the IDF’s Information Security Department were always aware of this risk, and require users to sign a statement that they adhere to information security guidelines. This declaration did not prevent some users from ignoring the guidelines. For example, any user could easily find documents uploaded by members of the Air Force’s elite Squadron 69.
Regarding the China-linked CamScanner software, Haaretz offers this information:
… several files that were uploaded to the system had been scanned using CamScanner. These included a duty roster and biannual training schedules, two classified training presentations outlining methods for dealing with enemy weaponry, and even training materials for operating classified weapons systems.
Regarding security procedures, Haaretz states:
According to standard IDF regulations, even discussing classified matters near mobile phones is prohibited, due to concerns about eavesdropping. Scanning such materials using a phone is, all the more so, strictly forbidden…According to the Click Portal usage guidelines, only unclassified files can be uploaded to the system. This is the lowest level of classification, followed by restricted, confidential, secret and top secret classifications.
The military unit involved was allegedly Squadron 69, which could be the General Staff Reconnaissance Unit. The group might be involved in war planning and fighting against the adversaries of Israel. Haaretz asserts that other units’ sensitive information was exposed within the MSFT Azure SharePoint Click Portal system.
Several observations seem to be warranted:
- Overly complicated systems involving multiple products increase the likelihood of access control issues. Either operators are not well trained, or the interfaces and options confuse an operator, so errors result
- The training of those involved in sensitive information access and handling has to be made more rigorous, despite the tendency of many professionals undergoing specialized instruction to “go through the motions” and move on
- The “brand” of Israel’s security systems and procedures has taken another hit with the allegations spelled out by Haaretz. October 2023 and now Squadron 69. This raises the question, “What else is not buttoned up and ready for inspection in the Israeli security sector?”
Net net: I don’t want to accept this write up as 100 percent accurate. I don’t want to point the finger of blame at any one individual, government entity, or commercial enterprise. But from this dinobaby’s point of view, security issues and Microsoft go together like ham and eggs or peanut butter and jelly.
Stephen E Arnold, April 23, 2025
Microsoft Leadership Will Be Replaced by AI… Yet
March 14, 2025
Whenever we hear the latest tech announcement, we believe it is doom and gloom for humanity. While fire, the wheel, the Industrial Revolution, and computers have yet to dismantle humanity, the jury is still out for AI. However, Gizmodo reports that Satya Nadella of Microsoft says we shouldn’t be worried about AI and it’s time to stop glorifying it, “Microsoft’s Satya Nadella Pumps the Brakes on AI Hype.” Nadella placed a damper on AI hype with the following statement from a podcast: “Success will be measured through tangible, global economic growth rather than arbitrary benchmarks of how well AI programs can complete challenges like obscure math puzzles. Those are interesting in isolation but do not have practical utility.”
Nadella said that technology workers are saying AI will replace humans, but that’s not the case. He calls that type of thinking a distraction and says the tech industry needs to “get practical and just try and make money before investors get impatient.” Nadella’s partner in the AI push, OpenAI CEO Sam Altman, is a prime example of AI fear mongering. He uses it as a tool to give himself power.
Nadella continued that if the tech industry and its investors want AI growth akin to the Industrial Revolution, then they should concentrate on it. Proof of that type of growth would be something like 10% economic growth attributable to AI. Investing in AI can’t just happen on the supply side; there needs to be demand for AI-built products.
Nadella’s statements are like pouring a bucket of cold water on a sleeping person:
"On that sense, Nadella is trying to slap tech executives awake and tell them to cut out the hype. AI safety is somewhat of a concern—the models can be abused to create deepfakes or mass spam—but it exaggerates how powerful these systems are. Eventually, push will come to shove and the tech industry will have to prove that the world is willing to put down real money to use all these tools they are building. Right now, the use cases, like feeding product manuals into models to help customers search them faster, are marginal.”
Many well-known companies still plan on implementing AI despite their difficulties. Other companies have downsized their staffing to include more AI chatbots, but the bots prove to be inefficient and frustrating. Microsoft, however, is struggling with management issues related to OpenAI, its internal “experts,” and the Softies who think they can do better. (Did Microsoft ask Grok, “How do I manage this multi-billion-dollar bonfire?”)
Let’s blame it on AI.
Whitney Grace, March 14, 2025
Microsoft Sends a Signal: AI, AIn’t Working
March 11, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
The problems with Microsoft’s AI push were evident from its start in 2023. The company thought it had identified the next big thing and had the big fish on the line. Now the work was easy. Just reel in the dough.
Has it worked out for Microsoft? We know that big companies often have difficulty innovating. The enervating whiteboard sessions that seek to answer the question, “Do we build it or buy it?” usually give way to: [a] Let’s lock it up somehow or [b] Let’s steal it because it won’t take our folks too long to knock out a me-too.
Microsoft sent a fairly loud beep-beep-beep when it began to cut back on its dependence on OpenAI. Not long ago, Microsoft trimmed some of its crazy spending for AI. Now we have the allegedly accurate information in “Microsoft Is Reportedly Plotting a Future without OpenAI.”
The write up states:
Microsoft has poured over $13 billion into the AI firm since 2019, but now it wants more control over its own models and costs. Simple enough in theory—build in-house alternatives, cut expenses, and call the shots.
Is this a surprise? No, I think it is just one more beep added to the already emitted beep-beep-beep.
Here’s my take:
- Narrowly focused smart software adds some useful capabilities to what I would call workflow enhancement. The narrow focus for an AI system reduces some of the wonkiness of the output. Therefore, certain tasks benefit; for example, grinding through data for a chemistry application or providing a call center operation with a good enough solution to rising costs. Broad use cases are more problematic.
- Humans who rely on information for a living don’t want to be caught out. This means that using smart software is an assist or a supplement. This is like an older person using a cane when walking on a senior citizens’ adventure tour.
- Productizing a broad use case for smart software is expensive and prone to the sort of failure rate associated with a new product or service. A good example is a self-driving auto with collision avoidance. Would you stand in front of such a vehicle, confident in the smart software’s ability to not run over you? I wouldn’t.
What’s happening at Microsoft is a reasonably predictable and understandable approach. The company wants to hedge its bets since big bucks are flowing out, not in. The firm thinks it has enough smarts to do a better job even though in my opinion this is unlikely. Remember Bob, Clippy, and Windows updates? I do.
Also, small teams believe their approach will be a winner. Big companies believe their people can row that boat faster than anyone else. I know from personal experience and observation that this is not true. But the appearance of effort and the illusion of high value work encourages the approach.
Plus, the idea that a “leadership team” can manage innovation is a powerful one. Microsoft’s leadership believes in its leadership. That’s why the company is a leader. (I love this logic.)
Net net: My hunch is that Microsoft’s AI push is a disappointment. Now the company can shift into SWAT team mode and overwhelm the problem: AI that does not pay for itself.
Will this approach work? Nope, the outcome will be good enough. That is a bit more than one can say about Apple Intelligence: seriously out of step with the Softies.
Stephen E Arnold, March 11, 2025
LinkedIn: An Ego Buster and Dating App. Who Knew?
March 5, 2025
Yep, another dinobaby original.
Okay, GenZ, you are having a traumatic moment. I mean, your mobile phone works. You have social media. You have the Dark Web, Telegram, and smart software. Oh, you find living with your parents a bit of a downer. I understand. And the lack of a role as a decider in an important company chews on your id, ego, and superego simultaneously. Not good.
I learned something when I read “GenZ Is Suffering from LinkedIn envy — And It’s Crushing Their Chill: My Reactions Are So Intense.” I noted this statement in the “real” news write up:
…at a time when unemployed people are finding it harder to find new work, LinkedIn has become the “unrivaled behemoth of digital inadequacy,” journalist Lotte Brundle wrote for The UK Times.
I want to refer Ms. Brundle to the US Bureau of Labor Statistics report that says AI and other factors are not hampering the job market in the US. Is it time to apply for a green card?
The write up adds:
Brundle also likened the platform to a dating site where people compare themselves to others, adding that she has used the platform to “see what exes and past nemeses are up to” — and some of her friends have even been “chatted up” on it.
There are a couple of easy fixes. First, hire someone on Fiverr to be “you” on LinkedIn. If something important appears, that individual will alert you so you can say, “Do this.” Second, do not log into LinkedIn.
What happens if you embrace the Microsoft product? Here’s a partial answer:
“I deleted my account because every time I go on it I feel absolutely terrible about myself,” the confessional said. “It might just be me and comparing myself too much to others but does anyone else find people on there to be completely cringe and egotistical lol?! I don’t even have a bad job but I think LinkedIn has just become an egocentric breeding zone like every other social media platform.”
Okay. LinkedIn public relations and marketing messages cause a person to feel bad about himself or herself. I am not sure I understand.
Suck it up, buttercup, or learn to use agentic AI, which can send you personalized emails every hour telling you that you are not terrible. Give that a try if ignoring LinkedIn is not possible.
Stephen E Arnold, March 5, 2025