Microsoft: What Is a Brand Name?
May 20, 2025
Just the dinobaby operating without Copilot or its ilk.
I know that Palantir Technologies, a firm founded in 2003, used the moniker “Foundry” to describe its platform for government use. My understanding is that Palantir Foundry was a complement to Palantir Gotham. How different were these “platforms”? My recollection is that Palantir used home-brew software and open source to provide the raw materials from which the company shaped its different marketing packages. I view Palantir as a consulting services company with software, including artificial intelligence. The idea is that Palantir can now perform like Harris’ Analyst’s Notebook as well as deliver semi-custom, industrial-strength solutions to thorny information challenges. I like to think of Palantir’s present product and service lineup as a Distributed Common Ground Information Service that generally works. About a year ago, Microsoft and Palantir teamed up to market Microsoft-Palantir solutions to governments via “bootcamps.” These combine training with “here’s what you too can deploy” programs designed to teach and sell the dream of on-time, on-target information for a range of government applications.
I read “Microsoft Is Now Hosting xAI’s Grok 3 Models” and noted this subtitle:
Grok 3 and Grok 3 mini are both coming to Microsoft’s Azure AI Foundry service.
Microsoft’s Foundry service. Is that Palantir’s Foundry, a mash-up of Microsoft and Palantir, or something else entirely? The name confuses me, and I wonder if government procurement professionals will be knocked off center as well. The “dream” of smart software is a way to close deals in some countries’ government agencies. However, keeping the branding straight is also important.
What does one call a Foundry with a Grok? Shakespeare suggested that it would smell as sweet no matter what the system was named. Thanks, OpenAI? Good enough.
The write up says:
At Microsoft’s Build developer conference today, the company confirmed it’s expanding its Azure AI Foundry models list to include Grok 3 and Grok 3 mini from xAI.
It is not clear if Microsoft will offer Grok as another large language model or whether [a] Palantir will be able to integrate Grok into its Foundry product, [b] Microsoft Foundry is Microsoft’s own spin on Palantir’s service which is deprecated to some degree, or [c] a way to give Palantir direct, immediate access to the Grok smart software. There are other possibilities as well; for example, Foundry is a snappy name in some government circles. Use what helps close deals with end-of-year money or rev up for new funds seeking smart software.
The write up points out that Sam AI-Man may be annoyed by the addition of Grok to the Microsoft toolkit. OpenAI and xAI have some history. Maybe Microsoft is positioning itself as the great mediator, a digital Henry Clay of sorts?
A handful of companies significantly influence smart software in countries with a Microsoft-centric approach to platform technology. Microsoft’s software and systems are so prevalent that Israel did some verbal gymnastics to make clear that Microsoft technology was not used in the Gaza conflict. This is an assertion that I find somewhat difficult to accept.
What is going on with large language models at Microsoft? My take is:
- Microsoft wants to offer a store shelf stocked with LLMs so that consulting services generate evergreen subscription revenue
- Customers who want something different, hot, or new can make a mark on the procurement shopping list and Microsoft will do its version of home delivery, not quite same day but convenient
- Users are not likely to know what smart software is fixing up their Miltonic prose or centering a graphic on a PowerPoint slide.
What about the brand or product name “Foundry”? Answer: Use what helps close deals perhaps? Does Palantir get a payoff? Yep.
Stephen E Arnold, May 20, 2025
Salesforce CEO Criticizes Microsoft, Predicts Split with OpenAI
May 20, 2025
Salesforce CEO Marc Benioff is very unhappy with Microsoft. Windows Central reports, “Salesforce CEO Says Microsoft Did ‘Pretty Nasty’ Things to Slack and Its OpenAI Partnership May Be a Recipe for Disaster.” Writer Kevin Okemwa reminds us Benioff recently dubbed Microsoft an “OpenAI reseller” and labeled Copilot the new Clippy. Harsh words. Then Okemwa heard Benioff criticizing Microsoft on a recent SaaStr podcast. He tells us:
“According to Salesforce CEO Marc Benioff: ‘You can see the horrible things that Microsoft did to Slack before we bought it. That was pretty bad and they were running their playbook and did a lot of dark stuff. And it’s all gotten written up in an EU complaint that Slack made before we bought them.’ Microsoft has a long-standing rivalry with Slack. The messaging platform accused Microsoft of using anti-competitive techniques to maintain its dominance across organizations, including bundling Teams into its Microsoft Office 365 suite.”
But, as readers may have noticed, Teams is no longer bundled into Office 365. Score one for Salesforce. The write-up continues:
“Marc Benioff further indicated that Microsoft’s treatment of Slack was ‘pretty nasty.’ He claimed that the company often employs a similar playbook to gain a competitive advantage over its rivals while referencing ‘browser wars’ with Netscape and Internet Explorer in the late 1990s.”
How did that one work out? Not well for the once-dominant Netscape. Benioff is likely referring to Microsoft’s dirty trick of making IE 1.0 free with Windows. This does seem to be a pattern for the software giant. In the same podcast, the CEO predicts a split between Microsoft and ChatGPT. It is a recent theme of his. Okemwa writes:
“Over the past few months, multiple reports and speculations have surfaced online suggesting that Microsoft’s multi-billion-dollar partnership with OpenAI might be fraying. It all started when OpenAI unveiled its $500 billion Stargate project alongside SoftBank, designed to facilitate the construction of data centers across the United States. The ChatGPT maker had previously been spotted complaining that Microsoft doesn’t meet its cloud computing needs, shifting blame to the tech giant if one of its rivals hit the AGI benchmark first. Consequently, Microsoft lost its exclusive cloud provider status but retains the right of refusal to OpenAI’s projects.”
Who knows how long that right of refusal will last? Microsoft itself seems to be preparing for a future without its frenemy. Will Benioff crow when the partnership is completely destroyed? What will he do if OpenAI buys Chrome and pushes forward with its “everything” app?
Cynthia Murrell, May 20, 2025
Behind Microsoft’s Dogged Copilot Push
May 20, 2025
Writer Simon Batt at XDA foresees a lot of annoyance in Windows users’ future. “Microsoft Will Only Get More Persistent Now that Copilot has Plateaued,” he predicts. Yes, Microsoft has failed to attract as many users to Copilot as it had hoped. It is as if users see through the AI hype. According to Batt, the company famous for doubling down on unpopular ideas will now pester us like never before. This can already be seen in the new way Microsoft harasses Windows 10 users. While it used to suggest every now and then such users purchase a Windows 11-capable device, now it specifically touts Copilot+ machines.
Batt suspects Microsoft will also relentlessly push other products to boost revenue. Especially anything it can bill monthly. Though Windows is ubiquitous, he notes, users can go years between purchases. Many of us, we would add, put off buying a new version until left with little choice. (Any XP users still out there?) He writes:
“When ChatGPT began to take off, I can imagine Microsoft seeing dollar signs when looking at its own assistant, Copilot. They could make special Copilot-enhanced devices (which make them money) that run Copilot locally and encourage people to upgrade to Copilot Pro (which makes them money) and perhaps then pay extra for the Office integration (which makes them money). But now that golden egg hasn’t panned out like Microsoft wants, and now it needs to find a way to help prop up the income while it tries to get Copilot off the ground. This means more ads for the Microsoft Store, more ads for its game store, and more ads for Microsoft 365. Oh, and let’s not forget the ads within Copilot itself. If you thought things were bad now, I have a nasty feeling we’re only just getting started with the ads.”
And they won’t stop, he expects, until most users have embraced Copilot. Microsoft may be creeping toward some painful financial realities.
Cynthia Murrell, May 20, 2025
Grok and the Dog Which Ate the Homework
May 16, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I remember the Tesla full self-driving service. Is that available? I remember the big SpaceX rocket ship. Are those blowing up after launch? I now have to remember an “unauthorized modification” to xAI’s smart software Grok. Wow. So many items to tuck into my 80-year-old brain.
I read “xAI Blames Grok’s Obsession with White Genocide on an Unauthorized Modification.” Do I believe this assertion? Of course, I believe everything I read on the sad, ad-choked, AI content bedeviled Internet.
Let’s look at the gems of truth in the report.
First, what is an unauthorized modification of complex software humming along happily in Silicon Valley and, of all places, Memphis, a lovely town indeed? The unauthorized modification, whatever that is, caused a “bug in its AI-powered Grok chatbot.” If I understand this, a savvy person changed something he, she, or it was not supposed to modify. That change then caused a “bug.” I thought Grace Hopper nailed the idea of a “bug” when she pulled an insect from one of the dinobaby’s favorite systems, the Harvard Mark II. Are there insects at the X shops? Are these unauthorized insects interacting with unauthorized entities making changes that propagate more bugs? Yes.
Second, the malfunction occurs when “@grok” is used as a tag. I believe this because the “unauthorized modification” fiddled with the user mappings and jiggled scripts to allow the “white genocide” content to appear. This is definitely not hallucination; it is an “unauthorized modification.” (Did you know that the version of Grok available via x.com cannot return information from X.com (formerly Twitter) content? Strange? Of course not.)
Third, I know that Grok, xAI, and the other X entities have “internal policies and core values.” Violating these is improper. The company, like other self-regulated entities, “conducted a thorough investigation.” Absolutely. Coders at X are well equipped to perform investigations. That’s why X.com personnel are in such demand as advisors to law enforcement and cyber fraud agencies.
Finally, xAI is going to publish system prompts on Microsoft GitHub. Yes, that will definitely curtail the unauthorized modifications and bugs at X entities. What a bold solution.
The cited write up is definitely not on the same page as this dinobaby. The article reports:
A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.
This negative report may be expanded to make the case that an exploding rocket or a wonky full self-driving vehicle is not safe. Everyone must believe X outfits. The company is a paragon of veracity, excellent engineering, and delivering exactly what it says it will provide. That is the way you must respond.
Stephen E Arnold, May 16, 2025
Google Advertises Itself
May 16, 2025


- The signals about declining search traffic warrant attention. SEO wizards, Google’s ad partners, and its own ad wizards depend on what once was limitless search traffic. If that erodes, those infrastructure costs will become a bit of a challenge. Profits and jobs depend on mindless queries.
- Google’s reaction to these signals indicates that the company’s “leadership” knows that there is trouble in paradise. The terse statement responding to the Cue comment about a decline in Apple-to-Google search traffic and this itty-bitty ad are not accidents of fate. The Google once controlled fate. Now the fabled company is in a sticky spot like Sisyphus.
- The irony of Google’s problem stems from its own Transformer innovation. Released to open source, Google may be learning that its uphill battle is of its own creation. Nice work, “leadership.”
Apple AI Is AImless: Better Than Fire, Ready AIm
May 16, 2025
Apple’s Problems Rebuilding Siri
Apple is a dramatist worthy of reality TV. According to the MSN article “New Siri Report Reveals Epic Dysfunction Within Apple — But There’s Hope,” Apple’s leaders are fighting each other. There are so many issues among Apple’s leaders that Siri 2.0 is delayed until 2026.
Managerial styles and backroom ambitions clashed within Apple’s teams. John Giannandrea has headed Siri since 2018; he was hired to lead Siri and an AI group. Siri engineers claim they are treated like second-class citizens. Their situation worsened when Craig Federighi’s software team released features and updates.
The two leaders are very different:
“Federighi was placed in charge of the Siri overhaul in March, alongside his number two Mike Rockwell — who created the Apple Vision Pro headset— as Apple attempts to revive its Siri revamp. The difference between Giannandrea and Federighi appears to be the difference between the tortoise and the hare. John is allegedly more of a listener and slow mover who lets those underneath him take charge of the work, especially his number two Robby Walker. He reportedly preferred incremental updates and was repeatedly cited as a problem with Siri development. Meanwhile, Federighi is described as brash and quick but very efficient and knowledgeable. Supposedly, Giannandrea’s “relaxed culture” lead to other engineers dubbing his AI team: AIMLess.”
The two teams are at each other’s throats. Projects are getting done, but the teams argue over how to do them. Siri 2.0 is caught in the crossfire like a child of divorce. The teams need to put their egos aside, or someone in charge of both needs to make them play nicely.
Whitney Grace, May 16, 2025
Retail Fraud Should Be Spelled RetAIl Fraud
May 16, 2025
As brick-and-mortar stores approach extinction and nearly all shopping migrates to the Web, AI introduces new vulnerabilities to the marketplace. Shocking, we know. Cyber Security Intelligence reports, “ChatGPT’s Image Generation Could Be Driving Retail Fraud.” We learn:
“The latest AI image generators can create images that look like real photographs as well as imagery from simple text prompts with incredible accuracy. It can reproduce documents with precisely matching formatting, official logos, accurate timestamps, and even realistic barcodes or QR codes. In the hands of fraudsters, these tools can be used to commit ‘return fraud’ by creating convincing fake receipts and proof-of-purchase documentation.”
But wait, there is more. The post continues:
“Fake proof of purchase documentation can be used to claim warranty service for products that are out of warranty or purchased through unauthorised channels. Fraudsters could also generate fake receipts showing purchases at higher values than was actually paid for – then requesting refunds to gift cards for the inflated amount. Internal threats also exist too, as employees can create fake expense receipts for reimbursement. This is particularly damaging for businesses with less sophisticated verification processes in place. Perhaps the scenario most concerning of all is that these tools can enable scammers to generate convincing payment confirmations or shipping notices as part of larger social engineering attacks.”
Also of concern is the increased inconvenience to customers as sites beef up their verification processes. After all, the write-up notes, the National Retail Federation found that 70% of customers say a positive return experience makes them more likely to revisit a seller.
So what is a retail site to do? Well, author Doriel Abrahams is part of Forter, a company that uses AI to protect online sellers from fraud. Naturally, he suggests using a platform like his firm’s to find suspicious patterns without hindering legit customers too much. Is more AI the solution? We are not certain. If one were to go down that route, though, one should probably compare multiple options.
Cynthia Murrell, May 16, 2025
Complexity: Good Enough Is Now the Best Some Can Do at Google
May 15, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I read a post called “Working on Complex Systems: What I Learned Working at Google.” The write up is a thoughtful checklist of insights, lessons, and Gregorian engineering chants a “coder” learned in the online advertising company. I want to point out that I admire the amount of money and power the Google has amassed from its reinvention of the GoTo-Overture-Yahoo advertising approach.
A Silicon Valley executive looks at past due invoices. The government has ordered the company to be broken up and levied large fines for improper behavior in the marketplace. Thanks, ChatGPT. Definitely good enough.
The essay in The Coder Cafe presents an engineer’s learnings after Google began to develop products and services tangential to search hegemony, selling ads, and shaping information flows.
The approach is to differentiate complex systems from complicated systems. What is interesting about the checklists is that one hearkens back to the way Google used to work in the Backrub and early pre-advertising days. Let’s focus on complex systems because that illuminates where Google wants to direct its business, its professionals, its users, and the pesky thicket of regulators who bedevil the Google 24×7.
Here’s the list of characteristics of complex systems. Keep in mind that “systems” means software, programming, algorithms, and the gizmos required to make the non-fungible work, mostly.
- Emergent behavior
- Delayed consequences
- Optimization (local optimization versus global optimization)
- Hysteresis (I think this is cultural momentum or path-dependent actions)
- Nonlinearity
Each of these is a study area for people at the Santa Fe Institute. I have on my desk a copy of The Origins of Order: Self-Organization and Selection in Evolution and the shorter Reinventing the Sacred, both by Stuart A. Kauffman. As a point of reference, Origins is 700 pages and Reinventing about 300. Each of the cited article’s five topics gets attention.
The context of emergent behavior in human- and probably some machine-created code is that it is capable of producing “complex systems.” Dr. Kauffman does a very good job of demonstrating how quite simple methods yield emergent behavior. Instead of a mess or a nice tidy solution, there is considerable activity at the boundaries of complexity and stability. Emergence seems to be associated with these boundary conditions: a little bit of chaos, a little bit of stability.
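Dr. Kauffman’s point that simple local rules can yield emergent global order can be seen in a toy experiment. The sketch below is my own illustrative take on a Kauffman-style random boolean network, not code from the essay or from Dr. Kauffman; every name and parameter is invented for the example. Each of N nodes updates from K randomly chosen inputs through a random boolean rule, and the deterministic dynamics settle into a repeating attractor cycle, order emerging from arbitrary local rules.

```python
# Illustrative Kauffman-style random boolean (NK) network: simple local
# rules, yet the global trajectory falls into an attractor cycle.
import random

def make_network(n=12, k=2, seed=42):
    """Build n nodes, each wired to k random inputs with a random rule table."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs' current values."""
    out = []
    for i in range(len(state)):
        idx = 0
        for src in inputs[i]:
            idx = (idx << 1) | state[src]
        out.append(tables[i][idx])
    return tuple(out)

def attractor_length(n=12, k=2, seed=42, max_steps=10_000):
    """Iterate until a state repeats; return the attractor's cycle length.

    The state space is finite (2**n states), so a repeat is guaranteed
    well within max_steps for small n."""
    inputs, tables = make_network(n, k, seed)
    state = tuple(random.Random(seed + 1).randint(0, 1) for _ in range(n))
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state, inputs, tables)
    return None

print("attractor cycle length:", attractor_length())
```

Running this with different seeds shows the boundary behavior discussed above: some wirings collapse quickly to a short cycle, others wander through long transients first.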
The other four items in the list connect to optimization, which Dr. Kauffman points out is a consequence of the simple decisions that take place in the micro- and macroscopic worlds. Non-linearity is a feature of emergent systems. The long-term consequences of certain emergent behavior can be difficult to predict. Finally, the notion of momentum keeps some actions or reactions in place over time.
What the essay reveals, in my opinion, is that:
- Google’s work environment is positioned as a fundamental force. Dr. Kauffman and his colleagues at the Santa Fe Institute may find some similarities between the Google and the mathematical world at the research institute. Google wants to be the prime mover; the Santa Fe Institute wants to understand, explain, and make useful its work.
- The lingo of the cited essay suggests that Google is anchored in the boundary between chaos and order. Thus, Google’s activities are in effect trials and errors intended to allow Google to adapt and survive in its environment. In short, Google is a fundamental force.
- The “leadership” of Google does not lead; leadership is given over to the rules or laws of emergence as described by Dr. Kauffman and his colleagues at the Santa Fe Institute.
Net net: Google cannot produce good products. Google can try to emulate emergence, but it has to find a way to compress time to allow many more variants. Hopefully one of those variants will be good enough for the company to survive. Google understands the probability functions that drive emergence. After two decades of product launches and product failures, the company remains firmly anchored in two chunks of bedrock:
First, the company borrows or buys. Google does not innovate. Whether the CLEVER method, the billion dollar Yahoo inspiration for ads, or YouTube, Bell Labs and Thomas Edison are not part of the Google momentum. Advertising is.
Second, Google’s current management team is betting that emergence will work at Google. The question is, “Will it?”
I am not sure bright people like those who work at Google can identify the winners from an emergent approach and then create the environment for those winners to thrive, grow, and create more winners. Gluing cheese to pizza and ramping up marketing for Google’s leadership in fields ranging from quantum computing to smart software is now just good enough. One final question: “What happens if the advertising money pipeline gets cut off?”
Stephen E Arnold, May 15, 2025
LLM Trade Off Time: Let Us Haggle for Useful AI
May 15, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
What AI fixation is big tech hyping now? VentureBeat declares, “Bigger Isn’t Always Better: Examining the Business Case for Multi-Million Token LLMs.” The latest AI puffery involves large context models—LLMs that can process and remember more than a million tokens simultaneously. Gemini 1.5 Pro, for example, can process 2 million tokens at once. This achievement is dwarfed by MiniMax-Text-01, which can handle 4 million. That sounds impressive, but what are such models good for? Writers Rahul Raja and Advitya Gemawat tell us these tools can enable:
Cross-document compliance checks: A single 256K-token prompt can analyze an entire policy manual against new legislation.
Customer support: Chatbots with longer memory deliver more context-aware interactions.
Financial research: Analysts can analyze full earnings reports and market data in one query.
Medical literature synthesis: Researchers use 128K+ token windows to compare drug trial results across decades of studies.
Software development: Debugging improves when AI can scan millions of lines of code without losing dependencies.
In theory, they may also improve accuracy and reduce hallucinations. We are all for that—if true. But research from early adopter JPMorgan Chase found disappointing results, particularly with complex financial tasks. Not ideal. Perhaps further studies will have better outcomes.
The question for companies is whether to ditch ponderous chunking and RAG systems for models that can seamlessly debug large codebases, analyze entire contracts, or summarize long reports without breaking context. Naturally, there are trade-offs. We learn:
“While large context models offer impressive capabilities, there are limits to how much extra context is truly beneficial. As context windows expand, three key factors come into play:
- Latency: The more tokens a model processes, the slower the inference. Larger context windows can lead to significant delays, especially when real-time responses are needed.
- Costs: With every additional token processed, computational costs rise. Scaling up infrastructure to handle these larger models can become prohibitively expensive, especially for enterprises with high-volume workloads.
- Usability: As context grows, the model’s ability to effectively ‘focus’ on the most relevant information diminishes. This can lead to inefficient processing where less relevant data impacts the model’s performance, resulting in diminishing returns for both accuracy and efficiency.”
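The cost point in the list above lends itself to back-of-envelope arithmetic. The figures below are illustrative assumptions I chose for the example (the per-token price is made up, not a vendor quote), but they show why stuffing a multi-million-token context into every query gets expensive compared with a retrieval (RAG) approach that sends only a handful of relevant chunks:

```python
# Back-of-envelope comparison: full-context prompting vs. RAG-style retrieval.
# All prices and sizes here are illustrative assumptions, not vendor figures.

PRICE_PER_1K_INPUT_TOKENS = 0.003  # assumed $/1K input tokens for the example

def prompt_cost(tokens: int, price_per_1k: float = PRICE_PER_1K_INPUT_TOKENS) -> float:
    """Input-token cost of a single prompt at the assumed rate."""
    return tokens / 1000 * price_per_1k

# Full-context approach: ship a 2M-token corpus with every single query.
full_context = prompt_cost(2_000_000)

# RAG approach: retrieve, say, 8 chunks of ~1K tokens plus a 500-token question.
rag = prompt_cost(8 * 1000 + 500)

print(f"full context: ${full_context:.2f} per query")
print(f"RAG:          ${rag:.4f} per query")
print(f"ratio:        {full_context / rag:.0f}x")
```

Under these assumed numbers the full-context query costs hundreds of times more per call, before counting the latency penalty of processing the extra tokens, which is the trade-off the quoted passage describes.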
Is it worth those downsides for simpler workflows? It depends on whom one asks. Some large context models are like a 1958 Oldsmobile Ninety-Eight: lots of useless chrome and lousy mileage.
Stephen E Arnold, May 15, 2025
Bing Goes AI: Metacrawler Outfits Are Toast
May 15, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
The Softies are going to win in the AI-centric search wars. In every war, there will be casualties. One of the casualties will be metasearch companies. What’s metasearch? These are outfits that really don’t crawl the Web. That is expensive and requires constant fiddling to keep pace with the weird technical “innovations” purveyors of Web content present to the user. The metasearch companies provide an interface and then return results from cooperating and cheap primary Web search services. Most users don’t know the difference and have demonstrated over the years total indifference to the distinction. Search means Google. Microsoft wants to win at search and become the one true search service.
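The metasearch arrangement described above can be sketched in a few lines. Everything in this sketch is hypothetical: the backend functions are stand-ins I invented, not real search APIs. The point it illustrates is the architecture: the front end owns no index of its own; it fans a query out to primary services, deduplicates by URL, and re-ranks the merged list.

```python
# Minimal sketch of a metasearch front end: no crawler, no index -- just
# fan-out to primary search backends plus merge/dedupe of their results.
from typing import Callable

Result = dict  # {"url": ..., "title": ..., "rank": ...}
Backend = Callable[[str], list]

def metasearch(query: str, backends: list, limit: int = 10) -> list:
    """Fan the query out to each backend, dedupe by URL, sort by rank."""
    merged = {}
    for backend in backends:
        for result in backend(query):
            merged.setdefault(result["url"], result)  # first backend wins ties
    return sorted(merged.values(), key=lambda r: r["rank"])[:limit]

# Stand-in backends for demonstration; a real deployment would wrap HTTP
# clients for services such as Qwant, Mwmbl, or (until August 2025) Bing.
def backend_a(q):
    return [{"url": "https://a.example/1", "title": q, "rank": 1},
            {"url": "https://shared.example", "title": q, "rank": 2}]

def backend_b(q):
    return [{"url": "https://shared.example", "title": q, "rank": 1},
            {"url": "https://b.example/2", "title": q, "rank": 3}]

results = metasearch("test", [backend_a, backend_b])
print([r["url"] for r in results])
```

The fragility is visible in the design: if a primary backend disappears, as the Bing API is about to, the front end has nothing of its own to fall back on.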
The most recent fix? Kill off the Microsoft Bing application programming interface. Those metasearch outfits will have to learn to love Qwant, SwissCows, and their ilk or face some survive-or-die decisions. Do these outfits use YaCy, OpenSearch, Mwmbl, or some other source of Web indexing?
Bob Softie has just tipped over the metasearch lemonade stand. The metasearch sellers are not happy with Bob. Bob seems quite thrilled with his bold move. Thanks, ChatGPT; although I have not been able to access your wonderful 4.1 service, the cartoon is good enough.
The news of this interesting move appears in “Retirement: Bing Search APIs on August 11, 2025.” The Softies say:
Bing Search APIs will be retired on August 11, 2025. Any existing instances of Bing Search APIs will be decommissioned completely, and the product will no longer be available for usage or new customer signup. Note that this retirement will apply to partners who are using the F1 and S1 through S9 resources of Bing Search, or the F0 and S1 through S4 resources of Bing Custom Search. Customers may want to consider Grounding with Bing Search as part of Azure AI Agents. Grounding with Bing Search allows Azure AI Agents to incorporate real-time public web data when generating responses with an LLM. If you have questions, contact support by emailing Bing Search API’s Partner Support. Learn more about service retirements that may impact your resources in the Azure Retirement Workbook. Please note that retirements may not be visible in the workbook for up to two weeks after being announced.
Several observations:
- The DuckDuckGo metasearch system is exempted. I suppose its super-secure approach to presenting other outfits’ search results is so darned wonderful.
- The feisty Kagi may have to spend to get new access deals or pay low profile crawlers like Dassault Exalead to provide some content (Let’s hope it is timely and comprehensive)
- The beneficiaries may be Web search systems not too popular with some in North America; for example, Yandex.com. I have found that Yandex.com and Yandex.ru are presenting more useful results since the re-juggling of the company’s operations took place.
Why is Microsoft taking this action? My hunch is paranoia. The AI search “thing” is going to have to work if Microsoft hopes to cope with Google’s push into what the Softies have long considered their territory. Those enterprise, cloud, and partnership set ups need to have an advantage. Binging it with AI may be viewed as the winning move at this time.
My view is that Microsoft may be edging close to another Bob moment. This is worth watching because the metasearch disruption will flip over some rocks. Who knows if Yandex or another non-Google or non-Bing search repackager surges to the fore? Web search is getting slightly more interesting and not because of the increasing chaos of AI-infused search results.
Stephen E Arnold, May 15, 2025