Microsoft: Marketing Is One Thing, a Cost Black Hole Is Quite Another

March 11, 2025

Yep, another dinobaby original.

I read “Microsoft Cuts Data Centre Plans and Hikes Prices in Push to Make Users Carry AI Cost.” The headline meant one thing to me: the black hole of AI costs must be capped. For my part, I try to avoid MSFT AI. After testing the Redmoanians’ smart software for months, I decided, “Nope.”

The write up says:

Last week, Microsoft unceremoniously pulled back on some planned data centre leases. The move came after the company increased subscription prices for its flagship 365 software by up to 45%, and quietly released an ad-supported version of some products. The tech giant’s CEO, Satya Nadella, also recently suggested AI has so far not produced much value.

No kidding. I won’t go into the annoyances. AI in Notepad? Yeah, great thinking, like the kind that delivered Bob to users who loved Clippy.

The essay notes:

Having sunk billions into generative AI, Microsoft is trying to find the business model that will make the technology profitable.

Maybe someday, but that day is not today or tomorrow. If anything, Microsoft is struggling with old-timey software as well. The Register, a UK online publication, reports:

Microsoft blames Outlook’s wobbly weekend on ‘problematic code change.’ And Monday’s not looking that steady, either.

Back to AI. The AI financial black hole exists, and it may not be easy to resolve. What’s the fix? Here’s the Microsoft data center plan as of March 2025:

As AI infrastructure costs rise and model development evolves, shifting the costs to consumers becomes an appealing strategy for AI companies. While big enterprises such as government departments and universities may manage these costs, many small businesses and individual consumers may struggle.

Several observations are warranted:

  1. What happens if Microsoft cannot get consumers to pay the AI bills?
  2. What happens if people like this old dinobaby don’t want smart software and just shift to work flows without Microsoft products?
  3. What happens if the marvel of the Tensor and OpenAI’s and others’ implementations continue to hallucinate, creating more headaches than the methods cure?

Net net: Marketing may have gotten ahead of reality, but the black hole of costs is very real and not a hallucination. Can Microsoft escape a black hole like this one?

Stephen E Arnold, March 11, 2025

Microsoft Sends a Signal: AI, AIn’t Working

March 11, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

The problems with Microsoft’s AI push were evident from its start in 2023. The company thought it had identified the next big thing and had the big fish on the line. Now the work was easy: just reel in the dough.

Has it worked out for Microsoft? We know that big companies often have difficulty innovating. The enervating white board sessions which seek to answer the question, “Do we build it or buy it?” usually give way to: [a] Let’s lock it up somehow or [b] Let’s steal it because it won’t take our folks too long to knock out a me-too.

Microsoft sent a fairly loud beep-beep-beep when it began to cut back on its dependence on OpenAI. Not long ago, Microsoft trimmed some of its crazy spending for AI. Now we have the allegedly accurate information in “Microsoft Is Reportedly Plotting a Future without OpenAI.”

The write up states:

Microsoft has poured over $13 billion into the AI firm since 2019, but now it wants more control over its own models and costs. Simple enough in theory—build in-house alternatives, cut expenses, and call the shots.

Is this a surprise? No, I think it is just one more beep added to the already emitted beep-beep-beep.

Here’s my take:

  1. Narrowly focused smart software adds some useful capabilities to what I would call workflow enhancement. The narrow focus for an AI system reduces some of the wonkiness of the output. Therefore, certain tasks benefit; for example, grinding through data for a chemistry application or providing a call center operation with a good enough solution to rising costs. Broad use cases are more problematic.
  2. Humans who rely on information for a living don’t want to be caught out. This means that using smart software is an assist or a supplement. This is like an older person using a cane when walking on a senior citizens’ adventure tour.
  3. Productizing a broad use case for smart software is expensive and prone to the sort of failure rate associated with any new product or service. A good example is a self-driving auto with collision avoidance. Would you stand in front of such a vehicle, confident in the smart software’s ability not to run over you? I wouldn’t.

What’s happening at Microsoft is a reasonably predictable and understandable approach. The company wants to hedge its bets since big bucks are flowing out, not in. The firm thinks it has enough smarts to do a better job even though in my opinion this is unlikely. Remember Bob, Clippy, and Windows updates? I do.

Also, small teams believe their approach will be a winner. Big companies believe their people can row that boat faster than anyone else. I know from personal experience and observation that this is not true. But the appearance of effort and the illusion of high value work encourages the approach.

Plus, the idea that a “leadership team” can manage innovation is a powerful one. Microsoft’s leadership believes in its leadership. That’s why the company is a leader. (I love this logic.)

Net net: My hunch is that Microsoft’s AI push is a disappointment. Now the company can shift into SWAT team mode and overwhelm the problem: AI that does not pay for itself.

Will this approach work? Nope, but the outcome will be good enough. That is a bit more than one can say about Apple Intelligence: seriously out of step with the Softies.

Stephen E Arnold, March 11, 2025

Automobile Trivia: The Tesla Cybertruck and the Ford Pinto

March 11, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

I don’t cover the auto industry. However, this article caught my eye: “Cybertruck Goes to Mardi Gras Parade, Gets Bombarded by Trash and Flees in Shame: That’s Gotta Hurt.”

The write up reports:

With a whopping seven recalls in just over a year — and a fire fatality rate exceeding the infamous Ford Pinto— it’s never been a particularly great time to be a Cybertruck owner. But now, thanks to the political meddling of billionaire Tesla owner Elon Musk, it might be worse than ever. That’s what some Cybertruck drivers discovered firsthand at a Lundi Gras parade on Monday — the “Fat Monday” preamble to the famed Mardi Gras — when their hulking electric tanks were endlessly mocked and pelted with trash by revelers.

I did not know that the Tesla vehicle engaged in fire events at a rate greater than the famous Ford Pinto. I know the Pinto well. I bought one for a very low price. I drove it for about a year and sold it for a little more than I paid for it. I think I spent more time looking in my rear view mirrors than looking down the road. The Pinto, if struck from behind, would burn. I think the gas tank was made of some flimsy material. A bump in the back would cause the tank to leak and sometimes the vehicle would burst into flame. A couple of unlucky Pinto drivers suffered burns and some went to the big Ford dealership in the great beyond. I am not sure if the warranty was upheld.

I think this is interesting automotive trivia; for example, “What vehicle has a fire fatality rate exceeding the Ford Pinto?” The answer as I now know is the lovely and graceful Tesla Cybertruck.

The write up (which may be from The Byte or from Futurism) says:

According to a post on X-formerly-Twitter, at least one Cybertruck had its “bulletproof window” shattered by plastic beads before tucking tail and fleeing the parade under police protection. At least three Cybertrucks were reportedly there as part of a coordinated effort by an out-of-state Cybertruck Club to ferry parade marshals down the route. One marshal posted about their experience riding in the EV on Reddit, saying it was “boos and attacks from start to evacuation.”

I got a kick (not a recall or a fire) out of the write up and the plastic bead reference. Not as slick as “bouffon sous kétamine,” but darned good. And, no, I am not going to buy a Cybertruck. One year in Pinto fear was quite enough.

Now a test question: Which is more likely to explode? [a] a Space X rocket, [b] a Pinto, or [c] a Cybertruck?

Stephen E Arnold, March 11, 2025

AI and Two Villages: A Challenge in Some Large Countries

March 10, 2025

This blog post is the work of a humanoid dinobaby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you? We used AI to translate the original Russian into semi-English and to create the illustration. Hasta la vista, human Russian translator and human artist. That’s how AI works in real life.

My team and I are wrapping up our Telegram monograph. As part of the drill, we have been monitoring some information sources in Russia. We spotted the essay “AI and Capitalism.” (Note: I am not sure the link will resolve, but you can locate it via Yandex by searching for PCNews. I apologize, but some content is tricky to locate using consumer tools.)


The “white-collar village” and the “blue-collar village” generated by You.com. Good enough.

I mention the article because it makes clear how smart software is affecting one technical professional working in a Russian government-owned telecommunications company. The author’s day-to-day work requires programming. One description of the value of smart software appears in this passage:

I work as a manager in a telecom and since last year I have been actively modifying the product line, adding AI components to each product. And I am not the only one there – the movement is going on in principle throughout the IT industry, of which we are a part… Where we have seen the payoff is replacing tree navigation with a text search bar, helping to generate text on a specific topic taking into account the concept cloud of the subject area, aggregating information from sources with different data structures, extracting a sequence of semantic actions of a person while working on a laptop, simultaneous translation with imitation of any voice, etc. The goal of all these events, as before, is to increase labor productivity. Previously, a person dug with his hands, then with a shovel, now with an excavator. Indeed, now it’s easier to ask the model for an example of code  than to spend hours searching on Stack Overflow. This seriously speeds things up.

The author then identifies three consequences of the use of AI:

  1. Training will change because “you will need to retrain for another narrow specialty several times”
  2. Education will become more expensive, but who will pay? Perhaps as important, who will be able to learn?
  3. Society will change, which is a way of saying “social turmoil” lies ahead, in my opinion.

Here’s an okay translation of the essay’s final paragraph:

…in the medium term, the target architecture of our society will inevitably see a critical stratification into workers and educated people. Blue and white collar castes. The fence between them will be so high that films about a possible future will become a fairly accurate forecast. I really want to end up in a white-collar village in the role of a white collar worker. Scary.

What’s interesting about this person’s point of view is that AI is already changing work in the Russian Federation. The challenge will be that an allegedly “flat” social structure will be split into those who can implement smart software and those who cannot. The chatter about smart software is usually focused on which company will find a way to generate revenue from the massive investments required to create solutions that consumers and companies will buy.

What gets less attention is the apparent impact of the technology on countries which purport to make life “better” via a different system. If the author is correct, some large nation states are likely to face some significant social challenges. Not everyone can work in “a white-collar village.”

Stephen E Arnold, March 10, 2025

A French Outfit Points Out Some Issues with Starlink-Type Companies

March 10, 2025

Another one from the dinobaby. No smart software. I spotted a story on the Thales Web site, but when I went back to check a detail, it had disappeared. After a bit of poking, I found a recycled version called “Thales Warns Governments Over Reliance on Starlink-Type Systems.” The story must be accurate because it is from a “real” news outfit that wants my belief in its assertion of trust. Well, what do you know about trust?

Thales, as none of the people in Harrod’s Creek knows, is a French defence, intelligence, and go-to military hardware type of outfit. Thales and Dassault Systèmes are among the world leaders in a number of cutting-edge technology sectors. As a person who did some small work in France, I heard the Thales name mentioned a number of times. Thales has a core competency in electronics, military communications, and related fields.

The cited article reports:

Thales CEO Patrice Caine questioned the business model of Starlink, which he said involved frequent renewal of satellites and question marks over profitability. Without further naming Starlink, he went on to describe risks of relying on outside services for government links. “Government actors need reliability, visibility and stability,” Caine told reporters. “A player that – as we have seen from time to time – mixes up economic rationale and political motivation is not the kind that would reassure certain clients.”

I am certainly no expert in the lingo of a native French speaker using English words. I do know that the French language has a number of nuances which are difficult for a dinobaby like me to understand without saying, “Pourriez-vous répéter, s’il vous plaît?”

I noticed several things; specifically:

  • The phrase “satellite renewal.” The idea is that the useful life of a Starlink-type device is shorter than some other technologies, such as those from Thales-type companies. Under the surface is the French attitude toward “fast fashion.” The idea is that cheap products are wasteful; well-made products, like a well-made suit, last a long time. Longer than a black baseball cap is how I interpreted the reference to “renewal.” I may be wrong, but this is a quite serious point underscoring the issue of engineering excellence.
  • The reference to “profitability” seems to echo news reports that Starlink itself may be on the receiving end of preferential contract awards. If those types of cozy deals go away, will the Starlink-type business generate sufficient revenue to sustain innovation, higher quality, and longer life spans? Based on my limited knowledge of things French, this is a fairly direct way of pointing out the weak business model of the Starlink-type of service.
  • The use of the words “reliability” and “stability” struck me as directing two criticisms at the Starlink-type of company. On one level the issue of corporate stability is obvious. However, “stability” applies to engineering methods as well as mental setup. Henri Bergson observed, “Think like a man of action, act like a man of thought.” I am not sure what M. Bergson would have thought about a professional wielding a chainsaw during a formal presentation.
  • The direct reference to “mixing up” reiterates the mental stability and corporate stability referents. But the killer comment, the merging of “economic rationale and political motivation,” flashes bright warning lights to some French professionals and would probably resonate with other Europeans. I wonder what Austrian government officials thought about the chainsaw performance.

Net net: Some of the actions of a Starlink-type of company have been disruptive. In game theory, “keep people guessing” is a proven tactic. Will it work in France? Unlikely. Chainsaws will not be permitted in most meetings with Thales or French agencies. The baseball cap? Probably not.

Stephen E Arnold, March 10, 2025

From $20 a Month to $20K a Month. Great Idea… or Not?

March 10, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

OpenAI was one of many smart software companies. If you meet the people on my team, you will learn that I dismissed most of the outfits as search-and-retrieval outfits looking for an edge. Search definitely needs an edge, but I was not confident that predictive generation of an “answer” was a solution. It was a nifty party trick, but then the money started flowing. In January 2023, Microsoft put Google’s cute sharp teeth on edge. Suddenly AI or smart software was the next big thing. The virtual reality thing did not ring the bell. The increasingly weird fiddling with mobile phones did not get the brass ring. And the idea of Apple becoming the next big thing in chips has left everyone confused. My M1 devices work pretty well, and unless I look at the label on the gizmos, I cannot tell an M1 from an M3. Do I care? Nope.

But OpenAI became news. It squabbled with the mastermind of “renewable” satellites, definitely weird trucks, and digging tunnels in Las Vegas. (Yeah, nice idea, just not for anyone who does not want to get stalled in traffic.) When ChatGPT became available, one of those laboring in my digital vineyards signed me up. I fiddled with it and decided that I would run some of my research through the system. I learned that my research was not in the OpenAI “system.” I had it do some images. Those sucked. I will cancel this week.

I put in my AI folder the article “OpenAI Is Getting Ready to Release PhD Level AI Agents.” I was engaging in some winnowing and I scanned it. In early February 2025, Digital Marketing News wrote about PhD level agents. I am not a PhD. I quit before I finished my dissertation to work in the really socially conscious nuclear unit of that lovable outfit Halliburton. You know the company. That’s the one that charged about $950.00 for a gallon of fuel during the Iraq war. You will also associate Dick Cheney, a fun person, with the company. So no PhD for me.

I was skeptical because of the dismal performance of ChatGPT 4, oh, whatever, trying to come up with the information I have assembled for my new book for law enforcement professionals. Then I read a Slashdot post with the title “OpenAI Plots Charging $20,000 a Month For PhD-Level Agents” shared from a publication I don’t know much about. I think it is like 404 or a for-fee Substack. The publication has great content, and you have to pay for it.

Be that as it may, the Slashdot post reports or recycles information suggesting that the fee for a PhD-level version of OpenAI’s smart software will be a modest $20,000 a month. I think the service one of my team registered for costs $20.00 per month. What’s with the 20s? Twenty is a pronic number; that is, it can be slapped on a high school math test so students can say it is the product of two consecutive integers. In college I knew a person who was a numerologist. I recall that the meaning of 20 was cooperation.
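For what it is worth, the pronic claim checks out. Here is a minimal sketch (the function name is my own invention, not anything from the post):

```python
def is_pronic(n: int) -> bool:
    """Return True when n equals k * (k + 1) for some non-negative integer k."""
    k = 0
    while k * (k + 1) < n:
        k += 1
    return k * (k + 1) == n

# 20 = 4 * 5, so it is pronic; 21 is not.
print(is_pronic(20))  # True
print(is_pronic(21))  # False
```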

The interesting part of the Slashdot post was the comments. I scanned them and concluded that some of the commenters saw the high-end service killing jobs for high-end programmers and consultants. Yeah, maybe. The idea that a code base which struggles with information related to a widely used messaging application is suddenly going to replicate the information I have obtained from my sources in Eastern Europe seems a bit of a stretch. Heck, ChatGPT could barely do English. Russian? Not a chance, but who knows. And for $20,000 a month it is not likely this dinobaby will take what seems like unappetizing bait.

One commenter allegedly named TheGreatEmu said:

I was about to make a similar comment, but the cost still doesn’t add up. I’m at a national lab with generally much higher overheads than most places, and a postdoc runs us $160k/year fully burdened. And of course the AI sure as h#ll can’t connect cables, turn knobs, solder, titrate, use a drill press, clean, chat with the machinist who doesn’t use email, sneaker net data out of the air-gapped lab, or understand napkin drawings over beer where all real science gets done. Or do anything useful with information that isn’t already present in the training data, and if you’re not pushing past existing knowledge boundaries, you’re not really doing science are you?

My hunch is that this is a PR or marketing play. Let’s face it. With Microsoft cutting off data center builds and Google floundering with cheese, the smart software revolution is muddling forward. The wins are targeted applications in quite specific domains. Yes, gentle reader, that’s why people pay for Chemical Abstracts online. The information is not on the public Internet. The American Chemical Society has information that the super-capable AI outfits have not figured out how to obtain, and the non-computational, organic, or inorganic chemist is unlikely to get it from a somewhat volatile outfit. Get something wrong in a nuclear lab, and smart software won’t be too helpful if it hallucinates.

Net net: Is everything marketing? At age 80, my answer is, “Absolutely.” Sam AI-Thinks in terms of trillions. Is $20 trillion the next pricing level?

Stephen E Arnold, March 10, 2025

Next-Gen IT Professionals: Up for Doing a Good Job?

March 10, 2025

The entirety of the United States is facing a crisis when it comes to decent-paying jobs. Businesses are watching their budgets like misers clutching their purse strings, so they’re hiring the cheapest tech workers possible. Medium explains that “8 Out Of 10 Senior Engineers Feel Undervalued: The Hidden Crisis In Tech’s Obsession With Junior Talent.”

Another term for budgeting and being cheap is “cost optimization.” Experienced tech workers are being replaced with green newbies who wouldn’t know how to find errors if they were on the backs of their hands. Or the experienced tech workers are bogged down by mentoring their younger associates and fixing their mistakes.

It’s a recipe for disaster, but cost optimization is what businesses care about. There will be casualties in the trend, not all of them human:

“The silent casualties of this trend:

1. Systems designed by juniors who’ve never seen a server catch fire

2. Codebases that work right up until they dont

3. The quiet exodus of graybeards into early retirement”

Junior tech workers are cheaper, but it is difficult to just ask smart software to impart experience in a couple hundred words. Businesses are also treating their seasoned employees like they are mentors:

“I’m all for mentoring. But when companies treat seniors as:

  • Free coding bootcamp instructors
  • Human linters for junior code
  • On-call explainers of basic algorithms

…they’re not paying for mentorship. They’re subsidizing cheap labor with senior salaries.”

There’s a happy medium where having experienced tech experts work with junior tech associates can be beneficial for everyone involved. But it is cheaper to dump the dinobabies and assume that those old systems can be fixed when they go south.

Whitney Grace, March 10, 2025

AI Generated Code Adds To Technical Debt

March 7, 2025

Technical debt refers to flawed code that results in more work later. It’s okay for projects to be rolled out with some technical debt as long as it is paid back. The problem comes when the code isn’t corrected and it snowballs into a huge problem. LeadDev explores how AI code affects projects: “How AI Generated Code Compounds Technical Debt.” The article highlights that it has never been easier to write code, especially with AI, but there’s a large accumulation of technical debt. The technical debt is so large that it is comparable to the US’s ballooning national debt.

GitClear tracked an eightfold increase during 2024 in the frequency of code blocks with five or more lines that duplicate adjacent code. This was ten times higher than the previous two years. GitClear found some more evidence of technical debt:

“That same year, 46% of code changes were new lines, while copy-pasted lines exceeded moved lines. “Moved” lines is a metric GitClear has devised to track the rearranging of code, an action typically performed to consolidate previous work into reusable modules. “Refactored systems, in general, and moved code in particular, are the signature of code reuse,” says Bill Harding, CEO of Amplenote and GitClear. A year-on-year decline in code movement suggests developers are less likely to reuse previous work, a marked shift from existing industry best practice that would lead to more redundant systems with less consolidation of functions.”
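The distinction between copy-pasted and “moved” code is easier to see in a toy example. The snippet below is my own illustration, not code from the GitClear report: two near-duplicate report functions of the kind a duplication metric flags, followed by the consolidated helper that a refactor (moved lines) would produce:

```python
# Copy-paste style: two near-duplicate blocks, the pattern duplication metrics flag.
def report_users(users):
    rows = []
    for u in users:
        rows.append(f"{u['name']:<20}{u['email']}")
    return "\n".join(rows)

def report_orders(orders):
    rows = []
    for o in orders:
        rows.append(f"{o['id']:<20}{o['total']}")
    return "\n".join(rows)

# Refactored style: the shared formatting logic "moves" into one reusable helper.
def report(items, columns):
    first, second = columns
    return "\n".join(f"{item[first]:<20}{item[second]}" for item in items)

users = [{"name": "Ada", "email": "ada@example.com"}]
print(report_users(users) == report(users, ("name", "email")))  # True
```

The output is identical either way; the difference is that a future formatting change lands in one place instead of two, which is the consolidation the quoted passage says is declining.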

These facts might not seem alarming, especially if one reads Google’s 2024 DORA report, which said there was a 25% increase in AI usage to quicken code reviews and documentation. The downside was a 7.2% decrease in delivery stability. These numbers might be small now, but what is happening is like making a copy of a copy of a copy: the integrity is lost.

It’s also like relying entirely on spellcheck to correct your spelling and grammar. While these are good tools to have, what will you do when you don’t have the fundamentals in your toolbox or find yourself in a spontaneous spelling bee?

Whitney Grace, March 7, 2025

Patents, AI, and Lawyers: Litigators, Start Your Engines

March 7, 2025

Patents can be a useful source of insights, a fact startup Patlytics is banking on. TechCrunch reports, "Patlytics Raises $14M for its Patent Analytics Platform." The firm turbo-charges intellectual property research with bespoke AI. We learn:

"Patlytics’ large language models (LLMs) and generative AI-powered engine are custom-built for IP-related research and other work such as patent application drafting, invention disclosures, invalidity analysis, infringement detection/analysis, Standard Essential Patents (SEPs) analysis, and IP assets portfolio management."

Apparently, the young firm is already meeting with success. We learn:

"The 1-year-old startup said it has seen a 20x increase in ARR and an 18x expansion in its customer base within six months, with a sustained 300% month-over-month growth rate. Patlytics did not disclose how many customers it has but said approximately 50% of its customer base are law firms, and the other half are corporate clients from industries like semiconductors, bio, pharmaceuticals, and more. Additionally, the company now serves customers in South Korea and Japan, and recently launched its first pilot product in London and Germany. Its clients include Abnormal Security, Google, Koch Disruptive Technologies, Quinn Emanuel Urquhart & Sullivan, Richardson Oliver, Reichman Jorgensen Lehman & Feldberg, Xerox, and Young Basile."

That is quite a client roster in such a short time. This round, combined with April’s seed round, brings the company’s funding total to $21 million. The firm will put the funds to use hiring new engineers and expanding its products. Based in New York, Patlytics was launched in January 2024.

Will AI increase patent litigation? Do Tesla Cybertrucks attract attention?

Cynthia Murrell, March 7, 2025

Another New Search System with AI Too

March 7, 2025

There’s a new AI engine in town, one specifically designed to assist with research. The Next Web details the newest invention, which comes from a big name in the technology industry: “Tech Mogul Launches AI Research Engine Corpora.ai.” Mel Morris is a British tech mogul and the man behind the latest research engine: Corpora.ai.

Morris had Corpora.ai designed to provide in-depth research from single prompts. It is also an incredibly fast engine: it can process two million documents per second. Corpora.ai works by reading a prompt; then the AI algorithm scans information, including legal documents, news articles, academic papers, and other Web data. The information is then compiled into summaries or reports.

Morris insists that Corpora.ai is a research engine, not a search engine. He invested $15 million of his personal fortune into the project. Morris doesn’t want to compete with other AI projects, instead he wants to form working relationships:

“His funding aims to create a new business model for LLMs. Rather than challenge the leading GenAI firms, Corpora plans to bring a new service to the sector. The research engine can also integrate existing models on the market. ‘We don’t compete with OpenAI, Google, or Deepseek,’ Morris said. ‘The nice thing is, we can play with all of these AI vendors quite nicely. As they improve their models, our output gets better. It’s a really great symbiotic relationship.’”

Mel Morris is a self-made businessman and the former head of King, the Candy Crush game creator. He also owned and sold the dating Web site uDate. He might see a return on his Corpora.ai investment.

Whitney Grace, March 7, 2025
