The Thought Process May Be a Problem: Microsoft and Copilot Fees

February 4, 2025

Yep, a dinobaby wrote this blog post. Replace me with a subscription service or a contract worker from Fiverr. See if I care.

Here’s a connection to consider. On one hand, we have the remarkable attack surface of Microsoft software. Think SolarWinds. Think about the note from the US government telling Microsoft to fix its security. Think about the flood of bug fixes required to make Microsoft software secure. Think about the happy bad actors gleefully taking advantage of what is the equivalent of a piece of chocolate cake left on a picnic table in Iowa in July.

Now think about the marketing blast that kicked off the “smart software” revolution. Google flashed its weird yellow and red warning lights. Sam AI-Man began thinking in terms of trillions of dollars. Venture firms wrote checks like it was 1999 again. Even grade school students are using smart software to learn about George Washington crossing the Delaware.

And where are we? ZDNet published an interesting article which may have the immediate effect of generating some negative vibes about Microsoft. To ZDNet’s credit, the write up is bluntly titled “The Microsoft 365 Copilot Launch Was a Total Disaster.” I want to share some comments from the write up before I return to the broader notion that the “thought process” is THE Microsoft problem.

I noted this passage:

Shortly after the New Year, someone in Redmond pushed a button that raised the price of its popular (84 million paid subscribers worldwide!) Microsoft 365 product. You know, the one that used to be called Microsoft Office? Yeah, well, now the app is called Microsoft 365 Copilot, and you’re going to be paying at least 30% more for that subscription starting with your next bill.

How about this statement:

No one wants to pay for AI

Some people do, but these individuals do not seem to be the majority of computing device users. Furthermore, there are some brave souls suggesting that today’s approach to AI is not improving even as the costs of delivering AI continue to rise. Remember those Sam AI-Man trillions?

Microsoft is not too good with numbers either. The article walks through the pricing and cancellation functions. Here’s the key statement, which follows an explanation of Microsoft’s failure to keep the information consistent across its empire:

It could be worse, I suppose. Just ask the French and Spanish subscribers who got a similar pop-up message telling them their price had gone from €10 a month to €13,000. (Those pesky decimals.)
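What might turn €13.00 into €13,000? I have no inside information from Redmond, but one classic failure mode is the decimal-separator mix-up: much of Europe writes thirteen euros as “13,00,” and software reading that with US conventions treats the comma as a grouping character. A minimal, purely hypothetical sketch (the parse_price helper is my invention, not anything Microsoft ships):

```python
# Hypothetical illustration of a decimal-separator mix-up, not ZDNet's or
# Microsoft's actual diagnosis.
from decimal import Decimal

def parse_price(text: str, decimal_sep: str, group_sep: str) -> Decimal:
    """Parse a human-formatted price using the given separator conventions."""
    return Decimal(text.replace(group_sep, "").replace(decimal_sep, "."))

price_on_screen = "13,00"  # thirteen euros, written the French/Spanish way

right = parse_price(price_on_screen, decimal_sep=",", group_sep=".")  # Decimal('13.00')
wrong = parse_price(price_on_screen, decimal_sep=".", group_sep=",")  # Decimal('1300')

print(right, wrong)  # 13.00 1300
```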

Yep, details. Let’s go back to the attack surface idea. Microsoft’s corporate thought process creates problems. I think the security and Copilot examples make clear that something is amiss at Microsoft. The engineering of software and the details of that engineering are not a priority.

That is the problem. And, to me, it sure seems as though Microsoft’s worst characteristics are becoming the dominant features of the company. Furthermore, I believe that the organization cannot remediate itself. That is very concerning. Not only have users lost control, but the firm is unconsciously creating a greater set of problems for many people and organizations.

Not good. In fact, really bad.

Stephen E Arnold, February 4, 2025

Two Rules for Software. All Software If You Can Believe It

January 31, 2025

Did you know that there are two rules that dictate how all software is written? No, we didn’t either. FJ van Wingerde of the Ask The User blog explains them in his post: “The Two Rules Of Software Creation From Which Every Problem Derives.” After a bunch of jib jab about the failures of various software projects, van Wingerde states the rules:

“It’s the two rules that actually are behind every statement in the agile manifesto. The manifesto unfortunately doesn’t name them really; the people behind it were so steeped in the problems of software delivery—and what they thought would fix it—that they posited their statements without saying why each of these things are necessary to deliver good software. (Unfortunately, necessary but not enough for success, but that we found out in the next decades.) They are [1] Humans cannot accurately describe what they want out of a software system until it exists. and [2] Humans cannot accurately predict how long any software effort will take beyond four weeks. And after 2 weeks it is already dicey.”

The first rule applies to most human activities, with one caveat: the inability to accurately describe what is wanted may be peculiar to software. Humans know they have a problem, but they do not have a solution in hand. The smart humans figure out how to solve the problem and, in the process, learn how to describe it with greater accuracy.

As for rule number two, are project management and weekly software maintenance all just lucky guesses then? Perhaps the effort changes daily, and that is what justifies paying software developers. Then again, someone needs to keep the systems running. Tech people keep businesses running, not to mention the entire world.

If software development really has only these two rules, we now know why developers cannot provide time estimates or assurances that their software works the way leadership trained as accountants and lawyers expects. Rest easy. Software is, one hopes, good enough, and advertising can cover the costs.

Whitney Grace, January 31, 2025

Happy New Year the Google Way

January 31, 2025

We don’t expect Alphabet Inc. to release anything but positive news these days. Business Standard reports another revealing headline, especially for the Googlers in the story: “Google Layoffs: Sundar Pichai Announced 10% Job Cuts In Managerial Roles.” After a huge push in the wake of wokeness to hire underrepresented groups, aka DEI hires, Google has slowly been getting rid of its deadweight employees. That is what Alphabet Inc. probably calls them.

DEI hires were the first to go. Now, in the last vestiges of Google’s 2024 push for efficiency, 10% of its managerial positions are going bye-bye. Among those positions are directors and vice presidents. CEO Sundar Pichai says the downsizing also stems from stiffer competition from AI companies, such as OpenAI. These companies are challenging Google’s dominance in the tech industry.

Pichai started the efficiency push in 2022, when people were beginning to push back against the ineffectiveness of DEI hires, especially as budgets were shrunk by inflation. In January 2023, 12,000 employees were laid off. Pichai is also changing the meaning of “Googleyness”:

“At the same meeting, Pichai introduced a refined vision for ‘Googleyness’, a term that once broadly defined the traits of an ideal Google employee but had grown too ambiguous. Pichai reimagined it with a sharper focus on mission-driven work, innovation, and teamwork. He emphasized the importance of creating helpful products, taking bold risks, fostering a scrappy attitude, and collaborating effectively. “Updating modern Google,” as Pichai described it, is now central to the company’s ethos.”

The new spin on being Googley. Enervating. A month into the bright new year, let me ask a non-Googley question: “How are those job searches, bills, and self-esteem coming along?”

Whitney Grace, January 31, 2025

The Joust of the Month: Microsoft Versus Salesforce

January 29, 2025

These folks don’t seem to see eye to eye: Windows Central tells us, “Microsoft Claps Back at Salesforce—Claims ‘100,000 Organizations’ Had Used Copilot Studio to Create AI Agents by October 2024.” Microsoft’s assertion is in response to jabs from Salesforce CEO Marc Benioff, who declares, “Microsoft has disappointed everybody with how they’ve approached this AI world.” To support this allegation, Benioff points to lines from a recent MarketWatch post. A post which, coincidentally, also lauds his company’s success with AI agents. The smug CEO also insists he is receiving complaints about his giant competitor’s AI tools. Writer Kevin Okemwa elaborates:

“Benioff has shared interesting consumer feedback about Copilot’s user experience, claiming customers aren’t finding themselves transformed while leveraging the tool’s capabilities. He added that customers barely use the tool, ‘and that’s when they don’t have a ChatGPT license or something like that in front of them.’ Last year, Salesforce’s CEO claimed Microsoft’s AI efforts are a ‘tremendous disservice’ to the industry while referring to Copilot as the new Microsoft Clippy because it reportedly doesn’t work or deliver value. As the AI agent race becomes more fierce, Microsoft has seemingly positioned itself in a unique position to compete on a level playing field with key players like Salesforce Agentforce, especially after launching autonomous agents and integrating them into Copilot Studio. Microsoft claims over 100,000 organizations had used Copilot Studio to create agents by October 2024. However, Benioff claimed Microsoft’s Copilot agents illustrated panic mode, majorly due to the stiff competition in the category.”

One notable example, writes Okemwa, is Zuckerberg’s vision of replacing Meta’s software engineers with AI agents. Oh, goodie. This anti-human stance may have inspired Benioff, who is second-guessing plans to hire live software engineers in 2025. At least Microsoft still appears to be interested in hiring people. For now. Will that antiquated attitude hold the firm back, supporting Benioff’s accusations?

Mount your steeds. Fight!

Cynthia Murrell, January 29, 2025

And 2024, a Not-So-Wonderful Year

January 22, 2025

Every year has tech failures; some of them join the zeitgeist as cultural phenomena like Windows Vista, Windows Me, Apple’s Pippin game console, chatbots, etc. PC Mag runs down the flops in “Yikes: Breaking Down the 10 Biggest Tech Fails of 2024.” The list starts with Intel’s horrible year: a booted CEO and poor chip performance. It follows up with the Salt Typhoon hack, which proved (not that we didn’t already know it from TikTok) that China is spying on every US citizen, with a focus on bigwigs.

National Public Data lost 272 million Social Security numbers to a hacker. That was a great summer day for the hacker, but the summer travel season became a nightmare when a faulty update to CrowdStrike’s kernel-level software grounded over 2,700 flights and practically locked down the US borders. Microsoft’s Recall, an AI search tool that took snapshots of user activity which could be recalled later, was another concern. What if passwords and other sensitive information were recorded?

The fabulous Internet Archive was hacked and taken down by a bad actor protesting the Israel-Gaza conflict. It makes us worry about preserving Internet history and other important media. Rabbit and Humane released AI-powered hardware that was supposed to be a hands-free way to use a digital assistant, but both flopped. JuiceBox ended software support for its EV chargers, while Scarlett Johansson accused OpenAI of copying her voice for its Voice Mode feature. She threatened legal action.

The worst of the worst is this:

“Days after he announced plans to acquire Twitter in 2022, Elon Musk argued that the platform needed to be “politically neutral” in order for it to “deserve public trust.” This approach, he said, “effectively means upsetting the far right and the far left equally.” In March 2024, he also pledged to not donate to either US presidential candidate, but by July, he’d changed his tune dramatically, swapping neutrality for MAGA hats. “If we want to preserve freedom and a meritocracy in America, then Trump must win,” Musk tweeted in September. He seized the @America X handle to promote Trump, donated millions to his campaign, shared doctored and misleading clips of VP Kamala Harris, and is now working closely with the president-elect on an effort to cut government spending, which is most certainly a conflict of interest given his government contracts. Some have even suggested that he become Speaker of the House since you don’t have to be a member of Congress to hold that position. The shift sent many X users to alternatives like Bluesky, Threads, and Mastodon in the days after the US election.”

Let’s assume PC Mag is on the money. Will the influence of the Leonardo da Vinci of modern times make everything better? Absolutely. I mean, the last SpaceX rocket almost worked. No Tesla has exploded in my neighborhood this week. Perfect.

Whitney Grace, January 22, 2025

Bossless: Managers of the Future Recognize They Cannot Fix Management or Themselves

January 17, 2025

A dinobaby-crafted post. I confess. I used smart software to create the heart-wrenching scene of a farmer facing a tough 2025.

I have never heard of Robert Walters. Sure, I worked on projects in London for several years, but that outfit never hit my radar. Now it has, and I think its write up is quite interesting. “Conscious Unbossing – 52% of Gen-Z Professionals Don’t Want to Be Middle Managers” introduced me to a new bound phrase: Conscious unbossing. That is super and much more elegant than the coinage ensh*tification.

A conscious unbosser looks in the mirror and sees pain. He thinks, “I can’t make the decision to keep or fire Tameka. I can’t do the budget because I don’t have my MBA study group to help me. I can’t give that talk to the sales team because I have never sold a thing in my life.” Thanks, MSFT Copilot. I figured out how to make you work again. Too bad about killing those scanners, right?

The write up reports:

Over half of Gen-Z professionals don’t want to take on a middle management role in their career.

Is there some analysis? Well, sort of. The Robert Walters outfit offers this:

The Robert Walters poll found that 72% of Gen-Z would actually opt for an individual route to advance their career – one which focuses on personal growth and skills accumulation over taking on a management role (28%). Lucy Bisset, Director of Robert Walters North comments: “Gen-Z are known for their entrepreneurial mindset – preferring to bring their ‘whole self’ to projects and spend time cultivating their own brand and approach, rather than spending time managing others. “However, this reluctance to take on middle management roles could spell trouble for employers later down the line.”

The entrepreneurial mindset and “whole self” desire are what the survey sample’s results suggest. The bigger issue, in my opinion, is, “What’s caused a big chunk of Gen-Z (whatever that is) to want to have a “brand” and avoid the responsibility of making decisions, dealing with consequences (good and bad) of those decisions, and working with people to build a process that outputs results?”

Robert Walters sidesteps this question. Let me take a whack at why the Gen-Z crowd (people born roughly between 1997 and 2012) is into what I call “soft” work and getting paid to have experiences.

  1. This group grew up with awards for nothing. Run in a race, lose, and get a badge. Do this enough and the “losers” come to know that they are non-performers no matter what mommy, daddy, and the gym teacher told them.
  2. Gen-Z was a group who matured in a fantasy land with nifty computers, mobile phones, and social media. Certain life skills were not refined in the heat treating process of a competitive education.
  3. Affirmation and attention became more important as their social opportunities narrowed. The great tattooing craze grabbed hold of those in Gen-Z. When I see a 32-year-old restaurant worker adorned with tattoos, I wonder, “What the heck was he/she/ze thinking?” I know what I am thinking: “Insecurity. A desire to stand out. A permanent ‘also participated’ badge which will look snappy when the tattooed person is 70 years old.”

Net net: I think the data in the write up is suggestive. I have questions about the sample size, the method of selection, and the statistical approach taken to determine if a “result” is verifiable. One thing is certain. Outfits like McKinsey, Bain, and BCG will have to rework their standard slide decks for personnel planning and management techniques. However, I can overlook the sparse information in the write up and the shallow analysis. I love that “conscious unbossing” neologism. See, there is room for sociology and psychology majors in business. Not much. But some room.

Stephen E Arnold, January 17, 2025

Amazon Embodies Modern Management: Efficient, Effective, Encouraging

January 16, 2025

A dinobaby-crafted post. I confess. I used smart software to create the heart-wrenching scene of a farmer facing a tough 2025.

I don’t know if this write up is spot on, but I loved it. Navigate to “Amazon Worker – Struck and Shot in New Orleans Terror Attack – Initially Denied Time Off.” If the link is dead, complain to MSFT, please. (Perhaps the headline tells the tale?) The article pointed out:

Alexis Scott-Windham was celebrating the New Year with friends on Bourbon Street when a pickup truck mounted the sidewalk and rammed a crowd shortly after 3 am local time. She was treated in hospital after the back of her right foot was run over by the vehicle, and she was also shot in the foot. The bullet remains in her limb while doctors work out the best course of action to remove it while she recovers at home. The regional Times-Picayune newspaper interviewed Scott-Windham, who revealed she had been denied medical leave by the Amazon warehouse where she works for a medical checkup in two weeks’ time. The mother feared if she was absent from work for that appointment, she would lose her job.

Several observations are warranted:

  1. Struck means that the vehicle hit her. That would probably test the situational awareness of a Delta Force operator walking with pals to the Green Beans.
  2. Shot. Now when a person is shot, there is the wound itself. However, the shock and subsequent pain are, for some, more than a passing annoyance. I knew a person who flinched each time a sharp sound interrupted a conversation. That individual, who received a military award for bravery, told me, “Just a reflex.” Sure. Reflex. Hard-wired decades after the incident in the Vietnam “conflict.”
  3. Fear of being fired for injuries incurred in a terrorist incident. That’s a nifty way to motivate employees to do their best and trust an organization.

Herewith, the dinobaby award for outstanding management goes to the real or virtual individual who denied the injured worker’s request for time off. Wear the Outstanding Management insignia proudly. When terminating people, the insignia is known to blink in Morse code, “Amazon is wonderful.”

Stephen E Arnold, January 16, 2025

Agentic Workflows and the Dust Up Between Microsoft and Salesforce

January 14, 2025

Prepared by a still-alive dinobaby.

The Register, a UK online publication, does a good job of presenting newsworthy events with a touch of humor. Today I spotted a new type of information in the form of an explainer plus management analysis. Plus the lingo and organization suggest a human did all or most of the work required to crank out a very good article called “In AI Agent Push, Microsoft Re-Orgs to Create CoreAI – Platform and Tools Team.”

I want to highlight the explainer part of the article. The focus is on the notion of agentic; specifically:

agentic applications with memory, entitlements, and action space that will inherit powerful model capabilities. And we will adapt these capabilities for enhanced performance and safety across roles, business processes, and industry domains. Further, how we build, deploy, and maintain code for these AI applications is also fundamentally changing and becoming agentic.

These words are attributed to Microsoft’s top dog Satya Nadella, but they sound as if one of the highly paid wordsmiths laboring for the capable Softies wrote them. Nevertheless, the idea is important. In order to achieve the agentic pinnacle, Microsoft has to reorganize. Whoever figures out how to make agentic applications work across different vendors’ solutions will be able to make money. That’s the basic idea: Smart software is going to create a new big thing for enterprise software and probably for some consumers.

The write up explains:

It’s arguably just plain old software talking to plain old software, which would be nothing new. The new angle here, though, is that it’s driven mainly by, shall we say, imaginative neural networks and models making decisions, rather than algorithms following entirely deterministic routes. Which is still software working with software. Nadella thinks building artificially intelligent agentic apps and workflows needs “a new AI-first app stack — one with new UI/UX patterns, runtimes to build with agents, orchestrate multiple agents, and a reimagined management and observability layer.”
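To make the distinction between “deterministic algorithms” and “models making decisions” concrete, here is a minimal, hypothetical sketch of an agentic loop. It is not Copilot Studio, Nadella’s “AI-first app stack,” or anyone’s product; the model_choose_action function is a stand-in for whatever neural network picks the next step, while the tools themselves remain plain old software.

```python
# Hypothetical agentic loop: deterministic tools, but a model (faked here)
# decides which tool to call next and with what input.
from typing import Callable, Dict, List, Tuple

def look_up_order(order_id: str) -> str:
    return f"Order {order_id}: shipped 2025-01-10"

def draft_email(text: str) -> str:
    return f"DRAFT EMAIL: {text}"

TOOLS: Dict[str, Callable[[str], str]] = {
    "look_up_order": look_up_order,
    "draft_email": draft_email,
}

def model_choose_action(goal: str, history: List[str]) -> Tuple[str, str]:
    """Stand-in for the neural network. A real system would call an LLM and
    parse its chosen tool plus arguments; this fake follows a fixed plan."""
    if not history:
        return "look_up_order", "A-123"
    return "draft_email", f"Update for customer: {history[-1]}"

def run_agent(goal: str, max_steps: int = 3) -> List[str]:
    history: List[str] = []
    for _ in range(max_steps):
        tool_name, argument = model_choose_action(goal, history)
        result = TOOLS[tool_name](argument)   # plain old software does the work
        history.append(result)
        if tool_name == "draft_email":        # crude stopping condition
            break
    return history

print(run_agent("Tell the customer where order A-123 is"))
```

Swap the stand-in for an actual model call and the orchestration, entitlement, and observability questions in Nadella’s memo show up immediately.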

To win the land in this new territory, Microsoft must have a CoreAI team. Google and Salesforce presumably have this type of setup. Microsoft has to step up its AI efforts. The Register points out:

Nadella noted that “our internal organizational boundaries are meaningless to both our customers and to our competitors”. That’s an odd observation given Microsoft published his letter, which concludes with this observation: “Our success in this next phase will be determined by having the best AI platform, tools, and infrastructure. We have a lot of work to do and a tremendous opportunity ahead, and together, I’m looking forward to building what comes next.”

Here’s what I found interesting:

  1. Agentic is the next big thing in smart software. Essentially smart software that does one thing is useful. Orchestrating agents to do a complex process is the future. The software decides. Everything works well — at least, that’s the assumption.
  2. Microsoft, like Google, is now in a Code Yellow or Code Red mode. The company feels the heat from Salesforce. My hunch is that Microsoft knows that add-ins like Ghostwriter for Microsoft Office are more useful than Microsoft’s own Copilot for many users. If the same boiled fish appears on the enterprise menu, Microsoft is in a world of hurt from Salesforce and probably a lot of other outfits.
  3. The re-org parallels the disorder that surfaced at Google when it fixed up its smart software operation or tried to deal with the clash of the wizards in that estimable company. Pushing boxes around on an organization chart is honorable work, but that management method may not deliver the agentic integration some people want.

The conclusion I drew from The Register’s article is that the big AI push and the big players’ need to pop up a conceptual level in smart software is perceived as urgent. Costs? No problem. Hallucination? No problem. Hardware availability? No problem. Software? No problem. A re-organization is obvious and easy. No problem.

Stephen E Arnold, January 14, 2025

GitLab Identifies a Sooty Pot and Does Not Offer a Fix

January 9, 2025

This is an official dinobaby post. No smart software involved in this blog post.

GitLab’s Sabrina Farmer is a sharp-thinking person. Her “Three Software Development Challenges Slowing AI Progress” articulates an issue often ignored or just unknown. Specifically, according to her:

AI is becoming an increasingly critical component in software development. However, as is the case when implementing any new tool, there are potential growing pains that may make the transition to AI-powered software development more challenging.

Ms. Farmer is being kind and polite. I think she is suggesting that the nest with the AI eggs from the fund-raising golden goose has become untidy. Perhaps, I should use the word “unseemly”?

She points out three challenges, which I interpret as the equivalent of famously hard math problems like cracking the Riemann Hypothesis or the Poincaré Conjecture. These are:

  1. AI training. Yeah, marketers write about smart software. But a relatively small number of people fiddle with the knobs and dials on the training methods and the rat’s nests of computational layers that make life easy for an eighth grader writing an essay about Washington’s alleged crossing of the Delaware River whilst standing up in a boat rowed by hearty, cheerful lads. Big demand, lots of pretenders, and very few 10X coders and thinkers are available. AI Marketers? A surplus because math and physics are hard and art history and social science are somewhat less demanding on today’s thumb typers.
  2. Tools, lots of tools. Who has time to keep track of every “new” piece of smart software tooling? I gave up as the hyperbole got underway in early 2023. When my team needs to do something specific, they look / hunt for possibilities. Testing is required because smart software often gets things wrong. Some call this “innovation.” I call it evidence of the proliferation of flawed or cute software. One cannot machine titanium with lousy tools.
  3. Management measurements. Give me a break, Ms. Farmer. Managers are often evidence of the Peter Principle in action, or they are accountants or lawyers. How can one measure what one does not use, understand, or create? Those chasing smart software are not making spindles for a wooden staircase. The task of creating smart software that has a shot at producing money is neither art nor science. It is a continuous process of seeing what works, fiddling, and fumbling. You want to measure this? Good luck, although blue chip consultants will gladly create a slide deck to show you the ropes and then churn out a spectacular invoice for professional services.

One question: Is GitLab part of the problem or part of the solution?

Stephen E Arnold, January 9, 2025

Why Buzzwords Create Problems. Big Problems, Right, Microsoft?

January 7, 2025

This is an official dinobaby post. No smart software involved in this blog post.

I read an essay by Steven Sinofsky. He worked at Microsoft. You can read about him in Wikipedia because he was a manager possibly associated with Clippy. He wrote an essay called “225. Systems Ideas that Sound Good But Almost Never Work—‘Let’s just…’” The write up is about “engineering patterns that sound good but almost never work as intended.”

I noticed something interesting about his explanation of why many software solutions go off the rails, fail to work, create security opportunities for bad actors associated with entities not too happy with the United States, and cause ongoing headaches for hundreds of millions of people.

Here is a partial list of the words and bound phrases from his essay:

Add an API
Anomaly detection
Asynchronous
Cross platform
DSL
Escape to native
Hybrid parallelism
Multi-master writes
Peer to peer
Pluggable
Sync the data
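To see why an entry like “Sync the data” from that list sounds good but almost never works as intended, here is a deliberately naive, last-write-wins sync of my own invention (not an example from Sinofsky’s essay). The comments mark the complexity the buzzword papers over.

```python
# Deliberately naive "let's just sync the data" sketch (illustrative only).
# Last-write-wins looks fine in a demo and silently loses edits in real life.
from dataclasses import dataclass
from typing import Dict

@dataclass
class Record:
    value: str
    updated_at: float  # seconds since epoch, from *each device's own clock*

def sync(local: Dict[str, Record], remote: Dict[str, Record]) -> Dict[str, Record]:
    merged: Dict[str, Record] = {}
    for key in local.keys() | remote.keys():
        a, b = local.get(key), remote.get(key)
        if a is None or (b is not None and b.updated_at > a.updated_at):
            merged[key] = b   # remote "wins"
        else:
            merged[key] = a   # local "wins"
        # Papered over: clock skew between devices, concurrent edits to the
        # same record, deletions vs. missing keys, partial syncs after a
        # dropped connection -- the parts that make "just sync it" a project.
    return merged

local = {"note1": Record("edited on the laptop", 1000.0)}
remote = {"note1": Record("edited on the phone", 1000.5)}
print(sync(local, remote))  # the laptop edit silently disappears
```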

What struck me about this essay is that it reveals something I think is important about Microsoft and probably other firms tapping the expertise of the author; that is, the jargon drives how the software is implemented.

I am not certain that my statement is accurate for software in general. But for this short blog post, let’s assume that it applies to some software (and I am including Microsoft’s own stellar solutions as well as products from other high profile and wildly successful vendors). With the ground rules established, I want to offer several observations about this “jargon drives the software engineering” assertion.

First, the resulting software is flawed. Problems are not actually resolved. The problems are papered over with whatever the trendy buzzword says will work. The approach makes sense because actual problem solving may not be possible within a given time allocation, or because a working solution may fail, which then requires figuring out how not to fail again.

Second, the terms reveal that marketing think takes precedence over engineering think. Here’s what the jargon creators do. These sales-oriented types grab terms that sound good and refer to an approach. The “team” coalesces around the jargon, and the jargon directs how the software is approached. Does hybrid parallelism “work”? Who knows, but it is the path forward. The manager says, “Let’s go, team,” and Clippy emerges, or the weird opaqueness of the “ribbon” does.

Third, the jargon shaped by art history majors and advertising mavens defines the engineering approach. The more successful the technical jargon, the more likely it is that people who studied Picasso’s colors or Milton’s Paradise Regained define the technical frame in which a “solution” is crafted.

How good is software created in this way? Answer: Good enough.

How reliable is software created in this way? Answer: Who knows until someone like a paying customer actually uses the software.

How secure is the software created in this way? Answer: It is not secure, as the breaches of the Department of the Treasury, the compromises of US telecommunications companies, and the mind-boggling number of security lapses in 2024 prove.

Net net: Engineering solutions based on jargon are not intended to deliver excellence. The approach is simply “good enough.” Now we have some evidence that industry leaders realize the fact. Right, Clippy?

Stephen E Arnold, January 7, 2025
