McKinsey & Co. Emits the Message “You Are No Longer the Best of the Best”

April 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I love blue chip consulting firms’ management tactics. I will not mention the private outfits which go public and then go private. Then the firms’ “best of the best” partners decide to split the firm. Wow. Financial fancy dancing or just evidence that “best of the best” is like those plastic bottles killing off marine life?

I read “McKinsey Is so Eager to Trim Staff That It’s Offering Some Employees 9 Months’ Pay to Go and Do Something Else.” I immediately asked myself, “What does ‘some’ mean?” I am guessing, based on my experience, that “all” of the RIF’ed staff are not getting the same deal. Well, that’s life in the exciting world of the best and the brightest. Some have to accept that there are better blue chippers who are, therefore, able to labor enthusiastically at a company known as the Big Dog of the consulting world.


Thanks MSFT Copilot. (How’s your security today?)

The write up reports as “real” NY news:

McKinsey is attempting to slim the company down in a caring and supporting way by paying its workers to quit.

Hmmm. “Attempting” seems an odd word for a consulting firm focused on results. One slims down, or one remains fat and prone to assorted diseases, if I understood my medical professional. Is McKinsey signaling that its profit margin is slipping like the trust level for certain social media companies? Or is artificial intelligence the next big profit-making thing? If so, let’s clear out the deadwood and harvest the benefits of smart software unencumbered by less smart humans.

Plus, the formerly “best and brightest” will get help writing their résumés. My goodness, imagine a less good Type A super achiever unable to write a résumé. But just yesterday those professionals were able to advise executives, often ones with decades more experience, craft reports with asterisk dot points, and work seven days a week. Now these outstanding professionals need help writing their résumés. This strikes me as paternalistic and a way to sidestep legal action for questionable termination.

Plus, the folks given the chance to find their future elsewhere (as long as the formerly employed wizard conforms to McKinsey’s policies about client poaching) can allegedly use their McKinsey email accounts. What might a person who learns he or she is no longer the best of the best do with a live McKinsey email account? I have a couple of people on my research team who have studied mischief with emails. I assume McKinsey’s leadership knows a lot more than my staff. We don’t pontificate about pharmaceutical surfing; we do give lectures to law enforcement and intelligence professionals. Therefore, my team knows much, much less about email usage than McKinsey management does.

Deloitte, another blue chip outfit, is moving quickly into the AI space. I have heard that it wants to use AI and simultaneously advise its clients about AI. I wonder if Deloitte has considered that smart software might be marginally less expensive than paying some of the “best of the best” to do manual work for clients? I don’t know.

The blue chip outfit at which I worked long ago was a really humane place. Those rumors that an executive drowned a loved one were just rumors. The person was a kind and loving individual with a raised dais in his office. I recall I had to look up at him when seated in front of his desk. Maybe that’s just an AI type hallucination from a dinobaby. I do remember the nurturing approach he took when pointing at a number and demanding of the VP presenting the document, “I want to know where that came from now.” Yes, that blue chip professional was patient and easy going as well.

I noted this passage in the Fortune “real” NY news:

A McKinsey spokesperson told Fortune that its unusual approach to layoffs is all part of the company’s core mission to help people ‘learn and grow into leaders, whether they stay at McKinsey or continue their careers elsewhere.’

I loved the sentence including the “learn and grow into leaders” verbiage. I am imagining a McKinsey HR professional saying, “Remember when we recruited you? We told you that you were among the top one percent of the top one percent. Come on. I know you remember? Oh, you don’t remember my assurances of great pay, travel, wonderful colleagues, tremendous opportunities to learn, and build your interpersonal skills. Well, that’s why you have been fired. But you can use your McKinsey email. Please, leave now. I have billable work to do that you obviously were not able to undertake and complete in a satisfactory manner. Oh, here’s your going away gift. It is a T shirt which says, ‘’

Stephen E Arnold, April 4, 2024

Yeah, Stability at Stability AI: Will Flame Outs Light Up the Bubble?

April 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Inside the $1 Billion Love Affair between Stability AI’s Complicated Founder and Tech Investors Coatue and Lightspeed—And How It Turned Bitter within Months.” Interesting but, from my point of view, not surprising. High school science club members, particularly when they preserve some of their teeny bopper ethos into alleged adulthood, can be interesting people. And at work, exciting may be a suitable word. The write up’s main idea is that the wizard “left home in his pajamas.” Well, that’s a good summary of where Stability AI is.


The high school science club finds itself at odds with a mere school principal. The science club student knows that if the principal were capable, he would not be a mere principal. Thanks, MSFT Copilot. Were your senior managers in a high school science club?

The write up points out that Stability was the progenitor of Stable Diffusion, the art generator. I noticed the psycho-babbly terms stability and stable. Did you? Did the investors? Did the employees? Answer: Hey, there’s money to be made.

I noted this statement in the article:

The collaborative relationship between the investors and the promising startup gradually morphed into something more akin to that of a parent and an unruly child as the extent of internal turmoil and lack of clear direction at Stability became apparent, and even increased as Stability used its funding to expand its ranks.

Yep, high school management methods: “Don’t tell me what to do. I am smarter than you, Mr. Assistant Principal. You need me on the Quick Recall team, so go away,” echo in my mind in an Ezoic AI voice.

The write up continued the tale of mismanagement and adolescent angst, quoting the founder of Stability AI:

“Nobody tells you how hard it is to be a CEO and there are better CEOs than me to scale a business,” Mostaque said. “I am not sure anyone else would have been able to build and grow the research team to build the best and most widely used models out there and I’m very proud of the team there. I look forward to moving onto the next problem to handle and hopefully move the needle.”

I interpreted this as, “I did not know that calcium carbide in the lab sink drain could explode when in contact with water and then ignited, Mr. Principal.”

And, finally, let me point out this statement:

Though Stability AI’s models can still generate images of space unicorns and Lego burgers, music, and videos, the company’s chances of long-term success are nothing like they once appeared. “It’s definitely not gonna make me rich,” the investor says.

Several observations:

  1. Stability may presage the future for other high-flying and low-performing AI outfits. Why? Because teen management skills are problematic in a so-so economic environment.
  2. AI is everywhere, and its value now derives from having something that solves a problem people will pay to have ameliorated. Shiny stuff fresh from the lab won’t make stakeholders happy.
  3. Discipline, particularly among high school science club members, may not be what a dinobaby like me would call rigorous. Sloppiness produces a mess and lost opportunities.

Net net: Ask about a potential employer’s high school science club memories.

Stephen E Arnold, April 4, 2024

Angling to Land the Big Google Fish: A Humblebrag Quest to Be CEO?

April 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My goodness, the staff and alums of DeepMind have been in the news. Wherever there are big bucks or big buzz opportunities, one will find the DeepMind marketing machinery. Consider “Can Demis Hassabis Save Google?” The headline has two messages for me. The first is that a “real” journalist thinks that Google is in big trouble. Big trouble translates to stakeholder discontent. That discontent means it is time to roll in a new Top Dog. I love poohbahing, but opining that the Google is in trouble is just that: an opinion. Sure, Google was aced by the Microsoft-OpenAI play not too long ago. But the Softies have moved forward with the Mistral deal and the mysterious Inflection deal. And the Google still has money, market share, and might. Jake Paul can say he wants the Mike Tyson death stare. But that’s an opinion until Mr. Tyson hits Mr. Paul in the face.

The second message in the headline is that one of the DeepMind tribe can take over Google, defeat Microsoft, generate new revenues, avoid regulatory purgatory, and dodge the pain of Google’s swinging-door approach to online advertising revenue generation; that is, people pay to get in, people pay to get out, and soon they will have to subscribe to watch those entering and exiting the company’s advertising machine.


Thanks, MSFT Copilot. Nice fish.

What are the points of the essay which caught my attention other than the headline for those clued in to the Silicon Valley approach to “real” news? Let me highlight a few points.

First, here’s a quote from the write up:

Late on chatbots, rife with naming confusion, and with an embarrassing image generation fiasco just in the rearview mirror, the path forward won’t be simple. But Hassabis has a chance to fix it. To those who know him, have worked alongside him, and still do — all of whom I’ve spoken with for this story — Hassabis just might be the perfect person for the job. “We’re very good at inventing new breakthroughs,” Hassabis tells me. “I think we’ll be the ones at the forefront of doing that again in the future.”

Is the past a predictor of future success? More than lab-to-Android is going to be required. And the claim to be “good at inventing new breakthroughs” is an assertion. Google has been in the me-too business for a long time. The company sees itself as a modern Bell Labs and PARC. I think that the company’s perception of itself, its culture, and the comments of its senior executives suggest that the derivative nature of Google is neither remembered nor considered. It’s just “we’re very good.” Sure “we” are.

Second, I noted this statement:

Ironically, a breakthrough within Google — called the transformer model — led to the real leap. OpenAI used transformers to build its GPT models, which eventually powered ChatGPT. Its generative ‘large language’ models employed a form of training called “self-supervised learning,” focused on predicting patterns, and not understanding their environments, as AlphaGo did. OpenAI’s generative models were clueless about the physical world they inhabited, making them a dubious path toward human level intelligence, but would still become extremely powerful. Within DeepMind, generative models weren’t taken seriously enough, according to those  inside, perhaps because they didn’t align with Hassabis’s AGI priority, and weren’t close to reinforcement learning. Whatever the rationale, DeepMind fell behind in a key area.

Google figured something out and then did nothing with the “insight.” There were research papers and chatter. But OpenAI (powered in part by Sam AI-Man) took the Google invention and used it to carpet bomb, mine, and set on fire Google’s presumed lead in anything related to search, retrieval, and smart software. The aftermath of the Microsoft OpenAI PR coup is a continuing story of rehabilitation. From what I have seen, Google needs more time getting its ageing body parts working again. The ad machine produces money, but the company reels from management issue to management issue with alarming frequency. Biased models complement spats with employees. Silicon Valley chutzpah causes neurological spasms among US and EU regulators. Something is broken, and I am not sure a person from inside the company has the perspective, knowledge, and management skills to fix an increasingly peculiar outfit. (Yes, I am thinking of the ethnically incorrect German soldiers loyal to a certain entity on Google’s list of questionable words and phrases.)

And, lastly, let’s look at this statement in the essay:

Many of those who know Hassabis pine for him to become the next CEO, saying so in their conversations with me. But they may have to hold their breath. “I haven’t heard that myself,” Hassabis says after I bring up the CEO talk. He instantly points to how busy he is with research, how much invention is just ahead, and how much he wants to be part of it. Perhaps, given the stakes, that’s right where Google needs him. “I can do management,” he says, ”but it’s not my passion. Put it that way. I always try to optimize for the research and the science.”

I wonder why the author of the essay does not query Jeff Dean, the former head of a big AI unit in Mother Google’s inner sanctum, about Mr. Hassabis. How about querying Mr. Hassabis’ co-founder of DeepMind about Mr. Hassabis’ temperament and decision-making method? What about chasing down former employees of DeepMind and getting those wizards’ perspectives on what DeepMind can and cannot accomplish?

Net net: Somewhere in the little-understood universe of big technology, there is an invisible hand pointing at DeepMind and making sure the company appears in scientific publications, the trade press, peer reviewed journals, and LinkedIn funded content. Determining what’s self-delusion, fact, and PR wordsmithing is quite difficult.

Google may need some help. To be frank, I am not sure anyone in the Google starting line up can do the job. I am also not certain that a blue chip consulting firm can do much either. Google, after a quarter century of zero effective regulation, has become larger than most government agencies. Its institutional mythos creates dozens of delusional Ulysses who cannot separate fantasies of the lotus eaters from the gritty reality of the company as one of the contributors to the problems facing youth, smaller businesses, governments, and cultural norms.

Google is Googley. It will resist change.

Stephen E Arnold, April 3, 2024

AI and Stupid Users: A Glimpse of What Is to Come

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

When smart software does not deliver, who is responsible? I don’t have a dog in the AI fight. I am thinking about deployment of smart software in professional environments. When the outputs are wonky or do not deliver the bang of a competing system, what is the customer supposed to do? Is the vendor responsible? Is the customer responsible? Is the person who tried to validate the outputs guilty of putting a finger on the scale of a system whose developers cannot explain exactly how an output was determined? Viewed from one angle, this is the Achilles’ heel of artificial intelligence. Viewed from another angle, determining responsibility is an issue which, in my opinion, will be decided by legal processes. In the meantime, a system’s not working can have significant consequences. How about those automated systems on aircraft which dive suddenly or vessels which can jam a ship channel?

I read a write up which provides a peek at what large outfits pushing smart software will do when challenged about quality, accuracy, or other subjective factors related to AI-imbued systems. Let’s take a quick look at “Customers Complain That Copilot Isn’t As Good as ChatGPT, Microsoft Blames Misunderstanding and Misuse.”

The main idea in the write up strikes me as:

Microsoft is doing absolutely everything it can to force people into using its Copilot AI tools, whether they want to or not. According to a new report, several customers have reported a problem: it doesn’t perform as well as ChatGPT. But Microsoft believes the issue lies with people who aren’t using Copilot correctly or don’t understand the differences between the two products.

Yep, the user is the problem. I can imagine the adjudicator (illustrated as a mother) listening to a large company’s sales professional and a professional certified developer arguing about how the customer went off the rails. Is the original programmer the problem? Is the new manager in charge of AI responsible? Is it the user or users?


Illustration by MSFT Copilot. Good enough, MSFT.

The write up continues:

One complaint that has repeatedly been raised by customers is that Copilot doesn’t compare to ChatGPT. Microsoft says this is because customers don’t understand the differences between the two products: Copilot for Microsoft 365 is built on the Azure OpenAI model, combining OpenAI’s large language models with user data in the Microsoft Graph and the Microsoft 365 apps. Microsoft says this means its tools have more restrictions than ChatGPT, including only temporarily accessing internal data before deleting it after each query.

Here’s another snippet from the cited article:

In addition to blaming customers’ apparent ignorance, Microsoft employees say many users are just bad at writing prompts. “If you don’t ask the right question, it will still do its best to give you the right answer and it can assume things,” one worker said. “It’s a copilot, not an autopilot. You have to work with it,” they added, which sounds like a slogan Microsoft should adopt in its marketing for Copilot. The employee added that Microsoft has hired partner BrainStorm, which offers training for Microsoft 365, to help create instructional videos to help customers create better Copilot prompts.

I will be interested in watching how these “blame games” unfold.

Stephen E Arnold, March 29, 2024

The Many Faces of Zuckbook

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

As evidenced by his business decisions, Mark Zuckerberg seems to be a complicated fellow. A couple of recent articles illustrate the contrast: on one hand is his commitment to support open source software, an apparently benevolent position. On the other, Meta is once again in the crosshairs of EU privacy advocates for what they insist is its disregard for the law.

First, we turn to a section of VentureBeat’s piece, “Inside Meta’s AI Strategy: Zuckerberg Stresses Compute, Open Source, and Training Data.” In it, reporter Sharon Goldman shares highlights from Meta’s Q4 2023 earnings call. She emphasizes Zuckerberg’s continued commitment to open source software, specifically AI software Llama 3 and PyTorch. He touts these products as keys to “innovation across the industry.” Sounds great. But he also states:

“Efficiency improvements and lowering the compute costs also benefit everyone including us. Second, open source software often becomes an industry standard, and when companies standardize on building with our stack, that then becomes easier to integrate new innovations into our products.”

Ah, there it is.

Our next item was apparently meant to be sneaky, but who did Meta think it was fooling? The Register reports, “Meta’s Pay-or-Consent Model Hides ‘Massive Illegal Data Processing Ops’: Lawsuit.” Meta is attempting to “comply” with the EU’s privacy regulations by making users pay to opt in to them. That is not what regulators had in mind. We learn:

“Those of us with aunties on FB or friends on Instagram were asked to say yes to data processing for the purpose of advertising – to ‘choose to continue to use Facebook and Instagram with ads’ – or to pay up for a ‘subscription service with no ads on Facebook and Instagram.’ Meta, of course, made the changes in an attempt to comply with EU law. But privacy rights folks weren’t happy about it from the get-go, with privacy advocacy group noyb (None Of Your Business), for example, sarcastically claiming Meta was proposing you pay it in order to enjoy your fundamental rights under EU law. The group already challenged Meta’s move in November, arguing EU law requires consent for data processing to be given freely, rather than to be offered as an alternative to a fee. Noyb also filed a lawsuit in January this year in which it objected to the inability of users to ‘freely’ withdraw data processing consent they’d already given to Facebook or Instagram.”

And now eight European Consumer Organization (BEUC) members have filed new complaints, insisting Meta’s pay-or-consent tactic violates the European General Data Protection Regulation (GDPR). While that may seem obvious to some, Meta insists it is in compliance with the law. Because of course it does.

Cynthia Murrell, March 29, 2024

My Way or the Highway, Humanoid

March 28, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Curious how “nice” people achieve success? “Playground Bullies Do Prosper – And Go On to Earn More in Middle Age” may have an answer. The write up says:

Children who displayed aggressive behavior at school, such as bullying or temper outbursts, are likely to earn more money in middle age, according to a five-decade study that upends the maxim that bullies do not prosper.

If you want a tip for career success, I would interpret the write up’s information as advice to start when young. Also, start small. The Jake Paul approach to making news is to fight the ageing Mike Tyson. Is that for you? I know I would not start small by irritating someone who walks with a cane. But, to each his or her own. If there is a small child selling Girl Scout Cookies, one might sharpen his or her leadership skills by knocking the cookie box to the ground and stomping on it. The modest demonstration of power can then be followed with the statement, “Those cookies contain harmful substances. You should be ashamed.” Then, as your skills become more fluid and automatic, move up. I suggest testing one’s bullying expertise on a local branch of a street gang involved in possibly illegal activities.


Thanks MSFT Copilot. I wonder if you used sophisticated techniques when explaining to OpenAI that you were hedging your bets.

The write up quotes an expert as saying:

“We found that those children who teachers felt had problems with attention, peer relationships and emotional instability did end up earning less in the future, as we expected, but we were surprised to find a strong link between aggressive behavior at school and higher earnings in later life,” said Prof Emilia Del Bono, one of the study’s authors.

A bully might respond to this professor and say, “What are you going to do about it?” One response is, “You will earn more, young student.” The write up reports:

Many successful people have had problems of various kinds at school, from Winston Churchill, who was taken out of his primary school, to those who were expelled or suspended.

Will nice guys who are not bullies become the leaders of the post Covid world? The article quotes another expert as saying:

“We’re also seeing a generational shift where younger generations expect to have a culture of belonging and being treated with fairness, respect and kindness.”

Sounds promising. Has anyone told the companies terminating thousands of workers? What about outfits like IBM which are dumping humans for smart software? Yep, progress just like that made at Google in the last couple of years.

Stephen E Arnold, March 28, 2024

A Single, Glittering Google Gem for 27 March 2024

March 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

So many choices. But one gem outshines the others. Google’s search generative experience is generating publicity. The old chestnut may be true. Any publicity is good publicity. I would add a footnote. Any publicity about Google’s flawed smart software is probably good for Microsoft and other AI competitors. Google definitely looks as though it has some behaviors that are — how shall I phrase it? — questionable. No, maybe, ill-considered. No, let’s go with bungling. That word has a nice ring to it. Bungling.


I learned about this gem in “Google’s New AI Search Results Promotes Sites Pushing Malware, Scams.” The write up asserts:

Google’s new AI-powered ‘Search Generative Experience’ algorithms recommend scam sites that redirect visitors to unwanted Chrome extensions, fake iPhone giveaways, browser spam subscriptions, and tech support scams.

The technique which gets the user from the quantumly supreme Google to the bad actor goodies is the redirect. Some sites use notification functions to pump even more inducements toward the befuddled user. (See, bungling and befuddled. Alliteration.)
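For readers curious about the mechanics, a redirect chain can be modeled in a few lines of code. This is a minimal sketch with made-up URLs; it is not the actual scam infrastructure the article describes, and real chains run through live HTTP 3xx responses and JavaScript rather than a dictionary:

```python
# Minimal model of a redirect chain: each hop maps to the next URL.
# All URLs below are invented for illustration only.

def follow_redirects(start_url, redirect_map, max_hops=10):
    """Walk the chain until a URL has no further redirect (the landing page)."""
    chain = [start_url]
    while chain[-1] in redirect_map and len(chain) < max_hops:
        chain.append(redirect_map[chain[-1]])
    return chain

# A toy chain: spammy search result -> click tracker -> fake giveaway page.
toy_redirects = {
    "https://spam-blog.example/post": "https://tracker.example/click?id=123",
    "https://tracker.example/click?id=123": "https://fake-giveaway.example/iphone",
}
print(follow_redirects("https://spam-blog.example/post", toy_redirects))
```

The user starts at what looks like a search result and, two or three silent hops later, lands somewhere else entirely. That is the whole trick.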

Why do users fall for these bad actor gift traps? It seems that Google SGE conversational recommendations sound so darned wonderful that Google users just believe the GOOG cares about the information it presents to those who “trust” the company.

The write up points out that the DeepMinded Google provided this information about the bumbling SGE:

"We continue to update our advanced spam-fighting systems to keep spam out of Search, and we utilize these anti-spam protections to safeguard SGE," Google told BleepingComputer. "We’ve taken action under our policies to remove the examples shared, which were showing up for uncommon queries."

Isn’t that reassuring? I wonder if the anecdote about this most recent demonstration of the Google’s wizardry will become part of the Sundar & Prabhakar Comedy Act?

This is a gem. It combines Google’s management process, word salad frippery, and smart software into one delightful bouquet. There you have it: Bungling, befuddled, bumbling, and bouquet. I am adding blundering. I do like butterfingered, however.

Stephen E Arnold, March 27, 2024

Xoogler Predicts the Future: China Bad, Xoogler Good

March 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Did you know that China, when viewed from the vantage point of a former Google executive, is bad? That is a stunning comment. Google tried valiantly to convert China into a money stream. That worked until it didn’t. Now a former Googler (a Xoogler in some circles) has changed his tune.


Thanks, MSFT Copilot. Working on security I presume?

“Eric Schmidt’s China Alarm” includes some interesting observations, none of which address Google’s attempt to build a China-acceptable search engine. Oh, well, anyone can forget minor initiatives like that. Let’s look at a couple of comments from the article:

How about this comment about responding to China:

"We have to do whatever it takes."

I wonder if Mr. Schmidt has been watching Dr. Strangelove on YouTube. Someone might pull that viewing history to clarify “whatever it takes.”

Another comment I found interesting is:

China has already become a peer of the U.S. and has a clear plan for how it wants to dominate critical fields, from semiconductors to AI, and clean energy to biotech.

That’s interesting. My thought is that the “clear plan” seems to embrace education; that is, producing more engineers than some other countries, leveraging open source technology, and erecting interesting barriers to prevent US companies from selling some products in the Middle Kingdom. How long has this “clear plan” been chugging along? I spotted portions of the plan in Wuhan in 2007. But I guess now it’s a more significant issue after decades of being front and center.

I noted this comment about artificial intelligence:

Schmidt also said Europe’s proposals on regulating artificial intelligence "need to be re-done," and in general says he is opposed to regulating AI and other advances to solve problems that have yet to appear.

The idea is an interesting one. The UN and numerous NGOs and governmental entities around the world are trying to regulate, tame, direct, or ameliorate the impact of smart software. How’s that going? My answer is, “Nowhere fast.”

The article makes clear that Mr. Schmidt is not just a Xoogler; he is a global statesperson. But in the back of my mind, once a Googler, always a Googler.

Stephen E Arnold, March 26, 2024

AI Job Lawnmowers: Will Your Blooms Be Chopped Off and Put a Rat King in Your Future?

March 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I love “you will lose your job to AI” articles. I spotted an interesting one titled “The Job Sectors That Will Be Most Disrupted By AI, Ranked.” This is not so much an article as a billboard for an outfit named Voronoi, “where data tells the story.” That’s interesting because there is no data, no methodology, and no indication of the confidence level for each “nuked job.” Nevertheless, we have a ranking.


Thanks, MSFT Copilot. Will you be sparking human rat kings? I would wager that you will.

As I understand the analysis of 19,000 tasks, here are the jobs most likely to be chopped down and converted to AI silage:

  • IT / programmers: 73 percent of the job will experience a large impact
  • Finance / bean counters: 70 percent of the job will experience a large impact
  • Customer sales: 67 percent of the job will experience a large impact
  • Operations (well, that’s a fuzzy category, isn’t it?): 65 percent of the job will experience a large impact
  • Personnel / HR: 57 percent of the job will experience a large impact
  • Marketing: 56 percent of the job will experience a large impact
  • Legal eagles: 46 percent of the job will experience a large impact
  • Supply chain (another fuzzy wuzzy bucket): 43 percent of the job will experience a large impact

The kicker in the data is that the numbers date from September 2023. Six months in the faerie land of smart software is a long, long time. Let’s assume that the data meet 2024’s gold standard.

Technology, finance, sales, marketing, and lawyering may shatter the future of employees of less value in terms of compensation, cost to the organization, or whatever management legerdemain the top dogs and their consultants whip up. Imagine: eliminating the overhead for humans, like office space, health care, retirement baloney, and vacations, makes smart software into an attractive “play.”

And what about the fuzzy buckets? My thought is that many people will be trimmed because a chatbot can close a sale for a product without the hassle which humans drag into the office; for example, sexual harassment, mental, drug, and alcohol “issues,” and the unfortunate workplace shooting. I think that a person sitting in a field office to troubleshoot issues related to a state or county contract might fall into the “operations” category even though the employee sees the job as something smart software cannot perform. Ho ho ho.

Several observations:

  • A trivial cost analysis of human versus software over a five-year period means humans lose.
  • AI systems, which may suck initially, will improve over time. Early failures may lull the once-alert employee into a false sense of security about being replaced.
  • Once displaced, former employees will have to scramble to produce cash. With lots of individuals chasing available work and money plays, life is unlikely to revert to the good old days of the Organization Man. (The world will be Organization AI. No suit and white shirt required.)
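The back-of-the-envelope comparison in the first point can be sketched in code. Every figure below (salary, overhead rate, raise, setup and license costs) is a made-up assumption for illustration, not data from the article or any study:

```python
# Hypothetical five-year cost comparison: human employee vs. AI software.
# All numbers are illustrative assumptions, not real data.

def human_cost(years, salary=90_000, overhead_rate=0.35, annual_raise=0.03):
    """Salary plus overhead (office space, health care, retirement), with raises."""
    total = 0.0
    for year in range(years):
        pay = salary * (1 + annual_raise) ** year
        total += pay * (1 + overhead_rate)
    return total

def software_cost(years, setup=150_000, license_per_year=40_000):
    """One-time integration cost plus recurring license and compute fees."""
    return setup + license_per_year * years

five_year_human = human_cost(5)
five_year_software = software_cost(5)
print(f"Human:    ${five_year_human:,.0f}")
print(f"Software: ${five_year_software:,.0f}")
print("Humans lose" if five_year_human > five_year_software else "Humans win")
```

With these toy numbers the human costs roughly twice what the software does over five years, which is the whole argument a bean counter needs. Change the assumptions and the gap moves, but the shape of the comparison does not.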

Net net: I am glad I am old and not quite as enthralled by efficiency.

Stephen E Arnold, March 25, 2024

Software Failure: Why Problems Abound and Multiply Like Gerbils

March 19, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read “Why Software Projects Fail” after a lunch at which crappy software and lousy products were a source of amusement. The door fell off what?

What’s interesting about the article is that it contains a number of statements which resonated with me. I recommend the article, but I want to highlight several statements from the essay. These do a good job of explaining why small and large projects go off the rails. Within the last 12 months I witnessed one project get tangled in solving a problem that existed 15 years ago. Today, not so much. The team crafted the equivalent of a Greek Corinthian helmet from the 8th century BCE. Another project, infused with AI and a vision of providing a “new” approach to security, wobbled between and among a telecommunications approach, an email approach, and an SMS approach with bells and whistles only a science fiction fan would appreciate. Both of these examples obtained funding; neither set out to build a clown car. What happened? That’s where “Why Software Projects Fail” becomes relevant.


Thanks, MSFT Copilot. You have that MVP idea nailed with the recent Windows 11 update, don’t you? Good enough, I suppose.

Let’s look at three passages from the essay, shall we?

Belief in One’s Abilities or I Got an Also-Participated Ribbon in Middle School

Here’s the statement from the essay:

One of the things that I’ve noticed is that developers often underestimate not just the complexity of tasks, but there’s a general overconfidence in their abilities, not limited by programming:

  1. Overconfidence in their coding skills.
  2. Overconfidence in learning new technologies.
  3. Overconfidence in our abstractions.
  4. Overconfidence in external dependencies, e.g., third-party services or some open-source library.

My comment: Spot on. Those ribbons built confidence, but they mean nothing.

Open Source Is Great Unless It Has Been Screwed Up, Become a Malware Delivery Vehicle, or Just Does Not Work

Here’s the statement from the essay:

… anything you do not directly control is a risk of hidden complexity. The assumption that third-party services, libraries, packages, or APIs will work as expected without bugs is a common oversight.

My view is that “complexity” is kicked around as if everyone held a shared understanding of the term. There are quite different types of complexity. For software, there is the complexity of a simple process created in Assembler but essentially impenetrable to a 20-something from a whiz-bang computer science school. There is the complexity of software built over time by attention-deficit-driven people who do not communicate, coordinate, or care what others are doing, will do, or have done. Toss in the complexity of indifferent, uninformed, or uninterested “management,” and you get an exciting environment in which to “fix up” software. The cherry on top of this confection is that quite a bit of software is assumed to be good. Ho ho ho.

The Real World: It Exists and Permeates

I liked this statement:

Technology that seemed straightforward refuses to cooperate, external competitors launch similar ideas, key partners back out, and internal business stakeholders focus more on the projects that include AI in their name. Things slow down, and as months turn into years, enthusiasm wanes. Then the snowball continues — key members leave, and new people join, each departure a slight shift in direction. New tech lead steps in, eager to leave their mark, steering the project further from its original course. At this point, nobody knows where the project is headed, and nobody wants to admit the project has failed. It’s a tough spot, especially when everyone’s playing it safe, avoiding the embarrassment or penalties of admitting failure.

What are the signals that trouble looms? A fumbled ball at Google or the Apple car that isn’t can be blinking lights. Staff who go rogue on social media or find an ambulance-chasing law firm can catch some individual’s attention.

The write up contains other helpful observations. Will people take heed? Are you kidding me? Excellence costs money and requires informed judgment and expertise. Who has time for this with AI calendars, the demands of TikTok and Instagram, and hitting the local coffee shop?

Stephen E Arnold, March 19, 2024
