Gee, Will the Gartner Group Consultants Require Upskilling?

October 16, 2024

The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

I have a steady stream of baloney crossing my screen each day. I want to call attention to one of the most remarkable and unsupported statements I have seen in months. The PR document “Gartner Says Generative AI Will Require 80% of Engineering Workforce to Upskill Through 2027” contains a number of remarkable statements. Let’s look at a couple.


How an allegedly big time consultant is received in a secure artificial intelligence laboratory. Thanks, MSFT Copilot, good enough.

How about this one?

Through 2027, generative AI (GenAI) will spawn new roles in software engineering and operations, requiring 80% of the engineering workforce to upskill, according to Gartner, Inc.

My thought is that the virtual band of wizards who comprise Gartner cook up data the way I microwave a burrito when I am hungry. Pick a common number like the 80-20 Pareto figure. It is familiar, so just use it. Personally, I was disappointed that Gartner did not use 67 percent, but that’s just an old former blue chip consultant pointing out that round numbers are inherently suspicious. But does Gartner care? My hunch is that whoever reviewed the news release was happy with 80 percent. Did anyone question this number? Obviously not: There are zero supporting data, no information about how the figure was derived, and no hint of the methodology used by the incredible Gartner wizards. That’s a clue that these are microwaved burritos from a bulk purchase discount grocery.

How about this statement which cites a … wait for it … Gartner wizard as the source of the information?

“In the AI-native era, software engineers will adopt an ‘AI-first’ mindset, where they primarily focus on steering AI agents toward the most relevant context and constraints for a given task,” said Walsh. This will make natural-language prompt engineering and retrieval-augmented generation (RAG) skills essential for software engineers.

I love the phrase “AI native,” and I think dubbing as an era the period beginning in January 2023, when Microsoft demonstrated its marketing acumen by announcing its semi-tie up with OpenAI, is a stretch. Exactly which “engineer” do the code generation systems help? One has to know quite a bit to craft a query, examine the outputs, and do any touch ups to get the code working as marketed. The notion of “steering” ignores what may be an AI problem no one at Gartner has considered; for example, emergent patterns in the generated code. This means, “Surprise.” My hunch is that the idea of multi-layered neural networks behaving in a way that produces hitherto unnoticed patterns is of little interest to Gartner. That outfit wants to sell consulting work, not noodle about the notion of emergence from a biased suite of computations. Steering is good for those who know what’s cooking and have a seat at the table in the kitchen. Is Gartner given access to the oven, the fridge, and the utensils? Nope.

Finally, how about this statement?

According to a Gartner survey conducted in the fourth quarter of 2023 among 300 U.S. and U.K. organizations, 56% of software engineering leaders rated AI/machine learning (ML) engineer as the most in-demand role for 2024, and they rated applying AI/ML to applications as the biggest skills gap.

Okay, this is late 2024 (October to be exact). The study data are a year old. So far the outputs of smart coding systems remain a work in progress. In fact, Dr. Sabine Hossenfelder has a short video which explains why the smart AI programmer in a box may be more disappointing than other hyperbole artists claim. If you want Dr. Hossenfelder’s view, click here. In a nutshell, she explains in a very nice way the giant bologna slice plopped on many diners’ plates. The study Dr. Hossenfelder cites suggests that productivity boosts are another slice of bologna. The 41 percent increase in bugs provides a hint of the problems the good doctor notes.

Net net: I wish the cited article WERE generated by smart software. What makes me nervous is that I think real, live humans cooked up something similar to a boiled shoe. Let me ask a more significant question. Will Gartner experts require upskilling for the new world of smart software? The answer is, “Yes.” Even today’s sketchy AI outputs information that is often more believable than this Gartner 80 percent confection.

Stephen E Arnold, October 16, 2024

When Accountants Do AI: Do The Numbers Add Up?

October 9, 2024

This blog post did not require the use of smart software, just a dumb humanoid.

I will not suggest that Accenture has moved far, far away from its accounting roots. The firm is a go-to, hip and zip services firm. I think this means it rents people to do work entities cannot do themselves or do not want to do themselves. When a project goes off the rails the way the British postal service’s did, entities need someone to blame and — sometimes, just sometimes mind you — to sue.


The carnival barker, who has an MBA and a literature degree from an Ivy League school, can do AI for you. Thanks, MSFT, good enough like your spelling.

“Accenture To Train 30,000 Staff On Nvidia AI Tech In Blockbuster Deal” strikes me as a Variety-type Hollywood story. There is the word “blockbuster.” There is a big number: 30,000. There is the star: Nvidia. And there is the really big word: Deal. Yes, deal. I thought accountants were conservative, measured, low profile. Nope. Accenture apparently has gone full scale carnival culture. (Yes, this is an intentional reference to the book by James B. Twitchell. Note that this YouTube video asserts that it can train you in 80 percent of AI in less than 10 minutes.)

The article explains:

The global services powerhouse says its newly formed Nvidia Business Group will focus on driving enterprise adoption of what it called ‘agentic AI systems’ by taking advantage of key Nvidia software platforms that fuel consumption of GPU-accelerated data centers.

I love the word “agentic.” It is the digital equivalent of a Hula Hoop. (Remember. I am an 80 year old dinobaby. I understand Hula Hoops.)

The write up adds this quote from the Accenture top dog:

Julie Sweet, chair and CEO of Accenture, said the company is “breaking significant new ground” and helping clients use generative AI as a catalyst for reinvention. “Accenture AI Refinery will create opportunities for companies to reimagine their processes and operations, discover new ways of working, and scale AI solutions across the enterprise to help drive continuous change and create value,” she said in a statement.

The write up quotes Accenture Chief AI Officer Lan Guan as saying:

The power of these announcements cannot be overstated. Called the “next frontier” of generative AI, these “agentic AI systems” involve an “army of AI agents” that work alongside human workers to “make decisions and execute with precision across even the most complex workflows,” according to Guan, a 21-year Accenture veteran. Unlike chatbots such as ChatGPT, these agents do not require prompts from humans, and they are not meant to automate pre-existing business steps.

I am interested in this announcement for three reasons.

First, other “services” firms will have to get in gear, hook up with an AI chip and software outfit, and pray fervently that their tie ups actually deliver, so a client does not head to court because the “agentic” future just failed.

Second, there is the notion that 30,000 people have to be trained to do something with smart software. This idea strikes me as underscoring that smart software is not ready for prime time; that is, delivering on the promises which started gushing with Microsoft’s January 2023 PR play with OpenAI is complicated. Is Accenture saying it has hired people who cannot work with smart software? Are those 30,000 professionals going to be equally capable of “learning” AI and making it deliver value? When I lecture about a tricky topic with technology and mathematics under the hood, I am not sure 100 percent of my select audiences have what it takes to convert information into a tool usable in a demanding, work related situation. Just saying: Intelligence even among the elite is not uniform. By definition, some “weaknesses” will exist within the Accenture vision for its 30,000 eager learners.

Third, Nvidia has done a great sales job. A chip and software company has convinced the denizens of Accenture’s Carpetland, as CRN (once Computer Reseller News) reports, to get an Nvidia tattoo and embrace the Nvidia future. I would love to see the PowerPoint deck for the meeting that sealed the deal.

Net net: Accountants are more Hollywood than I assumed. Now I know. They are “agentic.”

Stephen E Arnold, October 9, 2024

US Government Procurement: Long Live Silos

September 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Defense AI Models A Risk to Life Alleges Spurned Tech Firm.” Frankly, the headline made little sense to me, so I worked through what is a story about a contractor who believes it was shafted by a large consulting firm. In my experience, the situation is neither unusual nor particularly newsworthy. The write up does a reasonable job of presenting a story which could have been titled “Naive Start Up Smoked by Big Consulting Firm.” A small high technology contractor with smart software hooks up with a project in the Department of Defense. The high tech outfit is not able to meet the requirements to get the job. The little AI high tech outfit scouts around and brings in a big consulting firm to get the deal done. After some bureaucratic cycles, the small high tech outfit is benched. If you are not familiar with how US government contracting works, the write up provides some insight.


The work product of AI projects will be digital silos. That is the key message of this procurement story. I don’t feel sorry for the smaller company. It did not prepare itself to deal with the big time government contractor. Outfits are big for a reason. They exploit opportunities and rarely emulate Mother Teresa-type behavior. Thanks, MSFT Copilot. Good enough illustration although the robots look stupid.

For me, the article is a stellar example of how information silos, or AI silos, are created within the US government. Smart software is hot right now. Each agency, each department, and each unit wants to deploy an AI enabled service. Then that AI infused service becomes (one hopes) an afterburner for more money with which one can add headcount and more AI technology. AI is a rare opportunity to become recognized as a high-performance operator.

As a result, each AI service is constructed within a silo. Think about a structure designed to hold that specific service. The design is purpose built to keep rats and other vermin from benefiting from the goodies within the AI silo. Despite the talk about breaking down information silos, silos in a high profile, high potential technical area like artificial intelligence are the principal product of each agency, each department, and each unit. The payoff could be a promotion which might result in a cushy job in the commercial AI sector or a golden ring; that is, the senior executive service.

I understand the frustration of the small, high tech AI outfit. It knows it has been played by the big consulting firm and the procurement process. But, hey, there is a reason the big consulting firm generates billions of dollars in government contracts. The smaller outfit failed to lock down its role, retain the key to the know how it developed, and allowed its “must have” cachet to slip away.

Welcome, AI company, to the world of the big time Beltway Bandit. Were you expecting the big time consulting firm to do what you wanted? Did you enter the deal with a lack of knowledge, management sophistication, and a couple of false assumptions? And what about the notion of “algorithmic warfare”? Yeah, autonomous weapons systems are the future. Furthermore, when autonomous systems are deployed, the only way they can be neutralized is to use more capable autonomous weapons. Does this sound like a replay of the logic of Cold War thinking and everyone’s favorite bedtime read On Thermonuclear War, still available on Amazon and, as of September 6, 2024, on the Internet Archive at this link?

Several observations are warranted:

  1. Small outfits need to be informed about how big consulting companies with billions in government contracts work the system before exchanging substantive information
  2. The US government procurement processes are slow to change, and the Federal Acquisition Regulations and related government documents provide the rules of the road. Learn them before getting too excited about a request for a proposal or Federal Register announcement
  3. In a fight with a big time government contractor, make sure you bring money, not a chip on your shoulder, to the meeting with attorneys. The entity with the most money typically wins because legal fees are more likely to kill a smaller firm than any judicial or tribunal ruling.

Net net: Silos are inherent in the work process of any government even those run by different rules. But what about the small AI firm’s loss of the contract? Happens so often, I view it as a normal part of the success workflow. Winners and losers are inevitable. Be smarter to avoid losing.

Stephen E Arnold, September 12, 2024

Consulting Tips: How to Guide Group Thinking

August 27, 2024

One of the mysteries of big time consulting is answering the question, “Why do these guys seem so smart?” One trick is to have a little knowledge valise stuffed with thinking and questioning tricks. One example is the Boston Consulting Group dog, star, cash cow, and question mark matrix. If you remember the “I like Ike” buttons, you may know that the General used this approach to keep some frisky reports mostly in line during meetings.

Are there other knowledge tools or thinking frameworks? The answer is, “Sure.” When someone asks you to name six, can you deliver a prompt, concise answer? The answer, in my 50 plus years of professional services work, is, “Not a chance.”

The good news is that you can locate frameworks, get some tips on how to use these to knock the socks off those in a group, and become a walking, talking Blue Chip Consultant without the pain and expense of a fancy university, hours of drudgery, or enduring scathing comments from more experienced peers.

Navigate to “Tools for Better Thinking.” The link, one hopes, displays the names of thinking frameworks in boxes. Click a box, and you get a description and a how-to about the tool.

I think the site is quite good, and it may help some people sell consulting work in certain situations.

Worth a look.

Stephen E Arnold, August 27, 2024

MBAs Gone Wild: Assertions, Animation & Antics

August 5, 2024

Author’s note: Poor WordPress in the Safari browser is having a very bad day. Quotes from the cited McKinsey document appear against a weird blue background. My cheerful little dinosaur disappeared. And I could not figure out how to claim that AI did not help me with this essay. Just a heads up.

Holed up in rural Illinois, I had time to read the mid-July McKinsey & Company document “McKinsey Technology Trends Outlook 2024.” Imagine a group of well-groomed, top-flight, smooth talking “experts” with degrees from fancy schools filming one of those MBA group brainstorming sessions. Take the transcript, add motion graphics, and give audio sweetening to hot buzzwords. I think this would go viral among would-be consultants, clients facing the cloud of unknowing about the future, and those who manifest the Peter Principle. Viral winner! From my point of view, smart software is going to be integrated into most technologies and is, therefore, the trend. People may lose money, but applied AI is going to be with most companies for a long, long time.

The report boils down the current business climate to a few factors. Yes, when faced with exceptionally complex problems, boil those suckers down. Render them so only the tasty sales part remains. Thus, today’s business challenges become:

Generative AI (gen AI) has been a standout trend since 2022, with the extraordinary uptick in interest and investment in this technology unlocking innovative possibilities across interconnected trends such as robotics and immersive reality. While the macroeconomic environment with elevated interest rates has affected equity capital investment and hiring, underlying indicators—including optimism, innovation, and longer-term talent needs—reflect a positive long-term trajectory in the 15 technology trends we analyzed.

The data for the report come from inputs from about 100 people, not counting the people who converted the inputs into the live-action report. Move your mouse from one of the 15 “trends” to another. You will see the graphic display colored balls of different sizes. Yep, tiny and tinier balls and a few big balls tossed in.

I don’t have the energy to take each trend and offer a comment. Please, navigate to the original document and review it at your leisure. I can, however, select three trends and offer an observation or two about this very tiny ball selection.

Before sharing those three trends, I want to provide some context. First, the data gathered appear to be subjective and similar to the dorm outputs of MBA students working on a group project. Second, there is no reference to the thought process itself as applied to a real world problem like boosting sales for opioids. It is the thought process that leads to revenues from consulting that counts.

Source: https://www.youtube.com/watch?v=Dfv_tISYl8A
Image from the ENDEVR opioid video.

Third, McKinsey’s pool of 100 thought leaders seems fixated on two things:

gen AI and electrification and renewables.

But is that statement comprised of three things? [1] AI, [2] electrification, and [3] renewables? Because AI is a greedy consumer of electricity, I think I can see some connection between AI and renewables, but the “electrification” I think about is President Roosevelt’s creation of the Rural Electrification Administration in 1935. Dinobabies can be such nit pickers.

Let’s tackle the electrification point before I get to the real subject of the report, AI in assorted forms and applications. When McKinsey talks about electrification and renewables, McKinsey means:

The electrification and renewables trend encompasses the entire energy production, storage, and distribution value chain. Technologies include renewable sources, such as solar and wind power; clean firm-energy sources, such as nuclear and hydrogen, sustainable fuels, and bioenergy; and energy storage and distribution solutions such as long-duration battery systems and smart grids. In 2019, the interest score for Electrification and renewables was 0.52 on a scale from 0 to 1, where 0 is low and 1 is high. The innovation score was 0.29 on the same scale. The adoption rate was scored at 3 on a scale from 1 to 5, with 1 defined as “frontier innovation” and 5 defined as “fully scaled.” The investment in 2019 was 160 billion dollars. By 2023, the interest score for Electrification and renewables was 0.73. The innovation score was 0.36. The investment was 183 billion dollars. Job postings within this trend changed by 1 percent from 2022 to 2023.

Stop burning fossil fuels? Well, not quite. But the “save the whales” meme is embedded in the verbiage. Confused? That may be the point. What’s the fix? Hire McKinsey to help clarify your thinking.

AI plays the big gorilla in the monograph. The first expensive, hairy, yet promising aspect of smart software is replacing humans. The McKinsey report asserts:

Generative AI describes algorithms (such as ChatGPT) that take unstructured data as input (for example, natural language and images) to create new content, including audio, code, images, text, simulations, and videos. It can automate, augment, and accelerate work by tapping into unstructured mixed-modality data sets to generate new content in various forms.

Yep, smart software can produce reports like this one: Faster, cheaper, and good enough. Just think of the reports the team can do.

The third trend I want to address is digital trust and cyber security. Now the cyber crime world is a relatively specialized one. We know from the CrowdStrike misstep that experts in cyber security can wreak havoc on a global scale. Furthermore, we know that there are hundreds of cyber security outfits offering smart software, threat intelligence, and very specialized technical services to protect their clients. But McKinsey appears to imply that its band of 100 trend identifiers are hip to this. Here’s what the dorm-room brainstormers output:

The digital trust and cybersecurity trend encompasses the technologies behind trust architectures and digital identity, cybersecurity, and Web3. These technologies enable organizations to build, scale, and maintain the trust of stakeholders.

Okay.

I want to mention that other trends, ranging from blasting into space to software development, appear in the list. What strikes me as a bit of an oversight is that smart software is going to be woven into the fabric of the other trends. What? Well, software is going to surf on AI outputs. And big boy rockets, not the duds like the Seattle outfit produces, use assorted smart algorithms to keep the system from burning up or exploding… most of the time. Not perfect, but better, faster, and cheaper than CalTech grads solving equations and rigging cybernetics with wire and a soldering iron.

Net net: This trend report is a sales document. Its purpose is to cause an organization familiar with McKinsey and the organization’s own shortcomings to hire McKinsey to help out with these big problems. The data source is the dorm room. The analysts are cherry picked. The tone is quasi-authoritative. I have no problem with marketing material. In fact, I don’t have a problem with the McKinsey-generated list of trends. That’s what McKinsey does. What the firm does not do is think about the downstream consequences of its recommendations. How do I know this? Returning from a lunch with some friends in rural Illinois, I spotted two opioid addicts doing the droop.

Stephen E Arnold, August 5, 2024

The Key to Success at McKinsey & Company: The 2024 Truth Is Out!

June 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

When I was working at a “real” company, I wanted to labor in the vineyards of a big-time, blue-chip consulting firm. I achieved that goal and, after a suitable period of time in the penal colony, I escaped to a client. I made it out, unscathed, and entered a more interesting, less nutso working life. When the “truth” about big-time, blue-chip consulting firms appears in public sources, I scan the information. Most of it is baloney; for example, the yip yap about McKinsey and its advice pertaining to addictive synthetics. Hey, stuff happens when one is objective. “McKinsey Exec Tells Summer Interns That Learning to Ask AI the Right Questions Is the Key to Success” contains some information which I find quite surprising. First, I don’t know if the factoids in the write up are accurate or if they are the off-the-cuff baloney recruiters regularly present to potential 60-hour-a-week knowledge worker serfs or if the person has a streaming video connection to the McKinsey managing partner’s work-from-the-resort office.

Let’s assume the information is correct and consider some of its implications. An intern is a no-pay or low-pay job for students from the right institutions, the right background, or the right connections. The idea is that associates (one step above the no-pay serf) and partners (the set for life if you don’t die of heart failure crowd) can observe, mentor, and judge these field laborers. The write up states:

Standing out in a summer internship these days boils down to one thing — learning to talk to AI. At least, that’s the advice McKinsey’s chief client officer, Liz Hilton Segel, gave one eager intern at the firm. “My advice to her was to be an outstanding prompt engineer,” Hilton Segel told The Wall Street Journal.

But what about grades? What about my family’s connections to industry, elected officials, and a supreme court judge? What about my background scented with old money, sheepskin from prestigious universities, and a Nobel Prize awarded a relative 50 years ago? These questions, it seems, may no longer be relevant. AI is coming to the blue-chip consulting game, and the old-school markers of building big revenues may no longer matter.

AI matters. After an 11-month effort, McKinsey has produced Lilli. The smart system, despite fits and starts, has delivered results; that is, a payoff, cash money, engagement opportunities. The write up says:

Lilli’s purpose is to aggregate the firm’s knowledge and capabilities so that employees can spend more time engaging with clients, Erik Roth, a senior partner at McKinsey who oversaw Lilli’s development, said last year in a press release announcing the tool.

And the proof? I learned:

“We’ve [McKinsey humanoids] answered over 3 million prompts and add about 120,000 prompts per week,” he [Erik Roth] said. “We are saving on average up to 30% of a consultant’s time that they can reallocate to spend more time with their clients instead of spending more time analyzing things.”

Thus, the future of success is to learn to use Lilli. I am surprised that McKinsey does not sell internships, possibly using a Ticketmaster-type system.

Several observations:

  1. As Lilli gets better or is replaced by a more cost efficient system, interns and newly hired professionals will be replaced by smart software.
  2. McKinsey and other blue-chip outfits will embrace smart software because it can sell what the firm learns to its clients. AI becomes a Petri dish for finding marketable information.
  3. The hallucinative functions of smart software just create an opportunity for McKinsey and other blue-chip firms to sell their surviving professionals at a more inflated fee. Why fail and lose money? Just pay the consulting firm, sidestep the stupidity tax, and crush those competitors to whom the consulting firms sell the cookie cutter knowledge.

Net net: Blue-chip firms survived the threat from gig consultants and the Gerson Lehrman-type challenge. Now McKinsey is positioning itself to create a no-expectation environment for new hires, cut costs, and increase billing rates for the consultants at the top of the pyramid. Forget opioids. Go AI.

Stephen E Arnold, June 21, 2024

Free AI Round Up with Prices

June 18, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

EWeek (once PCWeek and a big fat Ziff publication) has published what seems to be a mash up of MBA-report writing, a bit of smart software razzle dazzle, and two scoops of Gartner Group-type “insight.” The report is okay, and its best feature is that it is free. Why pay a blue-chip or mid-tier consulting firm to assemble a short monograph? Just navigate to “21 Best Generative AI Chatbots.”


A lecturer shocks those in the presentation with a hard truth: Human-generated reports are worse than those produced by a “leading” smart software system. Is this the reason a McKinsey professional told interns, “Prompts are the key to your future”? Thanks, MSFT Copilot. Good enough.

The report consists of:

A table with the “leading” chatbots presented in random order. Forget that alphabetization baloney. Sorting by “leading” chatbot name is so old timey. The table presents these evaluative/informative factors:

  • Best for use case; that is, when one would use a specific chatbot, in the opinion of the EWeek “experts” I assume
  • Query limit. This is baffling since recyclers of generative technology are eager to sell a range of special plans
  • Language model. This column is interesting because it makes clear that of the “leading” chatbots 12 of them are anchored in OpenAI’s “solutions”; Claude turns up three times, and Llama twice. A few vendors mention the use of multiple models, but the “report” does not talk about AI layering or the specific ways in which different systems contribute to the “use case” for each system. Did I detect a sameness in the “leading” solutions? Yep.
  • The baffling Chrome “extension.” I think the idea is that the “leading” solution with a Chrome extension runs in the Google browser. Five solutions do run as a Chrome extension. The other 16 don’t.
  • Pricing. Now prices are slippery. My team pays for ChatGPT, but since the big 4o, the service seems to be free. We use a service not on the list, and each time I access the system, the vendor begs — nay, pleads — for more money. One vendor charges $2,500 per month paid annually. Now, that’s a far cry from Bing Chat Enterprise at $5 per month, which is not exactly the full six pack.

The bulk of the report is a subjective score for each service’s feature set, its ease of use, the quality of output (!), and support. What these categories mean is not provided in a definition of terms. Hey, everyone knows about “quality,” right? And support? Have you tried to contact a whiz-bang leading AI vendor? Let me know how that works out. The screenshots vary slightly, but the underlying sameness struck me. Each write up includes what I would call a superficial or softball listing of pros and cons.

The most stunning aspect of the report is the explanation of “how” the EWeek team evaluated these “leading” systems. Gee, what systems were excluded and why would have been helpful in my opinion. Let me quote the explanation of quality:

To determine the output quality generated by the AI chatbot software, we analyzed the accuracy of responses, coherence in conversation flow, and ability to understand and respond appropriately to user inputs. We selected our top solutions based on their ability to produce high-quality and contextually relevant responses consistently.

Okay, how many queries? How were queries analyzed across systems, assuming similar systems received the same queries? Which systems hallucinated or made up information? What queries caused one or more systems to fail? What were the qualifications of those “experts” evaluating the system responses? Ah, so many questions. My hunch is that EWeek just skipped the academic baloney and went straight to running queries, plugging in a guess-ti-mate, and heading to Starbucks. I do hope I am wrong, but I worked at the Ziffer in the good old days of the big fat PCWeek. There was some rigor, but today? Let’s hit the gym?
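The kind of rigor the write up omits is not hard to sketch. Here is a minimal, purely illustrative Python harness; the chatbot functions, query list, and failure criterion are all hypothetical stand-ins (no real vendor API, and certainly not EWeek’s actual method). It shows the basic discipline: same queries for every system, failures counted, rates reported.

```python
from typing import Callable, Dict, Optional

# Stub responders standing in for real chatbot APIs (hypothetical,
# for illustration only; no vendor service is called).
def chatbot_a(query: str) -> Optional[str]:
    # Pretend this system answers every query.
    return f"answer to: {query}"

def chatbot_b(query: str) -> Optional[str]:
    # Pretend this system fails (returns None) on numeric queries.
    return None if "percent" in query else f"answer to: {query}"

# Every system receives the SAME query set, so failure rates are comparable.
QUERIES = [
    "Summarize the 2023 survey results",
    "What is 80 percent of 250?",
    "Draft a one-paragraph product description",
]

def evaluate(systems: Dict[str, Callable[[str], Optional[str]]]) -> Dict[str, float]:
    """Return each system's failure rate over the shared query set."""
    report = {}
    for name, ask in systems.items():
        failures = sum(1 for q in QUERIES if ask(q) is None)
        report[name] = failures / len(QUERIES)
    return report

print(evaluate({"A": chatbot_a, "B": chatbot_b}))
# System A answers all three queries; system B fails the "percent" query.
```

A real harness would also need blind human scoring, hallucination checks, and disclosed rater qualifications, which is exactly the methodology detail the report leaves out.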

What is the conclusion for this report about the “leading” chatbot services? Here it is:

Determining the “best” generative AI chatbot software can be subjective, as it largely depends on a business’s specific needs and objectives. Chatbot software is enormously varied and continuously evolving,  and new chatbot entrants may offer innovative features and improvements over existing solutions. The best chatbot for your business will vary based on factors such as industry, use case, budget, desired features, and your own experience with AI. There is no “one size fits all” chatbot solution.

Yep, definitely worth the price of admission.

Stephen E Arnold, June 18, 2024

What Is McKinsey & Co. Telling Its Clients about AI?

June 12, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Years ago (decades now) I attended a meeting at the firm’s technology headquarters in Bethesda, Maryland. Our carpetland welcomed the sleek, well-fed, and super entitled Booz, Allen & Hamilton professionals to a low-profile meeting to discuss the McKinsey PR problem. I attended because my boss (the head of the technology management group) assumed I would be invisible to the Big Dog BAH winners. He was correct. I was an off-the-New-York radar “manager,” buried in an obscure line item. So there I was. And what was the subject of this periodic meeting? The Harvard Business Review-McKinsey Award. The NY Booz, Allen consultants failed to come up with this idea. McKinsey did. As a result, the technology management group (soon to overtake the lesser MBA side of the business) had to rehash the humiliation of not getting associated with the once-prestigious Harvard University. (The ethics thing, the medical research issue, and the protest response have tarnished the silver Best in Show trophy. Remember?)

image

One of the most capable pilots found himself answering questions from a door-to-door salesman covering his territory somewhere west of Terre Haute. The pilot who has survived but sits amidst a burning experimental aircraft ponders an important question, “How can I explain that the crash was not my fault?” Thanks, MSFT Copilot. Have you ever found yourself in a similar situation? Can you “recall” one?

Now McKinsey has AI data. Actual hands-on, unbillable work product with smart software. Is the story in the Harvard Business Review? A Netflix documentary? A million-view TikTok hit? A “60 Minutes” segment? No, nyet, unh-unh, negative. The story appears in Joe Mansueto’s Fast Company Magazine! Mr. Mansueto founded Morningstar and has expanded his business interests to online publications and giving away some of his billions.

The write up is different from McKinsey’s stentorian pontifications. It reads a bit like mining coal in a hard-rock dig deep underground: dirty, hard, and ultimately semi-interesting work. Smart software almost broke the McKinsey marvels.

“We Spent Nearly a Year Building a Generative AI Tool. These Are the 5 (Hard) Lessons We Learned” presents information which would have been marketing gold for the McKinsey of decades ago. But this is 2024, more than 18 months after Microsoft’s OpenAI bomb blast at Davos.

What did McKinsey “learn”?

McKinsey wanted to use AI to “bring together the company’s vast but separate knowledge sources.” Of course, McKinsey’s knowledge is “vast.” How could it be tiny? The firm’s expertise in pharmaceutical efficiency methods exceeds that of many other consulting firms. What’s more important, profits or deaths? Answer: I vote for profits. Doesn’t everyone except for a few complainers in Eastern Kentucky, West Virginia, and other flyover states?

The big reveal in the write up is that McKinsey & Co learned that its “vast” knowledge is fragmented and locked in Microsoft PowerPoint slides. After the non-billable overhead work, the bright young future corporate leaders discovered that smart software could only figure out about 15 percent of the knowledge payload in a PowerPoint document. With the vast knowledge in PowerPoint, McKinsey learned that smart software was a semi-helpful utility. The smart software was not able to “readily access McKinsey’s knowledge, generate insights, and thus help clients” or newly-hired consultants do better work, faster, and more economically. Nope.

So what did McKinsey’s band of bright smart software wizards do? The firm coded up its own content parser. How did that home brew software work? The grade is a solid B. The cobbled-together system was able to make sense of 85 percent of a PowerPoint document. The other 15 percent gives the new hires something to do until a senior partner intervenes and says, “Get billable or get gone, you very special buttercup.” Non-billable and a future at McKinsey are not like peanut butter and jelly.
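McKinsey has not published its parser, so the details are anyone’s guess. But the shape of the problem is easy to see: a .pptx file is just a zip archive of XML parts, and the visible text lives in DrawingML `<a:t>` runs. Here is a minimal, hypothetical sketch using only Python’s standard library (the function name `extract_slide_text` is my own invention, not McKinsey’s). It shows both how little code a first pass takes and why such a pass falls short: charts, SmartArt, speaker notes, and images are skipped entirely.

```python
import re
import zipfile
import xml.etree.ElementTree as ET

# DrawingML namespace where PowerPoint stores visible text runs.
A_NS = "{http://schemas.openxmlformats.org/drawingml/2006/main}"

def extract_slide_text(pptx_file):
    """Return {slide part name: concatenated text} for a .pptx.

    A .pptx is a zip archive; each slide is an XML part under
    ppt/slides/. Only <a:t> text runs are captured -- charts,
    SmartArt, notes, and embedded images are ignored, which is
    one reason a naive parser recovers only part of a deck's
    knowledge payload.
    """
    texts = {}
    with zipfile.ZipFile(pptx_file) as z:
        slides = sorted(
            n for n in z.namelist()
            if re.fullmatch(r"ppt/slides/slide\d+\.xml", n)
        )
        for name in slides:
            root = ET.fromstring(z.read(name))
            runs = [t.text for t in root.iter(A_NS + "t") if t.text]
            texts[name] = " ".join(runs)
    return texts
```

A first pass like this is easy; it is the remaining fraction, the meaning locked in diagrams, tables, and images, that keeps the junior consultants busy.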

How did McKinsey characterize its 12-month journey into the reality of consulting baloney? The answer is a great one. Here it is:

With so many challenges and the need to work in a fundamentally new way, we described ourselves as riding the “struggle bus.” 

Did the McKinsey workers break out into work songs to make the drudgery of deciphering PowerPoints go more pleasantly? I am thinking about the Coal Miners Boogie by George Davis, West Virginia Mine Disaster by Jean Ritchie, or my personal favorite Black Dust Fever by the Wildwood Valley Boys.

But the workers bringing brains to bear on reality learned five lessons. One can, I assume, pay McKinsey to apply these lessons to a client firm experiencing a mental high from thinking about the payoffs from AI. On the other hand, consider these in this free blog post with my humble interpretation:

  1. Define a shared aspiration. My version: Figure out what you want to do. Get a plan. Regroup if the objective and the method don’t work or make much sense.
  2. Assemble a multi-disciplinary team. My version: Don’t load up on MBAs. Get individuals who can code, analyze content, and tap existing tools to accomplish specific tasks. Include an old geezer partner who can “explain” what McKinsey means when it suggests “managerial evolution.” Skip the ape to MBA cartoons.
  3. Put the user first. My version: Some lesser soul will have to use the system. Make sure the system is usable and actually works. Skip the minimum viable product and get to the quality of the output and the time required to use the system or just doing the work the old-fashioned way.
  4. Tech, learn, repeat. Covert the random walk into a logical and efficient workflow. Running around with one’s hair on fire is not a methodical process nor a good way to produce value.
  5. Measure and manage. My version: Fire those who failed. Come up with some verbal razzle-dazzle and sell the planning and managing work to a client. Do not do this work on overhead for the consultants who are billable.

What does the great reveal by McKinsey tell me? First, the baloney about “saving an average of up to 30 percent of a consultants’ time by streamlining information gathering and synthesis” sounds like the same old, same old pitched by enterprise search vendors for decades. The reality is that online access to information does not save time; it creates more work, particularly when data voids are exposed. Those old dog partners are going to have to talk with young consultants. No smart software is going to eliminate that task no matter how many senior partners want a silver bullet to kill the beast of a group of beginners.

The second “win” is the idea that “insights are better.” Baloney. Flipping through the famous executive memos to a client, reading the reports with the unaesthetic dash points, and looking at the slide decks created by coal miners of knowledge years ago still has to be done… by a human who is sober, motivated, and hungry for peer recognition. Software is not going to have the same thirst for getting a pat on the head and in some cases on another part of the human frame.

The struggle bus is loading up now. Just hire McKinsey to be the driver, the tour guide, and the outfit that collects the fees. One can convert failure into billability. That’s what the Fast Company write up proves. Nearly a year, and all they got was a ride on the digital equivalent of the Cybertruck, which turned out to be the much-hyped struggle bus?

AI may ultimately rule the world. For now, it simply humbles the brilliant minds at McKinsey and generates a story for Fast Company. Well, that’s something, isn’t it? Now about spinning that story.

Stephen E Arnold, June 12, 2024

Selling AI with Scare Tactics

June 6, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Ah, another article with more assertions to make workers feel they must adopt the smart software that threatens their livelihoods. AI automation firm UiPath describes “3 Common Barriers to AI Adoption and How to Overcome Them.” Before marketing director Michael Robinson gets to those barriers, he tries to motivate readers who might be on the fence about AI. He writes:

“There’s a growing consensus about the need for businesses to embrace AI. McKinsey estimated that generative AI could add between $2.6 to $4.4 trillion in value annually, and Deloitte’s ‘State of AI in the Enterprise’ report found that 94% of surveyed executives ‘agree that AI will transform their industry over the next five years.’ The technology is here, it’s powerful, and innovators are finding new use cases for it every day. But despite its strategic importance, many companies are struggling to make progress on their AI agendas. Indeed, in that same report, Deloitte estimated that 74% of companies weren’t capturing sufficient value from their AI initiatives. Nevertheless, companies sitting on the sidelines can’t afford to wait any longer. As reported by Bain & Company, a ‘larger wedge’ is being driven ‘between those organizations that have a plan [for AI] and those that don’t—amplifying advantage and placing early adopters into stronger positions.’”

Oh, no! What can the laggards do? Fret not, the article outlines the biggest hurdles: lack of a roadmap, limited in-house expertise, and security or privacy concerns. Curious readers can see the post for details about each. As it happens, software like UiPath’s can help businesses clear every one. What a coincidence.

Cynthia Murrell, June 6, 2024

Blue-Chip Consulting Firm Needs Lawyers and Luck

May 15, 2024

McKinsey’s blue chip consultants continue their fancy dancing to explain away an itsy bitsy problem: ruined lives and run-of-the-mill deaths from drug overdoses. The International Business Times reminds us, “McKinsey Under Criminal Investigation Over Alleged Role in Fueling Opioid Epidemic.” The investigation, begun before the pandemic, continues to advance at the glacial pace of justice. Journalist Kiran Tom Sajan writes:

“Global consulting firm McKinsey & Company is under a criminal investigation by the U.S. attorneys’ offices in Massachusetts and the Western District of Virginia over its alleged involvement in fueling the opioid epidemic. The Federal prosecutors, along with the Justice Department’s civil division in Washington, are specifically examining whether the consulting firm participated in a criminal conspiracy by providing advice to Purdue Pharma and other pharmaceutical companies on marketing tactics aimed at increasing sales of prescription painkillers. Purdue is the manufacturer of OxyContin, one of the painkillers that allegedly contributed to widespread addiction and fatal overdoses. Since 2021, McKinsey has reached settlements of approximately $1 billion to resolve investigations and legal actions into its collaboration with opioid manufacturers, primarily Purdue. The company allegedly advised Purdue to intensify its marketing of the drug amid the opioid epidemic, which has resulted in the deaths of hundreds of thousands of Americans. McKinsey has not admitted any wrongdoing.”

Of course not. We learn McKinsey raked in about $86 million working for Purdue, most of it since the drug firm’s 2007 guilty plea. Sajan notes the investigations do not stop with the question of fueling the epidemic: The Justice Department is also considering whether McKinsey obstructed justice when it fired two incautious partners—they were caught communicating about the destruction of related documents. It is also examining whether the firm engaged in healthcare fraud when it helped Purdue and other opioid sellers make fraudulent Medicare claims. Will McKinsey’s recent settlement with insurance companies lend fuel to that dumpster fire? Will Lady Luck kick her opioid addiction and embrace those McKinsey professionals? Maybe.

Cynthia Murrell, May 15, 2024
