More Inside Dope about McKinsey & Company

April 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

It appears that blue chip consultants are finding some choppy waters in the exclusive money pond at the knowledge country club.

“I Was a Consultant at McKinsey. Here’s the Frustrating Way They Pushed Me Out” reveals some interesting but essentially personal assertions about the blue chip consulting firm. McKinsey & Co. is associated in my mind with the pharmaceutical industry’s money maker, synthetic opioids. Living in Kentucky, evidence about the chemical compound is fairly easy to spot. Drive east of my home. Check out Nitro, West Virginia, and you can gather more evidence.


ChatGPT captures an elite group pushing someone neither liked nor connected out the door. Good enough.

The main idea of the write up is that McKinsey is presented as an exclusive club. Being liked and having connections are more important than any other capability. A “best of the best” on the outs is left marooned in a cube. The only email comes from a consultant offering help related to finding one’s future elsewhere. Fun.

What’s the firm doing in the first quarter of 2024? If the information in the Business Insider article is on the money, McKinsey is reinventing itself. Here are some of the possibly accurate statements in the article:

  1. McKinsey & Co. has found easy consulting money drying up
  2. The firm is downsizing
  3. Work at McKinsey is mostly PowerPoint decks shaped to make the customer “look good”
  4. McKinsey does not follow its own high-value consulting advice when it comes to staffing.

What does the write up suggest? That is a question with different answers. For someone who has never worked at a blue chip consulting firm, the answer is, “Who cares?” For a person with some exposure to these outfits, the answer is, “So what’s new?” From an objective and reasonably well informed vantage point, the answer may be, “Are consulting firms a bunch of baloney?”

Change, however, is afoot. Let me cite one example. Competition for the blue-chip outfits was once narrowly defined. Now the competition is coming from unexpected places. I will offer one example to get your thought process rolling. Axios, a publishing company owned by Cox Enterprises, is now positioning its journalists as “experts.” Instead of charging a couple of thousand dollars per hour, Axios will sell a “name brand expert,” video calls, and special news reports. Plus, Axios will jump into the always-exciting world of conferences in semi-nice places.

How will McKinsey and its ilk respond? Will these firms reveal that they are also publishing houses and have been since their inception? Will they morph into giants of artificial intelligence, possibly creating their own models from the reams of proprietary reports, memoranda, emails, and consultant notes? Will McKinsey buy an Axios-type outfit and morph into something the partners from the 1960s would never recognize? Will blue-chip firms go out of business as individuals low-ball engagements to cash-conscious clients?

Net net: When a firm like McKinsey finds itself pilloried for failure to follow its own advice, the future is uncertain. Perhaps McKinsey should call another blue chip outfit? Better yet, buy some help from GLG or Coleman.

Stephen E Arnold, April 23, 2024

Paranoia or Is it Parano-AI? Yes

April 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I get a kick out of the information about the future impact of smart software. If those writing about the downstream consequences of artificial intelligence were on the beam, those folks would be camping out in one of those salubrious Las Vegas casinos. They are not. Thus, the prognostications provide more insight into the authors’ fears in my opinion.


OpenAI produced this good enough image of a Top Dog reading reports about AI’s taking jobs from senior executives. Quite a messy desk, which is an indicator of an inferior executive mindset.

Here’s an example: “Even the Boss Is Worried! Hundreds of Chief Executives Fear AI Could Steal Their Jobs Too.” The write up is based on a study conducted by Censuswide for AND Digital. Here we go, fear lovers:

  1. A “jobs apocalypse”: “AI experts have predicted a 50-50 chance machines could take over all our jobs within a century.”
  2. Scared yet? “Nearly half – 43 per cent – of bosses polled admitted they too were worried AI could steal their job.”
  3. Ignorance is bliss: “44 per cent of global CEOs did not think their staff were ready to handle AI.”
  4. Die now? “A survey of over 2,700 AI researchers in January meanwhile suggested AI could well be ‘better and cheaper’ than humans in every profession by 2116.”

My view is that the diffusion of certain types of smart software will occur over time. If the technology proves it can cut costs and be good enough, then it will be applied where the benefits are easy to identify and monitor. When something goes off the rails, the smart software will suffer a setback. Changes will be made, and the “Let’s try again” approach will kick in. Can motivated individuals adapt? Sure. The top folks will adjust and continue to perform. The laggards will get an “Also Participated” ribbon and collect money by busking, cleaning houses, or painting houses. The good old Darwinian principles don’t change. A digital panther can kill you just as dead as a real panther.

Exciting? Not for a surviving dinobaby.

Stephen E Arnold, April 22, 2024

LinkedIn Content Ripple: Possible Wave Amplification

April 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Google continues to make headlines. This morning (April 19, 2024) I flicked through the information in my assorted newsreaders. The coverage of Google’s calling the police and having alleged non-Googley professionals chatted up by law enforcement sparked many comments. One of those comments about this most recent demonstration of management mastery was from Dr. Timnit Gebru. My understanding of the Gebru incident is that she called attention to the bias in Google’s smart software systems and methods. She wrote a paper. Big thinkers at Google did not like the paper. The paper appeared, and Dr. Gebru disappeared from the Google payroll. I have oversimplified this remarkable management maneuver, but like some of Google’s synthetic data, I think I am close enough for horseshoes.


Is change coming to a social media service which has been quite homogeneous? Thanks, MSFT Copilot. How’s the security work coming?

Dr. Gebru posted a short item on LinkedIn, which is Microsoft’s professional social media service. Here’s what Dr. Gebru made available to LinkedIn’s members:

Not even 24 hrs after making history as the first company to mass fire workers for pro-Palestine protests, by summarily firing 28 people, Google announced that the “(ir)responsible AI org,” the one they created in response to firing me, is now reporting up the Israeli office, through an SVP there. Seems like they want us to know how forcefully and clearly they are backing this genocide.

To provide context, Dr. Gebru linked to an article on Medium (a begging-for-dollars information service). That article brandished the title “STATEMENT from Google Workers with the No Tech for Apartheid Campaign on Google’s Mass, Retaliatory Firings of Workers: [sic].” This Medium article is at this link. I am not sure if [a] these stories are going to require registration or payment to view and [b] the items will remain online.

What’s interesting about the Dr. Gebru item and her link are the comments made by LinkedIn members. These suggest that [a] most LinkedIn members either did not see Dr. Gebru’s post or were not motivated to click one of the “response” icons or [b] topics like Google’s management mastery are not popular with the LinkedIn audience.

Several observations based on my experience:

  1. Dr. Gebru’s use of LinkedIn may be a one-time shot, but on the other hand, it might provide ideas for others with a specific point of view to use as a platform
  2. With Apple’s willingness to remove Meta apps from the Chinese iPhone app store, will LinkedIn follow with its own filtering of content? I don’t know the answer to the question, but clicking on Dr. Gebru’s link will make it easy to track
  3. Will LinkedIn begin to experience greater pressure to allow content not related to self promotion and looking for business contacts? I have noticed an uptick in requests featuring what appear to be machine-generated images of preponderately young females asking, “Will you be my contact?” I routinely click, No, and I often add a comment along the lines of “I am 80 years old. Why do you want to interact with me?”

Net net: Change may be poised to test some of the professional social media service’s policies.

Stephen E Arnold, April 19, 2024

AI RIFing Financial Analysts (Juniors Only for Now). And Tomorrow?

April 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Bill Gates Worries AI Will Take His Job, Says, ‘Bill, Go Play Pickleball, I’ve Got Malaria Eradication’.” Mr. Gates is apparently set on becoming a farmer. He is busy buying land. He took time out from his billionaire work today to point out that AI will nuke lots of jobs. What type of jobs will be most at risk? Amazon seems to be focused on using robots and smart software to clear out expensive, unreliable humans.

But one profession facing what might be called an interesting future is the financial analyst. “AI Is Coming for Wall Street: Banks Are Reportedly Weighing Cutting Analyst Hiring by Two-Thirds” asserts:

Incoming classes of junior investment-banking analysts could end up being cut as much as two-thirds, some of the people suggested, while those brought on board could fetch lower salaries, on account of their work being assisted by artificial intelligence.

Okay, it is other people’s money, so no big deal if the smart software hallucinates as long as there is churn and percentage scrapes. But what happens when the “senior” analysts leave or get fired? Will smart software replace them, or is it the idea that junior analysts who are “smart” will move up and add value “smart” software cannot?


Thanks, OpenAI. This is a good depiction of the “best of the best” at a major Wall Street financial institution after learning their future was elsewhere.

The article points out:

The consulting firm Accenture has an even more extreme outlook for industry disruption, forecasting that AI could end up replacing or supplementing nearly 75% of all working hours in the banking sector.

Let’s look at the financial sector’s focus on analysts. What other industrial sectors use analysts? Here are several that my team and I track:

  1. Intelligence (business and military)
  2. Law enforcement
  3. Law
  4. Medical subrogation
  5. Consulting firms (niche, general, and technical)
  6. Publishing.

If the great trimming at McKinsey and the big New York banks delivers profits, how quickly will AI-anchored software and systems diffuse across organizations?

The answer to the question is, “Fast.”

Stephen E Arnold, April 19, 2024

ChatGPT’s Use Goes Up But Election Info, Not Trusted

April 19, 2024

ChatGPT was released more than a year ago, and Americans’ usage of the generative content engine is increasing. The Pew Research Center found that 23% of American adults have used ChatGPT, up from 18% in July 2023. While the number of people using ChatGPT continues to rise, many users are skeptical about the information it shares, particularly information related to elections. The Pew Research Center posted a press release about this topic: “Americans’ Use of ChatGPT Is Ticking Up, But Few Trust Its Election Information.”

The Pew Research Center conducted a survey in February 2024 about how Americans use ChatGPT, such as for fun, learning, or workplace tasks. The respondents said they use the AI chatbot for these activities, but they’re wary about trusting any information it spits out about the 2024 US presidential election. Four in ten adults have not too much or no trust in ChatGPT for accurate election information. Only 2% have a great deal or quite a bit of trust in the chatbot.

Pew found that 43% of younger adults (those under thirty years old) have used ChatGPT, a ten-point increase from 2023. Other age groups are using the chatbot more, but the younger crowd remains the largest. Americans with more education are also more likely to use ChatGPT: 37% of those with postgraduate or other advanced degrees have used it.

It’s also interesting to see how Americans are using ChatGPT: for entertainment, learning, or work.

“The share of employed Americans who have used ChatGPT on the job increased from 8% in March 2023 to 20% in February 2024, including an 8-point increase since July. Turning to U.S. adults overall, about one-in-five have used ChatGPT to learn something new (17%) or for entertainment (17%). These shares have increased from about one-in-ten in March 2023. Use of ChatGPT for work, learning or entertainment has largely risen across age groups over the past year. Still, there are striking differences between these groups (those 18 to 29, 30 to 49, and 50 and older)."

When it comes to the 2024 election, 38% of Americans, or nearly four in ten, don’t trust ChatGPT’s information: 18% don’t have too much trust and 20% have no trust at all. The 2% outliers have a great deal or quite a bit of trust, while 10% of Americans have some trust. The remaining groups are the 15% of Americans who aren’t sure whether they should trust ChatGPT and the 34% who have never heard of the chatbot. Regardless of political party, about four in ten Republicans and Democrats don’t trust ChatGPT. It’s also noteworthy that very few people have turned to ChatGPT for election information.

Tech companies have pledged to prevent AI from being misused, but talk is cheap. Chatbots and big tech are programmed to return information that will keep users’ eyes glued to the screen in the same vein as clickbait. Information does need to be curated, verified, and controlled to prevent misinformation. However, such control walks a fine line between freedom of speech and suppression of information.

Whitney Grace, April 19, 2024

RIFed by AI? Do Not Give Hope Who Enter There

April 18, 2024

Rest assured, job seekers, it is not your imagination. Even those with impressive resumes are having trouble landing an interview, never mind a position. Case in point, Your Tango shares, “Former Google Employee Applies to 50 Jobs that He’s Overqualified For and Tracks the Alarming Number of Rejections.” Writer Nia Tipton summarizes a pair of experiments documented on TikTok by ex-Googler Jonathan Javier. He found prospective employers were not impressed with his roles at some of the biggest tech firms in the world. In fact, his years of experience may have harmed his chances: his first 50 applications were designed to see how he would fare as an overqualified candidate. Most companies either did not respond or rejected him outright. He was not surprised. Tipton writes:

“Javier explained that recruiters are seeing hundreds of applications daily. ‘For me, whenever I put a job break out, I get about 30 to 50 every single day,’ he said. ‘So again, everybody, it’s sometimes not your resume. It’s sometimes that there’s so many qualified candidates that you might just be candidate number two and number three.’”

So take heart, applicants: rejections do not necessarily mean you are not worthy. There are just not enough positions to go around. The write-up points to February numbers from the Bureau of Labor Statistics showing that, while the number of available jobs has been growing, so has the unemployment rate. Javier’s experimentation continued:

“In another TikTok video, Jonathan continued his experiment and explained that he applied to 50 jobs with two similar resumes. The first resume showed that he was overqualified, while the other showed that he was qualified. Jonathan quickly received 24 rejections for the overqualified resume, while he received 15 rejections for the qualified resume. Neither got him any interviews. Something interesting that Javier noted was how fast he was rejected with his overqualified resume. From this, he observed that overqualified candidates are often overlooked in favor of candidates that fit 100% of the qualities they are looking for. ‘That’s unfortunate because it creates a bias for people who might be older or who might have a lot more experience, but they’re trying to transition into a specific industry or a new position,’ he said.”

Ouch. It is unclear what, if anything, can be done about this specificity bias in hiring. It seems all one can do is keep trying. But not that way.

Cynthia Murrell, April 18, 2024

Kagi Search Beat Down

April 17, 2024

This essay is the work of a dumb dinobaby. No smart software required.

People surprise me. It is difficult to craft a search engine. Sure, a recent compsci graduate will tell you, “Piece of cake.” It is not. Even with oodles of open source technology, easily gettable content, and a few valiant individuals who actually want relevant results — search and retrieval are tough to get right. The secret to good search, in my opinion, is to define a domain, preferably a technical field, identify the relevant content, obtain rights, if necessary, and then do the indexing and the other “stuff.”
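
Since the write up keeps circling back to what “doing the indexing” actually involves, here is a minimal sketch of the core structure behind most retrieval systems, an inverted index. This is an illustration under simplifying assumptions (a crude tokenizer, raw term-frequency scoring instead of TF-IDF or BM25), not anyone’s production code.

```python
import re
from collections import defaultdict

def tokenize(text):
    # Crude tokenizer: lowercase, split on non-alphanumerics.
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

class InvertedIndex:
    def __init__(self):
        # term -> {doc_id: term frequency in that document}
        self.postings = defaultdict(dict)

    def add(self, doc_id, text):
        for term in tokenize(text):
            self.postings[term][doc_id] = self.postings[term].get(doc_id, 0) + 1

    def search(self, query):
        # Score by summed term frequency; real engines use TF-IDF or BM25.
        scores = defaultdict(int)
        for term in tokenize(query):
            for doc_id, tf in self.postings.get(term, {}).items():
                scores[doc_id] += tf
        return sorted(scores.items(), key=lambda kv: -kv[1])

idx = InvertedIndex()
idx.add("doc1", "Relevant results are tough to get right.")
idx.add("doc2", "Indexing a technical domain takes rights and work.")
print(idx.search("relevant indexing"))  # each document matches one query term
```

Everything hard about search (deduplication, ranking, spam, freshness) happens around this simple core, which is the point: the data structure is easy; the “stuff” is not.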

In my experience, it is a good idea to have either a friend with deep pockets, a US government grant (hello, NSF, said Google decades ago), or a credit card with a hefty credit line. Failing these generally acceptable solutions, one can venture into the land of other people’s money. When that runs out or just does not work, one can become a pay-to-play outfit. We know what that business model delivers. But for a tiny percentage of online users, a subscription service makes perfect sense. The only problem is that selling subscriptions is expensive, and there is the problem of churn. Lose a customer and spend quite a bit of money replacing that individual. Lose a big customer and spend oodles and oodles of money replacing that big spender.

I read “Do Not Use Kagi.” This, in turn, directed me to “Why I Lost Faith in Kagi.” Okay, what’s up with the Kagi booing? The “Lost Faith” article runs about 4,000 words. The key passage for me is:

Between the absolute blasé attitude towards privacy, the 100% dedication to AI being the future of search, and the completely misguided use of the company’s limited funds, I honestly can’t see Kagi as something I could ever recommend to people.

I looked at Kagi when it first became available, and I wrote a short email to the “Vlad” persona. I am not sure if I followed up. I was curious about how the blend of artificial intelligence and metasearch was going to deal with such issues as:

  1. Deduplication of results (a sketch of one approach follows this list)
  2. Latency when a complex query in a metasearch system has to wait for a module to do its thing
  3. How the business model was going to work: expensive subscriptions, venture funding, collateral sales of the interface to law enforcement, advertising, etc.
  4. Controlling the cost of the pings, pipes, and power for the plumbing
  5. Spam control.
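
On the first point, deduplication, here is a minimal sketch of one common approach, assuming each engine returns a ranked list of (title, URL) pairs. Production metasearch systems also fuzzy-match titles and snippets; this sketch only collapses near-identical URLs.

```python
from urllib.parse import urlsplit

def normalize(url):
    # Drop scheme, "www.", query strings, fragments, and trailing slashes
    # so near-identical links from different engines collapse together.
    parts = urlsplit(url.lower())
    host = parts.netloc.removeprefix("www.")  # Python 3.9+
    return host + parts.path.rstrip("/")

def dedupe(result_lists):
    # result_lists: one ranked list of (title, url) tuples per engine.
    seen, merged = set(), []
    for results in result_lists:
        for title, url in results:
            key = normalize(url)
            if key not in seen:
                seen.add(key)
                merged.append((title, url))
    return merged

engine_a = [("Kagi", "https://www.kagi.com/")]
engine_b = [("Kagi Search", "https://kagi.com"), ("Kagi Blog", "https://blog.kagi.com/")]
print(dedupe([engine_a, engine_b]))  # the duplicate kagi.com entry is dropped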

I know from experience that those dabbling in the search game ignore some of my routine questions. The reasons range from “we are smarter than you” to “our approach just handles these issues.”


Thanks, MSFT Copilot. Recognize anyone in the image you created?

I still struggle with the business model of non-ad supported search and retrieval systems. Subscriptions work. Well, they worked out of the gate for ChatGPT, but how many smart search systems do I want to join? Answer: Zero.

Metasearch systems are simply sucker fish on the shark bodies of a Web search operator. Bing is in the metasearch game because it is a fraction of the Googzilla operation. It is doing what it can to boost its user base. Just look at the wonky Edge ads and the rumored minuscule gain the addition of smart search has delivered to Bing traffic. Poor Yandex is relocating and finds itself in a different world from the cheerful environment of Russia.

Web content indexing is expensive, difficult, and tricky.

But why pick on Kagi? Beats me. Why not write about dogpile.com, ask.com, the duck thing, or startpage.com (formerly ixquick.com)? Each embodies a certain subsonic vibe, right?

Maybe it is the AI flavor of Kagi? Maybe it is the amateur hour approach taken with some functions? Maybe it is just a disconnect between an informed user and an entrepreneurial outfit running a mile a minute with a sign that says, “Subscribe”?

I don’t know, but it is interesting that, with Web search essentially a massive disappointment, some bright GenX’er has not figured out a solution.

Stephen E Arnold, April 17, 2024

Meta: Innovating via Intentions

April 17, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Analytics India published “Meta Releases AI on WhatsApp, Looks Like Perplexity AI.” The headline caught my attention. I don’t pay much attention to the Zuckbook and the other Meta properties. The Analytics India story made this statement which caught my attention:

What users type in the search bar remains confidential and is not shared with Meta AI unless users intentionally send a query to the Meta AI chatbot.

I am okay with copying from Silicon Valley type outfits. That’s part of the game, which includes colors, shuffling staff, and providing jibber jabber instead of useful interfaces and documentation about policies. But think about the statement: “unless users intentionally send a query to the Meta AI chatbot.” Doesn’t that mean we don’t keep track of queries unless a user sends a query to the Zuckbook’s smart software? I love the “intention” because the user is making a choice between a search function which one of my team told me is not very useful and a “new” search system which will be better. If it is better, then user queries get piped into a smart search system for which the documentation is sparse. What happens to those data? How will those data be monetized? Will the data be shared with those who have a business relationship with Meta?


Thanks, MSFT Copilot. Good enough, but that’s what one might say about MSFT security, right?

So many questions.

The article states:

Users can still search their conversations for specific content without interacting with Meta AI, maintaining the same level of ease and privacy as before. Additionally, personal messages and calls remain end-to-end encrypted, ensuring neither WhatsApp nor Meta can access them, even with the Meta AI integration.

There is no substantiation of this assertion. Indeed, since the testimony of Frances Haugen, I am not certain what Meta does, and I am not willing to accept assertions about what is accessible to the firm’s employees and what is not. What about the metadata? Is that part of the chunk of data Meta cannot access?

Facebook, WhatsApp, and Instagram are interesting services. The information in the Meta services appears to me to be quite useful for a number of endeavors. Meta is less helpful to academic research groups than it could be. Some have found data cut off or filtered. Imitating another AI outfit’s graphic design is the lowest on my list of Meta issues.

The company is profitable. It has considerable impact. The firm has oodles of data. But now a user’s intention gives permission to an interesting outfit to do whatever with that information. Unsettling? Nope, just part of the unregulated world of digital operations which some assert are having a somewhat negative impact on society. Yep, intentionally.

Stephen E Arnold, April 17, 2024

Data Thirst? Guess Who Can Help?

April 17, 2024

As large language models approach the limit of freely available data on the Internet, companies are eyeing sources supposedly protected by copyrights and user agreements. PCMag reports, “Google Let OpenAI Scrape YouTube Data Because Google Was Doing It Too.” It seems Google would rather double down on violations than be hypocritical. Writer Emily Price tells us:

“OpenAI made headlines recently after its CTO couldn’t say definitively whether the company had trained its Sora video generator on YouTube data, but it looks like most of the tech giants—OpenAI, Google, and Meta—have dabbled in potentially unauthorized data scraping, or at least seriously considered it. As the New York Times reports, OpenAI transcribed more than a million hours of YouTube videos using its Whisper technology in order to train its GPT-4 AI model. But Google, which owns YouTube, did the same, potentially violating its creators’ copyrights, so it didn’t go after OpenAI. In an interview with Bloomberg this week, YouTube CEO Neal Mohan said the company’s terms of service ‘does not allow for things like transcripts or video bits to be downloaded, and that is a clear violation of our terms of service.’ But when pressed on whether YouTube data was scraped by OpenAI, Mohan was evasive. ‘I have seen reports that it may or may not have been used. I have no information myself,’ he said.”

How silly to think the CEO would have any information. Besides stealing from YouTube content creators, companies are exploring other ways to pierce untapped sources of data. According to the Times article cited above, Meta considered buying Simon & Schuster to unlock all its published works. We are sure authors would have been thrilled. Meta executives also considered scraping any protected data they could find and hoping no one would notice. If caught, we suspect they would consider any fees a small price to pay.
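
For readers curious what “transcribing a million hours” looks like mechanically, here is a minimal sketch using the open source whisper package. The model size and file name are placeholders; this illustrates the published API, not a reconstruction of OpenAI’s internal pipeline.

```python
import whisper  # pip install openai-whisper; also requires ffmpeg on the system

# Load a checkpoint; larger models ("small", "medium", "large") trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe one local audio file (placeholder name) to plain text.
result = model.transcribe("downloaded_audio.mp3")
print(result["text"])
```

Run this across millions of files on a GPU fleet and the output becomes training text for a language model, which is the scale the Times article describes.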

The same article notes Google changed its terms of service so it could train its AI on Google Maps reviews and public Google Docs. See, the company can play by the rules, as long as it remembers to change them first. Preferably, as it did here, over a holiday weekend.

Cynthia Murrell, April 17, 2024

A Less Crazy View of AI: From Kathmandu via Tufts University

April 16, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I try to look for interesting write ups from numerous places. Some in Kentucky (well, not really) and others in farther flung locations like Kathmandu. I read “The boring truth about AI.” The article was not boring in my opinion. The author (Amar Bhidé) presented what seemed like a non-crazy, hyperbole-free discussion of smart software. I am not sure how many people in Greenspring, Kentucky, read the Kathmandu Post, but then again, I am not sure how many people in Greenspring, Kentucky, can read.


Rah rah. Thanks, MSFT Copilot, you have the hands-on expertise to prove that the New York City chatbot is just the best system when it comes to providing information of a legal nature that is dead wrong. Rah rah.

What’s the Tufts University business professor say? Let’s take a look at several statements in the article.

First, I circled this passage:

As economic historian Nathan Rosenberg and many others have shown, transformative technologies do not suddenly appear out of the blue. Instead, meaningful advances require discovering and gradually overcoming many unanticipated problems.

Second, I put a blue check mark next to this segment:

Unlike the Manhattan Project, which proceeded at breakneck speed, AI developers have been at work for more than seven decades, quietly inserting AI into everything from digital cameras and scanners to smartphones, automatic braking and fuel-injection systems in cars, special effects in movies, Google searches, digital communications, and social-media platforms. And, as with other technological advances, AI has long been put to military and criminal uses. Yet AI advances have been gradual and uncertain.

The author references IBM’s outstanding Watson system. I think that’s part of the “gradual and uncertain” in the hands of Big Blue’s marketing professionals.

Finally, I drew a happy face next to this:

Perhaps LLM chatbots can increase profits by providing cheap if maddening, customer service. Someday, a breakthrough may dramatically increase the technology’s useful scope. For now, though, these oft-mendacious talking horses warrant neither euphoria nor panic about “existential risks to humanity.” Best keep calm and let the traditional decentralised evolution of technology, laws, and regulations carry on.

I would suggest that a more pragmatic and less frenetic approach to smart software makes more sense than the wild and crazy information zapped from podcasts and conference presentations.

Stephen E Arnold, April 16, 2024

