AI: Yes, Intellectual Work Will Succumb, Just Sooner Rather Than Later
January 22, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Has AI innovation stalled? Nope. “It’s Getting Harder to Measure Just How Good AI Is Getting” explains:
OpenAI’s end-of-year series of releases included their latest large language model (LLM), o3. o3 does not exactly put the lie to claims that the scaling laws that used to define AI progress don’t work quite that well anymore going forward, but it definitively puts the lie to the claim that AI progress is hitting a wall.
Okay, that proves that AI is hitting the gym and getting pumped.
However, the write up veers into an unexpected calcified space:
The problem is that AIs have been improving so fast that they keep making benchmarks worthless. Once an AI performs well enough on a benchmark we say the benchmark is “saturated,” meaning it’s no longer usefully distinguishing how capable the AIs are, because all of them get near-perfect scores.
What is wrong with the lack of benchmarks? Nothing. Smart software is probabilistic. How accurate is the weather forecast? Ask a wonk at the National Weather Service and you get quite optimistic answers. Ask a child whose birthday party at the park was rained out on a day Willie the Weatherman said it would be sunny, and you get a different answer.
Okay, forget measurements. Here’s what the write up says will happen, and the prediction sounds really rock solid just like Willie the Weatherman:
The way AI is going to truly change our world is by automating an enormous amount of intellectual work that was once done by humans…. Like it or not (and I don’t really like it, myself; I don’t think that this world-changing transition is being handled responsibly at all) none of the three are hitting a wall, and any one of the three would be sufficient to lastingly change the world we live in.
Follow the argument? I must admit that jumping from “AI is getting good,” to “we can no longer measure good,” to “humans will be replaced because AI can do intellectual work” is quite a journey. Perhaps I am missing something, but:
- Just because people outside of research labs have smart software that seems to work like a smart person, what about those hallucinatory outputs? Yep, today’s models make stuff up because probability dictates the output.
- Where in the write up are the use cases for smart software doing “intellectual work”? They aren’t there, because Vox doesn’t have any that are comfortable for journalists and writers, the very people who can be replaced by the SEO AIs advertised on Telegram search engine optimization channels or by marketers writing for Forbes Magazine. That’s right. The excellent use cases are smart software killing jobs once held by fresh MBAs or newly minted CFAs. Why? Cheaper, and as long as the models are “good enough” to turn a profit, let ‘em rip. Yahoooo.
- Smart software is created by humans, and humans shape what it does, how it is deployed, and care not a whit about the knock-on effects. Technology operates in the hands of humans. Humans are deeply flawed entities. Mother Teresas are outnumbered by street gangs in Reno, Nevada, based on my personal observations of that fine city.
Net net: Vox, which can and will be replaced by a cheaper and good enough alternative, doesn’t want to raise that issue. Instead, Vox wanders around the real subject. That subject is that as those who drive AI figure out how to use what’s available and good enough, certain types of work will be pushed into the black boxes of smart software. Could smart software have written this essay? Yes. Could it have done a better job? Publications like the supremely weird Buzzfeed and some consultants I know sure like “good enough.” As long as it is cheap, AI is a winner.
Stephen E Arnold, January 22, 2025
Can the UN Control the Intelligence Units of Countries? Yeah, Sure. No Problem
January 16, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I assume that the information in “Governments Call for Spyware Regulations in UN Security Council Meeting” is spot on or very close to the bull’s eye. The write up reports:
On Tuesday [January 14, 2025], the United Nations Security Council held a meeting to discuss the dangers of commercial spyware, which marks the first time this type of software — also known as government or mercenary spyware — has been discussed at the Security Council. The goal of the meeting, according to the U.S. Mission to the UN, was to “address the implications of the proliferation and misuse of commercial spyware for the maintenance of international peace and security.” The United States and 15 other countries called for the meeting.
Not surprisingly, different countries had different points of view. These ranged from “we have local regulations,” to grand nation-state assertions that bad actions by other governments matter more, to claims that it is the USA’s fault.
The write up from the ubiquitous intelligence commentator did not include any history, context, or practical commentary about the diffusion of awareness of intelware, or what the article, the UN, and my 90-year-old neighbor call spyware.
The public awareness of intelware coincided with hacks of some highly regarded technology. I am not going to name this product, but if one pokes about one might find documentation, code snippets, and even some conference material. Ah, ha. The conference material was obviously designed for marketing. Yes, that is correct. Conferences are routinely held in which the participants are vetted and certain measures are put in place to prevent leakage of these materials. However, once someone passes out a brochure, the information is on the loose and can be snagged by a curious reporter who wants to do good.
Also, some conference organizers themselves make disastrous decisions about what to post on their conference web site; for example, the presentations. I have given presentations at these closed-to-the-public events, and I have found my slide deck on the organizer’s Web site. I won’t mention this outfit, but I no longer participate in any events associated with it.
Also, some conference attendees dress up as sheep and register with possibly bogus information. These folks happily snap pictures of hardware exhibits not available to the public and record audio. At one event held in The Hague, one sat in front of me and recorded my lecture about an oddball research project in which I was involved. I reported the incident to the people at the conference desk. I learned that the individual had left the conference and that his office telephone number was bogus. That’s enough. Leaks do occur. People are careless. Others are just clever and duplicitous.
Thanks, You.com. You are barely able to manage a “good enough” these days. Money problems? Yeah, too bad. My heart bleeds for you.
However, the big reveal of intelware and its cousin policeware coincided with the push by one nation state. I won’t mention the country, but here’s how I perceived what kicked into high gear in 2005 or so. A number of start ups offered data analytics. The basic pitch was that these outfits had developed a suite of procedures to make sense of data. If the client had data, these companies could import the information and display the important points identified by algorithms about the numbers, entities, and times. Marketers were interested in these systems because, like the sales pitches for AI today, the Madison Avenue crowd could dispense with the humans doing the tedious hand work required to make sense of pharmaceutical information. Buy, recycle, or create a data set. Then import it into these systems. Business intelligence spills forth. Leaders in this field did not disclose their “roots” in the intelligence community of the nation encouraging its entrepreneurs to commercialize what they had learned while fulfilling their government military service.
Where did the funding come from? The nation state referenced provided some seed funds. However, in order to keep these systems in line with customer requirements for analyzing the sales of shampoo and blockbuster movies, venture firms with personnel familiar with the nation state’s government software innovations were invited to participate in funding some of these outfits. One of them is a very large publicly traded company. This firm has a commercial sales side and a government sales side. Some readers of this post will have the stock in their mutual fund baskets. Once a couple of these outfits hit the financial jackpot for the venture firms, the race was on.
Companies once focused squarely on serving classified entities in governments in a number of countries wanted to sanitize the software and sell it to a much larger, more diverse corporate market. Today, if one wants to kick the tires of commercially available once-classified systems and methods, one can:
- Attend conferences about data brokering
- Travel to Barcelona or Singapore and contact interesting start ups and small businesses in the marketing data analysis business
- Sign up for free open source intelligence online events and note the names and organizations speaking. (Some of these events allow a registered attendee to conduct a chat with a speaker who represents an interesting company; the exchange is hidden from other attendees but takes place in real time.)
There are more techniques as well to identify outfits which are in the business of providing or developing intelware and policeware tools for anyone with money. How do you find these folks? That’s easy. Dark Web searches, Telegram Group surfing, and running an advertisement for a job requiring a person with specialized experience in a region like southeast Asia.
Now let me return to the topic of the cited article: The UN’s efforts to get governments to create rules, controls, or policies for intelware and policeware. Several observations:
- The effort is effectively decades too late
- The trajectory of high powered technology is outward from its original intended purpose
- Greed is a factor because the software works and can generate useful results: money or genuinely valuable information.
Agree or disagree with me? That’s okay. I did a few small jobs for a couple of these outfits and have just enough insight to point out that the article “Governments Call for Spyware Regulations in UN Security Council Meeting” presents a somewhat thin report and lacks color.
Stephen E Arnold, January 18, 2025
The AI Profit and Cost Race: Drivers, Get Your Checkbooks Out
January 15, 2025
A dinobaby-crafted post. I confess. I used smart software to create the heart wrenching scene of a farmer facing a tough 2025.
Microsoft appears ready to spend $80 billion “on AI-enabled data centers” by December 31, 2025. Half of the money will go to US facilities, and the other half to other nation states. I learned this information from a US cable news outfit’s article “Microsoft Expects to Spend $80 Billion on AI-Enabled Data Centers in Fiscal 2025.” Is Microsoft tossing out numbers as part of a marketing plan to trigger the lustrous Google, or is Microsoft making clear that it is going whole hog for smart software despite the worries of investors that an AI revenue drought persists? My thought is that Microsoft likes to destabilize the McKinsey-type thinking at Google, wait for the online advertising giant to deliver another fabulous Sundar & Prabhakar Comedy Tour, and then continue plodding forward.
The write up reports:
Several top-tier technology companies are rushing to spend billions on Nvidia graphics processing units for training and running AI models. The fast spread of OpenAI’s ChatGPT assistant, which launched in late 2022, kicked off the AI race for companies to deliver their own generative AI capabilities. Having invested more than $13 billion in OpenAI, Microsoft provides cloud infrastructure to the startup and has incorporated its models into Windows, Teams and other products.
Yep, Google-centric marketing.
Thanks, You.com. Good enough.
But if Microsoft does spend $80 billion, how will the company convert those costs into a profit geyser? That’s a good question. Microsoft appears to be countering with discounts for its mainstream consumer software. I saw advertisements offering Windows 11 Professional for $25. Other deep discounts can be found for Office 365, Visio, and even the bread-and-butter sales pitch application, PowerPoint.
Tweaking Google is one thing. Dealing with cost competition is another.
I noted the South China Morning Post’s article “Alibaba Ties Up with Lee Kai-fu’s Unicorn As China’s AI Sector Consolidates.” Tucked into that rah rah write up was this statement:
The cooperation between two of China’s top AI players comes as price wars continue in the domestic market, forcing companies to further slash prices or seek partnerships with former foes. Alibaba Cloud said on Tuesday it would reduce the fees for using its visual reasoning AI model by up to 85 per cent, the third time it had marked down the prices of its AI services in the past year. That came after TikTok parent ByteDance last month cut the price of its visual model to 0.003 yuan (US$0.0004) per thousand token uses, about 85 per cent lower than the industry average.
The message is clear. The same tactic that China’s electric vehicle manufacturers are using will be applied to smart software. The idea is that people will buy good enough products and services if the price is attractive. Bean counters intuitively know that a competitor that reduces prices and delivers an acceptable product can gain market share. The companies unable to compete on price face rising costs and may be forced to cut their prices, thus risking financial collapse.
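A quick back-of-the-envelope check of the quoted ByteDance figures shows the scale of that price pressure. This is a sketch only; the single assumption is that the quoted “about 85 per cent lower” is measured against the industry-average price per thousand tokens.

```python
# Figures quoted in the South China Morning Post excerpt above.
bytedance_price_yuan = 0.003      # yuan per thousand tokens
bytedance_price_usd  = 0.0004     # US dollars per thousand tokens
discount_vs_average  = 0.85       # "about 85 per cent lower" than the industry average

# Assumption: the discount is measured against the industry-average price.
implied_avg_yuan = bytedance_price_yuan / (1 - discount_vs_average)
implied_avg_usd  = implied_avg_yuan * (bytedance_price_usd / bytedance_price_yuan)

print(f"Implied industry average: {implied_avg_yuan:.3f} yuan "
      f"(about US${implied_avg_usd:.4f}) per thousand tokens")
# -> roughly 0.020 yuan, or about US$0.0027, per thousand tokens
```

Even if the percentages are rough, prices measured in fractions of a US cent per thousand tokens leave little headroom for providers carrying heavy infrastructure costs.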
For a multi-national company, the price of Chinese smart software may be sufficiently attractive to win business. Some companies which operate under US sanctions and controls of one type or another may be faced with losing significant markets. Examples include Brazil, India, Middle Eastern nations, and others. That means that a price war can poke holes in the financial predictions on which outfits like Microsoft are basing some business decisions.
What’s interesting is that this smart software tactic apparently operating in China fits in with other efforts to undermine some US methods of dominating the world’s financial system. I have no illusions about the maturity of the AI software. I am, however, realistic about the impact of spending significant sums with the fervent belief that a golden goose will land on the front lawn of Microsoft’s headquarters. I am okay with talking about AI in order to wind up Google. I am a bit skeptical about hosing $80 billion into data centers. These puppies gobble up power, which is going to get expensive quickly if demand continues to blast past the power generation industry’s projections. An economic downturn in 2025 will not help ameliorate the situation. Toss in regional wars and social turmoil and what does one get?
Risk. Welcome to 2025.
Stephen E Arnold, January 15, 2025
Super Humans Share Super Thoughts about Free Speech
January 13, 2025
Prepared by a still-alive dinobaby.
The Marvel comix have come to life. “Elon Musk Responds As Telegram CEO Makes Fun of Facebook Parent Meta Over Fact Checking” reports:
Elon Musk responded to a comment from Telegram CEO Pavel Durov, who made a playful jab at Meta over its recent decision to end fact checking on Facebook and Instagram. Durov, posted about the shut down of Meta’s fact checking program on X (formerly known as Twitter) saying that Telegram’s commitment to freedom of speech does not depend on the US Electoral cycle.
The interaction among three modern Marvel heroes is interesting. Only Mark Zuckerberg, the founder and controlling force at Facebook (now Meta), is producing children with a spouse. Messrs. Musk and Durov are engaged in spawning children (presumably super comix characters) with multiple partners and operating as if each ruled a country. Mr. Musk has fathered a number of children. Mr. Durov allegedly has more than 100 children. The idea uniting these two larger-than-life characters is that they are super humans. Mr. Zuckerberg has a different approach, guided more by political expediency than a desire to churn out numerous baby Zucks.
Technology super heroes head toward a meeting of the United Nations to explain how the world will be working with their organizations. Thanks, Copilot. Good enough.
The article includes this statement from Mr. Durov:
I’m proud that Telegram has supported freedom of speech long before it became politically safe to do so. Our values don’t depend on US electoral cycles, said Durov in a post shared on X.
This is quite a statement. Mr. Durov blocked messages from the Ukrainian government to Russian users of Telegram. After being snared in the French judicial system, Mr. Durov has demonstrated a desire to cooperate with law enforcement. Information about Telegram users has been provided to law enforcement. Mr. Durov is confined to France as his lawyers work to secure his release. Mr. Durov has been learning more about French procedures and bureaucracy since August 2024. The wheels of justice do turn in France, probably less rapidly than the super human Pavel Durov wishes.
After Mr. Durov shared his observation about the Zuck’s willingness to embrace free speech on Twitter (now x.com), the super hero Elon Musk chose to respond. Taking time from posts designed to roil the political waters in Britain, Mr. Musk offered an ironic “Good for you” as a comment about Mr. Durov’s quip about the Zuck.
The question is, “Do these larger-than-life characters with significant personal fortunes and influential social media soap boxes support free speech?” The answer is unclear. From my vantage point in rural Kentucky, I perceive public relations or marketing output from these three individuals. My take is that Mr. Durov talks about free speech as he appears to cooperate with French law enforcement and possibly a nation-state like Russia. Mr. Musk has been characterized by some in the US as “President Musk.” The handle reflects Mr. Musk’s apparent influence on some of the policies of the incoming administration. Mr. Zuckerberg has been quick to contribute money to a recently elected candidate and even faster on the draw when it comes to dumping much of the expensive overhead of fact checking social media content.
The Times of India article is more about the global ambitions of three company leaders. Free speech could be a convenient way to continue to generate business, retain influence over information framing, and reinforce their roles as the 2025 incarnations of Spider-Man, Iron Man, and Hulk. After decades of inattention by regulators, the new super heroes may not be engaged in saving or preserving anything except their power, influence, and cash flows.
Stephen E Arnold, January 13, 2025
Paywalls: New Angles for Bad Actors
January 2, 2025
Information literacy is more important now than ever, especially as people become more polarized in their views. This is due to multiple factors such as the news media chasing profits, bad actors purposefully spreading ignorance, and algorithms that feed people information confirming their biases. Thankfully there are people like Isabella Bruno, who leads the Smithsonian’s Learning and Community department, part of the Office of Digital Transformation. She’s dedicated to learning, and on her Notion she brags…er…shares that she has access to journals and resources that are otherwise locked behind paywalls.
For her job, Bruno uses a lot of academic resources, but she knows that not everyone has the same access she does. She wrote the following resource to help her fellow learning enthusiasts and researchers: How Can I Access Research Resources When Not Attached To An Academic Institution?
Bruno shares a flow chart that explains how to locate resources. If the item is a book, she recommends using LibGen, Z-Library, and BookSC. She forgets to mention the Internet Archive and inter-library loans. If the source is a paper, she points toward OA.mg and PaperPanda, a Chrome extension that accesses papers. She also suggests Unpaywall, another Chrome extension that searches for the desired paper.
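For readers who prefer a script to a browser extension, the same sort of lookup can be done against Unpaywall’s public REST API. The snippet below is a minimal sketch, not part of Bruno’s flow chart; the DOI and email address are placeholders, and the response fields reflect my reading of Unpaywall’s API documentation.

```python
import requests

# Minimal sketch: ask Unpaywall whether an open-access copy of a paper exists.
# The DOI and email below are placeholders; Unpaywall asks callers to supply a
# real contact email as the `email` query parameter.
DOI = "10.1000/xyz123"        # placeholder DOI, swap in the paper you want
EMAIL = "you@example.com"

resp = requests.get(f"https://api.unpaywall.org/v2/{DOI}",
                    params={"email": EMAIL}, timeout=30)
resp.raise_for_status()
record = resp.json()

location = record.get("best_oa_location")
if location:
    print("Open-access copy:", location.get("url_for_pdf") or location.get("url"))
else:
    print("No open-access copy indexed; try inter-library loan or email the author.")
```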
When in further doubt, Bruno recommends Sci-Hub or the subreddit /r/Scholar, where users exchange papers. Her best advice is directly emailing the author, but
“Sometimes you might not get a response. This is because early-career researchers (who do most of the hard work) are the most likely to reply, but the corresponding author (i.e. the author with the email address on the paper) is most likely faculty and their inboxes will often be far too full to respond to these requests. The sad reality is that you’re probably not going to get a response if you’re emailing a senior academic. 100% agree. Also, unless the paper just dropped, there’s no guarantee that any of the authors are still at that institution. Academic job security is a fantasy and researchers change institutions often, so a lot of those emails are going off into the aether.”
Bruno needs to tell people to go to their local university or visit a public library! They know how to legally get around copyright.
Whitney Grace, January 2, 2025
A Better Database of SEC Filings?
January 2, 2025
DocDelta is a new database that says it is “revolutionizing investment research by harnessing the power of AI to decode complex financial documents at scale.” In plain speak, that means it’s an AI-powered platform that analyzes financial documents. The AI studies terabytes of SEC filings, earnings calls, and market data to reveal insights.
DocDelta wants its users to have an edge that other investors are missing. The DocDelta team explains that advanced language analysis combined with financial expertise tracks subtle changes and locates patterns. The platform includes 10-K & 10-Q analysis, real time alerts, and an insider trading tracker. As part of its smart monitoring and automated tools, DocDelta offers risk assessments, financial metrics, and language analysis.
This platform was designed specifically for investment professionals. It notifies investors when companies update their risk factors and disclose material information through *-K filings. It also analyzes annual and quarterly earnings, compares them against past quarters, and identifies material changes in risk factors, financial metrics, and management discussions. There’s also a portfolio management tool and a research feature.
DocDelta sums itself up like this:
“Detect critical changes in SEC filings before the market reacts. Get instant alerts and AI-powered analysis of risk factors, management discussion, and financial metrics.”
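DocDelta does not describe its internals, so here is only a sketch of the kind of filing monitoring the pitch implies, built on the SEC’s public EDGAR submissions feed rather than anything DocDelta discloses. The CIK value and the User-Agent string are sample placeholders.

```python
import requests

# Sketch: list the most recent 10-K, 10-Q, and 8-K filings for one company
# using the SEC's public EDGAR submissions JSON. Not DocDelta's method; just
# an illustration of the raw feed such a service would watch.
CIK = "0000320193"                                       # sample 10-digit, zero-padded CIK
HEADERS = {"User-Agent": "Your Name you@example.com"}    # SEC asks for a contact User-Agent

url = f"https://data.sec.gov/submissions/CIK{CIK}.json"
data = requests.get(url, headers=HEADERS, timeout=30).json()

recent = data["filings"]["recent"]
for form, filed, accession in zip(recent["form"], recent["filingDate"],
                                  recent["accessionNumber"]):
    if form in {"10-K", "10-Q", "8-K"}:
        print(f"{filed}  {form:5s}  {accession}")

# A real alerting tool would diff successive pulls and flag new accession
# numbers, then run text comparison on the filings themselves.
```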
This could be a new tool to help the SEC track bad actors and keep the stock market clean. Is that oxymoronic?
Whitney Grace, January 2, 2025
Technical Debt: A Weight Many Carry Forward to 2025
December 31, 2024
Do you know what technical debt is? It’s also called design debt and code debt. It refers to a development team prioritizing a project’s delivery over a fully functional product, and the resulting consequences. Usually the project has to be redone. Data debt is a type of technical debt, and it refers to the accumulated costs of poor data management that hinder decision-making and efficiency. Which debt is worse? The New Stack delves into that topic in “Who’s the Bigger Villain? Data Debt vs. Technical Debt.”
Technical debt should only be adopted for short-term goals, such as meeting a release date, but it shouldn’t be the SOP. Data debt’s downside is that it results in poor data and manual management. It also reduces data quality, slows decision making, and increases costs. The pair seem indistinguishable but the difference is that with technical debt you can quit and start over. That’s not an option with data debt and the ramifications are bad:
“Reckless and unintentional data debt emerged from cheaper storage costs and a data-hoarding culture, where organizations amassed large volumes of data without establishing proper structures or ensuring shared context and meaning. It was further fueled by resistance to a design-first approach, often dismissed as a potential bottleneck to speed. It may also have sneaked up through fragile multi-hop medallion architectures in data lakes, warehouses, and lakehouses.”
The article goes on to recommend adopting early data modeling and explains how to restructure your current systems. You do that by drawing maps or charts of your data, then projecting where you want them to go. It’s called planning:
“To reduce your data debt, chart your existing data into a transparent, comprehensive data model that maps your current data structures. This can be approached iteratively, addressing needs as they arise — avoid trying to tackle everything at once.
Engage domain experts and data stakeholders in meaningful discussions to align on the data’s context, significance, and usage.
From there, iteratively evolve these models — both for data at rest and data in motion—so they accurately reflect and serve the needs of your organization and customers.
Doing so creates a strong foundation for data consistency, clarity, and scalability, unlocking the data’s full potential and enabling more thoughtful decision-making and future innovation.”
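The quoted advice stays abstract, so here is one minimal way to “chart existing data into a transparent data model” in code form. It is an illustration only; the entities and fields are invented for the example, not taken from the article.

```python
from dataclasses import dataclass
from datetime import date

# Illustration of a small, explicit data model: each entity, field, and unit is
# written down once, so domain experts and engineers share the same definitions.
# Entities and fields here are invented for the example.

@dataclass
class Customer:
    customer_id: str     # stable business key, not a warehouse surrogate
    region: str          # ISO country code agreed with the sales team
    signed_up: date

@dataclass
class Order:
    order_id: str
    customer_id: str     # foreign key to Customer.customer_id
    amount_usd: float    # amount in US dollars; currency fixed by convention
    placed_at: date

# Iteration, not a big bang: start with the entities a current decision needs,
# review the definitions with stakeholders, then extend the model as new
# questions arise, which is the approach the quoted passage recommends.
```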
Isn’t this just good data, project, or organizational management? Charting is a basic tool taught in kindergarten. Why do people forget it so quickly?
Whitney Grace, December 31, 2024
Debbie Downer Says, No AI Payoff Until 2026
December 27, 2024
Holiday greetings from the Financial Review. Its story “Wall Street Needs to Prepare for an AI Winter” is a joyous description of what’s coming down the Information Highway. The uplifting article sings:
shovelling more and more data into larger models will only go so far when it comes to creating “intelligent” capabilities, and we’ve just about arrived at that point. Even if more data were the answer, those companies that indiscriminately vacuumed up material from any source they could find are starting to struggle to acquire enough new information to feed the machine.
Translating to rural Kentucky speak: “We been shoveling in the horse stall and ain’t found the nag yet.”
The flickering light bulb has apparently illuminated the idea that smart software is expensive to develop, train, optimize, run, market, and defend against allegations of copyright infringement.
To add to the profit shadow, Debbie Downer’s cousin compared OpenAI to Visa. The idea in “OpenAI Is Visa” is that Sam AI-Man’s company is working overtime to preserve its lead in AI and become a monopoly before competitors figure out how to knock off OpenAI. The write up says:
Either way, Visa and OpenAI seem to agree on one thing: that “competition is for losers.”
To add to the uncertainty about US AI “dominance,” Venture Beat reports:
DeepSeek-V3, ultra-large open-source AI, outperforms Llama and Qwen on launch.
Does that suggest that the squabbling and mud wrestling among US firms can be body slammed by Chinese AI grapplers who are more agile? Who knows. However, in a series of tweets, DeepSeek suggested that its “cost” was less than $6 million. The idea is that what Chinese electric car pricing is doing to some EV manufacturers, China’s AI will do to US AI. Better and faster? I don’t know, but that “cheaper” angle will resonate with those asked to pump cash into the Big Dogs of US AI.
In January 2023, many were struck by the wonders of smart software. Will the same festive atmosphere prevail in 2025?
Stephen E Arnold, December 27, 2024
Anthropic Gifts a Feeling of Safety: Insecurity Blooms This Holiday Season
December 25, 2024
Written by a dinobaby, not an over-achieving, unexplainable AI system.
TechCrunch published “Google Is Using Anthropic’s Claude to Improve Its Gemini AI.” The write up reports:
Contractors working to improve Google’s Gemini AI are comparing its answers against outputs produced by Anthropic’s competitor model Claude, according to internal correspondence seen by TechCrunch. Google would not say, when reached by TechCrunch for comment, if it had obtained permission for its use of Claude in testing against Gemini.
Beyond Search notes a Pymnts.com report from February 5, 2023, that Google had invested $300 million in Anthropic at that time. Beyond Search recalls a presentation at a law enforcement conference. One comment made to me by an attendee suggested that Google was well aware of Anthropic’s so-called constitutional AI. I am immune to AI and crypto babble, but I did chase down “constitutional AI” because the image the bound phrase sparked in my mind was that of the mess my French bulldog delivers when he has eaten spicy food.
The illustration comes from You.com. Kwanzaa was the magic word. Good enough.
The explanation consumes 34 pages of an ArXiv paper called “Constitutional AI: Harmlessness from AI Feedback.” The paper has more than 48 authors. (Headhunters, please, take note when you need to recruit AI wizards.) I read the paper, and I think — please, note, “think” — the main idea is:
Humans provide some input. Then the Anthropic system figures out how to achieve helpfulness and instruction-following without human feedback. And the “constitution”? Those are the human-created rules necessary to get the smart software rolling along. Presumably Anthropic’s algorithms ride without training wheels forevermore.
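Stripped of its 34 pages, the supervised half of the method reads to me like a critique-and-revise loop. The sketch below is a schematic of that reading, not Anthropic’s code; the model call and the sample principles are placeholders.

```python
import random

# Schematic sketch of the Constitutional AI self-critique loop as I read the
# paper. This is not Anthropic's implementation; `ask_model` stands in for
# whatever LLM completion call is available.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or misleading.",
    "Choose the response that is most helpful while remaining honest.",
    # ...the paper uses a longer list of human-written principles...
]

def ask_model(prompt: str) -> str:
    # Placeholder so the sketch runs without an external service; replace with
    # a real completion call.
    return f"[model output for: {prompt[:60]}...]"

def critique_and_revise(prompt: str, rounds: int = 2) -> str:
    response = ask_model(prompt)
    for _ in range(rounds):
        principle = random.choice(CONSTITUTION)   # the paper samples principles
        critique = ask_model(
            f"Prompt: {prompt}\nResponse: {response}\n"
            f"Critique the response using this principle: {principle}"
        )
        response = ask_model(
            f"Prompt: {prompt}\nResponse: {response}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return response

# The revised responses become supervised training data; a later phase uses
# AI-generated preference labels (RLAIF) rather than human feedback.
```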
The CAI acronym has not caught on like the snappier RAG or “retrieval augmented generation” or the most spectacular jargon “synthetic data.” But obviously Google understands and values it to the tune of hundreds of millions of dollars, staff time, and the attention of big Googler thinkers like Jeff Dean (who once was the Big Dog of AI but has given way to the alpha dog at DeepMind).
The swizzle for this “testing” or whatever the Googlers are doing is “safety.” I know that when I ask for an image like “a high school teacher at the greenboard talking to students who are immersed in their mobile phones”, I am informed that the image is not safe. I assume Anthropic will make such crazy prohibitions slightly less incomprehensible. Well, maybe, maybe not.
Several observations are warranted:
- Google’s investment in Anthropic took place shortly after the Microsoft AI marketing coup in 2023. Perhaps someone knew that Google’s “we invented it” transformer technology was becoming a bit of a problem
- Despite the Google “we are the bestest” stance in AI technology, the company continues to feel the need to prove that it is the bestest. That’s good. Self-knowledge and defeating “not invented here” malaise are positives.
- DeepMind itself — although identified as the go-to place for the most bestest AI technology — may not be perceived as the outfit with the Holy Grail, the secret to eternal life, and the owner of most of the land on which the Seven Cities of Cibola are erected.
Net net: Lots of authors, Google testing itself, and a bit of Google’s inferiority complex — Quite a Kwanzaa gift.
Stephen E Arnold, December 25, 2024
McKinsey Takes One for the Team
December 25, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I read the “real” news in “McKinsey & Company to Pay $650 Million for Role in Opioid Crisis.” The write up asserts:
The global consulting firm McKinsey and Company Friday [December 13, 2024] agreed to pay $650 million to settle a federal probe into its role in helping “turbocharge” sales of the highly addictive opioid painkiller OxyContin for Purdue Pharma…
If I were still working at a big time blue chip consulting firm, I would suggest to the NPR outfit that its researchers should have (a back-of-the-envelope sketch with placeholder numbers appears after the list):
- Estimated the fees billed for opioid-related consulting projects
- Pulled together the estimated number of deaths from illegal / quasi-legal opioid overdoses
- Calculated the revenue per death
- Calculated the cost per death
- Presented the delta between the two totals.
- Presented the aggregate revenue generated for McKinsey’s clients from opioid sales
- Estimated the amount spent to “educate” physicians about the merits of synthetic opioids.
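Here is that back-of-the-envelope sketch. Every figure below except the $650 million settlement reported in the write up is an invented placeholder; the point is the arithmetic, not the numbers.

```python
# Back-of-the-envelope template for the list above. The settlement figure is
# the one reported in the write up; every other number is an invented
# placeholder used only to show the arithmetic.
settlement_usd      = 650_000_000      # reported settlement
consulting_fees_usd = 100_000_000      # placeholder: estimated opioid-related fees billed
estimated_deaths    = 500_000          # placeholder: estimated overdose deaths

revenue_per_death = consulting_fees_usd / estimated_deaths
cost_per_death    = settlement_usd / estimated_deaths

print(f"Revenue per death (placeholder): ${revenue_per_death:,.0f}")
print(f"Settlement cost per death (placeholder): ${cost_per_death:,.0f}")
print(f"Delta per death (placeholder): ${cost_per_death - revenue_per_death:,.0f}")
```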
Interviewing a couple of parents or surviving spouses from Indiana, Kentucky, or West Virginia would have added some local color. But assembling these data cannot be done with a TikTok query. Hence, the write up as it was presented.
Isn’t that efficiency of MBA think outstanding? I did like the Friday the 13th timing. A red ink Friday? Nope. The fine doesn’t do the job for big time Blue Chip consulting firms. Just like EU fines don’t deter the Big Tech outfits. Perhaps something with real consequences is needed? Who am I kidding?
Stephen E Arnold, December 25, 2024