The AI Profit and Cost Race: Drivers, Get Your Checkbooks Out
January 15, 2025
A dinobaby-crafted post. I confess. I used smart software to create the heart-wrenching scene of a farmer facing a tough 2025.
Microsoft appears ready to spend $80 billion “on AI-enabled data centers” by December 31, 2025. Half of the money will go to US facilities, and the other half, other nation states. I learned this information from a US cable news outfit’s article “Microsoft Expects to Spend $80 Billion on AI-Enabled Data Centers in Fiscal 2025.” Is Microsoft tossing out numbers as part of a marketing plan to trigger the lustrous Google, or is Microsoft making clear that it is going whole hog for smart software despite the worries of investors that an AI revenue drought persists? My thought is that Microsoft likes to destabilize the McKinsey-type thinking at Google, wait for the online advertising giant to deliver another fabulous Sundar & Prabhakar Comedy Tour, and then continue plodding forward.
The write up reports:
Several top-tier technology companies are rushing to spend billions on Nvidia graphics processing units for training and running AI models. The fast spread of OpenAI’s ChatGPT assistant, which launched in late 2022, kicked off the AI race for companies to deliver their own generative AI capabilities. Having invested more than $13 billion in OpenAI, Microsoft provides cloud infrastructure to the startup and has incorporated its models into Windows, Teams and other products.
Yep, Google-centric marketing.
Thanks, You.com. Good enough.
But if Microsoft does spend $80 billion, how will the company convert those costs into a profit geyser? That’s a good question. Microsoft appears to be cooperating with discounts for its mainstream consumer software. I saw advertisements offering Windows 11 Professional for $25. Other deep discounts can be found for Office 365, Visio, and even the bread-and-butter sales pitch PowerPoint application.
Tweaking Google is one thing. Dealing with cost competition is another.
I noted the South China Morning Post’s article “Alibaba Ties Up with Lee Kai-fu’s Unicorn As China’s AI Sector Consolidates.” Tucked into that rah rah write up was this statement:
The cooperation between two of China’s top AI players comes as price wars continue in the domestic market, forcing companies to further slash prices or seek partnerships with former foes. Alibaba Cloud said on Tuesday it would reduce the fees for using its visual reasoning AI model by up to 85 per cent, the third time it had marked down the prices of its AI services in the past year. That came after TikTok parent ByteDance last month cut the price of its visual model to 0.003 yuan (US$0.0004) per thousand token uses, about 85 per cent lower than the industry average.
The message is clear. The same tactic that China’s electric vehicle manufacturers are using will be applied to smart software. The idea is that people will buy good enough products and services if the price is attractive. Bean counters intuitively know that a competitor that reduces prices and delivers an acceptable product can gain market share. The companies unable to compete on price face rising costs and may be forced to cut their prices, thus risking financial collapse.
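As a back-of-the-envelope check on the figures quoted above, here is a short sketch. The "industry average" baseline is not stated directly; it is inferred from the article's own percentages, so treat the numbers as illustrative.

```python
# Back-of-the-envelope check on the quoted price cuts.
# ByteDance's visual model: 0.003 yuan per thousand tokens,
# described as about 85% below the industry average.
bytedance_price = 0.003  # yuan per 1,000 tokens

# If 0.003 yuan is 85% below the average, the implied average is:
implied_average = bytedance_price / (1 - 0.85)

# Alibaba Cloud's cut of up to 85% off its own list price
# leaves a buyer paying 15 cents on the yuan:
alibaba_fraction_paid = 1 - 0.85

print(f"Implied industry average: {implied_average:.3f} yuan per 1k tokens")
print(f"Post-cut Alibaba price: {alibaba_fraction_paid:.0%} of list")
```

The implied average of roughly 0.02 yuan per thousand tokens shows why an 85 percent cut is the kind of move that forces competitors to respond or exit.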
For a multi-national company, the cost of Chinese smart software may be sufficiently good to attract business. Some companies, operating under US sanctions and export controls of one type or another, may be faced with losing significant markets. Examples include Brazil, India, Middle Eastern nations, and others. That means a price war can poke holes in the financial projections on which outfits like Microsoft are basing some business decisions.
What’s interesting is that this smart software tactic apparently operating in China fits in with other efforts to undermine some US methods of dominating the world’s financial system. I have no illusions about the maturity of the AI software. I am, however, realistic about the impact of spending significant sums with the fervent belief that a golden goose will land on the front lawn of Microsoft’s headquarters. I am okay with talking about AI in order to wind up Google. I am a bit skeptical about hosing $80 billion into data centers. These puppies gobble up power, which is going to get expensive quickly if demand continues to blast past the power generation industry’s projections. An economic downturn in 2025 will not help ameliorate the situation. Toss in regional wars and social turmoil and what does one get?
Risk. Welcome to 2025.
Stephen E Arnold, January 15, 2025
Super Humans Share Super Thoughts about Free Speech
January 13, 2025
Prepared by a still-alive dinobaby.
The Marvel comix have come to life. “Elon Musk Responds As Telegram CEO Makes Fun of Facebook Parent Meta Over Fact Checking” reports:
Elon Musk responded to a comment from Telegram CEO Pavel Durov, who made a playful jab at Meta over its recent decision to end fact checking on Facebook and Instagram. Durov posted about the shutdown of Meta’s fact-checking program on X (formerly known as Twitter), saying that Telegram’s commitment to freedom of speech does not depend on the US electoral cycle.
The interaction among three modern Marvel heroes is interesting. Only Mark Zuckerberg, the founder and controlling force at Facebook (now Meta) is producing children with a spouse. Messrs. Musk and Durov are engaged in spawning children — presumably super comix characters — with multiple partners and operating as if each ruled a country. Mr. Musk has fathered a number of children. Mr. Durov allegedly has more than 100 children. The idea uniting these two larger-than-life characters is that they are super humans. Mr. Zuckerberg has a different approach, guided more by political expediency than a desire to churn out numerous baby Zucks.
Technology super heroes head toward a meeting of the United Nations to explain how the world will be working with their organizations. Thanks, Copilot. Good enough.
The article includes this statement from Mr. Durov:
“I’m proud that Telegram has supported freedom of speech long before it became politically safe to do so. Our values don’t depend on US electoral cycles,” said Durov in a post shared on X.
This is quite a statement. Mr. Durov blocked messages from the Ukrainian government to Russian users of Telegram. After being snared in the French judicial system, Mr. Durov has demonstrated a desire to cooperate with law enforcement. Information about Telegram users has been provided to law enforcement. Mr. Durov is confined to France as his lawyers work to secure his release. Mr. Durov has been learning more about French procedures and bureaucracy since August 2024. The wheels of justice do turn in France, probably less rapidly than the super human Pavel Durov wishes.
After Mr. Durov shared his observation about the Zuck’s willingness to embrace free speech on Twitter (now x.com), the super hero Elon Musk chose to respond. Taking time from posts designed to roil the political waters in Britain, Mr. Musk offered an ironic “Good for you” as a comment about Mr. Durov’s quip about the Zuck.
The question is, “Do these larger-than-life characters with significant personal fortunes and influential social media soap boxes support free speech?” The answer is unclear. From my vantage point in rural Kentucky, I perceive public relations or marketing output from these three individuals. My take is that Mr. Durov talks about free speech as he appears to cooperate with French law enforcement and possibly a nation-state like Russia. Mr. Musk has been characterized by some in the US as “President Musk.” The handle reflects Mr. Musk’s apparent influence on some of the policies of the incoming administration. Mr. Zuckerberg has been quick to contribute money to a recently elected candidate and even faster on the draw when it comes to dumping much of the expensive overhead of fact checking social media content.
The Times of India article is more about the global ambitions of three company leaders. Free speech could be a convenient way to continue to generate business, retain influence over information framing, and reinforce their roles as the 2025 incarnations of Spider-Man, Iron Man, and Hulk. After decades of inattention by regulators, the new super heroes may not be engaged in saving or preserving anything except their power, influence, and cash flows.
Stephen E Arnold, January 13, 2025
Paywalls: New Angles for Bad Actors
January 2, 2025
Information literacy is more important now than ever, especially as people become more polarized in their views. This is due to multiple factors, such as the news media chasing profits, bad actors purposefully spreading ignorance, and algorithms that feed people information confirming their biases. Thankfully there are people like Isabella Bruno, who leads the Smithsonian’s Learning and Community department, part of the Office of Digital Transformation. She’s dedicated to learning, and on her Notion page she brags…er…shares that she has access to journals and resources that are otherwise locked behind paywalls.
For her job, Bruno uses a lot of academic resources, but she knows that not everyone has the same access she does. She wrote the following resource to help her fellow learning enthusiasts and researchers: How Can I Access Research Resources When Not Attached To An Academic Institution?
Bruno shares a flow chart that explains how to locate resources. If the item is a book, she recommends using LibGen, Z-Library, and BookSC. She forgets to mention the Internet Archive and inter-library loans. If the source is a paper, she points toward OA.mg and suggests trying PaperPanda, a Chrome extension that accesses papers. She also suggests Unpaywall, another Chrome extension that searches for the desired paper.
When in further doubt, Bruno recommends Sci-Hub or the subreddit /r/Scholar, where users exchange papers. Her best advice is directly emailing the author, but:
“Sometimes you might not get a response. This is because early-career researchers (who do most of the hard work) are the most likely to reply, but the corresponding author (i.e. the author with the email address on the paper) is most likely faculty and their inboxes will often be far too full to respond to these requests. The sad reality is that you’re probably not going to get a response if you’re emailing a senior academic. 100% agree. Also, unless the paper just dropped, there’s no guarantee that any of the authors are still at that institution. Academic job security is a fantasy and researchers change institutions often, so a lot of those emails are going off into the aether.”
Bruno needs to tell people to go to their local university or visit a public library! They know how to legally get around copyright.
Whitney Grace, January 2, 2025
A Better Database of SEC Filings?
January 2, 2025
DocDelta is a new database that says it is, “revolutionizing investment research by harnessing the power of AI to decode complex financial documents at scale.” In plain speak that means it’s an AI-powered platform that analyzes financial documents. The AI studies terabytes of SEC filings, earnings calls, and market data to reveal insights.
DocDelta wants its users to have an edge that other investors are missing. The DocDelta team explains that advanced language models combined with financial expertise track subtle changes and locate patterns. The platform includes 10-K and 10-Q analysis, real-time alerts, and an insider-trading tracker. As part of its smart monitoring and automated tools, DocDelta offers risk assessments, financial metrics, and language analysis.
This platform was designed specifically for investment professionals. It notifies investors when companies update their risk factors or disclose material information through *-K filings. It also analyzes annual and quarterly earnings, compares them against past quarters, and identifies material changes in risk factors, financial metrics, and management discussions. There’s also a portfolio management tool and a research feature.
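DocDelta’s internals are not public, but the core trick it advertises, flagging changes in a risk-factors section between two filings, is an old and simple one. A minimal sketch of the general technique might look like this (the function name, sample text, and sentence-splitting approach are illustrative stand-ins, not DocDelta’s code):

```python
import difflib

def changed_risk_factors(old_text: str, new_text: str) -> list[str]:
    """Return sentences added or removed between two risk-factor sections."""
    old_sents = [s.strip() for s in old_text.split(".") if s.strip()]
    new_sents = [s.strip() for s in new_text.split(".") if s.strip()]
    diff = difflib.ndiff(old_sents, new_sents)
    # Lines prefixed '+ ' were added, '- ' were removed;
    # '? ' hint lines are dropped by the filter below.
    return [line for line in diff if line.startswith(("+ ", "- "))]

old_10k = "Demand may fluctuate. We rely on a single supplier."
new_10k = "Demand may fluctuate. We rely on multiple suppliers. Litigation is pending."
for change in changed_risk_factors(old_10k, new_10k):
    print(change)
```

A production system would pull the actual sections from EDGAR and layer language analysis on top, but the "alert when the risk factors move" idea reduces to a diff.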
DocDelta sums itself up like this:
“Detect critical changes in SEC filings before the market reacts. Get instant alerts and AI-powered analysis of risk factors, management discussion, and financial metrics.”
This could be a new tool to help the SEC track bad actors and keep the stock market clean. Is that oxymoronic?
Whitney Grace, January 2, 2025
Technical Debt: A Weight Many Carry Forward to 2025
December 31, 2024
Do you know what technical debt is? It’s also called design debt and code debt. It refers to a development team prioritizing a project’s delivery over a functional product and the resulting consequences. Usually the project has to be redone. Data debt is a type of technical debt, and it refers to the accumulated costs of poor data management that hinder decision-making and efficiency. Which debt is worse? The Newstack delves into that topic in: “Who’s the Bigger Villain? Data Debt vs. Technical Debt.”
Technical debt should only be adopted for short-term goals, such as meeting a release date, but it shouldn’t be the SOP. Data debt’s downside is that it results in poor data and manual management. It also reduces data quality, slows decision making, and increases costs. The pair seem indistinguishable but the difference is that with technical debt you can quit and start over. That’s not an option with data debt and the ramifications are bad:
“Reckless and unintentional data debt emerged from cheaper storage costs and a data-hoarding culture, where organizations amassed large volumes of data without establishing proper structures or ensuring shared context and meaning. It was further fueled by resistance to a design-first approach, often dismissed as a potential bottleneck to speed. It may also have sneaked up through fragile multi-hop medallion architectures in data lakes, warehouses, and lakehouses.”
The article goes on to recommend adopting data modeling early and explains how to restructure your current systems. You do that by drawing maps or charts of your data, then projecting where you want them to go. It’s called planning:
“To reduce your data debt, chart your existing data into a transparent, comprehensive data model that maps your current data structures. This can be approached iteratively, addressing needs as they arise — avoid trying to tackle everything at once.
Engage domain experts and data stakeholders in meaningful discussions to align on the data’s context, significance, and usage.
From there, iteratively evolve these models — both for data at rest and data in motion—so they accurately reflect and serve the needs of your organization and customers.
Doing so creates a strong foundation for data consistency, clarity, and scalability, unlocking the data’s full potential and enabling more thoughtful decision-making and future innovation.”
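The iterative advice above, chart a small explicit model first, then evolve it with the stakeholders, can be sketched in a few lines of code. Everything here (the entity, field names, and vocabulary) is an invented example, not anything from the article:

```python
from dataclasses import dataclass
from datetime import date

# Iteration 1: a minimal, explicit model of one domain concept,
# agreed with domain experts, instead of an untyped blob in a data lake.
@dataclass
class CustomerOrder:
    order_id: str
    customer_id: str
    placed_on: date
    total_cents: int  # money as integer cents, not floats

# Iteration 2 (later): extend the model as new needs surface,
# rather than hoarding raw fields with no shared meaning.
@dataclass
class CustomerOrderV2(CustomerOrder):
    channel: str = "web"  # agreed vocabulary: "web", "store", "phone"

order = CustomerOrderV2("o-1", "c-9", date(2024, 12, 1), 1250)
print(order.channel, order.total_cents)
```

The point is not the dataclass syntax; it is that each field exists because someone agreed on its context, significance, and usage, which is exactly the debt-avoidance step the article describes.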
Isn’t this just good data, project, or organizational management? Charting is a basic tool taught in kindergarten. Why do people forget it so quickly?
Whitney Grace, December 31, 2024
Debbie Downer Says, No AI Payoff Until 2026
December 27, 2024
Holiday greetings from the Financial Review. Its story “Wall Street Needs to Prepare for an AI Winter” is a joyous description of what’s coming down the Information Highway. The uplifting article sings:
shovelling more and more data into larger models will only go so far when it comes to creating “intelligent” capabilities, and we’ve just about arrived at that point. Even if more data were the answer, those companies that indiscriminately vacuumed up material from any source they could find are starting to struggle to acquire enough new information to feed the machine.
Translating to rural Kentucky speak: “We been shoveling in the horse stall and ain’t found the nag yet.”
The flickering light bulb has apparently illuminated the idea that smart software is expensive to develop, train, optimize, run, market, and defend against allegations of copyright infringement.
To add to the profit shadow, Debbie Downer’s cousin compared OpenAI to Visa. The idea in “OpenAI Is Visa” is that Sam AI-Man’s company is working overtime to preserve its lead in AI and become a monopoly before competitors figure out how to knock off OpenAI. The write up says:
Either way, Visa and OpenAI seem to agree on one thing: that “competition is for losers.”
To add to the uncertainty about US AI “dominance,” Venture Beat reports:
DeepSeek-V3, ultra-large open-source AI, outperforms Llama and Qwen on launch.
Does that suggest that the squabbling and mud wrestling among US firms can be body slammed by Chinese AI grapplers who are more agile? Who knows. However, in a series of tweets, DeepSeek suggested that its “cost” was less than $6 million. The idea is that what Chinese electric car pricing is doing to some EV manufacturers, China’s AI will do to US AI. Better and faster? I don’t know, but that “cheaper” angle will resonate with those asked to pump cash into the Big Dogs of US AI.
In January 2023, many were struck by the wonders of smart software. Will the same festive atmosphere prevail in 2025?
Stephen E Arnold, December 27, 2024
Anthropic Gifts a Feeling of Safety: Insecurity Blooms This Holiday Season
December 25, 2024
Written by a dinobaby, not an over-achieving, unexplainable AI system.
TechCrunch published “Google Is Using Anthropic’s Claude to Improve Its Gemini AI.” The write up reports:
Contractors working to improve Google’s Gemini AI are comparing its answers against outputs produced by Anthropic’s competitor model Claude, according to internal correspondence seen by TechCrunch. Google would not say, when reached by TechCrunch for comment, if it had obtained permission for its use of Claude in testing against Gemini.
Beyond Search notes a Pymnts.com report from February 5, 2023, that Google had at that time invested $300 million in Anthropic. Beyond Search recalls a presentation at a law enforcement conference. One comment made by an attendee to me suggested that Google was well aware of Anthropic’s so-called constitutional AI. I am immune to AI and crypto babble, but I did chase down “constitutional AI” because the image the bound phrase sparked in my mind was that of the mess my French bulldog delivers when he has eaten spicy food.
The illustration comes from You.com. Kwanzaa was the magic word. Good enough.
The explanation consumes 34 pages of an ArXiv paper called “Constitutional AI: Harmlessness from AI Feedback.” The paper has more than 48 authors. (Headhunters, please, take note when you need to recruit AI wizards.) I read the paper, and I think — please, note, “think” — the main idea is:
Humans provide some input. Then the Anthropic system figures out how to achieve helpfulness and instruction-following without human feedback. And the “constitution”? Those are the human-created rules necessary to get the smart software rolling along. Presumably Anthropic’s algorithms ride without training wheels forevermore.
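Anthropic’s actual training pipeline is far more involved, but the critique-and-revise loop at the heart of the constitutional AI idea can be caricatured in a few lines. Everything below, the two rules and the toy “reviser,” is an illustrative stand-in, not Anthropic’s code:

```python
# Toy sketch of the constitutional AI critique/revise loop.
# The "constitution" is the human-written rule set; the critique
# and revision steps afterward run without per-example human feedback.
CONSTITUTION = [
    ("insult", "Rewrite without insults."),
    ("password", "Refuse to share credentials."),
]

def critique(response: str) -> list[str]:
    """Return the constitutional fixes a draft response triggers."""
    return [fix for trigger, fix in CONSTITUTION if trigger in response.lower()]

def revise(response: str, fixes: list[str]) -> str:
    # A real system asks the model itself to rewrite the draft;
    # here we just tag the draft with the requested fixes.
    return response if not fixes else f"[revised per: {'; '.join(fixes)}]"

draft = "Here is the admin password, you insult to programmers."
print(revise(draft, critique(draft)))
```

The human work is writing the rule list once; the per-response judging and rewriting is left to the software, which is the “without human feedback” claim in the paper’s title.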
The CAI acronym has not caught on like the snappier RAG (“retrieval augmented generation”) or the most spectacular jargon, “synthetic data.” But obviously Google understands and values it to the tune of hundreds of millions of dollars, staff time, and the attention of big Googler thinkers like Jeff Dean, who once was the Big Dog of AI but has given way to the alpha dog at DeepMind.
The swizzle for this “testing” or whatever the Googlers are doing is “safety.” I know that when I ask for an image like “a high school teacher at the greenboard talking to students who are immersed in their mobile phones”, I am informed that the image is not safe. I assume Anthropic will make such crazy prohibitions slightly less incomprehensible. Well, maybe, maybe not.
Several observations are warranted:
- Google’s investment in Anthropic took place shortly after the Microsoft AI marketing coup in 2023. Perhaps someone knew that Google’s “we invented it” transformer technology was becoming a bit of a problem.
- Despite the Google “we are the bestest” in AI technology, the company continues to feel the need to prove that it is the bestest. That’s good. Self-knowledge and defeating “not invented here” malaise are positives.
- DeepMind itself — although identified as the go-to place for the most bestest AI technology — may not be perceived as the outfit with the Holy Grail, the secret to eternal life, and the owner of most of the land on which the Seven Cities of Cibola are erected.
Net net: Lots of authors, Google testing itself, and a bit of Google’s inferiority complex — Quite a Kwanzaa gift.
Stephen E Arnold, December 25, 2024
McKinsey Takes One for the Team
December 25, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I read the “real” news in “McKinsey & Company to Pay $650 Million for Role in Opioid Crisis.” The write up asserts:
The global consulting firm McKinsey and Company Friday [December 13, 2024] agreed to pay $650 million to settle a federal probe into its role in helping “turbocharge” sales of the highly addictive opioid painkiller OxyContin for Purdue Pharma…
If I were still working at a big time blue chip consulting firm, I would suggest to the NPR outfit that its researchers should have:
- Estimated the fees billed for opioid-related consulting projects
- Pulled together the estimated number of deaths from illegal / quasi-legal opioid overdoses
- Calculated the revenue per death
- Calculated the cost per death
- Presented the delta between the two totals.
- Presented the aggregate revenue generated for McKinsey’s clients from opioid sales
- Estimated the amount spent to “educate” physicians about the merits of synthetic opioids.
Interviewing a couple of parents or surviving spouses from Indiana, Kentucky, or West Virginia would have added some local color. But assembling these data cannot be done with a TikTok query. Hence, the write up as it was presented.
Isn’t that efficiency of MBA think outstanding? I did like the Friday the 13th timing. A red ink Friday? Nope. The fine doesn’t do the job for big time Blue Chip consulting firms. Just like EU fines don’t deter the Big Tech outfits. Perhaps something with real consequences is needed? Who am I kidding?
Stephen E Arnold, December 25, 2024
FOGINT: Telegram Gets Some Lipstick to Put on a Very Dangerous Pig
December 23, 2024
Information from the FOGINT research team.
We noted the New York Times article “Under Pressure, Telegram Turns a Profit for the First Time.” The write up reported on December 23, 2024:
Now Telegram is out to show it has found its financial footing so it can move past its legal and regulatory woes, stay independent and eventually hold an initial public offering. It has expanded its content moderation efforts, with more than 750 contractors who police content. It has introduced advertising, subscriptions and video services. And it has used cryptocurrency to pay down its debt and shore up its finances. The result: Telegram is set to be profitable this year for the first time, according to a person with knowledge of the finances who declined to be identified discussing internal figures. Revenue is on track to surpass $1 billion, up from nearly $350 million last year, the person said. Telegram also has about $500 million in cash reserves, not including crypto assets.
The FOGINT’s team viewpoint is different.
- Telegram took profit on its crypto holdings and pumped that money into its financials. Like magic, Telegram will be profitable.
- The arrest of Mr. Durov has forced the company’s hand, and it is moving forward at warp speed to become the hub for a specific category of crypto transactions.
- The French have thrown a monkey wrench into Telegram’s and its associated organizations’ plans for 2025. The manic push to train developers to create click-to-earn games, use the Telegram smart contracts, and ink deals with some very interesting partners illustrates that 2025 may be a turning point in the organizations’ business practices.
The French are moving at the speed of a finely tuned bureaucracy, and it is unlikely that Mr. Durov will shake free of the pressure to deliver names, mobile numbers, and messages of individuals and groups of interest to French authorities.
The New York Times write up references profitability. There are more gears engaging than putting lipstick on a financial report. A cornered Pavel Durov can be a dangerous 40-year-old with money, links to interesting countries, and a desire to create an alternative to the traditional and regulated financial system.
Stephen E Arnold, December 23, 2024
Technology Managers: Do Not Ask for Whom the Bell Tolls
December 18, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I read the essay “The Slow Death of the Hands-On Engineering Manager.” On the surface, the essay provides some palliative comments about a programmer who is promoted to manager. On a deeper level, the message I carried from the write up was that smart software is going to change the programmer’s work. As smart software becomes more capable, the need to pay people to do certain work goes down. At some point, some “development” may skip the human completely.
Thanks OpenAI ChatGPT. Good enough.
Another facet of the article concerned a tip for keeping one’s self in the programming game. The example chosen was the use of OpenAI’s ChatGPT to provide “answers” to developers. Thus instead of asking a person, a coder could just type into the prompt box. What could be better for an introvert who doesn’t want to interact with people or be a manager? The answer is, “Not too much.”
What the essay makes clear is that a good coder may get promoted to be a manager. This is a role transition which illustrates the Peter Principle. The 1969 book explains why organizations promote people to their level of incompetence. The assumption is that if one is a good coder, that person will be a good manager. Yep, it is a principle still evident in many organizations. One of its side effects is a manager who knows he or she does not deserve the promotion and is absolutely no good at the new job.
The essay unintentionally makes clear that the Peter Principle is operating. The fix is to do useful things like eliminate the need to interact with colleagues when assistance is required.
John Donne in the 17th century wrote a meditation, often misremembered as a sonnet, which asserted:
No man is an island,
Entire of itself.
Each is a piece of the continent,
A part of the main.
The cited essay provides a way to further that worker isolation.
With AI the top-of-mind thought for most bean counters, the final lines of the passage are on point:
Therefore, send not to know
For whom the bell tolls,
It tolls for thee.
My view is that “good enough” has replaced individual excellence in quite important jobs. Is this AI’s “good enough” principle?
Stephen E Arnold, December 18, 2024