FOGINT: France Gears Up for More Encrypted Message Access
March 12, 2025
Yep, another dinobaby original.
Buoyed by the success of the Pavel Durov litigation, France appears to be getting ready to pursue Signal, the Zuck WhatsApp, and the Switzerland-based Proton Mail. The actions seem to lie in the future, but those familiar with the mechanisms of French investigators may predict that information gathering began years ago. With ample documentation in hand, French legislators seem ready to require Pavel-ovian responses to requests for user data.
“France Pushes for Law Enforcement Access to Signal, WhatsApp, and Encrypted Email” reports:
An amendment to France’s proposed “Narcotraffic” bill, which is passing through the National Assembly in the French Parliament, will require tech companies to hand over decrypted chat messages of suspected criminals within 72 hours. The law, which aims to provide French law enforcement with stronger powers to combat drug trafficking, has raised concerns among tech companies and civil society groups that it will lead to the creation of “backdoors” in encrypted services that will be exploited by cyber criminals and hostile nation-states. Individuals that fail to comply face fines of €1.5m while companies risk fines of up to 2% of their annual world turnover if they fail to hand over encrypted communications demanded by French law enforcement.
The practical implications of these proposals are twofold. First, the proposed legislation alerts the identified firms that France is going to take action; the services know what is coming. Formerly recalcitrant companies that shift to proactive cooperation will probably benefit, much to the delight of French investigators. Mr. Durov has learned that cooperation makes it possible for him to envision a future that does not include a stay in the overcrowded and dangerous prison just 16 kilometers from his hotel in Paris. The second is to keep up the momentum. Other countries have been indifferent to or unwilling to take on certain firms which have blown off legitimate requests for information about alleged bad actors. The French can be quite stubborn and have a bureaucracy that almost guarantees a less than amusing experience for the American outfits. The Swiss have experience in dealing with France, and I anticipate a quieter approach to Proton Mail.
The write up includes this statement:
opponents of the French law argue that breaking an encryption application that is allegedly designed for use by criminals is very different from breaking the encryption of chat apps, such as WhatsApp and Signal, and encrypted emails used by billions of people for non-criminal communications. “We do not see any evidence that the French proposal is necessary or proportional. To the contrary, any backdoor will sooner or later be exploited…
I think the statement is accurate. Information has a tendency to leak. But consider the impact on Telegram. That entity is in danger of becoming irrelevant because of France’s direct action against the Teflon-coated Russian Pavel Durov. Cooperation is not enough. The French action seems to put Telegram into a credibility hole, and it is not clear if the organization’s overblown crypto push can stave off user defection and slowing user growth.
Will the French law conflict with European Union and other EU states’ laws? Probably. My view is that the French will adopt the position, “C’est dommage en effet.” The Telegram “problem” is not completely resolved, but France is willing to do what other countries won’t. Is the French Foreign Legion operating in Ukraine? The French won’t say, but some of those Telegram messages are interesting. Oui, c’est dommage. Tip: Don’t fool around with a group of French Foreign Legion fellows, even if you are wearing an EU flag T-shirt and carrying a volume of EU laws, rules, regulations, and policies.
How will this play out? How would I know? I work in an underground office in rural Kentucky. I don’t think our local grocery store carries French cheese. However, I can offer a few tips to executives of the firms identified in the article:
- Do not go to France.
- If you do go to France, avoid interactions with government officials.
- If you must interact with government officials, make sure you have a French avocat or avocate lined up.
France seems so wonderful; it has great food; it has roads without billboards; and it has a penchant for direct action. Examples range from French Guiana to Western Africa. No, the “real” news doesn’t cover these activities. And executives of Signal and the Zuckbook may want to consider their travel plans. Avoid the issues Pavel Durov faces and may have resolved this calendar year. Note the word “may.”
Stephen E Arnold, March 12, 2025
AI Hiring Spoofs: A How To
March 12, 2025
Be aware. A dinobaby wrote this essay. No smart software involved.
The late Robert Steele, one of the first government professionals to hop on the open source information bandwagon, and I worked together for many years. In one of our conversations in the 1980s, Robert explained how he used a fake persona to recruit people to assist him in his work on a US government project. He explained that job interviews were an outstanding source of information about a company or an organization.
“AI Fakers Exposed in Tech Dev Recruitment: Postmortem” is a modern spin on Robert’s approach. Instead of newspaper ads and telephone calls, today’s approach uses AI and video conferencing. The article presents a recipe for a technique that was not widely discussed in the 1980s. Robert learned his approach from colleagues in the US government.
The write up explains that a company wants to hire a professional. Everything hums along and then:
…you discover that two imposters hiding behind deepfake avatars almost succeeded in tricking your startup into hiring them. This may sound like the stuff of fiction, but it really did happen to a startup called Vidoc Security, recently. Fortunately, they caught the AI impostors – and the second time it happened they got video evidence.
The cited article explains how to set up and operate this type of deep fake play. I am not going to present the “how to” in this blog post. If you want the details, head to the original. The penetration tactic requires Microsoft LinkedIn, which gives that platform another use case for certain individuals gathering intelligence.
Several observations:
- Keep in mind that the method works for fake employers looking for “real” employees in order to obtain information from job candidates. (Some candidates are blissfully unaware that the job is a front for obtaining data about an alleged former employer.)
- The best way to avoid AI-centric scams is to do the work the old-fashioned way. Smart software opens up a wealth of opportunities to obtain allegedly actionable information. Unfortunately the old-fashioned way is slow, expensive, and prone to social engineering tactics.
- As bad actors take advantage of the increased capabilities of smart software, humans who are not actively involved with AI do not adapt quickly. Personnel-related matters are a pain point for many organizations.
To sum up, AI is a tool. It can be used in interesting ways. Is the contractor you hired on Fiverr or via some online service a real person? Is the job a real job or a way to obtain information via an AI that is a wonderful conversationalist? One final point: The target referenced in the write up was a cyber security outfit. Did the early alert, proactive, AI infused system prevent penetration?
Nope.
Stephen E Arnold, March 12, 2025
Survey: Kids and AI Tools
March 12, 2025
Our youngest children are growing up alongside AI. Or, perhaps, it would be more accurate to say increasingly intertwined with it. Axios tells us, "Study Zeroes in on AI’s Youngest Users." Writer Megan Morrone cites a recent survey from Common Sense Media that examined AI use by children under 8 years old. The researchers surveyed 1,578 parents last August. We learn:
"Even the youngest of children are experimenting with a rapidly changing technology that could reshape their learning and critical thinking skills in unknown ways. By the numbers: One in four parents of kids ages 0-8 told Common Sense their children are learning critical thinking skills from using AI.
- 39% of parents said their kids use AI to ‘learn about school-related material,’ while only 8% said they use AI to ‘learn about AI.’
- For older children (ages 5-8) nearly 40% of parents said their child has used an app or a device with AI to learn.
- 24% of children use AI for ‘creative content,’ like writing short stories or making art, according to their parents."
It is too soon to know the long-term effects of growing up using AI tools. These kids are effectively subjects in a huge experiment. However, we already see indications that reliance on AI is bad for critical thinking skills. And that research is on adults, never mind kids whose base neural pathways are just forming. Parents, however, seem unconcerned. Morrone reports:
- More than half (61%) of parents of kids ages 0-8 said their kids’ use of AI had no impact on their critical thinking skills.
- 60% said there was no impact on their child’s well-being.
- 20% said the impact on their child’s creativity was ‘mostly positive.’
Are these parents in denial? They cannot just be happy to offload parenting to algorithms. Right? Perhaps they just need more information. Morrone points us to EqualAI’s new AI Literacy Initiative but, again, that resource is focused on adults. The write-up emphasizes the stakes of this great experiment on our children:
‘Our youngest children are on the front lines of an unprecedented digital transformation,’ said James P. Steyer, founder and CEO of Common Sense.
‘Addressing the impact of AI on the next generation is one of the most pressing issues of our time,’ Miriam Vogel, CEO of EqualAI, told Axios in an email. ‘Yet we are insufficiently developing effective approaches to equip young people for a world where they are both using and profoundly affected by AI.’
What does this all mean for society’s future? Stay tuned.
Cynthia Murrell, March 12, 2025
AI and Jobs: Tell These Folks AI Will Not Impact Their Work
March 12, 2025
The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.
I have a friend who does some translation work. She’s chugging along because of her reputation for excellent work. However, one of the people who worked with me on a project requiring Russian language skills has not worked out. The young person lacks the reputation and the contacts with a base of clients. The older person can be as busy as she wants to be.
What’s the future of translating from one language to another for money? For the established person, smart software appears to have had zero impact. The younger person seems to be finding that smart software is getting the translation work.
I will offer my take in a moment. First, let’s look at “Turkey’s Translators Are Training the AI Tools That Will Replace Them.”
I noted this statement in the cited article:
Turkey’s sophisticated translators are moonlighting as trainers of artificial intelligence models, even as their profession shrinks with the rise of machine translations. As the models improve, these training jobs, too, may disappear.
What’s interesting is that the skilled translators are providing information to AI models. These models are definitely going to replace the humans. The trajectory is easy to project. Machines will work faster and cheaper. The humans will abandon the discipline. Then prices will go up. Those requiring translations will find themselves spending more and having fewer options. Eventually the old hands will wither. Excellent translations which capture nuance will become a type of endangered species. The snow leopard of knowledge work is with us.
I noted this statement in the article:
Book publishing, too, is transforming. Turkish publisher Dedalus announced in 2023 that it had machine-translated nine books. In 2022, Agora Books, helmed by translator Osman Akınhay, released a Turkish edition of Jean-Dominique Brierre’s Milan Kundera, une vie d’écrivain, a biography of the Czech-French novelist Milan Kundera. Akınhay, who does not know French, used Google Translate to help him in the translation, to much criticism from the industry.
What’s this mean?
- Jobs will be lost and the professionals with specialist skills are going to be the buggy whip makers in a world of automobiles
- The downstream impact of smart software is going to kill off companies. The Chegg legal matter illustrates how a monopoly can mindlessly erode a company. This is like a speeding semi-truck smashing love bugs on a Florida highway. The bugs don’t know what hit them, and the semi-truck is unaware and the driver is uncaring. Dead bugs? So what? See “Chegg Sues Google for Hurting Traffic with AI As It Considers Strategic Alternatives.”
- Data from different sources suggesting that AI will just create jobs is either misleading, public relations, or dead wrong. The Bureau of Labor Statistics data are spawning articles like “AI and Its Impact on Software Development Jobs.”
Net net: What’s emerging is one of those classic failure scenarios. Nothing big seems to go wrong. Then a collapse occurs. That’s what’s beginning to appear. Just little changes. Heed the signals? Of course not. I can hear someone saying, “That won’t happen to me.” Of course not but cheaper and faster are good enough at this time.
Stephen E Arnold, March 12, 2025
Microsoft: Marketing Is One Thing, a Cost Black Hole Is Quite Another
March 11, 2025
Yep, another dinobaby original.
I read “Microsoft Cuts Data Centre Plans and Hikes Prices in Push to Make Users Carry AI Cost.” The headline meant one thing to me: The black hole of AI costs must be capped. For my part, I try to avoid MSFT AI. After testing the Redmoanians’ smart software for months, I decided, “Nope.”
The write up says:
Last week, Microsoft unceremoniously pulled back on some planned data centre leases. The move came after the company increased subscription prices for its flagship 365 software by up to 45%, and quietly released an ad-supported version of some products. The tech giant’s CEO, Satya Nadella, also recently suggested AI has so far not produced much value.
No kidding. I won’t go into the annoyances. AI in Notepad? Yeah, great thinking like that which delivered Bob to users who loved Clippy.
The essay notes:
Having sunk billions into generative AI, Microsoft is trying to find the business model that will make the technology profitable.
Maybe someday, but that day is not today or tomorrow. If anything, Microsoft is struggling with old-timey software as well, as The Register, a UK online publication, has reported.
Back to AI. The AI financial black hole exists, and it may not be easy to resolve. What’s the fix? Here’s the Microsoft plan as of March 2025:
As AI infrastructure costs rise and model development evolves, shifting the costs to consumers becomes an appealing strategy for AI companies. While big enterprises such as government departments and universities may manage these costs, many small businesses and individual consumers may struggle.
Several observations are warranted:
- What happens if Microsoft cannot get consumers to pay the AI bills?
- What happens if people like this old dinobaby don’t want smart software and just shift to work flows without Microsoft products?
- What happens if the marvels of the Tensor and of OpenAI’s and others’ implementations continue to hallucinate, creating more headaches than the methods resolve?
Net net: Marketing may have gotten ahead of reality, but the black hole of costs is very real and not a hallucination. Can Microsoft escape a black hole like this one?
Stephen E Arnold, March 11, 2025
Microsoft Sends a Signal: AI, AIn’t Working
March 11, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
The problems with Microsoft’s AI push were evident from its start in 2023. The company thought it had identified the next big thing and had the big fish on the line. Now the work was easy. Just reel in the dough.
Has it worked out for Microsoft? We know that big companies often have difficulty innovating. The enervating white board sessions which seek to answer the question, “Do we build it or buy it?” usually give way to: [a] Let’s lock it up somehow or [b] Let’s steal it because it won’t take our folks too long to knock out a me-too.
Microsoft sent a fairly loud beep-beep-beep when it began to cut back on its dependence on OpenAI. Not long ago, Microsoft trimmed some of its crazy spending for AI. Now we have the allegedly accurate information in “Microsoft Is Reportedly Plotting a Future without OpenAI.”
The write up states:
Microsoft has poured over $13 billion into the AI firm since 2019, but now it wants more control over its own models and costs. Simple enough in theory—build in-house alternatives, cut expenses, and call the shots.
Is this a surprise? No, I think it is just one more beep added to the already emitted beep-beep-beep.
Here’s my take:
- Narrowly focused smart software adds some useful capabilities to what I would call workflow enhancement. The narrow focus for an AI system reduces some of the wonkiness of the output. Therefore, certain tasks benefit; for example, grinding through data for a chemistry application or providing a call center operation with a good enough solution to rising costs. Broad use cases are more problematic.
- Humans who rely on information for a living don’t want to be caught out. This means that using smart software is an assist or a supplement. This is like an older person using a cane when walking on a senior citizens adventure tour.
- Productizing a broad use case for smart software is expensive and prone to the sort of failure rate associated with a new product or service. A good example is a self-driving auto with collision avoidance. Would you stand in front of such a vehicle confident in the smart software’s ability to not run over you? I wouldn’t.
What’s happening at Microsoft is a reasonably predictable and understandable approach. The company wants to hedge its bets since big bucks are flowing out, not in. The firm thinks it has enough smarts to do a better job even though in my opinion this is unlikely. Remember Bob, Clippy, and Windows updates? I do.
Also, small teams believe their approach will be a winner. Big companies believe their people can row that boat faster than anyone else. I know from personal experience and observation that this is not true. But the appearance of effort and the illusion of high value work encourages the approach.
Plus, the idea that a “leadership team” can manage innovation is a powerful one. Microsoft’s leadership believes in its leadership. That’s why the company is a leader. (I love this logic.)
Net net: My hunch is that Microsoft’s AI push is a disappointment. Now the company can shift into SWAT team mode and overwhelm the problem: AI that does not pay for itself.
Will this approach work? Nope, the outcome will be good enough. That is a bit more than one can say about Apple Intelligence: seriously out of step with the Softies.
Stephen E Arnold, March 11, 2025
Automobile Trivia: The Tesla Cybertruck and the Ford Pinto
March 11, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
I don’t cover the auto industry. However, this article caught my eye: “Cybertruck Goes to Mardi Gras Parade, Gets Bombarded by Trash and Flees in Shame: That’s Gotta Hurt.”
The write up reports:
With a whopping seven recalls in just over a year — and a fire fatality rate exceeding the infamous Ford Pinto — it’s never been a particularly great time to be a Cybertruck owner. But now, thanks to the political meddling of billionaire Tesla owner Elon Musk, it might be worse than ever. That’s what some Cybertruck drivers discovered firsthand at a Lundi Gras parade on Monday — the “Fat Monday” preamble to the famed Mardi Gras — when their hulking electric tanks were endlessly mocked and pelted with trash by revelers.
I did not know that the Tesla vehicle engaged in fire events at a rate greater than the famous Ford Pinto. I know the Pinto well. I bought one for a very low price. I drove it for about a year and sold it for a little more than I paid for it. I think I spent more time looking in my rear view mirrors than looking down the road. The Pinto, if struck from behind, would burn. I think the gas tank was made of some flimsy material. A bump in the back would cause the tank to leak and sometimes the vehicle would burst into flame. A couple of unlucky Pinto drivers suffered burns and some went to the big Ford dealership in the great beyond. I am not sure if the warranty was upheld.
I think this is interesting automotive trivia; for example, “What vehicle has a fire fatality rate exceeding the Ford Pinto?” The answer as I now know is the lovely and graceful Tesla Cybertruck.
The write up (which may be from The Byte or from Futurism) says:
According to a post on X-formerly-Twitter, at least one Cybertruck had its “bulletproof window” shattered by plastic beads before tucking tail and fleeing the parade under police protection. At least three Cybertrucks were reportedly there as part of a coordinated effort by an out-of-state Cybertruck Club to ferry parade marshals down the route. One marshal posted about their experience riding in the EV on Reddit, saying it was “boos and attacks from start to evacuation.”
I got a kick (not a recall or a fire) out of the write up and the plastic bead reference. Not as slick as “bouffon sous kétamine,” but darned good. And, no, I am not going to buy a Cybertruck. One year in Pinto fear was quite enough.
Now a test question: Which is more likely to explode? [a] a Space X rocket, [b] a Pinto, or [c] a Cybertruck?
Stephen E Arnold, March 11, 2025
AI and Two Villages: A Challenge in Some Large Countries
March 10, 2025
This blog post is the work of a humanoid dinobaby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80 year old, why don’t you? We used AI to translate the original Russian into semi-English and to create the illustration. Hasta la vista, human Russian translator and human artist. That’s how AI works in real life.
My team and I are wrapping up our Telegram monograph. As part of the drill, we have been monitoring some information sources in Russia. We spotted the essay “AI and Capitalism.” (Note: I am not sure the link will resolve, but you can locate it via Yandex by searching for PCNews. I apologize, but some content is tricky to locate using consumer tools.)
The “white-collar village” and the “blue-collar village” generated by You.com. Good enough.
I mention the article because it makes clear how smart software is affecting one technical professional working in a Russian government-owned telecommunications company. The author’s day-to-day work requires programming. One description of the value of smart software appears in this passage:
I work as a manager in a telecom and since last year I have been actively modifying the product line, adding AI components to each product. And I am not the only one there – the movement is going on in principle throughout the IT industry, of which we are a part… Where we have seen the payoff is replacing tree navigation with a text search bar, helping to generate text on a specific topic taking into account the concept cloud of the subject area, aggregating information from sources with different data structures, extracting a sequence of semantic actions of a person while working on a laptop, simultaneous translation with imitation of any voice, etc. The goal of all these events, as before, is to increase labor productivity. Previously, a person dug with his hands, then with a shovel, now with an excavator. Indeed, now it’s easier to ask the model for an example of code than to spend hours searching on Stack Overflow. This seriously speeds things up.
The author then identifies three consequences of the use of AI:
- Training will change because “you will need to retrain for another narrow specialty several times”
- Education will become more expensive, but who will pay? Possibly as important: who will be able to learn?
- Society will change, which is a way of saying “social turmoil” lies ahead, in my opinion.
Here’s an okay translation of the essay’s final paragraph:
…in the medium term, the target architecture of our society will inevitably see a critical stratification into workers and educated people. Blue and white collar castes. The fence between them will be so high that films about a possible future will become a fairly accurate forecast. I really want to end up in a white-collar village in the role of a white collar worker. Scary.
What’s interesting about this person’s point of view is that AI is already changing work in the Russian Federation. The challenge will be that an allegedly “flat” social structure will be split into those who can implement smart software and those who cannot. The chatter about smart software is usually focused on which company will find a way to generate revenue from the massive investments required to create solutions that consumers and companies will buy.
What gets less attention is the apparent impact of the technology on countries which purport to make life “better” via a different system. If the author is correct, some large nation states are likely to face some significant social challenges. Not everyone can work in “a white-collar village.”
Stephen E Arnold, March 10, 2025
A French Outfit Points Out Some Issues with Starlink-Type Companies
March 10, 2025
Another one from the dinobaby. No smart software. I spotted a story on the Thales Web site, but when I went back to check a detail, it had disappeared. After a bit of poking I found a recycled version called “Thales Warns Governments Over Reliance on Starlink-Type Systems.” The story must be accurate because it is from the “real” news outfit that wants my belief in their assertion of trust. Well, what do you know about trust?
Thales, as none of the people in Harrod’s Creek knows, is a French defence, intelligence, and go-to military hardware type of outfit. Thales and Dassault Systèmes are among the world leaders in a number of cutting-edge technology sectors. As a person who did some small work in France, I heard the Thales name mentioned a number of times. Thales has a core competency in electronics, military communications, and related fields.
The cited article reports:
Thales CEO Patrice Caine questioned the business model of Starlink, which he said involved frequent renewal of satellites and question marks over profitability. Without further naming Starlink, he went on to describe risks of relying on outside services for government links. “Government actors need reliability, visibility and stability,” Caine told reporters. “A player that – as we have seen from time to time – mixes up economic rationale and political motivation is not the kind that would reassure certain clients.”
I am certainly no expert in the lingo of a native French speaker using English words. I do know that the French language has a number of nuances which are difficult for a dinobaby like me to understand without saying, “Pourriez-vous répéter, s’il vous plaît?”
I noticed several things; specifically:
- The phrase “satellite renewal.” The idea is that the useful life of a Starlink-type device is shorter than that of some other technologies, such as those from Thales-type companies. Under the surface is the French attitude toward “fast fashion”: cheap products are wasteful; well-made products, like a well-made suit, last a long time. Longer than a black baseball cap is how I interpreted the reference to “renewal.” I may be wrong, but this is a quite serious point underscoring the issue of engineering excellence.
- The reference to “profitability” seems to echo news reports that Starlink itself may be on the receiving end of preferential contract awards. If those types of cozy deals go away, will the Starlink-type business generate sufficient revenue to sustain innovation, higher quality, and longer life spans? Based on my limited knowledge of things French, this is a fairly direct way of pointing out the weak business model of the Starlink-type of service.
- The use of the words “reliability” and “stability” struck me as directing two criticisms at the Starlink-type of company. On one level the issue of corporate stability is obvious. However, “stability” applies to engineering methods as well as mental set up. Henri Bergson observed, “Think like a man of action, act like a man of thought.” I am not sure what M. Bergson would have thought about a professional wielding a chainsaw during a formal presentation.
- The direct reference to “mixing up” reiterates the mental stability and corporate stability referents. But the killer comment is the merging of “economic rationale and political motivation,” which flashes bright warning lights to some French professionals and would probably resonate with other Europeans. I wonder what Austrian government officials thought about the chainsaw performance.
Net net: Some of the actions of a Starlink-type of company have been disruptive. In game theory, “keep people guessing” is a proven tactic. Will it work in France? Unlikely. Chainsaws will not be permitted in most meetings with Thales or French agencies. The baseball cap? Probably not.
Stephen E Arnold, March 10, 2025
From $20 a Month to $20K a Month. Great Idea… or Not?
March 10, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
OpenAI was one of many smart software companies. If you meet the people on my team, you will learn that I dismissed most of the outfits as search-and-retrieval outfits looking for an edge. Search definitely needs an edge, but I was not confident that predictive generation of an “answer” was a solution. It was a nifty party trick, but then the money started flowing. In January 2023, Microsoft put Google’s cute sharp teeth on edge. Suddenly AI or smart software was the next big thing. The virtual reality thing did not ring the bell. The increasingly weird fiddling with mobile phones did not get the brass ring. And the idea of Apple becoming the next big thing in chips has left everyone confused. My M1 devices work pretty well, and unless I look at the label on the gizmos I cannot tell an M1 from an M3. Do I care? Nope.
But OpenAI became news. It squabbled with the mastermind of “renewable” satellites, definitely weird trucks, and digging tunnels in Las Vegas. (Yeah, nice idea, just not for anyone who does not want to get stalled in traffic.) When ChatGPT became available, one of those laboring in my digital vineyards signed me up. I fiddled with it and decided that I would run some of my research through the system. I learned that my research was not in the OpenAI “system.” I had it do some images. Those sucked. I will cancel this week.
I put in my AI folder this article “OpenAI Is Getting Ready to Release PhD Level AI Agents.” I was engaging in some winnowing and I scanned it. In early February 2025, Digital Marketing News wrote about PhD level agents. I am not a PhD. I quit before I finished my dissertation to work in the really socially conscious nuclear unit of that lovable outfit Halliburton. You know the company. That’s the one that charged about $950.00 for a gallon of fuel during the Iraq war. You will also associate Dick Cheney, a fun person, with the company. So no PhD for me.
I was skeptical because of the dismal performance of ChatGPT 4, oh, whatever, trying to come up with the information I have assembled for my new book for law enforcement professionals. Then I read a Slashdot post with the title “OpenAI Plots Charging $20,000 a Month For PhD-Level Agents” shared from a publication I don’t know much about. I think it is like 404 or a for-fee Substack. The publication has great content, and you have to pay for it.
Be that as it may, the Slashdot post reports or recycles information suggesting that the fee for a PhD level version of OpenAI’s smart software will be a modest $20,000 a month. I think the service one of my team registered for costs $20.00 per month. What’s with the 20s? Twenty is a pronic number; that is, it can be slapped on a high school math test so students can say it is the product of two consecutive integers. In college I knew a person who was a numerologist. I recall that the meaning of 20 was cooperation.
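As a playful aside, the pronic claim is easy to verify. This little sketch (mine, not from the Slashdot post) checks whether a number is the product of two consecutive integers:

```python
import math

def is_pronic(n: int) -> bool:
    # A pronic number equals k * (k + 1) for some non-negative integer k.
    if n < 0:
        return False
    k = math.isqrt(n)  # floor of the square root of n
    return k * (k + 1) == n

# The first few pronic numbers: 0, 2, 6, 12, 20, 30, 42, ...
print([n for n in range(50) if is_pronic(n)])
```

Twenty qualifies because 4 × 5 = 20. The $20 and $20,000 price points are presumably marketing round numbers, not number theory.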
The interesting part of the Slashdot post was the comments. I scanned them and concluded that some of the commenters saw the high-end service killing jobs for high-end programmers and consultants. Yeah, maybe. That a code base which struggles with information related to a widely used messaging application is suddenly going to replicate the information I have obtained from my sources in Eastern Europe seems a bit of a stretch. Heck, ChatGPT could barely do English. Russian? Not a chance, but who knows. And for $200,000 it is not likely this dinobaby will take what seems like unappetizing bait.
One commenter allegedly named TheGreatEmu said:
I was about to make a similar comment, but the cost still doesn’t add up. I’m at a national lab with generally much higher overheads than most places, and a postdoc runs us $160k/year fully burdened. And of course the AI sure as h#ll can’t connect cables, turn knobs, solder, titrate, use a drill press, clean, chat with the machinist who doesn’t use email, sneaker net data out of the air-gapped lab, or understand napkin drawings over beer where all real science gets done. Or do anything useful with information that isn’t already present in the training data, and if you’re not pushing past existing knowledge boundaries, you’re not really doing science are you?
My hunch is that this is a PR or marketing play. Let’s face it. With Microsoft cutting off data center builds and Google floundering with cheese, the smart software revolution is muddling forward. The wins are targeted applications in quite specific domains. Yes, gentle reader, that’s why people pay for Chemical Abstracts online. The information is not on the public Internet. The American Chemical Society has information the super capable AI outfits have not figured out how to obtain, and the non-computational, organic, or inorganic chemist is unlikely to rely on a somewhat volatile outfit for it. Get something wrong in a nuclear lab and smart software won’t be too helpful if it hallucinates.
Net net: Is everything marketing? At age 80, my answer is, “Absolutely.” Sam AI-Thinks in terms of trillions. Is $20 trillion the next pricing level?
Stephen E Arnold, March 10, 2025