AI and Two Villages: A Challenge in Some Large Countries
March 10, 2025
This blog post is the work of a humanoid dino baby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you? We used AI to translate the original Russian into semi-English and to create the illustration. Hasta la vista to the human Russian translator and the human artist. That’s how AI works in real life.
My team and I are wrapping up our Telegram monograph. As part of the drill, we have been monitoring some information sources in Russia. We spotted the essay “AI and Capitalism.” (Note: I am not sure the link will resolve, but you can locate it via Yandex by searching for PCNews. I apologize, but some content is tricky to locate using consumer tools.)
The “white-collar village” and the “blue-collar village” generated by You.com. Good enough.
I mention the article because it makes clear how smart software is affecting one technical professional working in a Russian government-owned telecommunications company. The author’s day-to-day work requires programming. One description of the value of smart software appears in this passage:
I work as a manager in a telecom and since last year I have been actively modifying the product line, adding AI components to each product. And I am not the only one there – the movement is going on in principle throughout the IT industry, of which we are a part… Where we have seen the payoff is replacing tree navigation with a text search bar, helping to generate text on a specific topic taking into account the concept cloud of the subject area, aggregating information from sources with different data structures, extracting a sequence of semantic actions of a person while working on a laptop, simultaneous translation with imitation of any voice, etc. The goal of all these events, as before, is to increase labor productivity. Previously, a person dug with his hands, then with a shovel, now with an excavator. Indeed, now it’s easier to ask the model for an example of code than to spend hours searching on Stack Overflow. This seriously speeds things up.
The author then identifies three consequences of the use of AI:
- Training will change because “you will need to retrain for another narrow specialty several times”
- Education will become more expensive, but who will pay? Possibly as important: who will be able to learn?
- Society will change, which is a way of saying “social turmoil” lies ahead, in my opinion.
Here’s an okay translation of the essay’s final paragraph:
…in the medium term, the target architecture of our society will inevitably see a critical stratification into workers and educated people. Blue and white collar castes. The fence between them will be so high that films about a possible future will become a fairly accurate forecast. I really want to end up in a white-collar village in the role of a white collar worker. Scary.
What’s interesting about this person’s point of view is that AI is already changing work in Russia. The challenge will be that an allegedly “flat” social structure will be split into those who can implement smart software and those who cannot. The chatter about smart software usually focuses on which company will find a way to generate revenue from the massive investments required to create solutions that consumers and companies will buy.
What gets less attention is the apparent impact of the technology on countries which purport to make life “better” via a different system. If the author is correct, some large nation states are likely to face some significant social challenges. Not everyone can work in “a white-collar village.”
Stephen E Arnold, March 10, 2025
A French Outfit Points Out Some Issues with Starlink-Type Companies
March 10, 2025
Another one from the dinobaby. No smart software. I spotted a story on the Thales Web site, but when I went back to check a detail, it had disappeared. After a bit of poking I found a recycled version called “Thales Warns Governments Over Reliance on Starlink-Type Systems.” The story must be accurate because it comes from a “real” news outfit that wants my belief in its assertion of trust. Well, what do you know about trust?
Thales, as none of the people in Harrod’s Creek knows, is a French defense, intelligence, and go-to military hardware type of outfit. Thales and Dassault Systèmes are among the world leaders in a number of cutting-edge technology sectors. As a person who did some small work in France, I heard the Thales name mentioned a number of times. Thales has a core competency in electronics, military communications, and related fields.
The cited article reports:
Thales CEO Patrice Caine questioned the business model of Starlink, which he said involved frequent renewal of satellites and question marks over profitability. Without further naming Starlink, he went on to describe risks of relying on outside services for government links. “Government actors need reliability, visibility and stability,” Caine told reporters. “A player that – as we have seen from time to time – mixes up economic rationale and political motivation is not the kind that would reassure certain clients.”
I am certainly no expert in the lingo of a native French speaker using English words. I do know that the French language has a number of nuances which are difficult for a dinobaby like me to understand without saying, “Pourriez-vous répéter, s’il vous plaît?”
I noticed several things; specifically:
- The phrase “satellite renewal.” The idea is that the useful life of a Starlink-type device is shorter than that of some other technologies, such as those from Thales-type companies. Under the surface is the French attitude toward “fast fashion.” The idea is that cheap products are wasteful; well-made products, like a well-made suit, last a long time. Longer than a black baseball cap is how I interpreted the reference to “renewal.” I may be wrong, but this is a quite serious point underscoring the issue of engineering excellence.
- The reference to “profitability” seems to echo news reports that Starlink itself may be on the receiving end of preferential contract awards. If those types of cozy deals go away, will the Starlink-type business generate sufficient revenue to sustain innovation, higher quality, and longer life spans? Based on my limited knowledge of things French, this is a fairly direct way of pointing out the weak business model of the Starlink-type of service.
- The use of the words “reliability” and “stability” struck me as directing two criticisms at the Starlink-type of company. On one level the issue of corporate stability is obvious. However, “stability” applies to engineering methods as well as mental setup. Henri Bergson observed, “Think like a man of action, act like a man of thought.” I am not sure what M. Bergson would have thought about a professional wielding a chainsaw during a formal presentation.
- The direct reference to “mixing up” reiterates the mental stability and corporate stability referents. But the killer comment, the merging of “economic rationale and political motivation,” flashes bright warning lights to some French professionals and would probably resonate with other Europeans. I wonder what Austrian government officials thought about the chainsaw performance.
Net net: Some of the actions of a Starlink-type of company have been disruptive. In game theory, “keep people guessing” is a proven tactic. Will it work in France? Unlikely. Chainsaws will not be permitted in most meetings with Thales or French agencies. The baseball cap? Probably not.
Stephen E Arnold, March 10, 2025
Attention, New MBAs in Finance: AI-gony Arrives
March 6, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
I did a couple of small jobs for a big Wall Street outfit years ago. I went to meetings, listened, and observed. To be frank, I did not do much work. There were three or four young, recent graduates of fancy schools. These individuals were similar to the colleagues I had at the big time consulting firm at which I worked earlier in my career.
Everyone was eager, and their Excel fevers were in full bloom: bright eyes, earnest expressions, and a gentle but persistent panting in these meetings. Wall Street and Wall Street-like firms in London, England, and Los Angeles, California, were quite similar. These churn outfits and deal makers shared DNA or some type of quantum entanglement.
These “analysts” or “associates” gathered data and pumped it into Excel spreadsheets set up by colleagues or technical specialists. Macros processed the data and spit out tables, charts, and graphs. These were written up as memos or reports for those with big sticks, the senior deciders.
My point is that the “work” was done by cannon fodder from well-known universities’ business or finance programs.
Well, bad news, future BMW buyers: an outfit called PublicView.ai may have curtailed your dreams of a six-figure bonus in January or whatever month is the big momma at your firm. You can take a look at example outputs and sign up free at https://www.publicview.ai/.
If the smart product works as advertised, a category of financial work is going to be reshaped. It is possible that fewer analyst jobs will become available as the gathering and importing are converted to automated workflows. The meetings and the panting will become fewer and farther between.
I don’t have data about how many worker bees power the Wall Street type outfits. I showed up, delivered information when queried, departed, and sent a bill for my time and travel. The financial hive and its quietly buzzing drones plugged away 10 or more hours a day, mostly six days a week.
The PublicView.ai FAQ page answers some basic questions; for example, “Can I perform quantitative analysis on the files?” The answer is:
Yes, you can ask Publicview to perform computations on the files using Python code. It can create graphs, charts, tables and more.
This is good news for the newly minted MBAs with programming skills. The bad news is that repeatable questions can be converted to workflows.
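What does a “repeatable question converted to a workflow” look like in practice? A minimal sketch; the revenue figures, column names, and the `summarize` function are hypothetical illustrations of the pattern, not PublicView.ai’s actual output or API:

```python
# A sketch of the repeatable analyst question that becomes a workflow.
# All figures are hypothetical ($M); the point is the pattern, not the data.
import statistics

revenue = {"Q1": 120.0, "Q2": 130.0, "Q3": 140.0, "Q4": 150.0}

def summarize(series):
    """Answer the recurring question: average, growth, best quarter."""
    values = list(series.values())
    growth = (values[-1] - values[0]) / values[0] * 100
    return {
        "average": round(statistics.mean(values), 1),
        "growth_pct": round(growth, 1),
        "best_quarter": max(series, key=series.get),
    }

print(summarize(revenue))
# {'average': 135.0, 'growth_pct': 25.0, 'best_quarter': 'Q4'}
```

Once the question is frozen into a function like this, the junior analyst who used to rebuild the spreadsheet each quarter is no longer part of the loop.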
Let’s assume this product is good enough. There will be no overnight change in the work for existing employees. But slowly the senior managers will get the bright idea of hiring MBAs with different skills, possibly on a contract basis. Then the work will begin to shift to software. At some point in the not-too-distant future, jobs for humans will be eliminated.
The question is, “How quickly can new hires make themselves into higher value employees in what are the early days of smart software?”
I suggest getting on a fast horse and galloping forward. Donkeys with Excel will fall behind. Software does not require health care, ever-increasing inducements, or vacations. What’s interesting is that at some point many “analyst” jobs, not just in finance, will be handled by “good enough” smart software.
Remember: a 51 percent win rate from code that does not hang out with a latte will strike some in carpetland as a no-brainer. The good news is that MBAs don’t have a graduate degree in 18th-century buttons or the Brutalist movement in architecture.
Stephen E Arnold, March 6, 2025
Lawyers and High School Students Cut Corners
March 6, 2025
Cost-cutting lawyers beware: using AI in your practice may make it tough to buy a new BMW this quarter. TechSpot reports, "Lawyer Faces $15,000 Fine for Using Fake AI-Generated Cases in Court Filing." Writer Rob Thubron tells us:
"When representing HooserVac LLC in a lawsuit over its retirement fund in October 2024, Indiana attorney Rafael Ramirez included case citations in three separate briefs. The court could not locate these cases as they had been fabricated by ChatGPT."
Yes, ChatGPT completely invented precedents to support Ramirez’ case. Unsurprisingly, the court took issue with this:
"In December, US Magistrate Judge for the Southern District of Indiana Mark J. Dinsmore ordered Ramirez to appear in court and show cause as to why he shouldn’t be sanctioned for the errors. ‘Transposing numbers in a citation, getting the date wrong, or misspelling a party’s name is an error,’ the judge wrote. ‘Citing to a case that simply does not exist is something else altogether. Mr Ramirez offers no hint of an explanation for how a case citation made up out of whole cloth ended up in his brief. The most obvious explanation is that Mr Ramirez used an AI-generative tool to aid in drafting his brief and failed to check the citations therein before filing it.’ Ramirez admitted that he used generative AI, but insisted he did not realize the cases weren’t real as he was unaware that AI could generate fictitious cases and citations."
Unaware? Perhaps he had not heard about the similar case in 2023. Then again, maybe he had. Ramirez told the court he had tried to verify the cases were real—by asking ChatGPT itself (which replied in the affirmative). But that query falls woefully short of the due diligence required by the Federal Rule of Civil Procedure 11, Thubron notes. As the judge who ultimately did sanction the firm observed, Ramirez would have noticed the cases were fiction had his attempt to verify them ventured beyond the ChatGPT UI.
For his negligence, Ramirez may face disciplinary action beyond the $15,000 in fines. We are told he continues to use AI tools, but has taken courses on its responsible use in the practice of law. Perhaps he should have done that before building a case on a chatbot’s hallucinations.
Cynthia Murrell, March 6, 2025
Mathematics Is Going to Be Quite Effective, Citizen
March 5, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
The future of AI is becoming more clear: Get enough people doing something, gather data, and predict what humans will do. What if an individual does not want to go with the behavior of the aggregate? The answer is obvious, “Too bad.”
How do I know that a handful of organizations will use their AI in this manner? I read “Spanish Running of the Bulls’ Festival Reveals Crowd Movements Can Be Predictable, Above a Certain Density.” If the data in the report are close to the pin, AI will be used to predict, and those predictions can then be shaped by weaponized information flows. I got a glimpse of how this number stuff works when I worked at Halliburton Nuclear with Dr. Jim Terwilliger. He and a fellow named Julian Steyn were only too happy to explain that the mathematics used for figuring out certain nuclear processes would work for other applications as well. I won’t bore you with comments about the Monte Carlo method or the even older Bayesian statistical procedures. But if the approach made certain nuclear functions manageable, it was mostly okay.
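For readers who never met the Monte Carlo method the nuclear people leaned on, the core idea is modest: sample at random, count, estimate. A toy sketch (estimating pi, nothing nuclear about it):

```python
import random

def monte_carlo_pi(samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling random points in the unit square and
    counting how many land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The hit rate approximates pi/4, so scale up by 4.
    return 4.0 * inside / samples

print(monte_carlo_pi(100_000))  # lands close to 3.14159
```

Swap the quarter circle for a neutron transport model or a crowd model, and the same count-and-scale trick applies; that transferability was the point my Halliburton colleagues were making.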
Let’s look at what the Phys.org write up says about bovines:
Denis Bartolo and colleagues tracked the crowds of an estimated 5,000 people over four instances of the San Fermín festival in Pamplona, Spain, using cameras placed in two observation spots in the plaza, which is 50 meters long and 20 meters wide. Through their footage and a mathematical model—where people are so packed that crowds can be treated as a continuum, like a fluid—the authors found that the density of the crowds changed from two people per square meter in the hour before the festival began to six people per square meter during the event. They also found that the crowds could reach a maximum density of 9 people per square meter. When this upper threshold density was met, the authors observed pockets of several hundred people spontaneously behaving like one fluid that oscillated in a predictable time interval of 18 seconds with no external stimuli (such as pushing).
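The arithmetic in that passage is simple division: head count over floor area. A hedged sketch using the plaza dimensions from the article; the head counts fed in below are illustrative, not the study’s data:

```python
# Crowd density check using the plaza dimensions reported in the article.
PLAZA_AREA_M2 = 50 * 20          # plaza is 50 m long and 20 m wide
OSCILLATION_THRESHOLD = 9        # people per square meter, per the study

def density(head_count: int, area_m2: float = PLAZA_AREA_M2) -> float:
    """People per square meter for a given head count."""
    return head_count / area_m2

def oscillation_risk(head_count: int) -> bool:
    """Above ~9 people/m^2 the crowd starts behaving like a fluid."""
    return density(head_count) >= OSCILLATION_THRESHOLD

for count in (2_000, 6_000, 9_000):   # illustrative head counts
    print(count, round(density(count), 1), oscillation_risk(count))
```

At 2,000 people the plaza sits at the pre-festival density of 2 per square meter; 9,000 people pushes it to the 9-per-square-meter threshold where the fluid-like oscillation appears.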
I think that’s an important point. But here’s the comment that presages how AI data will be used to control human behavior. Remember: this is emergent behavior similar to the hoo-hah cranked out by the Santa Fe Institute crowd:
The authors note that these findings could offer insights into how to anticipate the behavior of large crowds in confined spaces.
Once probabilities allow one to “anticipate,” it follows that flows of information can be used to take or cause action. Personally, I am going to make a note in my calendar and check in one year to see how my observation turns out. In the meantime, I will try to keep an eye on the Sundars, Zucks, and their ilk for signals about their actions and their intent, which is definitely concerned with individuals like me. Right?
Stephen E Arnold, March 5, 2025
The EU Rains on the US Cloud Parade
March 3, 2025
At least one European has caught on. Dutch blogger Bert Hubert is sounding the alarm to his fellow Europeans in the post, "It Is No Longer Safe to Move Our Governments and Societies to US Clouds." Governments and organizations across Europe have been transitioning systems to American cloud providers for reasons of cost and ease of use. Hubert implores them to prioritize security instead. He writes:
"We now have the bizarre situation that anyone with any sense can see that America is no longer a reliable partner, and that the entire large-scale US business world bows to Trump’s dictatorial will, but we STILL are doing everything we can to transfer entire governments and most of our own businesses to their clouds. Not only is it scary to have all your data available to US spying, it is also a huge risk for your business/government continuity. From now on, all our business processes can be brought to a halt with the push of a button in the US. And not only will everything then stop, will we ever get our data back? Or are we being held hostage? This is not a theoretical scenario, something like this has already happened."
US firms have been wildly successful in building reliance on their products around the world. So much so, we are told, that some officials would rather deny reality than switch to alternative systems. The post states:
"’Negotiating with reality’ is for example the letter three Dutch government ministers sent last week. Is it wise to report every applicant to your secret service directly to Google, just to get some statistics? The answer the government sent: even if we do that, we don’t, because ‘Google cannot see the IP address‘. This is complete nonsense of course, but it’s the kind of thing you tell yourself (or let others tell you) when you don’t want to face reality (or can’t)."
Though Hubert does not especially like Microsoft tools, for example, he admits Europeans are accustomed to them and have "become quite good at using them." But that is not enough reason to leave data vulnerable to "King Trump," he writes. Other options exist, even if they may require a bit of effort to implement. Security or convenience: pick one.
Cynthia Murrell, March 3, 2025
Curricula Ideas That Will Go Nowhere Fast
February 28, 2025
No smart software. Just a dinobaby doing his thing.
I read “Stuff You Should Have Been Taught in College But Weren’t,” which reveals a young person with some dinobaby notions. Good for Casey Handmer, PhD. Despite his brush with Hyperloop, he has retained an ability to think clearly about education. Caltech and the JPL have shielded him from some intellectual cubby holes.
So why am I mentioning the “Stuff You Should Have…” essay and the author? I found the write up in line with thoughts my colleagues and I have shared. Let me highlight a few of Dr. Handmer’s “Should haves” despite my dislike for “woulda coulda shoulda” as a mental bookshelf.
The write up says:
in the sorts of jobs you want to have, no-one should have to spell anything out for you.
I want to point out that the essay may not be appropriate for a person who seeks a job washing dishes at the El Nopal restaurant on Goose Creek Road. The observation strikes me as appropriate for an individual who seeks employment at a high-performing organization or an aspiring “performant” outfit. (I love the coinage “performant”; it is very with it.)
What are the other dinobaby-in-the-making observations in the write-up? I have rephrased some of the comments, and I urge you to read the original essay. Here goes:
- Do something tangible to demonstrate your competence. Doom scrolling and watching TikTok-type videos may not do the job.
- Offer proof you deliver value in whatever you do. I am referring to “good” actors, not “bad” actors selling Telegram and WhatsApp hacking services on the Dark Web. “Proof” is verifiable facts, a reference from an individual of repute, or demonstrating a bit of software posted on GitHub or licensed from you.
- Watch, learn, and act in a way that benefits the organization, your colleagues, and your manager.
- Change jobs to grow and demonstrate your capabilities.
- Suck it up, buttercup. Life is a series of challenges. Meet them. Deliver value.
I want to acknowledge that not all dinobabies exhibit these traits as they toddle toward the holding tank for the soon-to-be-dead. However, for an individual who wants to contribute and grow, the ideas in this essay are good ones to consider and then implement.
I do have several observations:
- The percentage of a cohort who can consistently do and deliver is very small. Excellence is not for everyone. This has significant career implications unless you have a lot of money, family connections, or a Hollywood glow.
- Most of the young people with whom I interact say they have these or similar qualities. Then their own actions prove they don’t. Here’s an example: I met a business school dean. I offered to share some ideas relevant to the job market. I gave him my card because he had forgotten his. He never emailed me. I contacted him and said politely, “What’s up?” He double-talked and wanted to meet up in the spring. What does that tell me about this person’s work ethic? Answer: loser.
- Universities and other formal training programs struggle even when the course material and the teacher are on point. The “problem” begins before the student shows up in class. The impact of family stress on a person creates a hot house of sorts. What grows in the hortorium? Species with an inability to concentrate, pollen that cannot connect with an ovule, and a baked-in confusion of “I will do it” and “doing it.”
Net net: This dinobaby is happy to say that Dr. Handmer will make a very good dinobaby some day.
Stephen E Arnold, February 28, 2025
Meta and Torrents: True, False, or Rationalization?
February 26, 2025
AIs gobble datasets for training. It is another fact that many LLMs and datasets contain biased information, are incomplete, or plain stink. One ethical but cumbersome way to train algorithms would be to notify people that their data, creative content, or other information will be used to train AI. Offering to pay for the right to use the data would be a useful step some argue.
Will this happen? Obviously not.
Why?
Because it’s sometimes easier to take instead of asking. According to Tom’s Hardware, “Meta Staff Torrented Nearly 82TB of Pirated Books for AI Training – Court Records Reveal Copyright Violations.” The article explains that Meta pirated 81.7 TB of books from the shadow libraries Anna’s Archive, Z-Library, and LibGen. These books were then used to train AI models. Meta is now facing a class action lawsuit over its use of content from the shadow libraries.
The allegations arise from Meta employees’ written communications. Some of these messages provide insight into employees’ concern about tapping pirated materials. The employees were getting frown lines, but then some staffers’ views rotated when they concluded smart software helped people access information.
Here’s a passage from the cited article I found interesting:
“Then, in January 2023, Mark Zuckerberg himself attended a meeting where he said, “We need to move this stuff forward… we need to find a way to unblock all this.” Some three months later, a Meta employee sent a message to another one saying they were concerned about Meta IP addresses being used “to load through pirate content.” They also added, “torrenting from a corporate laptop doesn’t feel right,” followed by laughing out loud emoji. Aside from those messages, documents also revealed that the company took steps so that its infrastructure wasn’t used in these downloading and seeding operations so that the activity wouldn’t be traced back to Meta. The court documents say that this constitutes evidence of Meta’s unlawful activity, which seems like it’s taking deliberate steps to circumvent copyright laws.”
If true, the approach smacks of that suave Silicon Valley style. If false, my faith in a yacht owner with gold chains might be restored.
Whitney Grace, February 26, 2025
AI Research Tool from Perplexity Is Priced to Undercut the Competition
February 26, 2025
Are prices for AI-generated research too darn high? One firm thinks so. In a Temu-type bid to take over the market, reports VentureBeat, "Perplexity Just Made AI Research Crazy Cheap—What that Means for the Industry." CEO Aravind Srinivas credits open source software for making the move possible, opining that "knowledge should be universally accessible." Knowledge, yes. AI research? We are not so sure. Nevertheless, here we are. The write-up describes the difference in pricing:
"While Anthropic and OpenAI charge thousands monthly for their services, Perplexity offers five free queries daily to all users. Pro subscribers pay $20 monthly for 500 daily queries and faster processing — a price point that could force larger AI companies to explain why their services cost up to 100 times more."
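The back-of-the-envelope math behind “crazy cheap” is easy to run; a sketch assuming a Pro subscriber burns the full 500-query allotment every day of a 30-day month (the 30-day month is my assumption, not a figure from the article):

```python
# Cost per query for Perplexity Pro, using the figures in the quote.
MONTHLY_FEE = 20.00          # Pro subscription, USD
DAILY_QUERIES = 500
DAYS_PER_MONTH = 30          # assumption for the estimate

max_queries = DAILY_QUERIES * DAYS_PER_MONTH     # 15,000 queries
cost_per_query = MONTHLY_FEE / max_queries
print(f"${cost_per_query:.4f} per query")        # $0.0013 per query
```

A heavy user pays a fraction of a cent per query, which is the arithmetic that makes the “100 times more” comparison sting.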
Not only is Perplexity’s Deep Research cheaper than the competition, crows the post, its accuracy rivals theirs. We are told:
"[Deep Research] scored 93.9% accuracy on the SimpleQA benchmark and reached 20.5% on Humanity’s Last Exam, outperforming Google’s Gemini Thinking and other leading models. OpenAI’s Deep Research still leads with 26.6% on the same exam, but OpenAI charges $200 per month for that service. Perplexity’s ability to deliver near-enterprise level performance at consumer prices raises important questions about the AI industry’s pricing structure."
Well, okay. Not to stray too far from the point, but is a 20.5% or a 26.6% on Humanity’s Last Exam really something to brag about? Last we checked, those were failing grades. By far. Isn’t it a bit too soon to be outsourcing research to any LLM? But I digress.
We are told the low, low cost Deep Research is bringing AI to the micro-budget masses. And, soon, to the Windows-less—Perplexity is working on versions for iOS, Android, and Mac. Will this spell disaster for the competition?
Cynthia Murrell, February 26, 2025
Rest Easy. AI Will Not Kill STEM Jobs
February 25, 2025
Written by a dinobaby, not smart software. But I would replace myself with AI if I could.
Bob Hope quipped, “A sense of humor is good for you. Have you ever heard of a laughing hyena with heartburn?” No, Bob, I have not.
Here’s a more modern joke for you from the US Bureau of Labor Statistics circa 2025. It is much fresher than Mr. Hope’s quip from a half century ago.
The Bureau of Labor Statistics says:
Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. (Source: Investopedia)
Okay, what am I to make of those LinkedIn, X/Twitter, and Reddit posts about technology workers unable to find jobs in these situations:
- Recent college graduates with computer science degrees
- Recently terminated US government workers from agencies like 18F
- Workers over 55 urged to take early retirement.
The item about the rosy job market appeared in Slashdot too. Here’s the quote I noted:
Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. According to the BLS, the impact AI will have on tech-sector employment is highly uncertain. For one, AI is adept at coding and related tasks. But at the same time, as digital systems become more advanced and essential to day-to-day life, more software developers, data managers, and the like are going to be needed to manage those systems. "Although it is always possible that AI-induced productivity improvements will outweigh continued labor demand, there is no clear evidence to support this conjecture," according to BLS researchers.
Robert Half, an employment firm, is equally optimistic. Just a couple of weeks ago, that outfit said:
Companies continue facing strong competition from other firms for tech talent, particularly for candidates with specialized skills. Across industries, AI proficiency tops the list of most-sought capabilities, with organizations needing expertise for everything from chatbots to predictive maintenance systems. Other in-demand skill areas include data science, IT operations and support, cybersecurity and privacy, and technology process automation.
What am I to conclude from these US government data? Here are my preliminary thoughts:
- The big time consulting firms are unlikely to change their methods of cost reduction; that is, if software (smart or dumb) can do a job for less money, that software will be included on a list of options. Given a choice of going out of business or embracing smart software, a significant percentage of consulting firm clients will give AI a whirl. If AI works and the company stays in business or grows, the humans will be repurposed or allowed to find their future elsewhere.
- The top one percent in any discipline will find work. The other 99 percent will need to have family connections, family wealth, or a family business to provide a boost for a great job. What if a person is not in the top one percent of something? Yeah, well, that’s not good for quite a few people.
- The permitted dominance of duopolies or oligopolies in most US business sectors means that some small and mid-sized businesses will have to find ways to generate revenue. My experience in rural Kentucky is that local accounting, legal, and technology companies are experimenting with smart software to boost productivity (the MBA word for cheaper work functions). Local employment options are dwindling because the smaller employers cannot stay in business. Potential employees want more pay than the company can afford. Result? Downward spiral which appears to be accelerating.
Am I confident in statistics related to wages, employment, and the growth of new businesses and industrial sectors? No, I am not. Statistical projections work pretty well in nuclear fuel management. Nested mathematical procedures in smart software work pretty well for some applications. Using smart software to reduce operating costs works pretty well right now.
Net net: Without meaningful work, some of life’s challenges will spark unanticipated outcomes. Exactly what type of stress breaks a social construct? Those in the job hunt will provide numerous test cases, and someone will do an analysis. Will it be correct? Sure, close enough for horseshoes.
Stop complaining. Just laugh, as Mr. Hope advised. No heartburn, and cost savings to boot.
Stephen E Arnold, February 25, 2025