Survey: Kids and AI Tools
March 12, 2025
Our youngest children are growing up alongside AI. Or, perhaps, it would be more accurate to say increasingly intertwined with it. Axios tells us, "Study Zeroes in on AI’s Youngest Users." Writer Megan Morrone cites a recent survey from Common Sense Media that examined AI use by children under 8 years old. The researchers surveyed 1,578 parents last August. We learn:
"Even the youngest of children are experimenting with a rapidly changing technology that could reshape their learning and critical thinking skills in unknown ways. By the numbers: One in four parents of kids ages 0-8 told Common Sense their children are learning critical thinking skills from using AI.
- 39% of parents said their kids use AI to ‘learn about school-related material,’ while only 8% said they use AI to ‘learn about AI.’
- For older children (ages 5-8) nearly 40% of parents said their child has used an app or a device with AI to learn.
- 24% of children use AI for ‘creative content,’ like writing short stories or making art, according to their parents."
It is too soon to know the long-term effects of growing up using AI tools. These kids are effectively subjects in a huge experiment. However, we already see indications that reliance on AI is bad for critical thinking skills. And that research is on adults, never mind kids whose base neural pathways are just forming. Parents, however, seem unconcerned. Morrone reports:
- More than half (61%) of parents of kids ages 0-8 said their kids’ use of AI had no impact on their critical thinking skills.
- 60% said there was no impact on their child’s well-being.
- 20% said the impact on their child’s creativity was ‘mostly positive.’
Are these parents in denial? They cannot just be happy to offload parenting to algorithms. Right? Perhaps they just need more information. Morrone points us to EqualAI’s new AI Literacy Initiative but, again, that resource is focused on adults. The write-up emphasizes the stakes of this great experiment on our children:
‘Our youngest children are on the front lines of an unprecedented digital transformation,’ said James P. Steyer, founder and CEO of Common Sense.
‘Addressing the impact of AI on the next generation is one of the most pressing issues of our time,’ Miriam Vogel, CEO of EqualAI, told Axios in an email. ‘Yet we are insufficiently developing effective approaches to equip young people for a world where they are both using and profoundly affected by AI.’
What does this all mean for society’s future? Stay tuned.
Cynthia Murrell, March 12, 2025
AI and Jobs: Tell These Folks AI Will Not Impact Their Work
March 12, 2025
The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.
I have a friend who does some translation work. She’s chugging along because of her reputation for excellent work. However, a younger person who worked with me on a project requiring Russian language skills has not fared as well. The young person lacks the reputation and the contacts with a base of clients. The older person can be as busy as she wants to be.
What’s the future of translating from one language to another for money? For the established person, smart software appears to have had zero impact. The younger person seems to be finding that smart software is getting the translation work.
I will offer my take in a moment. First, let’s look at “Turkey’s Translators Are Training the AI Tools That Will Replace Them.”
I noted this statement in the cited article:
Turkey’s sophisticated translators are moonlighting as trainers of artificial intelligence models, even as their profession shrinks with the rise of machine translations. As the models improve, these training jobs, too, may disappear.
What’s interesting is that the skilled translators are providing information to AI models. These models are definitely going to replace the humans. The trajectory is easy to project: machines will work faster and cheaper, and the humans will abandon the discipline. Then prices will go up. Those requiring translations will find themselves spending more and having fewer options. Eventually the old hands will wither. Excellent translations which capture nuance will become a type of endangered species. The snow leopard of knowledge work is with us.
I noted this statement in the article:
Book publishing, too, is transforming. Turkish publisher Dedalus announced in 2023 that it had machine-translated nine books. In 2022, Agora Books, helmed by translator Osman Akınhay, released a Turkish edition of Jean-Dominique Brierre’s Milan Kundera, une vie d’écrivain, a biography of the Czech-French novelist Milan Kundera. Akınhay, who does not know French, used Google Translate to help him in the translation, to much criticism from the industry.
What’s this mean?
- Jobs will be lost and the professionals with specialist skills are going to be the buggy whip makers in a world of automobiles
- The downstream impact of smart software is going to kill off companies. The Chegg legal matter illustrates how a monopoly can mindlessly erode a company. This is like a speeding semi-truck smashing love bugs on a Florida highway. The bugs don’t know what hit them, and the semi-truck is unaware and the driver is uncaring. Dead bugs? So what? See “Chegg Sues Google for Hurting Traffic with AI As It Considers Strategic Alternatives.”
- Data from different sources suggesting that AI will just create jobs is either misleading, public relations, or dead wrong. The Bureau of Labor Statistics data are spawning articles like “AI and Its Impact on Software Development Jobs.”
Net net: What’s emerging is one of those classic failure scenarios. Nothing big seems to go wrong. Then a collapse occurs. That’s what’s beginning to appear. Just little changes. Heed the signals? Of course not. I can hear someone saying, “That won’t happen to me.” Of course not but cheaper and faster are good enough at this time.
Stephen E Arnold, March 12, 2025
Microsoft: Marketing Is One Thing, a Cost Black Hole Is Quite Another
March 11, 2025
Yep, another dinobaby original.
I read “Microsoft Cuts Data Centre Plans and Hikes Prices in Push to Make Users Carry AI Cost.” The headline meant one thing to me: The black hole of AI costs must be capped. For my part, I try to avoid MSFT AI. After testing the Redmoanians’ smart software for months, I decided, “Nope.”
The write up says:
Last week, Microsoft unceremoniously pulled back on some planned data centre leases. The move came after the company increased subscription prices for its flagship 365 software by up to 45%, and quietly released an ad-supported version of some products. The tech giant’s CEO, Satya Nadella, also recently suggested AI has so far not produced much value.
No kidding. I won’t go into the annoyances. AI in Notepad? Yeah, great thinking like that which delivered Bob to users who loved Clippy.
The essay notes:
Having sunk billions into generative AI, Microsoft is trying to find the business model that will make the technology profitable.
Maybe someday, but that day is not today or tomorrow. If anything, Microsoft is struggling with old-timey software as well, as The Register, a UK online publication, has reported.
Back to AI. The AI financial black hole exists, and it may not be easy to resolve. What’s the fix? The write up describes the cost-shifting approach this way:
As AI infrastructure costs rise and model development evolves, shifting the costs to consumers becomes an appealing strategy for AI companies. While big enterprises such as government departments and universities may manage these costs, many small businesses and individual consumers may struggle.
Several observations are warranted:
- What happens if Microsoft cannot get consumers to pay the AI bills?
- What happens if people like this old dinobaby don’t want smart software and just shift to work flows without Microsoft products?
- What happens if the marvel of the Tensor and OpenAI’s and others’ implementations continue to hallucinate, creating more headaches than they cure?
Net net: Marketing may have gotten ahead of reality, but the black hole of costs is very real and no hallucination. Can Microsoft escape a black hole like this one?
Stephen E Arnold, March 11, 2025
Microsoft Sends a Signal: AI, AIn’t Working
March 11, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
The problems with Microsoft’s AI push were evident from its start in 2023. The company thought it had identified the next big thing and had the big fish on the line. Now the work was easy: just reel in the dough.
Has it worked out for Microsoft? We know that big companies often have difficulty innovating. The enervating white board sessions which seek to answer the question, “Do we build it or buy it?” usually give way to: [a] Let’s lock it up somehow or [b] Let’s steal it because it won’t take our folks too long to knock out a me-too.
Microsoft sent a fairly loud beep-beep-beep when it began to cut back on its dependence on OpenAI. Not long ago, Microsoft trimmed some of its crazy spending for AI. Now we have the allegedly accurate information in “Microsoft Is Reportedly Plotting a Future without OpenAI.”
The write up states:
Microsoft has poured over $13 billion into the AI firm since 2019, but now it wants more control over its own models and costs. Simple enough in theory—build in-house alternatives, cut expenses, and call the shots.
Is this a surprise? No, I think it is just one more beep added to the already emitted beep-beep-beep.
Here’s my take:
- Narrowly focused smart software adds some useful capabilities to what I would call workflow enhancement. The narrow focus for an AI system reduces some of the wonkiness of the output. Therefore, certain tasks benefit; for example, grinding through data for a chemistry application or providing a call center operation with a good enough solution to rising costs. Broad use cases are more problematic.
- Humans who rely on information for a living don’t want to be caught out. This means that using smart software is an assist or a supplement. This is like an older person using a cane when walking on a senior citizens adventure tour.
- Productizing a broad use case for smart software is expensive and prone to the sort of failure rate associated with a new product or service. A good example is a self driving auto with collision avoidance. Would you stand in front of such a vehicle confident in the smart software’s ability to not run over you? I wouldn’t.
What’s happening at Microsoft is a reasonably predictable and understandable approach. The company wants to hedge its bets since big bucks are flowing out, not in. The firm thinks it has enough smarts to do a better job even though in my opinion this is unlikely. Remember Bob, Clippy, and Windows updates? I do.
Also, small teams believe their approach will be a winner. Big companies believe their people can row that boat faster than anyone else. I know from personal experience and observation that this is not true. But the appearance of effort and the illusion of high value work encourages the approach.
Plus, the idea that a “leadership team” can manage innovation is a powerful one. Microsoft’s leadership believes in its leadership. That’s why the company is a leader. (I love this logic.)
Net net: My hunch is that Microsoft’s AI push is a disappointment. Now the company can shift into SWAT team mode and overwhelm the problem: AI that does not pay for itself.
Will this approach work? Nope, the outcome will be good enough. That is a bit more than one can say about Apple Intelligence: seriously out of step with the Softies.
Stephen E Arnold, March 11, 2025
AI and Two Villages: A Challenge in Some Large Countries
March 10, 2025
This blog post is the work of a humanoid dinobaby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you? We used AI to translate the original Russian into semi-English and to create the illustration. Hasta la vista to the human Russian translator and the human artist. That’s how AI works in real life.
My team and I are wrapping up our Telegram monograph. As part of the drill, we have been monitoring some information sources in Russia. We spotted the essay “AI and Capitalism.” (Note: I am not sure the link will resolve, but you can locate it via Yandex by searching for PCNews. I apologize, but some content is tricky to locate using consumer tools.)
The “white-collar village” and the “blue-collar village” generated by You.com. Good enough.
I mention the article because it makes clear how smart software is affecting one technical professional working in a Russian government-owned telecommunications company. The author’s day-to-day work requires programming. One description of the value of smart software appears in this passage:
I work as a manager in a telecom and since last year I have been actively modifying the product line, adding AI components to each product. And I am not the only one there – the movement is going on in principle throughout the IT industry, of which we are a part… Where we have seen the payoff is replacing tree navigation with a text search bar, helping to generate text on a specific topic taking into account the concept cloud of the subject area, aggregating information from sources with different data structures, extracting a sequence of semantic actions of a person while working on a laptop, simultaneous translation with imitation of any voice, etc. The goal of all these events, as before, is to increase labor productivity. Previously, a person dug with his hands, then with a shovel, now with an excavator. Indeed, now it’s easier to ask the model for an example of code than to spend hours searching on Stack Overflow. This seriously speeds things up.
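The “replacing tree navigation with a text search bar” change the author describes can be sketched in a few lines. This is a hypothetical illustration, not the telecom’s code; the catalog structure and product names below are invented:

```python
# Instead of forcing users to drill through nested menus, index every leaf
# with its full path and match free-text queries against it.

def flatten(tree, path=()):
    """Yield (path, leaf) pairs for every leaf in a nested dict."""
    for key, value in tree.items():
        if isinstance(value, dict):
            yield from flatten(value, path + (key,))
        else:
            yield path + (key,), value

def search(tree, query):
    """Return leaves whose path or label contains every query word."""
    words = query.lower().split()
    hits = []
    for path, leaf in flatten(tree):
        haystack = " ".join(path + (str(leaf),)).lower()
        if all(w in haystack for w in words):
            hits.append((" > ".join(path), leaf))
    return hits

catalog = {  # invented product tree
    "Billing": {"Invoices": "invoice-tool", "Refunds": "refund-tool"},
    "Network": {"Routers": {"Config": "router-config"}},
}

print(search(catalog, "router config"))
# → [('Network > Routers > Config', 'router-config')]
```

A user typing “router config” skips two levels of menu clicks, which is the productivity gain the author credits to the feature.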
The author then identifies three consequences of the use of AI:
- Training will change because “you will need to retrain for another narrow specialty several times”
- Education will become more expensive, but who will pay? Possibly as important: who will be able to learn?
- Society will change, which is a way of saying "social turmoil" lies ahead, in my opinion.
Here’s an okay translation of the essay’s final paragraph:
…in the medium term, the target architecture of our society will inevitably see a critical stratification into workers and educated people. Blue and white collar castes. The fence between them will be so high that films about a possible future will become a fairly accurate forecast. I really want to end up in a white-collar village in the role of a white collar worker. Scary.
What’s interesting about this person’s point of view is that AI is already changing work in Russia and the Russian Federation. The challenge will be that an allegedly “flat” social structure will be split into those who can implement smart software and those who cannot. The chatter about smart software is usually focused on which company will find a way to generate revenue from the massive investments required to create solutions that consumers and companies will buy.
What gets less attention is the apparent impact of the technology on countries which purport to make life “better” via a different system. If the author is correct, some large nation states are likely to face some significant social challenges. Not everyone can work in “a white-collar village.”
Stephen E Arnold, March 10, 2025
From $20 a Month to $20K a Month. Great Idea… or Not?
March 10, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
OpenAI was one of many smart software companies. If you meet the people on my team, you will learn that I dismissed most of the outfits as search-and-retrieval outfits looking for an edge. Search definitely needs an edge, but I was not confident that predictive generation of an “answer” was a solution. It was a nifty party trick, but then the money started flowing. In January 2023, Microsoft put Google’s cute sharp teeth on edge. Suddenly AI or smart software was the next big thing. The virtual reality thing did not ring the bell. The increasingly weird fiddling with mobile phones did not get the brass ring. And the idea of Apple becoming the next big thing in chips has left everyone confused. My M1 devices work pretty well, and unless I look at the label on the gizmos, I cannot tell an M1 from an M3. Do I care? Nope.
But OpenAI became news. It squabbled with the mastermind of “renewable” satellites, definitely weird trucks, and digging tunnels in Las Vegas. (Yeah, nice idea, just not for anyone who does not want to get stalled in traffic.) When ChatGPT became available, one of those laboring in my digital vineyards signed me up. I fiddled with it and decided that I would run some of my research through the system. I learned that my research was not in the OpenAI “system.” I had it do some images. Those sucked. I will cancel this week.
I put in my AI folder this article “OpenAI Is Getting Ready to Release PhD Level AI Agents.” I was engaging in some winnowing, and I scanned it. In early February 2025, Digital Marketing News wrote about PhD level agents. I am not a PhD. I quit before I finished my dissertation to work in the really socially conscious nuclear unit of that lovable outfit Halliburton. You know the company. That’s the one that charged about $950.00 for a gallon of fuel during the Iraq war. You will also associate Dick Cheney, a fun person, with the company. So no PhD for me.
I was skeptical because of the dismal performance of ChatGPT 4, oh, whatever, trying to come up with the information I have assembled for my new book for law enforcement professionals. Then I read a Slashdot post with the title “OpenAI Plots Charging $20,000 a Month For PhD-Level Agents” shared from a publication I don’t know much about. I think it is like 404 or a for-fee Substack. The publication has great content, and you have to pay for it.
Be that as it may, the Slashdot post reports or recycles information that suggests the fee per month for a PhD level version of OpenAI’s smart software will be a modest $20,000 a month. I think the service one of my team registered costs $20.00 per month. What’s with the 20s? Twenty is a pronic number; that is, it can be slapped on a high school math test so students can say it is the product of two consecutive integers. In college I knew a person who was a numerologist. I recall that the meaning of 20 was cooperation.
The interesting part of the Slashdot post was the comments. I scanned them and concluded that some of the commenters saw the high-end service killing jobs for high-end programmers and consultants. Yeah, maybe. Somehow I doubt that a code base that struggles with information related to a widely used messaging application is suddenly going to replicate the information I have obtained from my sources in Eastern Europe. That seems a bit of a stretch. Heck, ChatGPT could barely do English. Russian? Not a chance, but who knows. And for $20,000 a month it is not likely this dinobaby will take what seems like unappetizing bait.
One commenter allegedly named TheGreatEmu said:
I was about to make a similar comment, but the cost still doesn’t add up. I’m at a national lab with generally much higher overheads than most places, and a postdoc runs us $160k/year fully burdened. And of course the AI sure as h#ll can’t connect cables, turn knobs, solder, titrate, use a drill press, clean, chat with the machinist who doesn’t use email, sneaker net data out of the air-gapped lab, or understand napkin drawings over beer where all real science gets done. Or do anything useful with information that isn’t already present in the training data, and if you’re not pushing past existing knowledge boundaries, you’re not really doing science are you?
My hunch is that this is a PR or marketing play. Let’s face it. With Microsoft cutting off data center builds and Google floundering with cheese, the smart software revolution is muddling forward. The wins are targeted applications in quite specific domains. Yes, gentle reader, that’s why people pay for Chemical Abstracts online. The information is not on the public Internet. The American Chemical Society has information the super-capable AI outfits have not captured, and the computational, organic, or inorganic chemist is unlikely to rely on a somewhat volatile outfit instead. Get something wrong in a nuclear lab, and smart software won’t be too helpful if it hallucinates.
Net net: Is everything marketing? At age 80, my answer is, “Absolutely.” Sam AI-Thinks in terms of trillions. Is $20 trillion the next pricing level?
Stephen E Arnold, March 10, 2025
Patents, AI, and Lawyers: Litigators, Start Your Engines
March 7, 2025
Patents can be a useful source of insights, a fact startup Patlytics is banking on. TechCrunch reports, "Patlytics Raises $14M for its Patent Analytics Platform." The firm turbo-charges intellectual property research with bespoke AI. We learn:
"Patlytics’ large language models (LLMs) and generative AI-powered engine are custom-built for IP-related research and other work such as patent application drafting, invention disclosures, invalidity analysis, infringement detection/analysis, Standard Essential Patents (SEPs) analysis, and IP assets portfolio management."
Apparently, the young firm is already meeting with success. We learn:
"The 1-year-old startup said it has seen a 20x increase in ARR and an 18x expansion in its customer base within six months, with a sustained 300% month-over-month growth rate. Patlytics did not disclose how many customers it has but said approximately 50% of its customer base are law firms, and the other half are corporate clients from industries like semiconductors, bio, pharmaceuticals, and more. Additionally, the company now serves customers in South Korea and Japan, and recently launched its first pilot product in London and Germany. Its clients include Abnormal Security, Google, Koch Disruptive Technologies, Quinn Emanuel Urquhart & Sullivan, Richardson Oliver, Reichman Jorgensen Lehman & Feldberg, Xerox, and Young Basile."
That is quite a client roster in such a short time. This round, combined with April’s seed round, brings the company’s funding total to $21 million. The firm will put the funds to use hiring new engineers and expanding its products. Based in New York, Patlytics was launched in January 2024.
Will AI increase patent litigation? Do Tesla Cybertrucks attract attention?
Cynthia Murrell, March 7, 2025
Another New Search System with AI Too
March 7, 2025
There’s a new AI engine in town, specifically designed to assist with research. The Next Web details the newest invention that comes from a big name in the technology industry: “Tech mogul Launches AI Research Engine Corpora.ai.” Mel Morris is a British tech mogul and the man behind the latest research engine: Corpora.ai.
Morris had Corpora.ai designed to provide in-depth research from single prompts. It is also an incredibly fast engine: it can process two million documents per second. Corpora.ai works by reading a prompt; then the AI algorithm scans information, including legal documents, news articles, academic papers, and other Web data. The information is then compiled into summaries or reports.
Morris insists that Corpora.ai is a research engine, not a search engine. He invested $15 million of his personal fortune in the project. Morris doesn’t want to compete with other AI projects; instead, he wants to form working relationships:
“His funding aims to create a new business model for LLMs. Rather than challenge the leading GenAI firms, Corpora plans to bring a new service to the sector. The research engine can also integrate existing models on the market. ‘We don’t compete with OpenAI, Google, or Deepseek,’ Morris said. ‘The nice thing is, we can play with all of these AI vendors quite nicely. As they improve their models, our output gets better. It’s a really great symbiotic relationship.’”
Mel Morris is a self-made businessman and the former head of King, the Candy Crush game creator. He also owned and sold the dating Web site uDate. He might see a return on his Corpora.ai investment.
Whitney Grace, March 7, 2025
Attention, New MBAs in Finance: AI-gony Arrives
March 6, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
I did a couple of small jobs for a big Wall Street outfit years ago. I went to meetings, listened, and observed. To be frank, I did not do much work. There were three or four young, recent graduates of fancy schools. These individuals were similar to the colleagues I had at the big time consulting firm at which I worked earlier in my career.
Everyone was eager, and their Excel fevers were in full bloom: bright eyes, earnest expressions, and a gentle but persistent panting in these meetings. Wall Street and Wall Street-like firms in London, England, and Los Angeles, California, were quite similar. These churn outfits and deal makers shared DNA or some type of quantum entanglement.
These “analysts” or “associates” gathered data, pumped it into Excel spreadsheets set up by colleagues or technical specialists. Macros processed the data and spit out tables, charts, and graphs. These were written up as memos, reports for those with big sticks, or senior deciders.
My point is that the “work” was done by cannon fodder from well-known universities’ business or finance programs.
Well, bad news, future BMW buyers, an outfit called PublicView.ai may have curtailed your dreams of a six figure bonus in January or whatever month is the big momma at your firm. You can take a look at example outputs and sign up free at https://www.publicview.ai/.
If the smart product works as advertised, a category of financial work is going to be reshaped. It is possible that fewer analyst jobs will become available as the gathering and importing are converted to automated workflows. The meetings and the panting will become fewer and farther between.
I don’t have data about how many worker bees power the Wall Street type outfits. I showed up, delivered information when queried, departed, and sent a bill for my time and travel. The financial hive and its quietly buzzing drones plugged away 10 or more hours a day, mostly six days a week.
The PublicView.ai FAQ page answers some basic questions; for example, “Can I perform quantitative analysis on the files?” The answer is:
Yes, you can ask Publicview to perform computations on the files using Python code. It can create graphs, charts, tables and more.
This is good news for the newly minted MBAs with programming skills. The bad news is that repeatable questions can be converted to workflows.
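The kind of repeatable analyst question that becomes a workflow can be sketched in a few lines. The figures are invented, and this is not PublicView’s actual code; it is simply the Excel-macro work described above rendered in Python:

```python
# A repeatable analyst question ("what is quarter-over-quarter revenue
# growth?") turned into a small script. The numbers are hypothetical.
import statistics

quarterly_revenue = {  # invented filings data, $ millions
    "Q1": 120.0, "Q2": 132.0, "Q3": 150.0, "Q4": 141.0,
}

def growth_rates(series):
    """Quarter-over-quarter growth, as percentages rounded to one decimal."""
    values = list(series.values())
    return [round(100 * (b - a) / a, 1) for a, b in zip(values, values[1:])]

rates = growth_rates(quarterly_revenue)
print("QoQ growth %:", rates)
print("Average:", round(statistics.mean(rates), 1))
```

Once a question like this is scripted, it runs on every new filing with no eager associate required, which is exactly the shift the post describes.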
Let’s assume this product is good enough. There will be no overnight change in the work for existing employees. But slowly the senior managers will get the bright idea of hiring MBAs with different skills, possibly on a contract basis. Then the work will begin to shift to software. At some point in the not-too-distant future, jobs for humans will be eliminated.
The question is, “How quickly can new hires make themselves into higher value employees in what are the early days of smart software?”
I suggest getting on a fast horse and galloping forward. Donkeys with Excel will fall behind. Software does not require health care, ever-increasing inducements, and vacations. What’s interesting is that at some point many “analyst” jobs, not just in finance, will be handled by “good enough” smart software.
Remember: a 51 percent win rate from code that does not hang out with a latte will strike some in carpetland as a no-brainer. The good news is that MBAs don’t have a graduate degree in 18th century buttons or the Brutalist movement in architecture.
Stephen E Arnold, March 6, 2025
Lawyers and High School Students Cut Corners
March 6, 2025
Cost-cutting lawyers beware: using AI in your practice may make it tough to buy a new BMW this quarter. TechSpot reports, "Lawyer Faces $15,000 Fine for Using Fake AI-Generated Cases in Court Filing." Writer Rob Thubron tells us:
"When representing HooserVac LLC in a lawsuit over its retirement fund in October 2024, Indiana attorney Rafael Ramirez included case citations in three separate briefs. The court could not locate these cases as they had been fabricated by ChatGPT."
Yes, ChatGPT completely invented precedents to support Ramirez’ case. Unsurprisingly, the court took issue with this:
"In December, US Magistrate Judge for the Southern District of Indiana Mark J. Dinsmore ordered Ramirez to appear in court and show cause as to why he shouldn’t be sanctioned for the errors. ‘Transposing numbers in a citation, getting the date wrong, or misspelling a party’s name is an error,’ the judge wrote. ‘Citing to a case that simply does not exist is something else altogether. Mr Ramirez offers no hint of an explanation for how a case citation made up out of whole cloth ended up in his brief. The most obvious explanation is that Mr Ramirez used an AI-generative tool to aid in drafting his brief and failed to check the citations therein before filing it.’ Ramirez admitted that he used generative AI, but insisted he did not realize the cases weren’t real as he was unaware that AI could generate fictitious cases and citations."
Unaware? Perhaps he had not heard about the similar case in 2023. Then again, maybe he had. Ramirez told the court he had tried to verify the cases were real—by asking ChatGPT itself (which replied in the affirmative). But that query falls woefully short of the due diligence required by the Federal Rule of Civil Procedure 11, Thubron notes. As the judge who ultimately did sanction the firm observed, Ramirez would have noticed the cases were fiction had his attempt to verify them ventured beyond the ChatGPT UI.
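The due-diligence step at issue can be sketched as a simple check against an authoritative source rather than asking the chatbot about its own output. The case names and the `known_cases` set below are invented stand-ins for a real citator or court-records lookup, which this sketch does not perform:

```python
# Split a brief's citations into verified and unverifiable lists by checking
# each one against an authoritative database (here, a hypothetical set).

known_cases = {  # stand-in for a real citator / court records query
    "Smith v. Jones, 123 F.3d 456 (7th Cir. 1997)",
}

def verify_citations(citations, database):
    """Return (verified, suspect) lists; suspect items need human review."""
    verified = [c for c in citations if c in database]
    suspect = [c for c in citations if c not in database]
    return verified, suspect

brief = [
    "Smith v. Jones, 123 F.3d 456 (7th Cir. 1997)",
    "Totally Real Corp. v. Hallucination LLC, 999 F.4th 1 (2024)",
]
ok, flagged = verify_citations(brief, known_cases)
print("Flag before filing:", flagged)
```

The point is the lookup target: the check runs against a source independent of the model that generated the text, which is the step Ramirez skipped.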
For his negligence, Ramirez may face disciplinary action beyond the $15,000 in fines. We are told he continues to use AI tools, but has taken courses on its responsible use in the practice of law. Perhaps he should have done that before building a case on a chatbot’s hallucinations.
Cynthia Murrell, March 6, 2025