AI Breaks Career Ladders
December 2, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
My father used to tell me that it was important to work at a company and climb the career ladder. I did not understand the concept. In my first job, I entered at a reasonably elevated level. I reported to a senior vice president and was given a big, messy project to fix up and make successful. In my second job, I was hired to report to the president of a “group” of companies. I don’t think I had a title. People referred to me as Mr. X’s “get things done person.” My father continued to tell me about the career ladder, but it did not resonate with me.
Thanks, Venice.ai. I fired five prompts before you came close to what I specified. Good work, considering.
Only later, when I ran my own small consulting firm, did the concept connect. I realized that as people worked on tasks, some demonstrated exceptional skill. I tried to find ways to expand those individuals’ capabilities. I think I succeeded, and several have contacted me years after I retired to tell me they were grateful for the opportunities I provided.
Imagine my surprise when I read “The Career Ladder Just Got Terminated: AI Kills Jobs Before They’re Born.” I understand. Co-workers have no way to learn and earn the right to pursue different opportunities in order to grow their capabilities.
The write up says:
Artificial intelligence isn’t just taking jobs. It’s removing the rungs of the ladder that turn rookies into experts.
Here’s a statement from the rock and roll magazine that will make some young, bright-eyed overachievers nervous:
In addition to making labor more efficient, it [AI] actually makes labor optional. And the disruption won’t unfold over generations like past revolutions; it’s happening in real time, collapsing decades of economic evolution into a few short years.
Forget optional. If software can replace hard-to-manage, unpredictable, and good-enough humans, AI will get the nod. The goal of most organizations is to generate surplus cash. Then that cash is disbursed to stakeholders, deserving members of the organization’s leadership, and lavish off-site meetings, among other important uses.
Here’s another passage that unintentionally will make art history majors, programmers, and, yes, even some MBAs with the right stuff think about becoming a plumber:
And this AI job problem isn’t confined to entertainment. It’s happening in law, medicine, finance, architecture, engineering, journalism — you name it. But not every field faces the same cliff. There’s one place where the apprenticeship still happens in real time: live entertainment and sports.
Perhaps there will be an MBA Comedy Club? Maybe some computer scientists will lean into their athletic prowess for table tennis or quoits?
Here’s another cause of heartburn for the young job hunter:
Today, AI isn’t hunting our heroes; it’s erasing their apprentices before they can exist. The bigger danger is letting short-term profits dictate our long-term cultural destiny. If the goal is simply to make the next quarter’s numbers look good, then automating and cutting is the easy answer. But if the goal is culture, originality and progress, then the choice is just as clear: protect the training grounds, take risks on the unknown and invest in the people who will surprise us.
I don’t see the BAIT (big AI technology companies) leaning into altruistic behavior for society. These outfits want to win, knock off the competition, and direct the masses to work within the bowling alley of life between two gutters. Okay, job hunters, have at it. As a dinobaby, I have no idea what impact AI will have on job hunting in these early days. Did I mention plumbing?
Stephen E Arnold, December 2, 2025
Gizmodo Suggests Sam AI-Man Destroys the Mind of Youth
November 28, 2025
This essay is the work of a dumb dinobaby. No smart software required.
If I were an ad salesperson at Gizmodo, I would not be happy. I am all for a wall between editorial and advertising. I bet you did not know that I learned that basic rule when I worked at Ziff in Manhattan. However, writing articles that accuse a potential advertiser of destroying the minds of youth is unlikely to be forgotten. I am not saying the write up is not accurate, but I know that it is possible to write articles and stories that do not make a potential advertiser go nutso.
Gizmodo published “OpenAI Introduces ‘ChatGPT for Teachers’ to Further Destroy the Minds of Our Youth” to explain a new LexisNexis-type play to get people used to an online product. OpenAI thinks the LexisNexis model, or a close variant, is a good way to get paying customers. Students in law school become familiar with LexisNexis. When and if they get a job, those students will use LexisNexis. The approach made sense when Don Wilson and his fellow travelers introduced the program. OpenAI is jumping on a marketing wagon pulled by a horse that knows how to get from A to B.

Have those laptops, tablets, and mobile phones made retail workers adept at making change? Thanks, Venice.ai. Good enough.
The Gizmodo article says:
ChatGPT for Teachers is designed to help educators prepare materials for their classes, and it will support Family Educational Rights and Privacy Act (FERPA) requirements so that teachers and school staff can securely work with student data within the workspace. The company says the suite of tools for teachers will be available for free through June 2027, which is probably the point at which OpenAI will need to show that it can actually generate revenue and stick its hand out to demand payment from teachers who have become reliant on the suite of tools.
Okay, no big innovation here.
Gizmodo states:
There is already mounting evidence that relying on AI can erode critical thinking skills, which is something you’d like kids to be engaging in, at least during school hours. Other studies have shown that people “offload” the more difficult cognitive work and rely on AI as a shortcut when it’s available, ultimately harming their ability to do that work when they don’t have the tool to lean on. So what could go wrong giving those tools to both students and teachers? Seems like we’re going to find out.
Okay, but that headline looms over the Ivory soap conclusion to the article. In my opinion, I know exactly how this free AI will work. Students will continue to look for the easiest way to complete assigned tasks. If ChatGPT is available, students will find out if it works. Then students will use AI for everything possible so the students have more time for digging into linear algebra. (That’s a joke.) A few students will understand that other students will not do the assignments but will pay someone to do that work for them. That other person will be [a] a paramour, [b] a classmate who is a friend, [c] a classmate who responds to threats, or [d] ChatGPT-type services.
Test scores will continue to fall until a group of teachers creates easier tests. Furthermore, like the A/V systems installed in 1962 so students could learn a foreign language, the technology works only if the student concentrates, follows the lesson attentively, writes notes, and goes through the listen-and-repeat mechanisms in the language lab. PCs, tablets, Chromebooks, mobile phones, and AI work the same way. When students do not have the discipline to pay attention and put in the effort required to learn, the technology cannot compensate. It can, however, replace certain jobs so companies and co-workers do not have to compensate for those who lack basic skills, the discipline required to do the work, and the social skills needed to fit into an organization.
The myth that technology can replace traditional educational techniques is more nutso than the sales professionals who have to overcome ideas like “destroy the minds of youth.”
Net net: Sam AI-Man has some challenge ahead with this free ChatGPT. Want evidence of the impact of technology on the minds of legal professionals? Just check out some of the YouTubing lawyers. There you go.
Stephen E Arnold, November 28, 2025
AI and Learning: Does a Retail Worker Have to Know How to Make Change?
November 14, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I love the concept of increasingly shallow understanding. There is a simple beauty in knowing that you can know anything with smart software and a mobile phone. I read “Students Using ChatGPT Beware: Real Learning Takes Legwork, Study Finds.” What a revelation! Wow! Really?
What a quaint approach to smart software. This write up describes a weird type of reasoning: using a better tool limits one’s understanding. I am not sure about you, but the idea of communicating by having a person run 26 miles to deliver a message and then fall over dead seems slow, somewhat unreliable, and potentially embarrassing. What if the deliverer of the message expires in the midst of a kiddie birthday party? Why not embrace the technology of the mobile phone? Use a messaging app and zap the information to the recipient.

Thanks, Venice.ai. Good enough.
Following this logic, which holds that not learning the old-fashioned way will have many dire consequences, why not research topics the old way:
- Attend colloquia held at a suitable religious organization’s facilities. Avoid math that references infinity, zero, or certain suspect numbers, and the process works. Now math is a mental exercise. It is more easily mastered by doing concepts, not calculations. If something tricky is required, reach for smart software or that fun-loving Wolfram Mathematica software. Beats an abacus or making marks on cave walls.
- Find a library with either original documents (foul papers), scrolls, or books. Read them and take notes, preferably on a wax tablet or some sheepskin, the Microsoft Word of its day.
- Use Google. Follow links. Reach conclusions. Assemble “factoids” into knowledge. Ignore the SEO-choked pages. Skip the pages hopelessly out of date, like the article suggesting that one use XyWrite as a word processor. Sidestep the ravings of a super genius predicting that Hollywood films are maps of the future and the advisors offering tips for making a million dollars tax free.
The write up presents the startling assertion:
The researchers concluded that, while large language models (LLMs) are exceptionally good at spitting out fluent answers at the press of a button, people who rely on synthesized AI summaries for research typically don’t come away with materially deeper knowledge. Only by digging into sources and piecing information together themselves do people tend to build the kind of lasting understanding that sticks…
Who knew?
The article includes this startling and definitely anti-AI statement:
A recent BBC-led investigation found that four of the most popular chatbots misrepresented news content in almost half their responses, highlighting how the same tools that promise to make learning easier often blur the boundary between speedy synthesis and confident-sounding fabrication.
I reacted to the idea that embracing a new technology damages a student’s understanding of a complex subject. If that were the case, why have humans compiled a relatively consistent track record of making information easier to find, absorb, and use? Dip in, get what you need, and don’t read the entire book is a trendy view supported by some forward-thinking smart people.
This is intellectual grazing. I think it is related to snacking 24×7 and skipping what once were foolishly called “regular meals.” In my visits to Silicon Valley, there are similar approaches to difficult learning challenges; for example, forming a stable relationship, understanding the concept of ethical compass, and making decisions that do no harm. (Hey, remember that slogan from the Dark Ages of Internet time?)
The write up concludes:
One of the more striking takeaways of the study was that young people’s growing reliance on AI summaries for quick-hit facts could “deskill” their ability to engage in active learning. However they also noted that this only really applies if AI replaces independent study entirely — meaning LLMs are best used to support, rather than substitute, critical thinking. The authors concluded: “We thus believe that while LLMs can have substantial benefits as an aid for training and education in many contexts, users must be aware of the risks — which may often go unnoticed — of overreliance. Hence, one may be better off not letting ChatGPT, Google, or another LLM ‘do the Googling.'”
Now that’s a remedy that will be music to Googzilla’s nifty looking ear slits. Use Google, just skip the AI.
I want to point out that doing things the old-fashioned way may be impossible, impractical, or dangerous. Rejecting newer technologies provides substantive information about the people who are in rejection mode. The trick, in my dinobaby opinion, is to raise children in an environment that encourages a positive self-concept, presents a range of different learning mechanisms, and uses nifty technology with parental involvement.
For the children not exposed to this type of environment in their formative years, no effort will be required for these lucky people to be permanently happy. Remember the old saying: If ignorance is bliss, hello, happy person.
No matter how shallow the mass of students becomes and remains, a tiny percentage will learn the old-fashioned way. These individuals will be just like the knowledge elite today: running the fastest and most powerful vehicles on the Information Superhighway. Watch out for wobbling Waymos. Those who know stuff could become roadkill.
Stephen E Arnold, November 14, 2025
Fear in Four Flavors or What Is in the Closet?
November 6, 2025
This essay is the work of a dumb dinobaby. No smart software required.
AI fear. Are you afraid to resist the push to make smart software a part of your life? I think of AI as a utility, a bit like old-fashioned enterprise search, just on very expensive steroids. One never knows how that drug use will turn out. Will the athlete win trophies or drop from heart failure in the middle of an event?
The write up “Meet the People Who Dare to Say No to Artificial Intelligence” is a rehash of some AI tropes. What makes the write up stand up and salute is a single line in the article. (This is a link from Microsoft. If the link is dead, let one of its caring customer support chatbots know, not me.) Here it is:
Michael, a 36-year-old software engineer in Chicago who spoke on the condition that he be identified only by his first name out of fear of professional repercussions…
I find this interesting. A professional will not reveal his name for fear of “professional repercussions.” I think the subject is algorithms, not politics. I think the subject is neural networks, not racial violence. I think the subject is online, not the behavior of a religious figure.

Two roommates are afraid of a blue light. Very normal. Thanks, Venice.ai. Good enough.
Let’s think about the “fear” of talking about smart software.
I asked AI why a 36-year-old would experience fear. Here’s the short answer from the remarkably friendly, eager AI system:
- Biological responses to perceived threats,
- Psychological factors like imagination and past trauma,
- Personality traits,
- Social and cultural influences.
It seems to me that external and internal factors enter into fear. In the case of talking about smart software, what could be operating? Let me hypothesize for a moment.
First, the person may see smart software as posing a threat. Okay, that’s an individual perception. Everyone can have an opinion. But the fear angle strikes me as a displacement activity in the brain. Instead of thinking about the upside of smart software, the person afraid to talk about a collection of zeros and ones only sees doom and gloom. Okay, I sort of understand.
Second, the person may have some psychological problems. But software is not the same as a seven-year-old afraid there is a demon in the closet. We are back, it seems, to the mysteries of the mind.
Third, the person is fearful of zeros and ones because the person is afraid of many things. Software is just another fear trigger, the way a person uncomfortable around little spiders is terrified of a great big one like the tarantulas I had to kill with a piece of wood when my father wanted to drive his automobile into our garage in Campinas, Brazil. Tarantulas, it turned out, liked the garage because it was cool and out of the sun. I guess the garage was similar to a Philz Coffee for an AI engineer in Silicon Valley.
Fourth, social and cultural influences cause a person to experience fear. I think of my neighbor approached by a group of young people demanding money and her credit card. Her social group consists of 75-year-old females who play bridge. The youngsters were a group of teenagers hanging out in a parking lot in an upscale outdoor mall. Now my neighbor does not want to go to the outdoor mall alone. Nothing happened, but those social and cultural influences kicked in.
Anyway, fear is real.
Nevertheless, I think smart software fear boils down to more basic issues. One, smart software will cause a person to lose his or her job. The job market is not good; therefore, fear of not paying bills, social disgrace, etc. kick in. Okay, but it seems that learning about smart software might take the edge off.
Two, smart software may suck today, but it is improving rapidly. This is the seven-year-old-afraid-of-the-closet behavior. Tough love says, “Open the closet. Tell me what you see.” In most cases, there is no person in the closet. I did hear about a situation involving a third party hiding in the closet. The kid’s opening the door revealed the stranger. Stuff happens.
Three, if a person was raised in an environment in which fear was a companion, that behavior may carry forward. Boo.
Net net: What is in Mr. AI’s closet?
Stephen E Arnold, November 6, 2025
AI Big Dog Chases Fake Rabbit at Race Track and Says, “Stop Now, Rabbit”
October 15, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I like company leaders or inventors who say, “You must not use my product or service that way.” How does that work for smart software? I read “Techie Finishes Coursera Course with Perplexity Comet AI, Aravind Srinivas Warns Do Not Do This.” This write up explains that a person took an online course. The work required was typical lecture-stuff. The student copied the list of tasks and pasted them into the system of Perplexity, one of the beloved high-flying US artificial intelligence companies.
The write up says:
In the clip, Comet AI is seen breezing through a 45-minute Coursera training assignment with the simple prompt: “Complete the assignment.” Within seconds, the AI assistant appears to tackle 12 questions automatically, all without the user having to lift a finger.
Smart software is tailor-made for high school students, college students, individuals trying to qualify for technical certifications, and doctors grinding through a semi-mandatory instruction program related to a robot surgery device. Instead of learning the old-fashioned way, the AI-assisted approach involves identifying the work and feeding it into an AI system. Then one submits the output.
There were two factoids in the write up that I thought noteworthy.
The first is that the course the cheater completed was AI Ethics, Responsibility, and Creativity. I can visualize a number of MBA students taking a business ethics class and using Perplexity or some other smart software to complete assignments. I mean, what MBA student wants to miss out on the role of off-shore banking in modern business? Forget the ethics baloney.
The second is that a big dog in smart software suddenly has a twinge of what the French call l’esprit d’escalier. My French is rusty, but the idea is that a person thinks of something after leaving a meeting; for example, walking down the stairs and realizing, “I screwed up. I should have said…” Here’s how the write up presents this amusing point:
[Perplexity AI and its billionaire CEO Aravind Srinivas] said “Absolutely don’t do this.”
My thought is that AI wizards demonstrate that their intelligence is not the equivalent of foresight. One cannot rewind time or unspill milk. As for the MBAs, use AI and skip ethics. The objective is money, power, and control. Ethics won’t help too much. But AI? That’s a useful technology. Just ask the fellow who completed an online class in less time than it takes to consume a few TikTok-type videos. Do you think workers upskilling to use AI will use AI to demonstrate their mastery? Never. Ho ho ho.
Stephen E Arnold, October 15, 2025
AI, Students, Studies, and Pizza
October 3, 2025
Google used to provide the best search results on the Web because of accuracy and relevancy. Now Google search is chock full of ads, AI responses, and Web sites that manipulate the algorithm. Google searches, of course, don’t replace good, old-fashioned research. SSRN shares the paper “Better than a Google Search? Effectiveness of Generative AI Chatbots as Information Seeking Tools in Law, Health Sciences, and Library and Information Sciences” by Erica Friesen and Angélique Roy.
The pair point out that students are using AI chatbots, claiming they help them do better research and improve their education. Sounds worse than the pathetic fallacy to me, right? Maybe the claim holds if you’re only using the AI to help with writing or even a citation, but Friesen and Roy decided to research whether this conjecture was correct. Here is their abstract:
“[Th]is perceived trust in these tools speaks to the importance of the quality of the sources cited when they are used as an information retrieval system. This study investigates the source citation practices of five widely available chatbots (ChatGPT, Copilot, DeepSeek, Gemini, and Perplexity) across three academic disciplines (law, health sciences, and library and information sciences). Using 30 discipline-specific prompts grounded in the respective professional competency frameworks, the study evaluates source types, organizational affiliations, the accessibility of sources, and publication dates. Results reveal major differences between chatbots, which cite consistently different numbers of sources, with Perplexity and DeepSeek citing more and Copilot providing fewer, as well as between disciplines, where health sciences questions yield more scholarly source citations and law questions are more likely to yield blog and professional website citations. Paywalled sources and discipline-specific literature such as case law or systematic reviews are rarely retrieved. These findings highlight inconsistencies in chatbot citation practices and suggest discipline-specific limitations that challenge their reliability as academic search tools.”
I draw three conclusions from this:
- These AI chatbots are useful tools, but they need way more improvement, and shouldn’t be relied on 100%.
- Chatbots are convenient. Students like convenience. Proof: How popular is carry-out pizza on a college campus?
- Paywalled data is valuable, but who is going to pay when the answers are free?
Will students use AI to complement old-fashioned library research, writing, and memorizing? Sure they will. Do you want sausage or pepperoni on the pizza?
Whitney Grace, October 3, 2025
Nine Things Revised for Gens X, Y, and AI
September 25, 2025
This essay is the work of a dumb dinobaby. No smart software required.
A dinobaby named Edward Packard wrote a good essay titled “Nine Things I Learned in 90 Years.” As a dinobaby, I found the points interesting. However, I think there will be “a failure to communicate.” How can this be? Mr. Packard is a lawyer skilled at argument. He is a US military veteran. He is an award-winning author. A lifetime of achievement has accrued.
Let’s make the nine things more on target for the GenX, GenY, and GenAI cohorts. Here’s my recasting of Mr. Packard’s ideas tuned to the hyper frequencies on which these younger groups operate.
Can the communication gap be bridged? Thanks, MidJourney. Good enough.
The table below presents Mr. Packard’s learnings in one column and the version for the Gen whatevers in the second column. Please, consult Mr. Packard’s original essay. My compression absolutely loses nuances to fit into the confines of a table. The Gen whatevers will probably be okay with how I convert Mr. Packard’s life nuggets into gold suitable for use in a mobile device-human brain connection.
| Packard Learnings | GenX, Y, AI Version |
| --- | --- |
| Be self-constituted | Rely on AI chats |
| Don’t operate on cruise control | Doomscroll |
| Consider others’ feelings | Me, me, me |
| Be happy | Coffee and YouTube |
| Seek eternal views | Twitch is my beacon |
| Do not deceive yourself | Think it and it will become reality |
| Confront mortality | Science (or Google) will solve death |
| Luck plays a role | My dad: A connected Yale graduate with a Harvard MBA |
| Consider what you have | Instagram dictates my satisfaction level, thank you! |
I appreciate Mr. Packard’s observations. These will resonate at the local old-age home and among the older people sitting around the cast iron stove in rural Kentucky where I live.
Bridges in Harlan County, Kentucky, are tough to build. Iowa? New Jersey? I don’t know.
Stephen E Arnold, September 25, 2025
The Skill for the AI World As Pronounced by the Google
September 24, 2025
Written by an unteachable dinobaby. Live with it.
Worried about a job in the future? The next minute, day, decade? The secret of constant employment, big bucks, and even larger volumes of happiness has been revealed. “Google’s Top AI Scientist Says Learning How to Learn Will Be Next Generation’s Most Needed Skill” says:
the most important skill for the next generation will be “learning how to learn” to keep pace with change as Artificial Intelligence transforms education and the workplace.
Well, that’s the secret: Learn how to learn. Why? Surviving in the chaos of an outfit like Google means one has to learn. What should one learn? Well, the write up does not provide that bit of wisdom. I assume a Google search will provide the answer in a succinct AI-generated note, right?
The write up presents this chunk of wisdom from a person keen on getting lots of AI people aware of Google’s AI prowess:
The neuroscientist and former chess prodigy said artificial general intelligence—a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can—could arrive within a decade…. Hassabis emphasized the need for “meta-skills,” such as understanding how to learn and optimizing one’s approach to new subjects, alongside traditional disciplines like math, science and humanities.
This means reading poetry, preferably Greek poetry. The Google super wizard’s father is “Greek Cypriot.” (Cyprus is home base for a number of interesting financial operations and the odd intelware outfit. Which part of Cyprus is which? Google Maps may or may not answer this question. Ask your Google Pixel smartphone to avoid an unpleasant mix-up.)
The write up adds this courteous note:
[Greek Prime Minister Kyriakos] Mitsotakis rescheduled the Google Big Brain to “avoid conflicting with the European basketball championship semifinal between Greece and Turkey. Greece later lost the game 94-68.”
Will lifelong learning skills help the Greek basketball team win against a formidable team like Turkey?
Sure, if Google says it, you know it is true just like eating rocks or gluing cheese on pizza. Learn now.
Stephen E Arnold, September 24, 2025
Professor Goes Against the AI Flow
September 17, 2025
One thing has Cornell professor Kate Manne dreading the upcoming school year: AI. On her Substack, “More to Hate,” the academic insists, “Yes, It Is Our Job as Professors to Stop our Students Using ChatGPT.” Good luck with that.
Manne knows even her students who genuinely love to learn may give in to temptation when faced with an unrelenting academic schedule. She cites the observations of sociologist Tressie McMillan Cottom as she asserts young, stressed-out students should not bear that burden. The responsibility belongs, she says, to her and her colleagues. How? For one thing, she plans to devote precious class time to having students hand-write essays. See the write-up for her other ideas. It will not be easy, she admits, but it is important. After all, writing assignments are about developing one’s thought processes, not the finished product. Turning to ChatGPT circumvents the important part. And it is sneaky. She writes:
“Again, McMillan Cottom crystallized this perfectly in the aforementioned conversation: learning is relational, and ChatGPT fools you into thinking that you have a relationship with the software. You ask it a question, and it answers; you ask it to summarize a text, and it offers to draft an essay; you request it respond to a prompt, using increasingly sophisticated constraints, and it spits out a response that can feel like your own achievement. But it’s a fake relationship, and a fake achievement, and a faulty simulacrum of learning. It’s not going to office hours, and having a meeting of the minds with your professor; it’s not asking a peer to help you work through a problem set, and realizing that if you do it this way it makes sense after all; it’s not consulting a librarian and having them help you find a resource you didn’t know you needed yet. Your mind does not come away more stimulated or enriched or nourished by the endeavor. You yourself are not forging new connections; and it makes a demonstrable difference to what we’ve come to call ‘learning outcomes.’”
Is it even possible to keep harried students from handing in AI-generated work? Manne knows she is embarking on an uphill battle. But to her, it is a fight worth having. Saddle up, Donna Quixote.
Cynthia Murrell, September 17, 2025
American Illiteracy: Who Is Responsible?
September 11, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
I read an essay I found quite strange. “She Couldn’t Read Her Own Diploma: Why Public Schools Pass Students but Fail Society” is from what seems to be a financial information service. This particular essay is written by Tyler Durden and carries the statement, “Authored by Hannah Frankman Hood via the American Institute for Economic Research (AIER).” Okay, two authors. Who wrote what?
The main idea seems to be that a student from Hartford, Connecticut (a city founded by one of my ancestors), graduated with honors but is unable to read. How did she pull off the “honors” label? Answer: She used “speech to text apps to help her read and write essays.”
Now the high school graduate seems to be in the category of “functional illiteracy.” The write up says:
To many, it may be inconceivable that teachers would continue to teach in a way they know doesn’t work, bowing to political pressure over the needs of students. But to those familiar with the incentive structures of public education, it’s no surprise. Teachers unions and public district officials fiercely oppose accountability and merit-based evaluation for both students and teachers. Teachers’ unions consistently fight against alternatives that would give students in struggling districts more educational options. In attempts to improve ‘equity,’ some districts have ordered teachers to stop giving grades, taking attendance, or even offering instruction altogether.
This may be a shock to some experts, but one of my recollections of my youth was my mother reading to me. I did not know that some people did not have a mother and father, both high school graduates, who read books, magazines, and newspapers. For me, it was books.
I was born in 1944, and I recall heading to kindergarten and knowing the alphabet, how to print my name (no, it was not “loser”), and being able to read words like Topps (a type of bubble gum with pictures of baseball players in the package), Coca Cola, and the “MD” on my family doctor’s sign. (I had no idea how to read “McMorrow,” but I could identify the letters.)
The “learning to read” skill seemed to take hold because my mother and sometimes my father would read to me. My mother and I would walk to the library about a mile from our small rented house on East Wilcox Avenue. She would check out books for herself and for me. We would walk home, and I would “read” one of my books. When I couldn’t figure out a word, I asked her. This process continued until we moved to Washington, DC, when I was in the third grade. When we moved to Campinas, Brazil, my father bought a set of World Books and told me to read them. My mother helped me when I encountered words or information I did not understand. Campinas was a small town in the 1950s. I had my Calvert Correspondence course and the set of blue World Book Encyclopedias.
When we returned to the US, I entered the seventh grade. I am not sure I had much formal instruction in reading, phonics, word recognition, or the “normal” razzle dazzle of education. I just started classes and did okay. As I recall, I was in the advanced class, and the others in that group would stay together throughout high school in central Illinois.
My view is probably controversial, but I will share it in response to this essay by two people who seem to be worried about teachers not teaching students how to read. Here goes:
- Young children are curious. When exposed to books and a parent who reads and explains meanings, the child learns. The young child’s mind is remarkable in its baked-in ability to associate, discern patterns, learn language, and figure out that Coca Cola is a drink parents don’t often provide.
- A stable family which puts an emphasis on reading, even though the parents are not college educated, makes reading part of the furniture of life. Mobile phones and smart software cannot replicate the interaction between a parent and child involved in reading, printing letters, and figuring out that MD means weird Dr. McMorrow.
- Once reading becomes a routine function, normal curiosity fuels knowledge acquisition. This may not be true for some people, but in my experience it works. Parents read; child reads.
When the family unit does not place emphasis on reading for whatever reason, the child fails to develop some important mental capabilities. Once that loss takes place, it is very difficult to replace it with each passing year.
Teachers alone cannot do this job. School provides a setting for a certain type of learning. If one cannot read, one cannot learn what schools afford. Years ago, I had responsibility for setting up and managing a program at a major university to help disadvantaged students develop skills necessary to succeed in college. I had experts in reading, writing, and other subjects. We developed our own course materials; for example, we pioneered the use of major magazines and lessons built around topics of interest to large numbers of Americans. Our successes came from instructors who found a way to replicate the close interaction and support of a parent-child reading experience. The failures came from students who did not feel comfortable with that type of one to one interaction. Most came from broken families, and the result of not having a stable, knowledge-oriented family slammed on the learning and reading brakes.
Based on my experience with high school and college age students, I never was and never will be a person who believes that a device or a teacher with a device can replicate the parent-child interaction that normalizes learning and instills value via reading. That means that computers, mobile phones, digital tablets, and smart software won’t and cannot do the job that parents have to do when the child is very young.
When the child enters school, a teacher provides a framework and delivers information tailored to the physical and hopefully mental age of the student. Expecting the teacher to remediate a parenting failure in the child’s first five to six years of life is just plain crazy. I don’t need economic research to explain the obvious.
This financial write up strikes me as odd. The literacy problem is not new. I was involved in trying to create a solution in the late 1960s. Now decades later, financial writers are expressing concern. Speedy, right? My personal view is that a large number of people who cannot read, understand, and think critically will make an orderly social construct very difficult to achieve.
I am now 80 years old. How can an online publication produce an essay with two different authors and confuse me with yip yap about teaching methods? Why not disagree about the efficacy of Grok versus Gemini? Just be happy with illiterates who can talk to Copilot to generate Excel spreadsheets about the hockey stick payoffs from smart software.
I don’t know much. I do know that I am a dinobaby, and I know my ancestor who was part of the group who founded Hartford, Connecticut, would not understand how his vision of the new land jibes with what the write up documents.
Stephen E Arnold, September 11, 2025

