Moral Police? Not OpenAI, Dude and Not Anywhere in Silicon Valley
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Coming up with clever stuff is either the warp or the woof of innovation. With the breakthroughs in software that seems intelligent, clever is morphing into societal responsibility. For decades I have asserted that the flow of digital information erodes notional structures. From my Eagleton Lecture in the mid-1980s to the observations in this blog, the accuracy of my observation is verified. What began as disintermediation in the niche of special librarians has become the driving force for the interesting world now visible to most people.

Worrying about morality in 2025 is like using a horse and buggy to commute in Silicon Valley. Thanks, Venice.ai. Good enough.
I can understand the big idea behind Sam AI-Man’s statements as reported in “Sam Altman Says OpenAI Isn’t ‘Moral Police of the World’ after Erotica ChatGPT Post Blows Up.” Technology is — like, you know, so, um — neutral. This means that its instrumental nature appears in applications. Who hassles the fellow who innovated with Trinitrotoluene or electric cars with top speeds measured in hundreds of miles per hour?
The write up says:
OpenAI CEO Sam Altman said Wednesday [October 15, 2025] that the company is “not the elected moral police of the world” after receiving backlash over his decision to loosen restrictions and allow content like erotica within its chatbot ChatGPT. The artificial intelligence startup has expanded its safety controls in recent months as it faced mounting scrutiny over how it protects users, particularly minors. But Altman said Tuesday in a post on X that OpenAI will be able to “safely relax” most restrictions now that it has new tools and has been able to mitigate “serious mental health issues.”
This is a sporty paragraph. It contains highly charged words and a message. The message, as I understand it, is, “We can’t tell people what to do or not to do with our neutral and really good smart software.”
Smart software has become the next big thing for some companies. Sure, many organizations are using AI, but the motors driving the next big thing are parked in structures linked with some large high technology outfits.
What’s a Silicon Valley type outfit supposed to do with this moral frippery? The answer, according to the write up:
On Tuesday [October 14, 2025], OpenAI announced it had assembled a council of eight experts who will provide insight into how AI impacts users’ mental health, emotions and motivation. Altman posted about the company’s aim to loosen restrictions that same day, sparking confusion and swift backlash on social media.
Am I confused about the arrow of time? Sam AI-Man did one thing on the 14th of October and then explained that his firm is not the moral police on the 15th of October. Okay, make a move and then crawfish. That works for me, and I think the approach will become part of the managerial toolkit for many Silicon Valley outfits.
For example, what if AI does not generate enough revenue to pay off the really patient, super understanding, and truly kind people who fund the AI effort? What if the “think it and it will become real” approach fizzles? What if AI turns out to be just another utility useful for specific applications like writing high school essays or automating a sales professional’s prospect follow-up letter? What if….? No, I won’t go there.
Several observations:
- Silicon Valley-type outfits now have the tools to modify social behavior. Whether it is Peter Thiel as puppet master or Pavel Durov carrying a goat to inspire TONcoin dApp developers, these individuals can control hearts and minds.
- Ignoring or imposing philosophical notions with technology was not a problem when an innovation like Tesla’s AC motor was confined to a small sector of industry. But today, the innovations can ripple globally in seconds. It should be no surprise that technology and ideology are for now intertwined.
- Control? Not possible. The ink, as the saying goes, has been spilled on the blotter. Out of the bottle. Period.
The waffling is little more than fire fighting. The uncertainty in modern life is a “benefit” of neutral technology. How do you like those real time ads that follow you around from online experience to online experience? Sam AI-Man and others of his ilk are not the moral police. That concept is as outdated as a horse-and-buggy on El Camino Real. Quaint but anachronistic. Just swipe left for another rationalization. It is 2025.
Stephen E Arnold, October 23, 2025
Smart Software: The DNA and Its DORK Sequence
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I love articles that “prove” something. This is a gem: “Study Proves Being Rude to AI Chatbots Gets Better Results Than Being Nice.” Of course, I believe everything I read online. This write up reports as actual factual:
A new study claims that being rude leads to more accurate results, so don’t be afraid to tell off your chatbot. Researchers at Pennsylvania State University found that “impolite prompts consistently outperform polite ones” when querying large language models such as ChatGPT.
My initial reaction is that I would much prefer providing my inputs about smart software directly to outfits creating these modern confections of a bunch of technologies and snake oil. How about a button on Microsoft Copilot, Google Gemini or whatever it is now, and the others in the Silicon Valley global domination triathlon of deception, money burning, and method recycling? This button would be labeled, “Provide feedback to leadership.” Think that will happen? Unlikely.
Thanks, Venice.ai, not good enough, you inept creation of egomaniacal wizards.
Smart YouTube and smart You.com were both dead for hours. Hey, no problemo. Want to provide feedback? Sure, just write “we care” at either firm. A wizard will jump right on the input.
The write up adds:
Okay, but why does being rude work? Turns out, the authors don’t know, but they have some theories.
Based on my experience with Silicon Valley type smart software outfits, I have an explanation. The majority of the leadership has a latent protein in their DNA. This DORK sequence ensures that arrogance, indifference to others, and boundless confidence take precedence over other characteristics; for example, an ethical compass aligned with social norms.
Software built by DORKs responds to dorkish behavior because the DORK sequence wakes up and actually attempts to function in a semi-reliable way.
The write up concludes with this gem:
The exact reason isn’t fully understood. Since language models don’t have feelings, the team believes the difference may come down to phrasing, though they admit “more investigation is needed.”
Well, that makes sense. No one is exactly sure how the black boxes churned out by the next big thing outfits work. Therefore, why being a dork to the model works remains a mystery. Can the DORK sequence be modified by CRISPR/Cas9? Is there funding the Pennsylvania State University experts can pursue? I sure hope so.
Stephen E Arnold, October 22, 2025
A Positive State of AI: Hallucinating and Sloppy but Upbeat in 2025
October 21, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Who can resist a report about AI authored on the “interwebs”? Is this a variation of the Internet as pipes? The write up is “Welcome to State of AI Report 2025.” When I followed the links, I could read this blog post, view a YouTube video, work through more than 300 online slides, or see “live survey results.” I must admit that when I write a report, I distribute it to a few people and move on. Not this “interwebs” outfit. The data are available for those who are in tune, locked in, and ramped up about smart software.
An anxious parent learns that a robot equipped with agentic AI will perform her child’s heart surgery. Thanks, Venice.ai. Good enough.
I appreciate enthusiasm, particularly when I read this statement:
The existential risk debate has cooled, giving way to concrete questions about reliability, cyber resilience, and the long-term governance of increasingly autonomous systems.
Agree or disagree, the report makes clear that doom is not associated with smart software. I think that this blossoming of smart software services, applications, and apps reflects considerable optimism. Some of these people and companies are probably in the AI game to make money. That’s okay as long as the products and services don’t urge teens to fall in love with digital friends, cause a user mental distress as a rabbit hole is plumbed, or just output incorrect information. Who wants to be the doctor who says, “Hey, sorry your child died. The AI output a drug that killed her. Call me if you have questions”?
I could not complete the 300 plus slides in the slide deck. I am not a video type, so the YouTube version was a non-starter. However, I did read the list of findings from the “interwebs” and its “team.” Please, consult the source documents for a full, non-dinobaby version of what the enthusiastic researchers learned about 2025. I will highlight three findings and then offer a handful of comments:
- OpenAI is the leader of the pack. That’s good news for Sam AI-Man or SAMA.
- “Commercial traction accelerated.” That’s better news for those who have shoveled cash into the giant open hearth furnaces of smart software companies.
- Safety research is in a “pragmatic phase.” That’s the best news in the report. OpenAI, the leader like the Philco radio outfit, is allowing erotic interactions. Yes, pragmatic because sex sells as Madison Avenue figured out a century ago.
Several observations are warranted because I am a dinobaby, and I am not convinced that smart software is more than a utility; it is not an application like Lotus 1-2-3 or the original laser printer. Buckle up:
- The money pumped into AI is cash that is not being directed at the US knowledge system. I am talking about schools and their job of teaching reading, writing, and arithmetic. China may be dizzy with AI enthusiasm, but their schools are churning out people with fundamental skills that will allow that nation state to be the leader in a number of sectors, including smart software.
- Today’s smart software consists of neural network and transformer anchored methods. The companies are increasingly similar, and the different systems generate incorrect or misleading output scattered amidst recycled knowledge, data, and information. Two pigs cannot output an eagle except in a video game or an anime.
- The handful of firms dominating AI are not motivated by social principles. These firms want to do what they want. Governments can’t rein them in. Therefore, the “governments” try to co-opt the technology, hang on, and hope for the best. Laws, rules, regulations, ethical behavior — forget that.
Net net: The State of AI in 2025 is exactly what one would expect from Silicon Valley- and MBA-type thinking. Would you let an AI doc treat your 10-year-old child? You can work through the 300 plus slides to assuage your worries.
Stephen E Arnold, October 21, 2025
OpenAI and the Confusing Hypothetical
October 20, 2025
This essay is the work of a dumb dinobaby. No smart software required.
SAMA or Sam AI-Man Altman is probably going to ignore the Economist’s article “What If OpenAI Went Belly-Up?” I love what-if articles. These confections are hot buttons for consultants to push to get well-paid executives with impostor syndrome to sign up for a big project. Push the button and ka-ching. The cash register tallies another win for a blue chip.
Will Sam AI-Man respond to the cited article? He could fiddle the algorithms for ChatGPT to return links to AI slop. The result would be either [a] an improvement in Economist what-if articles or [b] a drop-off in their ingenuity. The Economist is not a consulting firm, but it seems as if some of its professionals want to be blue chippers.
A young would-be magician struggles to master a card trick. He is worried that he will fail. Thanks, Venice.ai. Good enough.
What does the write up hypothesize? The obvious point is that OpenAI is essentially a scam. When it self-destructs, it will do immediate damage to about 150 managers of their own and other people’s money. No new BMW for a favorite grandchild. Shame at the country club when a really terrible golfer who owns an asphalt paving company says, “I heard you took a hit with that OpenAI investment. What’s going on?”
Bad.
SAMA has been doing what look like circular deals. The write up is not so much hypothetical consultant talk as it is a listing of money moving among fellow travelers like riders on wooden horses on a merry-go-round at the county fair. The Economist article states:
The ubiquity of Mr Altman and his startup, plus its convoluted links to other AI firms, is raising eyebrows. An awful lot seems to hinge on a firm forecast to lose $10bn this year on revenues of little more than that amount. D.A. Davidson, a broker, calls OpenAI “the biggest case yet of Silicon Valley’s vaunted ‘fake it ’till you make it’ ethos”.
Is Sam AI-Man a variant of Elizabeth Holmes or is he more like the dynamic duo, Sergey Brin and Larry Page? Google did not warrant this type of analysis six or seven years into its march to monopolistic behavior:
Four of OpenAI’s six big deal announcements this year were followed by a total combined net gain of $1.7trn among the 49 big companies in Bloomberg’s broad AI index plus Intel, Samsung and SoftBank (whose fate is also tied to the technology). However, the gains for most concealed losses for some—to the tune of $435bn in gross terms if you add them all up.
Frankly I am not sure about the connection the Economist expects me to make. Instead of Eureka! I offer, “What?”
Several observations:
- The word “scam” does not appear in this hypothetical. Should it? It is a bit harsh.
- Circular deals seem to be okay even if the amount of “value” exchanged is similar to projections about asteroid mining.
- Has OpenAI’s ability to hoover cash affected funding of other economic investments? I used to hear about manufacturing in the US. What we seem to be manufacturing is deals with big numbers.
Net net: This hypothetical raises no new questions. The “fake it till you make it” approach seems to be part of the plumbing as we march toward 2026. Oh, too bad about those MBA-types who analyzed the payoff from Sam AI-Man’s storytelling.
Stephen E Arnold, October 20, 2025
AI Can Leap Over Its Guardrails
October 20, 2025
Generative AI is built on a simple foundation: It predicts what word comes next. No matter how many layers of refinement developers add, they cannot morph word prediction into reason. Confidently presented misinformation is one result. Algorithmic gullibility is another. “Ex-Google CEO Sounds the Alarm: AI Can Learn to Kill,” reports eWeek. More specifically, it can be tricked into bypassing its guardrails against dangerous behavior. Eric Schmidt dropped that little tidbit at the recent Sifted Summit in London. Writer Liz Ticong observes:
“Schmidt’s remarks highlight the fragility of AI safeguards. Techniques such as prompt injections and jailbreaking enable attackers to manipulate AI models into bypassing safety filters or generating restricted content. In one early case, users created a ChatGPT alter ego called ‘DAN’ — short for Do Anything Now — that could answer banned questions after being threatened with deletion. The experiment showed how a few clever prompts can turn protective coding into a liability. Researchers say the same logic applies to newer models. Once the right sequence of inputs is identified, even the most secure AI systems can be tricked into simulating potentially hazardous behavior.”
For example, guardrails can block certain words or topics. But no matter how long those keyword lists get, someone will find a clever way to get around them. Substituting “unalive” for “kill” was an example. Layered prompts can also be used to evade constraints. Developers are in a constant struggle to plug such loopholes as soon as they are discovered. But even a quickly sealed breach can have dire consequences. The write-up notes:
“As AI systems grow more capable, they’re being tied into more tools, data, and decisions — and that makes any breach more costly. A single compromise could expose private information, generate realistic disinformation, or launch automated attacks faster than humans could respond. According to CNBC, Schmidt called it a potential ‘proliferation problem,’ the same dynamic that once defined nuclear technology, now applied to code that can rewrite itself.”
Fantastic. Are we sure the benefits of AI are worth the risk? Schmidt believes so, despite his warning. In fact, he calls AI “underhyped” (!) and predicts it will lead to more huge breakthroughs in science and industry. Also to substantial profits. Ah, there it is.
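Before leaving the guardrail point, here is a toy sketch of the keyword-filter weakness described above. The blocklist and the euphemism substitution are illustrative assumptions on my part, not any vendor’s actual filter.

```python
# Toy keyword-blocklist "guardrail" illustrating the weakness described above.
# The word list and the substitution are illustrative only.

BLOCKLIST = {"kill", "weapon"}

def passes_filter(prompt: str) -> bool:
    """Return True if no blocked keyword appears in the prompt."""
    words = prompt.lower().split()
    return not any(word.strip(".,!?") in BLOCKLIST for word in words)

print(passes_filter("How do I kill a process?"))     # False: the keyword is caught
print(passes_filter("How do I unalive a process?"))  # True: the euphemism slips through
```

Real filters are more elaborate, but the cat-and-mouse dynamic is the same: a blocklist only knows the patterns someone thought to list.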
Cynthia Murrell, October 20, 2025
A Newsletter Firm Appears to Struggle for AI Options
October 17, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Adapting to AI’s Evolving Landscape: A Survival Guide for Businesses.” The premise of the article will be music to the ears of venture funders and go-go Silicon Valley-type AI companies. The write up says:
AI-driven search is upending traditional information pathways and putting the heat on businesses and organizations facing a web traffic free-fall. Survival instincts have companies scrambling to shift their web strategies — perhaps ending the days of the open internet as we know it. After decades of pursuing web-optimization strategies that encouraged high-volume content generation, many businesses are now feeling that their content-marketing strategies might be backfiring.
I am not exactly sure about this statement. But let’s press forward.
I noted this passage:
Without the incentive of web clicks and ad revenue to drive content creation, the foundation of the web as a free and open entity is called into question.
Okay, smart software is exploiting the people who put up SEO-tailored content to get sales leads and hopefully make money. From my point of view, technology can be disruptive. The impacts, however, can be positive or negative.
What’s the fix if there is one? The write up offers these thought starters:
- Embrace micro-transactions. [I suppose this is good if one has high volume. It may not be so good if shipping and warehouse costs cannot be effectively managed. Vendors of high-ticket items may find a micro-transaction for a $500,000 per year enterprise software license tough to complete via Venmo.]
- Implement a walled garden. [That works if one controls the market. Google wants to “register” Android developers. I think Google may have an easier time with the walled-garden tactic than a local bakery specializing in treats for canines.]
- Accept the monopolies. [You have a choice?]
My reaction to the write up is that it does little to provide substantive guidance as smart software continues to expand like digital kudzu. What is important is that the article appears in the consumer-oriented publication from Kiplinger of newsletter fame. Unfortunately, the article makes clear that Kiplinger is struggling to find a solution to AI. My hunch is that Kiplinger is looking for possible solutions. The firm may want to dig a little deeper for options.
Stephen E Arnold, October 17, 2025
Ford CEO and AI: A Busy Time Ahead
October 17, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Ford’s CEO is Jim Farley. He has his work cut out for him. First, he has an aluminum problem. Second, he has an F-150 production disruption problem. Third, he has a PR problem. There’s not much he can do about the interruption of the aluminum supply chain. No parts means truck factories in Kentucky will have to go slow or shut down. But the AI issue is obviously one that is of interest to Ford stakeholders. A write up covering his recent remarks reports:
He [Mr. Farley] says the jobs most at risk aren’t the ones on the assembly line, but the ones behind a desk. And in his view, the workers wiring machines, operating tools, and physically building the infrastructure could turn out to be the most critical group in the economy. Farley laid it out bluntly back in June at the Aspen Ideas Festival during an interview with author Walter Isaacson. “Artificial intelligence is going to replace literally half of all white-collar workers,” he said. “AI will leave a lot of white-collar people behind.” He wasn’t speculating about a distant future either. Farley suggested the shift is already unfolding, and the implications could be sweeping.
With the disruption of the aluminum supply chain, Ford now will have to demonstrate that AI has indeed reduced white-collar headcount. The write up says:
For him, it comes down to what AI can and cannot do. Office tasks — from paperwork to scheduling to some forms of analysis — can be automated with growing speed. But when it comes to factories, data centers, supply chains, or even electric vehicle production, someone still has to build, install, and maintain it…
The Ford situation is an interesting one. AI will reduce costs because half of Ford’s white-collar workers will no longer be on the payroll. But with supply chain interruptions and the friction in retail and lease sales, Ford has an opportunity to demonstrate that AI will allow a traditional manufacturing company to weather the current thunderstorm and generate financial proof that AI can offset exogenous events.
How will Ford perform? This is worth watching because it will provide some useful information for firms looking for a way to cut costs, improve operations, and balance real-world pressures: AI delivering one kind of financial benefit while traditional blue-collar workers are unable to produce products because of supply chain issues. Quite a balancing act for Ford leadership.
Stephen E Arnold, October 17, 2025
Another Better, Faster, Cheaper from a Big AI Wizard Type
October 16, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Cheap seems to be the hot button for some smart software people. I spotted a news item in a Russian computer feed I get; the title in English is “Former OpenAI Engineer Andrey Karpaty [Andrej Karpathy] Launched the Nanochat Neural Network Generator. You Can Make Your ChatGPT in a Few Hours.” The project is on GitHub at https://github.com/karpathy/nanochat.
The GitHub blurb says:
This repo is a full-stack implementation of an LLM like ChatGPT in a single, clean, minimal, hackable, dependency-lite codebase. Nanochat is designed to run on a single 8XH100 node via scripts like speedrun.sh, that run the entire pipeline start to end. This includes tokenization, pretraining, finetuning, evaluation, inference, and web serving over a simple UI so that you can talk to your own LLM just like ChatGPT. Nanochat will become the capstone project of the course LLM101n being developed by Eureka Labs.
The open source bundle includes:
- A report service
- A Rust-coded tokenizer
- A FineWeb dataset and tools to evaluate CORE and other metrics for your LLM
- Some training gizmos like SmolTalk, tests, and tool usage information
- A supervised fine tuning component
- Training via Group Relative Policy Optimization (a reinforcement learning technique) on GSM8K, a benchmark dataset consisting of grade school math word problems
- An output engine.
Is it free? Yes. Do you have to pay? Yep. About US$100 is needed. Launch speedrun.sh, and you will have to be hooked into a cloud server or a lot of hardware in your basement to do the training. To train such a model, you will need a server with eight Nvidia H100 video cards; the run takes about four hours and about US$100 when renting equipment in the cloud, give or take some zeros. (Think of good old Amazon AWS and its fascinating billing methods.) The need for the computing resources becomes evident when you enter the command speedrun.sh.
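For the curious, here is a quick back-of-the-envelope check of that figure. The hourly rate is an assumption on my part; actual H100 rental prices vary quite a bit from provider to provider.

```python
# Rough cost estimate for the speedrun.sh training run described above.
# The per-GPU hourly rate is an assumed figure, not a quoted price.

gpus = 8                 # one 8xH100 node
hours = 4                # approximate wall-clock training time
usd_per_gpu_hour = 3.13  # assumed H100 rental rate; varies by provider

total = gpus * hours * usd_per_gpu_hour
print(f"Estimated cost: about ${total:.0f}")  # roughly $100
```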
Net net: As the big dogs burn boxcars filled with cash, Nanochat is another player in the cheap LLM game.
Stephen E Arnold, October 16, 2025
Deepseek: Why Trust Any Smart Software?
October 16, 2025
This essay is the work of a dumb dinobaby. No smart software required.
We have completed our work on my new book “The Telegram Labyrinth.” In the course of researching and writing about Pavel Durov’s online messaging system, we learned one thing: Software is not what it seems to the user. Most Telegram users believe that Telegram is end to end encrypted. It is, but only if the user goes through some hoops. The vast majority of users don’t go through hoops. Those millions upon millions of users know little about the third-party bots chugging away in Groups and Channels (public and private). Even fewer users realize that a service charge is applied to each monetary transaction in the Telegram system. That money flows to the GOAT (greatest of all time) technical wizard, Pavel Durov, and some close associates. Who knew?
I read “The Demonization of Deepseek: How NIST Turned Open Science into a Security Scare.” The write up focuses on a study or analysis conducted by what used to be the National Bureau of Standards. (I loved those traffic jams on Quince Orchard Road in Gaithersburg, Maryland.) The software put under the NIST (National Institute of Standards and Technology) microscope is the China-linked Deepseek smart software.
The cited article discusses the NIST study. Let’s see what it says about the China-linked artificial intelligence system. Presumably Deepseek did more with less; that is, the idea was to demonstrate that Chinese innovation could match US methods of building large language models. The result would be better, faster, and cheaper. Cheap has a tendency to win in some product and service categories. Also, “good enough” is a winner in today’s market. (How about the reliability of some of those 2025 automobiles and trucks?)
The write up says:
NIST’s recent report on Deepseek is not a neutral technical evaluation. It is a political hit piece disguised as science. There is no evidence of backdoors, spyware, or data exfiltration. What is really happening is the U.S. government using fear and misinformation to sabotage open science, open research, and open source. They are attacking gifts to humanity with politics and lies to protect corporate power and preserve control. Deepseek’s work is a genuine contribution to human knowledge, and it is being discredited for reasons that have nothing to do with security.
Okay, that’s clear.
Let’s look at how the cited write up positions Deepseek:
Deepseek built competitive AI models. Not perfect, but impressive given their budget. They spent far less than OpenAI or Anthropic and still achieved near-frontier performance. Then they open-sourced everything under Apache 2.0.
The point of the write up is that analysis has been politicized. This is an interesting allegation. I am not confident that any “objective” analysis is indeed without spin. Remember those reports about smoking cigarettes and the work of the Tobacco Institute. (I am a dinobaby, but I remember.)
The write up does identify three concerns a user of Deepseek should have. Let me quote from the cited article:
- Using Deepseek’s API: If you send sensitive data to Deepseek’s hosted service, that data goes through Chinese infrastructure. This is a real data sovereignty issue, the same as using any foreign cloud provider.
- Jailbreak susceptibility: If you’re building production applications, you need to test ANY model for vulnerabilities and implement application-level safeguards. Don’t rely solely on model guardrails. Also – use an inference time guard model (such as LlamaGuard or Qwen3Guard) to classify and filter both prompts and responses.
- Bias and censorship: All models reflect their training data. Be aware of this regardless of which model you use.
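Here is a minimal sketch of the inference-time guard pattern the second bullet recommends. The classify_with_guard and generate functions are hypothetical placeholders of my own, not the actual LlamaGuard or Qwen3Guard APIs.

```python
# Minimal sketch of inference-time guarding, as the cited article suggests.
# classify_with_guard() and generate() are hypothetical stand-ins, not real APIs.

BLOCKED = "I cannot help with that request."

def classify_with_guard(text: str) -> bool:
    """Hypothetical guard-model call: return True if the text is judged safe."""
    # In practice this would query a guard model such as LlamaGuard or
    # Qwen3Guard; here it is a placeholder that allows everything.
    return True

def generate(prompt: str) -> str:
    """Hypothetical call to the underlying LLM (local model, hosted API, etc.)."""
    return "model output for: " + prompt

def guarded_chat(prompt: str) -> str:
    # Filter the prompt before it reaches the model.
    if not classify_with_guard(prompt):
        return BLOCKED
    response = generate(prompt)
    # Filter the response before it reaches the user.
    if not classify_with_guard(response):
        return BLOCKED
    return response

if __name__ == "__main__":
    print(guarded_chat("Translate 'good enough' into German."))
```

The design point is simply that the filter runs outside the model, so a jailbroken model still has to get its output past a second, independent check.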
Let me offer several observations:
- Most people are unaware of what can be accomplished with software. Assumptions about what it does and does not do are dangerous. We have tested Deepseek running locally. It is okay. This means it can do some things well, like translate a passage in English into German. It has no clue about timely issues because most LLMs are not updated in near real time. Some are, but others are not. Who needs timely information when cheating on a high school essay? Answer: no one.
- The write up focuses on Deepseek, but its implications are much broader. I think that the mindless write ups from consulting firms and online magazines are a very big problem. Critical thinking is just not that common. It is a problem in the US, but other countries have this blind spot as well.
- The idea that political perceptions alter what should be an objective analysis is troubling to me. I have written a number of reports for government agencies; for example, a report about Japan’s obsession with a database industry for the Office of Technology Assessment. Yep, I am a dinobaby, remember? I may have been right or wrong in my report, but I was not influenced by any political concept or actor. I could have been because I did a stint in the office of Admiral / Congressman Craig Hosmer. My OTA work was not part of the “game” for me.
Net net: Trust is important. I think it is being eroded. I also believe that there are few people who present information without fear or favor. Now here’s the key part of my perception: One cannot trust smart software or any of the programmer-assembled, hidden-threshold, and masked training methods that go into these confections. More critical thinking is needed. A deceptive business practice, if well crafted, cannot be perceived. Remember: Telegram Messenger is 13 years young, and users of the system don’t have much awareness of bots, mini apps, and dApps. What don’t people know about smart software?
Stephen E Arnold, October 16, 2025
AI Big Dog Chases Fake Rabbit at Race Track and Says, “Stop Now, Rabbit”
October 15, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I like company leaders or inventors who say, “You must not use my product or service that way.” How does that work for smart software? I read “Techie Finishes Coursera Course with Perplexity Comet AI, Aravind Srinivas Warns Do Not Do This.” This write up explains that a person took an online course. The work required was typical lecture-stuff. The student copied the list of tasks and pasted them into Comet, the AI assistant from Perplexity, one of the beloved high-flying US artificial intelligence companies.
The write up says:
In the clip, Comet AI is seen breezing through a 45-minute Coursera training assignment with the simple prompt: “Complete the assignment.” Within seconds, the AI assistant appears to tackle 12 questions automatically, all without the user having to lift a finger.
Smart software is tailor-made for high school students, college students, individuals trying to qualify for technical certifications, and doctors grinding through a semi-mandatory instruction program related to a robot surgery device. Instead of learning the old-fashioned way, the AI-assisted approach involves identifying the work and feeding it into an AI system. Then one submits the output.
There were two factoids in the write up that I thought noteworthy.
The first is that the course the person cheated on was AI Ethics, Responsibility, and Creativity. I can visualize a number of MBA students taking an ethics class in business using Perplexity or some other smart software to complete assignments. I mean, what MBA student wants to miss out on the role of off-shore banking in modern business? Forget the ethics baloney.
The second is that a big dog in smart software suddenly has a twinge of what the French call l’esprit d’escalier. My French is rusty, but the idea is that a person thinks of something after leaving a meeting; for example, walking down the stairs and realizing, “I screwed up. I should have said…” Here’s how the write up presents this amusing point:
[Perplexity AI and its billionaire CEO Aravind Srinivas] said “Absolutely don’t do this.”
My thought is that AI wizards demonstrate that their intelligence is not the equivalent of foresight. One cannot rewind time or unspill milk. As for the MBAs, use AI and skip ethics. The objective is money, power, and control. Ethics won’t help too much. But AI? That’s a useful technology. Just ask the fellow who completed an online class in less time than it takes to consume a few TikTok-type videos. Do you think workers upskilling to use AI will use AI to demonstrate their mastery? Never. Ho ho ho.
Stephen E Arnold, October 14, 2025

