Worthless College Degrees. Hey, Where Is Mine?
July 4, 2025
Smart software involved in the graphic, otherwise just an addled dinobaby.
This write up is not about going “beyond search.” Heck, search has just changed adjectives and remains mostly a frustrating and confusing experience for employees. I want to highlight the information (which I assume to be 100 percent dead accurate like other free data on the Internet) about the “17 Most Useless College Degrees Employers Don’t Want Today.” Okay, high school seniors, pay attention. According to the estimable Finance Buzz, do not study these subjects and — heaven forbid — expect to get a job when you graduate from an online school, the local college, or a big-time, big-bucks university. I have grouped the write up’s earthworm list into some categories; to wit:
Do gooder work
- Criminal justice
- Education (Who needs an education when there is YouTube?)
Entertainment
- Fashion design
- Film, video, and photographic arts
- Music
- Performing arts
Information
- Advertising
- Creative writing (like Finance Buzz research articles?)
- Communications
- Computer science
- Languages (Emojis and some English are what is needed I assume)
Real losers
- Anthropology and archaeology (I thought these were different until Finance Buzz cleared up my confusion)
- Exercise science
- Religious studies
Waiting tables and working the midnight check in desk
- Culinary arts (Fry cook until the robots arrive)
- Hospitality (Smile and show people their table)
- Tourism (Do not fall into the volcano)
Assume the write up is providing verifiable facts. (I know, I know, this is the era of alternative facts.) If we flash forward five years, the already stretched resources for law enforcement and education will be in an even smaller pickle barrel. Good for the bad actors and the people who don’t want to learn. Perhaps less beneficial to others in society. I assume that one can make TikTok-type videos and generate a really bigly income until the Googlers change the compensation rules or TikTok is banned from the US. With the world awash in information and open source software available, who needs to learn anything? AI will do this work. Who in the heck gets a job in archaeology when one can learn from UnchartedX and Brothers of the Serpent? Exercise? Play football and get a contract when you are in middle school like talented kids in Brazil. And the cruise or specialty restaurant business? Those contracts are for six months for a reason. Plus cruise lines have started enforcing no-video rules on the staff who were trying to make day-in-my-life videos about the wonderful cruise ship environment. (Weren’t these vessels once called “prison ships”?) My hunch is that whoever assembled this stellar research at Finance Buzz was actually, but indirectly, writing about smart software and robots. These will decimate many jobs in the identified categories.
What should a person study? Nuclear physics, mathematics (applied and theoretical maybe), chemistry, biogenetics, materials science, modern financial management, law (aren’t there enough lawyers?), medicine, and psychology until the DRG codes are restricted.
Excellent way to get a job. And in what field was my degree? Medieval religious literature. Perfect for life-long employment as a dinobaby essayist.
Stephen E Arnold, July 4, 2025
Apple Fix: Just Buy Something That Mostly Works
July 4, 2025
No smart software involved. Just an addled dinobaby.
A year ago Apple announced AI, which means, of course, Apple Intelligence. Well, Apple was “held back.” In 2025, the powerful innovation machine made the iPhone and Macs look a bit like the Windows see-through motif. Okay.
I read “Apple Reportedly Has a Secret Plan to Quickly Gain Ground in the AI Race.” I won’t point out that if information is circulating AND appears in an article, that information is not secret. It is public relations and marketing output. Second, forget the split infinitive. Since few recognize that datum is singular and data is plural or that the word none is singular, I won’t mention it. Obviously few “real” journalists care.
Now to the write up. In my opinion, the big secret revealed and analyzed is …
Sources report that the company is giving serious consideration to bidding for the startup Perplexity AI, which would allow it to transplant a chunk of expertise and ready-made technology into Apple Park and leapfrog many of the obstacles it currently faces. Perplexity runs an AI-powered search engine which can already perform the contextual tricks which Apple advertised ahead of the iPhone 16 launch but hasn’t yet managed to build into Siri.
Analysis of this “secret” is a bit underwhelming. Here’s the paragraph that is supposed to make sense of this non-secret secret:
Historically, Apple has been wary of large acquisitions, whereas rivals, such as Facebook (buying WhatsApp for $22 billion) and Google (acquiring cloud security platform Wiz for $32 billion), have spent big to scoop up companies. It could be a mark of how worried Apple is about the AI situation that it’s considering such a major and out-of-character move. But after a year of headaches and obstacles, it also could pay off in a big way.
Okay, but what about Google acquiring Motorola? What about Microsoft’s clever purchase of Nokia? And there are other examples. Big companies buying other companies can work out or fizzle. Where is Dodgeball now? Orkut?
The actual issue strikes me as Apple’s failure to recognize that smart software — whether it works particularly well or not — was a marketing pony to ride in the technical circus. Microsoft got the message, and it seems that the marketing play triggered Google. But the Microsoft-OpenAI tie-up seems to be under a bit of stress as of June 2025.
Another problem is that buying AI requires that the purchaser manage the operation, ensure continued innovation of an order slightly more demanding than imitating a Windows interface, and keep the wizard huskies hooked to the dog sled.
What seems to be taking place is a division of the smart software world into three sectors:
- Companies that “do” large language models; for example, Google, OpenAI, and others
- Companies that “wrap” large language models and generate start ups that are presented as AI but are really just interfaces (see the sketch after this list)
- Companies that “integrate” or “glue on” AI to an existing service, platform, or system.
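To make the second category concrete, here is a minimal sketch of what “wrapping” looks like, assuming a hypothetical call_llm stand-in for whatever rented model API a start up uses:

```python
# Minimal sketch of a "wrapper" start up: no model of its own, just a
# prompt template and an interface around someone else's LLM.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever rented model API the start up uses.
    raise NotImplementedError("plug a rented model API in here")

def legal_brief_assistant(case_facts: str) -> str:
    # The entire "product" is this template plus the pass-through call.
    prompt = (
        "You are a paralegal. Summarize these case facts in plain English:\n"
        + case_facts
    )
    return call_llm(prompt)
```

The whole business is the template and the interface; the intelligence is rented.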
Apple failed at number 1. It hasn’t invented anything in the AI world. (I think I learned about Siri in a Stanford Research Institute presentation many, many years ago. No, it did not work particularly well even in the demo.)
Apple is not too good at wrapping anything. Safari doesn’t wrap. Safari blazes its own weird trail, which is okay for those who love Apple software. Someone like me finds it annoying.
Apple has demonstrated that it could not “glue on” Siri.
Okay, Apple has not scored a home run with approach one, two, or three.
Thus, the analysis, in my opinion, is that Apple, like some other outfits, now realizes smart software — whether or not it is 100 percent reliable — continues to generate buzz. The task for Apple, therefore, is to figure out how to convert whatever it does into buzz. One, skip the cost of invention. Two, sidestep wrapping AI and look for “partners” who do what department stores did in the 1950s: wrap my holiday gifts. And, three, try to make “glue on” work.
Net net: Will Apple undertake an auto-da-fé and see the light?
Stephen E Arnold, July 4, 2025
Read This Essay and Learn Why AI Can Do Programming
July 3, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I found, entirely by accident since Web search does not work too well, an essay titled “Ticket-Driven Development: The Fastest Way to Go Nowhere.” I would have used a different title; for example, “Smart Software Can Do Faster and Cheaper Code” or “Skip Computer Science. Be a Plumber.” Despite my lack of good vibe coding from the essay’s title, I did like the information in the write up. The basic idea is that managers just want throughput. This is not news.
The most useful segment of the write up is this passage:
You don’t need a process revolution to fix this. You need permission to care again. Here’s what that looks like:
- Leave the code a little better than you found it — even if no one asked you to.
- Pair up occasionally, not because it’s mandated, but because it helps.
- Ask why. Even if you already know the answer. Especially then.
- Write the extra comment. Rename the method. Delete the dead file.
- Treat the ticket as a boundary, not a blindfold.
Because the real job isn’t closing tickets; it’s building systems that work.
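The first and fourth dot points are the easiest to picture. Here is a minimal sketch, with hypothetical names, of what “leave the code a little better” can mean in practice:

```python
# Before: the ticket said "apply the surcharge," and that is all that happened.
def proc(d):
    return d["amt"] * 1.2

# After: same behavior, left a little better than it was found.
SURCHARGE_RATE = 1.2  # hypothetical rate; the point is the named constant

def apply_surcharge(order: dict) -> float:
    """Return the order amount with the standard surcharge applied."""
    return order["amt"] * SURCHARGE_RATE
```

The renamed method and the named constant cost minutes; the ticket closes either way.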
I wish to offer several observations:
- Repetitive, boring, mindless work is perfect for smart software
- Implementing dot points one to five will result in a reprimand, a transfer to a salubrious location, or termination with extreme prejudice
- Expect to spend long hours with an AI version of an old-fashioned psychiatrist because you will go crazy.
After reading the essay, I realized that the managerial approach, the “ticket-driven workflow,” and the need for throughput apply to many jobs. Leadership no longer has middle managers who manage. When leadership intervenes, one gets [a] consultants or [b] knee-jerk decisions or mandates.
The crisis is in organizational set up and management. The developers? Sorry, you have been replaced. Say, “hello” to our version of smart software. Her name is No Kidding.
Stephen E Arnold, July 3, 2025
AI Management: Excellence in Distancing Decisions from Consequences
July 2, 2025
Smart software involved in the graphic, otherwise just an addled dinobaby.
This write up “Exclusive: Scale AI’s Spam, Security Woes Plagued the Company While Serving Google” raises two minor issues and one that is not called out in the headline or the subtitle:
$14 billion investment from Meta struggled to contain ‘spammy behavior’ from unqualified contributors as it trained Gemini.
Who can get excited about a workflow and editorial quality issue? What is “quality”? In one of my Google monographs I pointed out that Google used at one time a number of numerical recipes to figure out “quality.” Did that work? Well, it was good enough to help get the Yahoo-inspired Google advertising program off the ground. Then quality became like those good brownies from 1953: stuffed with ingredients no self-respecting Stanford computer science graduate would eat for lunch.
I believe some caution is required when trying to understand a very large and profitable company from someone who is no longer working at the company. Nevertheless, the article presents a couple of interesting assertions and dodges what I consider the big issue.
Consider this statement in the article:
In a statement to Inc., Scale AI spokesperson Joe Osborne said: “This story is filled with so many inaccuracies, it’s hard to keep track. What these documents show, and what we explained to Inc ahead of publishing, is that we had clear safeguards in place to detect and remove spam before anything goes to customers.” [Editor’s Note: “this” means the rumor that Scale cut corners.]
The story is that a process included data that would screw up the neural network.
And the security issue? I noted this passage:
The [spam] episode raises the question of whether or not Google at one point had vital data muddied by workers who lacked the credentials required by the Bulba program. It also calls into question Scale AI’s security and vetting protocols. “It was a mess. They had no authentication at the beginning,” says the former contributor. [Editor’s Note: Bulba means “Bard.”]
A person reading the article might conclude that Scale AI was a corner-cutting outfit. I don’t know. But when big money starts to flow and more can be turned on, some companies just do what’s expedient. The signals in this Scale example are the put-the-pedal-to-the-metal approach to process and the information that people knew bad data was getting pumped into Googzilla.
But what’s the big point that’s missing from the write up? In my opinion, Google management made a decision to rely on Scale. Then Google management distanced itself from the operation. In the good old days of US business, when blue-suited, informed middle managers pursued quality, some companies would have spotted the problems and ridden herd on the subcontractor.
Google did not do this in an effective manner.
Now Scale AI is beavering away for Meta, which may be an unexpected win for the Google. Will Meta’s smart software begin to make recommendations like “glue your cheese on the pizza”? My personal view is that I now know why Google’s smart software has been more about public relations and marketing than about delivering something that is crystal clear about its product lineup, output reliability, and hallucinatory behaviors.
At least Google management can rely on its DeepMind unit to revolutionize understanding of the human genome. Will the company manage in as effective a manner as its marketing department touts its achievements?
Stephen E Arnold, July 2, 2025
Microsoft and OpenAI: An Expensive Sitcom
July 1, 2025
No smart software involved. Just an addled dinobaby.
I remember how clever I thought the book title “Who Says Elephants Can’t Dance?: Leading a Great Enterprise Through Dramatic Change” was. I find the break dancing contest between Microsoft and OpenAI even more amusing. Bloomberg “real” news reported that Microsoft is “struggling” to sell its Copilot solutions. Why? Those Microsoft customers want OpenAI’s ChatGPT. That’s a hoot.
Computerworld adds more Monty Python twists to this side show: “Microsoft and OpenAI: Will They Opt for the Nuclear Option?” (I am not too keen on the use of the word “nuclear.” People bandy it about without understanding exactly what the actual consequences of such an option are. Please, do a bit of homework before suggesting that two enterprises are doing anything remotely similar.)
The estimable Computerworld reports:
Microsoft needs access to OpenAI technologies to keep its worldwide lead in AI and grow its valuation beyond its current more than $3.5 trillion. OpenAI needs Microsoft to sign a deal so the company can go public via an IPO. Without an IPO, the company isn’t likely to keep its highly valued AI researchers — they’ll probably be poached by companies willing to pay hundreds of millions of dollars for the talent.
The problem seems to be that Microsoft is trying to sell its version of smart software. The enterprise customers and even dinobabies like myself prefer the hallucinatory and unpredictable ChatGPT to the downright weirdness of Copilot in Notepad. The Computerworld story says:
Hovering over it all is an even bigger wildcard. Microsoft’s and OpenAI’s existing agreement dramatically curtails Microsoft’s rights to OpenAI technologies if the technologies reach what is called artificial general intelligence (AGI) — the point at which AI becomes capable of human reasoning. AGI wasn’t defined in that agreement. But Altman has said he believes AGI might be reached as early as this year.
People cannot agree over beach rights and school taxes. The smart software (which may remain without regulation for a decade) is a much bigger deal. The dollars at stake are huge. Most people do not know that a Board of Directors for a Fortune 1000 company will spend more time arguing about parking spaces than a $300 million acquisition. The reason? Most humans cannot conceive of the numbers of dollars associated with artificial intelligence. If the AI next big thing does not work, quite a few outfits are going to be selling snake oil from tables at flea markets.
Here’s the humorous twist from my vantage point. Microsoft itself kicked off the AI boom with its announcements a couple of years ago. Google, already wondering how it can keep the money gushing to pay the costs of simply being Google, short-circuited and hit the switch for Code Red, Yellow, Orange, and probably the color only five people on earth have ever seen.
And what’s happened? The Google-spawned methods aren’t eliminating hallucinations. The OpenAI methods are not eliminating hallucinations. The improvements are more and more difficult to explain. Meanwhile, start ups are doing interesting things with AI systems that are good enough for certain use cases. I particularly like consulting and investment firms using AI to get rid of MBAs.
The punch line for this joke is that the Microsoft version of ChatGPT seems to have more brand deliciousness. Microsoft linked with OpenAI, created its own “line of AI,” and now finds that the frisky money burner OpenAI is more popular and can just define artificial general intelligence to its liking and enjoy the philosophical discussions among AI experts and lawyers.
One cannot make this sequence up. Jack Benny’s radio scripts came close, but I think the Microsoft-OpenAI program is a prize winner.
Stephen E Arnold, July 1, 2025
Add On AI: Sounds Easy, But Maybe Just a Signal You Missed the Train
June 30, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
I know about Reddit. I don’t post to Reddit. I don’t read Reddit. I do know that like Apple, Microsoft, and Telegram, the company is not a pioneer in smart software. I think it is possible to bolt on Item Z to Product B. Apple pulled this off with the Mac and laser printer bundle. Result? Desktop publishing.
Can Reddit pull off a desktop publishing-type of home run? Reddit sure hopes it can (just like Apple, Microsoft, and Telegram, et al).
“At 20 Years Old, Reddit Is Defending Its Data and Fighting AI with AI” says:
Reddit isn’t just fending off AI. It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others’ web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week. Huffman has been pitching Reddit Answers as a best-of-both worlds tool, gluing together the simplicity of AI chatbots with Reddit’s corpus of commentary. He used the feature after seeing electronic music group Justice play recently in San Francisco.
The question becomes, “Will users who think of smart software as ChatGPT be happy with a Reddit AI which is an add-on?”
Several observations:
- If Reddit wants to pull a Web3 walled-garden play, the company may have lost the ability to lock its gate.
- ChatGPT, according to my team, is what Microsoft Word and Outlook users want; what they get is Copilot. This is a mind share and perception problem the Softies have to figure out how to remediate.
- If the uptake of ChatGPT or something from the “glue cheese on pizza” outfit continues, Reddit may have to face a world similar to the one that shunned MySpace or Webvan.
- Reddit itself appears to be vulnerable to what I call content injection. The idea is that weaponized content like search engine optimization posts are posted (injected) to Reddit. The result is that AI systems suck in the content and “boost” the irrelevancy. (See the sketch after this list.)
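Here is a minimal sketch of the content injection idea, with made-up posts and a deliberately naive retriever standing in for whatever ranking signal an AI system actually uses:

```python
# Toy illustration: a retriever that scores posts by raw keyword overlap
# will boost an injected, keyword-stuffed SEO post over a genuine answer.
posts = [
    "Honest answer: power cycle the router and update the firmware.",
    "router fix best router fix fast router fix cheap router fix now",  # injected
]

def keyword_score(query: str, post: str) -> int:
    query_words = set(query.lower().split())
    return sum(1 for word in post.lower().split() if word in query_words)

query = "router fix"
ranked = sorted(posts, key=lambda post: keyword_score(query, post), reverse=True)
print(ranked[0])  # the stuffed post wins; an AI ingesting it "boosts" the junk
```

Real systems are less naive, but the attacker only has to find whichever signal the ranker rewards.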
My hunch is that an outfit like Reddit may find that its users prefer asking ChatGPT or migrating to one of the new Telegram-type services now being coded in Silicon Valley.
Like Yahoo, the portal to the Internet in the 1990s, Reddit may not have a front page that pulls users. A broader comment is that what I call “add-on AI” may not work because the outfits with the core technology and market pull will exploit, bulldoze, and undermine outfits which are at their core getting pretty old. We need a new truism, “When AIs fight, only the stakeholders get trampled.”
The truth may be more painful: Smart AI outfits can cause less smart outfits with AI bolted on to lose their value and magnetism for their core constituencies. Is there a fix? Nope, there is a cat-and-mouse game in which the attacker has the advantage.
Stephen E Arnold, June 30, 2025
Palantir Rattles the Windows in the Nuclear Industry
June 30, 2025
This dinobaby asked ChatGPT about Palantir. Interesting output.
“Palantir Partners to Develop AI Software for Nuclear Construction” caught me by surprise. I did not think the company founded in 2003 was an artificial intelligence company with expertise in the nuclear field. The cited article reports:
Palantir Technologies on Thursday [June 26, 2025] said it was teaming up with a nuclear deployment company to develop an artificial intelligence-driven software system built for the construction of nuclear reactors.
I have not paid much attention to Palantir, but I did own a small amount of Palantir stock, which I dumped.
Screenshot of the ChatGPT output on June 27, 2025, at 8:55 am US Eastern
Today I wanted to refresh my memory about the company. I navigated to OpenAI’s ChatGPT and plugged in key words and this is what the Sam Altman system generated for me. Note: It is unlikely that the ChatGPT system can output the same report because probability doesn’t work the way users of the system think it does.
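A minimal sketch of why the same prompt rarely produces the same report, assuming nothing more than the standard practice of sampling the next token from a temperature-scaled distribution (the token list and scores here are made up):

```python
import numpy as np

# Language models pick each next token by sampling from a probability
# distribution, so two runs of one prompt can diverge almost immediately.
rng = np.random.default_rng()
tokens = ["aggressive", "innovative", "secretive", "disciplined"]
logits = np.array([2.0, 1.5, 1.4, 0.3])  # hypothetical model scores

def sample_token(temperature: float) -> str:
    scaled = np.exp(logits / temperature)
    probabilities = scaled / scaled.sum()
    return str(rng.choice(tokens, p=probabilities))

print([sample_token(temperature=0.8) for _ in range(5)])  # varies per run
```

Dial the temperature toward zero and the output gets more repeatable; at typical settings, reproducing a long report verbatim is wildly improbable.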
Remember: The quoted passage is copied directly from the ChatGPT response. ChatGPT told me:
Palantir Technologies has a pattern of circumventing formal processes, pushing ethical boundaries, and leveraging aggressive public-private tactics to secure influence—particularly within U.S. defense and intelligence communities.
Here is a structured explanation of why such behaviors have been downplayed or ignored:
1. Operational Utility Overrides Bureaucratic Violations
- In Afghanistan and Iraq, Palantir’s software reportedly delivered immediate tactical value in identifying IED networks and improving situational awareness.
- By deploying software and engineers directly to the field without formal contracts, Palantir created facts on the ground—a working product that soldiers wanted, and field commanders endorsed.
- Pentagon leadership faced a dilemma: challenge Palantir for its violations or continue using a system that operators said worked better than the legacy alternatives (e.g., DCGS-A).
- This “bottom-up demand” shielded Palantir from early procurement blowback.
2. Information Warfare and Lobbying
- Palantir conducted a calculated media and lobbying campaign targeting the Distributed Common Ground System–Army (DCGS-A, also referred to as D-CIGS).
- By discrediting IBM and other legacy defense contractors, Palantir positioned itself as the modern, Silicon Valley alternative.
- This messaging resonated with Congress, think tanks, and parts of the press, framing Palantir as a disruptor, not a rule-breaker.
- Lobbying disclosures show Palantir spent millions courting lawmakers and decision-makers.
3. Legal Maneuvering and Settlements
- In both the i2 lawsuit and the 2016 bid protest against the Army over DCGS-A, Palantir used the courts to shape procurement rules in its favor.
- The Court of Federal Claims sided with Palantir in 2016, affirming that the Army violated federal acquisition regulations by excluding commercial solutions from its process ([Palantir USG v. United States, 129 Fed. Cl. 218 (2016)]). This set a precedent that indirectly legitimized Palantir’s aggressive tactics.
4. Shift in Public Attention and National Security Prioritization
- Post-2016, the focus of public scrutiny shifted toward big tech’s relationship with consumer data, not defense contractors.
- Palantir successfully repositioned itself as a key player in pandemic response, immigration enforcement, and border security—domains where national interest arguments silence criticism.
- The company also leaned into its identity as a pro-Western, anti-authoritarian force, which further insulated it from deeper ethical scrutiny.
5. Opacity and Media Fatigue
- Palantir operates in classified and compartmented environments, which makes investigation difficult.
- Many journalists lack the access or technical background to track the nuance of procurement violations or reverse engineering disputes.
- After years of unresolved criticism, media fatigue set in—journalists moved on, and watchdog groups prioritized new targets (e.g., TikTok, Clearview AI, etc.).
Conclusion:
Palantir’s avoidance of procurement law, unauthorized deployments, and its aggressive discrediting of rivals like IBM/i2 have been documented but neutralized by its strategic utility, legal successes, lobbying efforts, and narrative control. The firm masks tactical rule-breaking with patriotic branding, which has proved highly effective in suppressing deeper accountability.
True, false, reality, or hallucination? I don’t know. “Nuclear,” in the context of this ChatGPT output, suggests the Palantir outfit is definitely operating in a very interesting manner. What would Palantir’s AI say about the company?
Stephen E Arnold, June 30, 2025
Do Not Be Evil. Dolphins, Polar Bears, and Snail Darters? Tough Luck
June 30, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
The Guardian comes up with some interesting “real” news stories. “Google’s Emissions Up 51% As AI Electricity Demand Derails Efforts to Go Green” reports:
Google’s carbon emissions have soared by 51% since 2019 as artificial intelligence hampers the tech company’s efforts to go green.
The juicy factoid in my opinion is:
The [Google] report also raises concerns that the rapid evolution of AI may drive “non-linear growth in energy demand”, making future energy needs and emissions trajectories more difficult to predict.
Folks, does the phrase “brown out” resonate with you? What about “rolling blackout”? If the “non-linear growth” thing unfolds, the phrase “non-linear growth” may become synonymous with brown out and rolling blackout.
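For a sense of why “non-linear” worries utility planners, here is a toy comparison with made-up numbers, not Google’s: demand that adds a fixed amount each year versus demand that compounds.

```python
# Toy numbers only: linear growth adds 10 units a year; "non-linear"
# (compounding) growth adds 25% a year. Both start at 100 units.
base = 100.0
for year in range(6):
    linear = base + 10 * year
    compounding = base * 1.25 ** year
    print(f"year {year}: linear {linear:6.1f}  compounding {compounding:6.1f}")
```

The linear line is easy to build plants for; the compounding line is the one that puts “rolling blackout” into the conversation.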
As a result, the article concludes with this information, generated without plastic, by Google:
Google is aiming to help individuals, cities and other partners collectively reduce 1GT (gigaton) of their carbon-equivalent emissions annually by 2030 using AI products. These can, for example, help predict energy use and therefore reduce wastage, and map the solar potential of buildings so panels are put in the right place and generate the maximum electricity.
Will Google’s thirst for revenue (or revenue-driven addiction) harm dolphins, polar bears, and snail darters? Answer: We aim to help dolphins and polar bears. But we have to ask our AI system what a snail darter is.
Will the Googley smart software suggest that snail darters just dart at snails and quit worrying about their future?
Stephen E Arnold, June 30, 2025
A Business Opportunity for Some Failed VCs?
June 26, 2025
An opinion essay written by a dinobaby who did not rely on smart software.
Do you want to open a T shirt and baseball cap business featuring snappy quotes? If the answer is, “Yes,” I have a suggestion for you. Tucked into “Artificial Intelligence Is Not a Miracle Cure: Nobel Laureate Raises Questions about AI-Generated Image of Black Hole Spinning at the Heart of Our Galaxy” is this gem of a quotation:
“But artificial intelligence is not a miracle cure.”
Reinhard Genzel, “an astrophysicist at the Max Planck Institute for Extraterrestrial Physics,” offered the observation when smart software happily generated images of a black hole. These are mysterious “things” which industrious wizards find amidst the numbers spewed by “telescopes.” Astrophysicists are discussing in an academic way exactly what the properties of a black hole are. One wing of the community has suggested that our universe exists within a black hole. Other wings offer equally interesting observations about these phenomena.
The write up explains:
an international team of scientists has attempted to harness the power of AI to glean more information about Sagittarius A* from data collected by the Event Horizon Telescope (EHT). Unlike some telescopes, the EHT doesn’t reside in a single location. Rather, it is composed of several linked instruments scattered across the globe that work in tandem. The EHT uses long electromagnetic waves — up to a millimeter in length — to measure the radius of the photons surrounding a black hole. However, this technique, known as very long baseline interferometry, is very susceptible to interference from water vapor in Earth’s atmosphere. This means it can be tough for researchers to make sense of the information the instruments collect.
The fix is to feed the data into a neural network and let the smart software solve the problem. It did, and generated the somewhat tough-to-parse images in the write up. To a dinobaby, one black hole image looks like another.
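The article’s neural network is proprietary, but the underlying problem is easy to caricature. An interferometer samples a sparse, noisy subset of the image’s Fourier components; reconstruction has to fill in the rest. The toy sketch below uses plain regularized gradient descent instead of a neural network, purely to show why the problem is ill-posed; every number in it is made up.

```python
import numpy as np

# Toy VLBI-style imaging: observe a few noisy Fourier components of a
# small "sky" image, then reconstruct with smoothness and positivity priors.
rng = np.random.default_rng(0)
n = 32
truth = np.zeros((n, n))
truth[12:20, 12:20] = 1.0  # stand-in for a bright source

mask = rng.random((n, n)) < 0.15  # only 15% of the Fourier plane is sampled
noise = 0.5 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
visibilities = (np.fft.fft2(truth) + noise) * mask

image = np.zeros((n, n))
for _ in range(300):
    residual = (np.fft.fft2(image) - visibilities) * mask
    image -= np.real(np.fft.ifft2(residual))         # data-fit gradient step
    neighbors = (np.roll(image, 1, 0) + np.roll(image, -1, 0)
                 + np.roll(image, 1, 1) + np.roll(image, -1, 1)) / 4
    image -= 0.05 * (image - neighbors)              # mild smoothness prior
    image = np.clip(image, 0.0, None)                # sky brightness >= 0

error = np.linalg.norm(image - truth) / np.linalg.norm(truth)
print(f"relative reconstruction error: {error:.2f}")
```

The unsampled frequencies are pure guesswork constrained by the prior, which is exactly where a learned model (or a hallucination) can sneak in.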
But the quote states what strikes me as a truism for 2025:
“But artificial intelligence is not a miracle cure.”
Those who have funded AI are unlikely to buy a hat or T shirt with this statement printed in bold letters.
Stephen E Arnold, June 26, 2025
AI Side Effect: Some of the Seven Deadly Sins
June 25, 2025
New technology has been charged with making humans lazy and stupid. Humanity has survived technology and, in theory, enjoys (arguably) the fruits of progress. AI, on the other hand, might actually be rotting one’s brain. New Atlas shares the mental news about AI in “AI Is Rotting Your Brain And Making You Stupid.”
The article starts with the usual doom and gloom that’s unfortunately true, including (and I quote) the en%$^ification of Google search. Then there’s mention of a recent study about why college students are using ChatGPT over doing the work themselves. One student said, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”
Good point, but sometimes using a car isn’t the best option. It might be faster, but sometimes other options make more sense. The author also makes an important point about crafting a story that required him to read a lot of scientific papers and other research:
“Could AI have assisted me in the process of developing this story? No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas all encapsulated within the frame of a person’s subjective experience. And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence.”
Here’s another pertinent observation:
“In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues language, and writing, is fundamentally about communication. If we write an email to someone we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systems (or these simulations of intelligence) are erasing our ability to think, consider, and write. Where does it all end? For Chiang it’s a pretty dystopian feedback loop of dialectical slop.”
An AI-driven world won’t be an Amana, Iowa (not an old fridge), but it also won’t be dystopian. Amidst the flood of information about AI, it is difficult to figure out what’s what. What if some of the seven deadly sins are more fun than doom scrolling and letting AI suggest what one needs to know?
Whitney Grace, June 25, 2025