LLMs and Creativity: Definitely Not Einstein

November 25, 2025

Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.

I have a vague recollection of a very large lecture room with stadium seating. I think I was at the University of Illinois when I was a high school junior. Part of the oddball program in which I found myself involved a crash course in psychology. I came away from that class with an idea that has lingered in my mind for lo these many decades; to wit: People who are into psychology are often wacky. Consequently, I don’t read too much from this esteemed field of study. (I do have some snappy anecdotes about my consulting projects for a psychology magazine, but let’s move on.)

[image]

A semi-creative human explains to his robot that he makes up answers and is not creative in a helpful way. Thanks, Venice.ai. Good enough, and I see you are retiring models, including your default. Interesting.

I read this article in PsyPost: “A Mathematical Ceiling Limits Generative AI to Amateur-Level Creativity.” The main idea is that the current approach to smart software does not just output dead-wrong answers; the algorithms themselves run into a creative wall.

Here’s the alleged reason:

The investigation revealed a fundamental trade-off embedded in the architecture of large language models. For an AI response to be effective, the model must select words that have a high probability of fitting the context. For instance, if the prompt is “The cat sat on the…”, the word “mat” is a highly effective completion because it makes sense and is grammatically correct. However, because “mat” is the most statistically probable ending, it is also the least novel. It is entirely expected. Conversely, if the model were to select a word with a very low probability to increase novelty, the effectiveness would drop. Completing the sentence with “red wrench” or “growling cloud” would be highly unexpected and therefore novel, but it would likely be nonsensical and ineffective. Cropley determined that within the closed system of a large language model, novelty and effectiveness function as inversely related variables. As the system strives to be more effective by choosing probable words, it automatically becomes less novel.

Let me take a whack at translating this quote from PsyPost: LLMs like Google-type systems have to decide. [a] Be effective and pick words that fit the context well, like “jelly” after “I ate peanut butter and.” Or, [b] select infrequent and unexpected words for novelty, which may lead to LLM wackiness. Therefore, effectiveness and novelty work against each other—more of one means less of the other.
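Here is a toy numeric sketch of that trade-off (my own illustration, not Cropley’s mathematics; the candidate words and probabilities are invented):

```python
# Score candidate completions of "The cat sat on the..."
# Effectiveness is proxied by the model's probability for the word;
# novelty by its surprisal (-log2 p). The two are inversely related
# by construction: probable words are unsurprising.
import math

next_word_probs = {"mat": 0.62, "sofa": 0.21, "roof": 0.09, "growling cloud": 0.0004}

for word, p in next_word_probs.items():
    novelty = -math.log2(p)  # surprisal in bits; rare words score high
    print(f"{word:>15}  effectiveness={p:.4f}  novelty={novelty:.1f} bits")

# "mat" maxes effectiveness and bottoms out novelty; "growling cloud"
# is the reverse. Optimizing one automatically sacrifices the other.
```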

The article references some fancy math and points out:

This comparison suggests that while generative AI can convincingly replicate the work of an average person, it is unable to reach the levels of expert writers, artists, or innovators. The study cites empirical evidence from other researchers showing that AI-generated stories and solutions consistently rank in the 40th to 50th percentile compared to human outputs. These real-world tests support the theoretical conclusion that AI cannot currently bridge the gap to elite [creative] performance.

Before you put your life savings into a giant can’t-lose AI data center investment, you might want to ponder this passage in the PsyPost article:

“For AI to reach expert-level creativity, it would require new architecture capable of generating ideas not tied to past statistical patterns … Until such a paradigm shift occurs in computer science, the evidence indicates that human beings remain the sole source of high-level creativity.”

Several observations:

  1. Today’s best-bet approach is the Google-type LLM. It has creative limits as well as the familiar Google problems: selling advertising like old-fashioned Google search and outputting incorrect answers.
  2. The method itself erects a creative barrier. This is good for humans who can be creative when they are not doom scrolling.
  3. A paradigm shift could make those giant data centers extremely large white elephants which lenders are not very good at herding along.

Net net: I liked the angle of the article. I am not convinced I should drop my teen impression of psychology. I am a dinobaby, and I like land line phones with rotary dials.

Stephen E Arnold, November 26, 2025

AI Content: Most People Will Just Accept It and Some May Love It or Hum Along

November 18, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

The trust outfit Thomson Reuters summarized a survey as real news. The write up sports the title “Are You Listening to Bots? Survey Shows AI Music Is Virtually Undetectable?” Truth be told, I wanted the magic power to change the headline to “Are You Reading News? Survey Shows AI Content Is Virtually Undetectable.” I have no magic powers, but I think the headline I just made up is going to appear in the near future.

[image]

Elvis in heaven looks down on a college dance party and realizes that he has been replaced by a robot. Thanks, Venice.ai. Wow, your outputs are deteriorating in my opinion.

What does the trust outfit report about a survey? I learned:

A staggering 97% of listeners cannot distinguish between artificial intelligence-generated and human-composed songs, a Deezer–Ipsos survey showed on Wednesday, underscoring growing concerns that AI could upend how music is created, consumed and monetized. The findings of the survey, for which Ipsos polled 9,000 participants across eight countries, including the U.S., Britain and France, highlight rising ethical concerns in the music industry as AI tools capable of generating songs raise copyright concerns and threaten the livelihoods of artists.

I won’t trot out my questions about sample selection, demographics, and methodology. Let’s just roll with what the “trust” outfit presents as “real” news.

I noted this series of factoids:

  1. “73% of respondents supported disclosure when AI-generated tracks are recommended”
  2. “45% sought filtering options”
  3. “40% said they would skip AI-generated songs entirely.”
  4. Around “71% expressed surprise at their inability to distinguish between human-made and synthetic tracks.”

Isn’t that last dot point the major finding? More than two-thirds cannot differentiate synthesized, digitized music from humanoid performers.

The study means that those who have access to smart software and whatever music-generation prompt expertise is required can bang out chart toppers. Whip up some synthetic video and go on tour. Years ago I watched a recreation of Elvis Presley. Judging from the audience reaction, no one had any problem with the willing suspension of disbelief. No opium required at that event. It was the illusion of the King, not the fried banana version of him, that energized the crowd.

My hunch is that AI-generated performances will become a very big thing. I am assuming that the power required to make the models work is available. One of my team told me that “Walk My Walk” by Breaking Rust hit the Billboard charts.

The future is clear. First, customer support staff get to find their future elsewhere. Now the kind hearted music industry leadership will press the delete button on annoying humanoid performers.

My big takeaway from the “real” news story is that most people won’t care or know. Put down that violin and get a digital audio workstation. Did you know Mozart got in trouble when he was young for writing math and music on the walls in his home? Now he can stay in his room and play with his Mac Mini computer.

Stephen E Arnold, November 18, 2025

News Flash: Young Workers Are Not Happy. Who Knew?

August 12, 2025

No AI. Just a dinobaby being a dinobaby.

My newsfeed service pointed me to an academic paper in mid-July 2025. I am just catching up, and I thought I would document this write up from big thinkers at Dartmouth College and University College London titled “Rising Young Worker Despair in the United States.”

The write up is unlikely to become a must-read for recent college graduates or youthful people vaporized from their employers’ payrolls. The main point is that the work processes of hiring and plugging away are driving people crazy.

The authors point out this revelation:

In this paper we have confirmed that the mental health of the young in the United States has worsened rapidly over the last decade, as reported in multiple datasets. The deterioration in mental health is particularly acute among young women… The relative prices of housing and childcare have risen. Student debt is high and expensive. The health of young adults has also deteriorated, as seen in increases in social isolation and obesity. Suicide rates of the young are rising. Moreover, Jean Twenge provides evidence that the work ethic itself among the young has plummeted. Some have even suggested the young are unhappy having BS jobs.

Several points jumped from the 38 page paper:

  1. The only reference to smart software or AI was in the word “despair”. This word appears 78 times in the document.
  2. Social media gets a few nods with eight references in the main paper and again in the endnotes. Isn’t social media a significant factor? My question is, “What’s the connection between social media and the mental states of the sample?”
  3. YouTube is chock full of first-person accounts of job despair. A good example is Dari Step’s video “This Job Hunt Is Breaking Me and Even California Can’t Fix It Though It Tries.” One can feel the inner turmoil of this person. The video runs 23 minutes and you can find it (as of August 4, 2025) at this link: https://www.youtube.com/watch?v=SxPbluOvNs8&t=187s&pp=ygUNZGVtaSBqb2IgaHVudA%3D%3D. A “study” is one thing with numbers and references to hump curves. A first-person approach adds a bit of sizzle in my opinion.

A few observations seem warranted:

  1. The US social system is cranking out people who are likely to be challenging for managers. I am not sure the get-tough approach based on data-centric performance methods will be productive over time.
  2. Whatever is happening in “education” is not preparing young people and recent graduates to support themselves with old-fashioned jobs. Maybe most of these people will become AI entrepreneurs, but I have some doubts about success rates.
  3. Will the National Bureau of Economic Research pick up the slack for the disarray that seems to be swirling through the Bureau of Labor Statistics as I write this on August 4, 2025?

Stephen E Arnold, August 12, 2025

Win Big at the Stock Market: AI Can Predict What Humans Will Do

July 10, 2025

No smart software to write this essay. This dinobaby is somewhat old fashioned.

AI is hot. Clickbait is hotter. And the hottest is AI figuring out what humans will do “next.” Think stock picking. Think pitching a company “known” to buy what you are selling. The applications of predictive smart software make intelligence professionals gaming the moves of an adversary quiver with joy.

“New ‘Mind-Reading’ AI Predicts What Humans Will Do Next, And It’s Shockingly Accurate” explains:

Researchers have developed an AI called Centaur that accurately predicts human behavior across virtually any psychological experiment. It even outperforms the specialized computer models scientists have been using for decades. Trained on data from more than 60,000 people making over 10 million decisions, Centaur captures the underlying patterns of how we think, learn, and make choices.

Since I believe everything I read on the Internet, smart software definitely can pull off this trick.

How does this work?

Rather than building from scratch, researchers took Meta’s Llama 3.1 language model (the same type powering ChatGPT) and gave it specialized training on human behavior. They used a technique that allows them to modify only a tiny fraction of the AI’s programming while keeping most of it unchanged. The entire training process took only five days on a high-end computer processor.
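The article does not name the training technique, but the description (keep the base model frozen, change only a tiny fraction of the weights) reads like low-rank adaptation. Here is a minimal sketch, assuming the Hugging Face transformers and peft libraries; the checkpoint name and hyperparameters are illustrative guesses, not the Centaur team’s actual settings:

```python
# Parameter-efficient fine-tuning sketch: freeze the base Llama weights
# and train only small low-rank adapter matrices, i.e. "a tiny fraction
# of the AI's programming."
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # adapt the attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights
```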

Hmmm. The Zuck’s smart software. Isn’t Meta in the midst of playing catch-up? The company is believed to be hiring OpenAI professionals and other wizards who can convert the “also in the race” to “winner” more quickly than one can say “billions of dollars spent on virtual reality.”

The write up does not stop at predicting what a humanoid or a dinobaby will do. The write up reports:

In a surprising discovery, Centaur’s internal workings had become more aligned with human brain activity, even though it was never explicitly trained to match neural data. When researchers compared the AI’s internal states to brain scans of people performing the same tasks, they found stronger correlations than with the original, untrained model. Learning to predict human behavior apparently forced the AI to develop internal representations that mirror how our brains actually process information. The AI essentially reverse-engineered aspects of human cognition just by studying our choices. The team also demonstrated how Centaur could accelerate scientific discovery.

I am sold. Imagine. These researchers will be able to make profitable investments, know when to take an alternate path to a popular tourist attraction, and discover a drug that will cure male pattern baldness. Amazing.

My hunch is that predictive analytics hooked up to a semi-hallucinating large language model can produce outputs. Will these predict human behavior? Absolutely. Did the Centaur system predict that I would believe this? Absolutely. Was it hallucinating? Yep, poor Centaur.

Stephen E Arnold, July 10, 2025

YouTube Reveals the Popularity Winners

June 6, 2025

No AI, just a dinobaby and his itty bitty computer.

Another big technology outfit reports what is popular on its own distribution system. The trusted outfit knows that it controls the information flow for many Googlers. Google pulls the strings.

When I read “Weekly Top Podcast Shows,” I asked myself, “Are these data audited?” And, “Do these data match up to what Google actually pays the people who make these programs?”

I was not the only person asking questions about the much loved, alleged monopoly. The estimable New York Times wondered about some programs missing from the Top 100 videos (podcasts) on Google’s YouTube. Mediaite pointed out:

The rankings, based on U.S. watch time, will update every Wednesday and exclude shorts, clips and any content not tagged as a podcast by creators.

My reaction to the listing is that Google wants to make darned sure that it controls the information flow about what is getting views on its platform. Presumably some non-dinobaby will compare the popularity listings to other lists, possibly the list from the misfiring Apple. Maybe an enthusiast will scrape the “popular” listings on the independent podcast players? Perhaps a research firm will figure out how to capture views like the now archaic logs favored decades ago by certain research firms.

Several observations:

  1. Google owns the platform. Google controls the data. Google controls what’s left up and what’s taken down. Google is not known for making its click data just a click away. Therefore, the listing is an example of information control and shaping.
  2. Advertisers, take note. Now you can purchase air time on the programs that matter.
  3. Creators who become dependent on YouTube for revenue are slowly being herded into the 21st century’s version of the Hollywood business model from the 1940s. A failure to conform means that the money stream could be reduced or just cut off. That will keep the sheep together in my opinion.
  4. As search morphs, Google is putting on its thinking cap in order to find ways to keep that revenue stream healthy and hopefully growing.

But I trust Google, don’t you? Joe Rogan does.

Stephen E Arnold, June 6, 2025

IBM AI Study: Would The Research Report Get an A in Statistics 202?

May 9, 2025

No AI, just the dinobaby expressing his opinions to Zellenials.

IBM, reinvigorated with its easy-to-use, backwards-compatible, AI-capable mainframe, released a research report about AI. Will these findings cause the new IBM AI-capable mainframe to sell like Jeopardy / Watson “I won” T shirts?

Perhaps.

The report is “Five Mindshifts to Supercharge Business Growth.” It runs a mere 40 pages and requires no more time than configuring your new LinuxONE Emperor 5 mainframe. Well, the report can be absorbed in less time, but the Emperor 5 is a piece of cake as IBM mainframes go.

Here are a few of the findings revealed in the IBM research report:

AI can improve customer “experience”. I think this means that customer service becomes better with AI in it. Study says, “72 percent of those in the sample agree.”

Turbulence becomes opportunity. 100 percent of the IBM marketers assembling the report agree. I am not sure how many CEOs are into this concept; for example, Hollywood motion picture firms or Georgia Pacific which closed a factory and told workers not to come in tomorrow.

Here’s a graphic from the IBM study. Do you know what’s missing? I will give you five seconds, as Arvin Haddad, the LA real estate influencer, says in his entertaining YouTube videos:

[image]

The answer is, “Increasing revenues, boosting revenues, and keeping stakeholders thrilled with their payoffs.” The items listed by IBM really don’t count, do they?

“Embrace AI-fueled creative destruction.” Yep, another 100 percenter from the IBM team. No supporting data, no verification, and not even a hint of proof that AI-fueled creative destruction is doing much more than improving the lives of lots of venture outfits and some of the US AI leaders. That cash burn could set the forest on fire, couldn’t it? Answer: Of course not.

I must admit I was baffled by this table of data:

[image]

“Accelerate growth and efficiency” goes down with generative AI. (Is Dr. Gary Marcus right?) “Enhanced decision making” goes up with generative AI. Are the decisions based on verifiable facts or hallucinated outputs? Maybe busy executives in the sample choose to believe what AI outputs because a computer like the Emperor 5 was involved. Maybe “easy” is better than old-fashioned problem solving which is expensive, slow, and contentious. “Just let AI tell me” is a more modern, streamlined approach to decision making in a time of uncertainty. And the dotted lines? Hmmm.

On page 40 of the report, I spotted this factoid. It is tiny and hard to read.

[image]

The text says, “50 percent say their organization has disconnected technology due to the pace of recent investments.” I am not exactly sure what this means. Operative words are “disconnected” and “pace of … investments.” I would hazard an interpretation: “Hey, this AI costs too much and the payoff is just not obvious.”

I wish to offer some observations:

  1. IBM spent some serious money designing this report.
  2. The pagination is in terms of double page spreads, so the “study” plus rah rah consumes about 80 pages if one were to print it out. On my laser printer the pages are illegible for a human, but for the designers, the approach showcases the weird ice cubes and the dotted lines, and allows important factoids to be overlooked.
  3. The combination of data (which strike me as less of a home run for the AI fan and more of a report about AI friction) and flat out marketing razzle dazzle is intriguing. I would have enjoyed sitting in the meetings which locked into this approach. My hunch is that when someone thought about the allegedly valid results and said, “You know these data are sort of anti-AI,” then the others in the meeting said, “We have to convert the study into marketing diamonds.” The result? The double truck, design-infused, data tinged report.

Good work, IBM. The study will definitely sell truckloads of those Emperor 5 mainframes.

Stephen E Arnold, May 9, 2025

Waymo Self Driving Cars: Way Safer, Waymo Says

May 9, 2025

This dinobaby believes everything he reads online. I know that statistically valid studies conducted by companies about their own products are the gold standard in data collection and analysis. If you doubt this fact of business life in 2025, you are not in the mainstream.

I read “Waymo Says Its Robotaxis Are Up to 25x Safer for Pedestrians and Cyclists.” I was thrilled. Imagine. I could stand in front of a Waymo robotaxi holding my new grandchild and know that the vehicle would not strike us. I wonder if my son and his wife would allow me to demonstrate my faith in the Google.

The write up explains that a Waymo study proved beyond a shadow of a doubt that Waymo robotaxis are way, way, way safer than any other robotaxi. Here’s a sampling of the proof:

92 percent fewer crashes with injuries to pedestrians

82 percent fewer crashes with injuries to kids and adults on bicycles

82 percent fewer crashes with senior citizens on scooters and adults on motorcycles.
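How do percentages like these turn into an “up to 25x safer” headline? A back-of-the-envelope conversion (my arithmetic, not Waymo’s):

```python
# Convert "N percent fewer crashes" into an "x times safer" multiplier.
# A 92% reduction leaves 8% of the baseline crash rate: 1 / 0.08 = 12.5x.
def times_safer(percent_fewer: float) -> float:
    remaining = 1.0 - percent_fewer / 100.0
    return 1.0 / remaining

for pct in (92, 82):
    print(f"{pct}% fewer crashes -> {times_safer(pct):.1f}x safer")

# A "25x safer" claim implies a 96% reduction somewhere in the data.
```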

Google has made available a big, fat research paper which provides more knock out data about the safety of the firm’s smart robot driven vehicles. If you want to dig into the document with inputs from six really smart people, click this link.

The study is a first, and it is, in my opinion, a quantumly supreme example of research. I do not believe that Google’s smart software was used to create any synthetic data. I know that if a Waymo vehicle and another firm’s robot-driven car speed at an 80-year-old like myself 100 times each, the Waymo vehicles will only crash into me 18 times. I have no idea how many times I would be killed or injured if another firm’s smart vehicle smashed into me. Those are good odds, right?

The paper has a number of compelling presentations of data. Here’s an example:

[image]

This particular chart uses the categories of striking and struck, but only a trivial number of these kinetic interactions raise eyebrows. No big deal. That’s why the actual report consumed only 58 pages of text and hard facts. Obvious superiority.

Would you stand in front of a Waymo driving at you as the sun sets?

I am a dinobaby, and I don’t think an automobile would do too much damage if it did hit me. Would my son’s wife allow me to hold my grandchild in my arms as I demonstrated my absolute confidence in the Alphabet Google YouTube Waymo technology? Answer: Nope.

Stephen E Arnold, May 9, 2025

Mobile Phones? Really?

May 2, 2025

No AI, just the dinobaby himself.

I read one of those “modern” scientific summaries in the UK newspaper, The Guardian. Yep, that’s a begging-for-dollars outfit which reminds me that I have read eight stories since January 1, 2025. I am impressed with the publisher’s cookie wizardry. Too bad it does not include the other systems I use in the course of my day.

The article which caught my attention and sort of annoyed me is “Older People Who Use Smartphones Have Lower Rates of Cognitive Decline.” I haven’t been in school since I abandoned my PhD to join Halliburton Nuclear in Washington, DC in the early 1970s. I don’t remember much of my undergraduate work, including classes about setting up “scientific studies” or avoiding causation problems.

I do know that I am 80 years old and that smartphones are not the center of my information world. Am I, therefore, in cognitive decline? I suppose you should ask those who will be in my OSINT lecture this coming Friday (April 18, 2025) or those hearing my upcoming talks at a US government cyber fraud conference. My hunch is that deciding whether I am best suited for drooling in an old age home or am some weird nut job fooling people is best accomplished by research that involves sample selection, objective and interview data, and benchmarking.

The Guardian article skips right to the reason I am able to walk and chew gum at the same time without requiring [a] dentures, [b] a walker, [c] an oxygen tank, or [d] a mobile smartphone.

But, no, the write up says:

Fears that smartphones, tablets and other devices could drive dementia in later life have been challenged by research that found lower rates of cognitive decline in older people who used the technology. An analysis of published studies that looked at technology use and mental skills in more than 400,000 older adults found that over-50s who routinely used digital devices had lower rates of cognitive decline than those who used them less.

Okay, why use one smartphone? Buy two. Go whole hog. Install TOR and cruise the Dark Web and figure out why Ahmia.fi is filtering results. Download apps by the dozens and use them to get mental stimulation. I highly recommend Hamster Kombat, Act 2. Plus, one must log on to Facebook — the hot spot for seniors to check out grandchildren and keep up with obituaries — and immerse oneself in mental stimulation.

The write up says:

It is unclear whether the technology staves off mental decline, or whether people with better cognitive skills simply use them more, but the scientists say the findings question the claim that screen time drives what has been called “digital dementia”.

That’s slick. Digital dementia.

My thoughts about this wishy-washy correlation are:

  1. Some “scientists” are struggling to get noticed for their research and grab smartphones and data to establish that these technological gems keep one’s mind sharp. Yeah, meh!
  2. A “major real news” outfit’s write up of the “research” illustrates a bit of what I call “information stretching.” Like spandex tights, making the “facts” stretch a blob into an acceptable shape has replaced actual mental work.
  3. The mental decline thing tells me more about the researchers and the Guardian’s editorial approach.

My view is that engagement with people, devices, and ideas trump the mobile phone angle. People who face physical deterioration are going to demonstrate assorted declines. If the phone helps some people, great.

I am just tired of the efforts to explain the upsides and downsides of mobile devices. These gizmos are part of the datasphere in which people live. Put a person in solitary confinement with sound-deadening technology and that individual will suffer some quite sporty declines. A rich and stimulating environment is more important than a gizmo with Telegram or WhatsApp. Maybe an old timer will become the next cryptocurrency trading tsar?

Net net: Those undergraduate classes in statistics, psychology, and logic might be relevant, particularly to those who became thumb typists and fast scrollers at a young age. I am a dinobaby and maybe you will attend one of my lectures. Then you can tell me that I do what I do because I have a smartphone. Actually I have four. That’s why the Guardian’s view count is wrong about how often I look at the outfit’s articles.

Stephen E Arnold, May 2, 2025

Mathematics Is Going to Be Quite Effective, Citizen

March 5, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

The future of AI is becoming more clear: Get enough people doing something, gather data, and predict what humans will do. What if an individual does not want to go with the behavior of the aggregate? The answer is obvious, “Too bad.”

How do I know that a handful of organizations will use their AI in this manner? I read “Spanish Running of the Bulls’ Festival Reveals Crowd Movements Can Be Predictable, Above a Certain Density.” If the data in the report are close to the pin, AI will be used to predict, and then those predictions can be shaped by weaponized information flows. I got a glimpse of how this number stuff works when I worked at Halliburton Nuclear with Dr. Jim Terwilliger. He and a fellow named Julian Steyn were only too happy to explain that the mathematics used for figuring out certain nuclear processes would work for other applications as well. I won’t bore you with comments about the Monte Carlo method or the even older Bayesian statistics procedures. But if it made certain nuclear functions manageable, the approach was mostly okay.

Let’s look at what the Phys.org write up says about bovines:

Denis Bartolo and colleagues tracked the crowds of an estimated 5,000 people over four instances of the San Fermín festival in Pamplona, Spain, using cameras placed in two observation spots in the plaza, which is 50 meters long and 20 meters wide. Through their footage and a mathematical model—where people are so packed that crowds can be treated as a continuum, like a fluid—the authors found that the density of the crowds changed from two people per square meter in the hour before the festival began to six people per square meter during the event. They also found that the crowds could reach a maximum density of 9 people per square meter. When this upper threshold density was met, the authors observed pockets of several hundred people spontaneously behaving like one fluid that oscillated in a predictable time interval of 18 seconds with no external stimuli (such as pushing).

I think that’s an important point. But here’s the comment that presages how AI data will be used to control human behavior. Remember. This is emergent behavior similar to the hoo-hah cranked out by the Santa Fe Institute crowd:

The authors note that these findings could offer insights into how to anticipate the behavior of large crowds in confined spaces.

Once probabilities allow one to “anticipate”, it follows that flows of information can be used to take or cause action. Personally I am going to make a note in my calendar and check in one year to see how my observation turns out. In the meantime, I will try to keep an eye on the Sundars, Zucks, and their ilk for signals about their actions and their intent, which is definitely concerned with individuals like me. Right?
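For the curious, here is a toy sketch of the density bookkeeping behind such predictions (my own, not Bartolo’s fluid model; the plaza dimensions, crowd size, and the 9 people per square meter threshold come from the write up):

```python
# Bin simulated crowd positions into one-square-meter cells and flag
# cells at or above the density where fluid-like oscillation appears.
import numpy as np

rng = np.random.default_rng(0)
plaza_l, plaza_w, n_people = 50.0, 20.0, 5000
xy = rng.uniform([0.0, 0.0], [plaza_l, plaza_w], size=(n_people, 2))

density, _, _ = np.histogram2d(
    xy[:, 0], xy[:, 1],
    bins=(int(plaza_l), int(plaza_w)),
    range=[[0, plaza_l], [0, plaza_w]],  # 1 m x 1 m cells
)
print(f"mean density: {density.mean():.1f} people per square meter")
print(f"cells at or above 9 per square meter: {int((density >= 9).sum())}")
```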

Stephen E Arnold, March 5, 2025

Speed Up Your Loss of Critical Thinking. Use AI

February 19, 2025

While the human brain isn’t a muscle, its neurology does need to be exercised to maintain plasticity. When a human brain is rigid, it can’t function in a healthy manner. AI is harming brains by making them not think good, says 404 Media: “Microsoft Study Finds AI Makes Human Cognition ‘Atrophied and Unprepared.’” You can read the complete Microsoft research report at this link. (My hunch is that this type of document would have gone the way of Timnit Gebru and the flying stochastic parrot, but that’s just my opinion, Hank, Advait, Lev, Ian, Sean, Dick, and Nick.)

Carnegie Mellon University and Microsoft researchers released a paper that says the more humans rely on generative AI, the more it can “result in the deterioration of cognitive faculties that ought to be preserved.”

Really? You don’t say! What else does this remind you of? How about watching too much television or playing too many videogames? These passive activities (arguably with videogames) stunt the development of brain gray matter and, in a flight of Mary Shelley rhetoric, make a brain rot! What else did the researchers discover when they studied 319 knowledge workers who self-reported their experiences with generative AI?

“ ‘The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI,’ the researchers wrote. ‘Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving.’”

By the way, we definitely love and absolutely believe data based on self reporting. Think of the mothers who asked their teens, “Where did you go?” The response, “Out.” The mothers ask, “What did you do?” The answer, “Nothing.” Yep, self reporting.

Does this mean generative AI is a bad thing? Yes and no. It’ll stunt the growth of some parts of the brain, but other parts will grow in tandem with the use of new technology. Humans adapt to their environments. As AI becomes more ingrained into society it will change the way humans think but will only make them sort of dumber [sic]. The paper adds:

“ ‘GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques,’ the researchers wrote. ‘The tool could help develop specific critical thinking skills, such as analyzing arguments, or cross-referencing facts against authoritative sources. This would align with the motivation enhancing approach of positioning AI as a partner in skill development.’”

The key is to not become overly reliant on AI but also to be aware that the tool won’t go away. Oh, when my mother asked me, “What did you do, Whitney?” I responded in the best self-reporting manner, “Nothing, mom, nothing at all.”

Whitney Grace, February 19, 2025
