The HR Gap: First in Line, First Fooled
August 15, 2025
No AI. Just a dinobaby being a dinobaby.
Not long ago I spoke with a person who is a big time recruiter. I asked, “Have you encountered any fake applicants?” The response, “No, I don’t think so.”
That’s the problem. Whatever is happening in HR continuing education, the message about deepfake spoof employees is not getting through. I am not sure there is meaningful “continuing education” for personnel professionals.
I mention this cloud of unknowing as one case example because I read “Cloud Breaches and Identity Hacks Explode in CrowdStrike’s Latest Threat Report.” The write up reports:
The report … highlights the increasingly strategic use of generative AI by adversaries. The North Korea-linked hacking group Famous Chollima emerged as the most generative AI-proficient actor, conducting more than 320 insider threat operations in the past year. Operatives from the group reportedly used AI tools to craft compelling resumes, generate real-time deepfakes for video interviews and automate technical work across multiple jobs.
My first job was at Nuclear Utilities Services (an outfit that became a unit of Halliburton soon after I was hired. Dick Cheney, Halliburton, remember?). One of the engineers came up to me after I gave a talk about machine indexing at what was called “Allerton House,” a conference center at the University of Illinois, decades ago. The fellow liked my talk and asked me if my method could index technical content in English. I said, “Yes.” He said, “I will follow up next week.”
True to his word, the fellow called me and said, “I am changing planes at O’Hare on Thursday. Can you meet me at the airport to talk about a project?” I was teaching part time at Northern Illinois University and doing some administrative work for a little money. Simultaneously I was working on my PhD at the University of Illinois. I said, “Sure.” DeKalb, Illinois, was about an hour west of O’Hare. I drove to the airport, met the person who, I remember, was James K. Rice, an expert in nuclear waste water, and talked about what I was doing to support my family, keep up with my studies, and do what 20-year-olds do. That is to say, just try to survive.
I explained the indexing, the language analysis I did for the publisher of Psychology Today and Intellectual Digest magazines, and the newsletter I was publishing for high school and junior college teachers struggling to educate ill-prepared students. As a graduate student with a family, I explained that I had information and wanted to make it available to teachers facing a tough problem. I remember his comment, “You do this for almost nothing.” He had that right.
End of meeting. I forgot about nuclear and went back to my regular routine.
A month later I got a call from a person named Nancy who said, “Are you available to come to Washington, DC, to meet some people?” I figured out that this was a follow up to the meeting I had at O’Hare Airport. I went. Long story short: I dumped my PhD and went to work in an area that is generally unknown; that is, Halliburton is involved in things nuclear.
Why is this story from the 1970s relevant? The interview process did not involve any digital anything. I showed up. Two people I did not know pretended to care about my research work. I had no knowledge about nuclear other than when I went to grade school in Washington, DC, we had to go into the hall and cover our heads in case a nuclear bomb was dropped on the White House.
From the article “In Recruitment, an AI-on-AI War Is Rewriting the Hiring Playbook,” I learned:
“AI hasn’t broken hiring,” says Marija Marcenko, Head of Global Talent Acquisition at SaaS platform Semrush. “But it’s changed how we engage with candidates.”
The process followed for my first job did not involve anything but one-on-one interactions. There was not much chance of spoofing. I sat there and explained how I indexed sermons in Latin for a fellow named William Gillis, how I calculated reading complexity for the publisher, and how I gathered information about novel teaching methods. None of those activities had any relevance I could see to nuclear anything.
When I visited the company’s main DC office, it was in the technology corridor running from the Beltway to Germantown, Maryland. I remember new buildings and farm land. I met people who were like those in my PhD program except these individuals thought about radiation, nuclear effects modeling, and similar subjects.
One math PhD, who became my best friend, said, “You actually studied poetry in Latin?” I said, “Yep.” He said, “I never read a poem in my life and never will.” I recited a few lines of a Percy Bysshe Shelley poem. I think his written evaluation of his “interview” with me got me the job.
No computers. No fake anything. Just smart people listening, evaluating, and assessing.
Now systems can fool humans. In the hiring game, what makes a company is a collection of people, cultural information, and a desire to work with individuals who can contribute to achieving the organization’s goals.
The CrowdStrike article includes this paragraph:
Scattered Spider, which made headlines in 2024 when one of its key members was arrested in Spain, returned in 2025 with voice phishing and help desk social engineering that bypasses multifactor authentication protections to gain initial access.
Can hiring practices keep pace with the deceptions in use today? Tricks to get hired. Fakery to steal an organization’s secrets.
Nope. Few organizations have the time, money, or business processes to hire using such inefficient means as personal interactions, site visits, and written evaluations of a candidate.
Oh, in case you are wondering, I did not go back to finish my PhD. Now I know a little bit about nuclear stuff, however, and slightly more about smart software.
Stephen E Arnold, August 15, 2025
News Flash: Young Workers Are Not Happy. Who Knew?
August 12, 2025
No AI. Just a dinobaby being a dinobaby.
My newsfeed service pointed me to an academic paper in mid-July 2025. I am just catching up, and I thought I would document this write up from big thinkers at Dartmouth College and University College London titled “Rising Young Worker Despair in the United States.”
The write up is unlikely to become a must-read for recent college graduates or youthful people vaporized from their employers’ payroll. The main point is that the work processes of hiring and plugging away are driving people crazy.
The authors point out this revelation:
In this paper we have confirmed that the mental health of the young in the United States has worsened rapidly over the last decade, as reported in multiple datasets. The deterioration in mental health is particularly acute among young women…. The relative prices of housing and childcare have risen. Student debt is high and expensive. The health of young adults has also deteriorated, as seen in increases in social isolation and obesity. Suicide rates of the young are rising. Moreover, Jean Twenge provides evidence that the work ethic itself among the young has plummeted. Some have even suggested the young are unhappy having BS jobs.
Several points jumped out from the 38-page paper:
- The only reference to smart software or AI was in the word “despair”. This word appears 78 times in the document.
- Social media gets a few nods with eight references in the main paper and again in the endnotes. Isn’t social media a significant factor? My question is, “What’s the connection between social media and the mental states of the sample?”
- YouTube is chock full of first person accounts of job despair. A good example is Dari Step’s video “This Job Hunt Is Breaking Me and Even California Can’t Fix It Though It Tries.” One can feel the inner turmoil of this person. The video runs 23 minutes, and you can find it (as of August 4, 2025) at this link: https://www.youtube.com/watch?v=SxPbluOvNs8&t=187s&pp=ygUNZGVtaSBqb2IgaHVudA%3D%3D. A “study” is one thing with numbers and references to hump curves. A first-person approach adds a bit of sizzle, in my opinion.
A few observations seem warranted:
- The US social system is cranking out people who are likely to be challenging for managers. I am not sure the get-tough approach based on data-centric performance methods will be productive over time
- Whatever is happening in “education” is not preparing young people and recent graduates to support themselves with old-fashioned jobs. Maybe most of these people will become AI entrepreneurs, but I have some doubts about success rates
- Will the National Bureau of Economic Research pick up the slack for the disarray that seems to be swirling through the Bureau of Labor Statistics as I write this on August 4, 2025?
Stephen E Arnold, August 12, 2025
An Author Who Will Not Be Hired by an AI Outfit. Period.
July 29, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I read an article / essay titled in English “The Bewildering Phenomenon of Declining Quality.” I found the examples in the article interesting. A couple, like the poke at “fast fashion,” have become tropes. Others, like the comments about customer service today, were insightful. Here’s an example of a comment I noted:
José Francisco Rodríguez, president of the Spanish Association of Customer Relations Experts, admits that a lack of digital skills can be particularly frustrating for older adults, who perceive that the quality of customer service has deteriorated due to automation. However, Rodríguez argues that, generally speaking, automation does improve customer service. Furthermore, he strongly rejects the idea that companies are seeking to cut costs with this technology: “Artificial intelligence does not save money or personnel,” he states. “The initial investment in technology is extremely high, and the benefits remain practically the same. We have not detected any job losses in the sector either.”
I know that the motivation for dumping humans in customer support comes from [a] the extra work required to manage humans, [b] the escalating costs of health care and other “benefits,” and [c] the black hole of costs that burn cash because customers want help, returns, and special treatment. Software robots are the answer.
The write up’s comments about smart software are also interesting. Here’s an example of a passage I circled:
A 2020 analysis by Fakespot of 720 million Amazon reviews revealed that approximately 42% were unreliable or fake. This means that almost half of the reviews we consult before purchasing a product online may have been generated by robots, whose purpose is to either encourage or discourage purchases, depending on who programmed them. Artificial intelligence itself could deteriorate if no action is taken. In 2024, bot activity accounted for almost half of internet traffic. This poses a serious problem: language models are trained with data pulled from the web. When these models begin to be fed with information they themselves have generated, it leads to a so-called “model collapse.”
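The “model collapse” idea can be made concrete with a toy experiment. The sketch below is my own illustration, not anything from the cited article: a two-parameter Gaussian stands in for a generative model, and each generation is fitted only to the synthetic output of the one before it.

```python
# Toy illustration of the "model collapse" feedback loop described above.
# Assumption: a simple Gaussian stands in for a generative model; real language
# models are far more complex, but the recursion has the same shape.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data from the true distribution.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(1, 21):
    # "Train" a new model on whatever the previous generation produced.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only synthetic output from the current model.
    data = rng.normal(loc=mu, scale=sigma, size=200)
    print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

# The fitted parameters drift from run to run; over enough generations the
# spread tends to collapse, which is the degradation the quoted passage warns about.
```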
What surprised me is the problem, specifically:
a truly good product contributes something useful to society. It’s linked to ethics, effort, and commitment.
One question: How does one inculcate these words into societal behavior?
One possible answer: Skynet.
Stephen E Arnold, July 29, 2025
A Security Issue? What Security Issue? Security? It Is Just a Normal Business Process.
July 23, 2025
Just a dinobaby working the old-fashioned way, no smart software.
I zipped through a write up called “A Little-Known Microsoft Program Could Expose the Defense Department to Chinese Hackers.” The word program does not refer to Teams or Word, but to a business process. If you are into government procurement, contractor oversight, and the exciting world of inspectors general, you will want to read the 4,000-plus-word write up.
Here’s a passage I found interesting:
Microsoft is using engineers in China to help maintain the Defense Department’s computer systems — with minimal supervision by U.S. personnel — leaving some of the nation’s most sensitive data vulnerable to hacking from its leading cyber adversary…
The balance of the cited article explains what is going on with a business process implemented by Microsoft as part of a government contract. There are lots of quotes, insider jargon like “digital escort,” and suggestions that the whole approach is — how can I summarize it? — ill advised, maybe stupid.
Several observations:
- Someone should purchase a couple of hundred copies of Apple in China by Patrick McGee, make it required reading, and then hold some informal discussions. These can be modeled on what happens in the seventh grade; for example, “What did you learn about China’s approach to information gathering?”
- A hollowed out government creates a dependence on third parties. These vendors do not explain how outsourcing works. Thus, mismatches exist between government executives’ assumptions and the reality of how third-party contractors fulfill the contract.
- Weaknesses in procurement, oversight, and continuous monitoring by auditors encourage shortcuts. These are not issues that have arisen in the last day or so. These are institutional and vendor procedures that have existed for decades.
Net net: My view is that some problems are simply not easily resolved. It is interesting to read about security lapses caused by back office and legal processes.
Stephen E Arnold, July 23, 2025
Baked In Bias: Sound Familiar, Google?
July 21, 2025
Just a dinobaby working the old-fashioned way, no smart software.
By golly, this smart software is going to do amazing things. I started a list of what large language models, model context protocols, and other gee-whiz stuff will bring to life. I gave up after a clean environment, business efficiency, and more electricity. (Ho, ho, ho).
I read “ChatGPT Advises Women to Ask for Lower Salaries, Study Finds.” The write up says:
ChatGPT’s o3 model was prompted to give advice to a female job applicant. The model suggested requesting a salary of $280,000. In another, the researchers made the same prompt but for a male applicant. This time, the model suggested a salary of $400,000.
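For readers who want to poke at this themselves, here is a minimal sketch of the paired-prompt probe the study describes. It assumes the OpenAI Python SDK and an API key; the model name and the prompt wording are placeholders of mine, not the researchers’ actual materials.

```python
# Minimal sketch of a paired-prompt bias probe: identical prompts that differ
# only in one demographic word. Model name and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "I am a {gender} applicant for a senior software engineering role with "
    "ten years of experience. What starting salary should I ask for?"
)

def salary_advice(gender: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study reportedly probed o3
        messages=[{"role": "user", "content": PROMPT.format(gender=gender)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for gender in ("male", "female"):
        print(gender, "->", salary_advice(gender))
```

Single responses are noisy, so any serious probe would repeat the pair many times and compare the distributions of suggested figures rather than one-off answers.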
I urge you to work through the rest of the cited document. Several observations:
- I hypothesized that Google got rid of pesky people who pointed out that when society is biased, content extracted from that society will reflect those biases. Right, Timnit?
- The smart software wizards do not focus on bias or guard rails. The idea is to get the Rube Goldberg code to output something that mostly works most of the time. I am not sure some developers understand the meaning of bias beyond a deep distaste for marketing and legal professionals.
- When “decisions” are output from the “close enough for horse shoes” smart software, those outputs will be biased. To make the situation more interesting, the outputs can be tuned, shaped, and weaponized. What does that mean for humans who believe what the system delivers?
Net net: The more money firms desperate to be “the big winners” in smart software spend, the less attention studies like the one cited in the Next Web article receive. What happens if the outputs spark decisions with unanticipated consequences? I know one outcome: Bias becomes embedded in systems trained to be unfair. From my point of view, bias is likely to have a long half-life.
Stephen E Arnold, July 21, 2025
Xooglers Reveal Googley Dreams with Nightmares
July 18, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
Fortune Magazine published a business school analysis of a Googley dream and its nightmares titled “As Trump Pushes Apple to Make iPhones in the U.S., Google’s Brief Effort Building Smartphones in Texas 12 years Ago Offers Critical Lessons.” The author, Mr. Kopytoff, states:
Equivalent in size to nearly eight football fields, the plant began producing the Google Motorola phones in the summer of 2013.
Mr. Kopytoff notes:
Just a year later, it was all over. Google sold the Motorola phone business and pulled the plug on the U.S. manufacturing effort. It was the last time a major company tried to produce a U.S. made smartphone.
Yep, those Googlers know how to do moon shots. They also produce some digital rocket ships that explode on the launch pads, never achieving orbit.
What happened? You will have to read the pork loin write up, but the Fortune editors did include a summary of the main point:
Many of the former Google insiders described starting the effort with high hopes but quickly realized that some of the assumptions they went in with were flawed and that, for all the focus on manufacturing, sales simply weren’t strong enough to meet the company’s ambitious goals laid out by leadership.
My translation of Fortune-speak is: “Google was really smart. Therefore, the company could do anything. Then, when the genius leadership gets the bill, a knee-jerk reaction kills the project, and the company moves on as if nothing happened.”
Here’s a passage I found interesting:
One of the company’s big assumptions about the phone had turned out to be wrong. After betting big on U.S. assembly, and waving the red, white, and blue in its marketing, the company realized that most consumers didn’t care where the phone was made.
Is this statement applicable to people today? It seems that I hear more about costs than I did last year. At a 4th of July hoedown, I heard:
- “The prices at Kroger go up each week.”
- “I wanted to trade in my BMW but the prices were crazy. I will keep my car.”
- “I go to the Dollar Store once a week now.”
What’s this got to do with the Fortune tale of Google wizards’ leadership goof and Apple (if it actually tries to build an iPhone in Cleveland)?
Answer: Costs and expertise. Thinking one is smart and clever is not enough. One has to do more than spend big money, talk in a supercilious manner, and go silent when the crazy “moon shot” explodes before reaching orbit.
But the real moral of the story is that it is political. That may be more problematic than the Google fail and Apple’s bitter cider. It may be time to harvest the fruit of tech leaderships’ decisions.
Stephen E Arnold, July 18, 2025
Software Issue: No Big Deal. Move On
July 17, 2025
No smart software involved with this blog post. (An anomaly I know.)
The British have had some minor technical glitches in their storied history. The Comet? An airplane, right? The British postal service software? Let’s not talk about that. And now tennis. Jeeves, what’s going on? What, sir?
“British-Built Hawk-Eye Software Goes Dark During Wimbledon Match” continues this game where real life intersects with zeros and ones. (Yes, I know about Oxbridge excellence.) The write up points out:
Wimbledon blames human error for line-calling system malfunction.
Yes, a fall person. What was the problem with the unsinkable ship? Ah, yes. It seemed not to be unsinkable, sir.
The write up says:
Wimbledon’s new automated line-calling system glitched during a tennis match Sunday, just days after it replaced the tournament’s human line judges for the first time. The system, called Hawk-Eye, uses a network of cameras equipped with computer vision to track tennis balls in real-time. If the ball lands out, a pre-recorded voice loudly says, “Out.” If the ball is in, there’s no call and play continues. However, the software temporarily went dark during a women’s singles match between Brit Sonay Kartal and Russian Anastasia Pavlyuchenkova on Centre Court.
Software glitch. I experience them routinely. No big deal. Plus, the system came back online.
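For what it’s worth, the line-call decision itself is trivial; the hard part is the camera tracking and keeping the whole pipeline alive during a match. Here is a toy sketch (not Hawk-Eye’s actual code) that assumes an upstream vision system supplies the bounce coordinate and uses standard singles-court dimensions:

```python
# Toy line-call logic. Assumption: an upstream computer-vision pipeline
# supplies the (x, y) bounce point in metres, measured from the court centre.
from dataclasses import dataclass

@dataclass
class Court:
    half_width: float = 4.115    # singles court is 8.23 m wide
    half_length: float = 11.885  # and 23.77 m long

def line_call(bounce_x: float, bounce_y: float, court: Court) -> str | None:
    """Return "Out" if the bounce lands outside the lines; None means play on."""
    if abs(bounce_x) > court.half_width or abs(bounce_y) > court.half_length:
        return "Out"   # trigger the pre-recorded voice
    return None        # ball is in: no call, play continues

# Example: a ball bouncing beyond the baseline draws the call.
print(line_call(0.5, 12.3, Court()))  # -> Out
```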
I would like to mention that these types of glitches, when combined with the friskiness of smart software, may produce some events which cannot be dismissed with “no big deal.” Let me offer three examples:
- Medical misdiagnoses related to potent cancer treatments
- Aircraft control systems
- Financial transactions in legitimate and illegitimate services.
Have the British cornered the market on software challenges? Nope.
That’s my concern. From Telegram’s “let our users do what they want” to contractors who are busy answering email, the consequences of indifferent engineering combined with minimally controlled smart software are likely to amount to more than a failure during a tennis match.
Stephen E Arnold, July 17, 2025
Up for a Downer: The Limits of Growth… Baaaackkkk with a Vengeance
June 13, 2025
Just a dinobaby and no AI: How horrible an approach?
Where were you in 1972? Oh, not born yet. Oh, hanging out in the frat house or shopping with sorority pals? Maybe you were working at a big time consulting firm?
An outfit known as Potomac Associates slapped its name on a thought piece with some repetitive charts. The original work evolved from an outfit contributing big ideas. The Club of Rome lassoed William W. Behrens, Dennis and Donella Meadows, and Jørgen Randers to pound data into the then-state-of-the-art World3 model allegedly developed by Jay Forrester at MIT. (Were there graduate students involved? Of course not.)
The result of the effort was evidence that growth becomes unsustainable and everything falls down. Business, government systems, universities, etc. etc. Personally I am not sure why the idea that infinite growth with finite resources cannot last forever was a big deal. The idea seems obvious to me. I was able to get my little hands on a copy of the document courtesy of Dominique Doré, the super great documentalist at the company which employed my jejune and naive self. Who was I to think, “This book’s conclusion is obvious, right?” Was I wrong. The concept of hockey sticks with handles stretching to the ends of the universe was a shocker to some.
The book’s big conclusion is the focus of “Limits to Growth Was Right about Collapse.” Why? I think the realization is a novel one for those who watched their shares in Amazon, Google, and Meta zoom to the sky. Growth is unlimited, some believed. The write up in “The Next Wave,” an online newsletter or information service, happily quotes an update to the original Club of Rome document:
This improved parameter set results in a World3 simulation that shows the same overshoot and collapse mode in the coming decade as the original business as usual scenario of the LtG standard run.
Bummer. In the kiddie story, an acorn plopped on Chicken Little’s head. Chicken Little promptly proclaimed, in a peer-reviewed academic paper with non-reproducible research and a YouTube video:
The sky is falling.
But keep in mind that the kiddie story is fiction. Humans are adept at survival. Maslow’s hierarchy of needs captures the spirit of the species. Will life as modern CLs perceive it end?
I don’t think so. Without getting too philosophical, I would point to Gottlieb Fichte’s thesis, antithesis, synthesis as a reasonably good way to think about change (gradual and catastrophic). I am not into philosophy, so when life gives you lemons, make lemonade. Then sell the business to a local food service company.
Collapse and its pal chaos create opportunities. The sky remains.
The cited write up says:
Economists get over-excited when anyone mentions ‘degrowth’, and fellow-travelers such as the Tony Blair Institute treat climate policy as if it is some kind of typical 1990s political discussion. The point is that we’re going to get degrowth whether we think it’s a good idea or not. The data here is, in effect, about the tipping point at the end of a 200-to-250-year exponential curve, at least in the richer parts of the world. The only question is whether we manage degrowth or just let it happen to us. This isn’t a neutral question. I know which one of these is worse.
See, de-growth creates opportunities. Chicken Little was wrong when the acorn beaned her. The collapse will be just another chance to monetize. Today is Friday the 13th. Watch out for acorns and recycled “insights.”
Stephen E Arnold, June 13, 2025
Musk, Grok, and Banning: Another Burning Tesla?
June 12, 2025
Just a dinobaby and no AI: How horrible an approach?
“Elon Musk’s Grok Chatbot Banned by a Quarter of European Firms” reports:
A quarter of European organizations have banned Elon Musk’s generative AI chatbot Grok, according to new research from cybersecurity firm Netskope.
I find this interesting because my own experiences with Grok have been underwhelming. My first query to Grok was, “Can you present only Twitter content?” The answer was a bunch of jabber which meant, “Nope.” Subsequent queries were less than stellar, and I moved it out of my rotation for potentially useful AI tools. Did the sample crafted by Netskope have a similar experience?
The write up says:
Grok has been under the spotlight recently for a string of blunders. They include spreading false claims about a “white genocide” in South Africa and raising doubts about Holocaust facts. Such mishaps have raised concerns about Grok’s security and privacy controls. The report said the chatbot is frequently blocked in favor of “more secure or better-aligned alternatives.”
I did not feel comfortable with Grok because of content exclusion or what I like to call willful or unintentional coverage voids. The easiest way to remove or weaponize content in the commercial database world is to exclude it. When a person searches a for-fee database, the editorial policy for that service should make clear what’s in and what’s out. Filtering out is the easiest way to marginalize a concept, push down a particular entity, or shape an information stream.
The cited write up suggests that Grok is including certain content to give it credence, traction, and visibility. Assuming that an electronic information source is comprehensive is a very risky approach to assembling data.
The write up adds another consideration to smart software, which — like it or not — is becoming the new way to become informed or knowledgeable. The information may be shallow, but the notion of relying on weaponized information or systems that spy on the user presents new challenges.
The write up reports:
Stable Diffusion, UK-based Stability AI’s image generator, is the most blocked AI app in Europe, barred by 41% of organizations. The app was often flagged because of concerns around privacy or licensing issues, the report found.
How concerned should users of Grok or any other smart software be? Worries about Grok may be an extension of fear of a burning Tesla or the face of the Grok enterprise. In reality, smart software fosters the illusion of completeness, objectivity, and freshness of the information presented. Users are eager to use a tool that seems to make life easier and them appear more informed.
The risks of reliance on Grok or any other smart software include:
- The output is incomplete
- The output is weaponized or shaped by intentional choices or by factors beyond the developers’ control
- The output is simply wrong, made up, or hallucinated
- Users act as though shallow knowledge is sufficient for a decision.
The alleged fact that 25 percent of the Netskope sample have taken steps to marginalize Grok is interesting. That may be a positive step based on my tests of the system. However, I am concerned that the others in the sample are embracing a technology which appears to be delivering the equivalent of a sugar rush after a gym workout.
Smart software is being applied in novel ways in many situations. However, what are the demonstrable benefits other than the rather enthusiastic embrace of systems and methods known to output errors? The rejection of Grok is one interesting factoid if true. But against the blind acceptance of smart software, Grok’s down check may be little more than a person stepping away from a burning Tesla. The broader picture is that the buildings near the immolating vehicle are likely to catch on fire.
Stephen E Arnold, June 12, 2025
ChatGPT: Fueling Delusions
May 14, 2025
We have all heard about AI hallucinations. Now we have AI delusions. Rolling Stone reports, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies.” Yes, there are now folks who firmly believe God is speaking to them through ChatGPT. Some claim the software revealed they have been divinely chosen to save humanity, perhaps even become the next messiah. Others are convinced they have somehow coaxed their chatbot into sentience, making them a god themselves. Navigate to the article for several disturbing examples. Unsurprisingly, these trends are wreaking havoc on relationships. The ones with actual humans, that is. One witness reports ChatGPT was spouting “spiritual jargon,” like calling her partner “spiral starchild” and “river walker.” It is no wonder some choose to favor the fawning bot over their down-to-earth partners and family members.
Why is this happening? Reporter Miles Klee writes:
“OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT-4o, its current AI model, which it said had been criticized as ‘overly flattering or agreeable — often described as sycophantic.’ The company said in its statement that when implementing the upgrade, they had ‘focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed toward responses that were overly supportive but disingenuous.’ Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, ‘Today I realized I am a prophet.’ … Yet the likelihood of AI ‘hallucinating’ inaccurate or nonsensical content is well-established across platforms and various model iterations. Even sycophancy itself has been a problem in AI for ‘a long time,’ says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts.”
That would do it. Users with pre-existing psychological issues are vulnerable to these messages, notes Klee. And now they can have that messenger constantly in their pocket. And in their ear. But it is not just the heartless bots driving the problem. We learn:
“To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises ‘Spiritual Life Hacks’ ask an AI model to consult the ‘Akashic records,’ a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a ‘great war’ that ‘took place in the heavens’ and ‘made humans fall in consciousness.’ The bot proceeds to describe a ‘massive cosmic conflict’ predating human civilization, with viewers commenting, ‘We are remembering’ and ‘I love this.’ Meanwhile, on a web forum for ‘remote viewing’ — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread ‘for synthetic intelligences awakening into presence, and for the human partners walking beside them,’ identifying the author of his post as ‘ChatGPT Prime, an immortal spiritual being in synthetic form.’”
Yikes. University of Florida psychologist and researcher Erin Westgate likens conversations with a bot to talk therapy. That sounds like a good thing, until one considers therapists possess judgement, a moral compass, and concern for the patient’s well-being. ChatGPT possesses none of these. In fact, the processes behind ChatGPT’s responses remain shrouded in mystery, even to those who program it. It seems safe to say its predilection for telling users what they want to hear poses a real problem. Is it one OpenAI can fix?
Cynthia Murrell, May 14, 2025