News Flash: Young Workers Are Not Happy. Who Knew?
August 12, 2025
No AI. Just a dinobaby being a dinobaby.
My newsfeed service pointed me to an academic paper in mid-July 2025. I am just catching up, and I thought I would document this write up from big thinkers at Dartmouth College and University College London titled “Rising Young Worker Despair in the United States.”
The write up is unlikely to become a must-read for recent college graduates or youthful people vaporized from their employers’ payroll. The main point is that the work processes of hiring and plugging away are driving people crazy.
The authors point out this revelation:
In this paper we have confirmed that the mental health of the young in the United States has worsened rapidly over the last decade, as reported in multiple datasets. The deterioration in mental health is particularly acute among young women…. The relative prices of housing and childcare have risen. Student debt is high and expensive. The health of young adults has also deteriorated, as seen in increases in social isolation and obesity. Suicide rates of the young are rising. Moreover, Jean Twenge provides evidence that the work ethic itself among the young has plummeted. Some have even suggested the young are unhappy having BS jobs.
Several points jumped out from the 38-page paper:
- The only reference to smart software or AI was in the word “despair.” This word appears 78 times in the document.
- Social media gets a few nods with eight references in the main paper and again in the endnotes. Isn’t social media a significant factor? My question is, “What’s the connection between social media and the mental states of the sample?”
- YouTube is chock full of first person accounts of job despair. A good example is Dari Step’s video “This Job Hunt Is Breaking Me and Even California Can’t Fix It Though It Tries.” One can feel the inner turmoil of this person. The video runs 23 minutes and you can find it (as of August 4, 2025) at this link: https://www.youtube.com/watch?v=SxPbluOvNs8&t=187s&pp=ygUNZGVtaSBqb2IgaHVudA%3D%3D. A “study” is one thing with numbers and references to hump curves. A first-person approach adds a bit of sizzle, in my opinion.
A few observations seem warranted:
- The US social system is cranking out people who are likely to be challenging for managers. I am not sure the get-tough approach based on data-centric performance methods will be productive over time
- Whatever is happening in “education” is not preparing young people and recent graduates to support themselves with old-fashioned jobs. Maybe most of these people will become AI entrepreneurs, but I have some doubts about success rates
- Will the National Bureau of Economic Research pick up the slack for the disarray that seems to be swirling through the Bureau of Labor Statistics as I write this on August 4, 2025?
Stephen E Arnold, August 12, 2025
Paywalls. Users Do Not Want Them. Wow. Who Knew?
August 12, 2025
Sometimes research simply confirms the obvious. The Pew Research Center declares, “Few Americans Pay for News when they Encounter Paywalls.” Anyone still hoping the death of journalism could be forestalled with paywalls should reconsider. Writers Emily Tomasik and Michael Lipka cite a March Pew survey that found 83% of Americans have not paid for news in the past year. What do readers do when they hit a paywall? A mere 1% of those surveyed have forked over the dough to continue. However, 53% say they seek the same information elsewhere and 32% just give up on accessing it. Why? The write-up summarizes:
“Among the 83% of U.S. adults who have not paid for news in the past year, the most common reason they cite is that they can find plenty of other news articles for free. About half of those who don’t pay for news (49%) say this is the main reason. Indeed, many news websites do not have paywalls. Others have recently loosened paywalls or removed them for certain content like public emergencies or public interest stories. Another common reason people don’t pay for news is that they are not interested enough (32%). Smaller shares of Americans who don’t pay for news say the main reason is that it’s too expensive (10%) or that the news provided isn’t good enough to pay for (8%).”
The study did find some trends around who does pay for journalism. We learn:
“Overall, 17% of U.S. adults pay for news. However, highly educated adults, Democrats and older Americans – among other demographic groups – are more likely to have paid for news.
For example, 27% of college graduates say they have directly paid a news source by subscribing, donating or becoming a member in the last year – triple the share of those with a high school diploma or less formal education who have done so.”
So, those who paid to acquire knowledge are willing to pay to acquire knowledge. Who could have guessed? The survey also found senior citizens, wealthy folks, and white Americans more often pay up. Anyone curious about the survey’s methodology can read about it here.
The rule of thumb I use is that if one has 100 “readers”, two will pay if the content is really good. Must-have content bumps up the number a bit, but online publishers have to spend big on marketing to move the needle. Stick with ads and sponsored content.
Cynthia Murrell, August 12, 2025
Self-Appointed Gatekeepers and AI Wizards Clash
August 11, 2025
No AI. Just a dinobaby being a dinobaby.
Cloudflare wants to protect those with content. Perplexity wants content. Cloudflare sees an opportunity to put up a Google-type toll booth on the Information Highway. Perplexity sees traffic stops of any type the way a soccer mom perceives an 80-year-old driving at the speed limit.
Perplexity has responded to Cloudflare’s allegations that it uses techniques to crawl sites that may not want to be indexed.
“Agents or Bots? Making Sense of AI on the Open Web” states:
Cloudflare’s recent blog post managed to get almost everything wrong about how modern AI assistants actually work.
In addition to misunderstanding 20-25M user agent requests are not scrapers, Cloudflare claimed that Perplexity was engaging in “stealth crawling,” using hidden bots and impersonation tactics to bypass website restrictions. But the technical facts tell a different story.
It appears Cloudflare confused Perplexity with 3-6M daily requests of unrelated traffic from BrowserBase, a third-party cloud browser service that Perplexity only occasionally uses for highly specialized tasks (less than 45,000 daily requests).
Because Cloudflare has conveniently obfuscated their methodology and declined to answer questions helping our teams understand, we can only narrow this down to two possible explanations.
- Cloudflare needed a clever publicity moment and we–their own customer–happened to be a useful name to get them one.
- Cloudflare fundamentally misattributed 3-6M daily requests from BrowserBase’s automated browser service to Perplexity, a basic traffic analysis failure that’s particularly embarrassing for a company whose core business is understanding and categorizing web traffic.
The idea is to provide two choices, a technique much loved by vaudeville comedians on the Paul Whiteman circuit decades ago; for example, “Have you stopped stealing office supplies?”
I find this situation interesting for several reasons:
- Smart software outfits have been sucking down data
- The legal dust-ups, the license fees, even the posture of the US government seem dynamic; that is, uncertain
- Clever people often find themselves tripped by their own clever lines.
My view is that when tech companies squabble, the only winners are the lawyers; the users lose.
Stephen E Arnold, August 11, 2025
The Human Mind in Software. It Is Alive!
August 11, 2025
Has this team of researchers found LLM’s holy grail? Science magazine reports, “Researchers Claim their AI Model Simulates the Human Mind. Others are Skeptical.” The team’s paper, published in Nature, claims the model can both predict and simulate human behavior. Predict is believable. Simulate? That is a much higher bar.
The team started by carefully assembling data from 160 previously published psychology experiments. Writer Cathleen O’Grady tells us:
“The researchers then trained Llama, an LLM produced by Meta, by feeding it the information about the decisions participants faced in each experiment, and the choices they made. They called the resulting model ‘Centaur’—the closest mythical beast they could find to something half-llama, half-human, [researcher Marcel] Binz says.”
Cute. The data collection represents a total of over 60,000 participants who made over 10 million choices. That sounds like a lot. But, as computational cognitive scientist Federico Adolfi notes, 160 experiments is but “a grain of sand in the infinite pool of cognition.” See the write-up for the study’s methodology. The paper claims Centaur’s choices closely aligned with those of human subjects. This means, researchers assert, Centaur could be used to develop experiments before involving human subjects. Hmm, this sounds vaguely familiar.
Other cognitive scientists remain unconvinced. For example:
“Jeffrey Bowers, a cognitive scientist at the University of Bristol, thinks the model is ‘absurd.’ He and his colleagues tested Centaur … and found decidedly un-humanlike behavior. In tests of short-term memory, it could recall up to 256 digits, whereas humans can commonly remember approximately seven. In a test of reaction time, the model could be prompted to respond in ‘superhuman’ times of 1 millisecond, Bowers says. This means the model can’t be trusted to generalize beyond its training data, he concludes.
More important, Bowers says, is that Centaur can’t explain anything about human cognition. Much like an analog and digital clock can agree on the time but have vastly different internal processes, Centaur can give humanlike outputs but relies on mechanisms that are nothing like those of a human mind, he says.”
Indeed. Still, even if the central assertion turns out to be malarkey, there may be value in this research. Both vision scientist Rachel Heaton and computational visual neuroscientist Katherine Storrs are enthusiastic about the dataset itself. Heaton is also eager to learn how, exactly, Centaur derives its answers. Storrs emphasizes that a lot of work has gone into the dataset and the model, and is optimistic that work will prove valuable in the end. Even if Centaur turns out to be less human and more Llama.
Cynthia Murrell, August 11, 2025
DuckDuck Privacy. Go, Go, Go
August 8, 2025
We all know Google tracks us across the Web. But we can avoid that if we use a privacy-touting alternative, right? Not necessarily. Simple Analytics reveals, “Google Is Tracking You (Even When You Use DuckDuckGo).” Note that Simple Analytics is a Google Analytics competitor. So let us keep that in mind as we consider its blog’s assertions. Still, writer Iron Brands cites a study by Safety Detectives as he writes:
“The study analyzed browsing patterns in the US, UK, Switzerland, and Sweden. They used a virtual machine and VPN to simulate users in these countries. By comparing searches on Google and DuckDuckGo, researchers found Google still managed to collect data (often without the user knowing). Here’s how: Google doesn’t just track people through Search or Gmail. Its invisible code runs on millions of sites through Google Analytics, AdSense ads, YouTube embeds, and other background services like Fonts or Maps. That means even if you’re using DuckDuckGo, you’re not totally out of Google’s reach. In Switzerland and Sweden, using DuckDuckGo cut Google tracking by half. But in the US, more than 40% of visited pages still sent data back to Google, despite using a privacy search engine. That’s largely because many US websites rely on Google’s tools for ads and traffic analysis.”
And here we thought Google made such tools affordable out of generosity. The post continues:
“This isn’t just about search engines. It’s about how deeply Google is embedded into the internet’s infrastructure. Privacy-conscious users often assume that switching to DuckDuckGo or Brave is enough. This research says otherwise. … You need more than just a private browser or search engine to reduce tracking. Google’s reach comes from third-party scripts that websites willingly add.”
Brands implores the owners of those websites to stop contributing to the problem. The write-up emphasizes that laws like the EU’s GDPR do not stem the tide. Such countries, we are told, are still awash in Google’s trackers. The solution? For both websites and users to divest themselves of Google as much as possible. As it happens, Brands’ firm offers site owners just such a solution: an analytics platform that is “privacy-first and cookie-free.” Note that Beyond Search has not independently verified these claims. Concerned site owners may also want to check out other Google alternatives.
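The mechanism the study describes, third-party scripts and embeds phoning home, is straightforward to spot-check. Below is a rough sketch using only Python’s standard library that scans a page’s HTML for resources loaded from Google-owned hosts. The host list is illustrative and far from exhaustive, and a real audit would also need to inspect network traffic, not just markup:

```python
# Rough sketch: flag tags whose src/href points at a Google-owned host.
# The GOOGLE_HOSTS list is an illustrative sample, not a complete inventory.
from html.parser import HTMLParser
from urllib.parse import urlparse

GOOGLE_HOSTS = (
    "google-analytics.com", "googletagmanager.com", "googlesyndication.com",
    "doubleclick.net", "youtube.com", "fonts.googleapis.com",
)

class TrackerScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hits = []  # (tag, host) pairs that matched a Google host

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc
                if any(host == h or host.endswith("." + h) for h in GOOGLE_HOSTS):
                    self.hits.append((tag, host))

scanner = TrackerScanner()
scanner.feed('<script src="https://www.googletagmanager.com/gtag/js"></script>')
print(scanner.hits)  # → [('script', 'www.googletagmanager.com')]
```

A page with dozens of such hits sends data Google’s way on every visit, regardless of which search engine brought the reader there.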
Cynthia Murrell, August 8, 2025
Cannot Read? Students Cannot Imagine Either
August 8, 2025
Students are losing the ability to imagine and self-reflect on their own lives says the HuffPost in the article: “I Asked My Students To Write An Essay About Their Lives. The Reason 1 Student Began To Panic Left Me Stunned.” While Millennials were the first generation to be completely engrossed in the Internet, Generation Z is the first generation to have never lived without screens. Because of the Internet’s constant presence, kids have unfortunately developed bad habits where they zone out and don’t think.
Zen masters work for years to shut off their brains, but Gen Z can do it automatically with a screen. This is a horrible thing for critical thinking skills and imagination, because these kids don’t know how to think without the assistance of AI. The article’s writer, Liz Rose Shulman, teaches high school and college students. She assigns them essays, and without hesitation her students rely on AI to complete the assignments.
The students either use Grammarly to help them write everything or they rely on ChatGPT to generate an essay. The overreliance on AI tools means they don’t know how to use their brains. They’re unfamiliar with the standard writing process, problem solving, and being creative. The kids don’t believe there’s a problem with using AI. Many teachers believe the same thing and are adopting it into their curriculums.
The students are flummoxed when they’re asked to write about themselves:
“I assigned a writing prompt a few weeks ago that asked my students to reflect on a time when someone believed in them or when they believed in someone else.
One of my students began to panic.
‘I have to ask Google the prompt to get some ideas if I can’t just use AI,’ she pleaded and then began typing into the search box on her screen, ‘A time when someone believed in you.’ ‘It’s about you,’ I told her. ‘You’ve got your life experiences inside of your own mind.’ It hadn’t occurred to her — even with my gentle reminder — to look within her own imagination to generate ideas. One of the reasons why I assigned the prompt is because learning to think for herself now, in high school, will help her build confidence and think through more complicated problems as she gets older — even when she’s no longer in a classroom situation.”
What’s even worse is that kids are addicted to their screens and lack basic communication skills. Every generation goes through issues with older generations. Society will adapt and survive, but let’s start teaching how to think and imagine again! Maybe bringing back recess and enforcing time without screens would help, even for older people.
Whitney Grace, August 8, 2025
Billions at Stake: The AI Bot Wars Begin
August 7, 2025
No AI. Just a dinobaby being a dinobaby.
I noticed that the puffs of smoke were actually cannon fire in the AI bot wars. The most recent battle pits Cloudflare (a self-declared policeman of the Internet) against Perplexity, one of the big-buck AI outfits. What is the fight about? Cloudflare believes there is a good way to crawl and obtain publicly accessible content. Perplexity is just doing what those Silicon Valley folks have done for decades: do stuff and apologize (or not) later.
WinBuzzer’s “Cloudflare Accuses Perplexity of Using ‘Stealth Crawlers’ to Evade Web Standards” said on August 4, 2025, at a time that has not yet appeared on my atomic clock:
Web security giant Cloudflare has accused AI search firm Perplexity of using deceptive “stealth crawlers” to bypass website rules and scrape content. In a report Cloudflare states Perplexity masks its bots with generic browser identities to ignore publisher blocks. Citing a breach of internet trust, Cloudflare has removed Perplexity from its verified bot program and is now actively blocking the behavior. This move marks a major escalation in the fight between AI companies and content creators, placing Perplexity’s aggressive growth strategy under intense scrutiny.
I like the characterization of Cloudflare as a Web security giant. Colorful.
What is the estimable smart software company doing? Work arounds. Using assorted tricks, Perplexity is engaging in what WinBuzzer calls “stealth activity.” The method is a time honored one among some bad actors. The idea is to make it difficult for routine filtering to stop the Perplexity bot from sucking down data.
If you want the details of the spoofs that Perplexity’s wizards have been using, navigate to this Ars Technica post. There is a diagram that makes what Perplexity is doing absolutely crystal clear, even to everyone in my old age home. (The diagram captures a flow I have seen some nation-state actors employ to good effect.)
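For readers who want the flavor of the protocol at issue: a polite crawler announces a stable bot identity and checks the site’s robots.txt before fetching; a “stealth” crawler skips both steps and masquerades as an ordinary browser. Here is a minimal sketch using Python’s standard library. The bot name and rules are made up for illustration; they are not Perplexity’s or Cloudflare’s actual values:

```python
# Minimal sketch of the "good citizen" crawl check: declare a user agent
# and honor robots.txt. "ExampleAIBot" is hypothetical; a stealth crawler
# would instead rotate generic browser user agents to dodge these rules.
from urllib.robotparser import RobotFileParser

BOT_UA = "ExampleAIBot/1.0"

def may_fetch(robots_txt: str, url: str, user_agent: str = BOT_UA) -> bool:
    """Return True if the given robots.txt permits this agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# A publisher blocking this bot entirely:
rules = "User-agent: ExampleAIBot\nDisallow: /\n"
print(may_fetch(rules, "https://example.com/article"))  # → False
```

A compliant bot stops at that False. The dispute here is over traffic that, according to Cloudflare, keeps going anyway under a different name.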
The part of the WinBuzzer story I liked addressed the issue of “explosive growth and ethical scrutiny.” The idea of “growth” is interesting. From my point of view, the growth is in the amount of cash that Perplexity and other AI outfits are burning. The idea is, “By golly, we can make this AI stuff generate oodles of cash.” The ethical part is a puzzler. Suddenly Silicon Valley-type AI companies are into ethics. Remarkable.
I wish to share several thoughts:
- I love the gatekeeper role of the “Web security giant.” Aren’t commercial gatekeepers the obvious way to regulate smart software? I am not sharing my viewpoint. I suggest you formulate your own opinion and do with it what you will.
- The behavior of Perplexity, if the allegations are accurate, is not particularly surprising. In fact, in my opinion it is SOP or standard operating procedure for many companies. It is easier to apologize than ask for permission. Does that sound familiar? It should. From Google to the most recent start up, that’s how many of the tech savvy operate. Is change afoot? Yeah, sure. Right away, chief.
- The motivation for the behavior is pragmatic. Outfits like Perplexity have to pull a rabbit out of the hat to make a profit from the computational runaway fusion reactor that is the cost of AI. The fix is to get more content and burn more resources. Very sharp thinking, eh?
Net net: I predict more intense AI fighting. Who will win? The outfits with the most money. Isn’t that the one true way of the corporate world in the US in 2025?
Stephen E Arnold, August 7, 2025
The China Smart, US Dumb Push Is Working
August 7, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I read “The US Should Run Faster on AI Instead of Trying to Trip Up China.” In a casual way, I am keeping an eye open for variations on the “China smart, US dumb” information I spot. The idea is that China is not just keeping pace with US innovation, the Middle Kingdom is either even or leading. The context is that the star burning bright for the American era has begun collapsing into a black hole or maybe to a brown dwarf. Avoidance of the US may be the best policy. As one of Brazil’s leaders noted: “America is not bullying our country [Brazil]. America is bullying the world.”
Right or wrong? I have zero idea.
The cited essay suggests that certain technology and economic policies have given China an advantage. The idea is that the disruptive kid in high school sits in the back of the room and thinks up a better Facebook-type system and then implements it.
The write up states:
The ostensible reason for the [technology and economic] controls was to cripple China’s AI progress. If that was the goal, it has been a failure.
As I zipped through the essay, I noted that the premise of the write up is that the US has goofed. The proof of this is no farther than data about China’s capabilities in smart software. I think that any large language model will evidence bias. Bias is encapsulated in many human-created utterances. I, for example, have written critically about search and retrieval for decades. Am I biased toward enterprise search? Absolutely. I know from experience that software that attempts to index content in an organization inevitably disappoints a user of that system. Why? No system to which I have been exposed has access to the totality of “information” generated by an organization. Maybe someday? But for the last 40 years, systems simply could not deliver what the marketers promised. Therefore, I am biased against claims that an enterprise search system can answer employees’ questions.
China is a slippery fish. I had a brief and somewhat weird encounter with a person deeply steeped in China’s nefarious effort to gain access to US pharma-related data. I have encountered a similar effort afoot in the technical disciplines related to nuclear power. These initiatives illustrate that China wants to be a serious contender for the title of world leader in bio-science and nuclear. Awareness of this type of information access is low even today.
I am, as a dinobaby, concerned that the lack of awareness issue creates more opportunities for information exfiltration from a proprietary source to an “open source” concept. To be frank, I am in favor of a closed approach to technology.
The reason I am making sure I have this source document and my comments is that it is a very good example of how the China smart, US dumb information is migrating into what might be termed a more objective-looking channel.
Net net: China’s weaponizing of information is working reasonably well. We are no longer in TikTok territory.
Stephen E Arnold, August 6, 2025
Microsoft Management Method: Fire Humans, Fight Pollution
August 7, 2025
How Microsoft Plans to Bury its AI-Generated Waste
Here is how one big tech firm is addressing the AI sustainability quandary. Windows Central reports, “Microsoft Will Bury 4.9 Million Tons of ‘Manure’ in a Secretive Deal—All to Offset its AI Energy Demands that Drive Emissions Up by 168%.” We suppose this is what happens when you lay off employees and use the money for something useful. Unlike Copilot.
Writer Kevin Okemwa begins by summarizing Microsoft’s current approach to AI. Windows and Office users may be familiar with the firm’s push to wedge its AI products into every corner of the environment, whether we like it or not. Then there is the feud with former best bud OpenAI, a factor that has Microsoft eyeing a separate path. But whatever the future holds, the company must reckon with one pressing concern. Okemwa writes:
“While it has made significant headway in the AI space, the sophisticated technology also presents critical issues, including substantial carbon emissions that could potentially harm the environment and society if adequate measures aren’t in place to mitigate them. To further bolster its sustainability efforts, Microsoft recently signed a deal with Vaulted Deep (via Tom’s Hardware). It’s a dual waste management solution designed to help remove carbon from the atmosphere in a bid to protect nearby towns from contamination. Microsoft’s new deal with the waste management solution firm will help remove approximately 4.9 million metric tons of waste from manure, sewage, and agricultural byproducts for injection deep underground for the next 12 years. The firm’s carbon emission removal technique is quite unique compared to other rivals in the industry, collecting organic waste which is combined into a thick slurry and injected about 5,000 feet underground into salt caverns.”
Blech. But the process does keep the waste from being dumped aboveground, where it could release CO2 into the environment. How much will this cost? We learn:
“While it is still unclear how much this deal will cost Microsoft, Vaulted Deep currently charges $350 per ton for its carbon removal services. Simple math suggests that the deal might be worth approximately $1.7 billion.”
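The “simple math” holds up. A quick back-of-the-envelope check using the article’s two figures, 4.9 million metric tons at $350 per ton:

```python
# Sanity check on the quoted deal size using the article's figures.
tons = 4.9e6          # metric tons of waste over 12 years, per the article
price_per_ton = 350   # USD, Vaulted Deep's quoted carbon removal rate

total_usd = tons * price_per_ton
print(f"${total_usd / 1e9:.1f} billion")  # → $1.7 billion
```

The exact product is $1.715 billion, which the article sensibly rounds to $1.7 billion.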
That is a hefty price tag. And this is not the only such deal Microsoft has made: We are told it signed a contract with AtmosClear in April to remove almost seven million metric tons of carbon emissions. The company positions such deals as evidence of its good stewardship of the planet. But we wonder—is it just an effort to keep itself from being buried in its own (literal and figurative) manure?
Cynthia Murrell, August 7, 2025
Taylorism, 996, and Motivating Employees
August 6, 2025
No AI. Just a dinobaby being a dinobaby.
No more Foosball. No more Segways in the hallways (thank heaven!). No more ping pong (Wait. Scratch that. You must have ping pong.)
Fortune Magazine reported that Silicon Valley type outfits want to be more like the workplace managed using Frederick Winslow Taylor’s management methods. (Did you know that Mr. Taylor provided the oomph for many blue chip management consulting firms? If you did not, you may be one of the people suggesting that AI will kill off the blue chip outfits. Those puppies will survive.)
“Some Silicon Valley AI Startups Are Asking Employees to Adopt China’s Outlawed 996 Work Model” reports:
Some Silicon Valley startups are embracing China’s outlawed “996” work culture, expecting employees to work 12-hour days, six days a week, in pursuit of hyper-productivity and global AI dominance.
The reason, according to the write up, is:
The rise of the controversial work culture appears to have been born out of the current efficiency squeeze in Silicon Valley. Rounds of mass layoffs and the rise of AI have put pressure and turned up the heat on tech employees who managed to keep their jobs.
My response to this assertion is that it is a convenient explanation. These are factors, to be sure: one can trot out the China smart, US dumb arguments, point to the holes of burning AI cash, and cite the political idiosyncrasies of California and the US government.
But the deeper reason is that Silicon Valley is starting to accept the reality that old-fashioned business methods are semi-useful; among them, the idea that employees should converge on a work location to do what is still called “work.”
What’s the cause of this change? Since hooking electrodes to a worker in a persistent employee-monitoring environment is a step too far for now, going back to the precepts of Freddy is a reasonable compromise.
But those electric shocks would work quite well, don’t you agree? (Sure, China’s work environment sparked a few suicides, but the efficiency is not significantly affected.)
Stephen E Arnold, August 6, 2025

