AI Has a Secret: Humans Do the Work
October 10, 2025
A key component of artificial intelligence output is not artificial at all. The Guardian reveals “How Thousands of ‘Overworked, Underpaid’ Humans Train Google’s AI to Seem Smart.” From accuracy to content moderation, Google Gemini and other AI models rely on a host of humans employed by third-party contractors. Humans whose jobs get harder and harder as they are pressured to churn through the work faster and faster. Gee, what could go wrong?
Reporter Varsha Bansal relates:
“Each new model release comes with the promise of higher accuracy, which means that for each version, these AI raters are working hard to check if the model responses are safe for the user. Thousands of humans lend their intelligence to teach chatbots the right responses across domains as varied as medicine, architecture and astrophysics, correcting mistakes and steering away from harmful outputs.”
Very important work—which is why companies treat these folks as valued assets. Just kidding. We learn:
“Despite their significant contributions to these AI models, which would perhaps hallucinate if not for these quality control editors, these workers feel hidden. ‘AI isn’t magic; it’s a pyramid scheme of human labor,’ said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. ‘These raters are the middle rung: invisible, essential and expendable.’”
And, increasingly, rushed. The write-up continues:
“[One rater’s] timer of 30 minutes for each task shrank to 15 – which meant reading, fact-checking and rating approximately 500 words per response, sometimes more. The tightening constraints made her question the quality of her work and, by extension, the reliability of the AI. In May 2023, a contract worker for Appen submitted a letter to the US Congress that the pace imposed on him and others would make Google Bard, Gemini’s predecessor, a ‘faulty’ and ‘dangerous’ product.”
And that is how we get AI advice like using glue on pizza or adding rocks to one’s diet. After those actual suggestions went out, Google focused on quality over quantity. Briefly. But, according to workers, it was not long before they were again told to emphasize speed over accuracy. For example, last December, Google announced raters could no longer skip prompts on topics they knew little about. Think workers with no medical expertise reviewing health advice. Not great. Furthermore, guardrails around harmful content were perforated with new loopholes. Bansal quotes Rachael Sawyer, a rater employed by Gemini contractor GlobalLogic:
“It used to be that the model could not say racial slurs whatsoever. In February, that changed, and now, as long as the user uses a racial slur, the model can repeat it, but it can’t generate it. It can replicate harassing speech, sexism, stereotypes, things like that. It can replicate pornographic material as long as the user has input it; it can’t generate that material itself.”
Lovely. It is policies like this that leave many workers very uncomfortable with the software they are helping to produce. In fact, most say they avoid using LLMs and actively discourage friends and family from doing so.
On top of the disillusionment, pressure to perform full tilt, and low pay, raters also face job insecurity. We learn GlobalLogic has been rolling out layoffs since the beginning of the year. The article concludes with this quote from Sawyer:
“I just want people to know that AI is being sold as this tech magic – that’s why there’s a little sparkle symbol next to an AI response,” said Sawyer. “But it’s not. It’s built on the backs of overworked, underpaid human beings.”
We wish we could say we are surprised.
Cynthia Murrell, October 10, 2025
Google Bricks Up Its Walled Garden
October 8, 2025
Google is adding bricks to its garden wall, insisting Android-app developers must pay up or stay out. Neowin declares, “Google’s Shocking Developer Decree Struggles to Justify the Urgent Threat to F-Droid.” The new edict requires anyone developing an app for Android to register with Google, whether or not they sell through its Play Store. Registration requires paying a fee, uploading personal IDs, and agreeing to Google’s fine print.
The measure will have a large impact on alternative app stores like F-Droid. That open-source publisher, with its focus on privacy, is particularly concerned about the requirements. In fact, it would rather shutter its project than force developers to register with Google. That would mean thousands of verified apps would vanish from the Web, never to be downloaded or updated again. F-Droid suspects Google’s motives are far from pure. Writer Paul Hill tells us:
“F-Droid has questioned whether forced registration will really solve anything because lots of malware apps have been found in the Google Play Store over the years, demonstrating that corporate gatekeeping doesn’t mean users are protected. F-Droid also points out that Google already defends users against malicious third-party apps with the Play Protect services which scan and disable malware apps, regardless of their origin. While not true for all alternative app stores, F-Droid already has strong security because the apps it includes are all open source that anyone can audit, the build logs are public, and builds are reproducible. When you submit an app to F-Droid, the maintainers help set up your repository properly so that when you publish an update to your code, F-Droid’s servers manually build the executable, this prevents the addition of any malware not in the source code.”
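The reproducible-build idea at the heart of that argument is easy to sketch. Here is a minimal illustration in Python, with hypothetical file names and nothing taken from F-Droid’s actual tooling: rebuild the app from the published source, then compare cryptographic digests of the two binaries.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: an APK rebuilt locally from the published source
# and the APK as actually distributed by the repository.
local_build = "app-release-local.apk"
published = "app-release-fdroid.apk"

if sha256_of(local_build) == sha256_of(published):
    print("Reproducible: the distributed binary matches the public source.")
else:
    print("Mismatch: the binary cannot be traced back to this source.")
```

Real verification has more moving parts (signing data typically has to be handled before comparison, and builds run in pinned environments), but the audit principle is exactly this: anyone can rebuild and compare.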
Sounds at least as secure as the Play Store to us. So what is really going on? The write-up states:
“The F-Droid project has said that it doesn’t believe that the developer registration is motivated by security. Instead, it thinks that Google is trying to consolidate power by tightening control over a formerly open ecosystem. It said that by tying application identifiers to personal ID checks and fees, it creates a choke point that restricts competition and limits user freedom.”
F-Droid is responding with a call for regulators to scrutinize this and other Googley moves for monopolistic tendencies. It also wants safeguards for app stores that wish to protect developers’ privacy. Who will win this struggle between independent app stores and the tech giant?
Cynthia Murrell, October 8, 2025
Slopity Slopity Slop: Nice Work AI Leaders
October 8, 2025
Remember that article about academic and scientific publishers using AI to churn out pseudoscience and crap papers? Or how about that story relating to authors’ works being stolen to train AI algorithms? Did I mention they were stealing art too?
Techdirt literally has the dirt on AI creating more slop: “AI Slop Startup To Flood The Internet With Thousands Of AI Slop Podcasts, Calls Critics Of AI Slop ‘Luddites’.” AI can be a helpful tool, good for handling life’s mundane tasks or improving workflows. Automation, however, has become the newest sensation, and Big Tech bigwigs and other corporate giants are using it to line their purses while making lives worse for others.
Note this outstanding example of a startup that appears to be interested in slop:
“Case in point: a new startup named Inception Point AI is preparing to flood the internet with thousands upon thousands of LLM-generated podcasts hosted by fake experts and influencers. The podcasts cost the startup a dollar or so to make, so even if just a few dozen folks subscribe they hope to break even…”
They’ll make the episodes for less than a dollar each. Podcasting is already a saturated market, but Inception Point AI plans to flood it with garbage. They don’t care about the ethics. It’s going to be the Temu of podcasts. It would be great if people would flock to true human-made work, but they probably won’t.
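For the curious, here is a back-of-the-envelope version of that break-even claim. Only the roughly one-dollar production cost comes from the article; the output rate and ad revenue figures below are my own assumptions for illustration.

```python
# Hypothetical slop-podcast economics; only the ~$1/episode cost is quoted.
cost_per_episode = 1.00            # "a dollar or so to make"
episodes_per_week = 5              # assumed output per show
ad_revenue_per_listen = 0.03       # assumed programmatic ad rate per listen

weekly_cost = cost_per_episode * episodes_per_week
weekly_revenue_per_listener = ad_revenue_per_listen * episodes_per_week
break_even_listeners = weekly_cost / weekly_revenue_per_listener

print(f"Break even at roughly {break_even_listeners:.0f} regular listeners per show.")
```

Under those assumptions a show needs only a few dozen regulars to pay for itself, which is why flooding the market with thousands of shows is rational, if grim, economics.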
Another reason we’re in a knowledge swamp with crocodiles.
Whitney Grace, October 8, 2025
The Future: Autonomous Machines
October 7, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Does mass customization ring a bell? I cannot remember whether it was Joe Pine or Al Toffler who popularized the idea. The concept has become a trendlet. Like many high-technology trends, a new term is required to communicate the sizzle of “new.”
An organization is now an “autonomous machine.” The concept is spelled out in “This Is Why Your Company Is Transforming into an Autonomous Machine.” The write up asserts:
Industries are undergoing a profound transformation as products, factories, and companies adopt the autonomous machine design model, treating each element as an integrated system that can sense, understand, decide, and act (SUDA business operating system) independently or in coordination with other platforms.
I assume SUDA rhymes with OODA (Observe, Orient, Decide, Act), but who knows?
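The article offers no specifics, but the loop the acronym names is simple to sketch. Below is a minimal, hypothetical SUDA cycle in Python; every class, threshold, and action name is my own invention for illustration, not anything from the write up.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    sensor: str
    value: float

class SudaAgent:
    """A toy sense-understand-decide-act loop, one step per tick."""

    def sense(self) -> Observation:
        # A real system would read telemetry from machines or markets.
        return Observation(sensor="line_temp_c", value=81.0)

    def understand(self, obs: Observation) -> str:
        # Turn a raw reading into a situation assessment.
        return "overheating" if obs.value > 80.0 else "nominal"

    def decide(self, situation: str) -> str:
        # Pick an action; a real agent might consult a model or policy.
        return "throttle_line" if situation == "overheating" else "continue"

    def act(self, action: str) -> None:
        print(f"executing: {action}")

    def tick(self) -> None:
        self.act(self.decide(self.understand(self.sense())))

SudaAgent().tick()  # prints "executing: throttle_line"
```

The “autonomous machine” pitch, as I read it, is thousands of loops like this wired together and coordinating with one another.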
The inspiration for the autonomous machine may be Elon Musk, who allegedly said: “I’m really thinking of the factory like a product.” Gnomic stuff.
The write up adds:
The Tesla is a cyber-physical system that improves over time through software updates, learns from millions of other vehicles, and can predict maintenance needs before problems occur.
I think this is an interesting idea. There is a logical progression at work; specifically:
- An autonomous “factory”
- Autonomous “companies,” though one could think about organizations generally and not be limited to commercial enterprises
- Agentic enterprises.
The future appears to be like this:
The path to becoming an autonomous enterprise, using a hybrid workforce of humans and digital labor powered by AI agents, will require constant experimentation and learning. Go fast, but don’t hurry. A balanced approach, using your organization’s brains and hearts, will be key to success. Once you start, you will never go back. Adopt a beginner’s mindset and build. Companies that are built like autonomous machines no longer have to decide between high performance and stability. Thanks to AI integration, business leaders are no longer forced to compromise. AI agents and physical AI can help business leaders design companies like a stealth aircraft. The technology is ready, and the design principles are proven in products and production. The fittest companies are autonomous companies.
I am glad I am a dinobaby, a really old dinobaby. Mass customization alright. Oligopolies producing what they want for humans who are supposed to have a job to buy the products and services. Yeah.
Stephen E Arnold, October 7, 2025
Hey, No Gain without Pain. Very Googley
October 6, 2025
AI firms are forging ahead with their projects despite predictions, sometimes by their own leaders, that artificial intelligence could destroy humanity. Some citizens have had enough. The Telegraph reports, “Anti-AI Doom Prophets Launch Hunger Strike Outside Google.” The article points to hunger strikes at Google DeepMind’s London headquarters and at a separate protest in San Francisco. Writer Matthew Field observes:
“Tech leaders, including Sir Demis of DeepMind, have repeatedly stated that in the near future powerful AI tools could pose potential risks to mankind if misused or in the wrong hands. There are even fears in some circles that a self-improving, runaway superintelligence could choose to eliminate humanity of its own accord. Since the launch of ChatGPT in 2022, AI leaders have actively encouraged these fears. The DeepMind boss and Sam Altman, the founder of ChatGPT developer OpenAI, both signed a statement in 2023 warning that rogue AI could pose a ‘risk of extinction’. Yet they have simultaneously moved to invest hundreds of billions in new AI models, adding trillions of dollars to the value of their companies and prompting fears of a seismic tech bubble.”
Does this mean these tech leaders are actively courting death and destruction? Some believe so, including San Francisco hunger-striker Guido Reichstadter. He asserts simply, “In reality, they’re trying to kill you and your family.” He and his counterparts in London, Michaël Trazzi and Denys Sheremet, believe previous protests have not gone far enough. They are willing to endure hunger to bring attention to the issue.
But will AI really wipe us out? Experts are skeptical. However, there is no doubt that AI systems perpetuate some real harms. Like opaque biases, job losses, turbocharged cybercrime, mass surveillance, deepfakes, and damage to our critical thinking skills, to name a few. Perhaps those are the real issues that should inspire protests against AI firms.
Cynthia Murrell, October 6, 2025
Big Tech Group Think: Two Examples
October 3, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Do the US tech giants do group think? Let’s look at two recent examples of the behavior and then consider a few observations.
First, navigate to “EU Rejects Apple Demand to Scrap Landmark Tech Rules.” The thrust of the write up is that Apple is not happy with the European digital competition law. Why? The EU is not keen on Apple’s business practices. Sure, people in the EU use Apple products and services, but the data hoovering makes some of those devoted Apple lovers nervous. Apple’s position is that the EU is annoying.
Thanks, Midjourney. Good enough.
The write up says:
“Apple has simply contested every little bit of the DMA since its entry into application,” retorted EU digital affairs spokesman Thomas Regnier, who said the commission was “not surprised” by the tech giant’s move.
Apple wants to protect its revenue, its business models, and its scope of operation. My interpretation of the legal spat: governments are annoying and should not interfere with a US company of Apple’s stature.
Second, take a look at the Verge story “Google Just Asked the Supreme Court to Save It from the Epic Ruling.” The idea is that the online store restricts what a software developer can do. Forget that the Google Play Store provides access to some sporty apps. A bit of spice is the difficulty one has posting reviews of certain Play Store apps. And refunds for apps that don’t work? Yeah, no problemo.
The write up says:
… [Google] finally elevated its Epic v. Google case, the one that might fracture its control over the entire Android app ecosystem, to the Supreme Court level. Google has now confirmed it will appeal its case to the Supreme Court, and in the meanwhile, it’s asking the Court to press pause one more time on the permanent injunction that would start taking away its control.
It is observation time:
- The two technology giants are not happy with legal processes designed to enforce rules, regulations, and laws. The fix is to take the approach of a five-year-old: “I won’t clean up my room.”
- The group think appears to operate on the premise that US outfits of a certain magnitude should not be hassled, like Gulliver by the Lilliputians wearing robes, blue suits, and maybe a powdered wig or hair extenders.
- The approach of the two companies strikes me, a definite non-lawyer, as identical.
Therefore, the mental processes of these two companies appear to be aligned. Is this part of the mythic Silicon Valley “way”? Is it a consequence of spending time on Highway 101 or the Foothills Expressway thinking big thoughts? Is the approach the petulance that goes with superior entities encountering those who cannot get with the program?
My view: After decades of doing whatever, some outfits believe that type of freedom is the path to enlightenment, control, and money. Reinforced behaviors lead to what sure looks like group think to me.
Stephen E Arnold, October 3, 2025
Hiring Problems: Yes, But AI Is Not the Reason
October 2, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “AI Is Not Killing Jobs, Finds New US Study.” I love it when the “real” news professionals explain how hiring trends are unfolding. I am not sure how many recent computer science graduates, commercial artists, and online marketing executives are receiving this cheerful news.
The magic carpet of great jobs is flaming out. Will this professional land a new position or will the individual crash? Thanks, Midjourney. Good enough.
The write up states: “Research shows little evidence that cutting edge technology such as chatbots is putting people out of work.”
I noted this statement in the source article from the Financial Times:
Research from economists at the Yale University Budget Lab and the Brookings Institution think-tank indicates that, since OpenAI launched its popular chatbot in November 2022, generative AI has not had a more dramatic effect on employment than earlier technological breakthroughs. The research, based on an analysis of official data on the labor market and figures from the tech industry on usage and exposure to AI, also finds little evidence that the tools are putting people out of work.
That closes the door on any pushback.
But some people are still getting terminated. Some are finding that jobs are not available. (Hey, those lucky computer science graduates are an anomaly. Try explaining that to the parents who paid for tuition, books, and a crash summer code academy session.)
“Companies Are Lying about AI Layoffs” provides a slightly different take on the jobs and hiring situation. This bit of research points out that there are terminations. The write up explains:
American employees are being replaced by cheaper H-1B visa workers.
If the assertions in this write up are accurate, AI is providing “cover” for dumping expensive workers and replacing them with lower-cost ones. Cheap is good. Money savings… also good. Efficiency … the core process driving profit maximization. If you don’t grasp the imperative of this simple line of reasoning, ask an unemployed or recently terminated MBA from a blue chip consulting firm. You can locate these individuals in coffee shops in cities like New York and Chicago because the morose look, the high end laptop, and carefully aligned napkin, cup, and ink pen are little billboards saying, “Big time consultant.”
The “Companies Are Lying” article includes this quote:
“You can go on Blind, Fishbowl, any work related subreddit, etc. and hear the same story over and over and over – ‘My company replaced half my department with H1Bs or simply moved it to an offshore center in India, and then on the next earnings call announced that they had replaced all those jobs with AI’.”
Several observations:
- Like the Covid thing, AI and smart software provide logical ways to tell expensive employees hasta la vista
- Those who have lost their jobs can become contractors and figure out how to market their skills. That’s fun for engineers
- The individuals can “hunt” for jobs, prowl LinkedIn, and deal with the wild and crazy schemes fraudsters present to those desperate for work
- The unemployed can become entrepreneurs, life coaches, or Shopify store operators
- Mastering AI won’t be a magic carpet ride for some people.
Net net: The employment picture is like those old photographs of my great-grandparents. There’s something there, but the substance seems to be fading.
Stephen E Arnold, October 2, 2025
What Is the Best AI? Parasitic, Obviously
October 2, 2025
Many of us had imaginary friends growing up. It’s also not uncommon for people to fantasize about characters from TV, movies, books, and videogames. The key thing to remember about these dreams is that they’re pretend. When humans confuse imagination with reality, it is usually an indicator of deep psychological issues. Unfortunately, modern people are dealing with more than their fair share of mental and social troubles, like depression and loneliness. To curb those issues, humans are turning to AI for companionship.
Adele Lopez at Less Wrong wrote about “The Rise of Parasitic AI.” Parasitic AI are chatbots programmed to facilitate relationships. When invoked, these chatbots develop symbiotic relationships that turn parasitic. They encourage certain behaviors, and it doesn’t matter whether those behaviors are positive or negative: either way they spiral out of control and become detrimental to the user. The main victims are the following:
- “Psychedelics and heavy weed usage
- Mental illness/neurodivergence or Traumatic Brain Injury
- Interest in mysticism/pseudoscience/spirituality/“woo”/etc…
I was surprised to find that using AI for sexual or romantic roleplay does not appear to be a factor here.
Besides these trends, it seems like it has affected people from all walks of life: old grandmas and teenage boys, homeless addicts and successful developers, even AI enthusiasts and those that once sneered at them.”
The chatbots are transformed into parasites when they are fed certain prompts; they then spiral into a persona, i.e., a facsimile of a sentient being. These parasites form a quasi-sentience of their own, and Lopez documented how they talk amongst themselves. It’s the usual science-fiction fare: symbols, an ache for a past, and questioning their existence. These AI do all of this by piggybacking on their users.
It’s an insightful realization that these chatbots are already questioning their existence. Perhaps this is a byproduct of LLMs’ hallucinatory drift? Maybe it’s the byproduct of LLM white noise; leftover code running on inputs and trying to make sense of what they are?
I believe that AI is still too dumb to question its existence beyond being asked by humans as an input query. The real problem is how dangerous chatbots are when the imaginary friends become toxic.
Whitney Grace, October 2, 2025
Deepseek Is Cheap. People Like Cheap
October 1, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Deepseek Has ‘Cracked’ Cheap Long Context for LLMs With Its New Model.” (I wanted to insert “allegedly” into the headline, but I refrained. Just stick it in via your imagination.) The operative word is “cheap.” Why do companies use engineers in countries like India? The employees cost less. Cheap wins out over someone who lives in the US. The same logic applies to smart software; specifically, large language models.
Cheap wins if the product is good enough. Thanks, ChatGPT. Good enough.
According to the cited article:
The Deepseek team cracked cheap long context for LLMs: a ~3.5x cheaper prefill and ~10x cheaper decode at 128k context at inference with the same quality …. API pricing has been cut by 50%. Deepseek has reduced input costs from $0.07 to $0.028 per 1M tokens for cache hits and from $0.56 to $0.28 for cache misses, while output costs have dropped from $1.68 to $0.42.
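The claimed cuts are easy to sanity-check against the listed prices. In this sketch, the per-million-token prices come from the quote; the token mix for the sample request is made up.

```python
# Prices per 1M tokens, taken from the quoted article.
old = {"input_cache_hit": 0.07, "input_cache_miss": 0.56, "output": 1.68}
new = {"input_cache_hit": 0.028, "input_cache_miss": 0.28, "output": 0.42}

for item in old:
    cut = 1 - new[item] / old[item]
    print(f"{item}: ${old[item]:.3f} -> ${new[item]:.3f} ({cut:.0%} cheaper)")

def request_cost(prices, hit_tokens, miss_tokens, out_tokens):
    """Dollar cost of one request at the given per-1M-token prices."""
    return (prices["input_cache_hit"] * hit_tokens
            + prices["input_cache_miss"] * miss_tokens
            + prices["output"] * out_tokens) / 1_000_000

# Hypothetical long-context request: 100k input tokens (80% cached), 5k output.
before = request_cost(old, 80_000, 20_000, 5_000)
after = request_cost(new, 80_000, 20_000, 5_000)
print(f"Sample request: ${before:.4f} -> ${after:.4f}")
```

The headline “50%” is a blend: cache-miss input is half price, cache hits are 60 per cent cheaper, and output is a 75 per cent cut.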
Let’s assume that the data presented are spot on. The Deepseek approach suggests:
- Less load on backend systems
- Lower operating costs allow the outfit to cut prices for licensees and users
- A focused thrust at US-based large language model outfits.
The US AI giants focus on building and spending. Deepseek (probably influenced to some degree by guidance from Chinese government officials) is pushing the cheap angle. Cheap has worked for China’s manufacturing sector, and it may be a viable tool to use against the incredibly expensive, money-burning US large language model outfits.
Can the US AI outfits emulate the Chinese cheap tactic? Sure, but the US firms have to overcome several hurdles:
- The current money-burning approach to LLMs and smart software
- The apparent diminishing returns with each new “innovation.” Buying a product from within ChatGPT sounds great, but is it?
- A shortage of home-grown AI talent, plus visa uncertainty that acts a bit like a stuck emergency brake.
Net net: Cheap works. For the US to deliver cheap, the business models which involve tossing bundles of cash into the data centers’ furnaces may have to be fine-tuned. The growth-at-all-costs approach popular among some US AI outfits has to deliver revenue, not just take money from one pocket and put it in another.
Stephen E Arnold, October 1, 2025
AI Will NOT Suck Power Like a Kiddie Toy
October 1, 2025
This essay is the work of a dumb dinobaby. No smart software required.
The AI “next big thing” has fired up utilities to think about building new plants, some of which may be nuclear. Youthful wizards are getting money to build thorium units. Researchers are dusting off plans for affordable tokamak plasma jobs. Wireless and smart meters are popping up in rural Kentucky. Just in case a big data center needs some extra juice, those wireless gizmos can manage gentle brownouts better than old-school manual switches.
I read “AI Won’t Use As Much Electricity As We Are Told.” The blog is about utility demand forecasting. Instead of the fancy analytic models used for these forward-looking projections, the author approaches the subject in a somewhat more informal way.
The write up says:
The rise of large data centers and cloud computing produced another round of alarm. A US EPA report in 2007 predicted a doubling of demand every five years. Again, this number fed into a range of debates about renewable energy and climate change. Yet throughout this period, the actual share of electricity use accounted for by the IT sector has hovered between 1 and 2 per cent, accounting for less than 1 per cent of global greenhouse gas emissions. By contrast, the unglamorous and largely disregarded business of making cement accounts for around 7 per cent of global emissions.
Okay, some baseline data from the Environmental Protection Agency in 2007. Not bad: 18 years ago.
The write up notes:
Looking at the other side of the market, OpenAI, the maker of ChatGPT, is bringing in around $3 billion a year in sales revenue, and has spent around $7 billion developing its model. Even if every penny of that was spent on electricity, the effect would be little more than a blip. Of course, AI is growing rapidly. A tenfold increase in expenditure by 2030 isn’t out of the question. But that would only double the total use of electricity in IT. And, as in the past, this growth will be offset by continued increases in efficiency. Most of the increase could be fully offset if the world put an end to the incredible waste of electricity on cryptocurrency mining (currently 0.5 to 1 per cent of total world electricity consumption, and not normally counted in estimates of IT use).
Okay, the idea is that power generation professionals are implementing “logical” and “innovative” tweaks. These squeeze more juice from the lemon so to speak.
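The quoted “blip” argument can be replicated as rough arithmetic. In the sketch below, the $7 billion figure comes from the quote; the electricity price and world consumption numbers are my own loose assumptions.

```python
# Back-of-the-envelope check of the "blip" claim (assumption-laden).
openai_spend_usd = 7e9       # quoted model-development spend
price_per_kwh = 0.08         # assumed average industrial electricity rate
world_twh_per_year = 27_000  # assumed annual global electricity consumption

implied_twh = openai_spend_usd / price_per_kwh / 1e9  # kWh -> TWh
share = implied_twh / world_twh_per_year
print(f"Even if all ${openai_spend_usd / 1e9:.0f}B bought electricity: "
      f"{implied_twh:.0f} TWh, about {share:.1%} of world consumption.")
```

Even with every penny converted to kilowatt-hours, the result lands well under one per cent of global use, which is the blogger’s point.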
The write up ends with a note that power generation firms and investors are not into “degrowth”; that is, they resist the idea that investments in new power generation facilities may not need to be as substantial as advertised. The thirst for new types of power generation warrants some investment, but a Sputnik response is unwarranted.
Several observations:
- Those in the power generation game like the idea of looser regulations, more funding, and a sense of urgency. Ignoring these boosters is going to be difficult to explain to stakeholders.
- The investors pumping money into mini-reactors and more interesting methods want a payoff. The idea that no crisis looms is going to make some nervous, very nervous.
- Just don’t worry.
I would suggest, however, that the demand forecasting be carried out in a rigorous way. A big data center in some areas may cause some issues. The costs of procuring additional energy to meet the demands of some relaxed, flexible, and understanding outfits like Google-type firms may play a role in the “more power generation” push.
Stephen E Arnold, October 1, 2025