Brin: The Balloons Do Not Have Pull. It Is AI Now

June 18, 2025

It seems the nitty gritty of artificial intelligence has lured Sergey Brin back onto the Google campus. After stepping away from day-to-day operations in 2019, Brin is back; eWeek reports, “Google’s Co-Founder in Office ‘Pretty Much Every Day’ to Work on AI.” Writer Fiona Jackson tells us:

“Google co-founder Sergey Brin made an unannounced appearance on stage at the I/O conference on Tuesday, stating that he’s in the company’s office ‘pretty much every day now’ to work on Gemini. In a chat with DeepMind CEO Demis Hassabis, he claimed this is because artificial intelligence is something that naturally interests him. ‘I tend to be pretty deep in the technical details,’ Brin said, according to Business Insider. ‘And that’s a luxury I really enjoy, fortunately, because guys like Demis are minding the shop. And that’s just where my scientific interest is.’”

We love Brin’s work ethic. Highlights include borrowing Yahoo online ad ideas, the CLEVER patent, and using product promotions as a way to satisfy some primitive human desires. The executive also believes in 60-hour work weeks—at least for employees. Jackson notes Brin is also known for the downfall of Google Glass. Though that spiffy product faced privacy concerns and an unenthusiastic public, Brin recently blamed his ignorance of electronic supply chains for the failure. Great. Welcome back. But what about the big balloon thing?

Cynthia Murrell, June 18, 2025

AI Can Do Code, Right?

June 18, 2025

Developer Jj at Blogmobly deftly rants against AI code assistants in, “The Copilot Delusion.” Jj admits tools like GitHub Copilot and Claude Code are good at some things, but those tasks are mere starting points for skillful humans to edit or expand upon. Or they should be. Instead, firms turn to bots more than they should in the name of speed. But AI gets its information from random blog posts and comment sections. Those are nowhere near the reasoning and skills of an experienced human coder. What good are lines of code that are briskly generated if they do not solve the problem?

Read the whole post for the strong argument for proficient humans and against overreliance on bots. These paragraphs stuck out to us:

“The real horror isn’t that AI will take our jobs. It’s that it will entice people who never wanted the job to begin with. People who don’t care for quality. It’ll remove the already tiny barrier to entry that at-least required people to try and comprehend control flow. Vampires with SaaS dreams and Web3 in their LinkedIn bio. Empty husks who see the terminal not as a frontier, but as a shovel for digging up VC money. They’ll drool over their GitHub Copilot like it’s the holy spirit of productivity, pumping out React CRUD like it’s oxygen. They’ll fork VS Code yet again, just to sell the same dream to a similarly deluded kid.”

Also:

“And what’s worse, we’ll normalize this mediocrity. Cement it in tooling. Turn it into a best practice. We’ll enshrine this current bloated, sluggish, over-abstracted hellscape as the pinnacle of software. The idea that building something lean and wild and precise, or even squeezing every last drop of performance out of a system, will sound like folklore. If that happens? If the last real programmers are drowned in a sea of button-clicking career-chasers – then I pity the smart outsider kids to come after me. Defer your thinking to the bot, and we all rot.”

Eloquently put: good enough is now excellence.

Cynthia Murrell, June 18, 2025

Control = Power and Money: Anything Else Is an Annoyance

June 17, 2025

I read “Self-Hosting Your Own Media Considered Harmful.” I worked through about 300 comments on the YCombinator Hacker News page. In the write up, Jeff Geerling, a YouTube content creator, describes finding himself in the deadfall of a “strike” or “takedown” or whatever unilateral action by Google is called. The essay says:

Apparently self-hosted open source media library management is harmful. Who knew open source software could be so subversive?

Those YCombinator comments make clear that some people understand the Google game. Other comments illustrate the cloud of unknowing that distorts one’s perception of the nature of the Google magic show which has been running longer than the Sundar & Prabhakar Comedy Act.

YouTube, unlike Google AI, is no joke to many people who believe that they can build a life by creating videos without pay and posting them to a service that is what might be called a new version of the “old Hollywood” studio system.

Let’s think about an answer to the subversive question: What content does Google consider harmful? (Here’s the answer: content that undermines Google’s power, control, or money flow. But you knew that, right?)

Let’s expand, shall we?

First, Google makes rules, usually with little more than a group of wizards of assorted ages talking online, at Foosball, or (sometimes) in a room with a table, chairs, a whiteboard, and other accoutrements of what business life was like in the 1970s. Management friction is largely absent; when leadership input is required, leadership avoids getting into the weeds. “Push down” is seen as much better than an old-fashioned, hierarchical “dumb” approach. Therefore, decisions are organic and usually arbitrary until something “big” happens, like Microsoft’s 2023 announcement of its deal with OpenAI. Then leadership does the deciding. Code Red, or whatever it was called, illustrates the knee-jerk approach to issues that go critical. Phase change.

Second, the connections between common sense, professional behavior (yes, I am including suicide attempts induced by corporate dalliance and telling customers “they have created a problem”), and consistency are irrelevant. Actions are typically local and context free. Hence the mysterious and often disastrous notifications of a “violation.” I love it when companies judged to be operating in an illegal manner dole out notices of an “offense.” Keep the idea of “power” in mind, please.

Third, the lack of consistent, informed mechanisms for finding out which “rule” an individual allegedly violated is the preferred response to grousing. If an action, intentional or unintentional, could, might, did, would, or will trip some indicator of revenue loss, then the perpetrator is guilty. Some are banned. Others, like a former CIA professional, are just told, “Take that video down.”

How does the cited essay handle the topic? Mr. Geerling says:

I was never able to sustain my open source work based on patronage, and content production is the same—just more expensive to maintain to any standard (each video takes between 10-300 hours to produce, and I have a family to feed, and US health insurance companies to fund). YouTube was, and still is, a creative anomaly. I’m hugely thankful to my Patreon, GitHub, and Floatplane supporters—and I hope to have direct funding fully able to support my work someday. But until that time, YouTube’s AdSense revenue and vast reach is a kind of ‘golden handcuff.’ The handcuff has been a bit tarnished of late, however, with Google recently adding AI summaries to videos—which seems to indicate maybe Gemini is slurping up my content and using it in their AI models?

This is an important series of statements. First, YouTube relies on content creators who post their work on YouTube for the same reason people use Telegram or BlueSky: These are free publicity channels that might yield revenue or a paying gig. Content creators trade off control and yield power to these “comms conduits” for the belief that something will come out of the effort. These channels are designed to produce revenue for their owners, not the content creators. The “hope” of a payoff means the content will continue to flow. No grousing, lawyer launch, or blog post is going to change the mechanism that is now entrenched.

Second, open source is now a problematic issue. For the Google, the open source DeepSeek means it must market its AI prowess more aggressively because it is threatened. For the Google, content that could alienate an advertiser and a revenue stream is, by definition, bad content. That approach will become more widely used and more evident as Google’s search-based advertising erodes under rather poor “smart” systems that just deliver answers. Furthermore, figuring out how to pay for smart software is going to lead to increasingly Draconian measures from Google-type outfits to sustain and grow revenue. Money comes from the power to deliver information that will lure or force advertisers to buy access. End of story.

Third, Mr. Geerling politely raises the question about Google’s use of YouTube content to make its world-class smart software smarter. The answer to the question, based on what I have learned from my sources, is, “Yes.” Is this a surprise? Not to me. Maybe a content creator thinks that YouTube will set out rules, guidelines, and explanations of how it uses its digital vacuum cleaner to decrease the probability that its AI system will spout stupidity like “Kids, just glue cheese on pizza”? That will not happen because the Google-type of organization does not see additional friction as desirable. Google wants money. It has power.

What’s the payoff for Google? Control. If you want to play, you have to pay. Advertisers provide cash based on a rigged slot machine model. Users provide “data exhaust” to feed into the advertising engine. YouTube creators provide free content to produce clicks, clusters of intent, and digital magnets designed to stimulate interest in that which Google provides.

Mr. Geerling’s essay is pretty good. Using good judgment, he does not work through the blood-drawing brambles of what Google does. That means he operates in a professional manner.

Bad news, Mr. Geerling: that won’t work. The Google has been given control of information flows, and that translates to money and power.

Salute the flag, adapt, and just post content that sells ads. Open source is a sub-genre of offensive content. Adapt or be deprived of Googley benefits.

Stephen E Arnold, June 17, 2025

Baby Naming: The AI Use Case

June 17, 2025

This is one way to use smart software, we suppose. Writer Amanda Smith at CNet reveals, “I Used AI to Brainstorm Baby Names. I Liked 1 Out of Every 10 Suggestions It Had.” While we are happy Smith and her wife are closing in on the perfect name, we are intrigued by something else this process reveals: the information ChatGPT gathers about users and the conclusions it draws. Smith writes:

“It’s best to log in to the chatbot, rather than just using the main search, so that ChatGPT can connect to its past history about you (ChatGPT shared some revealing insights about me when I asked it recently). This way, it’s not starting from scratch, and knows what kind of things you like.”

Not creepy at all. According to Smith’s screenshot, the chatbot prefaced its list with this observation:

“Based on what I know about you – a thoughtful, well-traveled, creative soul with a love for stories, global perspective, poetic yet grounded sensibilities, and a desire to raise a child who feels deeply wanted and connected—here are 10 boy baby names you might like.”

How insightful. When Smith asked where it got the idea to suggest one specific name, ChatGPT elaborated:

“You didn’t mention Cassian explicitly before – but the name fits your overall sensibility so well that I pulled it from a blend of cues you’ve given me over time. Here’s the thinking behind it:

  • Your tone and taste lean literary, poetic, and soulful – you love Franzen, Yanagihara, and coming-of-age stories with depth.
  • Your personal values (creative, worldly, introspective, emotionally intelligent) along with names that feel storied and thoughtful.
  • Names like Atlas and Finn [names Smith had told the bot she liked] signal you like modern, adventurous, slightly offbeat names with mythology, travel, or narrative resonance.
  • Your interest in cultural storytelling and spirituality made me think of names that carry historical or philosophical layers – Cassian being both the name of an early Christian monk and a name with a lyrical, almost cinematic feel.

Basically, Cassian hits that sweet spot between intellectual and emotive, rare but not strange – a name I imagined someone like you might already love.”

We are not sure whether Smith is an “Andor” fan, but we suspect more than a few of her son’s peers will be dubbed Cassian. Just an observation.

The above is quite the profile ChatGPT has built based on one user’s history of interactions. Granted, the bot’s list was not perfect. Some of the names were just too “out there,” Smith reports. The AI also seemed to think she’d prefer names from literature, which is not quite right. But those are minor errors, ones the bot may have already corrected. While Smith and others seem comfortable with this level of familiarity, some might find it alarming. One can imagine a few uses for such a dossier that are less benign than baby name suggestions.

Cynthia Murrell, June 17, 2025

Proton Move: What about the TON Foundation?

June 17, 2025

Surveillance laws are straight out of dystopian novels, and they’ve become a reality. Proton Mail is a popular alternative to Gmail, and it is not happy about a controversial spying bill, reports TechRadar: “‘We Would Be Less Confidential Than Google’ – Proton Threatens To Quit Switzerland Over New Surveillance Law.”

Switzerland’s new surveillance law would require all social networks, VPNs, and messaging apps to identify users and retain their data. Currently, only mobile networks and ISPs are required to do this. Proton provides users with VPN and encrypted email services. The company is not happy about this potential new law and has threatened to leave Switzerland.

Proton’s CEO said:

“In an interview with RTS (Radio Télévision Suisse) on May 13, 2025, Proton CEO Andy Yen slammed the proposed amendment as a ‘major violation of the right to privacy’ that will also harm the country’s reputation and its ability to compete on an international level. ‘This revision attempts to implement something that has been deemed illegal in the EU and the United States. The only country in Europe with a roughly equivalent law is Russia,’ said Yen…. ‘I think we would have no choice but to leave Switzerland,’ said Yen. ‘The law would become almost identical to the one in force today in Russia. It’s an untenable situation. We would be less confidential as a company in Switzerland than Google, based in the United States. So it’s impossible for our business model.’”

The new law would add three new types of information and two types of monitoring. Other tech companies and leaders are against the law.

Switzerland is the bastion of neutrality in Europe. In Zug, Switzerland, the TON Foundation (aka ONF and The Open Network Foundation) works to build support for Telegram’s blockchain, its Telegram-developed cryptocurrency, and its realigned management team. Will Swiss regulators take a more proactive approach to this interesting non-governmental organization?

Here’s a left-field idea: What if the Proton affair is a dry run for some Telegram-related action?

Whitney Grace, June 17, 2025

Professor Marcus, You Missed One Point about the Apple Reasoning Paper

June 16, 2025

An opinion essay written by a dinobaby who did not rely on smart software but for the so-so cartoon.

The intern-fueled Apple academic paper titled “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity” has caused a stir. An interesting analysis of the responses to this tour de force is “Seven Replies to the Viral Apple Reasoning Paper – and Why They Fall Short.” Professor Gary Marcus in his analysis identifies categories of reactions to the Apple document.

In my opinion, these are, and I paraphrase with abandon:

  1. Humans struggle with complex problems; software does too
  2. Smart software needs lots of computation, so deliver a good enough output that doesn’t cost too much
  3. The paper includes an intern’s work because recycling and cheap labor are useful to busy people
  4. Bigger models are better because that’s what people do in Texas
  5. Systems can solve some types of problems and fail at others
  6. Limited examples because the examples require real effort
  7. The paper tells a reader what is already known: Smart software can be problematic because it is probabilistic, not intelligent.

I look at the Apple paper from a different point of view.

For more than a year, the challenge for Apple has been to make smart software, with its current limitations, work reasonably well. Apple’s innovations in smart software have been the somewhat flawed Siri (sort of long in the tooth) and the formulation of a snappy slogan, “Apple Intelligence.”


This individual is holding a “cover your a**” document. Thanks, You.com. Good enough given your constraints, guard rails, and internal scripts.

The job of a commercial enterprise is to create something useful and reasonably clever to pull users to a product. Apple failed. Other companies have rolled out products making use of smart software as it currently is. One of the companies with a reasonably good product is OpenAI’s ChatGPT. Another is Perplexity.

Apple is not in this part of the smart software game. Apple has failed to use “as is” software in a way that adds some zing to the firm’s existing products. Apple has failed, just as it failed with the weird goggles, its push into streaming video, and the innovations for the “new” iPhone. Changing case colors and altering an interface to look sort of like Microsoft’s see-through approach are not game changers. Labeling software by the year of release does not make me want to upgrade.

What is missing from the analysis of the really important paper that says, “Hey, this smart software has big problems. The whole house of LLM cards is wobbling in the wind”?

The answer is, “The paper is a marketing play.” The paper is the best way to make clear that Apple has not rolled out AI because the current technology is terrible. Therefore, the message runs: we need more time to figure out how to do AI well with crappy tools and methods not invented at Apple.

I see the paper as pure marketing. The timing of the paper’s release is marketing. The weird colors of the charts are marketing. The hype about the paper itself is marketing.

Anyone who has used some of the smart software tools knows one thing: The systems make up stuff. Everyone wants the “next big thing.” I think some of the LLM capabilities can be quite useful. In the coming months and years, smart software will enable useful functions beyond giving students a painless way to cheat, consultants a quick way to appear smart in a very short time, and entrepreneurs a way to vibe code their way into a job.

Apple has had one job: Find a way to use the available technology to deliver something novel and useful to its customers. It has failed. The academic paper is a “cover your a**” memo more suitable for a scared 35 year old middle manager in an advertising agency. Keep in mind that I am no professor. I am a dinobaby. In my world, an “F” is an “F.” Apple’s viral paper is an excuse for not delivering something useful with Apple Intelligence. The company has delivered an illustration of why there is no Apple smart TV or Apple smart vehicle.

The paper is marketing, and it is just okay marketing.

Stephen E Arnold, June 16, 2025

Googley: A Dip Below Good Enough

June 16, 2025

A dinobaby without AI wrote this. Terrible, isn’t it? I did use smart software for the good enough cartoon. See, this dinobaby is adapting.

I was in Washington, DC, from June 9 to 11, 2025. My tracking of important news about the online advertising outfit was disrupted. I have been trying to catch up with new product mist, AI razzle dazzle, and faint signals of importance. The first little beep I noticed appeared in “Google’s Voluntary Buyouts Lead its Internal Restructuring Efforts.” “Ah, ha,” I thought. After decades of recruiting the smartest people in the world, the Google is dumping full time equivalents. Is this a move to become more efficient? Google has indicated that it is into “efficiency”; therefore, has the Google redefined the term? Had Google figured out that the change to tax regulations about research investments sparked a re-think? Is Google so much more advanced than other firms, its leadership can jettison staff who choose to bail with a gentle smile and an enthusiastic wave of leadership’s hand?


The home owner evidences a surge in blood pressure. The handyman explains that the new door has been installed in a “good enough” manner. If it works for service labor, it may work for Google-type outfits too. Thanks, Sam AI-Man. Your ChatGPT came through with a good enough cartoon. (Oh, don’t kill too many dolphins, snail darters, and lady bugs today, please.)

Then I read “Google Cloud Outage Brings Down a Lot of the Internet.” Enticed by the rock solid metrics for the concept of “a lot,” I noticed this statement:

Large swaths of the internet went down on Thursday (June 12, 2025), affecting a range of services, from global cloud platform Cloudflare to popular apps like Spotify. It appears that a Google Cloud outage is at the root of these other service disruptions.

What? Google, the failover champion par excellence, went down. Will the issue be blamed on a faulty upgrade? Will a single engineer, who will probably be given an opportunity to find his or her future elsewhere, be identified? Will Google be able to figure out what happened?

What are the little beeps my system continuously receives about the Google?

  1. Wikipedia gets fewer clicks than OpenAI’s ChatGPT? Where’s the Google AI in this? Answer: Reorganizing, buying out staff, and experiencing outages.
  2. Google rolls out more Gemini functions for Android devices. Where’s the stability and service availability for these innovations? Answer: I cannot look up the answer. Google is down.
  3. Where’s the revenue from online advertising as traditional Web search presents some thunderclouds? Answer: Well, that is a good question. Maybe revenues from Waymo, a deal with Databricks, or a bump in Pixel phone sales?

My view is that the little beeps may become self-amplifying. The magic of the online advertising model seems to be fading like the allure of Disneyland. When imagineering becomes imitation, more than marketing fairy dust may be required.

But what’s evident from the tiny beeps is that Google is now operating in “good enough” mode. Will it be enough to replace the Yahoo-GoTo-Overture pay-to-play approach to traffic?

Maybe Waymo is the dark horse when the vehicles are not combustible?

Stephen E Arnold, June 16, 2025

OpenA Ive: The Sam Altman Jive is Alive

June 16, 2025

Look out, Tim Apple, your pie may be burning. According to Fox News, “OpenAI’s $6.5B New Acquisition Signals Apple’s Biggest AI Crisis Yet.” The purchase in question? AI device startup io, founded by former Apple designer Jony Ive. “Cyber Guy” Kurt Knutsson tells us:

“This isn’t your typical business deal. It’s a collaboration between Sam Altman, who leads OpenAI, and the designer responsible for some of Apple’s most iconic products, including the iPhone and Apple Watch. Together, they want to create a new generation of AI-powered devices that could completely change how we use technology.”

Altman is trusting Ive and his team of (largely) Apple expats to propel OpenAI to the forefront of a “screenless” tech future. What, exactly, that will look like remains a mystery—a tantalizing one, if the pair has their way. Knutsson writes:

“While Altman and Ive are keeping most details secret, they have hinted at a family of AI devices that focus on seamless, intuitive interaction rather than screens. They want to create something that understands your context, adapts to your needs and helps you connect and create in new ways, all without requiring you to stare at a display. The device won’t be a phone or a pair of glasses but something entirely new that fits into your life as naturally as a MacBook or iPhone once did.”

Contrast this ambitious vision with Apple’s current position as a relative laggard in the AI field. Unlike Google, which tried to overtake Apple on the same AI racetrack, OpenAI plans to blaze a new path. Not only is the firm working on a new paradigm, it hopes to rapidly overtake the market when it gets there. We learn:

“OpenAI’s ambition is huge. In fact, they want to ship 100 million units faster than any company has ever done with a new product, which shows just how big their vision is.”

Will Apple be able to weather this innovation assault? Will tech devices soon ditch the display? Will OpenAI buy Google Chrome? Will soft AI carry the day? So many questions.

Cynthia Murrell, June 16, 2025

Brainyfone or Foneybrain?

June 16, 2025

If you spend too much time on your phone, raise your hand. We’re not snoops, so we haven’t activated your device’s camera to spy on you. We’ll just assume that you have and tell you what the BBC wrote: “How Mobile Phones Have Changed Our Brains.” We feel guilty about being on the phone so much, but it’s a very convenient tool.

Adults check their phones on average 344 times a day, about once every four minutes. YIKES! We use our phones to complete a task, and that leads to other activities like checking email, visiting social media, etc. Our neural pathways are being restructured to rely on phones. Here’s what that does:
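
The “once every four minutes” figure is easy to sanity check. Here is a quick back-of-the-envelope calculation (our own sketch, not from the article; it assumes checks are spread over a full 24-hour day, so counting waking hours only would make the interval even shorter):

```python
# Sanity check the cited figure: 344 phone checks per day.
# Assumption (ours): checks are spread over a full 24-hour day.
minutes_per_day = 24 * 60          # 1440 minutes
checks_per_day = 344               # figure cited in the article
interval = minutes_per_day / checks_per_day
print(f"One check every {interval:.1f} minutes")
```

That works out to roughly 4.2 minutes per check, so “once every four minutes” is a fair rounding.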

“As you might expect, with our societal dependence on devices increasing rapidly every year, the research struggles to keep up. What we do know is that the simple distraction of checking a phone or seeing a notification can have negative consequences. This isn’t very surprising; we know that, in general, multitasking impairs memory and performance. One of the most dangerous examples is phone use while driving. One study found that merely speaking on the phone, not texting, was enough to make drivers slower to react on the road. It’s true for everyday tasks that are less high-stakes, too. Simply hearing a notification "ding" made participants of another study perform far worse on a task – almost as badly as participants who were speaking or texting on the phone during the task.”

Phones are not entirely to blame for brain drain. The article reports on one study supporting the theory that phones atrophy memory, and on another suggesting phones improved memory when participants were allowed to take notes with their devices.

The article makes a thought-provoking assertion:

“Individuals who think that our brains have "limited" resources (such as that resisting one temptation makes it harder to resist the next) are indeed more likely to exhibit this phenomenon in testing. But for those who think that the more we resist temptation, the more we’re strengthening the capacity to keep resisting temptation – that our brains, in other words, have unlimited resources – exerting self-control or mental fatigue on one task doesn’t negatively affect their performance on the next one.

More fascinatingly still, whether we have a limited or non-limited view of the brain may be largely cultural – and that Western countries like the US may be more likely to think the mind is limited compared to other cultures, such as India.”

We’re not as limited as we think we are, and our brains adapt to mobile devices. However, it’s still healthy to get off your phone.

Whitney Grace, June 16, 2025

Up for a Downer: The Limits of Growth… Baaaackkkk with a Vengeance

June 13, 2025

Dino 5 18 25_thumbJust a dinobaby and no AI: How horrible an approach?

Where were you in 1972? Oh, not born yet. Oh, hanging out in the frat house or shopping with sorority pals? Maybe you were working at a big time consulting firm?

An outfit known as Potomac Associates slapped its name on a thought piece with some repetitive charts. The original work evolved from an outfit contributing big ideas. The Club of Rome lassoed  William W. Behrens, Dennis and Donella Meadows, and Jørgen Randers to pound data into the then-state-of-the-art World3 model allegedly developed by Jay Forrester at MIT. (Were there graduate students involved? Of course not.)

The result of the effort was evidence that growth becomes unsustainable and everything falls down. Business, government systems, universities, etc. etc. Personally, I am not sure why the idea that infinite growth cannot continue forever with finite resources was a big deal. The idea seems obvious to me. I was able to get my little hands on a copy of the document courtesy of Dominique Doré, the super great documentalist at the company which employed my jejune and naive self. Who was I to think, “This book’s conclusion is obvious, right?” Was I wrong. The concept of hockey sticks with handles stretching to the ends of the universe was a shocker to some.
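
The overshoot-and-collapse mode the book reported can be illustrated with a toy simulation. To be clear, this is a minimal sketch with invented parameters, not the actual World3 model (which tracks population, capital, agriculture, pollution, and resources as coupled feedback loops); it only shows the basic dynamic of exponential growth against a finite, nonrenewable resource:

```python
# Toy overshoot-and-collapse sketch (NOT World3; all parameters invented).
# Population grows exponentially while a finite resource can meet demand,
# then contracts once the resource is exhausted.

def simulate(steps=200, pop=1.0, resource=100.0,
             growth=0.05, use_per_capita=0.5):
    history = []
    for _ in range(steps):
        demand = pop * use_per_capita
        consumption = min(resource, demand)   # cannot consume what is gone
        resource -= consumption
        if consumption >= demand:
            pop *= 1 + growth                 # demand met: keep growing
        else:
            pop *= 0.9                        # scarcity: forced contraction
        history.append(pop)
    return history

traj = simulate()
print(f"peak={max(traj):.1f}  final={traj[-1]:.6f}")
```

Tweak `growth` or `use_per_capita` and the timing of the peak shifts, but the shape (a hockey stick followed by a fall) stays the same. That shape is the book’s whole point.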

The book’s big conclusion is the focus of “Limits to Growth Was Right about Collapse.” Why? I think the realization is a novel one to those who watched their shares in Amazon, Google, and Meta zoom to the sky. Growth is unlimited, some believed. The write up in “The Next Wave,” an online newsletter or information service, happily quotes an update to the original Club of Rome document:

This improved parameter set results in a World3 simulation that shows the same overshoot and collapse mode in the coming decade as the original business as usual scenario of the LtG standard run.

Bummer. In the kiddie story, Chicken Little had an acorn plop on her head. Chicken Little promptly proclaimed, in a peer reviewed academic paper with non-reproducible research and a YouTube video:

The sky is falling.

But keep in mind that the kiddie story is fiction. Humans are adept at survival. Maslow’s hierarchy of needs captures the spirit of the species. Will life as modern Chicken Littles perceive it end?

I don’t think so. Without getting too philosophical, I would point to Johann Gottlieb Fichte’s thesis, antithesis, synthesis as a reasonably good way to think about change (gradual and catastrophic). I am not into philosophy, so: when life gives you lemons, make lemonade. Then sell the business to a local food service company.

Collapse and its pal chaos create opportunities. The sky remains.

The cited write up says:

Economists get over-excited when anyone mentions ‘degrowth’, and fellow-travelers such as the Tony Blair Institute treat climate policy as if it is some kind of typical 1990s political discussion. The point is that we’re going to get degrowth whether we think it’s a good idea or not. The data here is, in effect, about the tipping point at the end of a 200-to-250-year exponential curve, at least in the richer parts of the world. The only question is whether we manage degrowth or just let it happen to us. This isn’t a neutral question. I know which one of these is worse.

See, de-growth creates opportunities. Chicken Little was wrong when the acorn beaned her. The collapse will be just another chance to monetize. Today is Friday the 13th. Watch out for acorns and recycled “insights.”

Stephen E Arnold, June 13, 2025
