An AI Outfit Promises Something Infinite: Er, Really? Infinite?
November 28, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
Venture Beat publishes some write ups that strike me as marketing collateral. I am not sure what to make of an article that asserts an “infinite” service. Let’s not skip over the idea of infinity. I think a couple of mathy types drove themselves crazy thinking about infinities of infinities working two ways: bigger numbers and then smaller numbers. Yep, those close to these mathy types checked them into a facility for those sufficiently unusual. Would the article “Anthropic’s Claude Opus 4.5 Is Here: Cheaper AI, Infinite Chats, and Coding Skills That Beat Humans” (sic) send a mathy type back to that facility?

Georg Cantor makes an appearance at the Happy Valley Rest Home. He tells the nurse delivering his scopolamine that he wants to talk about the central infinite-cardinality problem before he is placed in isolation. Thanks, Venice.ai. Close enough, although the setting does not look too much like a 19th century facility. Oh, well.
The first step on what will be a short journey I assure you. Let’s let Anthropic Claude 4.5 define “infinite”, shall we? Here’s the response:
Infinity represents unboundedness—a concept of endlessness without limit. In mathematics, it describes quantities exceeding any finite value or processes continuing indefinitely. Infinity isn’t a number but an abstract notion of limitlessness, fundamental to calculus, set theory, and cosmology.
Notice that Claude 4.5 did not mention marketing and PR speak. This is important for “Anthropic’s Claude Opus 4.5 Is Here: Cheaper AI, Infinite Chats, and Coding Skills That Beat Humans.” Note the two-word phrase “infinite chats.” Nope, I don’t think so. Not even Amazon’s money pipe can serve “infinite chats.” Come on, Claude. Get real. Amazon cannot even keep its system online if my memory serves me. But infinite? Hmmm.
The write up says that Venture Beat engaged in an exclusive interview with Alex Albert, “Anthropic’s head of developer relations.” (I think this means a marketing job. What do you think?) Here’s a statement that caught my attention:
The new model, Claude Opus 4.5, scored higher on Anthropic’s most challenging internal engineering assessment than any human job candidate in the company’s history, according to materials reviewed by VentureBeat. The result underscores both the rapidly advancing capabilities of AI systems and growing questions about how the technology will reshape white-collar professions. The Amazon-backed company is pricing Claude Opus 4.5 at $5 per million input tokens and $25 per million output tokens — a dramatic reduction from the $15 and $75 rates for its predecessor, Claude Opus 4.1, released earlier this year. The move makes frontier AI capabilities accessible to a broader swath of developers and enterprises while putting pressure on competitors to match both performance and pricing.
Does this strike you as what I call a “better, faster, cheaper” approach? I would add that cheaper is the operative word. Yep, buy the market and raise prices later. Does this surprise me? No. From a company that promises to deliver something that costs money infinitely, cheaper does not surprise me one whit.
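The “cheaper” arithmetic is easy to check against the quoted rates. A minimal sketch; the per-million-token prices come from the passage above, while the sample workload (10 million input tokens, 2 million output tokens) is a hypothetical of mine:

```python
# Token prices quoted in the VentureBeat passage (USD per million tokens).
OPUS_4_1 = {"input": 15.0, "output": 75.0}
OPUS_4_5 = {"input": 5.0, "output": 25.0}

def job_cost(prices, input_millions, output_millions):
    """Cost of a job measured in millions of input/output tokens."""
    return prices["input"] * input_millions + prices["output"] * output_millions

# Hypothetical workload: 10M input tokens, 2M output tokens.
old = job_cost(OPUS_4_1, 10, 2)  # 15*10 + 75*2 = 300.0
new = job_cost(OPUS_4_5, 10, 2)  # 5*10 + 25*2 = 100.0
print(old, new, f"{1 - new / old:.0%} cheaper")  # 300.0 100.0 67% cheaper
```

Two-thirds off across the board, which is buy-the-market pricing by any name.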
Here’s another statement I found interesting:
Albert said employee testers consistently reported that the model demonstrates improved judgment and intuition across diverse tasks — a shift he described as the model developing a sense of what matters in real-world contexts. “The model just kind of gets it,” Albert said. “It just has developed this sort of intuition and judgment on a lot of real world things that feels qualitatively like a big jump up from past models.”
To test this statement, I fired up Claude 4.5 again and asked, “Is it possible for a commercial enterprise supported by modest customer sales and venture capital to deliver an “infinite service”; for example, infinite chats for users?”
Here’s what the “just kind of gets it” Claude told me:
No, it’s not sustainably possible. Here’s why:
- Real Costs Scale with Usage: Every chat consumes computational resources (servers, energy, bandwidth). Even efficient models incur marginal costs per interaction that accumulate with heavy users.
- Finite Capital: Venture funding and modest sales provide limited runway. Without usage caps, a small subset of power users could drain resources disproportionately, making the business model unsustainable.
- Economic Reality: “Unlimited” offerings typically include hidden constraints (throttling during peak times, deprioritization, or eventual tier upgrades). True infinity would require infinite capital and infrastructure.
- Tragedy of the Commons: Without limits, rational users maximize personal use, collectively overwhelming the system—forcing either service degradation or imposed restrictions.
Practical Outcome: Most companies offer generous but capped services (e.g., “50 free messages/day”) or tiered pricing. Marketing terms like “unlimited” often mean “more than typical users need,” not literally infinite. Would you like examples of how AI companies balance accessibility with sustainability?
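Claude’s first two bullet points reduce to simple runway arithmetic. Here is a toy model of that reasoning; every number in it is hypothetical, chosen only to show why per-chat marginal cost plus finite capital rules out a literally unlimited service:

```python
def runway_months(capital, monthly_revenue, users, chats_per_user, cost_per_chat):
    """Months until capital runs out when per-chat costs exceed revenue."""
    monthly_cost = users * chats_per_user * cost_per_chat
    burn = monthly_cost - monthly_revenue
    if burn <= 0:
        return float("inf")  # only here is "infinite" service sustainable
    return capital / burn

# Hypothetical: $100M capital, $5M/month revenue, 1M users,
# 300 chats per user per month at $0.02 marginal cost per chat.
print(runway_months(100e6, 5e6, 1e6, 300, 0.02))  # 100.0 months, then lights out
```

Unless marginal cost per chat is zero or revenue covers the burn, “infinite” has an expiration date.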
I think that Mr. Albert might want to check his assertions with his own product; it says “infinite” is a tough promise. But let’s not stop with the infinite. What about this statement in the write up?
Opus 4.5 outscores all human candidates on company’s toughest engineering test.
For me, the sticking point is the word “all.” Categorical affirmatives are problematic. Black swans aside, “all” is a cousin of infinite. Its use implies a rather sweeping approach to those who take a tough engineering test. What’s the sample? One, 100, 10,000? Yeah. Oh, well.
What’s the proof? Here’s the chart that did not knock me over with shock. I know it is difficult to read. Navigate to the cited article on a boat anchor computer with a big screen, and you can sort of read the numbers. Look for this:

Claude 4.5 is better than also-rans like Google and OpenAI. Well, why not? Anthropic has whipped infinite and tamed all. Dealing with weak wristed systems like Googzilla and ChatGPT is trivial.
Mr. Albert offered a statement which Venture Beat uses to complete this remarkable feat of content marketing, hyperbole, and logical impossibilities:
When asked about the engineering exam results and what they signal about AI’s trajectory, Albert was direct: “I think it’s a really important signal to pay attention to.”
Yep, pay attention. I did.
Stephen E Arnold, November 28, 2025
IBM on the Path to Dyson Spheres But Quantum Networks Come First
November 28, 2025
This essay is the work of a dumb dinobaby. No smart software required.
How does one of the former innovators in Fear, Uncertainty, and Doubt respond to the rare atmosphere of smart software? The answer, in my opinion, appears in “IBM, Cisco Outline Plans for Networks of Quantum Computers by Early 2030s.” My prediction was wrong about IBM. I thought that with a platform like Watson, IBM would aim directly at Freeman Dyson’s sphere. The idea is to build a sphere in space to gather energy and power advanced computing systems. Well, one can’t get to the Dyson sphere without a network of quantum computers. And the sooner the better.

A big thinker conceptualizes inventions anticipated by science fiction writers. The expert believes that if he thinks it, that “it” will become real. Sure, but usually more than a couple of years are needed for really big projects like affordable quantum computers linked via quantum networks. Thanks, Venice.ai. Good enough.
The write up from the “trust” outfit Thomson Reuters says:
IBM and Cisco Systems … said they plan to link quantum computers over long distances, with the goal of demonstrating the concept is workable by the end of 2030. The move could pave the way for a quantum internet, though executives at the two companies cautioned that the networks would require technologies that do not currently exist and will have to be developed with the help of universities and federal laboratories.
Imagine: artificial general intelligence is likely to arrive about the same time. IBM has Watson. Does this mean that Watson can run on quantum computers? Those can solve the engineering challenges of the Dyson sphere. IBM can then solve the world’s energy requirements. This sequence seems like a reasonable tactical plan.
The write up points out that building a quantum network poses a few engineering problems. I noted this statement in the news report:
The challenge begins with a problem: Quantum computers like IBM’s sit in massive cryogenic tanks that get so cold that atoms barely move. To get information out of them, IBM has to figure out how to transform information in stationary “qubits” – the fundamental unit of information in a quantum computer – into what Jay Gambetta, director of IBM Research and an IBM fellow, told Reuters are “flying” qubits that travel as microwaves. But those flying microwave qubits will have to be turned into optical signals that can travel between Cisco switches on fiber-optic cables. The technology for that transformation – called a microwave-optical transducer – will have to be developed with the help of groups like the Superconducting Quantum Materials and Systems Center, led by the Fermi National Accelerator Laboratory near Chicago, among others.
Trivial compared to the Dyson sphere confection. It is now sundown for year 2025. IBM and its partner target being operational in 2029. That works out to about 36 months. Call it 48 just to add a margin of error.
Several observations:
- IBM and its partner Cisco Systems are staking out their claims to the future of computing
- Compared to the Dyson sphere idea, quantum computers networked together provide the plumbing for an Internet that makes Jack Dorsey’s Web 5 vision seem like a Paleolithic sketch on the wall of the Lascaux Caves.
- Watson and IBM’s other advanced AI technologies probably assisted the IBM marketing professionals with publicizing Big Blue’s latest idea for moving beyond the fog of smart software.
Net net: The spirit of avid science fiction devotees is effervescing. Does the idea of a network of quantum computers tickle your nose or your fancy? I have marked my calendar.
Stephen E Arnold, November 28, 2025
Gizmodo Suggests Sam AI-Man Destroys the Mind of Youth
November 28, 2025
This essay is the work of a dumb dinobaby. No smart software required.
If I were an ad sales person at Gizmodo, I would not be happy. I am all for a wall between editorial and advertising. I bet you did not know that I learned that basic rule when I worked at Ziff in Manhattan. However, writing articles that accuse a potential advertiser of destroying the minds of youth is unlikely to be forgotten. I am not saying the write up is not accurate, but I know that it is possible to write articles and stories that do not make a potential advertiser go nutso.
Gizmodo published “OpenAI Introduces ‘ChatGPT for Teachers’ to Further Destroy the Minds of Our Youth” to explain a new LexisNexis-type of play to get people used to their online product. OpenAI thinks the LexisNexis- or close variant model is a good way to get paying customers. Students in law school become familiar with LexisNexis. When and if they get a job, those students will use LexisNexis. The approach made sense when Don Wilson and his fellow travelers introduced the program. OpenAI is jumping on a marketing wagon pulled by a horse that knows how to get from A to B.

Have those laptops, tablets, and mobile phones made retail workers adept at making change? Thanks, Venice.ai. Good enough.
The Gizmodo article says:
ChatGPT for Teachers is designed to help educators prepare materials for their classes, and it will support Family Educational Rights and Privacy Act (FERPA) requirements so that teachers and school staff can securely work with student data within the workspace. The company says the suite of tools for teachers will be available for free through June 2027, which is probably the point at which OpenAI will need to show that it can actually generate revenue and stick its hand out to demand payment from teachers who have become reliant on the suite of tools.
Okay, no big innovation here.
Gizmodo states:
There is already mounting evidence that relying on AI can erode critical thinking skills, which is something you’d like kids to be engaging in, at least during school hours. Other studies have shown that people “offload” the more difficult cognitive work and rely on AI as a shortcut when it’s available, ultimately harming their ability to do that work when they don’t have the tool to lean on. So what could go wrong giving those tools to both students and teachers? Seems like we’re going to find out.
Okay, but that headline looms over the Ivory soap conclusion to the article. In my opinion, I know exactly how this free AI will work. Students will continue to look for the easiest way to complete assigned tasks. If ChatGPT is available, students will find out if it works. Then students will use AI for everything possible so the students have more time for digging into linear algebra. (That’s a joke.) A few students will understand that other students will not do the assignments but will pay someone to do that work for them. That other person will be [a] a paramour, [b] a classmate who is a friend, [c] a classmate who responds to threats, or [d] ChatGPT-type services.
Test scores will continue to fall until a group of teachers create easier tests. Furthermore, like the A/V systems installed in 1962 for students learning a foreign language, the technology works only if the student concentrates, follows the lesson attentively, takes notes, and goes through the listen-and-repeat drills in the language lab. PCs, tablets, Chromebooks, mobile phones, or AI work the same way. When students do not have the discipline to pay attention and put in the effort required to learn, the technology cannot compensate. It can, however, replace certain jobs so companies and co-workers do not have to compensate for those who lack basic skills, the discipline required to do the work, and the social skills needed to fit into an organization.
The myth that technology can replace traditional educational techniques is more nutso than the sales professionals who have to overcome ideas like “destroy the minds of youth.”
Net net: Sam AI-Man has some challenge ahead with this free ChatGPT. Want evidence of the impact of technology on the minds of legal professionals? Just check out some of the YouTubing lawyers. There you go.
Stephen E Arnold, November 28, 2025
Can the Chrome Drone Deorbit Comet?
November 28, 2025
Perplexity developed Comet, an intuitive AI-powered Internet browser. Analytics Insight has a rundown on Comet in the article “Perplexity CEO Aravind Srinivas Claims Comet AI Browser Could ‘Kill’ Android System.” Perplexity designed Comet for more complex tasks such as booking flights, shopping, and answering and then executing simple prompts. The new browser is now being released for Android OS.
Until recently Comet was an exclusive, invite-only browser on the desktop. It is now available for download, and Perplexity is taking the same approach for the Android release. Perplexity hopes to overtake Android as the top mobile OS, or so CEO Aravind Srinivas plans.
Another question is if Comet could overtake Chrome as the favored AI browser:
“The launch of Comet AI browser coincides with the onset of a new conflict between AI browsers. Not long ago, OpenAI introduced ChatGPT Atlas, while Microsoft Edge and Google Chrome are upgrading their platforms with top-of-the-line AI tools. Additionally, Perplexity previously received attention for a $34.5 billion proposal to acquire Google Chrome, a bold move indicating its aspirations.
Comet, like many contemporary browsers, is built on the open-source Chromium framework provided by Google, which is also the backbone for Chrome, Edge, and other major browsers. With Comet’s mobile rollout and Srinivas’s bold claim, Perplexity is obviously betting entirely on an AI-first future, one that will see a convergence of the browser and the operating system.”
Comet is built on Chromium. Chrome is too. Comet is a decent web browser, but it doesn’t have the power of Alphabet behind it. Chrome will dominate the AI-browser race because it has money to launch a swarm of digital drones at this frail craft.
Whitney Grace, November 28, 2025
Turkey Time: IT Projects Fail Like Pies and Cakes from Crazed Aunties
November 27, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
Today is Thanksgiving, and it is appropriate to consider the turkey approach to technology. The source of this idea comes from the IEEE.org online publication. The article explaining what I call “turkeyism” is “How IT Managers Fail Software Projects.” Because the write up is almost 4,000 words and far too long for reading during an American football game’s halftime break, I shall focus on a handful of points in the write up. I encourage you to read the entire article and, of course, sign up and subscribe. If you don’t, the begging for dollars pop up may motivate you to click away and lose the full wisdom of the IEEE write up. I want to point out that many IT managers are trained as electrical engineers or computer scientists who have had to endure the veritable wonderland of imaginary numbers for a semester or two. But increasingly IT managers can be MBAs or in some frisky Silicon Valley type companies, recent high school graduates with a native ability to solve complex problems and manage those older than they. Hey, that works, right?

Auntie knows how to manage the baking process. She practices excellent hygiene, but with age comes forgetfulness. Those cookies look yummy. Thanks, Venice.ai. Not mom, but good enough with Auntie pawing the bird.
Answer: Actually no.
The cited IEEE article states:
Global IT spending has more than tripled in constant 2025 dollars since 2005, from US $1.7 trillion to $5.6 trillion, and continues to rise. Despite additional spending, software success rates have not markedly improved in the past two decades. The result is that the business and societal costs of failure continue to grow as software proliferates, permeating and interconnecting every aspect of our lives.
Yep, and lots of those managers are members of IEEE or similar organizations. How about that jump from solving mathy problems to making software that works? It doesn’t seem to be working. Is it the universities, the on-the-job training, or the failure of continuing education? Not surprisingly, the write up doesn’t offer a solution.
What we have is a global, expensive problem. With more of everyday life dependent on “technology,” a failure can have some interesting consequences. Not only is it tough to get that new sweater delivered by Amazon, but downtime can kill a kid in a hospital when a system keels over. Dead is dead, isn’t it?
The write up says:
A report from the Consortium for Information & Software Quality (CISQ) estimated the annual cost of operational software failures in the United States in 2022 alone was $1.81 trillion, with another $260 billion spent on software-development failures. That is larger than the total U.S. defense budget for that year, $778 billion.
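The quoted CISQ figures are worth a quick sanity check. A small verification using only the numbers from the passage above:

```python
# Figures quoted from the CISQ report (2022, USD).
operational_failures = 1.81e12   # $1.81 trillion in operational failures
development_failures = 0.26e12   # $260 billion in development failures
defense_budget = 0.778e12        # $778 billion U.S. defense budget

total = operational_failures + development_failures
print(f"${total / 1e12:.2f}T")           # $2.07T in total failure costs
print(total / defense_budget)            # roughly 2.66x the defense budget
assert operational_failures > defense_budget  # the article's comparison holds
```

Operational failures alone top the defense budget; add development failures and the multiple grows further.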
Chatter about the “cost” of AI tosses around even bigger numbers. Perhaps some of the AI pundits should consider the impact of AI failure in the context of IT failure. Frankly I am not confident about AI because of IT failure. The money is one thing, but given the evidence about the prevalence of failure, I am not ready to sing the JP Morgan tune about the sunny side of the street.
The write up adds:
Next to electrical infrastructure, with which IT is increasingly merging into a mutually codependent relationship, the failure of our computing systems is an existential threat to modern society. Frustratingly, the IT community stubbornly fails to learn from prior failures.
And what role does a professional organization play in this little but expensive drama? Are the arrows of accountability pointing at the social context in which the managers work? What about the education of these managers? What about the drive to efficiency? You know. Design the simplest possible solution. Yeah, these contextual components have created a high probability of failure. Will Auntie’s dessert give everyone food poisoning? Probably. Auntie thinks she has washed her hands and baked with sanitation in mind. Yep, great assumption because Auntie is old. Auntie Compute is going on 85 now. Have another cookie.
But here’s the killer statement in the write up:
Not much has worked with any consistency over the past 20 years.
This is like a line in a Jack Benny Show skit.
Several observations:
- The article identifies a global, systemic problem
- The existing mechanisms for training people to manage don’t work
- There is no solution.
Have a great Thanksgiving. Have another one of Auntie’s cookies. The two people who got food poisoning last year just had upset tummies. It will just get better. At least that’s what mom says.
Stephen E Arnold, November 27, 2025
Coca-Cola and AI: Things May Not Be Going Better
November 27, 2025
Coca-Cola didn’t learn its lesson last year with a less than good AI-generated Christmas commercial. It repeated the mistake again in 2025. Although the technology has improved, the ad still bears all the fake-ness of early CGI (when examined in hindsight, of course). Coca-Cola, according to Creative Bloq, did want to redeem itself, so the soft drink company controlled every detail in the ad: “Devastating Graphic Shows Just How Bad The Coca-Cola Christmas Ad Really Is.”
Here’s how one expert viewed it:
“In a post on LinkedIn, the AI consultant Dino Burbidge points out the glaring lack of consistency and continuity in the design of the trucks in the new AI Holidays are Coming ad, which was produced by AI studio Secret Level. At least one of the AI-generated vehicles appears to completely defy physics, putting half of the truck’s payload beyond the last wheel.
Dino suggests that the problem with the ad is not AI per se, but the fact that no human appears to have checked what the AI models generated… or that more worryingly they checked but didn’t care, which is extraordinary when the truck is the main character in the ad.”
It’s been suggested that Coca-Cola used AI to engage in rage bait instead of building a genuinely decent Christmas ad. There was a behind-the-scenes video of how the ad was made, and even that used an AI voiceover.
I liked the different horse drawn wagons. Very consistent.
Whitney Grace, November 27, 2025
Microsoft: Desperate to Be a Leader in the Agentic OS Push Decides to Shove, Not Lure Supporters
November 26, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I had a friend in high school who liked a girl, Mary B. He smiled at her. He complimented her plaid skirt. He gave her a birthday gift during lunch in the school cafeteria. My reaction to this display was, “Yo, Tommy, you are trying too hard.” But I said nothing and watched as Mary B. focused her attention on a football player with a C average but comic book Superman looks. Tommy became known as a person who tried too hard to reach a goal without realizing no girl wanted to be the focal point of a birthday gift in the school cafeteria with hundreds of students watching. Fail, Tommy.

Thanks, Venice.ai. Good enough, the gold standard today I believe.
I thought about this try-too-hard approach when I read “Windows President Addresses Current State of Windows 11 after AI Backlash.” The source is the on-again, off-again podcasting outfit called Windows Central. Here’s a snippet from the write up which recycles content from X.com. The source of the statement is a person named Pavan Davuluri, who is the Microsoft Windows lead:
The team (and I) take in a ton of feedback. We balance what we see in our product feedback systems with what we hear directly. They don’t always match, but both are important. I’ve read through the comments and see focus on things like reliability, performance, ease of use and more… we care deeply about developers. We know we have work to do on the experience, both on the everyday usability, from inconsistent dialogs to power user experiences. When we meet as a team, we discuss these pain points and others in detail, because we want developers to choose Windows.
Windows Central pointed out that Lead Davuluri demonstrated “leadership” with a bold move. He disabled comments to his X.com post about caring deeply about its customers. I like it when Lead Davuluri takes decisive leadership actions that prevent people from providing inputs. Is that why Microsoft ignored focus groups responding to Wi-Fi hardware that did not work and “ribbon” icons instead of words in Office application interfaces? I think I have possibly identified a trend at Microsoft: The aircraft carrier is steaming forward, and it is too bad about the dolphins, fishing boats, and scuba divers. I mean who cares about these unseen flotsam and jetsam.
Remarkably Windows Central’s write up includes another hint of negativism about Microsoft Windows:
What hasn’t helped in recent years is “Continuous Innovation,” Microsoft’s update delivery strategy that’s designed to keep the OS fresh with new features and changes on a consistent, monthly basis. On paper, it sounds like a good idea, but in practice, updating Windows monthly with new features often causes more headaches than joy for a lot of people. I think most users would prefer one big update at a predictable, certain time of the year, just like how Apple and Google do it.
Several observations if I may offer them as an aged dinobaby:
- Google has said it wants to become the agentic operating system. That means Google wants to kill off Microsoft, its applications, and its dreams.
- Microsoft knows that it faces competition from a person whom Satya Nadella knows, understands, and absolutely must defeat because his family would make fun of him if he failed. Yep, a man-to-man dust up with annoying users trying to stop the march of technological innovation and revenue. Lead Davuluri has his marching orders; hence, the pablum-tinged non-speak cited in the Windows Central write up.
- User needs and government regulation have zero — that’s right, none, nil, zip — chance of altering what these BAIT (big AI tech) outfits will do to win. Buckle up, Tommy. You are going to be rejected again.
Net net: That phrase agentic OS has a ring to it, doesn’t it?
Stephen E Arnold, November 26, 2025
Has Big Tech Taught the EU to Be Flexible?
November 26, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Here’s a question that arose in a lunch meeting today (November 19, 2025): Has Big Tech brought the European Union to heel? What’s your answer?
The “trust” outfit Thomson Reuters published “EU Eases AI, Privacy Rules As Critics Warn of Caving to Big Tech.”

European Union regulators demonstrate their willingness to be flexible. These exercises are performed in the privacy of a conference room in Brussels. The class is taught by those big tech leaders who have demonstrated their ability to chart a course and keep it. Thanks, Venice.ai. How about your interface? Yep, good enough I think.
The write up reported:
The EU Commission’s “Digital Omnibus”, which faces debate and votes from European countries, proposed to delay stricter rules on use of AI in “high-risk” areas until late 2027, ease rules around cookies and enable more use of data.
Ah, back pedaling seems to be the new Zen moment for the European Union.
The “trust” outfit explains why, sort of:
Europe is scrabbling to balance tough rules with not losing more ground in the global tech race, where companies in the United States and Asia are streaking ahead in artificial intelligence and chips.
Several factors are causing this rethink. I am not going to walk the well-worn path called “Privacy Lane.” The reason for the softening is not a warm summer day. The EU is concerned about:
- Losing traction in the slippery world of smart software
- Failing to cultivate AI start ups with more than a snowball’s chance of surviving in the Dante’s inferno of the competitive market
- Keeping AI whiz kids from bailing out of European mathematics, computer science, and physics research centers for some work in Sillycon Valley or delightful Z Valley (Zhongguancun, China, in case you did not know).
From my vantage point in rural Kentucky, it certainly appears that the European Union is fearful of missing out on either the boom or the bust associated with smart software.
Several observations are warranted:
- BAITers are likely to win. (BAIT means Big AI Tech in my lingo.) Why? Money and FOMO
- Other governments are likely to adapt to the needs of the BAITers. Why? Money and FOMO
- The BAIT outfits will be ruthless and interpret the EU’s new flexibility as weakness.
Net net: Worth watching. What do you think? Money? Fear? A combo?
Stephen E Arnold, November 26, 2025
What Can a Monopoly Type Outfit Do? Move Fast and Break Things Not Yet Broken
November 26, 2025
This essay is the work of a dumb dinobaby. No smart software required.
CNBC published “Google Must Double AI Compute Every 6 Months to Meet Demand, AI Infrastructure Boss Tells Employees.”
How does the math work out? Big numbers result, as well as big power demands, pressure on suppliers, and an incentive to enter hyper-hype mode for marketing, I think.

Thanks, Venice.ai. Good enough.
The write up states:
Google’s AI infrastructure boss [maybe a fellow named Amin Vahdat, the leader responsible for Machine Learning, Systems and Cloud AI?] told employees that the company has to double its compute capacity every six months in order to meet demand for artificial intelligence services.
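Doubling every six months is four-fold growth per year, so the numbers run away quickly. A sketch of the compounding; the only input taken from the article is the six-month doubling period:

```python
def capacity_multiple(months, doubling_period=6):
    """Capacity growth factor if capacity doubles every `doubling_period` months."""
    return 2 ** (months / doubling_period)

# 1yr -> 4x, 2yr -> 16x, 3yr -> 64x, 5yr -> 1024x the starting capacity
for years in (1, 2, 3, 5):
    print(years, capacity_multiple(12 * years))
```

Five years of that cadence means a thousand times today’s compute, power, and supplier commitments.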
Whose demand exactly? Commercial enterprises, Google’s other leadership, or people looking for a restaurant in an unfamiliar town?
The write up notes:
Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.
Faced with this robust demand, what differentiates the Google from other monopoly-type companies? CNBC delivers a bang up answer to my question:
Google’s “job is of course to build this infrastructure but it’s not to outspend the competition, necessarily,” Vahdat said. “We’re going to spend a lot,” he said, adding that the real goal is to provide infrastructure that is far “more reliable, more performant and more scalable than what’s available anywhere else.” In addition to infrastructure buildouts, Vahdat said Google bolsters capacity with more efficient models and through its custom silicon. Last week, Google announced the public launch of its seventh generation Tensor Processing Unit called Ironwood, which the company says is nearly 30 times more power efficient than its first Cloud TPU from 2018. Vahdat said the company has a big advantage with DeepMind, which has research on what AI models can look like in future years.
I see: spend the same as a competitor but, because Google is Googley, deliver more reliable, faster, and more scalable AI than the non-Googley competition. Google is focused on efficiency. To me, Google bets that its engineering and programming expertise will give it an unbeatable advantage. The VP of Machine Learning, Systems and Cloud AI does not mention the fact that Google has its magical advertising system and about 85 percent of the global Web search market via its assorted search-centric services. Plus one must not overlook the fact that the Google is vertically integrated: chips, data centers, data, smart people, money, and smart software.
The write up points out that Google knows there are risks with its strategy. But FOMO is more important than worrying about costs and technology. But what about users? Sure, okay, eyeballs, but I think Google means humanoids who have time to use Google whilst riding in Waymos and hanging out waiting for a job offer to arrive on an Android phone. Google doesn’t need to worry. Plus it can just bump up its investments until competitors are left dying in the desert known as Death Vall-AI.
After being beaten to the draw in the PR battle with Microsoft, the Google thinks it can win the AI jackpot. But what if it fails? No matter. The AI folks at the Google know that the automated advertising system that collects money at numerous touch points is for now churning away 24×7. Googzilla may just win because it is sitting on the cash machine of cash machines. Even counterfeiters in Peru and Vietnam cannot match Google’s money spinning capability.
Is it game over? Will regulators spring into action? Will Google win the race to software smarter than humans? Sure. Even if part of the push to own the next big thing is puffery, the Google is definitely confident that it will prevail, just as Superman and truth, justice, and the American way have. The only hitch in the git-along may be securing enough electrical service to keep the lights on and the power flowing. Lots of power.
Stephen E Arnold, November 26, 2025
Telegram, Did You Know about the Kiddie Pix Pyramid Scheme?
November 25, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
The Independent, a newspaper in the UK, published “Leader of South Korea’s Biggest Telegram Sex Abuse Ring Gets Life Sentence.” The subtitle is a snappy one: “Seoul Court Says Kim Nok Wan Committed Crimes of Extreme Brutality.” Note: I will refer to this convicted person as Mr. Wan. The reason is that he will spend time in solitary confinement. In my experience, individuals involved in kiddie crimes are at the bottom of the totem pole among convicted people. If the prison director wants to keep him alive, he will be kept away from the general population. Even though most South Koreans are polite, it is highly likely that he will face a less than friendly greeting when he visits the TV room or exercise area. Therefore, my designation of Mr. Wan reflects the pallor his skin will evidence.
Now to the story:
The main idea is that Mr. Wan signed up for Telegram. He relied on Telegram’s Group and Channel function. He organized a social community dubbed the Vigilantes, a word unlikely to trigger kiddie pix filters. Then he “coerced victims, nearly 150 of them minors, into producing explicit material through blackmail” and then distributed the content in online chat rooms.

Telegram’s leader sets an example for others who want to break rules and be worshiped. Thanks, Venice.ai. Too bad you ignored my request for no facial hair. Good enough, the standard for excellence today I believe.
Mr. Wan’s innovation was to set up what the Independent called “a pyramid hierarchy.” Think of a Herbalife- or OneCoin-type operation. He incorporated an interesting twist. According to the Independent:
He also sent a video of a victim to their father through an accomplice and threatened to release it at their workplace.
Let’s shift from the clever Mr. Wan to Telegram and its public and private Groups and Channels. The French arrested Pavel Durov in August 2024. The French judiciary identified a dozen crimes he allegedly committed. He awaits trial for these alleged crimes. Since that arrest, Telegram has, based on our monitoring of Telegram, blocked more aggressively a number of users and Groups for violating Telegram’s rules and regulations such as they are. However, Mr. Wan appears to have slipped through despite Telegram’s filtering methods.
Several observations:
- Will Mr. Durov implement content moderation procedures to block, prevent, and remove content like Mr. Wan’s?
- Will South Korea take a firm stance toward Telegram’s use in the country?
- Will Mr. Durov cave in to Iran’s demands so that Telegram is once again available in that country?
- Did Telegram know about Mr. Wan’s activities on the estimable Telegram platform?
Mr. Wan exploited Telegram. Perhaps other countries should take more forceful action against services that provide a greenhouse in which certain types of online activity flourish. Mr. Durov is a tech bro, and he has been pictured carrying a real (not metaphorical) goat to suggest that he is the greatest of all time.
That perception appears to be at odds with the risk his platform poses to children in my opinion.
Stephen E Arnold, November 25, 2025