Has Big Tech Taught the EU to Be Flexible?
November 26, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Here’s a question that arose in a lunch meeting today (November 19, 2025): Has Big Tech brought the European Union to heel? What’s your answer?
The “trust” outfit Thomson Reuters published “EU Eases AI, Privacy Rules As Critics Warn of Caving to Big Tech.”

European Union regulators demonstrate their willingness to be flexible. These exercises are performed in the privacy of a conference room in Brussels. The class is taught by those big tech leaders who have demonstrated their ability to chart a course and keep it. Thanks, Venice.ai. How about your interface? Yep, good enough I think.
The write up reported:
The EU Commission’s “Digital Omnibus”, which faces debate and votes from European countries, proposed to delay stricter rules on use of AI in “high-risk” areas until late 2027, ease rules around cookies and enable more use of data.
Ah, backpedaling seems to be the new Zen moment for the European Union.
The “trust” outfit explains why, sort of:
Europe is scrabbling to balance tough rules with not losing more ground in the global tech race, where companies in the United States and Asia are streaking ahead in artificial intelligence and chips.
Several factors are causing this rethink. I am not going to walk the well-worn path called “Privacy Lane.” The reason for the softening is not a warm summer day. The EU is concerned about:
- Losing traction in the slippery world of smart software
- Failing to cultivate AI start ups with more than a snowball’s chance of surviving in the Dante’s inferno of the competitive market
- Keeping AI whiz kids from bailing out of European mathematics, computer science, and physics research centers for some work in Sillycon Valley or delightful Z Valley (Zhongguancun, China, in case you did not know).
From my vantage point in rural Kentucky, it certainly appears that the European Union is fearful of missing out on either the boom or the bust associated with smart software.
Several observations are warranted:
- BAITers are likely to win. (BAIT means Big AI Tech in my lingo.) Why? Money and FOMO
- Other governments are likely to adapt to the needs of the BAITers. Why? Money and FOMO
- The BAIT outfits will be ruthless and interpret the EU’s new flexibility as weakness.
Net net: Worth watching. What do you think? Money? Fear? A combo?
Stephen E Arnold, November 26, 2025
What Can a Monopoly Type Outfit Do? Move Fast and Break Things Not Yet Broken
November 26, 2025
This essay is the work of a dumb dinobaby. No smart software required.
CNBC published “Google Must Double AI Compute Every 6 Months to Meet Demand, AI Infrastructure Boss Tells Employees.”
How does the math work out? Big numbers result, as well as big power demands, pressure on suppliers, and an incentive to enter hyper-hype marketing mode, I think.
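Doubling every six months compounds quickly. Here is a quick sketch of the arithmetic (my numbers, not Google's; the starting capacity is a hypothetical unit):

```python
# Back-of-the-envelope growth math: capacity that doubles every six months.
# Illustrative only; "1x" is whatever Google's compute is today.

def capacity_multiplier(years: float, doubling_months: float = 6.0) -> float:
    """Growth factor after `years` if capacity doubles every `doubling_months`."""
    doublings = years * 12.0 / doubling_months
    return 2.0 ** doublings

for y in (1, 2, 3, 4, 5):
    print(f"After {y} year(s): {capacity_multiplier(y):,.0f}x today's compute")
# After 1 year: 4x; after 5 years: 1,024x. Hence the big power demands.
```

Two doublings a year is a 4x annual growth rate, which is why the power and supplier pressure pile up so fast.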

Thanks, Venice.ai. Good enough.
The write up states:
Google’s AI infrastructure boss [perhaps a fellow named Amin Vahdat, the executive responsible for Machine Learning, Systems and Cloud AI?] told employees that the company has to double its compute capacity every six months in order to meet demand for artificial intelligence services.
Whose demand exactly? Commercial enterprises, Google’s other leadership, or people looking for a restaurant in an unfamiliar town?
The write up notes:
Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.
Faced with this robust demand, what differentiates the Google from other monopoly-type companies? CNBC delivers a bang up answer to my question:
Google’s “job is of course to build this infrastructure but it’s not to outspend the competition, necessarily,” Vahdat said. “We’re going to spend a lot,” he said, adding that the real goal is to provide infrastructure that is far “more reliable, more performant and more scalable than what’s available anywhere else.” In addition to infrastructure buildouts, Vahdat said Google bolsters capacity with more efficient models and through its custom silicon. Last week, Google announced the public launch of its seventh generation Tensor Processing Unit called Ironwood, which the company says is nearly 30 times more power efficient than its first Cloud TPU from 2018. Vahdat said the company has a big advantage with DeepMind, which has research on what AI models can look like in future years.
I read this as: Spend about the same as a competitor but, because Google is Googley, deliver AI that is more reliable, faster, and more scalable than the non-Googley competition. Google is focused on efficiency. To me, Google bets that its engineering and programming expertise will give it an unbeatable advantage. The VP of Machine Learning, Systems and Cloud AI does not mention that Google has its magical advertising system and about 85 percent of the global Web search market via its assorted search-centric services. Plus one must not overlook the fact that the Google is vertically integrated: chips, data centers, data, smart people, money, and smart software.
The write up points out that Google knows there are risks with its strategy. But FOMO is more important than worrying about costs and technology. But what about users? Sure, okay, eyeballs, but I think Google means humanoids who have time to use Google whilst riding in Waymos and hanging out waiting for a job offer to arrive on an Android phone. Google doesn’t need to worry. Plus it can just bump up its investments until competitors are left dying in the desert known as Death Vall-AI.
After being beaten to the draw in the PR battle with Microsoft, the Google thinks it can win the AI jackpot. But what if it fails? No matter. The AI folks at the Google know that the automated advertising system that collects money at numerous touch points is for now churning away 24×7. Googzilla may just win because it is sitting on the cash machine of cash machines. Even counterfeiters in Peru and Vietnam cannot match Google’s money spinning capability.
Is it game over? Will regulators spring into action? Will Google win the race to software smarter than humans? Sure. Even if part of the push to own the next big thing is puffery, the Google is definitely confident that it will prevail just like Superman and truth, justice, and the American way have. The only hitch in the git along may be having captured enough electrical service to keep the lights on and the power flowing. Lots of power.
Stephen E Arnold, November 26, 2025
Telegram, Did You Know about the Kiddie Pix Pyramid Scheme?
November 25, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
The Independent, a newspaper in the UK, published “Leader of South Korea’s Biggest Telegram Sex Abuse Ring Gets Life Sentence.” The subtitle is a snappy one: “Seoul Court Says Kim Nok Wan Committed Crimes of Extreme Brutality.” Note: I will refer to this convicted person as Mr. Wan. The reason is that he will spend time in solitary confinement. In my experience, individuals involved in kiddie crimes are at the bottom of the totem pole among convicted people. If the prison director wants to keep him alive, he will be kept away from the general population. Even though most South Koreans are polite, it is highly likely that he will face a less than friendly greeting when he visits the TV room or exercise area. Therefore, my designation of Mr. Wan reflects the pallor his skin will evidence.
Now to the story:
The main idea is that Mr. Wan signed up for Telegram. He relied on Telegram’s Group and Channel function. He organized a social community dubbed the Vigilantes, a word unlikely to trigger kiddie pix filters. Then he “coerced victims, nearly 150 of them minors, into producing explicit material through blackmail and then distribute the content in online chat rooms.”

Telegram’s leader sets an example for others who want to break rules and be worshiped. Thanks, Venice.ai. Too bad you ignored my request for no facial hair. Good enough, the standard for excellence today I believe.
Mr. Wan’s innovation was to set up what the Independent called “a pyramid hierarchy.” Think of a Herbalife- or OneCoin-type operation. He incorporated an interesting twist. According to the Independent:
He also sent a video of a victim to their father through an accomplice and threatened to release it at their workplace.
Let’s shift from the clever Mr. Wan to Telegram and its public and private Groups and Channels. The French arrested Pavel Durov in August 2024. The French judiciary identified a dozen crimes he allegedly committed. He awaits trial for these alleged crimes. Since that arrest, Telegram has, based on our monitoring of Telegram, blocked more aggressively a number of users and Groups for violating Telegram’s rules and regulations such as they are. However, Mr. Wan appears to have slipped through despite Telegram’s filtering methods.
Several observations:
- Will Mr. Durov implement content moderation procedures to block, prevent, and remove content like Mr. Wan’s?
- Will South Korea take a firm stance toward Telegram’s use in the country?
- Will Mr. Durov cave in to Iran’s demands so that Telegram is once again available in that country?
- Did Telegram know about Mr. Wan’s activities on the estimable Telegram platform?
Mr. Wan exploited Telegram. Perhaps more forceful actions should be taken by other countries against services which provide a greenhouse for certain types of online activity to flourish? Mr. Durov is a tech bro, and he has been pictured carrying a real (not metaphorical) goat to suggest that he is the greatest of all time.
That perception appears to be at odds with the risk his platform poses to children in my opinion.
Stephen E Arnold, November 25, 2025
LLMs and Creativity: Definitely Not Einstein
November 25, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
I have a vague recollection of a very large lecture room with stadium seating. I think I was at the University of Illinois when I was a high school junior. Part of the odd ball program in which I found myself involved a crash course in psychology. I came away from that class with an idea that has lingered in my mind for lo these many decades; to wit: People who are into psychology are often wacky. Consequently I don’t read too much from this esteemed field of study. (I do have some snappy anecdotes about my consulting projects for a psychology magazine, but let’s move on.)

A semi-creative human explains to his robot that he makes up answers and is not creative in a helpful way. Thanks, Venice.ai. Good enough, and I see you are retiring models, including your default. Interesting.
I read this article in PsyPost: “A Mathematical Ceiling Limits Generative AI to Amateur-Level Creativity.” The main idea is that the current approach to smart software does not just output answers that are dead wrong; the algorithms themselves run into a creative wall.
Here’s the alleged reason:
The investigation revealed a fundamental trade-off embedded in the architecture of large language models. For an AI response to be effective, the model must select words that have a high probability of fitting the context. For instance, if the prompt is “The cat sat on the…”, the word “mat” is a highly effective completion because it makes sense and is grammatically correct. However, because “mat” is the most statistically probable ending, it is also the least novel. It is entirely expected. Conversely, if the model were to select a word with a very low probability to increase novelty, the effectiveness would drop. Completing the sentence with “red wrench” or “growling cloud” would be highly unexpected and therefore novel, but it would likely be nonsensical and ineffective. Cropley determined that within the closed system of a large language model, novelty and effectiveness function as inversely related variables. As the system strives to be more effective by choosing probable words, it automatically becomes less novel.
Let me take a whack at translating this quote from PsyPost: LLMs like Google-type systems have to decide between [a] being effective and picking words that fit the context well, like “jelly” after “I ate peanut butter and…”, or [b] selecting infrequent and unexpected words for novelty, which may lead to LLM wackiness. Therefore, effectiveness and novelty work against each other: more of one means less of the other.
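A toy illustration of this trade-off, using invented probabilities (real LLM vocabularies and distributions are vastly larger than this sketch):

```python
import math

# Invented next-token distribution for the prompt "The cat sat on the ..."
# Numbers are illustrative only, not from any real model.
next_token_probs = {
    "mat": 0.70,              # effective and entirely expected
    "sofa": 0.20,
    "roof": 0.08,
    "growling cloud": 0.02,   # novel but probably nonsensical
}

# One common way to quantify novelty is surprisal (-log2 p):
# the rarer the token, the more bits of "surprise" it carries.
for token, p in next_token_probs.items():
    novelty_bits = -math.log2(p)
    print(f"{token:>16}  p={p:.2f}  novelty={novelty_bits:.2f} bits")
```

The highest-probability completion (“mat”) carries the least surprisal, and the least probable one carries the most: effectiveness and novelty move in opposite directions by construction.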
The article references some fancy math and points out:
This comparison suggests that while generative AI can convincingly replicate the work of an average person, it is unable to reach the levels of expert writers, artists, or innovators. The study cites empirical evidence from other researchers showing that AI-generated stories and solutions consistently rank in the 40th to 50th percentile compared to human outputs. These real-world tests support the theoretical conclusion that AI cannot currently bridge the gap to elite [creative] performance.
Before you put your life savings into a giant can’t-lose AI data center investment, you might want to ponder this passage in the PsyPost article:
“For AI to reach expert-level creativity, it would require new architecture capable of generating ideas not tied to past statistical patterns … Until such a paradigm shift occurs in computer science, the evidence indicates that human beings remain the sole source of high-level creativity.”
Several observations:
- Today’s best-bet approach is the Google-type LLM. It has creative limits as well as the familiar problems of old-fashioned Google search: selling advertising and outputting incorrect answers
- The method itself erects a creative barrier. This is good for humans who can be creative when they are not doom scrolling.
- A paradigm shift could make those giant data centers extremely large white elephants which lenders are not very good at herding along.
Net net: I liked the angle of the article. I am not convinced I should drop my teen impression of psychology. I am a dinobaby, and I like land line phones with rotary dials.
Stephen E Arnold, November 25, 2025
Why the BAIT Outfits Are Drag Netting for Users
November 25, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Have you wondered why the BAIT (big AI tech) companies are pumping cash into what looks to many like a cash bonfire? Here’s one answer, and I think it is a reasonably good one. Navigate to “Best Case: We’re in a Bubble. Worst Case: The People Profiting Most Know Exactly What They’re Doing.” I want to highlight several passages and then offer my usually-ignored observations.

Thanks, Venice.ai. Good enough, but I am not sure how many AI execs wear old-fashioned camping gear.
I noted this statement:
The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe.
My reaction to this bubble argument is that the BAIT outfits realized, after Microsoft said “AI in Windows,” that a monopoly-type outfit was making a move. Was AI the next oil or railroad play? Then Google did its really professional and carefully-planned Code Red (or Yellow, whatever), and the hair-on-fire moment arrived. Now, almost three years later, the hot air from the flaming coifs is equaled by the fumes of incinerating bank notes.
The write up offers this comment:
My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain. The larger the use case, the larger the expense. Most of the larger use cases that I have observed — where AI is leveraged to automate entire workflows, or capture end to end operational data, or replace an entire function — the outlay of work is equal to or greater than the savings. The time we think we’ll save by using AI tends to be spent on doing something else with AI.
The experiences of my team and me support this statement. However, when I go back to the early days of online in the 1970s, the benefits of moving from print research to digital (online) research were tangible. They were quantifiable. Online is where AI lives. As a result, the technology is not global. It is a subset of functions. The more specific the problem, the more likely it is that smart software can help with a segment of the work. The idea that cobbled together methods based on built-in guesses will be wonderful is just plain crazy. Once one thinks of AI as a utility, then it is easier to identify a use case where careful application of the technology will deliver a benefit. I think of AI as a slightly more sophisticated spell checker for writing at the 8th grade level.
The essay points out:
The last ten years have practically been defined by filter bubbles, alternative facts, and weaponized social media — without AI. AI can do all of that better, faster, and with more precision. With a culture-wide degradation of trust in our major global networks, it leaves us vulnerable to lies of all kinds from all kinds of sources and no standard by which to vet the things we see, hear, or read.
Yep, this is a useful way to explain that flows of online information tear down social structures. What’s not referenced, however, is that rebuilding will take a long time. Think about smashing your mom’s favorite knick-knack. Were you capable of making it as good as new? Sure, a few specialists might be able to do a good job, but the time and cost means that once something is destroyed, that something is gone. The rebuild is at best a close approximation. That’s why people who want to go back to social structures in the 1950s are chasing a fairy tale.
The essay notes:
When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body.
My view is that the BAIT outfits want to control, dominate, and cash in. Hey, if you have cancer and one company has the alleged cure, are you going to take the drug or just die?
Several observations are warranted:
- BAIT outfits want to be the winner and be the only alpha dog. Ruthless behavior will be the norm for these firms.
- AI is the next big thing. The idea is that if one wishes it, thinks it, or invests in it, AI will be. My hunch is that the present methodologies are on the path to becoming the equivalent of a dial up modem.
- The social consequences of the AI utility added to social media are either ignored or not understood. AI is the catalyst needed to turn one substance into an explosion.
Net net: Good essay. I think the downsides referenced in the essay understate the scope of the challenge.
Stephen E Arnold, November 25, 2025
Pavel Durov Can Travel As Some New Features Dribble from the Core Engineers
November 25, 2025
This essay is the work of a dumb dinobaby. No smart software required.
In November 2025, Telegram announced Cocoon, its AI system. Well, it is not yet revolutionizing writing code for smart contracts. Like Apple, Telegram is a bit late to the AI dog race. But there is hope for the company, which has faced some headwinds. One blowing from the west is the criminal trial for which Pavel Durov, the founder of Telegram, waits. Plus, the value of the much-hyped TONcoin, the subject of yet another investigation for financial fancy dancing, is tanking.
What’s the good news? Telegram watching outfits like FoneArena and PCNews.ru have reported on some recent Telegram innovations. Keep in mind that Telegram means that a new user installs the Messenger mini app. This is an “everything” app. Through the interface one can do a wide range of actions. Yep, that’s why it is called an “everything” app. You can read Telegram’s own explanation in the firm’s blog.
Fone Arena reports that the Dubai-based virtual company (yeah, go figure that out) has rolled out Live Stories streaming, repeated messages, and gift auctions. Repeated messages will spark some bot developers to build this function into applications. Notifications (wanted and unwanted) are useful in certain types of advertising campaigns. The gift auctions are little more than a hybrid of Google ad auctions and eBay applied to the highly volatile, speculative crypto confections Telegram, users, and developers allegedly find of great value.
The Live Stories streaming is more significant. Rolled out in November 2025, Live Stories allows users to broadcast live streams within the Stories service. Viewers can post comments and interact in real time in a live chat. During a stream, viewers may highlight or pin their messages using Telegram Stars, which is a form of crypto cash. A visible Star counter appears in the corner of the broadcast. Gamification is a big part of the Telegram way. Gambling means crypto transactions. Transactions incur a service charge. A user can kick off a Live Story from a personal account or from Groups or Channels that have unlocked Story posting via boosts. Owners have to unlock the Live Story, however. Plus, the new service supports the real-time messaging protocol for external applications such as OBS and XSplit streaming software.

The interface for Live Stories streaming. Is Telegram angling to kill off Twitch and put a dent in Discord? Will the French judiciary forget to try Pavel Durov for his online service’s behavior? It appears that Mr. Durov and his core engineers think so.
Observations are warranted:
- Live Stories is likely to catch the attention of some of the more interesting crypto promoters who make use of Telegram
- Telegram’s monitoring service will have to operate in real time because bad actors can drop a short but interesting video promo for certain illegal or controversial activities into a stream. The moderation will have to perform better than the Cleveland Browns American football team
- The soft hooks to pump up service charges, or “gas fees” in the lingo of the digital currency enthusiasts, are an important part of the gift and auction play. Think hooking users on speculative investments in digital goodies and then scraping off those service charges.
Net net: Will Cocoon make it easier for developers to code complex bots, mini apps, and distributed applications (dApps)? Answer: Not yet. Just go buy a gift on Telegram. PS. Mr. Zuckerberg, Telegram has aced you again it seems.
Stephen E Arnold, November 25, 2025
Watson: Transmission Is Doing Its Part
November 25, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read an article that stopped me in my tracks. It was “IBM Revisits 2011 AI Jeopardy Win to Capture B2B Demand.” The article reports that a former IBM executive said:
People want AI to be able to do what it can’t…. and immature technology companies are not disciplined enough to correct that thinking.
I find the statement fascinating. IBM Watson was supposed to address some of the challenges cancer patients faced. The reality is that cancer docs in Houston and Manhattan provided IBM with some feedback that shattered IBM’s own ill-disciplined marketing of Watson. What about that building near NYU that was stuffed with AI experts? What about IBM’s sale of its medical unit to Francisco Partners? Where is that smart software today? It is Merative Health, and it is not clear if the company is hitting home runs and generating a flood of cash. So that Watson technology is no longer part of IBM’s smart software solution.

Thanks, Venice.ai. Good enough.
The write up reports that a company called Transmission, which is a business to business or B2B marketing agency, made a documentary about Watson AI. It is not clear from the write up if the documentary was sponsored or if Transmission just had the idea to revisit Watson. According to the write up:
The documentary [“Who is…Watson? The Day AI Went Primetime”] underscores IBM’s legacy of innovation while framing its role in shaping an ethical, inclusive future for AI, a critical differentiator in today’s competitive landscape.
The Transmission/Earnest documentary is a rah rah for IBM and its Watsonx technology. Think of this as Watson Version 2 or Version 3. The Transmission outfit and its Earnest unit (yes, that is its name) in London, England, wants to land more IBM work. Furthermore, rumors suggest that the video was created by Celia Aniskovich as a “spec project.” High quality videos running 18 minutes can burn through six figures quickly. A cost of $250,000 or $300,000 is not unexpected. Add to this the cost of the PR campaign to push Transmission’s brand storytelling capability, and the investment strikes me as a bad-economy sales move. In a fat economy, a marketing outfit would just book business at trade shows or lunch. Now, it is rah rah time and cash outflow.
The write up makes clear that Transmission put its best foot forward. I learned:
The documentary was grounded in testimonials from former IBM staff, and more B2B players are building narratives around expert commentary. B2B marketers say thought leaders and industry analysts are the most effective influencer types (28%), according to an April LinkedIn and Ipsos survey. AI pushback is a hot topic, and so is creating more entertaining B2B content. The biggest concern among leveraging AI tools among adults worldwide is the loss of human jobs, according to a May Kantar survey. The primary goal for video marketing is brand awareness (35%), according to an April LinkedIn and Ipsos survey. In an era where AI is perceived as “abstract or intimidating,” this documentary attempts to humanize it while embracing the narrative style that makes B2B brands stand out,
The IBM message is important. Watson Jeopardy was “good” AI. The move fast, break things, and spend billions approach used today is not like IBM’s approach to Watson. (Too bad about those cancer docs not embracing Watson, a factoid not mentioned in the cited write up.)
The question is, “Will the Watson video go viral?” The Watson Jeopardy dust up took place in 2011, but the Watson name lives on. Google is probably shaking its talons at the sky wishing it had a flashy video too. My hunch is that Google would let its AI make a video or one of the YouTubers would volunteer hoping that an act of goodness would reduce the likelihood Google would cut their YouTube payments. I guess I could ask Watson when it thinks, but I won’t. Been there. Done that.
Stephen E Arnold, November 25, 2025
Tim Apple, Granny Scarfs, and Snooping
November 24, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I spotted a write up in a source I usually ignore. I don’t know if the write up is 100 percent on the money. Let’s assume for the purpose of my dinobaby persona that it indeed is. The write up is “Apple to Pay $95 Million to Settle Suit Accusing Siri of Snoopy Eavesdropping.” Like Apple’s incessant pop ups about my not logging into Facetime, iMessage, and iCloud, Siri being in snoop mode is not surprising to me. Tim Apple, it seems, is winding down. The pace of innovation, in my opinion, is tortoise like. I have nothing against turtle like creatures, but a granny scarf for an iPhone? That’s innovation, almost as cutting edge as the candy colored orange iPhone. Stunning indeed.

Is Frederick the Great wearing an Apple Granny Scarf? Thanks, Venice.ai. Good enough.
What does the write up say about this $95 million sad smile?
Apple has agreed to pay $95 million to settle a lawsuit accusing the privacy-minded company of deploying its virtual assistant Siri to eavesdrop on people using its iPhone and other trendy devices. The proposed settlement filed Tuesday in an Oakland, California, federal court would resolve a 5-year-old lawsuit revolving around allegations that Apple surreptitiously activated Siri to record conversations through iPhones and other devices equipped with the virtual assistant for more than a decade.
Apple has managed to work the legal process for five years. Good work, legal eagles. Billable hours and legal moves generate income if my understanding is correct. Also, the notion of “surreptitiously” fascinates me. Why do the crazy screen nagging? Just activate what you want and remove the users’ options to disable the function. If you want to be surreptitious, the basic concept as I understand it is to operate so others don’t know what you are doing. Good try, but you failed to implement appropriate secretive operational methods. Better luck next time or just enable what you want and prevent users from turning off the data vacuum cleaner.
The write up notes:
Apple isn’t acknowledging any wrongdoing in the settlement, which still must be approved by U.S. District Judge Jeffrey White. Lawyers in the case have proposed scheduling a Feb. 14 court hearing in Oakland to review the terms.
I interpreted this passage to mean that the Judge has to do something. I assume that lawyers will do something. Whoever brought the litigation will do something. It strikes me that Apple will not be writing a check any time soon, nor will the fine change how Tim Apple has set up that outstanding Apple entity to harvest money, data, and good vibes.
I have several questions:
- Will Apple offer a complimentary Granny Scarf to each of its attorneys working this case?
- Will Apple’s methods of harvesting data be revealed in a white paper written by either [a] Apple, [b] an unhappy Apple employee, or [c] a researcher laboring in the vineyards of Stanford University or San Jose State?
- Will regulatory authorities and the US judicial folks take steps to curtail the “we do what we want” approach to privacy and security?
I have answers for each of these questions. Here we go:
- No. Granny Scarfs are sold out
- No. No one wants to be hassled endlessly by Apple’s legions of legal eagles
- No. As the recent Meta decision about WhatsApp makes clear, green light, tech bros. Move fast, break things. Just do it.
Stephen E Arnold, November 24, 2025
Google: AI or Else. What a Pleasant, Implicit Threat
November 24, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Do you remember that old chestnut of a how-to book. I think its title was How to Win Friends and Influence People. I think the book contains a statement like this:
“Instead of condemning people, let’s try to understand them. Let’s try to figure out why they do what they do. That’s a lot more profitable and intriguing than criticism; and it breeds sympathy, tolerance and kindness. ‘To know all is to forgive all.’”
The Google leadership has mastered this approach. Look at its successes. An advertising system that sells access to users from an automated bidding system running within the Google platform. Isn’t that a way to breed sympathy for the company’s approach to serving the needs of its customers? Another example is the brilliant idea of making a Google-centric Agentic Operating System for the world. I know that the approach leaves plenty of room for Google partners, Google high performers, and Google services. Won’t everyone respond in a positive way to the “space” that Google leaves for others?

Thanks, Venice.ai. Good enough.
I read “Google Boss Warns No Company Is Going to Be Immune If AI Bubble Bursts.” What an excellent example of putting the old-fashioned precepts of Dale Carnegie’s book into practice. The soon-to-be-sued BBC article states:
Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an “extraordinary moment”, there was some “irrationality” in the current AI boom… “I think no company is going to be immune, including us,” he said.
My memory doesn’t work the way it did when I was 13 years old, but I think I heard this same Silicon Valley luminary say, “Code Red” when Microsoft announced a deal to put AI in its products and services. With the klaxon sounding and flashing warning lights, Google began pushing people and money into smart software. Thus, the AI craze was legitimized. Not even the spat between Sam Altman and Elon Musk could slow the acceleration. And where are we now?
The chief Googler, a former McKinsey & Company consultant, is explaining that the AI boom is both rational and irrational. Is that a threat from a company that knee-jerked its way forward? Is Google saying that I should embrace AI or suffer the consequences? Mr. Pichai is worried about the energy needs of AI. That’s good. Because one doesn’t need to be an expert in utility demand forecasting to figure out that if the announced data centers are built, there will probably be brownouts or power rationing. Companies like Google can pay their electric bills; others may not have the benefit of that outstanding advertising system to spit out cash with the heartbeat of an atomic clock.
I am not sure Dale Carnegie would have phrased things the way Google’s leader does in the article:
“We will have to work through societal disruptions,” he said, adding that it would also “create new opportunities”. “It will evolve and transition certain jobs, and people will need to adapt,” he said. Those who do adapt to AI “will do better”. “It doesn’t matter whether you want to be a teacher [or] a doctor. All those professions will be around, but the people who will do well in each of those professions are people who learn how to use these tools.”
This sure sounds like a dire prediction for people who don’t “learn how to use these tools.” I would go so far as to suggest that one of the progenitors of the AI craziness is making another threat. I interpret the comment as meaning, “Get with the program or you will never work again anywhere.”
How uplifting. Imagine that old coot Dale Carnegie saying in the 1930s that you will do poorly if you don’t get with the Googley AI program. Here’s one of Dale’s off-the-wall comments:
“The only way to influence people is to talk in terms of what the other person wants.”
The statements in the BBC story make one thing clear: I know what Google wants. I am not sure it is what other people want. Obviously the wacko Dale Carnegie is not in tune with the McKinsey consultant’s pragmatic view of what Google wants. Poor Dale. It seems his observations do not line up with the Google view of life for those who don’t do AI.
Stephen E Arnold, November 24, 2025
Microsoft Factoid: 30 Percent of Our Code Is Vibey
November 24, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
Is Microsoft cranking out one fifth to one third of its code using vibey methods? A write up from Ibrahim Diallo seeks to answer this question in his essay “Is 30% of Microsoft’s Code Really AI-Generated?” My instinctive response was, “Nope. Marketing.” Microsoft feels the heat. The Google is pushing the message that it will deliver the Agentic Operating System for the emergence of a new computing epoch. In response, Microsoft has been pumping juice into its marketing collateral. For example, Microsoft is building data center systems that span nations. Copilot will make your Notepad “experience” more memorable. Visio, a stepchild application, is really cheap. Add these steps together, and you get a profile of a very large company under pressure and showing signs of cracking. Why? Google is turning up the heat, and Microsoft feels it.
Mr. Diallo writes:
A few months back, news outlets were buzzing with reports that Satya Nadella claimed 30% of the code in Microsoft’s repositories was AI-generated. This fueled the hype around tools like Copilot and Cursor. The implication seemed clear: if Microsoft’s developers were now “vibe coding,” everyone should embrace the method.
Then he makes a pragmatic observation:
The line between “AI-generated” and “human-written” code has become blurrier than the headlines suggest. And maybe that’s the point. When AI becomes just another tool in the development workflow, like syntax highlighting or auto-complete, measuring its contribution as a simple percentage might not be meaningful at all.
Several observations:
- Microsoft’s leadership is outputting difficult-to-believe statements
- Microsoft apparently has been recycling code because those contributions from Stack Overflow are not tabulated
- Marketing is now the engine driving Microsoft’s AI future.
I would assert that the answer to Mr. Diallo’s question is, “Whatever unfounded assertion Microsoft offers is actual factual.” That’s okay with me, but some people may be hooked by Google’s Agentic Operating System pitch.
Stephen E Arnold, November 24, 2025

