Can the Chrome Drone Deorbit Comet?

November 28, 2025

Perplexity developed Comet, an intuitive AI-powered Internet browser. Analytics Insight has a rundown on Comet in the article: “Perplexity CEO Aravind Srinivas Claims Comet AI Browser Could ‘Kill’ Android System.” Perplexity designed Comet for more complex tasks such as booking flights, shopping, and answering and then executing simple prompts. The new browser is now being released for Android.

Until recently, Comet was an exclusive, invite-only desktop browser; it is now available for anyone to download. Perplexity is taking the same staged approach for the Android release. The company hopes Comet will overtake Android as the top mobile OS, or so CEO Aravind Srinivas plans.

Another question is whether Comet can overtake Chrome as the favored AI browser:

“The launch of Comet AI browser coincides with the onset of a new conflict between AI browsers. Not long ago, OpenAI introduced ChatGPT Atlas, while Microsoft Edge and Google Chrome are upgrading their platforms with top-of-the-line AI tools. Additionally, Perplexity previously received attention for a $34.5 billion proposal to acquire Google Chrome, a bold move indicating its aspirations.

Comet, like many contemporary browsers, is built on the open-source Chromium framework provided by Google, which is also the backbone for Chrome, Edge, and other major browsers. With Comet’s mobile rollout and Srinivas’s bold claim, Perplexity is obviously betting entirely on an AI-first future, one that will see a convergence of the browser and the operating system.”

Comet is built on Chromium. Chrome is too. Comet is a decent web browser, but it doesn’t have the power of Alphabet behind it. Chrome will dominate the AI-browser race because it has money to launch a swarm of digital drones at this frail craft.

Whitney Grace, November 28, 2025

Coca-Cola and AI: Things May Not Be Going Better

November 27, 2025

Coca-Cola didn’t learn its lesson last year with its less-than-good AI-generated Christmas commercial. It repeated the mistake in 2025. Although the technology has improved, the ad still bears all the fakeness of early CGI (when examined in hindsight, of course). Coca-Cola, according to Creative Bloq, did want to redeem itself, so the soft drink company controlled every detail in the ad: “Devastating Graphic Shows Just How Bad The Coca-Cola Christmas Ad Really Is.”

Here’s how one expert viewed it:

“In a post on LinkedIn, the AI consultant Dino Burbidge points out the glaring lack of consistency and continuity in the design of the trucks in the new AI Holidays are Coming ad, which was produced by AI studio Secret Level. At least one of the AI-generated vehicles appears to completely defy physics, putting half of the truck’s payload beyond the last wheel.

Dino suggests that the problem with the ad is not AI per se, but the fact that no human appears to have checked what the AI models generated… or that more worryingly they checked but didn’t care, which is extraordinary when the truck is the main character in the ad.”

It’s been suggested that Coca-Cola used AI to engage in rage bait instead of building a genuinely decent Christmas ad. There was a behind-the-scenes video of how the ad was made, and even that used an AI voice-over.

I liked the different horse-drawn wagons. Very consistent.

Whitney Grace, November 27, 2025

Microsoft: Desperate to Be a Leader in the Agentic OS Push, It Decides to Shove, Not Lure, Supporters

November 26, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

I had a friend in high school who liked a girl, Mary B. He smiled at her. He complimented her plaid skirt. He gave her a birthday gift during lunch in the school cafeteria. My reaction to this display was, “Yo, Tommy, you are trying too hard.” I said nothing; I watched as Mary B. focused her attention on a football player with a C average but comic-book Superman looks. Tommy became known as a person who tried too hard to reach a goal, without realizing that no girl wanted to be the focal point of a birthday-gift presentation in the school cafeteria with hundreds of students watching. Fail, Tommy.


Thanks, Venice.ai. Good enough, the gold standard today I believe.

I thought about this try-too-hard approach when I read “Windows President Addresses Current State of Windows 11 after AI Backlash.” The source is the on-again, off-again podcasting outfit called Windows Central. Here’s a snippet from the write up, which recycles content from X.com. The source of the statement is Pavan Davuluri, the Microsoft Windows lead:

The team (and I) take in a ton of feedback. We balance what we see in our product feedback systems with what we hear directly. They don’t always match, but both are important. I’ve read through the comments and see focus on things like reliability, performance, ease of use and more… we care deeply about developers. We know we have work to do on the experience, both on the everyday usability, from inconsistent dialogs to power user experiences. When we meet as a team, we discuss these pain points and others in detail, because we want developers to choose Windows.

Windows Central pointed out that Lead Davuluri demonstrated “leadership” with a bold move: He disabled comments on his X.com post about caring deeply about customers. I like it when Lead Davuluri takes decisive leadership actions that prevent people from providing inputs. Is that why Microsoft ignored focus groups responding to Wi-Fi hardware that did not work and “ribbon” icons instead of words in Office application interfaces? I think I have possibly identified a trend at Microsoft: The aircraft carrier is steaming forward, and it is too bad about the dolphins, fishing boats, and scuba divers. I mean, who cares about such unseen flotsam and jetsam?

Remarkably, Windows Central’s write up includes another hint of negativism about Microsoft Windows:

What hasn’t helped in recent years is “Continuous Innovation,” Microsoft’s update delivery strategy that’s designed to keep the OS fresh with new features and changes on a consistent, monthly basis. On paper, it sounds like a good idea, but in practice, updating Windows monthly with new features often causes more headaches than joy for a lot of people. I think most users would prefer one big update at a predictable, certain time of the year, just like how Apple and Google do it.

Several observations if I may offer them as an aged dinobaby:

  1. Google has said it wants to become the agentic operating system. That means Google wants to kill off Microsoft, its applications, and its dreams.
  2. Microsoft knows that it faces competition from a person whom Satya Nadella knows, understands, and absolutely must defeat because his family would make fun of him if he failed. Yep, a man-to-man dust up with annoying users trying to stop the march of technological innovation and revenue. Lead Davuluri has his marching orders; hence, the pablum-tinged non-speak cited in the Windows Central write up.
  3. User needs and government regulation have zero — that’s right, none, nil, zip — chance of altering what these BAIT (big AI tech) outfits will do to win. Buckle up, Tommy. You are going to be rejected again.

Net net: That phrase “agentic OS” has a ring to it, doesn’t it?

Stephen E Arnold, November 26, 2025

Has Big Tech Taught the EU to Be Flexible?

November 26, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Here’s a question that arose in a lunch meeting today (November 19, 2025): Has Big Tech brought the European Union to heel? What’s your answer?

The “trust” outfit Thomson Reuters published “EU Eases AI, Privacy Rules As Critics Warn of Caving to Big Tech.”


European Union regulators demonstrate their willingness to be flexible. These exercises are performed in the privacy of a conference room in Brussels. The class is taught by those big tech leaders who have demonstrated their ability to chart a course and keep it. Thanks, Venice.ai. How about your interface? Yep, good enough I think.

The write up reported:

The EU Commission’s “Digital Omnibus”, which faces debate and votes from European countries, proposed to delay stricter rules on use of AI in “high-risk” areas until late 2027, ease rules around cookies and enable more use of data.

Ah, back-pedaling seems to be the new Zen moment for the European Union.

The “trust” outfit explains why, sort of:

Europe is scrabbling to balance tough rules with not losing more ground in the global tech race, where companies in the United States and Asia are streaking ahead in artificial intelligence and chips.

Several factors are causing this rethink. I am not going to walk the well-worn path called “Privacy Lane.” The reason for the softening is not a warm summer day. The EU is concerned about:

  1. Losing traction in the slippery world of smart software
  2. Failing to cultivate AI startups with more than a snowball’s chance of surviving in the Dante’s inferno of the competitive market
  3. Keeping AI whiz kids from bailing out of European mathematics, computer science, and physics research centers for some work in Sillycon Valley or delightful Z Valley (Zhongguancun, China, in case you did not know).

From my vantage point in rural Kentucky, it certainly appears that the European Union is fearful of missing out on either the boom or the bust associated with smart software.

Several observations are warranted:

  1. BAITers are likely to win. (BAIT means Big AI Tech in my lingo.) Why? Money and FOMO.
  2. Other governments are likely to adapt to the needs of the BAITers. Why? Money and FOMO.
  3. The BAIT outfits will be ruthless and interpret the EU’s new flexibility as weakness.

Net net: Worth watching. What do you think? Money? Fear? A combo?

Stephen E Arnold, November 26, 2025

What Can a Monopoly Type Outfit Do? Move Fast and Break Things Not Yet Broken

November 26, 2025

This essay is the work of a dumb dinobaby. No smart software required.

CNBC published “Google Must Double AI Compute Every 6 Months to Meet Demand, AI Infrastructure Boss Tells Employees.”

How does the math work out? Big numbers result, as well as big power demands, pressure on suppliers, and, I think, an incentive to enter hyper-hype marketing mode.


Thanks, Venice.ai. Good enough.

The write up states:

Google’s AI infrastructure boss [maybe a fellow named Amin Vahdat, the leader responsible for Machine Learning, Systems, and Cloud AI?] told employees that the company has to double its compute capacity every six months in order to meet demand for artificial intelligence services.

Whose demand exactly? Commercial enterprises, Google’s other leadership, or people looking for a restaurant in an unfamiliar town?

The write up notes:

Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.

Faced with this robust demand, what differentiates the Google from other monopoly-type companies? CNBC delivers a bang-up answer to my question:

Google’s “job is of course to build this infrastructure but it’s not to outspend the competition, necessarily,” Vahdat said. “We’re going to spend a lot,” he said, adding that the real goal is to provide infrastructure that is far “more reliable, more performant and more scalable than what’s available anywhere else.” In addition to infrastructure buildouts, Vahdat said Google bolsters capacity with more efficient models and through its custom silicon. Last week, Google announced the public launch of its seventh generation Tensor Processing Unit called Ironwood, which the company says is nearly 30 times more power efficient than its first Cloud TPU from 2018. Vahdat said the company has a big advantage with DeepMind, which has research on what AI models can look like in future years.

I read this as: Spend the same as a competitor, but, because Google is Googley, the company will deliver more reliable, faster, and more scalable AI than the non-Googley competition. Google is focused on efficiency. To me, Google bets that its engineering and programming expertise will give it an unbeatable advantage. The VP of Machine Learning, Systems and Cloud AI does not mention that Google has its magical advertising system and about 85 percent of the global Web search market via its assorted search-centric services. Plus one must not overlook the fact that the Google is vertically integrated: chips, data centers, data, smart people, money, and smart software.

The write up points out that Google knows there are risks with its strategy. But FOMO is more important than worrying about costs and technology. But what about users? Sure, okay, eyeballs, but I think Google means humanoids who have time to use Google whilst riding in Waymos and hanging out waiting for a job offer to arrive on an Android phone. Google doesn’t need to worry. Plus it can just bump up its investments until competitors are left dying in the desert known as Death Vall-AI.

After being beaten to the draw in the PR battle with Microsoft, the Google thinks it can win the AI jackpot. But what if it fails? No matter. The AI folks at the Google know that the automated advertising system that collects money at numerous touch points is, for now, churning away 24×7. Googzilla may just win because it is sitting on the cash machine of cash machines. Even counterfeiters in Peru and Vietnam cannot match Google’s money-spinning capability.

Is it game over? Will regulators spring into action? Will Google win the race to software smarter than humans? Sure. Even if part of the push to own the next big thing is puffery, the Google is definitely confident that it will prevail, just as Superman and truth, justice, and the American way have. The only hitch in the git-along may be capturing enough electrical service to keep the lights on and the power flowing. Lots of power.

Stephen E Arnold, November 26, 2025

LLMs and Creativity: Definitely Not Einstein

November 25, 2025

Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.

I have a vague recollection of a very large lecture room with stadium seating. I think I was at the University of Illinois when I was a high school junior. Part of the oddball program in which I found myself involved a crash course in psychology. I came away from that class with an idea that has lingered in my mind for lo these many decades; to wit: People who are into psychology are often wacky. Consequently, I don’t read too much from this esteemed field of study. (I do have some snappy anecdotes about my consulting projects for a psychology magazine, but let’s move on.)


A semi-creative human explains to his robot that it makes up answers and is not creative in a helpful way. Thanks, Venice.ai. Good enough, and I see you are retiring models, including your default. Interesting.

I read in PsyPost this article: “A Mathematical Ceiling Limits Generative AI to Amateur-Level Creativity.” The main idea is that the current approach to smart software does not just get answers dead wrong; the algorithms themselves run into a creative wall.

Here’s the alleged reason:

The investigation revealed a fundamental trade-off embedded in the architecture of large language models. For an AI response to be effective, the model must select words that have a high probability of fitting the context. For instance, if the prompt is “The cat sat on the…”, the word “mat” is a highly effective completion because it makes sense and is grammatically correct. However, because “mat” is the most statistically probable ending, it is also the least novel. It is entirely expected. Conversely, if the model were to select a word with a very low probability to increase novelty, the effectiveness would drop. Completing the sentence with “red wrench” or “growling cloud” would be highly unexpected and therefore novel, but it would likely be nonsensical and ineffective. Cropley determined that within the closed system of a large language model, novelty and effectiveness function as inversely related variables. As the system strives to be more effective by choosing probable words, it automatically becomes less novel.

Let me take a whack at translating this quote from PsyPost. LLMs like Google-type systems have to decide: [a] Be effective and pick words that fit the context well, like “jelly” after “I ate peanut butter and…”. Or [b] select infrequent and unexpected words for novelty, which may lead to LLM wackiness. Therefore, effectiveness and novelty work against each other: more of one means less of the other.

The article references some fancy math and points out:

This comparison suggests that while generative AI can convincingly replicate the work of an average person, it is unable to reach the levels of expert writers, artists, or innovators. The study cites empirical evidence from other researchers showing that AI-generated stories and solutions consistently rank in the 40th to 50th percentile compared to human outputs. These real-world tests support the theoretical conclusion that AI cannot currently bridge the gap to elite [creative] performance.

Before you put your life savings into a giant can’t-lose AI data center investment, you might want to ponder this passage in the PsyPost article:

“For AI to reach expert-level creativity, it would require new architecture capable of generating ideas not tied to past statistical patterns… Until such a paradigm shift occurs in computer science, the evidence indicates that human beings remain the sole source of high-level creativity.”

Several observations:

  1. Today’s best-bet approach is the Google-type LLM. It has creative limits as well as the problems of selling advertising like old-fashioned Google search and outputting incorrect answers.
  2. The method itself erects a creative barrier. This is good for humans who can be creative when they are not doom scrolling.
  3. A paradigm shift could make those giant data centers extremely large white elephants which lenders are not very good at herding along.

Net net: I liked the angle of the article. I am not convinced I should drop my teen impression of psychology. I am a dinobaby, and I like land line phones with rotary dials.

Stephen E Arnold, November 25, 2025

Why the BAIT Outfits Are Drag Netting for Users

November 25, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Have you wondered why the BAIT (big AI tech) companies are pumping cash into what looks to many like a cash bonfire? Here’s one answer, and I think it is a reasonably good one. Navigate to “Best Case: We’re in a Bubble. Worst Case: The People Profiting Most Know Exactly What They’re Doing.” I want to highlight several passages and then offer my usually-ignored observations.


Thanks, Venice.ai. Good enough, but I am not sure how many AI execs wear old-fashioned camping gear.

I noted this statement:

The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe.

My reaction to this bubble argument is that the BAIT outfits realized, after Microsoft said “AI in Windows,” that a monopoly-type outfit was making a move. Was AI the next oil or railroad play? Then Google did its really professional and carefully planned Code Red or Yellow or whatever, and the hair-on-fire moment arrived. Now, almost three years later, the hot air from the flaming coifs is equaled by the fumes of incinerating bank notes.

The write up offers this comment:

My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain. The larger the use case, the larger the expense. Most of the larger use cases that I have observed — where AI is leveraged to automate entire workflows, or capture end to end operational data, or replace an entire function — the outlay of work is equal to or greater than the savings. The time we think we’ll save by using AI tends to be spent on doing something else with AI.

The experiences of my team and me support this statement. However, when I go back to the early days of online in the 1970s, the benefits of moving from print research to digital (online) research were tangible. They were quantifiable. Online is where AI lives. As a result, the technology is not global; it is a subset of functions. The more specific the problem, the more likely it is that smart software can help with a segment of the work. The idea that cobbled-together methods based on built-in guesses will be wonderful is just plain crazy. Once one thinks of AI as a utility, it is easier to identify a use case where careful application of the technology will deliver a benefit. I think of AI as a slightly more sophisticated spell checker for writing at the 8th-grade level.

The essay points out:

The last ten years have practically been defined by filter bubbles, alternative facts, and weaponized social media — without AI. AI can do all of that better, faster, and with more precision. With a culture-wide degradation of trust in our major global networks, it leaves us vulnerable to lies of all kinds from all kinds of sources and no standard by which to vet the things we see, hear, or read.

Yep, this is a useful way to explain that flows of online information tear down social structures. What’s not referenced, however, is that rebuilding will take a long time. Think about smashing your mom’s favorite knick-knack. Were you capable of making it as good as new? Sure, a few specialists might be able to do a good job, but the time and cost mean that once something is destroyed, that something is gone. The rebuild is at best a close approximation. That’s why people who want to go back to the social structures of the 1950s are chasing a fairy tale.

The essay notes:

When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body.

My view is that the BAIT outfits want to control, dominate, and cash in. Hey, if you have cancer and one company has the alleged cure, are you going to take the drug or just die?

Several observations are warranted:

  1. BAIT outfits want to be the winner and be the only alpha dog. Ruthless behavior will be the norm for these firms.
  2. AI is the next big thing. The idea is that if one wishes it, thinks it, or invests in it, AI will be. My hunch is that the present methodologies are on the path to becoming the equivalent of a dial up modem.
  3. The social consequences of the AI utility added to social media are either ignored or not understood. AI is the catalyst needed to turn one substance into an explosion.

Net net: Good essay. I think the downsides referenced in the essay understate the scope of the challenge.

Stephen E Arnold, November 25, 2025

Watson: Transmission Is Doing Its Part

November 25, 2025

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read an article that stopped me in my tracks. It was “IBM Revisits 2011 AI Jeopardy Win to Capture B2B Demand.” The article reports that a former IBM executive said:

People want AI to be able to do what it can’t…. and immature technology companies are not disciplined enough to correct that thinking.

I find the statement fascinating. IBM Watson was supposed to address some of the challenges cancer patients faced. The reality is that cancer docs in Houston and Manhattan provided IBM with some feedback that shattered IBM’s own ill-disciplined marketing of Watson. What about that building near NYU that was stuffed with AI experts? What about IBM’s sale of its medical unit to Francisco Partners? Where is that smart software today? It is Merative, and it is not clear if the company is hitting home runs and generating a flood of cash. So that Watson technology is no longer part of IBM’s smart software solution.


Thanks, Venice.ai. Good enough.

The write up reports that a company called Transmission, which is a business to business or B2B marketing agency, made a documentary about Watson AI. It is not clear from the write up if the documentary was sponsored or if Transmission just had the idea to revisit Watson. According to the write up:

The documentary [“Who is…Watson? The Day AI Went Primetime”] underscores IBM’s legacy of innovation while framing its role in shaping an ethical, inclusive future for AI, a critical differentiator in today’s competitive landscape.

The Transmission/Earnest documentary is a rah rah for IBM and its Watsonx technology. Think of this as Watson Version 2 or Version 3. The Transmission outfit and its Earnest unit (yes, that is its name) in London, England, want to land more IBM work. Furthermore, rumors suggest that the video was created by Celia Aniskovich as a “spec project.” High-quality videos running 18 minutes can burn through six figures quickly. A cost of $250,000 or $300,000 is not unexpected. Add to this the cost of the PR campaign to push Transmission’s brand storytelling capability, and the investment strikes me as a bad-economy sales move. In a fat economy, a marketing outfit would just book business at trade shows or lunch. Now, it is rah rah time and cash outflow.

The write up makes clear that Transmission put its best foot forward. I learned:

The documentary was grounded in testimonials from former IBM staff, and more B2B players are building narratives around expert commentary. B2B marketers say thought leaders and industry analysts are the most effective influencer types (28%), according to an April LinkedIn and Ipsos survey. AI pushback is a hot topic, and so is creating more entertaining B2B content. The biggest concern about leveraging AI tools among adults worldwide is the loss of human jobs, according to a May Kantar survey. The primary goal for video marketing is brand awareness (35%), according to an April LinkedIn and Ipsos survey. In an era where AI is perceived as “abstract or intimidating,” this documentary attempts to humanize it while embracing the narrative style that makes B2B brands stand out.

The IBM message is important. Watson Jeopardy was “good” AI. The move fast, break things, and spend billions approach used today is not like IBM’s approach to Watson. (Too bad about those cancer docs not embracing Watson, a factoid not mentioned in the cited write up.)

The question is, “Will the Watson video go viral?” The Watson Jeopardy dust up took place in 2011, but the Watson name lives on. Google is probably shaking its talons at the sky, wishing it had a flashy video too. My hunch is that Google would let its AI make a video, or one of the YouTubers would volunteer, hoping that an act of goodness would reduce the likelihood Google would cut their YouTube payments. I guess I could ask Watson what it thinks, but I won’t. Been there. Done that.

Stephen E Arnold, November 25, 2025

Google: AI or Else. What a Pleasant, Implicit Threat

November 24, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Do you remember that old chestnut of a how-to book? I think its title was How to Win Friends and Influence People. I think the book contains a statement like this:

“Instead of condemning people, let’s try to understand them. Let’s try to figure out why they do what they do. That’s a lot more profitable and intriguing than criticism; and it breeds sympathy, tolerance and kindness. ‘To know all is to forgive all.’”

The Google leadership has mastered this approach. Look at its successes. An advertising system that sells access to users via an automated bidding system running within the Google platform. Isn’t that a way to breed sympathy for the company’s approach to serving the needs of its customers? Another example is the brilliant idea of making a Google-centric Agentic Operating System for the world. I know that the approach leaves plenty of room for Google partners, Google high performers, and Google services. Won’t everyone respond in a positive way to the “space” that Google leaves for others?


Thanks, Venice.ai. Good enough.

I read “Google Boss Warns No Company Is Going to Be Immune If AI Bubble Bursts.” What an excellent example of putting the old-fashioned precepts of Dale Carnegie’s book into practice. The soon-to-be-sued BBC article states:

Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an “extraordinary moment”, there was some “irrationality” in the current AI boom… “I think no company is going to be immune, including us,” he said.

My memory doesn’t work the way it did when I was 13 years old, but I think I heard this same Silicon Valley luminary say, “Code Red” when Microsoft announced a deal to put AI in its products and services. With the klaxon sounding and flashing warning lights, Google began pushing people and money into smart software. Thus, the AI craze was legitimized. Not even the spat between Sam Altman and Elon Musk could slow the acceleration. And where are we now?

The chief Googler, a former McKinsey & Company consultant, is explaining that the AI boom is rational and irrational. Is that a threat from a company that knee-jerked its way forward? Is Google saying that I should embrace AI or suffer the consequences? Mr. Pichai is worried about the energy needs of AI. That’s good, because one doesn’t need to be an expert in utility demand forecasting to figure out that if the announced data centers are built, there will probably be brownouts or power rationing. Companies like Google can pay their electric bills; others may not have the benefit of that outstanding advertising system to spit out cash with the heartbeat of an atomic clock.

I am not sure that Dale Carnegie would have phrased statements like these, if they are indeed words tumbling from Google’s leader as presented in the article:

“We will have to work through societal disruptions,” he said, adding that it would also “create new opportunities”. “It will evolve and transition certain jobs, and people will need to adapt,” he said. Those who do adapt to AI “will do better”. “It doesn’t matter whether you want to be a teacher [or] a doctor. All those professions will be around, but the people who will do well in each of those professions are people who learn how to use these tools.”

This sure sounds like a dire prediction for people who don’t “learn how to use these tools.” I would go so far as to suggest that one of the progenitors of the AI craziness is making another threat. I interpret the comment as meaning, “Get with the program or you will never work again anywhere.”

How uplifting. Imagine that old coot Dale Carnegie saying in the 1930s that you will do poorly if you don’t get with the Googley AI program. Here’s one of Dale’s off-the-wall comments:

“The only way to influence people is to talk in terms of what the other person wants.”

The statements in the BBC story make one thing clear: I know what Google wants. I am not sure it is what other people want. Obviously the wacko Dale Carnegie is not in tune with the McKinsey consultant’s pragmatic view of what Google wants. Poor Dale. It seems his observations do not line up with the Google view of life for those who don’t do AI.

Stephen E Arnold, November 24, 2025

Microsoft Factoid: 30 Percent of Our Code Is Vibey

November 24, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

Is Microsoft cranking out one fifth to one third of its code using vibey methods? A write up from Ibrahim Diallo seeks to answer this question in his essay “Is 30% of Microsoft’s Code Really AI-Generated?” My instinctive response was, “Nope. Marketing.” Microsoft feels the heat. The Google is pushing the message that it will deliver the Agentic Operating System for the emergence of a new computing epoch. In response, Microsoft has been pumping juice into its marketing collateral. For example, Microsoft is building data center systems that span nations. Copilot will make your Notepad “experience” more memorable. Visio, a stepchild application, is really cheap. Add these steps together, and you get a profile of a very large company under pressure and showing signs of cracking. Why? Google is turning up the heat, and Microsoft feels it.

Mr. Diallo writes:

A few months back, news outlets were buzzing with reports that Satya Nadella claimed 30% of the code in Microsoft’s repositories was AI-generated. This fueled the hype around tools like Copilot and Cursor. The implication seemed clear: if Microsoft’s developers were now “vibe coding,” everyone should embrace the method.

Then he makes a pragmatic observation:

The line between “AI-generated” and “human-written” code has become blurrier than the headlines suggest. And maybe that’s the point. When AI becomes just another tool in the development workflow, like syntax highlighting or auto-complete, measuring its contribution as a simple percentage might not be meaningful at all.

Several observations:

  1. Microsoft’s leadership is outputting difficult-to-believe statements.
  2. Microsoft apparently has been recycling code, because those contributions from Stack Overflow are not tabulated.
  3. Marketing is now the engine making Microsoft’s AI future unfold.

I would assert that the answer to Mr. Diallo’s question is, “Whatever unfounded assertion Microsoft offers is actual factual.” That’s okay with me, but some people may be hooked by Google’s Agentic Operating System pitch.

Stephen E Arnold, November 24, 2025
