Can the Chrome Drone Deorbit Comet?

November 28, 2025

Perplexity developed Comet, an intuitive AI-powered Internet browser. Analytic Insight has a rundown on Comet in the article: “Perplexity CEO Aravind Srinivas Claims Comet AI Browser Could ‘Kill’ Android System.” Perplexity designed Comet for more complex tasks such as booking flights, shopping, and answering and then executing simple prompts. The new browser is now being released for Android OS.

Until recently, Comet was an exclusive, invite-only desktop browser. It is now available for general download, and Comet is taking the same invite-then-open approach for its Android release. Perplexity hopes Comet will overtake Android as the top mobile OS, or so CEO Aravind Srinivas plans.

Another question is whether Comet could overtake Chrome as the favored AI browser:

“The launch of Comet AI browser coincides with the onset of a new conflict between AI browsers. Not long ago, OpenAI introduced ChatGPT Atlas, while Microsoft Edge and Google Chrome are upgrading their platforms with top-of-the-line AI tools. Additionally, Perplexity previously received attention for a $34.5 billion proposal to acquire Google Chrome, a bold move indicating its aspirations.

Comet, like many contemporary browsers, is built on the open-source Chromium framework provided by Google, which is also the backbone for Chrome, Edge, and other major browsers. With Comet’s mobile rollout and Srinivas’s bold claim, Perplexity is obviously betting entirely on an AI-first future, one that will see a convergence of the browser and the operating system.”

Comet is built on Chromium. Chrome is too. Comet is a decent web browser, but it doesn’t have the power of Alphabet behind it. Chrome will dominate the AI-browser race because it has money to launch a swarm of digital drones at this frail craft.

Whitney Grace, November 28, 2025

Turkey Time: IT Projects Fail Like Pies and Cakes from Crazed Aunties

November 27, 2025

Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.

Today is Thanksgiving, and it is appropriate to consider the turkey approach to technology. The source of this idea is the IEEE.org online publication. The article explaining what I call “turkeyism” is “How IT Managers Fail Software Projects.” Because the write up is almost 4,000 words and far too long for reading during an American football game’s halftime break, I shall focus on a handful of points. I encourage you to read the entire article and, of course, sign up and subscribe. If you don’t, the begging-for-dollars pop up may motivate you to click away and lose the full wisdom of the IEEE write up. I want to point out that many IT managers are trained as electrical engineers or computer scientists who have had to endure the veritable wonderland of imaginary numbers for a semester or two. But increasingly IT managers can be MBAs or, in some frisky Silicon Valley-type companies, recent high school graduates with a native ability to solve complex problems and manage those older than they are. Hey, that works, right?


Auntie knows how to manage the baking process. She practices excellent hygiene, but with age comes forgetfulness. Those cookies look yummy. Thanks, Venice.ai. Not mom, but good enough with Auntie pawing the bird.

Answer: Actually no.

The cited IEEE article states:

Global IT spending has more than tripled in constant 2025 dollars since 2005, from US $1.7 trillion to $5.6 trillion, and continues to rise. Despite additional spending, software success rates have not markedly improved in the past two decades. The result is that the business and societal costs of failure continue to grow as software proliferates, permeating and interconnecting every aspect of our lives.

Yep, and lots of those managers are members of IEEE or similar organizations. How about that jump from solving mathy problems to making software that works? It doesn’t seem to be working. Is it the universities, the on-the-job training, or the failure of continuing education? Not surprisingly, the write up doesn’t offer a solution.

What we have is a global, expensive problem. With more of everyday life dependent on “technology,” a failure can have some interesting consequences. Not only is it tough to get that new sweater delivered by Amazon, but downtime can kill a kid in a hospital when a system keels over. Dead is dead, isn’t it?

The write up says:

A report from the Consortium for Information & Software Quality (CISQ) estimated the annual cost of operational software failures in the United States in 2022 alone was $1.81 trillion, with another $260 billion spent on software-development failures. It is larger than the total U.S. defense budget for that year, $778 billion.

Chatter about the “cost” of AI tosses around even bigger numbers. Perhaps some of the AI pundits should consider the impact of AI failure in the context of IT failure. Frankly I am not confident about AI because of IT failure. The money is one thing, but given the evidence about the prevalence of failure, I am not ready to sing the JP Morgan tune about the sunny side of the street.

The write up adds:

Next to electrical infrastructure, with which IT is increasingly merging into a mutually codependent relationship, the failure of our computing systems is an existential threat to modern society. Frustratingly, the IT community stubbornly fails to learn from prior failures.

And what role does a professional organization play in this little but expensive drama? Are the arrows of accountability pointing at the social context in which the managers work? What about the education of these managers? What about the drive to efficiency? You know. Design the simplest possible solution. Yeah, these contextual components have created a high probability of failure. Will Auntie’s dessert give everyone food poisoning? Probably. Auntie thinks she has washed her hands and baked with sanitation in mind. Yep, great assumption because Auntie is old. Auntie Compute is going on 85 now. Have another cookie.

But here’s the killer statement in the write up:

Not much has worked with any consistency over the past 20 years.

This is like a line in a Jack Benny Show skit.

Several observations:

  1. The article identifies a global, systemic problem.
  2. The existing mechanisms for training people to manage don’t work.
  3. There is no solution.

Have a great Thanksgiving. Have another one of Auntie’s cookies. The two people who got food poisoning last year just had upset tummies. It will just get better. At least that’s what mom says.

Stephen E Arnold, November 27, 2025

Coca-Cola and AI: Things May Not Be Going Better

November 27, 2025

Coca-Cola didn’t learn its lesson from last year’s less-than-good AI-generated Christmas commercial. It repeated the mistake in 2025. Although the technology has improved, the ad still bears all the fakeness of early CGI (when examined in hindsight, of course). Coca-Cola, according to Creative Bloq, did want to redeem itself, so the soft drink company controlled every detail in the ad: “Devastating Graphic Shows Just How Bad The Coca-Cola Christmas Ad Really Is.”

Here’s how one expert viewed it:

“In a post on LinkedIn, the AI consultant Dino Burbidge points out the glaring lack of consistency and continuity in the design of the trucks in the new AI Holidays are Coming ad, which was produced by AI studio Secret Level. At least one of the AI-generated vehicles appears to completely defy physics, putting half of the truck’s payload beyond the last wheel.

Dino suggests that the problem with the ad is not AI per se, but the fact that no human appears to have checked what the AI models generated… or that more worryingly they checked but didn’t care, which is extraordinary when the truck is the main character in the ad.”

It’s been suggested that Coca-Cola used AI to engage in rage bait instead of building a genuinely decent Christmas ad. There was a behind-the-scenes video of how the ad was made, and even that used an AI voiceover.

I liked the different horse drawn wagons. Very consistent.

Whitney Grace, November 27, 2025

Microsoft: Desperate to Be a Leader in the Agentic OS Push, It Decides to Shove, Not Lure, Supporters

November 26, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

I had a friend in high school who liked a girl, Mary B. He smiled at her. He complimented her plaid skirt. He gave her a birthday gift during lunch in the school cafeteria. My reaction to this display was, “Yo, Tommy, you are trying too hard.” But I said nothing. I watched as Mary B. focused her attention on a football player with a C average but comic-book Superman looks. Tommy became known as a person who tried too hard to reach a goal, never realizing no girl wanted to be the focal point of a birthday gift in the school cafeteria with hundreds of students watching. Fail, Tommy.


Thanks, Venice.ai. Good enough, the gold standard today I believe.

I thought about this try-too-hard approach when I read “Windows President Addresses Current State of Windows 11 after AI Backlash.” The source is the on-again, off-again podcasting outfit called Windows Central. Here’s a snippet from the write up which recycles content from X.com. The source of the statement is a person named Pavan Davuluri, who is the Microsoft Windows lead:

The team (and I) take in a ton of feedback. We balance what we see in our product feedback systems with what we hear directly. They don’t always match, but both are important. I’ve read through the comments and see focus on things like reliability, performance, ease of use and more… we care deeply about developers. We know we have work to do on the experience, both on the everyday usability, from inconsistent dialogs to power user experiences. When we meet as a team, we discuss these pain points and others in detail, because we want developers to choose Windows.

Windows Central pointed out that Lead Davuluri demonstrated “leadership” with a bold move: he disabled comments on his X.com post about caring deeply about customers. I like it when Lead Davuluri takes decisive leadership actions that prevent people from providing inputs. Is that why Microsoft ignored focus groups responding to Wi-Fi hardware that did not work and “ribbon” icons instead of words in Office application interfaces? I think I have possibly identified a trend at Microsoft: the aircraft carrier is steaming forward, and it is too bad about the dolphins, fishing boats, and scuba divers. I mean, who cares about this unseen flotsam and jetsam?

Remarkably Windows Central’s write up includes another hint of negativism about Microsoft Windows:

What hasn’t helped in recent years is “Continuous Innovation,” Microsoft’s update delivery strategy that’s designed to keep the OS fresh with new features and changes on a consistent, monthly basis. On paper, it sounds like a good idea, but in practice, updating Windows monthly with new features often causes more headaches than joy for a lot of people. I think most users would prefer one big update at a predictable, certain time of the year, just like how Apple and Google do it.

Several observations if I may offer them as an aged dinobaby:

  1. Google has said it wants to become the agentic operating system. That means Google wants to kill off Microsoft, its applications, and its dreams.
  2. Microsoft knows that it faces competition from a person whom Satya Nadella knows, understands, and absolutely must defeat because his family would make fun of him if he failed. Yep, a man-to-man dust up with annoying users trying to stop the march of technological innovation and revenue. Lead Davuluri has his marching orders; hence, the pablum-tinged non-speak cited in the Windows Central write up.
  3. User needs and government regulation have zero — that’s right, none, nil, zip — chance of altering what these BAIT (big AI tech) outfits will do to win. Buckle up, Tommy. You are going to be rejected again.

Net net: That phrase agentic OS has a ring to it, doesn’t it?

Stephen E Arnold, November 26, 2025

Has Big Tech Taught the EU to Be Flexible?

November 26, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Here’s a question that arose in a lunch meeting today (November 19, 2025): Has Big Tech brought the European Union to heel? What’s your answer?

The “trust” outfit Thomson Reuters published “EU Eases AI, Privacy Rules As Critics Warn of Caving to Big Tech.”


European Union regulators demonstrate their willingness to be flexible. These exercises are performed in the privacy of a conference room in Brussels. The class is taught by those big tech leaders who have demonstrated their ability to chart a course and keep it. Thanks, Venice.ai. How about your interface? Yep, good enough I think.

The write up reported:

The EU Commission’s “Digital Omnibus”, which faces debate and votes from European countries, proposed to delay stricter rules on use of AI in “high-risk” areas until late 2027, ease rules around cookies and enable more use of data.

Ah, backpedaling seems to be the new Zen moment for the European Union.

The “trust” outfit explains why, sort of:

Europe is scrabbling to balance tough rules with not losing more ground in the global tech race, where companies in the United States and Asia are streaking ahead in artificial intelligence and chips.

Several factors are causing this rethink. I am not going to walk the well-worn path called “Privacy Lane.” The reason for the softening is not a warm summer day. The EU is concerned about:

  1. Losing traction in the slippery world of smart software
  2. Failing to cultivate AI start-ups with more than a snowball’s chance of surviving in the Dante’s inferno of the competitive market
  3. Keeping AI whiz kids from bailing out of European mathematics, computer science, and physics research centers for some work in Sillycon Valley or delightful Z Valley (Zhongguancun, China, in case you did not know).

From my vantage point in rural Kentucky, it certainly appears that the European Union is fearful of missing out on either the boom or the bust associated with smart software.

Several observations are warranted:

  1. BAITers are likely to win. (BAIT means Big AI Tech in my lingo.) Why? Money and FOMO
  2. Other governments are likely to adapt to the needs of the BAITers. Why? Money and FOMO
  3. The BAIT outfits will be ruthless and interpret the EU’s new flexibility as weakness.

Net net: Worth watching. What do you think? Money? Fear? A combo?

Stephen E Arnold, November 26, 2025

What Can a Monopoly Type Outfit Do? Move Fast and Break Things Not Yet Broken

November 26, 2025

This essay is the work of a dumb dinobaby. No smart software required.

CNBC published “Google Must Double AI Compute Every 6 Months to Meet Demand, AI Infrastructure Boss Tells Employees.”

How does the math work out? Big numbers result, as well as big power demands, pressure on suppliers, and an incentive to enter hyper-hype marketing mode, I think.
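The arithmetic behind “double every six months” is worth making explicit. Here is a minimal sketch; the time horizons are my own illustrative assumptions, since CNBC reports only the doubling cadence:

```python
# Doubling compute every six months is 4x per year; the multiple after
# t years is 2 ** (t / 0.5). Horizons below are illustrative only.
def capacity_multiple(years: float, doubling_period_years: float = 0.5) -> float:
    """Compute-capacity multiple versus today after `years` of doubling."""
    return 2 ** (years / doubling_period_years)

for years in (1, 2, 3, 5):
    print(f"after {years} yr: {capacity_multiple(years):,.0f}x today's compute")
# after 1 yr: 4x; 2 yr: 16x; 3 yr: 64x; 5 yr: 1,024x
```

Whatever the starting baseline, a 1,024x multiple in five years is where the power, supplier, and hype pressures come from.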


Thanks, Venice.ai. Good enough.

The write up states:

Google’s AI infrastructure boss [maybe a fellow named Amin Vahdat, the leader responsible for Machine Learning, Systems, and Cloud AI?] told employees that the company has to double its compute capacity every six months in order to meet demand for artificial intelligence services.

Whose demand exactly? Commercial enterprises, Google’s other leadership, or people looking for a restaurant in an unfamiliar town?

The write up notes:

Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.

Faced with this robust demand, what differentiates the Google from other monopoly-type companies? CNBC delivers a bang up answer to my question:

Google’s “job is of course to build this infrastructure but it’s not to outspend the competition, necessarily,” Vahdat said. “We’re going to spend a lot,” he said, adding that the real goal is to provide infrastructure that is far “more reliable, more performant and more scalable than what’s available anywhere else.” In addition to infrastructure buildouts, Vahdat said Google bolsters capacity with more efficient models and through its custom silicon. Last week, Google announced the public launch of its seventh generation Tensor Processing Unit called Ironwood, which the company says is nearly 30 times more power efficient than its first Cloud TPU from 2018. Vahdat said the company has a big advantage with DeepMind, which has research on what AI models can look like in future years.

I read this as: spend the same as a competitor, but, because Google is Googley, deliver more reliable, faster, and more scalable AI than the non-Googley competition. Google is focused on efficiency. To me, Google bets that its engineering and programming expertise will give it an unbeatable advantage. The VP of Machine Learning, Systems and Cloud AI does not mention that Google has its magical advertising system and about 85 percent of the global Web search market via its assorted search-centric services. Plus, one must not overlook that the Google is vertically integrated: chips, data centers, data, smart people, money, and smart software.

The write up points out that Google knows there are risks with its strategy. But FOMO is more important than worrying about costs and technology. But what about users? Sure, okay, eyeballs, but I think Google means humanoids who have time to use Google whilst riding in Waymos and hanging out waiting for a job offer to arrive on an Android phone. Google doesn’t need to worry. Plus it can just bump up its investments until competitors are left dying in the desert known as Death Vall-AI.

After being beaten to the draw in the PR battle with Microsoft, the Google thinks it can win the AI jackpot. But what if it fails? No matter. The AI folks at the Google know that the automated advertising system, which collects money at numerous touch points, is for now churning away 24×7. Googzilla may just win because it is sitting on the cash machine of cash machines. Even counterfeiters in Peru and Vietnam cannot match Google’s money-spinning capability.

Is it game over? Will regulators spring into action? Will Google win the race to software smarter than humans? Sure. Even if part of the push to own the next big thing is puffery, the Google is definitely confident that it will prevail, just like Superman and truth, justice, and the American way have. The only hitch in the git-along may be capturing enough electrical service to keep the lights on and the power flowing. Lots of power.

Stephen E Arnold, November 26, 2025

Telegram, Did You Know about the Kiddie Pix Pyramid Scheme?

November 25, 2025

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

The Independent, a newspaper in the UK, published “Leader of South Korea’s Biggest Telegram Sex Abuse Ring Gets Life Sentence.” The subtitle is a snappy one: “Seoul Court Says Kim Nok Wan Committed Crimes of Extreme Brutality.” Note: I will refer to this convicted person as Mr. Wan. The reason is that he will spend time in solitary confinement. In my experience, individuals involved in kiddie crimes are at the bottom of the totem pole among convicted people. If the prison director wants to keep him alive, he will be kept away from the general population. Even though most South Koreans are polite, it is highly likely that he would face a less than friendly greeting when he visits the TV room or exercise area. Therefore, my designation of Mr. Wan reflects the pallor his skin will evidence.

Now to the story:

The main idea is that Mr. Wan signed up for Telegram. He relied on Telegram’s Group and Channel function. He organized a social community dubbed the Vigilantes, a word unlikely to trigger kiddie pix filters. Then he “coerced victims, nearly 150 of them minors, into producing explicit material through blackmail and then distribute the content in online chat rooms.”


Telegram’s leader sets an example for others who want to break rules and be worshiped. Thanks, Venice.ai. Too bad you ignored my request for no facial hair. Good enough, the standard for excellence today I believe.

Mr. Wan’s innovation was to set up what the Independent called “a pyramid hierarchy.” Think of a Herbalife- or OneCoin-type operation. He incorporated an interesting twist. According to the Independent:

He also sent a video of a victim to their father through an accomplice and threatened to release it at their workplace.

Let’s shift from the clever Mr. Wan to Telegram and its public and private Groups and Channels. The French arrested Pavel Durov in August 2024. The French judiciary identified a dozen crimes he allegedly committed; he awaits trial. Since that arrest, Telegram has, based on our monitoring of Telegram, more aggressively blocked a number of users and Groups for violating Telegram’s rules and regulations, such as they are. However, Mr. Wan appears to have slipped through despite Telegram’s filtering methods.

Several observations:

  1. Will Mr. Durov implement content moderation procedures to block, prevent, and remove content like Mr. Wan’s?
  2. Will South Korea take a firm stance toward Telegram’s use in the country?
  3. Will Mr. Durov cave in to Iran’s demands so that Telegram is once again available in that country?
  4. Did Telegram know about Mr. Wan’s activities on the estimable Telegram platform?

Mr. Wan exploited Telegram. Perhaps more forceful actions should be taken by other countries against services which provide a greenhouse for certain types of online activity to flourish? Mr. Durov is a tech bro, and he has been pictured carrying a real (not metaphorical) goat to suggest that he is the greatest of all time.

That perception appears to be at odds with the risk his platform poses to children in my opinion.

Stephen E Arnold, November 25, 2025

LLMs and Creativity: Definitely Not Einstein

November 25, 2025

Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.

I have a vague recollection of a very large lecture room with stadium seating. I think I was at the University of Illinois when I was a high school junior. Part of the oddball program in which I found myself involved a crash course in psychology. I came away from that class with an idea that has lingered in my mind for lo these many decades; to wit: people who are into psychology are often wacky. Consequently I don’t read too much from this esteemed field of study. (I do have some snappy anecdotes about my consulting projects for a psychology magazine, but let’s move on.)


A semi-creative human explains to his robot that it makes up answers and is not creative in a helpful way. Thanks, Venice.ai. Good enough, and I see you are retiring models, including your default. Interesting.

I read in PsyPost this article: “A Mathematical Ceiling Limits Generative AI to Amateur-Level Creativity.” The main idea is that the current approach to smart software does not just produce dead-wrong answers; the algorithms themselves run into a creative wall.

Here’s the alleged reason:

The investigation revealed a fundamental trade-off embedded in the architecture of large language models. For an AI response to be effective, the model must select words that have a high probability of fitting the context. For instance, if the prompt is “The cat sat on the…”, the word “mat” is a highly effective completion because it makes sense and is grammatically correct. However, because “mat” is the most statistically probable ending, it is also the least novel. It is entirely expected. Conversely, if the model were to select a word with a very low probability to increase novelty, the effectiveness would drop. Completing the sentence with “red wrench” or “growling cloud” would be highly unexpected and therefore novel, but it would likely be nonsensical and ineffective. Cropley determined that within the closed system of a large language model, novelty and effectiveness function as inversely related variables. As the system strives to be more effective by choosing probable words, it automatically becomes less novel.

Let me take a whack at translating this quote from PsyPost. LLMs like Google-type systems have to decide: [a] be effective and pick words that fit the context well, like “jelly” after “I ate peanut butter and,” or [b] select infrequent, unexpected words for novelty, which may lead to LLM wackiness. Therefore, effectiveness and novelty work against each other: more of one means less of the other.
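That trade-off can be shown in a toy numerical sketch. This is not Cropley’s actual mathematics; the candidate words and their probabilities are invented for illustration:

```python
# Hypothetical next-word probabilities for "The cat sat on the ..."
candidates = {
    "mat": 0.60,
    "sofa": 0.25,
    "roof": 0.10,
    "wrench": 0.04,
    "growling cloud": 0.01,
}

def effectiveness(p: float) -> float:
    """Treat effectiveness as how probable (context-fitting) a word is."""
    return p

def novelty(p: float) -> float:
    """Treat novelty as a word's improbability."""
    return 1.0 - p

for word, p in candidates.items():
    print(f"{word:15s} effectiveness={effectiveness(p):.2f} novelty={novelty(p):.2f}")
# In this toy model the two scores always sum to 1.0, so pushing one up
# pushes the other down: the inverse relation the study describes.
```

“Mat” scores high on effectiveness and near zero on novelty; “growling cloud” is the reverse. No choice maximizes both, which is the alleged creative ceiling.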

The article references some fancy math and points out:

This comparison suggests that while generative AI can convincingly replicate the work of an average person, it is unable to reach the levels of expert writers, artists, or innovators. The study cites empirical evidence from other researchers showing that AI-generated stories and solutions consistently rank in the 40th to 50th percentile compared to human outputs. These real-world tests support the theoretical conclusion that AI cannot currently bridge the gap to elite [creative] performance.

Before you put your life savings into a giant can’t-lose AI data center investment, you might want to ponder this passage in the PsyPost article:

“For AI to reach expert-level creativity, it would require new architecture capable of generating ideas not tied to past statistical patterns … Until such a paradigm shift occurs in computer science, the evidence indicates that human beings remain the sole source of high-level creativity.”

Several observations:

  1. Today’s best-bet approach is the Google-type LLM. It has creative limits as well as the problems of selling advertising (like old-fashioned Google search) and of outputting incorrect answers
  2. The method itself erects a creative barrier. This is good for humans who can be creative when they are not doom scrolling.
  3. A paradigm shift could make those giant data centers extremely large white elephants which lenders are not very good at herding along.

Net net: I liked the angle of the article. I am not convinced I should drop my teen impression of psychology. I am a dinobaby, and I like land line phones with rotary dials.

Stephen E Arnold, November 26, 2025

Why the BAIT Outfits Are Drag Netting for Users

November 25, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Have you wondered why the BAIT (big AI tech) companies are pumping cash into what looks to many like a cash bonfire? Here’s one answer, and I think it is a reasonably good one. Navigate to “Best Case: We’re in a Bubble. Worst Case: The People Profiting Most Know Exactly What They’re Doing.” I want to highlight several passages and then offer my usually-ignored observations.

image

Thanks, Venice.ai. Good enough, but I am not sure how many AI execs wear old-fashioned camping gear.

I noted this statement:

The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe.

My reaction to this bubble argument is that the BAIT outfits realized, after Microsoft said “AI in Windows,” that a monopoly-type outfit was making a move. Was AI the next oil or railroad play? Then Google did its really professional and carefully planned Code Red (or Yellow, whatever), and the hair-on-fire moment arrived. Now, almost three years later, the hot air from the flaming coifs is equaled by the fumes of incinerating bank notes.

The write up offers this comment:

My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain. The larger the use case, the larger the expense. Most of the larger use cases that I have observed — where AI is leveraged to automate entire workflows, or capture end to end operational data, or replace an entire function — the outlay of work is equal to or greater than the savings. The time we think we’ll save by using AI tends to be spent on doing something else with AI.

The experiences of my team and me support this statement. However, when I go back to the early days of online in the 1970s, the benefits of moving from print research to digital (online) research were tangible. They were quantifiable. Online is where AI lives. As a result, the technology is not global; it is a subset of functions. The more specific the problem, the more likely it is that smart software can help with a segment of the work. The idea that cobbled-together methods based on built-in guesses will be wonderful is just plain crazy. Once one thinks of AI as a utility, it is easier to identify a use case where careful application of the technology will deliver a benefit. I think of AI as a slightly more sophisticated spell checker for writing at the 8th grade level.

The essay points out:

The last ten years have practically been defined by filter bubbles, alternative facts, and weaponized social media — without AI. AI can do all of that better, faster, and with more precision. With a culture-wide degradation of trust in our major global networks, it leaves us vulnerable to lies of all kinds from all kinds of sources and no standard by which to vet the things we see, hear, or read.

Yep, this is a useful way to explain that flows of online information tear down social structures. What’s not referenced, however, is that rebuilding will take a long time. Think about smashing your mom’s favorite knick-knack. Were you capable of making it as good as new? Sure, a few specialists might be able to do a good job, but the time and cost mean that once something is destroyed, that something is gone. The rebuild is at best a close approximation. That’s why people who want to go back to the social structures of the 1950s are chasing a fairy tale.

The essay notes:

When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body.

My view is that the BAIT outfits want to control, dominate, and cash in. Hey, if you have cancer and one company has the alleged cure, are you going to take the drug or just die?

Several observations are warranted:

  1. BAIT outfits want to be the winner and be the only alpha dog. Ruthless behavior will be the norm for these firms.
  2. AI is the next big thing. The idea is that if one wishes it, thinks it, or invests in it, AI will be. My hunch is that the present methodologies are on the path to becoming the equivalent of a dial up modem.
  3. The social consequences of the AI utility added to social media are either ignored or not understood. AI is the catalyst needed to turn one substance into an explosion.

Net net: Good essay. I think the downsides referenced in the essay understate the scope of the challenge.

Stephen E Arnold, November 25, 2025

Pavel Durov Can Travel As Some New Features Dribble from the Core Engineers

November 25, 2025

This essay is the work of a dumb dinobaby. No smart software required.

In November 2025, Telegram announced Cocoon, its AI system. Well, it is not yet revolutionizing writing code for smart contracts. Like Apple, Telegram is a bit late to the AI dog race. But there is hope for the company, which has faced some headwinds. One, blowing from the west, is the criminal trial for which Pavel Durov, the founder of Telegram, waits. Plus, the value of the much-hyped TONcoin, the subject of yet another investigation for financial fancy dancing, is tanking.

What’s the good news? Telegram-watching outfits like FoneArena and PCNews.ru have reported on some recent Telegram innovations. Keep in mind that using Telegram means a new user installs the Messenger app with its mini apps. This is an “everything” app: through the interface one can perform a wide range of actions. You can read Telegram’s own explanation in the firm’s blog.

Fone Arena reports that the Dubai-based virtual company (yeah, go figure that out) has rolled out Live Stories streaming, repeated messages, and gift auctions. Repeated messages will spark some bot developers to build this function into applications; notifications (wanted and unwanted) are useful in certain types of advertising campaigns. The gift auctions are little more than a hybrid of Google ad auctions and eBay applied to the highly volatile, speculative crypto confections Telegram, users, and developers allegedly find of great value.

The Live Stories streaming is more significant. Rolled out in November 2025, Live Stories allows users to broadcast live streams within the Stories service. Viewers can post comments and interact in real time in a live chat. During a stream, viewers may highlight or pin their messages using Telegram Stars, a form of crypto cash; a visible Star counter appears in the corner of the broadcast. Gamification is a big part of the Telegram way. Gambling means crypto transactions, and transactions incur a service charge. A user can kick off a Live Story from a personal account or from Groups or Channels that have unlocked Story posting via boosts; owners have to unlock the Live Story, however. Plus, the new service supports the Real-Time Messaging Protocol (RTMP) for external applications such as OBS and XSplit streaming software.


The interface for Live Stories streaming. Is Telegram angling to kill off Twitch and put a dent in Discord? Will the French judiciary forget to try Pavel Durov for his online service’s behavior? It appears that Mr. Durov and his core engineers think so.

Observations are warranted:

  1. Live Stories is likely to catch the attention of some of the more interesting crypto promoters who make use of Telegram
  2. Telegram’s monitoring service will have to operate in real time, because bad actors will drop in short but interesting video promos for certain illegal or controversial activities; the filters will need to perform better than the Cleveland Browns American football team
  3. The soft hooks to pump up service charges or “gas fees” in the lingo of the digital currency enthusiasts are an important part of gift and auction play. Think hooking users on speculative investments in digital goodies and then scraping off those service charges.

Net net: Will Cocoon make it easier for developers to code complex bots, mini apps, and distributed applications (dApps)? Answer: Not yet. Just go buy a gift on Telegram. P.S. Mr. Zuckerberg, Telegram has aced you again, it seems.

Stephen E Arnold, November 25, 2025
