Telegram Notes: Mama Durova and Her Inner Circle

January 14, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

We filtered our notes for my new book “The Telegram Labyrinth.” Information about Pavel Durov’s mom was sparse. What we had, however, was interesting. The inner circle boils down to her ex-husbands and her three sons. In Part One of a two-part write up, you can get a snapshot of the individuals who facilitated the technical and business plumbing for VKontakte until its sale to Kremlin-approved buyers and then for the Telegram messaging service. You can find part one of this interesting group on my Telegram Notes online service.

Stephen E Arnold, January 14, 2026

The Drivers for 2026

January 14, 2026

The new year is here. Decrypt.co runs down the high and lows in, “Emerge’s 2025 Story of the Year: How the AI Race Fractured the Global Tech Order.” The main events of 2025 revolve around China and the US battling for dominance over the AI market. With only $256,000, the young Chinese startup Deepseek claimed it trained an AI model that matched OpenAI. OpenAI spent over a hundred million to arrive at the same result.

After Deepseek hit the Apple app store, Nvidia lost $600 billion in revenue as the largest drop in market history. Nvidia’s China market share fell from 95% to zero. The Chinese government banned all foreign AI chips from its datacenters, then the US Pentagon signed $10 billion in AI defense contracts.

China and the US are now warring a cold technology war. Deepseek exposed the US’s belief that controlling advanced chips would hinder China. Here’s how the US responded:

“The AI market entered panic mode. Stocks tanked, politicians started polishing their patriotic speeches, analysis exposed the intricacies of what could end up in a bubble, and enthusiasts mocked American models that cost orders of magnitude more than the Chinese counterparts, which were free, cheap and required a fraction of the money and resources to train.

Washington’s response was swift and punishing. The Trump administration expanded export controls throughout the year, banning even downgraded chips designed specifically for the Chinese market. By April, Trump restricted Nvidia from shipping its H20 chips.”

Meanwhile China retaliated:

“The tit-for-tat escalated into full decoupling. A new China’s directive issued in September banned Nvidia, AMD, and Intel chips from any data center receiving government money—a market worth over $100 billion since 2021. Jensen Huang revealed the company’s market share in China had hit "zero, compared to 95% in 2022.”

The US lost a big market for chips and China’s chip manufacturers increased domestic production by 40%. The US then implemented tariffs, then China responded by exerting its control over the physical elements needed to make technology in the strictest rare earth export controls ever. China wants to hit US defenses hard.

The Pentagon then invested in MP Materials with a cool $400 million. Trump also signed the Genesis Mission executive order, a Department of Energy-led AI initiative that the Trump administration compared to the Manhattan Project. Then China did…etc, etc.

Net net: Hype and hostility are the fuels for the months ahead. Hey, that’s positive Decrypt.

Whitney Grace, January 14, 2026

Security Chaos: So We Just Live with Failure?

January 14, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read a write up that baffled me. The article appeared in what I consider a content marketing or pay to play publication. I may be wrong, but the content usually hits me as an infomercial. The story arresting my attention this morning (January 13, 2026) is “The 11 Runtime Attacks Breaking AI Security — And How CISOs Are Stopping Them.” I expected a how to. What did the write up deliver? Confusion and a question, “So we just give up?”

The article contains this cheerful statement from a consulting firm. Yellow lights flashed. I read this:

Gartner’s research puts it bluntly: “Businesses will embrace generative AI, regardless of security.” The firm found 89% of business technologists would bypass cybersecurity guidance to meet a business objective. Shadow AI isn’t a risk — it’s a certainty.

Does this mean that AI takes precedence over security?

The article spells out 11 different threats and provides solutions to each. The logic of the “stopping runtime attacks” with methods now available struck me as a remarkable suggestion.

image

The mice are the bad actors. Notice that the capable security system is now unable to deal with the little creatures. The realtime threats overwhelmed the expensive much hyped-cyber cat. Thanks, Venice.ai. Good enough.

Let’s look at three of the 11 threats and their solutions. Please, read the entire write up and make you own decision about the other eight problems presented and allegedly solved.

The first threat is called “multi turn crescendo attacks.” I had no idea what this meant when I read the phrase. That’s okay. I am a dinobaby and a stupid one at that. It turns out that this fancy phrase means that a bad actor plans prompts that work incrementally. The AI system responds. Then responds to another weaponized prompt. Over a series of prompts, the bad actor gets what he or she wants out of the system. ChatGPT and Gemini are vulnerable to this orchestrated prompt sequence. What’s the fix? I quote:

Stateful context tracking, maintaining conversation history, and flagging escalation patterns.

Really? I am not sure that LLM outfits or licensees have the tools and the technical resources to implement these linked functions. Furthermore, in the cat and mouse approach to security, the mice are many. The find and react approach is not congruent with runtime threats.

Another threat is synthetic identify fraud. The idea is that AI creates life like humans, statements, and supporting materials. For me, synthetic identities are phishing attacks on steroids. People are fooled by voice, video and voice, email, and SMS attacks. Some companies hire people who are not people because AI technology advances in real time. How does one fix this? The solution is, and I quote:

Multi-factor verification incorporating behavioral signals beyond static identity attributes, plus anomaly detection trained on synthetic identity patterns.

But when AI synthetic identity technology improves how will today’s solutions deal with the new spin from bad actors? Answer: They have not, cannot, and will not with the present solutions.

The last threat I will highlight is obfuscation attacks or fiddling with AI prompts. Developers of LLMs are in a cat and mouse game. Right now the mice are winning for one simple reason: The wizards developing these systems don’t have the perspective of bad actors. LLM developers just want to ship and slap on fixes that stop a discovered or exposed attack vector. What’s the fix? The solution, and I quote, is:

Wrap retrieved data in delimiters, instructing the model to treat content as data only. Strip control tokens from vector database chunks before they enter the context window.

How does this work when new attacks occur and are discovered? Not very well because the burden falls upon the outfit using the LLM. Do licensees have appropriate technical resources to “wrap retrieved data in delimiters” when the exploit may just work but no one is exactly sure why. Who knew that prompts in iambic pentameter or gibberish with embedded prompts ignore “guardrails”? The realtime is the killer. Licensees are not equipped to react and I am not confident smart AI cyber security systems are either.

Net net: Amazon Web Services will deal with these threats. Believe it or not. (I don’t believe it, but your mileage may vary.)

Stephen E Arnold, January 14, 2026

Apple Google Prediction: Get Real, Please

January 13, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Prediction is a risky business. I read “No, Google Gemini Will Not Be Taking Over Your iPhone, Apple Intelligence, or Siri.” The write up asserts:

Apple is licensing a Google Gemini model to help make Apple Foundation Models better. The deal isn’t a one-for-one swap of Apple Foundation Models for Gemini ones, but instead a system that will let Apple keep using its proprietary models while providing zero data to Google.

Yes, the check is in the mail. I will jump on that right now. Let’s have lunch.

image

Two giant creatures find joy in their deepening respect and love for one another. Will these besties step on the ants and grass under their paws? Will they leave high-value information on the shelf? What a beautiful relationship! Will these two get married? Thanks, Venice.ai. Good enough.

Each of these breezy statements sparks a chuckle in those who have heard direct statements and know that follow through is unlikely.

The article says:

Gemini is not being weaved into Apple’s operating systems. Instead, everything will remain Apple Foundation Models, but Gemini will be the "foundation" of that.

Yep, absolutely. The write up presents this interesting assertion:

To reiterate: everything the end user interacts with will be Apple technology, hosted on Apple-controlled server hardware, or on-device and not seen by Apple or anybody else at all. Period.

Plus, Apple is a leader in smart software. Here’s the article’s presentation of this interesting idea:

Apple has been a dominant force in artificial intelligence development, regardless of what the headlines and doom mongers might say. While Apple didn’t rush out a chatbot or claim its technology could cause an apocalypse, its work in the space has been clearly industry leading. The biggest problem so far is that the only consumer-facing AI features from Apple have been lackluster and got a tepid consumer response. Everything else, the research, the underlying technology, the hardware itself, is industry leading.

Okay. Several observations:

  1. Apple and Google have achieved significant market share. A basic rule of online is that efficiency drives the logic of consolidation. From my point of view, we now have two big outfits, their markets, their products, and their software getting up close and personal.
  2. Apple and Google may not want to hook up, but the financial upside is irresistible. Money is important.
  3. Apple, like Telegram, is taking time to figure out how to play the AI game. The approach is positioned as a smart management move. Why not figure out how to keep those users within the friendly confines of two great companies? The connection means that other companies just have to be more innovative.

Net net: When information flows through online systems, metadata about those actions presents an opportunity to learn more about what users and customers want. That’s the rationale for leveraging the information flows. Words may not matter. Money, data, and control do.

Stephen E Arnold, January 13, 2026

Pavel Durov Outputs His Philosophy That GOATs Do Not Worry.

January 13, 2026

The creator behind the anonymous messaging service Telegram Pavel Durov was charged in 2024 with allegations of drug trafficking, child pornography, organized fraud, and money laundering. Durov is a private individual. He left Russia in 2014 to preserve his freedom of expressive. He provided Le Point with an exclusive interview: “Pavel Durov On His Arrest In France, Macron, Russia, The FBI — And The Fight For Telegram.”

When he was questioned about bad actors using Telegram for nefarious purposes, Durov said that doesn’t make him or his associates who run the platform criminals. He also asserts that the allegations won’t stick, that the French police didn’t correctly follow international procedures, and Telegram how to show them how to proceed.

Durov was questioned if he had any ties with Russia and was close with Putin. He claims that he only met Putin once when he was head of the Russian version of Facebook. He was told that he had to comply with Russian authorities or face the consequences. Durov also denies that he works with Russian authorities and that he only travels to Russia to visit his family.

After sharing his opinion about the current state of technology, Donald Trump, and the world at large. Durov was asked about his competitors. He said that WhatsApp copies everything Telegram does, that Zuckerberg needs more imagination (despite having a large amount of respect for him), and he admires Signal messaging.

He’s had many offers to buy Telegram, including Google which offered him $1 billion. He declines and states:

“It’s not a question of price, Telegram is simply not for sale.”

He continued that he’s still Telegram’s sole shareholder, because he wants to guarantee Telegram’s independence. He doesn’t want to lose control and thusly freedom.

Durov also detailed that he doesn’t have any regrets about Telegram’s development.   He keeps a small “core” team of fifty people based in Dubai, because small teams move faster.   His brother Nikolai, whom he describes as a genius with two Ph.D.s, is experimenting with AI. Surprisingly he suggests Nikolai no longer works on Telegram with Durov. When queried about what AI will do for the future and jobs, he said.

“We are experiencing unprecedented technological acceleration. For a teenager, adapting is natural. But for experienced professionals, like lawyers or doctors who earn high salaries, the transition will be brutal. Their perceived value in the market could diminish, even if they are excellent. Yes, jobs will disappear. But history shows that others will appear. What matters is the wealth created. Living like a king without having to work like a slave is a form of progress. And as long as we want to create, to bring something to society, there will be a place for everyone.”

Telegram is also money pit. That’s true. The firm’s most recent financial report reveals a loss of more than $220 million US dollars in 2025. But Pavel Durov is a GOAT, or, rather, the greatest of all time in Russia. This is similar to Google’s pronouncements about its role in today’s world. However, Pavel Durov is heading to trial in France for a line up of charges related to some interesting crimes. Is he worried? Nah. GOATs don’t worry.

Whitney Grace, January 13, 2026

So What Smart Software Is Doing the Coding for Lagging Googlers?

January 13, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read “Google Programmer Claims AI Solved a Problem That Took Human Coders a Year.” I assume that I am supposed to divine that I should fill in “to crack,” “to solve,” or “to develop”? Furthermore, I don’t know if the information in the write up is accurate or if it is a bit of fluff devised by an art history major who got a job with a PR firm supporting Google.

image

I like the way a Googler uses Anthropic to outperform Googlers (I think). Anyway, thanks, ChatGPT, good enough.

The company’s commitment to praise its AI technology is notable. Other AI firms toss out some baloney before their “leadership” has a meeting with angry investors. Google, on the other hand, pumps out toots and confetti with appalling regularity.

This particular write up states:

Paul [a person with inside knowledge about Google’s AI coding system] passed on secondhand knowledge from "a Principal Engineer at Google [that] Claude Code matched 1 year of team output in 1 hour."

Okay, that’s about as unsupported an assertion I have seen this morning. The write up continues:

San Francisco-based programmer Jaana Dogan chimed in, outing herself as the Google engineer cited by Paul. "We have been trying to build distributed agent orchestrators at Google since last year," she commented. "There are various options, not everyone is aligned … I gave Claude Code a description of the problem, it generated what we built last year in an hour."

So the “anonymous” programmer is Jaana Dogan. She did not use Opal, Google’s own smart software. Ms. Dogan used the coding tools from Anthropic? Is this what the cited passage is telling me?

Let’s think about these statements for a moment:

  1. Perhaps Google’s coders were doom scrolling, playing Foosball, or thinking about how they could land a huge salary at Meta now that AI staff are allegedly jump off the good ship Zuck Up? Therefore, smart software could indeed produce code that took the Googlers one year to produce. Googlers are not necessarily productive unless it is in the PR department or the legal department.
  2. Is Google’s own coding capability so lousy that Googlers armed with Opal and other Googley smart software could not complete a project with software Google is pitching as the greatest thing since Google landed a Nobel Prize?
  3. Is the Anthropic software that much better than Google’s or Microsoft’s smart coding system? My experience is that none of these systems are that different from one another. In fact, I am not sure that new releases are much better than the systems we have tested over the last 12 months.

The larger question is, “Why does Google have to promote its approach to AI so relentlessly?” Why is Google using another firm’s smart software and presenting its use in a confusing way?

My answer to both these questions is, “Google has a big time inferiority complex. It is as if the leadership of Google believes that grandma is standing behind them when they were 12 years old. When attention flags doing homework, grandma bats the family loser with her open palm. “Do better. Concentrate,” she snarls at the hapless student.

Thus, PR emanates PR that seems to be about its own capabilities and staff while promoting a smart coding tool from another firm. What’s clear is that the need for PR coverage outpaces common sense and planning. Google is trying hard to convince people that AI is the greatest thing since ping pong tables at the office.

Stephen E Arnold, January 13, 2025

Fortune Magazine Is Not Hiding Its AI Skepticism

January 12, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Fortune Magazine appears to be less fearful of expressing its AI skepticism. However, instead of pointing out that the multiple cash fueled dumpster fires continue to burn, Fortune Magazine focuses on an alleged knock on effect of smart software.

AI Layoffs Are Looking More and More Like Corporate Fiction That’s Masking a Darker Reality, Oxford Economics Suggests” uses a stalking horse to deliver the news. The write up reports:

“firms don’t appear to be replacing workers with AI on a significant scale,” suggesting instead that companies may be using the technology as a cover for routine headcount reductions.

The idea seems to be a financially acceptable way to dump people and get the uplift by joining the front runners in smart use of artificial intelligence.

Fortune’s story blows away this smoke screen.

image

Are you kidding, you geezer? AI is now running the show until the part-time, sub-minimum wage folks show up at 1 am. Thanks, Venice.ai. Good enough.

The write up says:

The primary motivation for this rebranding of job cuts appears to be investor relations. The report notes that attributing staff reductions to AI adoption “conveys a more positive message to investors” than admitting to traditional business failures, such as weak consumer demand or “excessive hiring in the past.” By framing layoffs as a technological pivot, companies can present themselves as forward-thinking innovators rather than businesses struggling with cyclical downturns.

The write points out:

While AI was cited as the reason for nearly 55,000 U.S. job cuts in the first 11 months of 2025—accounting for over 75% of all AI-related cuts reported since 2023—this figure represents a mere 4.5% of total reported job losses…. AI-related job losses are still relatively limited.

True to form, the Fortune article tries hard to not ruffle feathers. The write up says:

recent data from the Bureau of Labor Statistics confirms that the “low-hire, low-fire” labor market is morphing into a “jobless expansion,” KPMG chief economist Diane Swonk previously told Fortune‘s Eva Roytburg.

Yep, that’s clear.

Several observations are warranted:

  1. Companies are dumping people to cut costs. We have noticed this across industries from outfits like Walgreen’s to Fancy Dan operations like American Airlines.
  2. AI is becoming an easy way to herd people over 40 into AI training classes and using class performance to winnow the herd. If one needs to replace an actual human, check out India’s work-from-Bangalore options.
  3. The smoke screen is dissipating. What will the excuse be then?

Net net: The true believers in AI created a work related effect that few want to talk about openly. That’s why we get the “low hire, low fire” gibberish. Nice work, AI.

Stephen E Arnold, January 12, 2026

Telegram Has a Plumber and a Pinger But Few Know

January 12, 2026

While reviewing research notes, the author of the “Telegram Labyrinth” spotted an interesting connection between Telegram and a firm with links to the Kremlin. A now-deleted Reuters report alleged that Telegram utilizes infrastructure linked to the FSB. The provider is Global Network Management (GNM), owned by Vladimir Vedeneev, a former Russian Space Force member with a Russian security clearance. Vedeneev’s relationship with Pavel Durov dates back to the VKontakte era, and he reportedly provided the networking foundation for Telegram in 2013.

Vedeneev maintains access to Telegram’s servers and possessed signatory authority as both CEO and CFO for Telegram. He also controls Electrotelecom, a firm servicing Russian security agencies. While Durov promises user privacy, Vedeneev’s firms provides a point of access. If exercised, Russia’s government agencies could legally harvest metadata via deep packet inspection. Registered in Antigua and Barbuda, the legal set up of GNM provides a possible work-around for EU and US sanctions on some Telegram-centric activities. GNM operates with opaque points of presence globally, raising questions about its partnerships with Google, Cloudflare, and others.

Stephen E Arnold hypothesizes that Telegram and GNM are tightly coupled, with Durov championing privacy while his partner facilitates state surveillance access. Political backing likely protects this classic “man in the middle” operation possible. If you want to read the complete article in Telegram Notes, click this link.

Kent Maxwell, January 12, 2026

Dell Reveals the Future of AI for Itself

January 12, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I was flipping through my Russian technology news feed and spotted an interesting story. “Consumers Don’t Buy Devices Because They Have AI. Dell Has Admitted That AI in Products Can Confuse Customers.” Yep, Russian technology media pays attention to AI signals from big outfits.

The write up states:

the company admits that at least for now this is not particularly important for users.

The Russian article then quotes from a Dell source:

You’ll notice one thing: we didn’t prioritize artificial intelligence as our primary goal when developing our products. So there’s been some shift from a year ago when we focused entirely on AI PCs. We are very focused on realizing the power of artificial intelligence in devices — in fact, all the devices we announce use a neural processor —, but over the course of this year we have realized that consumers are not buying devices because of the presence of AI. In fact, I think AI is more likely to confuse them than to help them understand a particular outcome.

image

Good enough, ChatGPT.

The chatter about an AI bubble notwithstanding, this Russian news item probes an important issue. AI may not be a revolution. It is “confusing” to some computer customers. The true believers are the ones writing checks to fund the engineering and plumbing required to allow an inanimate machine behave like a human. The word “confusing” is an important one. The messages about smart software don’t make sense to some people.

Dell, to its credit, listened to its customers and changed its approach. The AI goodness is in the device, but the gizmo is presented as a computer that a user can, confused or not, use to check email, write a message, and watch doom scroll by.

Let’s look at this from a different viewpoint. Google and Microsoft want to create AI operating systems. The decade old or older bits of software plumbing have to be upgraded. If the future is smart software, then the operating systems have to be built on smart software. To the believers, the need to AI everything is logical and obvious.

If we look at it from the point of view of a typical Dell customer, the AI jabber is confusing. What’s confusing mean? To me, confusing means unclear. AI marketing is definitely that. I am not sure I understand how typing a query and getting a response is not presented as “search and retrieval.” AI is also bewildering. I have watched a handful of YouTube AI explainer videos. I think I understand, but the reality for me is that AI seems to be a collection of methods developed over the last couple hundred years integrated to index text and output probabilistic sequences. Some make sense to an eighth grader wanting help with a 200 word paragraph about the Lincoln-Douglas debates. However, it wou8ld be difficult for the same kid to get information about Honest Abe’s sleeping with a guy for years. Yep, baffling. Explaining to a judge why an AI system made up case citations to legal actions that did  not take place is not just mystifying. The use of AI costs the lawyer money, credibility, and possibly the law license. Yep, puzzling.

Thus, an AI enabled Dell laptop doesn’t make sense to some consumers. Their child needs a laptop to do homework. What’s with the inclusion of AI. AI is available everywhere. Why double up on AI? Dell sidesteps the issue by focusing on its computers as computers.

Several observations are warranted:

  1. The AI shift at Dell is considered “news” in Russia. In the US, I am not sure how many people will get the Dell message. Maybe someone on TikTok or Reels will cover the Dell story?
  2. The Google- and Microsoft-type companies don’t care about Dell. These outfits are inventing the future. The firms are spending billions and now dumping staff to help pay for the vision of their visionaries. If it doesn’t work, these people will join the lawyers caught using made up information working as servers at the local Rooster’s chicken joint.
  3. The idea that “if they think it, the ‘it’ will become reality is fueling the AI boom. Stoked on the sci-fi novels consumed when high school students, the wizards in the AI game are convinced they can deliver smart software. Conviction is useful. However, a failure to deliver will be interesting to watch… from a distance.

Net net: Score one for Dell. No points for Google or Microsoft. Consumers are in the negative column. They are confused and there is one thing that the US economy abhors is a bewildered person. Believe or be gone.

Stephen E Arnold, January 12, 2026

Just Train AI with AI Output: What Could Go Wrong?

January 9, 2026

AI is still dumb technology and needs to be trained to improve. Unfortunately AI training datasets are limited. Patronus AI claims it has a solution to training problem and the news is told on VentureBeat in the article, “AI Agents Fail 63% Of The Time On Complex Tasks. Patronus AI Says Its New ‘Living’ Training Worlds Can Fix That.” Patronus AI is a new AI startup with backing from Datadog and Lightspeed Venture Partners.

The company’s newest project is called “Generative Simulators” that creates simulated environments that continuously generate new challenges for AI algorithms to evaluate. AI Patronus could potentially be a critical tool for the AI industry. Research discovered that AI algorithms with a 1% error rate per step compound a 63% chance of failure.

Patronus AI explains that traditional datasets and measurements are like standardized tests: “they measure specific capabilities at a fixed point in time but struggle to capture the messy, unpredictable nature of real work.” The new Generative Simulators produces environments and assignments that adapt based on how the algorithm responds:

“The technology builds on reinforcement learning — an approach where AI systems learn through trial and error, receiving rewards for correct actions and penalties for mistakes. Reinforcement learning is an approach where AI systems learn to make optimal decisions by receiving rewards or penalties for their actions, improving through trial and error. RL can help agents improve, but it typically requires developers to extensively rewrite their code. This discourages adoption, even though the data these agents generate could significantly boost performance through RL training.”

Patronus AI said that training has improved AI algorithm’s task completion by 10-20%. The company also says that Big Tech can’t build all of their AI training tools in house because the amount of specialized training needed for niche fields is infinite. It’s a natural place for third party companies like Patronus AI.

Patronus AI founds its niche and is cashing in! But that failure rate? No problem.

Whitney Grace, January 9, 2026

Next Page »

  • Archives

  • Recent Posts

  • Meta