Indiscriminate Scanning: Hello, Telegram, This Is for You
July 29, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I read a version of the message the European Union is sending to Pavel Durov. This super special human is awaiting trial in France for a couple of minor infractions. Yep, minor as in CSAM. Oh, the French judiciary tossed in a few other crimes.
The EU, following France’s long overdue action, is mustering some oomph, according to “The EU Could Be Scanning Your Chats by October 2025 – Here’s Everything We Know”:
Denmark kicked off its EU Presidency on July 1, 2025, and, among its first actions, lawmakers swiftly reintroduced the controversial child sexual abuse (CSAM) scanning bill to the top of the agenda. Having been deemed by critics as Chat Control, the bill aims to introduce new obligations for all messaging services operating in Europe to scan users’ chats, even if they’re encrypted.
After a three-year hiatus, the EU is in “could” and “try” mode. The write up says:
As per its first version, all messaging software providers would be required to perform indiscriminate scanning of private messages to look for CSAM – so-called ‘client-side scanning’. The proposal was met with a strong backlash, and the European Court of Human Rights ended up banning all legal efforts to weaken encryption of secure communications in Europe.
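For readers who skim past the jargon: “client-side scanning” means the check runs on the user’s device before any encryption happens. Here is a minimal, hypothetical sketch of the idea in Python. The blocklist value is a placeholder, and no vendor’s actual matching scheme (which typically relies on perceptual hashing of known material, not plain SHA-256) is represented here.

```python
import hashlib

# Purely illustrative blocklist of content fingerprints (a made-up placeholder value).
# Real proposals rely on perceptual hashes of known material, not plain SHA-256.
BLOCKLIST = {"0" * 64}

def scan_before_send(attachment: bytes) -> bool:
    """Return True if the attachment may be encrypted and sent."""
    digest = hashlib.sha256(attachment).hexdigest()
    return digest not in BLOCKLIST

if __name__ == "__main__":
    payload = b"holiday photos"
    if scan_before_send(payload):
        print("clean: encrypt and send")
    else:
        print("flagged: held back before encryption ever happens")
```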
Where does Telegram fit into this “could” initiative?
Telegram semi-encrypts. The idea is that the sender’s Telegram Messenger app encrypts a message, adds routing data, and whisks the contents to the recipient… sort of. In reality, a Telegram command-and-control node receives the encrypted message, the header, and assorted metadata, and then decrypts the message inside Telegram’s own infrastructure before relaying it. Why? Good question.
Telegram does support complete end-to-end encryption, but only for its optional Secret Chats. In that mode the command-and-control center just hands off the encrypted message. There is no slam-dunk information available about whether Telegram sucks up the metadata for these E2EE messages, which may contain text, rich media, or other content objects.
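A toy model may make the difference concrete. This is not Telegram’s MTProto; it simply assumes Fernet symmetric keys stand in for the real key handling, to show why a server-held key matters:

```python
from cryptography.fernet import Fernet

# Default "cloud chat" model: the relay server holds the key, so it can decrypt.
server_key = Fernet.generate_key()
server = Fernet(server_key)
cloud_ciphertext = server.encrypt(b"meet at 8")   # client encrypts to the server
print(server.decrypt(cloud_ciphertext))           # the server can read it before relaying

# "Secret chat" model: the key lives only on the two devices; the server just relays bytes.
endpoint_key = Fernet.generate_key()              # shared by sender and recipient only
sender = Fernet(endpoint_key)
recipient = Fernet(endpoint_key)
e2ee_ciphertext = sender.encrypt(b"meet at 8")
# The relay sees e2ee_ciphertext plus routing metadata, but holds no key to decrypt it.
print(recipient.decrypt(e2ee_ciphertext))
```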
How will Telegram interpret this “could” move? My view is that the French judiciary may have some ways to realign Mr. Durov’s thinking. I understand that France has some lovely prison facilities, like those at the French Foreign Legion headquarters and the salubrious quarters in Africa. I would not suggest these are five-star-hotel-type detainment structures, but Mr. Durov’s attorneys may convince him to reconsider his position as a French citizen under the watchful eye of the French legal system.
Stephen E Arnold, July 29, 2025
Microsoft: Knee Jerk Management Enigma
July 29, 2025
This blog post is the work of an authentic dinobaby. Sorry. Not even smart software can help this reptilian thinker.
I read “In New Memo, Microsoft CEO Addresses Enigma of Layoffs Amid Record Profits and AI Investments.” The write up says in a very NPR-like soft voice:
“This is the enigma of success in an industry that has no franchise value,” he wrote. “Progress isn’t linear. It’s dynamic, sometimes dissonant, and always demanding. But it’s also a new opportunity for us to shape, lead through, and have greater impact than ever before.” The memo represents Nadella’s most direct attempt yet to reconcile the fundamental contradictions facing Microsoft and many other tech companies as they adjust to the AI economy. Microsoft, in particular, has been grappling with employee discontent and internal questions about its culture following multiple rounds of layoffs.
Discontent. Maybe the summer of discontent. No, it’s a reshaping or re-invention of a play by William Shakespeare (allegedly) which borrows from Chaucer’s Troilus and Criseyde, itself drawn from Boccaccio’s antecedent, with a bit more emphasis on pettiness and corruption to add spice. Willie’s Troilus and Cressida makes the “love affair” more ironic.
Ah, the Microsoft drama. Let’s recap: [a] Troilus and Cressida’s two kids, Satya and Sam; [b] the security woes of SharePoint (who knew? eh, everyone); [c] buying green credits, or how much manure does a gondola rail car hold? [d] Copilot (are the fuel switches on? Nope); and [e] layoffs.
What’s the description of these issues? An enigma. This is a word popping up frequently, it seems. Here is how Venice, a smart software system, defines an enigma:
The word “enigma” derives from the Greek “ainigma” (meaning “riddle” or “dark saying”), which itself stems from the verb “aigin” (“to speak darkly” or “to speak in riddles”). It entered Latin as “aenigma”, then evolved into Old French as “énigme” before being adopted into English in the 16th century. The term originally referred to a cryptic or allegorical statement requiring interpretation, later broadening to describe any mysterious, puzzling, or inexplicable person or thing. A notable modern example is the Enigma machine, a cipher device used in World War II, named for its perceived impenetrability. The shift from “riddle” to “mystery” reflects its linguistic journey through metaphorical extension.
Okay, let’s work through this definition.
- Troilus and Cressida or Satya and Sam. We have a tortured relationship. A bit of a war among the AI leaders, and a bit of the collapse of moral certainty. The play seems to be going nowhere. Okay, that fits.
- Security woes. Yep, the cipher device in World War II. Its security or lack of it contributed to a number of unpleasant outcomes for a certain nation state associated with beer and Rome’s failure to subjugate some folks.
- Manure. This seems to be a metaphorical extension. Paying “green” or money for excrement is a remarkable image. Enough said.
- Fuel switches and the subsequent crash, explosion, and death of some hapless PowerPoint users. This lines up with “puzzling.” How did those Word paragraphs just flip around? I didn’t do it. Does anyone know why? Of course not.
- Layoffs. Ah, an allegorical statement. Find your future elsewhere. There is a demand for life coaches, LinkedIn profile consultants, and lawn service workers.
Microsoft is indeed speaking darkly. The billions burned in the AI push have clouded the atmosphere in Softie Land. When the smoke clears, what will remain? My thought is that items [a] to [e] mentioned above are going to leave some obvious environmental alterations. Yep, a dark saying, because knee-jerk reactions are good enough.
Stephen E Arnold, July 29, 2025
An Author Who Will Not Be Hired by an AI Outfit. Period.
July 29, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I read an article / essay titled in English “The Bewildering Phenomenon of Declining Quality.” I found the examples in the article interesting. A couple, like the poke at “fast fashion,” have become tropes. Others, like the comments about customer service today, were insightful. Here’s an example of a comment I noted:
José Francisco Rodríguez, president of the Spanish Association of Customer Relations Experts, admits that a lack of digital skills can be particularly frustrating for older adults, who perceive that the quality of customer service has deteriorated due to automation. However, Rodríguez argues that, generally speaking, automation does improve customer service. Furthermore, he strongly rejects the idea that companies are seeking to cut costs with this technology: “Artificial intelligence does not save money or personnel,” he states. “The initial investment in technology is extremely high, and the benefits remain practically the same. We have not detected any job losses in the sector either.”
I know that the motivation for dumping humans from customer support comes from [a] the extra work required to manage humans; [b] the escalating costs of health care and other “benefits”; and [c] the black hole of costs that burn cash because customers want help, returns, and special treatment. Software robots are the answer.
The write up’s comments about smart software are also interesting. Here’s an example of a passage I circled:
A 2020 analysis by Fakespot of 720 million Amazon reviews revealed that approximately 42% were unreliable or fake. This means that almost half of the reviews we consult before purchasing a product online may have been generated by robots, whose purpose is to either encourage or discourage purchases, depending on who programmed them. Artificial intelligence itself could deteriorate if no action is taken. In 2024, bot activity accounted for almost half of internet traffic. This poses a serious problem: language models are trained with data pulled from the web. When these models begin to be fed with information they themselves have generated, it leads to a so-called “model collapse.”
What surprised me is the problem, specifically:
a truly good product contributes something useful to society. It’s linked to ethics, effort, and commitment.
One question: How does one inculcate these words into societal behavior?
One possible answer: Skynet.
Stephen E Arnold, July 29, 2025
Telegram: Now in the USA and Armed with Crypto Services
July 28, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Telegram in the US is so yesterday. The company is 13 years old. The founder is awaiting trial in France for charges related to a dozen or more French laws and regulations. The TONcoin has been in the lower tier of cryptocurrencies for more than a year. The firm released yet another programming language in the hopes of luring more developers to its platform.
But two allegedly accurate facts have surfaced about this firm founded by Pavel Durov, the fellow who created the “Russian version of Facebook.” I spotted these in an online publication called TechCrunch. “Telegram’s Crypto Wallet Launches in the US” reports:
Telegram is expanding access to its crypto wallet for its 87 million users in the U.S.
The article includes an assertion that 100 million Telegram Messenger users have activated their crypto wallets. Furthermore, these 100 million people execute 334,000 transactions on the Nikolai Durov-designed TON blockchain every 24 hours. That works out to about 13,900 per hour, roughly 232 per minute, or about four per second. No benchmark data from other blockchain services are included in the write up.
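The quoted figure is easy to sanity-check with back-of-the-envelope arithmetic, using only the 334,000-per-day number above:

```python
transactions_per_day = 334_000
per_hour = transactions_per_day / 24          # about 13,917
per_minute = per_hour / 60                    # about 232
per_second = transactions_per_day / 86_400    # about 3.9
print(f"{per_hour:,.0f}/hour, {per_minute:,.0f}/minute, {per_second:.1f}/second")
```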
My team and I estimated that the Telegram Messenger eGame “Hamster Kombat” attracted about 300 million Telegram users. The “points” in that game were HMSTR crypto tokens. STAR tokens, a Telegram-invented device, were also involved. In order to “cash in” these points for other crypto, the Messenger wallet may have been required for some of these “moves.”
The numbers, like most Telegram user data, are soft and difficult to verify.
Several observations:
- The TON Foundation indicated at the Gateway Conference in 2024 that there were about five million users of Telegram in the US in 2023. The jump to 87 million users is notable and is either [a] an indication that Telegram Messenger is a bigger player in the US than believed or [b] a sign that Telegram and the TON Foundation are exaggerating their data.
- If Telegram does have more than one billion users, the active use of the Telegram crypto wallet is a rather dismal 10 percent of the user base. With Telegram working to build out its crypto services, the “success” of the firm is either [a] disappointing or [b] another bogus number.
- The eGame Hamster Kombat drew three times the number of Telegram users than the Messenger crypto wallet. This means that either [a] the crypto “play” mounted by Telegram after the US SEC investigation in 2020 and 2021 is moving at a snail’s pace or [b] the reported figures are incorrect.
Net net: Verifiable data about Telegram, its proxies, and its business activities are fuzzy. One fact is verifiable: Pavel Durov, the “owner” of Telegram Company, is awaiting trial in France for a number of serious charges.
Stephen E Arnold, July 28, 2025
AI, Math, and Cognitive Dissonance
July 28, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
AI marketers will have to spend some time positioning their smart software as great tools for solving mathematical problems. “Not Even Bronze: Evaluating LLMs on 2025 International Math Olympiad” reports that words about prowess are disconnected from performance. The write up says:
The best-performing model is Gemini 2.5 Pro, achieving a score of 31% (13 points), which is well below the 19/42 score necessary for a bronze medal. Other models lagged significantly behind, with Grok-4 and Deepseek-R1 in particular underperforming relative to their earlier results on other MathArena benchmarks.
The write up points out, possibly to call attention to the slight disconnect between the marketing of Google AI and its performance in this contest:
As mentioned above, Gemini 2.5 Pro achieved the highest score with an average of 31% (13 points). While this may seem low, especially considering the $400 spent on generating just 24 answers, it nonetheless represents a strong performance given the extreme difficulty of the IMO. However, these 13 points are not enough for a bronze medal (19/42). In contrast, other models trail significantly behind and we can already safely say that none of them will achieve the bronze medal. Full results are available on our leaderboard, where everyone can explore and analyze individual responses and judge feedback in detail.
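The percentages in the quoted passage check out; here is a quick calculation using only the point totals cited:

```python
total_points = 42
gemini_points = 13
bronze_cutoff = 19

print(f"Gemini 2.5 Pro: {gemini_points / total_points:.0%}")   # 31%
print(f"Bronze cutoff:  {bronze_cutoff / total_points:.0%}")   # 45%
print(f"Shortfall: {bronze_cutoff - gemini_points} points")    # 6 points
```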
The fact that this is just one “competition,” the lousy performance of the high-profile models, and the complex process required to assess performance make it easy to ignore this result.
Let’s just assume that it is close enough for horseshoes and good enough. With that assumption in mind, do you want smart software making decisions about what information you can access, the medical prognosis for your nine-year-old child, or your driver’s license renewal?
Now, let’s consider this write up, fragmented across Tweets: [Thread] An OpenAI researcher says the company’s latest experimental reasoning LLM achieved gold medal-level performance on the 2025 International Math Olympiad. The little posts are perfect for a person familiar with TikTok-type and Twitter-like content. Not me. The main idea is that in the same competition, OpenAI earned “gold medal-level performance.”
The $64 question is, “Who is correct?” The answer is, “It depends.”
Is this an example of what I learned in 1962 in my freshman year at a so-so university? I think the term was cognitive dissonance.
Stephen E Arnold, July 28, 2025
Silicon Valley: The New Home of Unsportsmanlike Conduct
July 26, 2025
Sorry, no smart software involved. A dinobaby’s own emergent thoughts.
I read the Axios run down of Mark Zuckerberg’s hiring blitz. “Mark Zuckerberg Details Meta’s Superintelligence Plans” reports:
The company [Mark Zuckerberg’s very own Meta] is spending billions of dollars to hire key employees as it looks to jumpstart its effort and compete with Google, OpenAI and others.
Meta (formerly the estimable juicy brand Facebook) had some smart software people. (Does anyone remember Jerome Pesenti?) Then there was Llama, which, like the guanaco tamed and used to carry tourists to Peruvian sights, has become a photo op for parents wanting to document their kids’ visit to Cusco.
Is Mr. Zuckerberg creating a mini Bell Labs in order to take the lead in smart software? The Axios write up contains some names of people who may have some connection to the Middle Kingdom. The idea is to get smart people, put them in a two-story building in Silicon Valley, turn up the A/C, and inject snacks.
I interpret the hiring and the allegedly massive pay packets as pointing to a simpler, more direct idea: Move fast, break things.
What are the things Mr. Zuckerberg is breaking?
First, I worked in Silicon Valley (aka Plastic Fantastic) for a number of years. I lived in Berkeley and loved that commute to San Mateo, Foster City, and environs. Poaching employees was done in a more relaxed way. A chat at a conference, a small gathering after a softball game at the public fields not far from Stanford (yes, the school which had a president who made up information), or at some event like a talk at the Computer Museum or whatever it was called. That’s history. Mr. Zuckerberg shows up (virtually or in a T-shirt), offers an alleged $100 million, and hires a big name. No muss. No fuss. No social conventions. Just money. Cash. (I almost wish I were 25 and working in Mountain View. Sigh.)
Second, Mr. Zuckerberg is targeting the sensitive private parts of big leadership people. No dancing. Just targeted castration of key talent. Ouch. The Axios write up provides the names of some of these individuals. What is interesting is that these people come from the knowledge parts hidden from the journalistic spotlight. Those suffering life-changing removals without anesthesia include Google, OpenAI, and similar firms. In the good old days, Silicon Valley firms competed with less of that Manhattan, Lower East Side vibe. No more.
Third, Mr. Zuckerberg is not announcing anything at conferences or with friendly emails. He is just taking action. Let the people at Apple, Safe Superintelligence, and similar outfits read the news in a resignation email. Mr. Zuckerberg knows that those NDAs and employment contracts can be used to wipe away tears when the loss of a valuable person is discovered.
What’s up?
Obviously Mr. Zuckerberg is not happy that his outfit is perceived as a loser in the AI game. Will this Bell Labs West approach work? Probably not. It will deliver one thing, however. Mr. Zuckerberg is sending a message that he will spend money to cripple, hobble, and derail AI innovation at the firms whose models have been beating his Llama LLM to death.
Move fast and break things has come to the folks who used the approach to take out swaths of established businesses. Now the technique is being used on companies next door. Welcome to the ungentrified neighborhood. Oh, expect more fist fights at those once friendly, co-ed softball games.
Stephen E Arnold, July 26, 2025
Decentralization: Nope, a Fantasy It Seems
July 25, 2025
Just a dinobaby working the old-fashioned way, no smart software.
Web 3, decentralization, graceful failover, alternative routing. Are these concepts baloney? I think the distributed approach to online systems is definitely not bulletproof.
Why would I, an online person, make such a statement? I read “Cloudflare 1.1.1.1 Incident on July 14, 2025.” I know a number of people who know zero about Cloudflare. One can argue that AT&T, Google, Microsoft, et al. are the gatekeepers of the online world. Okay, that sounds great. It is sort of true.
I quote from the write up:
For many users, not being able to resolve names using the 1.1.1.1 Resolver meant that basically all Internet services were unavailable.
The operative word is “all.”
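A hedged sketch of the obvious mitigation: do not hard-code a single resolver. This assumes the dnspython package and a handful of public resolvers; it illustrates falling back when one resolver goes dark, not how any particular provider actually handled failover on July 14:

```python
import dns.resolver  # pip install dnspython

RESOLVERS = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]  # Cloudflare, Google, Quad9

def resolve_with_fallback(hostname: str) -> str:
    """Try each resolver in turn; raise only if every one of them fails."""
    last_error = None
    for server in RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        resolver.lifetime = 2.0  # seconds before giving up on this resolver
        try:
            answer = resolver.resolve(hostname, "A")
            return answer[0].to_text()
        except Exception as exc:  # timeout, SERVFAIL, and so on
            last_error = exc
    raise RuntimeError(f"all resolvers failed for {hostname}") from last_error

if __name__ == "__main__":
    print(resolve_with_fallback("example.com"))
```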
What can one conclude when a failure of these “legacy” systems can be pinned on a “configuration error”? Some observations:
- A bad actor able to replicate this can kill the Internet or at least Cloudflare’s functionality
- The baloney about decentralization is just that… baloney. Cheap words packed in a PR tube and “sold” as something good
- The failover and resilience assertions? Three-day-old fish. Remember Ben Franklin’s aphorism: Three-day-old fish smell. Badly.
Net net: We have evidence that the reality of today’s Internet rests in the semi-capable hands of certain large companies. Without real “innovation,” the centralization of certain functions will have widespread and unexpected impacts. Yep, “all,” including the bad actors who make use of these points of concentration. The Cloudflare incident may motivate other technically adept groups to find a better way. Perhaps something in the sky like satellites or on the ground like device-to-device wireless? I wonder if adversaries of the US have noticed this incident.
Stephen E Arnold, July 25, 2025
Will Apple Do AI in China? Subsidies, Investment, Saluting Too
July 25, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Apple long ago vowed to use the latest tech to design its hardware. Now that means generative AI. Asia Financial reports, “Apple Keen to Use AI to Design Its Chips, Tech Executive Says.” That tidbit comes from a speech Apple VP Johny Srouji made as he accepted an award from tech R&D group Imec. We learn:
“In the speech, a recording of which was reviewed by Reuters, Srouji outlined Apple’s development of custom chips from the first A4 chip in an iPhone in 2010 to the most recent chips that power Mac desktop computers and the Vision Pro headset. He said one of the key lessons Apple learned was that it needed to use the most cutting-edge tools available to design its chips, including the latest chip design software from electronic design automation (EDA) firms. The two biggest players in that industry – Cadence Design Systems and Synopsys – have been racing to add artificial intelligence to their offerings. ‘EDA companies are super critical in supporting our chip design complexities,’ Srouji said in his remarks. ‘Generative AI techniques have a high potential in getting more design work in less time, and it can be a huge productivity boost.’”
Srouji also noted Apple is one to commit to its choices. The post notes:
“Srouji said another key lesson Apple learned in designing its own chips was to make big bets and not look back. When Apple transitioned its Mac computers – its oldest active product line – from Intel chips to its own chips in 2020, it made no contingency plans in case the switch did not work.”
Yes, that gamble paid off for the polished tech giant. Will this bet be equally advantageous?
Has Apple read “Apple in China”?
Cynthia Murrell, July 25, 2025
AI Content Marketing: Claims about Savings Are Pipe Dreams
July 24, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
My tiny team and I sign up for interesting smart software “innovations.” We plopped down $40 to access 1min.ai. Some alarm bells went off. These were not the panic-inducing Code Red buzzers at the Google. But we noticed. First, registration was wonky. After several attempts we managed to create an account. After several more tries, we gained access to the cornucopia of smart software goodies. We ran one query and were surprised to see Hamster Kombat-style points. However, the 1min.ai crowd flipped the winning click-to-earn model on its head. Every click consumed points. When the points were gone, the user had to buy more. This is an interesting variation on taxi-meter pricing, a method reviled in the 1980s when commercial databases were the rage.
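The “taxi meter” model is easy to picture in code. This is a toy sketch with entirely hypothetical names and prices; 1min.ai does not publish its billing internals, and nothing here reflects its actual implementation:

```python
class MeteredAccount:
    """Toy model of click-to-consume pricing: every request burns credits."""

    def __init__(self, credits: int):
        self.credits = credits

    def run_query(self, prompt: str, cost: int = 10) -> str:
        if self.credits < cost:
            raise RuntimeError("Out of credits. Please buy more to continue.")
        self.credits -= cost
        return f"response to {prompt!r} ({self.credits} credits left)"


account = MeteredAccount(credits=40)   # the $40 plan, in toy units
print(account.run_query("summarize this blog post"))
print(account.run_query("draft a LinkedIn post"))
# Two more queries and the meter hits zero; a fifth raises "Out of credits."
```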
I thought about my team’s experience with 1min.ai and figured that an objective person would present some of these wobbles. Was I wrong? Yes.
“Your New AI-Powered Team Costs Less Than $80. Meet 1min.ai” is one of the wildest advertorial or content marketing smoke screens I have encountered in the last week or so. The write up asserts as actual factual, hard-hitting, old-fashioned technology reporting:
If ChatGPT’s your sidekick, think of 1min.AI as your entire productivity squad. This AI-powered tool lets you automate all kinds of content and business tasks—including emails, social media posts, blog drafts, reports, and even ad copy—without ever opening a blank doc.
I would suggest that one might tap 1min.ai to write an article for a hard-working, logic-charged professional at Macworld.
How about this descriptive paragraph which may have been written by an entity or construct:
Built for speed and scale, 1min.AI gives you access to over 80 AI tools designed to handle everything from content generation to data analysis, customer support replies, and more. You can even build your own tools inside the platform using its AI builder—no coding required.
And what about this statement:
The UI is slick and works in any browser on macOS.
What’s going on?
First, this information is PR assertions without factual substance.
Second, the author did not try to explain the taxi-meter business model. It is important if one uses one account for a “team.”
Third, the functionality of the system is less useful than You.com based on our tests. Comparing 1min.ai to ChatGPT is a keyword play. ChatGPT has some big-time flaws. These include system crashes and delivering totally incorrect information. But 1min.ai lags behind. When ChatGPT stumbles over the prompt finish line, 1min.ai is still lacing its sneakers.
Here’s the final line of this online advertorial:
Act now while plans are still in stock!
How does a digital subscription go out of stock? Isn’t the offer simply removed?
I think more of this type of AI play-acting will appear in the months ahead.
Stephen E Arnold, July 24, 2025
AI and Customer Support: Cost Savings, Yes. Useful, No
July 24, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
AI tools help workers to be more efficient and effective, right? Not so much. Not in this call center, anyway. TechSpot reveals, “Call Center Workers Say Their AI Assistants Create More Problems than They Solve.” How can AI create problems? Sure, it hallucinates and it is unpredictable. But why should companies let that stop them? They paid a lot for these gimmicks, after all.
Writer Rob Thubron cites a study showing customer service reps at a Chinese power company are less than pleased with their AI assistants. For one thing, the tool often misunderstands customers’ accents and speech patterns, introducing errors into call transcripts. Homophones are a challenge. It also struggles to accurately convert number sequences to text—resulting in inaccurate phone numbers and other numeric data.
The AI designers somehow thought their product would be better than humans at identifying emotions. We learn:
“Emotion recognition technology, something we’ve seen several reports about – most of them not good – is also criticized by those interviewed. It often misclassified normal speech as being a negative emotion, had too few categories for the range of emotions people expressed, and often associated a high volume level as someone being angry or upset, even if it was just a person who naturally talks loudly. As a result, most CSRs [Customer Service Reps] ignored the emotional tags that the system assigned to callers, saying they were able to understand a caller’s tone and emotions themselves.”
What a surprise. Thubron summarizes:
“Ultimately, while the AI assistant did reduce the amount of basic typing required by CSRs, the content it produced was often filled with errors and redundancies. This required workers to go through the call summaries, correcting mistakes and deleting sections. Moreover, the AI often failed to record key information from customers.”
Isn’t customer service rep one of the jobs most vulnerable to AI takeover? Perhaps not anymore. A June survey from Gartner found half the organizations that planned to replace human customer service reps with AI are doing an about-face. A couple of weeks later, the research firm anticipated that more than 40% of agentic AI projects will be canceled by 2027. Are the remaining 60% firms that have sunk too much money into such ventures to turn back?
Cynthia Murrell, July 24, 2025

