Myanmar Direct Action: Online Cyber Crime Meets Kinetics
November 7, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Stragglers from Myanmar Scam Center Raided by Army Cross into Thailand As Buildings Are Blown Up.” In August 2024, French authorities arrested Pavel Durov at a Paris airport. The direct action worked. Telegram has been wobbling. Myanmar, perhaps learning from the French decision to arrest Mr. Durov, shut down an online fraud operation. The Associated Press reported on October 28, 2025: “The KK Park site, identified by Thai officials and independent experts as housing a major cybercrime operation, was raided by Myanmar’s army in mid-October as part of operations starting in early September to suppress cross-border online scams and illegal gambling.”
News reports and white papers from the United Nations make clear that sites like KK Park are more like industrial estates. Dormitories, office space, and eating facilities are provided. “Workers” or captives remain within the defined area. The Golden Triangle region strikes me as a Wild West for a range of cyber crimes, including pig butchering and industrial-scale phishing.
The geographic names and the details of the different groups in an area with competing political groups can be confusing. However, what is clear is that Myanmar’s military assaulted the militia groups protecting the facilities. Reports of explosions and people fleeing the area have become public. The cited news report says that Myanmar has been a location known to be tolerant or indifferent to certain activities within its borders.
Will Myanmar take action against other facilities believed to be involved in cyber crime? KK Park is just one industrial campus from which threat actors conduct their activities. Is Myanmar’s response a signal that law enforcement is fed up with certain criminal activity and moving with directed prejudice at certain operations? Will other countries follow the French and Myanmar method?
The big question is, “What caused Myanmar to focus on KK Park?” Will Cambodia, Lao PDR, and Thailand follow the French view that enough is enough and advance to physical engagement?
Stephen E Arnold, November 7, 2025
Iran and Crypto: A Short Cut Might Not Be Working
November 6, 2025
One factor about cryptocurrency mining (and AI) that is glossed over by news outlets is the amount of energy required to keep the servers running. In short, it’s a lot! The Cool Down reports how one Middle Eastern country is dealing with a cryptocurrency crisis: “Stunning Report Reveals Government-Linked Crypto Crisis: ‘Serious And Unimaginable’”.
What is very interesting (and not surprising) about the cryptocurrency mining is who is doing it: the Iranian government. Iran is dealing with an energy crisis, and its citizens are dismayed. Lakes are drying up, power outages are frequent, and the country is enduring one of the worst droughts in its modern history.
Iran’s people have protested, but it’s like pushing a boulder uphill: no one is listening. Iran is home to a large saltwater lake, Lake Urmia, which has transformed into a marsh.
Here’s what one expert said:
“An Iranian engineer cited by The Observer alleged that cryptocurrency mining by the state is consuming up to 5% of electricity, contributing to water and power depletion. "We are in a serious and unimaginable crisis," Iran President Masoud Pezeshkian said as he urged action during a recent cabinet meeting.”
The Iranian government has temporarily closed offices and is rationing resources, but that likely won’t be enough to curb the power demanded by crypto mining.
Iran could demolish its authoritarian and fundamentalist religious government, invest in a mixed economy, liberate women, and invest in education and technology to prepare for a better future. That likely won’t happen.
Whitney Grace, November 6, 2025
Medical Fraud Meets AI. DRG Codes Meet AI. Enjoy
November 4, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I have heard that some large medical outfits make use of DRG “chains” or “coding sequences.” I picked up this information when my team and I worked on what is called a “subrogation project.” I am not going to explain how subrogation works or what mechanisms are involved. These are back office or invisible services that accompany something that seems straightforward. One doesn’t buy stock from a financial advisor alone; there is plumbing and there are plumbing companies that do this work. The hospital sends you a bill; there is plumbing and there are plumbing companies providing systems and services. To sum up, a hospital bill is often large, confusing, opaque, and similar to a secret language. Mistakes happen, of course. But often inflated medical bills do more to benefit the institution and its professionals than the person with the bill in his or her hand. (If you run into me at an online fraud conference, I will explain how the “chain” of codes works. It is slick and not well understood by many of the professionals who care for the patient. It is a toss-up whether Miami or Nashville is the Florence of medical fancy dancing. I won’t argue for either city, but I would add that Houston and LA should be in the running for the most creative center of certain activities.)

“Grieving Family Uses AI Chatbot to Cut Hospital Bill from $195,000 to $33,000 — Family Says Claude Highlighted Duplicative Charges, Improper Coding, and Other Violations” contains some information that will be [a] good news for medical fraud investigators and [b] bad news for some health care providers and individual medical specialists in their practices. The person with the big bill had to joust with the provider to get a detailed, line item breakdown of certain charges. Once that anti-social institution provided the detail, it was time for AI.
The write up says:
Claude [Anthropic, the AI outfit hooked up with Google] proved to be a dogged, forensic ally. The biggest catch was that it uncovered duplications in billing. It turns out that the hospital had billed for both a master procedure and all its components. That shaved off, in principle, around $100,000 in charges that would have been rejected by Medicare. “So the hospital had billed us for the master procedure and then again for every component of it,” wrote an exasperated nthmonkey. Furthermore, Claude unpicked the hospital’s improper use of inpatient vs emergency codes. Another big catch was an issue where ventilator services are billed on the same day as an emergency admission, a practice that would be considered a regulatory violation in some circumstances.
Claude, the smart software, clawed through the data. The smart software identified certain items that required closer inspection. The AI helped the human using Claude to get the health care provider to adjust the bill.
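The duplicate-billing catch is easy to understand in miniature. The sketch below is purely illustrative: the billing codes, the mapping of a “master” procedure to its component codes, and the dollar amounts are invented for this example, not taken from the article or from any real charge master.

```python
# Toy sketch: flag line items where a "master" procedure and its
# component procedures are both billed (classic "unbundling").
# All codes and amounts below are hypothetical.

# Hypothetical mapping: master procedure code -> its component codes
COMPONENTS = {
    "MASTER-100": {"COMP-101", "COMP-102", "COMP-103"},
}

def find_unbundled_duplicates(line_items):
    """Return component charges that duplicate a billed master procedure."""
    billed_codes = {code for code, _amount in line_items}
    flagged = []
    for master, parts in COMPONENTS.items():
        if master in billed_codes:
            # The master was billed, so its components should not also appear.
            for code, amount in line_items:
                if code in parts:
                    flagged.append((code, amount))
    return flagged

bill = [
    ("MASTER-100", 60000.00),  # master procedure
    ("COMP-101", 15000.00),    # component billed again
    ("COMP-102", 12000.00),    # component billed again
    ("ROOM-200", 3000.00),     # unrelated charge, not flagged
]

dupes = find_unbundled_duplicates(bill)
print(sum(amount for _code, amount in dupes))  # prints 27000.0
```

A large language model doing this kind of review is, in effect, applying rules like the one above across thousands of line items and a much messier code space, which is why it caught roughly $100,000 in charges a human reader missed.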
Why did the hospital make billing errors? Was it [a] intentional fraud programmed into the medical billing system; [b] an intentional chain of DRG codes tuned to bill as many items, actions, and services as possible within reason and applicable rules; or [c] a computer error? If you picked item c, you are correct. The write up says:
Once a satisfactory level of transparency was achieved (the hospital blamed ‘upgraded computers’), Claude AI stepped in and analyzed the standard charging codes that had been revealed.
Absolutely blame the problem on the technology people. Who issued the instructions to the technology people? Innocent MBAs and financial whiz kids who want to maximize their returns are not part of this story. Should they be? Of course not. Computer-related topics are for other people.
Stephen E Arnold, November 4, 2025
Starlink: Are You the Only Game in Town? Nope
October 23, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “SpaceX Disables More Than 2,000 Starlink Devices Used in Myanmar Scam Compounds.” Interesting from a quite narrow Musk-centric focus. I wonder if this is a PR play or the result of some cooperative government action. The write up says:
Lauren Dreyer, the vice-president of Starlink’s business operations, said in a post on X Tuesday night that the company “proactively identified and disabled over 2,500 Starlink Kits in the vicinity of suspected ‘scam centers’” in Myanmar. She cited the takedowns as an example of how the company takes action when it identifies a violation of its policies, “including working with law enforcement agencies around the world.”
The cyber outfit added:
Myanmar has recently experienced a handful of high-profile raids at scam compounds which have garnered headlines and resulted in the arrest, and in some cases release, of thousands of workers. A crackdown earlier this year at another center near Mandalay resulted in the rescue of 7,000 people. Nonetheless, construction is booming within the compounds around Mandalay, even after raids, Agence France-Presse reported last week. Following a China-led crackdown on scam hubs in the Kokang region in 2023, a Chinese court in September sentenced 11 members of the Ming crime family to death for running operations.
Just one Chinese crime family. Even more interesting.
I want to point out that the write up did not take a tiny extra step; for example, answering this question: “What will prevent the firms listed below from filling the Starlink void (if one actually exists)?” Here are some Starlink alternatives. These may be more expensive, but some surplus cash is spun off from pig butchering, human trafficking, drug brokering, and money laundering. Here’s the list from my files. Remember, please, that I am a dinobaby in a hollow in rural Kentucky. Are my resources more comprehensive than a big cyber security firm’s?
- AST
- EchoStar
- Eutelsat
- HughesNet
- Inmarsat
- NBN Sky Muster
- SES S.A.
- Telstra
- Telesat
- Viasat
With access to money, cut outs, front companies, and compensated government officials, will a Starlink “action” make a substantive difference? Again, this is a question not addressed in the original write up. Myanmar is just one country operating in gray zones where government controls are ineffective or do not exist.
Starlink seems to be a pivot point for the write up. What about Starlinks in other “countries” like Lao PDR? What about a Starlink customer carrying his or her Starlink into Cambodia? I wonder if some US cyber security firms keep up with current actions, not those with some dust on the end tables in the marketing living room.
Stephen E Arnold, October 23, 2025
Want to Catch the Attention of Bad Actors? Say, Easier Cross Chain Transactions
September 24, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I know from experience that most people don’t know about moving crypto in a way that makes deanonymization difficult. Commercial firms offer deanonymization services. Most of the well-known outfits’ technology delivers. Even some home-grown approaches are useful.
For a number of years, Telegram has been the go-to service for some Fancy Dancing related to obfuscating crypto transactions. However, Telegram has been slow on the trigger when it comes to smart software and to some of the new ideas percolating in the bubbling world of digital currency.
A good example of what’s ahead for traders, investors, and bad actors is described in “Simplifying Cross-Chain Transactions Using Intents.” Like most crypto writing, the article leans on confusing lingo. In this article, the word “intent” refers to having cryptocurrency in one form like USDC and getting 100 SOL or some other crypto. The idea is that one can have fiat currency in British pounds, walk up to a money exchange in Berlin, and convert the pounds to euros. One pays a service charge. Now in crypto land, the crypto has to move across a blockchain. Then to get the digital exchange to do the conversion, one pays a gas fee; that is, a transaction charge. Moving USDC across multiple chains is a hassle, and the fees pile up.
The article “Simplifying Cross-Chain Transactions Using Intents” explains a brave new world. No more clunky Telegram smart contracts and bots. Now the transaction just happens. How difficult will the deanonymization process become? Speed makes life difficult. Moving across chains makes life difficult. It appears that “intents” will be a capability of considerable interest to entities interested in making crypto transactions difficult to deanonymize.
The write up says:
In technical terms, intents are signed messages that express a user’s desired outcome without specifying execution details. Instead of crafting complex transaction sequences yourself, you broadcast your intent to a network of solvers (sophisticated actors) who then compete to fulfill your request.
The write up explains the benefit for the average crypto trader:
when you broadcast an intent, multiple solvers analyze it and submit competing quotes. They might route through different DEXs, use off-chain liquidity, or even batch your intent with others for better pricing. The best solution wins.
Now, think of solvers as your personal trading assistants who understand every connected protocol, every liquidity source, and every optimization trick in DeFi. They make money by providing better execution than you could achieve yourself, and they save you a lot of time.
Does this sound like a use case for smart software? It is, but the approach is less complicated than what one must implement using other approaches.
The secret sauce for the approach is what is called a “1Click API.” The API handles the plumbing for the crypto bridging or crypto conversion from currency A to currency B.
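The intent-and-solver flow the article describes can be reduced to a small simulation. Everything in the sketch below is illustrative: the message fields, the solver names, and the quote numbers are invented, and a real system signs intents cryptographically and settles on-chain, which this toy version omits entirely.

```python
from dataclasses import dataclass

# Hypothetical intent: the desired outcome only, no execution details.
@dataclass
class Intent:
    give_asset: str    # e.g. "USDC"
    give_amount: float
    want_asset: str    # e.g. "SOL"

def broadcast(intent, solvers):
    """Collect competing quotes from solvers; the best outcome for the user wins."""
    quotes = [(name, quote_fn(intent)) for name, quote_fn in solvers]
    return max(quotes, key=lambda q: q[1])

# Each solver competes by quoting how much of the wanted asset it can deliver.
# Rates here are made up for illustration.
solvers = [
    ("dex-router", lambda i: i.give_amount * 0.0058),     # routes through a DEX
    ("offchain-desk", lambda i: i.give_amount * 0.0061),  # taps off-chain liquidity
    ("batcher", lambda i: i.give_amount * 0.0060),        # batches with other intents
]

winner, amount_out = broadcast(Intent("USDC", 17000, "SOL"), solvers)
print(winner, round(amount_out, 2))  # prints: offchain-desk 103.7
```

Note what the user never sees: which chains were crossed, which venues were used, or how the winning solver hedged. That opacity is exactly why the approach complicates deanonymization.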
If you are interested in how this system works, the cited article provides a list of nine links. Each provides additional detail. To be up front, some of the write ups are more useful than others. But three things are clear:
- Deobfuscation is likely to become more time-consuming and costly.
- The system described could be implemented within the Telegram blockchain system as well as other crypto conversion operations.
- The described approach can be further abstracted into an app with more overt smart software enablements.
My thought is that money launderers are likely to be among the first to explore this approach.
Stephen E Arnold, September 24, 2025
Pavel Durov Was Arrested for Online Stubbornness: Will This Happen in the US?
September 23, 2025
Written by an unteachable dinobaby. Live with it.
In August 2024, the French judiciary arrested Pavel Durov, the founder of VKontakte and then Telegram, a robust but non-AI platform. Why? The French government identified more than a dozen transgressions by Pavel Durov, who holds French citizenship as a special tech bro. Now he has to report to his French mom every two weeks or experience more interesting French legal action. Is this an example of a failure to communicate?
Will the US take similar steps toward US companies? I raise the question because I read an allegedly accurate “real” news write up called “Anthropic Irks White House with Limits on Models’ Use.” (Like many useful online resources, this story requires the curious to subscribe, pay, and get on a marketing list.) These “models,” of course, are the zeros and ones which comprise the next big thing in technology: artificial intelligence.
The write up states:
Anthropic is in the midst of a splashy media tour in Washington, but its refusal to allow its models to be used for some law enforcement purposes has deepened hostility to the company inside the Trump administration…
The write up says as actual factual:
Anthropic recently declined requests by contractors working with federal law enforcement agencies because the company refuses to make an exception allowing its AI tools to be used for some tasks, including surveillance of US citizens…
I found the write up interesting. If France can take action against an upstanding citizen like Pavel Durov, what about the tech folks at Anthropic or other outfits? These firms allegedly have useful data and the tools to answer questions. I recently fed the output of one AI system (ChatGPT) into another AI system (Perplexity), and I learned that Perplexity did a good job of identifying the weirdness in the ChatGPT output. Would these systems provide similar insights into prompt patterns on certain topics; for instance, the charges against Pavel Durov or data obtained by people looking for information about nuclear fuel cask shipments?
With France’s action, is the door open to take direct action against people and their organizations which cooperate reluctantly or not at all when a government official makes a request?
I don’t have an answer. Dinobabies rarely do, and if they do have a response, no one pays attention to these beasties. However, some of those wizards at AI outfits might want to ponder the question about cooperation with a government request.
Stephen E Arnold, September 23, 2025
Grousing Employees Can Be Fun. Credible? You Decide
September 4, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read “Former Employee Accuses Meta of Inflating Ad Metrics and Sidestepping Rules.” Now former employees saying things that cast aspersions on a former employer are best processed with care. I did that, and I want to share the snippets snagging my attention. I try not to think about Meta. I am finishing my monograph about Telegram, and I have to stick to my lane. But I found this write up a hoot.
The first passage I circled says:
Questions are mounting about the reliability of Meta’s advertising metrics and data practices after new claims surfaced at a London employment tribunal this week. A former Meta product manager alleged that the social media giant inflated key metrics and sidestepped strict privacy controls set by Apple, raising concerns among advertisers and regulators about transparency in the industry.
Imagine. Meta coming up at a tribunal. Does that remind anyone of the Cambridge Analytica excitement? Do you recall the rumors that fiddling with Facebook pushed Brexit over the finish line? Whatever happened to those oh-so-clever CA people?
I found this tribunal claim interesting:
… Meta bypassed Apple’s App Tracking Transparency (ATT) rules, which require user consent before tracking their activity across iPhone apps. After Apple introduced ATT in 2021, most users opted out of tracking, leading to a significant reduction in Meta’s ability to gather information for targeted advertising. Company investors were told this would trim revenues by about $10 billion in 2022.
I thought Apple had its system buttoned up. Who knew?
Did Meta have a response? Absolutely. The write up reports:
“We are actively defending these proceedings …” a Meta spokesperson told The Financial Times. “Allegations related to the integrity of our advertising practices are without merit and we have full confidence in our performance review processes.”
True or false? Well….
Stephen E Arnold, September 4, 2025
Spotify Does Messaging: Is That Good or Bad?
September 4, 2025
No AI. Just a dinobaby working the old-fashioned way.
My team and I have difficulty keeping up with messaging apps, which seem to multiply like mating gerbils. I noted that Spotify, the semi-controversial music app, is going to add messaging. “Spotify Adds In-App Messaging Feature to Let Users Share Music and Podcasts Directly” says:
According to the company, the update is designed “to give users what they want and make those moments of connection more seamless and streamlined in the Spotify app.” Users will be able to message people they have interacted with on Spotify before, such as through Jams, Blends and Collaborative Playlists, or those who share a Family or Duo plan.
The messaging app is no Telegram. The interesting question for me is, “Will Spotify emulate Telegram’s features as Meta’s WhatsApp has?”
Telegram, despite its somewhat negative press, has found a way to monetize user clicks, supplement subscription revenue with crypto service charges, and strike an alleged special arrangement now being adjudicated by the French judiciary.
New messaging platforms get a look from bad actors. How will Spotify police the content? Avid music people often find ways to circumvent different rules and regulations to follow their passion.
Will Spotify cooperate with regulators or will it emulate some of the Dark Web messaging outfits or Telegram, a firm with a template for making money appear when necessary?
Stephen E Arnold, September 4, 2025
So Much AI and Now More Doom and Gloom
August 22, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
Amidst the hype about OpenAI’s ChatGPT 5, I have found it difficult to identify some quiet but, to me, meaningful signals. One, in my opinion, appears in “Sam Altman Sounds Alarm on AI Crisis That Even He Finds Terrifying.” I was hoping that the article would provide some color on the present negotiations between Sam and Microsoft. For a moment, I envisioned Sam in a meeting with the principals of the five biggest backers of OpenAI. The agenda had one item: “When do we get our money back with a payoff, Mr. Altman?”
But no. The signal is that smart software will enable fast-moving, bureaucracy-free bad actors to apply smart software to online fraud. The write up says:
[Mr.] Altman fears that the current AI-fraud crisis will expand beyond voice cloning attacks, deepfake video call scams and phishing emails. He warns that in the future, FaceTime or video fakes may become indistinguishable from reality. The alarming abilities of current AI-technology in the hands of bad faith actors is already terrifying. Scammers can now use AI to create fake identification documents, explicit photos, and headshots for social media profiles.
Okay, he is on the money, but he overlooks one use case for smart software. A bad actor can use different smart software systems and equip existing malware with more interesting features. At some point, a clever bad actor will use AI to build a sophisticated money laundering mechanism that uses the numerous new cryptocurrencies and their attendant blockchain systems to make the wizards at Huione Guarantee look pretty pathetic.
Can this threat be neutralized? I don’t think it can be in the short term. The reason is that AI is here and has been available for more than a year. Code generation is getting easier. A skilled bad actor can, just like a Google-type engineer, become more productive. In the mid-term, the cyber security companies will roll out AI tools that, according to one outfit whose sales pitch I listened to last week, will “predict the future.” Yeah, sure. News flash: Once a breach has been discovered, then the cyber security firms kick into action. If the predictive stuff were reliable, these outfits would be betting on horse races and investing in promising start-ups, not trying to create such a company.
Mr. Altman captures significant media attention. His cyber fraud message is a faint signal amidst the cacophony of the AI marketing blasts. By the way, cyber fraud is booming, and our research into outfits like Telegram suggests that AI is a contributing factor.
With three new Telegram-type services in development at this time, the future for bad actors looks bright, and the future for cyber security firms looks increasingly reactive. For investors and those with retirement funds, the forecast is less cheery.
Stephen E Arnold, August 22, 2025
News Flash from the Past: Bad Actors Use New Technology and Adapt Quickly
August 18, 2025
No AI. Just a dinobaby working the old-fashioned way.
NBC News is on top of cyber security trends. I think someone spotted an Axios report that bad actors were using smart software to outfox cyber security professionals. I am not sure this is news, but what do I know?
“Criminals, Good Guys and Foreign Spies: Hackers Everywhere Are Using AI Now” reports this “hot off the press” information. I quote:
The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims’ computers for sensitive files to send back to Moscow.
My goodness. Who knew that stealers have been zipping around for many years? Even more startling old information is:
LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions and at translating plain language into computer code, or identifying and summarizing documents. The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it’s making skilled hackers better and faster.
Stunning. A free chunk of smart software, unemployed or intra-gig programmers, and juicy targets pushed out with a fairy land of vulnerabilities. Isn’t it insightful that bad actors would apply these tools to clueless employees, inherently vulnerable operating systems, and companies too busy outputting marketing collateral to do routine security updates?
The cat-and-mouse game works this way. Bad actors with access to useful scripting languages, programming expertise, and smart software want to generate revenue or wreak havoc. One individual, or perhaps a couple of people in a coffee shop, hits upon a better way to access a corporate network or obtain personally identifiable information from a hapless online user.
Then, after the problem has been noticed and reported, cyber security professionals will take a closer look. If these outfits have smart software running, a human will look more closely at logs and say, “I think I saw something.”
Okay, mice are in and swarming. Now the cats jump into action. The cats will find [a] a way to block the exploit, [b] rush to push the fix to paying customers, and [c] share the information in a blog post or a conference.
What happens? The bad actors notice their mice aren’t working or are being killed instantly. The bad actors go back to work. In most cases, the bad actors are unencumbered by bureaucracy or by tough thought problems about whether something is legal or illegal. The bad actors launch more attacks. If one works, it’s gravy.
Now the cats jump back into the fray.
In the current cyber crime world, cyber security firms, investigators, and lawyers are in reactive mode. The bad actors play offense.
One quick example: Telegram has been enabling a range of questionable online activities since 2013. In 2024, after more than a decade of inaction, France said, “Enough,” and authorities arrested Pavel Durov. The problem from my point of view is that it took more than a decade to move against the icon Pavel Durov.
What happens when a better Telegram comes along built with AI as part of its plumbing?
The answer is, “You can buy licenses to many cyber security systems. Will they work?”
There are some large, capable mice out there in cyber space.
Stephen E Arnold, August 18, 2025


