Pavel Durov and Telegram: In the Spotlight Again
October 21, 2024
No smart software used for the write up. The art, however, is a different story.
Several news sources reported that the entrepreneurial Pavel Durov, the founder of Telegram, has found a way to grab headlines. Mr. Durov has been enjoying a respite in France, allegedly due to what the French authorities view as a failure to cooperate with law enforcement. After his detainment, Mr. Durov signaled that he has cooperated and would continue to cooperate with investigators in certain matters.
A person under close scrutiny may find that the experience can be unnerving. The French are excellent intelligence operators. I wonder how Mr. Durov would hold up under the ministrations of Israeli and US investigators. Thanks, ChatGPT, you produced a usable cartoon with only one annoying suggestion unrelated to my prompt. Good enough.
Mr. Durov may have an opportunity to demonstrate his willingness to assist authorities in their investigation into documents published on the Telegram Messenger service. According to such sources as Business Insider and the South China Morning Post, among others, the Telegram channel Middle East Spectator dumped information about Israel’s alleged plans to respond to Iran’s October 1, 2024, missile attack.
The South China Morning Post reported:
The channel for the Middle East Spectator, which describes itself as an “open-source news aggregator” independent of any government, said in a statement that it had “received, through an anonymous source on Telegram who refused to identify himself, two highly classified US intelligence documents, regarding preparations by the Zionist regime for an attack on the Islamic Republic of Iran”. The Middle East Spectator said in its posted statement that it could not verify the authenticity of the documents.
Let’s look beyond this particular document issue. Telegram’s mostly moderation-free approach to the content posted, distributed, and pushed via the Telegram platform is likely to come under more scrutiny. Some investigators in North America view Mr. Durov’s system as a less pressing issue than the content on other social media and messaging services.
This document matter may bring increased attention to Mr. Durov, his brother (allegedly with the intelligence of two PhDs), the 60 to 80 engineers maintaining the platform, and its burgeoning ancillary interests in crypto. Mr. Durov has some fancy dancing to do. Once he is able to travel, he may find that additional actions will be considered to trim the wings of the Open Network Foundation, the newish TON Social service, and the “almost anything goes” approach to the content generated and disseminated by Telegram’s almost one billion users.
From a practical point of view, a failure to exercise judgment about what is allowed on Messenger may derail Telegram’s attempts to become more of a mover and shaker in the world of cryptocurrency. French actions toward Mr. Durov should have alerted the wizardly innovator that governments can and will take action to protect their interests.
Now Mr. Durov is placing himself, his colleagues, and his platform under more scrutiny. Close scrutiny may reveal nothing out of the ordinary. On the other hand, when one pays close attention to a person or an organization, new and interesting facts may be identified. What happens then? Often something surprising.
Will Mr. Durov get that message?
Stephen E Arnold, October 21, 2024
Surveillance Watch Maps the Surveillance App Ecosystem
October 1, 2024
Here is an interesting resource: Surveillance Watch compiles information about surveillance tech firms, organizations that fund them, and the regions in which they are said to operate. The lists, compiled from contributions by visitors to the site, are not comprehensive. But they are full of useful information. The About page states:
“Surveillance technology and spyware are being used to target and suppress journalists, dissidents, and human rights advocates everywhere. Surveillance Watch is an interactive map that documents the hidden connections within the opaque surveillance industry. Founded by privacy advocates, most of whom were personally harmed by surveillance tech, our mission is to shed light on the companies profiting from this exploitation with significant risk to our lives. By mapping out the intricate web of surveillance companies, their subsidiaries, partners, and financial backers, we hope to expose the enablers fueling this industry’s extensive rights violations, ensuring they cannot evade accountability for being complicit in this abuse. Surveillance Watch is a community-driven initiative, and we rely on submissions from individuals passionate about protecting privacy and human rights.”
Yes, the site makes it easy to contribute information to its roundup. Anonymously, if one desires. The site’s information is divided into three alphabetical lists: Surveilling Entities, Known Targets, and Funding Organizations. As an example, here is what the service says about safeXai (formerly Banjo):
“safeXai is the entity that has quietly resumed the operations of Banjo, a digital surveillance company whose founder, Damien Patton, was a former Ku Klux Klan member who’d participated in a 1990 drive-by shooting of a synagogue near Nashville, Tennessee. Banjo developed real-time surveillance technology that monitored social media, traffic cameras, satellites, and other sources to detect and report on events as they unfolded. In Utah, Banjo’s technology was used by law enforcement agencies.”
We notice there are no substantive links which could have been included, like ones to footage of the safeXai surveillance video service or the firm’s remarkable body of patents. In our view, these patents represent an X-ray look at what most firms call artificial intelligence.
A few other names we recognize are IBM, Palantir, and Pegasus owner NSO Group. See the site for many more. The Known Targets page lists countries that, when clicked, list surveilling entities known or believed to be operating there. Entries on the Funding Organizations page include a brief description of each organization with a clickable list of surveillance apps it is known or believed to fund at the bottom. It is not clear how the site vets its entries, but the submission form does include boxes for supporting URL(s) and any files to upload. It also asks whether one consents to be contacted for more information.
Cynthia Murrell, October 1, 2024
Zapping the Ghost Comms Service
September 23, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Europol generated a news release titled “Global Coalition Takes Down New Criminal Communication Platform.” One would think that bad actors would have learned a lesson from the ANOM operation and from the takedowns of other specialized communication services purpose-built for bad actors. The Europol announcement explains:
Europol and Eurojust, together with law enforcement and judicial authorities from around the world, have successfully dismantled an encrypted communication platform that was established to facilitate serious and organized crime perpetrated by dangerous criminal networks operating on a global scale. The platform, known as Ghost, was used as a tool to carry out a wide range of criminal activities, including large-scale drug trafficking, money laundering, instances of extreme violence and other forms of serious and organized crime.
Eurojust, as you probably know, is the EU’s agency responsible for judicial cooperation in criminal matters among member state authorities. The entity was set up in 2002 and concerns itself with serious crime, cutting through the red tape to bring alleged bad actors to court. The dynamic of Europol and Eurojust is to investigate and prosecute with efficiency.
Two cyber investigators recognize that the bad actors can exploit the information environment to create more E2EE systems. Thanks, MSFT Copilot. You do a reasonable job of illustrating chaos. Good enough.
The marketing-oriented name of the system is, or rather was, Ghost. Here’s how Europol describes the system:
Users could purchase the tool without declaring any personal information. The solution used three encryption standards and offered the option to send a message followed by a specific code which would result in the self-destruction of all messages on the target phone. This allowed criminal networks to communicate securely, evade detection, counter forensic measures, and coordinate their illegal operations across borders. Worldwide, several thousand people used the tool, which has its own infrastructure and applications with a network of resellers based in several countries. On a global scale, around one thousand messages are being exchanged each day via Ghost.
With law enforcement compromising certain bad actor-centric systems like Ghost, what are the consequences of these successful shutdowns? Here’s what Europol says:
The encrypted communication landscape has become increasingly fragmented as a result of recent law enforcement actions targeting platforms used by criminal networks. Following these operations, numerous once-popular encrypted services have been shut down or disrupted, leading to a splintering of the market. Criminal actors, in response, are now turning to a variety of less-established or custom-built communication tools that offer varying degrees of security and anonymity. By doing so, they seek new technical solutions and also utilize popular communication applications to diversify their methods. This strategy helps these actors avoid exposing their entire criminal operations and networks on a single platform, thereby mitigating the risk of interception. Consequently, the landscape of encrypted communications remains highly dynamic and segmented, posing ongoing challenges for law enforcement.
Nevertheless, some entities want to create secure apps designed to allow criminal behaviors to thrive. These range from “me too” systems like one allegedly in development by a known bad actor to knock offs of sophisticated hardware-software systems which operate within the public Internet. Are bad actors more innovative than the whiz kids at the largest high-technology companies? Nope. Based on my team’s research, notable sources of ideas to create problems for law enforcement include:
- Scanning patent applications for nifty ideas. Modern patent search systems make the identification of novel ideas reasonably straightforward
- Hiring one or more university staff to identify and get students to develop certain code components as part of a normal class project
- Using open source methods and coming up with ad hoc ways to obfuscate what’s being done. (Hats off to the open source folks, of course.)
- Buying technology from middle “men” who won’t talk about their customers. (Is that too much information, Mr. Oligarch’s tech expert?)
Like much in today’s digital world or what I call the datasphere, each successful takedown provides limited respite. The global cat-and-mouse game between government authorities and bad actors is what some at the Santa Fe Institute might call “emergent behavior” at the boundary between entropy and chaos. That’s a wonderful insight despite suggesting another consequence of living at the edge of chaos.
Stephen E Arnold, September 23, 2024
The Fixed Network Lawful Interception Business is Booming
September 11, 2024
It is not just bad actors who profit from an increase in cybercrime. Makers of software designed to catch them are cashing in, too. The Market Research Report 224 blog shares “Fixed Network Lawful Interception Market Region Insights.” Lawful interception is the process by which law enforcement agencies, after obtaining the proper warrants, of course, surveil circuit and packet-mode communications. The report shares findings from a study by Data Bridge Market Research on this growing sector. Between 2021 and 2028, this market is expected to grow by nearly 20% annually and hit an estimated value of $5,340 million. We learn:
“Increase in cybercrimes in the era of digitalization is a crucial factor accelerating the market growth, also increase in number of criminal activities, significant increase in interception warrants, rising surge in volume of data traffic and security threats, rise in the popularity of social media communications, rising deployment of 5G networks in all developed and developing economies, increasing number of interception warrants and rising government of both emerging and developed nations are progressively adopting lawful interception for decrypting and monitoring digital and analog information, which in turn increases the product demand and rising virtualization of advanced data centers to enhance security in virtual networks enabling vendors to offer cloud-based interception solutions are the major factors among others boosting the fixed network lawful interception market.”
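As a sanity check on those figures, the cited growth rate and endpoint can be run backwards to see what size market the projection implies today. The sketch below assumes the ~20% figure is a compound annual growth rate applied over the seven years from 2021 to 2028; the report summary does not spell out its exact method, so treat this as back-of-envelope arithmetic only.

```python
# Back-of-envelope check on the market projection cited above.
# Assumption (not stated precisely in the report summary): the ~20%
# growth is a compound annual rate applied 2021 -> 2028 (7 years).
cagr = 0.20
years = 7
value_2028_musd = 5_340  # projected 2028 market value, USD millions

implied_2021 = value_2028_musd / (1 + cagr) ** years
print(f"Implied 2021 market size: ${implied_2021:,.0f} million")
```

A roughly $1.5 billion base compounding at 20% per year does indeed land near $5.3 billion after seven years, so the headline numbers are at least internally consistent.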
Furthermore, the pace of these developments will likely increase over the next few years. The write-up specifies key industry players, a list we found particularly useful:
“The major players covered in fixed network lawful interception market report are Utimaco GmbH, VOCAL TECHNOLOGIES, AQSACOM, Inc, Verint, BAE Systems., Cisco Systems, Telefonaktiebolaget LM Ericsson, Atos SE, SS8 Networks, Inc, Trovicor, Matison is a subsidiary of Sedam IT Ltd, Shoghi Communications Ltd, Comint Systems and Solutions Pvt Ltd – Corp Office, Signalogic, IPS S.p.A, ZephyrTel, EVE compliancy solutions and Squire Technologies Ltd among other domestic and global players.”
See the press release for notes on Data Bridge’s methodology. It promises 350 pages of information, complete with tables and charts, for those who purchase a license. Formed in 2014, Data Bridge is based in Haryana, India.
Cynthia Murrell, September 11, 2024
Thoughts about the Dark Web
August 8, 2024
This essay is the work of a dumb humanoid. No smart software required.
The Dark Web. Wow. YouTube offers a number of tell-all videos about the Dark Web. Articles explain the topics one can find on certain Dark Web fora. What’s forgotten is that the number of users of the Dark Web has been chugging along, neither gaining nor losing tens of millions of users. Why? Here’s a traffic chart from the outfit that sort of governs The Onion Router:
Source: https://metrics.torproject.org/userstats-relay-country.html
The chart is not the snappiest item on the sprawling Torproject.org Web site, but the message seems to be that TOR has been bouncing around two million users this year. Go back in time and the number has increased, but not much. Online statistics, particularly those associated with obfuscation software, are mushy. Let’s toss in another couple million users to account for alternative obfuscation services. What happens? We are not in the tens of millions.
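The "bouncing around two million" observation is the kind of thing one can check by averaging the daily estimates behind that chart. The sketch below parses a CSV in the column layout of the metrics.torproject.org userstats export; the figures in the sample are made up for illustration, not real Tor data.

```python
import csv
import io
import statistics

# Illustrative sketch: average daily relay-user estimates from a CSV
# shaped like the metrics.torproject.org userstats export.
# The numbers below are invented for the example, not real Tor data.
sample_csv = """date,users
2024-01-01,2050000
2024-01-02,1980000
2024-01-03,2110000
2024-01-04,2030000
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))
daily_users = [int(r["users"]) for r in rows]
avg = statistics.mean(daily_users)
print(f"Average daily users: {avg:,.0f}")  # hovers around two million
```

Swapping the sample string for the real export from the Tor metrics site would reproduce the flat-line pattern described above.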
Our research suggests that the stability of TOR usage is due to several factors:
- The hard core bad actors comprise a small percentage of the TOR usage and probably do more outside of TOR than within it. In September 2024 I will be addressing this topic at a cyber fraud conference.
- The number of entities indexing the “Dark Web” remains relatively stable. Sure, some companies drop out of this data harvesting but the big outfits remain and their software looks a lot like a user, particularly with some of the wonky verification Dark Web sites use to block automated collection of data.
- Regular Internet users don’t pay much attention to TOR, including those with the one-click access browsers like Brave.
- Human investigators are busy looking and interacting, but the numbers of these professionals also remains relatively stable.
To sum up, most people know little about the Dark Web. When these individuals figure out how to access a Web site advertising something exciting like stolen credit cards or other illegal products and services, they are unaware of a simple fact: An investigator from some country may be operating like a bad actor to find a malefactor. By the way, the Dark Web is not as big as some cyber companies assert. The actual number of truly bad Dark Web sites is fewer than 100, based on what my researchers tell me.
A very “good” person approaches an individual who looks like a very tough dude. The very “good” person has a special job for the tough dude. Surprise! Thanks, MSFT Copilot. Good enough and you should know what certain professionals look like.
I read “Former Pediatrician Stephanie Russell Sentenced in Murder Plot.” The story is surprisingly not that unique. The reason I noted a female pediatrician’s involvement in the Dark Web is that she lives about three miles from my office. The story is that the good doctor visited the Dark Web and hired a hit man to terminate an individual. (Don’t doctors know how to terminate as part of their studies?)
The write up reports:
A Louisville judge sentenced former pediatrician Stephanie Russell to 12 years in prison Wednesday for attempting to hire a hitman to kill her ex-husband multiple times.
I love the somewhat illogical phrase “kill her ex-husband multiple times.”
Russell pleaded guilty April 22, 2024, to stalking her former spouse and trying to have him killed amid a protracted custody battle over their two children. By accepting responsibility and avoiding a trial, Russell would have expected a lighter prison sentence. However, she again tried to find a hitman, this time asking inmates to help with the search, prosecutors alleged in court documents asking for a heftier prison sentence.
One rumor circulating at the pub which is a popular lunch spot near the doctor’s former office is that she used the Dark Web and struck up an online conversation with one of the investigators monitoring such activity.
Net net: The Dark Web is indeed interesting.
Stephen E Arnold, August 8, 2024
A Modern Spy Novel: A License to Snoop
April 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
“UK’s Investigatory Powers Bill to Become Law Despite Tech World Opposition” reports the Investigatory Powers Amendment Bill or IPB is now a law. In a nutshell, the law expands the scope of data collection by law enforcement and intelligence services. The Register, a UK online publication, asserts:
Before the latest amendments came into force, the IPA already allowed authorized parties to gather swathes of information on UK citizens and tap into telecoms activity – phone calls and SMS texts. The IPB’s amendments add to the Act’s existing powers and help authorities trawl through more data, which the government claims is a way to tackle “modern” threats to national security and the abuse of children.
Thanks, Copilot. A couple of omissions from my prompt, but your illustration is good enough.
One UK elected official said:
“Additional safeguards have been introduced – notably, in the most recent round of amendments, a ‘triple-lock’ authorization process for surveillance of parliamentarians – but ultimately, the key elements of the Bill are as they were in early versions – the final version of the Bill still extends the scope to collect and process bulk datasets that are publicly available, for example.”
Privacy advocates are concerned about expanding data collections’ scope. The Register points out that “big tech” feels as though it is being put on the hot seat. The article includes this statement:
Abigail Burke, platform power program manager at the Open Rights Group, previously told The Register, before the IPB was debated in parliament, that the proposals amounted to an “attack on technology.”
Several observations:
- The UK is a member in good standing of an intelligence sharing entity which includes Australia, Canada, New Zealand, and the US. These nation states watch one another’s activities and sometimes emulate certain policies and legal frameworks.
- The IPA may be one additional step on a path leading to a ban on end-to-end-encrypted messaging. Such a ban, if passed, would prove disruptive to a number of business functions. Bad actors will ignore such a ban and continue their effort to stay ahead of law enforcement using homomorphic encryption and other sophisticated techniques to keep certain content private.
- Opportunistic messaging firms like Telegram may incorporate technologies which effectively exploit modern virtual servers and other technology to deploy networks which are hidden and effectively less easily “seen” by existing monitoring technologies. Bad actors can implement new methods forcing LE and intelligence professionals to operate in reaction mode. IPA is unlikely to change this cat-and-mouse game.
- Each day brings news of new security issues with widely used software and operating systems. Banning encryption may have some interesting downstream and unanticipated effects.
Net net: I am not sure that modern threats will decrease under IPA. Even countries with the most sophisticated software, hardware, and humanware security systems can be blindsided. Gaffes in Israel have had devastating consequences that an IPA-type approach would not have remedied.
Stephen E Arnold, April 29, 2024
Lawyer, Former Government Official, and Podcaster to Head NSO Group
January 2, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The high-profile intelware and policeware vendor NSO Group has made clear that specialized software is a potent policing tool. NSO Group continues to market its products and services at low-profile trade shows like those sponsored by an obscure outfit in northern Virginia. Now the firm has found a new friend in a former US official. TechDirt reports, “Former DHS/NSA Official Stewart Baker Decides He Can Help NSO Group Turn a Profit.” Writer Tim Cushing tells us:
“This recent filing with the House of Representatives makes it official: Baker, along with his employer Steptoe and Johnson, will now be seeking to advance the interests of an Israeli company linked to abusive surveillance all over the world. In it, Stewart Baker is listed as the primary lobbyist. This is the same Stewart Baker who responded to the Commerce Department blacklist of NSO by saying it wouldn’t matter because authoritarians could always buy spyware from… say…. China.”
So, the reasoning goes, why not allow a Western company to fill that niche? This perspective apparently makes Baker just the fellow to help NSO buff up NSO Group’s reputation. Cushing predicts:
“The better Baker does clearing NSO’s tarnished name, the sooner it and its competitors can return to doing the things that got them in trouble in the first place. Once NSO is considered somewhat acceptable, it can go back to doing the things that made it the most money: i.e., hawking powerful phone exploits to human rights abusers. But this time, NSO has a former US government official in its back pocket. And not just any former government official but one who spent months telling US citizens who were horrified by the implications of the Snowden leaks that they were wrong for being alarmed about bulk surveillance.”
Perhaps the winning combination for NSO Group is a lawyer, former US government official, and podcaster in one sleek package? But there are now alternatives to the Pegasus solution. Some of these do not have the baggage carted around by the stealthy flying horse.
Perhaps there will be a podcast about NSO Group in the near future.
Cynthia Murrell, January 2, 2024
Facial Recognition: A Bit of Bias Perhaps?
November 24, 2023
This essay is the work of a dumb dinobaby. No smart software required.
It’s a running gag in the tech industry that AI algorithms and related advancements are “racist.” Motion sensors fail to register darkly pigmented skin. Photo recognition software has misidentified Black people and members of other ethnic groups as primates. AI-trained algorithms are also biased against ethnic minorities and women in the financial, business, and other sectors. AI is “racist” because it is trained on data sets heavy in white and male information.
Ars Technica shares another story about biased AI: “People Think White AI-Generated Faces Are More Real Than Actual Photos, Study Says.” The journal Psychological Science published a peer-reviewed study, “AI Hyperrealism: Why AI Faces Are Perceived As More Real Than Human Ones.” The study found that faces created with three-year-old AI technology were judged more real than actual ones. Predominantly, AI-generated faces of white people were perceived as the most realistic.
The study surveyed 124 white adults who were shown a mixture of 100 AI-generated images and 100 real ones. They identified 66% of the AI images as human, while only 51% of the real faces were judged real. Real and AI images of people with high amounts of melanin were viewed as real 51% of the time. The study also discovered that participants who made the most mistakes were also the most confident, a clear indicator of the Dunning-Kruger effect.
The researchers conducted a second study with 610 participants and learned:
“The analysis of participants’ responses suggested that factors like greater proportionality, familiarity, and less memorability led to the mistaken belief that AI faces were human. Basically, the researchers suggest that the attractiveness and "averageness" of AI-generated faces made them seem more real to the study participants, while the large variety of proportions in actual faces seemed unreal.
Interestingly, while humans struggled to differentiate between real and AI-generated faces, the researchers developed a machine-learning system capable of detecting the correct answer 94 percent of the time.”
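Those percentages are worth combining into a single accuracy figure, because doing so shows the participants performed worse than coin-flipping. The sketch below assumes the 100 AI / 100 real split reported for the first study.

```python
# Worked check on the first study's numbers (100 AI images, 100 real).
# 66% of AI faces were judged "human," so only 34% were correctly
# flagged as AI; 51% of real faces were correctly judged real.
n_ai, n_real = 100, 100
ai_judged_human = 0.66
real_judged_real = 0.51

correct = n_ai * (1 - ai_judged_human) + n_real * real_judged_real
accuracy = correct / (n_ai + n_real)
print(f"Overall accuracy: {accuracy:.1%}")  # below the 50% chance level
```

An overall accuracy of 42.5% versus the machine-learning detector’s 94% underscores how wide the human-machine gap has become.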
The study could be swung in the typical “racist” direction: AI will perpetuate social biases. The answer is simple and worth the investment: create better data sets to train AI algorithms.
Whitney Grace, November 24, 2023
A Rare Moment of Constructive Cooperation from Tech Barons
November 23, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Platform-hopping is one way bad actors have been able to cover their tracks. Now several companies are teaming up to limit that avenue for one particularly odious group. TechNewsWorld reports, “Tech Coalition Launches Initiative to Crackdown on Nomadic Child Predators.” The initiative is named Lantern, and the Tech Coalition includes Discord, Google, Mega, Meta, Quora, Roblox, Snap, and Twitch. Such cooperation is essential to combat a common tactic for grooming and/or sextortion: predators engage victims on one platform then move the discussion to a more private forum. Reporter John P. Mello Jr. describes how Lantern works:
“Participating companies upload ‘signals’ to Lantern about activity that violates their policies against child sexual exploitation identified on their platform.
Signals can be information tied to policy-violating accounts like email addresses, usernames, CSAM hashes, or keywords used to groom as well as buy and sell CSAM. Signals are not definitive proof of abuse. They offer clues for further investigation and can be the crucial piece of the puzzle that enables a company to uncover a real-time threat to a child’s safety.
Once signals are uploaded to Lantern, participating companies can select them, run them against their platform, review any activity and content the signal surfaces against their respective platform policies and terms of service, and take action in line with their enforcement processes, such as removing an account and reporting criminal activity to the National Center for Missing and Exploited Children and appropriate law enforcement agency.”
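The matching step described above boils down to comparing hashed identifiers against a shared set, so raw data never has to leave a company. A minimal sketch of that idea follows; the identifiers, formats, and function names here are hypothetical, since Lantern’s actual data formats and APIs are not described in the write-up.

```python
import hashlib

# Minimal sketch of hash-based signal matching, as in programs like
# Lantern. All names and data here are hypothetical illustrations;
# the program's real formats and processes are not public in this piece.
def sha256_hex(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Hashes another coalition member might have uploaded (illustrative).
shared_signals = {sha256_hex("flagged-user@example.com")}

# A platform scans its own records; only hashes cross the company
# boundary, never raw identifiers from other participants.
local_accounts = ["alice@example.com", "flagged-user@example.com"]
matches = [a for a in local_accounts if sha256_hex(a) in shared_signals]
print(matches)  # candidates for human review, not automatic action
```

Note the design choice the article emphasizes: a match is a clue for investigation under each platform’s own policies, not definitive proof of abuse.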
The visually oriented can find an infographic of this process in the write-up. We learn Lantern has been in development for two years. Why did it take so long to launch? Part of it was designing the program to be effective. Another part was to ensure it was managed responsibly: The project was subjected to a Human Rights Impact Assessment by the Business for Social Responsibility. Experts on child safety, digital rights, advocacy of marginalized communities, government, and law enforcement were also consulted. Finally, we’re told, measures were taken to ensure transparency and victims’ privacy.
In the past, companies hesitated to share such information lest they be considered culpable. However, some hope this initiative represents a perspective shift that will extend to other bad actors, like those who spread terrorist content. Perhaps. We shall see how much tech companies are willing to cooperate. They wouldn’t want to reveal too much to the competition just to help society, after all.
Cynthia Murrell, November 23, 2023
Predictive Analytics and Law Enforcement: Some Questions Arise
October 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
We wish we could prevent crime before it happens. With AI and predictive analytics it seems possible, but Wired shares that “Predictive Policing Software Terrible At Predicting Crimes.” Plainfield, NJ’s police department purchased Geolitica predictive software, and it was not a wise use of taxpayer money. The Markup, a nonprofit investigative organization that wants technology to serve the common good, reported on Geolitica’s accuracy:
“We examined 23,631 predictions generated by Geolitica between February 25 and December 18, 2018, for the Plainfield Police Department (PD). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent. Fewer than 100 of the predictions lined up with a crime in the predicted category, that was also later reported to police.”
The Markup also analyzed Geolitica’s predictions for robberies and aggravated assaults in Plainfield; the success rate was 0.6%. Burglary predictions were worse at 0.1%.
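The headline figure is easy to reconstruct from the numbers The Markup reports: fewer than 100 hits out of 23,631 predictions. The arithmetic below uses 100 as the stated upper bound.

```python
# Reconstructing The Markup's headline figure: fewer than 100 of
# 23,631 Geolitica predictions lined up with a later-reported crime.
total_predictions = 23_631
hits = 100  # upper bound given in the report

success_rate = hits / total_predictions
print(f"Success rate: {success_rate:.2%}")  # under half a percent
```

Even at the generous upper bound, the rate is about 0.42%, which is why "less than half a percent" is the accurate summary.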
The police weren’t really interested in using Geolitica either. They wanted to be accurate in predicting and reducing crime. The Plainfield PD hardly used the software and discontinued the program. Geolitica charged $20,500 for a one-year subscription, then $15,500 for yearly renewals. Geolitica’s information contained inconsistencies. Police found training and experience to be as effective as the predictions the software offered.
Geolitica will go out of business at the end of 2023. The law enforcement technology company SoundThinking hired Geolitica’s engineering team and will acquire some of its IP too. Police software companies are changing their products and services to manage police department data.
Crime data are important. Where crimes and victimization occur should be recorded and analyzed. Newark, New Jersey, used risk terrain modeling (RTM) to identify areas where aggravated assaults would occur. Analysts combined land-use data and found that vacant lots were major crime locations.
Predictive methods have value, but they also have application to specific use cases. Math is not the answer to some challenges.
Whitney Grace, October 17, 2023