Israeli Intelware: Is It Time to Question Its Value?
October 9, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
In 2013 (I believe that was the year), I attended an ISS TeleStrategies Conference. A friend of mine wanted me to see his presentation, and I was able to pass the Scylla-and-Charybdis-inspired security process and listen to the talk. (Last week I referenced that talk and quoted a statement posted on a slide for everyone in attendance to view. Yep, a quote from 2013, maybe earlier.)
After the talk, I walked quickly through the ISS exhibit hall. I won’t name the firms exhibiting because some of these are history (failures), some are super stealthy, and others have been purchased by other outfits as the intelware roll ups continue. I do recall a large number of intelware companies with their headquarters in or near Tel Aviv, Israel. My impression, as I recall, was that Israel’s butt-kicking software could make sense of social media posts, Dark Web forum activity, Facebook craziness, and Twitter disinformation. These Israeli outfits were then the alpha vendors. Now? Well, maybe a bit less alpha drifting to beta or gamma.
One major to another: “Do you think our intel was wrong?” The other officer says, “I sat in a briefing teaching me that our smart software analyzed social media in real time. We cannot be surprised. We have the super duper intelware.” The major says, jarred by an explosion, “Looks like we were snookered by some Madison Avenue double talk. Let’s take cover.” Thanks, MidJourney. You do understand going down in flames. Is that because you are thinking about your future?
My impression was that the Israeli-developed software shared a number of functional and visual similarities. I asked people at the conference if they had noticed the dark themes, the similar if not identical timeline functions, and the fondness for maps on which data were plotted and projected. “Peas in a pod,” my friend, a former NATO officer told me. Are not peas alike?
The reason — and no one has really provided this information — is that the developers shared a foxhole. The government entities in Israel train people with the software and systems proven over the years to be useful. The young trainees carry their learnings forward in their careers. Then, when mustered out, a few bright sparks form companies or join intelware giants like Verint and continue to enhance existing tools or build new ones. The idea is that life in the foxhole imbues those who experience it with certain similar mental furniture. The ideas, myths, and software experiences form the muddy floor and dirt walls of the foxhole. I suppose one could call this “digital bias,” which later manifests itself in the dozens of Tel Aviv-based intelware, policeware, and spyware companies’ products and services.
Why am I mentioning this?
The reason is that I was shocked and troubled by the alleged surprise attack. If you want to follow the activity, navigate to X.com and search that somewhat crippled system for #OSINT. Skip the “Top” tab and go to the “Latest” tab.
Several observations:
- Are the Israeli intelware products (many of which are controversial and expensive) flawed? Obviously excellent software processing “signals” was blind to the surprise attack, right?
- Are the Israeli professionals operating the software unable to use it to prevent surprise attacks? Obviously excellent software in the hands of well-trained professionals flags signals and allows action to be taken when warranted. Did that happen? Has Israeli intel training fallen short of its goal of protecting the nation? Hmmm. Maybe, yes.
- Have those who hype intelware and the excellence of a particular system and method been fooled, falling into the dark pit of OSINT blind spots like groupthink and “reasoning from anecdote, not fact”? I am leaning toward a “yes”, gentle reader.
The time for a critical look at what works and what doesn’t is what the British call “from this day” work. The years of marketing craziness are one thing, but when either the system or the method allows people to be killed without warning or cause, it broadcasts one message: “Folks, something is very, very wrong.”
Perhaps certification of these widely used systems is needed? Perhaps a hearing in an appropriate venue is warranted?
Blind spots can cause harm. Marketers can cause harm. Poorly trained operators can cause harm. Even foxholes require tidying up. Technology for intelligence applications is easy to talk about, but it is now clear to everyone engaged in making sense of signals that one country’s glammed-up systems missed the wicket.
Stephen E Arnold, October 9, 2023
Google and Its Use of the Word “Public”: A Clever and Revenue-Generating Policy Edit
July 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
If one has the cash, one can purchase user-generated data from more than 500 data publishers in the US. Some of these outfits are unknown. When a liberal Wall Street Journal reporter learns about Venntel or one of these outfits, outrage ensues. I am not going to explain how data from a user finds its way into the hands of a commercial data aggregator or database publisher. Why not Google it? Let me know how helpful that research will be.
Why are these outfits important? The reasons include:
- Direct from app information obtained when a clueless mobile user accepts the Terms of Use. Do you hear the slurping sounds?
- Organizations with financial data and savvy data wranglers who cross correlate data from multiple sources.
- Outfits which assemble real-time or near-real-time user location data. How useful are those data in identifying military locations with a population of individuals who exercise wearing helpful heart and step monitoring devices?
Navigate to “Google’s Updated Privacy Policy States It Can Use Public Data to Train its AI Models.” The write up does not make clear what “public data” are. My hunch is that the Google is not exceptionally helpful with its definitions of important “obvious” concepts. The disconnect is the point of the policy change. Public data or third-party data can be purchased, licensed, used on a cloud service like an Oracle-like BlueKai clone, or obtained as part of a commercial deal with everyone’s favorite online service LexisNexis or one of its units.
A big advertiser demonstrates joy after reading about Google’s detailed prospect targeting reports. Dossiers of big buck buyers are available to those relying on Google for online text and video sales and marketing. The image of this happy media buyer is from the elves at MidJourney.
The write up states with typical Silicon Valley “real” news flair:
By updating its policy, it’s letting people know and making it clear that anything they publicly post online could be used to train Bard, its future versions and any other generative AI product Google develops.
Okay. The “weekend” mentioned in the write up is the 4th of July weekend. Is this a hot news or a slow news time? If you picked “hot,” you are respectfully wrong.
Now back to “public.” Think in terms of Google’s licensing third-party data, cross correlating those data with its log data generated by users, and any proprietary data obtained by Google’s Android or Chrome software, Gmail, its office apps, and any other data which a user clicking one of those “Agree” boxes cheerfully mouses through.
The idea appears in Google patent US7774328 B2. What’s interesting is that this granted patent does not include a quite helpful figure from the patent application US2007/0198481. Here’s the 16-year-old figure. The subject is Michael Jackson. The text is difficult to read (write your Congressman or Senator to complain). The output is a machine generated dossier about the pop star. Note that it includes aliases. Other useful data are in the report. The granted patent presents more vanilla versions of the dossier generator, however.
The use of “public” data may enhance the type of dossier or other meaty report about a person. How about a map showing a person’s travels prior to generating a geo-fence around that individual’s location on a specific day and time? Useful for some applications? If these “inventions” are real, then the potential use cases are interesting. Advertisers will probably be interested? Can you think of other use cases? I can.
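To make the geo-fence idea concrete, here is a minimal sketch of what such a query might look like on timestamped location pings. This is my own toy illustration, not anything from the Google patent; the `Ping` record, the function names, and the circular-fence simplification are all my assumptions.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

@dataclass
class Ping:
    """One timestamped location record for a device (a hypothetical schema)."""
    device_id: str
    ts: datetime
    lat: float
    lon: float

def devices_in_geofence(pings, center_lat, center_lon, radius_km, start, end):
    """Return device ids with at least one ping inside the circular fence
    during the [start, end] time window."""
    return {
        p.device_id
        for p in pings
        if start <= p.ts <= end
        and haversine_km(p.lat, p.lon, center_lat, center_lon) <= radius_km
    }
```

The point of the sketch is how little machinery is needed: once a vendor holds the raw pings, a “who was near this spot on this day” report is a set comprehension, not rocket science.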
The cited article focuses on AI. I think that more substantive use cases fit nicely with the shift in “policy” for public data. Have you asked yourself, “What will Mandiant professionals find interesting in cross correlated data?”
Stephen E Arnold, July 6, 2023
NSO Group Restructuring Keeps Pegasus Aloft
July 4, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The NSO Group has been under fire from critics for the continuing deployment of its infamous Pegasus spyware. The company, however, might more resemble a different mythological creature, the phoenix: since its creditors pulled their support, NSO appears to be rising from the ashes.
Pegasus continues to fly. Can it monitor some of the people who have mobile phones? Not in ancient Greece. Other places? I don’t know. MidJourney’s creative powers do not shed light on this question.
The Register reports, “Pegasus-Pusher NSO Gets New Owner Keen on the Commercial Spyware Biz.” Reporter Jessica Lyons Hardcastle writes:
“Spyware maker NSO Group has a new ringleader, as the notorious biz seeks to revamp its image amid new reports that the company’s Pegasus malware is targeting yet more human rights advocates and journalists. Once installed on a victim’s device, Pegasus can, among other things, secretly snoop on that person’s calls, messages, and other activities, and access their phone’s camera without permission. This has led to government sanctions against NSO and a massive lawsuit from Meta, which the Supreme Court allowed to proceed in January. The Israeli company’s creditors, Credit Suisse and Senate Investment Group, foreclosed on NSO earlier this year, according to the Wall Street Journal, which broke that story the other day. Essentially, we’re told, NSO’s lenders forced the biz into a restructure and change of ownership after it ran into various government ban lists and ensuing financial difficulties. The new owner is a Luxembourg-based holding firm called Dufresne Holdings controlled by NSO co-founder Omri Lavie, according to the newspaper report. Corporate filings now list Dufresne Holdings as the sole shareholder of NSO parent company NorthPole.”
President Biden’s executive order notwithstanding, Hardcastle notes governments’ responses to spyware have been tepid at best. For example, she tells us, the EU opened an inquiry after spyware was found on phones associated with politicians, government officials, and civil society groups. The result? The launch of an organization to study the issue. Ah, bureaucracy! Meanwhile, Pegasus continues to soar.
Cynthia Murrell, July 4, 2023
Call 9-1-1. AI Will Say Hello Soon
June 20, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
My informal research suggests that every intelware and policeware vendor is working to infuse artificial intelligence, or in my lingo “smart software,” into their products and services. Most of these firms are not Chatty Cathies. Information about innovations dribbles out in talks given at restricted-attendance events. This means that information does not zip around like the #OSINT posts on the increasingly less used Twitter service.
Government officials talk about smart software which could reduce costs, but the current budget does not allow licensing it. Furthermore, time is required to rethink what to do with the humanoids who will be rendered surplus and ripe for RIF’ing. One of the attendees wisely asks, “Does anyone want dessert?” A wag of the dinobaby’s tail to MidJourney, which has generated an original illustration unrelated to any content object upon which the system inadvertently fed. Smart software has to gobble lunch just like government officials.
However, once in a while, some information becomes public and “real news” outfits recognize the value of the information and make useful factoids available. That’s what happened in “A.I. Call Taker Will Begin Taking Over Police Non-Emergency Phone Lines Next Week: ‘Artificial Intelligence Is Kind of a Scary Word for Us,’ Admits Dispatch Director.”
Let me highlight a couple of statements in the cited article.
First, I circled this statement about Portland, Oregon’s new smart system:
“A automated attendant will answer the phone on nonemergency and based on the answers using artificial intelligence—and that’s kind of a scary word for us at times—will determine if that caller needs to speak to an actual call taker,” BOEC director Bob Cozzie told city commissioners yesterday.
I found this interesting and suggestive of how some government professionals will view the smart software-infused system.
Second, I underlined this passage:
The new AI system was one of several new initiatives that were either announced or proposed at yesterday’s 90-minute city “work session” where commissioners grilled officials and consultants about potential ways to address the crisis.
The “crisis”, as I understand it, boils down to staffing and budgets.
Several observations:
- The write up takes a cautious approach to smart software. What will this mean for adoption of even more sophisticated services included in intelware and policeware solutions?
- The message I derived from the write up is that governmental entities are not sure what to do. Will this cloud of unknowing have an impact on adoption of AI-infused intelware and policeware systems?
- The article did not include information from the vendor. Does this fact reflect the reporter’s research, or does it suggest the vendor was not cooperative? Intelware and policeware companies are not particularly cooperative, nor are some of the firms set up to respond to outside inquiries. Will those marketing decisions slow down adoption of smart software?
I will let you ponder the implications of this brief, and not particularly detailed article. I would suggest that intelware and policeware vendors put on their marketing hats and plug them into smart software. Some new hurdles for making sales may be on the horizon.
Stephen E Arnold, June 20, 2023
NSO Group: How Easy Are Mobile Hacks?
April 25, 2023
I am at the 2023 US National Cyber Crime Conference, and I have been asked, “What companies offer NSO-type mobile phone capabilities?” My answer is, “Quite a few.” Will I name these companies in a free blog post? Sure, just call us at 1-800-YOU-WISH.
A more interesting question is, “Why is Israel-based NSO Group the pointy end of a three meter stick aimed at mobile devices?” (To get some public information about newly recognized NSO Group (Pegasus) tricks, navigate to “Triple Threat. NSO Group’s Pegasus Spyware Returns in 2022 with a Trio of iOS 15 and iOS 16 Zero-Click Exploit Chains.” I would point out that the reference to Access Now is interesting, and a crime analyst may find a few minutes examining what the organization does, its “meetings,” and its hosting services time well spent. Will I provide that information in a free blog post? Please call the 800 number listed above.)
Now let’s consider the question regarding the productivity of the NSO technical team.
First, Israel’s defense establishment contains many bright people and a world-class training program. What happens when you take well educated people, the threat of war without warning, and an outstanding in-service instructional set up? The answer is, “Ideas get converted into exercises. Exercises become test code. Test code gets revised. And the functional software becomes weaponized.”
Second, the “in our foxhole” mentality persists once trained military specialists leave the formal service and enter the commercial world. As a result, individuals who studied, worked, and in some cases, fought together set up companies. These individuals are a bit like beavers. Beavers do what beavers do. Some of these firms replicate functionality similar to that developed under the government’s watch and sell those products. Please note that NSO Group is an exception of sorts. Some of the “insights” originated when the founders were repairing mobile phones. The idea, however, is the same: learning, testing, deploying, and hiring individuals with specialized training from the Israeli government. Keep in mind the “in my foxhole” notion, please.
Third, important firms in Israel and, in some cases, government-assisted development programs directly or indirectly provide: [a] money, [b] meet-up opportunities like “tech fests” in Tel Aviv, and [c] suggestions about whom to hire, partner with, consult with, or be aware of.
Do these conditions exist in other countries? In my experience, to some degree this approach to mobile technology exploits does. There are important differences. If you want to know what these are, you know the answer. Buzz that 800 number.
My point is that the expertise, insights, systems, and methods of what the media calls “the NSO Group” have diffused. As a result, there are more choices than ever before when it comes to exploiting mobile devices.
Where’s Apple? Where’s Google? Where’s Samsung? The firms, in my opinion, are in reactive mode, and, in some cases, they don’t know what they don’t know.
Stephen E Arnold, April 25, 2023
Is Intelware Square Dancing in Israel?
March 10, 2023
It is a hoedown. Allemande Left. Do Si Do. Circle Left. Now Promenade. I can hear the tune in “NSO Group Co-Founder Emerges As New Majority Owner.” My toe was tapping when I read:
Omri Lavie – the “O” in NSO Group … appears to have emerged as the company’s new majority owner. Luxembourg filings show that Lavie’s investment firm, Dufresne Holding, is – for now – the sole owner of a Luxembourg-based holding company that ultimately owns NSO Group.
What’s the company’s technology enable? The Guardian says:
Pegasus can hack into any phone without leaving an obvious trace, enabling users to gain access to a person’s encrypted calls and chats, photographs, emails, and any other information held on a phone. It can also be used to turn a phone into a remote listening device by controlling its recorder.
Is the Guardian certain that this statement embraces the scope of the NSO Group’s capabilities? I don’t know. But the real newspaper sounds sure that it has its facts lined up.
Was the transition smooth? Well, there may have been some choppy water as the new owner boarded. The article reports:
[The] move follows in the wake of multiple legal fights between NSO and a US-based financial company that is now known as Treo, which controls the equity fund that owns a majority stake in NSO. A person familiar with the matter said Treo had been alerted to the change in ownership of the company’s shares in a recent letter by Lavie, which appears to have caught the financial group by surprise. The person said Treo was still trying to figure out the financial mechanism that Lavie had used to assume control of the shares, but that it believed the company’s financial lenders had, in effect, ceded control of the group to the Israeli founder.
I find it interesting when the milieu of intelligence professionals intersects with go-go money people. Is Treo surprised?
Allemande Right. Do Si Do. Promenade home.
Stephen E Arnold, March 10, 2023
Adulting Desperation at TikTok? More of a PR Play for Sure
March 1, 2023
TikTok is allegedly harvesting data from its users and allegedly making that data accessible to government-associated research teams in China. The story “TikTok to Set One-Hour Daily Screen Time Limit by Default for Users under 18” makes clear that TikTok is in concession mode. The write up says:
TikTok announced Wednesday that every user under 18 will soon have their accounts default to a one-hour daily screen time limit, in one of the most aggressive moves yet by a social media company to prevent teens from endlessly scrolling….
Now here’s the part I liked:
Teenage TikTok users will be able to turn off this new default setting… [emphasis added]
The TikTok PR play misses the point. Despite the yip yap about Oracle as an intermediary, the core issue is suspicion that TikTok is sucking down data. Some of the information can be cross correlated with psychological profiles. How useful would it be to know that a TikTok behavior suggests a person who may be susceptible to outside pressure, threats, or bribes? No big deal? Well, it is a big deal because some young people enlist in the US military and others take jobs at government entities. How about those youthful contractors swarming around Executive Branch agencies’ computer systems, Congressional offices, and some interesting facilities involved with maps and geospatial work?
I have talked about TikTok risks for years. Now we get a limit on usage?
Hey, that’s progress like making a square wheel out of stone.
Stephen E Arnold, March 1, 2023
A Challenge for Intelware: Outputs Based on Baloney
February 23, 2023
I read a thought-troubling write up “Chat GPT: Writing Could Be on the Wall for Telling Human and AI Apart.” The main idea is:
historians will struggle to tell which texts were written by humans and which by artificial intelligence unless a “digital watermark” is added to all computer-generated material…
I noted this passage:
Last month researchers at the University of Maryland in the US said it was possible to “embed signals into generated text that are invisible to humans but algorithmically detectable” by identifying certain patterns of word fragments.
Great idea except:
- The US smart software is not the only code a bad actor could use. Germany’s wizards are moving forward with Aleph Alpha
- There is an assumption that “old” digital information will be available. Digital ephemera applies to everything from information on government Web sites which get minimal traffic to cost cutting at Web indexing outfits which see “old” data as a drain on profits, not a boon to historians
- Digital watermarks are likely to be like “bulletproof” hosting and advanced cyber security systems: The bullets get through and the cyber security systems are insecure.
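For readers curious what “embedding signals into generated text” can mean in practice, here is a deliberately simplified Python sketch of the general idea: use the previous token to pseudorandomly split the vocabulary into a “green” half the generator favors, then detect the watermark by counting how often tokens land in their predecessor’s green half. This is a toy illustration of the concept, not the Maryland researchers’ actual method; the function names and the 50/50 split are my assumptions.

```python
import hashlib

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token.
    A watermarking generator would bias its sampling toward this 'green' half."""
    ranked = sorted(
        vocab,
        key=lambda w: hashlib.sha256((prev_token + "|" + w).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector: the share of tokens falling in the green set seeded by their
    predecessor. Unwatermarked text hovers near the fraction (0.5 here);
    watermarked text runs noticeably higher."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_set(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

The sketch also makes the third bullet’s skepticism concrete: a paraphrasing pass that swaps tokens wholesale scrambles the predecessor-seeded partitions, and the detector’s signal decays, which is the textual equivalent of bullets getting through.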
What about intelware for law enforcement and intelligence professionals, crime analysts, and as-yet-unreplaced paralegals trying to make sense of available information? GIGO: Garbage in, garbage out.
Stephen E Arnold, February 23, 2023
Synthetic Content: A Challenge with No Easy Answer
January 30, 2023
Open source intelligence is the go-to method for many crime analysts, investigators, and intelligence professionals. Whether from social media or third-party data from marketing companies, useful insights can be obtained. The upside of OSINT leads many of its supporters to downplay or sidestep its downsides. I call these “OSINT blind spots,” and each day I see more information about what is becoming a challenge.
For example, “As Deepfakes Flourish, Countries Struggle with Response” is a useful summary of one problem posed by synthetic (fake) content. What looks “real” may not be. A person sifting through data must assume that information is suspect. Verification is needed. But synthetic content tools can output multiple instances of fake information and then populate channels with “verification” statements of the initial item of information.
The article states:
Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a crypto currency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone. In most of the world, authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.
For some government professionals, the article says:
problematic applications are also plentiful. Legal experts worry that deepfakes could be misused to erode trust in surveillance videos, body cameras and other evidence. (A doctored recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s lawyer.) Digital forgeries could discredit or incite violence against police officers, or send them on wild goose chases. The Department of Homeland Security has also identified risks including cyber bullying, blackmail, stock manipulation and political instability.
The most interesting statement in the essay, in my opinion, is this one:
Some experts predict that as much as 90 per cent of online content could be synthetically generated within a few years.
The number may overstate what will happen because no one knows the uptake of smart software and the applications to which the technology will be put.
Thinking in terms of OSINT blindspots, there are some interesting angles to consider:
- Assume the write up is correct and 90 percent of content is authored by smart software: how does a person or system determine accuracy? What happens when a self-learning system learns from itself?
- How does a human determine what is correct or incorrect? Education appears to be struggling to teach basic skills. What about journals with non-reproducible results which spawn volumes of synthetic information about flawed research? Is a person, even one with training in a narrow discipline, able to determine “right” or “wrong” in a digital environment?
- Are institutions like libraries being further marginalized? The machine generated content will exceed a library’s capacity to acquire certain types of information. Does one acquire books which are “right” when machine generated content produces information that shouts “wrong”?
- What happens to automated sense making systems which have been engineered on the often flawed assumption that available data and information are correct?
Perhaps an OSINT blind spot is a precursor to going blind, unsighted, or dark?
Stephen E Arnold, January 30, 2023
The LaundroGraph: Bad Actors Be On Your Toes
January 20, 2023
Now here is a valuable use of machine learning technology. India’s DailyHunt reveals, “This Deep Learning Technology Is a Money-Launderer’s Worst Nightmare.” The software, designed to help disrupt criminal money laundering operations, is the product of financial data-science firm Feedzai of Portugal. We learn:
“The Feedzai team developed LaundroGraph, a self-supervised model that might reduce the time-consuming process of assessing vast volumes of financial interactions for suspicious transactions or monetary exchanges, in a paper presented at the 3rd ACM International Conference on AI in Finance. Their approach is based on a graph neural network, which is an artificial neural network or ANN built to process vast volumes of data in the form of a graph.”
The AML (anti-money laundering) software simplifies the job of human analysts, who otherwise must manually peruse entire transaction histories in search of unusual activity. The article quotes researcher Mario Cardoso:
“Cardoso explained, ‘LaundroGraph generates dense, context-aware representations of behavior that are decoupled from any specific labels.’ ‘It accomplishes this by utilizing both structural and features information from a graph via a link prediction task between customers and transactions. We define our graph as a customer-transaction bipartite graph generated from raw financial movement data.’ Feedzai researchers put their algorithm through a series of tests to see how well it predicted suspicious transfers in a dataset of real-world transactions. They discovered that it had much greater predictive power than other baseline measures developed to aid anti-money laundering operations. ‘Because it does not require labels, LaundroGraph is appropriate for a wide range of real-world financial applications that might benefit from graph-structured data,’ Cardoso explained.”
For those who are unfamiliar but curious (like me), navigate to this explanation of bipartite graphs. The future applications Cardoso envisions include detecting other financial crimes like fraud. Since the researchers intend to continue developing their tools, financial crimes may soon become much trickier to pull off.
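To ground the “customer-transaction bipartite graph” phrasing from the paper, here is a minimal Python sketch of that data structure plus a naive common-neighbors link score. To be clear, LaundroGraph itself is a graph neural network; this sketch only illustrates the bipartite layout and the flavor of a link-prediction signal, not Feedzai’s model, and the class and method names are my own inventions.

```python
from collections import defaultdict

class BipartiteGraph:
    """Customer-transaction bipartite graph built from raw movement records.
    Edges run only between the two node sets, never within one set."""

    def __init__(self):
        self.customer_to_txns = defaultdict(set)
        self.txn_to_customers = defaultdict(set)

    def add_edge(self, customer: str, txn: str) -> None:
        """Record that a customer participated in a transaction."""
        self.customer_to_txns[customer].add(txn)
        self.txn_to_customers[txn].add(customer)

    def common_neighbors_score(self, customer: str, txn: str) -> int:
        """Naive link-prediction score: count transactions the candidate
        customer shares with other customers tied to this transaction.
        Higher scores suggest the customer-transaction link is 'expected';
        suspicious activity tends to score low against normal behavior."""
        score = 0
        for other in self.txn_to_customers[txn]:
            if other != customer:
                score += len(
                    self.customer_to_txns[customer] & self.customer_to_txns[other]
                )
        return score
```

A GNN replaces this hand-rolled overlap count with learned, context-aware embeddings, which is exactly the “dense representations decoupled from labels” Cardoso describes, but the underlying graph being fed to the model looks like this.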
Cynthia Murrell, January 20, 2023