Google and Its Use of the Word “Public”: A Clever and Revenue-Generating Policy Edit

July 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

If one has the cash, one can purchase user-generated data from more than 500 data publishers in the US. Some of these outfits are unknown. When a liberal Wall Street Journal reporter learns about Venntel or one of these outfits, outrage ensues. I am not going to explain how data from a user finds its way into the hands of a commercial data aggregator or database publisher. Why not Google it? Let me know how helpful that research will be.

Why are these outfits important? The reasons include:

  1. Direct-from-app information obtained when a clueless mobile user accepts the Terms of Use. Do you hear the slurping sounds?
  2. Organizations with financial data and savvy data wranglers who cross-correlate data from multiple sources.
  3. Outfits which assemble real-time or near-real-time user location data. How useful are those data in identifying military locations populated by individuals who exercise wearing helpful heart and step monitoring devices?

Navigate to “Google’s Updated Privacy Policy States It Can Use Public Data to Train its AI Models.” The write up does not make clear what “public data” are. My hunch is that the Google is not exceptionally helpful with its definitions of important “obvious” concepts. The disconnect is the point of the policy change. Public data or third-party data can be purchased, licensed, used on a cloud service like an Oracle-like BlueKai clone, or obtained as part of a commercial deal with everyone’s favorite online service LexisNexis or one of its units.


A big advertiser demonstrates joy after reading about Google’s detailed prospect targeting reports. Dossiers of big buck buyers are available to those relying on Google for online text and video sales and marketing. The image of this happy media buyer is from the elves at MidJourney.

The write up states with typical Silicon Valley “real” news flair:

By updating its policy, it’s letting people know and making it clear that anything they publicly post online could be used to train Bard, its future versions and any other generative AI product Google develops.

Okay. The “weekend” mentioned in the write up is the 4th of July weekend. Is this a hot news or a slow news time? If you picked “hot,” you are respectfully wrong.

Now back to “public.” Think in terms of Google’s licensing third-party data, cross correlating those data with its log data generated by users, and any proprietary data obtained by Google’s Android or Chrome software, Gmail, its office apps, and any other data which a user clicking one of those “Agree” boxes cheerfully mouses through.
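Mechanically, this kind of cross correlation is just a join on a shared identifier. The sketch below is a toy illustration of the idea only; the identifiers, segment labels, and field names are entirely hypothetical and do not describe any actual Google system:

```python
# Hypothetical records: a licensed third-party segment file and internal
# activity logs, cross-correlated on a shared identifier (a hashed email here).
third_party = {
    "h1": {"segment": "luxury-auto-intender"},
    "h2": {"segment": "new-parent"},
}
logs = [
    {"user_hash": "h1", "query": "suv lease deals"},
    {"user_hash": "h3", "query": "weather"},
]

def cross_correlate(logs, third_party):
    # Enrich each log row with the third-party segment, when one exists.
    enriched = []
    for row in logs:
        segment = third_party.get(row["user_hash"], {}).get("segment")
        enriched.append({**row, "segment": segment})  # None when no match
    return enriched
```

The point of the sketch: once two datasets share any stable key, combining them is trivial, which is why the definition of “public” matters so much.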

The idea is described in Google patent US7774328 B2. What’s interesting is that this granted patent does not include a quite helpful figure from the patent application US 2007/0198481. Here’s the 16-year-old figure. The subject is Michael Jackson. The text is difficult to read (write your Congressman or Senator to complain). The output is a machine-generated dossier about the pop star. Note that it includes aliases. Other useful data are in the report. The granted patent presents more vanilla versions of the dossier generator, however.

[Figure: machine-generated profile from patent application US 2007/0198481]

The use of “public” data may enhance the type of dossier or other meaty report about a person. How about a map showing a person’s travels before drawing a geo-fence around that individual’s location on a specific day and time? Useful for some applications? If these “inventions” are real, then the potential use cases are interesting. Advertisers will probably be interested. Can you think of other use cases? I can.
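The geo-fence piece of such an “invention” is simple arithmetic: compute the great-circle distance from a location ping to a point of interest and test it against a radius. A minimal sketch; the coordinates below are illustrative, not from any actual system:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometers.
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(point, center, radius_km):
    # True when the ping falls within radius_km of the fence center.
    return haversine_km(point[0], point[1], center[0], center[1]) <= radius_km
```

With a stream of timestamped pings, this one comparison per ping is all that separates “location data” from “who was near this address at this hour.”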

The cited article focuses on AI. I think that more substantive use cases fit nicely with the shift in “policy” for public data. Have you asked yourself, “What will Mandiant professionals find interesting in cross correlated data?”

Stephen E Arnold, July 6, 2023

NSO Group Restructuring Keeps Pegasus Aloft

July 4, 2023


The NSO Group has been under fire from critics for the continuing deployment of its infamous Pegasus spyware. The company, however, might now resemble a different mythological creature, the phoenix: since its creditors pulled their support, NSO appears to be rising from the ashes.


Pegasus continues to fly. Can it monitor some of the people who have mobile phones? Not in ancient Greece. Other places? I don’t know. MidJourney’s creative powers do not shed light on this question.

The Register reports, “Pegasus-Pusher NSO Gets New Owner Keen on the Commercial Spyware Biz.” Reporter Jessica Lyons Hardcastle writes:

“Spyware maker NSO Group has a new ringleader, as the notorious biz seeks to revamp its image amid new reports that the company’s Pegasus malware is targeting yet more human rights advocates and journalists. Once installed on a victim’s device, Pegasus can, among other things, secretly snoop on that person’s calls, messages, and other activities, and access their phone’s camera without permission. This has led to government sanctions against NSO and a massive lawsuit from Meta, which the Supreme Court allowed to proceed in January. The Israeli company’s creditors, Credit Suisse and Senate Investment Group, foreclosed on NSO earlier this year, according to the Wall Street Journal, which broke that story the other day. Essentially, we’re told, NSO’s lenders forced the biz into a restructure and change of ownership after it ran into various government ban lists and ensuing financial difficulties. The new owner is a Luxembourg-based holding firm called Dufresne Holdings controlled by NSO co-founder Omri Lavie, according to the newspaper report. Corporate filings now list Dufresne Holdings as the sole shareholder of NSO parent company NorthPole.”

President Biden’s executive order notwithstanding, Hardcastle notes governments’ responses to spyware have been tepid at best. For example, she tells us, the EU opened an inquiry after spyware was found on phones associated with politicians, government officials, and civil society groups. The result? The launch of an organization to study the issue. Ah, bureaucracy! Meanwhile, Pegasus continues to soar.

Cynthia Murrell, July 4, 2023

Call 9-1-1. AI Will Say Hello Soon

June 20, 2023


My informal research suggests that every intelware and policeware vendor is working to infuse artificial intelligence, or in my lingo “smart software,” into their products and services. Most of these firms are not Chatty Cathies. Information about innovations dribbles out in talks given at restricted-attendance events. This means that information does not zip around like the posts on the increasingly less used Twitter service #osint.


Government officials talk about smart software which could reduce costs, but the current budget does not allow its licensing. Furthermore, time is required to rethink what to do with the humanoids who will be rendered surplus and ripe for RIF’ing. One of the attendees wisely asks, “Does anyone want dessert?” A wag of the dinobaby’s tail to MidJourney, which has generated an original illustration unrelated to any content object upon which the system inadvertently fed. Smart software has to gobble lunch just like government officials.

However, once in a while, some information becomes public, and “real news” outfits recognize the value of the information and make useful factoids available. That’s what happened in “A.I. Call Taker Will Begin Taking Over Police Non-Emergency Phone Lines Next Week: ‘Artificial Intelligence Is Kind of a Scary Word for Us,’ Admits Dispatch Director.”

Let me highlight a couple of statements in the cited article.

First, I circled this statement about Portland, Oregon’s new smart system:

“An automated attendant will answer the phone on nonemergency and based on the answers using artificial intelligence—and that’s kind of a scary word for us at times—will determine if that caller needs to speak to an actual call taker,” BOEC director Bob Cozzie told city commissioners yesterday.

I found this interesting and suggestive of how some government professionals will view the smart software-infused system.

Second, I underlined this passage:

The new AI system was one of several new initiatives that were either announced or proposed at yesterday’s 90-minute city “work session” where commissioners grilled officials and consultants about potential ways to address the crisis.

The “crisis”, as I understand it, boils down to staffing and budgets.

Several observations:

  1. The write up takes a cautious approach to smart software. What will this mean for adoption of even more sophisticated services included in intelware and policeware solutions?
  2. The message I derived from the write up is that governmental entities are not sure what to do. Will this cloud of unknowing have an impact on adoption of AI-infused intelware and policeware systems?
  3. The article did not include information from the vendor. Does this fact reflect the reporter’s research, or does it suggest the vendor was not cooperative? Intelware and policeware companies are not particularly cooperative, nor are some of the firms set up to respond to outside inquiries. Will those marketing decisions slow down adoption of smart software?

I will let you ponder the implications of this brief, and not particularly detailed article. I would suggest that intelware and policeware vendors put on their marketing hats and plug them into smart software. Some new hurdles for making sales may be on the horizon.

Stephen E Arnold, June 20, 2023

NSO Group: How Easy Are Mobile Hacks?

April 25, 2023

I am at the 2023 US National Cyber Crime Conference, and I have been asked, “What companies offer NSO-type mobile phone capabilities?” My answer is, “Quite a few.” Will I name these companies in a free blog post? Sure, just call us at 1-800-YOU-WISH.

A more interesting question is, “Why is Israel-based NSO Group the pointy end of a three meter stick aimed at mobile devices?” (To get some public information about newly recognized NSO Group (Pegasus) tricks, navigate to “Triple Threat. NSO Group’s Pegasus Spyware Returns in 2022 with a Trio of iOS 15 and iOS 16 Zero-Click Exploit Chains.” I would point out that the reference to Access Now is interesting, and a crime analyst may find a few minutes examining what the organization does, its “meetings,” and its hosting services time well spent. Will I provide that information in a free blog post? Please call the 800 number listed above.)

Now let’s consider the question regarding the productivity of the NSO technical team.

First, Israel’s defense establishment contains many bright people and a world-class training program. What happens when you take well educated people, the threat of war without warning, and an outstanding in-service instructional set up? The answer is, “Ideas get converted into exercises. Exercises become test code. Test code gets revised. And the functional software becomes weaponized.”

Second, the “in our foxhole” mentality persists once trained military specialists leave formal service and enter the commercial world. As a result, individuals who studied, worked, and in some cases, fought together set up companies. These individuals are a bit like beavers. Beavers do what beavers do. Some of these firms replicate functionality similar to that developed under the government’s watch and sell those products. Please note that NSO Group is an exception of sorts. Some of the “insights” originated when the founders were repairing mobile phones. The idea, however, is the same: learning, testing, deploying, and hiring individuals with specialized training by the Israeli government. Keep in mind the “in my foxhole” notion, please.

Third, important firms in Israel and, in some cases, government-assisted development programs directly or indirectly provide: [a] money, [b] meet-up opportunities like “tech fests” in Tel Aviv, and [c] suggestions about whom to hire, partner with, consult with, or be aware of.

Do these conditions exist in other countries? In my experience, to some degree this approach to mobile technology exploits does. There are important differences. If you want to know what these are, you know the answer. Buzz that 800 number.

My point is that the expertise, insights, systems, and methods of what the media calls “the NSO Group” have diffused. As a result, there are more choices than ever before when it comes to exploiting mobile devices.

Where’s Apple? Where’s Google? Where’s Samsung? The firms, in my opinion, are in reactive mode, and, in some cases, they don’t know what they don’t know.

Stephen E Arnold, April 25, 2023

Is Intelware Square Dancing in Israel?

March 10, 2023

It is a hoedown. Allemande Left. Do Si Do. Circle Left. Now Promenade. I can hear the tune in “NSO Group Co-Founder Emerges As New Majority Owner.” My toe was tapping when I read:

Omri Lavie – the “O” in NSO Group … appears to have emerged as the company’s new majority owner. Luxembourg filings show that Lavie’s investment firm, Dufresne Holding, is – for now – the sole owner of a Luxembourg-based holding company that ultimately owns NSO Group.

What’s the company’s technology enable? The Guardian says:

Pegasus can hack into any phone without leaving an obvious trace, enabling users to gain access to a person’s encrypted calls and chats, photographs, emails, and any other information held on a phone. It can also be used to turn a phone into a remote listening device by controlling its recorder.

Is the Guardian certain that this statement embraces the scope of the NSO Group’s capabilities? I don’t know. But the real newspaper sounds sure that it has its facts lined up.

Was the transition smooth? Well, there may have been some choppy water as the new owner boarded. The article reports:

[The] move follows in the wake of multiple legal fights between NSO and a US-based financial company that is now known as Treo, which controls the equity fund that owns a majority stake in NSO. A person familiar with the matter said Treo had been alerted to the change in ownership of the company’s shares in a recent letter by Lavie, which appears to have caught the financial group by surprise. The person said Treo was still trying to figure out the financial mechanism that Lavie had used to assume control of the shares, but that it believed the company’s financial lenders had, in effect, ceded control of the group to the Israeli founder.

I find it interesting when the milieu of intelligence professionals intersects with go-go money people. Is Treo surprised?

Allemande Right. Do Si Do. Promenade home.

Stephen E Arnold, March 10, 2023

Adulting Desperation at TikTok? More of a PR Play for Sure

March 1, 2023

TikTok is allegedly harvesting data from its users and allegedly making that data accessible to government-associated research teams in China. The story “TikTok to Set One-Hour Daily Screen Time Limit by Default for Users under 18” makes clear that TikTok is in concession mode. The write up says:

TikTok announced Wednesday that every user under 18 will soon have their accounts default to a one-hour daily screen time limit, in one of the most aggressive moves yet by a social media company to prevent teens from endlessly scrolling….

Now here’s the part I liked:

Teenage TikTok users will be able to turn off this new default setting… [emphasis added]

The TikTok PR play misses the point. Despite the yip yap about Oracle as an intermediary, the core issue is suspicion that TikTok is sucking down data. Some of the information can be cross correlated with psychological profiles. How useful would it be to know that a TikTok user’s behavior suggests a person who may be susceptible to outside pressure, threats, or bribes? No big deal? Well, it is a big deal because some young people enlist in the US military and others take jobs at government entities. How about those youthful contractors swarming around Executive Branch agencies’ computer systems, Congressional offices, and some interesting facilities involved with maps and geospatial work?

I have talked about TikTok risks for years. Now we get a limit on usage?

Hey, that’s progress like making a square wheel out of stone.

Stephen E Arnold, March 1, 2023

A Challenge for Intelware: Outputs Based on Baloney

February 23, 2023

I read a thought-troubling write up “Chat GPT: Writing Could Be on the Wall for Telling Human and AI Apart.” The main idea is:

historians will struggle to tell which texts were written by humans and which by artificial intelligence unless a “digital watermark” is added to all computer-generated material…

I noted this passage:

Last month researchers at the University of Maryland in the US said it was possible to “embed signals into generated text that are invisible to humans but algorithmically detectable” by identifying certain patterns of word fragments.
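The word-fragment approach the researchers describe is often implemented as a “green list” watermark: a hash of the preceding token deterministically splits the vocabulary, the watermarking generator prefers the “green” half, and a detector later counts how often that preference shows up. The toy sketch below is my own illustration of the idea; the vocabulary, hashing scheme, and split fraction are assumptions, not the Maryland team’s actual implementation:

```python
import hashlib
import random

# Tiny stand-in vocabulary; a real system works over a model's full token set.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]

def green_list(prev_token, vocab=VOCAB, fraction=0.5):
    # Seed a PRNG with a hash of the previous token so anyone who knows the
    # scheme can reproduce the same green/red split of the vocabulary.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(vocab) * fraction)])

def green_fraction(tokens):
    # A watermarked generator would have favored green-listed words; the
    # detector simply measures how often each token lands in the green list
    # determined by its predecessor. Human text should hover near the
    # split fraction; watermarked text should score well above it.
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    return hits / max(len(tokens) - 1, 1)
```

The detector needs no access to the original model, only the hashing scheme, which is what makes the signal “invisible to humans but algorithmically detectable.”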

Great idea except:

  1. The US smart software is not the only code a bad actor could use. Germany’s wizards are moving forward with Aleph Alpha.
  2. There is an assumption that “old” digital information will be available. Digital ephemera applies to everything, from information on government Web sites which get minimal traffic to cost cutting at Web indexing outfits which see “old” data as a drain on profits, not a boon to historians.
  3. Digital watermarks are likely to be like “bulletproof” hosting and advanced cyber security systems: the bullets get through, and the cyber security systems are insecure.

What about intelware for law enforcement and intelligence professionals, crime analysts, and as-yet-unreplaced paralegals trying to make sense of available information? GIGO: Garbage in, garbage out.

Stephen E Arnold, February 23, 2023

Synthetic Content: A Challenge with No Easy Answer

January 30, 2023

Open source intelligence is the go-to method for many crime analysts, investigators, and intelligence professionals. Whether social media or third-party data from marketing companies, useful insights can be obtained. The upside of OSINT means that many of its supporters downplay or choose to sidestep its downsides. I call this “OSINT blindspots”, and each day I see more information about what is becoming a challenge.

For example, “As Deepfakes Flourish, Countries Struggle with Response” is a useful summary of one problem posed by synthetic (fake) content. What looks “real” may not be. A person sifting through data must assume that information is suspect. Verification is needed. But a synthetic content system can output multiple instances of fake information and then populate channels with “verification” statements supporting the initial item of information.

The article states:

Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a crypto currency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone. In most of the world, authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

For some government professionals, the article says:

problematic applications are also plentiful. Legal experts worry that deepfakes could be misused to erode trust in surveillance videos, body cameras and other evidence. (A doctored recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s lawyer.) Digital forgeries could discredit or incite violence against police officers, or send them on wild goose chases. The Department of Homeland Security has also identified risks including cyber bullying, blackmail, stock manipulation and political instability.

The most interesting statement in the essay, in my opinion, is this one:

Some experts predict that as much as 90 per cent of online content could be synthetically generated within a few years.

The number may overstate what will happen because no one knows the uptake of smart software and the applications to which the technology will be put.

Thinking in terms of OSINT blindspots, there are some interesting angles to consider:

  1. Assume the write up is correct and 90 percent of content is authored by smart software. How does a person or system determine accuracy? What happens when a self-learning system learns from itself?
  2. How does a human determine what is correct or incorrect? Education appears to be struggling to teach basic skills. What about journals with non-reproducible results which spawn volumes of synthetic information about flawed research? Is a person, even one with training in a narrow discipline, able to determine “right” or “wrong” in a digital environment?
  3. Are institutions like libraries being further marginalized? Machine-generated content will exceed a library’s capacity to acquire certain types of information. Does one acquire books which are “right” when machine-generated content produces information that shouts “wrong”?
  4. What happens to automated sense-making systems which have been engineered on the often flawed assumption that available data and information are correct?

Perhaps an OSINT blind spot is a precursor to going blind, unsighted, or dark?

Stephen E Arnold, January 30, 2023

The LaundroGraph: Bad Actors Be On Your Toes

January 20, 2023

Now here is a valuable use of machine learning technology. India’s DailyHunt reveals, “This Deep Learning Technology Is a Money-Launderer’s Worst Nightmare.” The software, designed to help disrupt criminal money laundering operations, is the product of financial data-science firm Feedzai of Portugal. We learn:

“The Feedzai team developed LaundroGraph, a self-supervised model that might reduce the time-consuming process of assessing vast volumes of financial interactions for suspicious transactions or monetary exchanges, in a paper presented at the 3rd ACM International Conference on AI in Finance. Their approach is based on a graph neural network, which is an artificial neural network or ANN built to process vast volumes of data in the form of a graph.”

The AML (anti-money laundering) software simplifies the job of human analysts, who otherwise must manually peruse entire transaction histories in search of unusual activity. The article quotes researcher Mario Cardoso:

“Cardoso explained, ‘LaundroGraph generates dense, context-aware representations of behavior that are decoupled from any specific labels.’ ‘It accomplishes this by utilizing both structural and features information from a graph via a link prediction task between customers and transactions. We define our graph as a customer-transaction bipartite graph generated from raw financial movement data.’ Feedzai researchers put their algorithm through a series of tests to see how well it predicted suspicious transfers in a dataset of real-world transactions. They discovered that it had much greater predictive power than other baseline measures developed to aid anti-money laundering operations. ‘Because it does not require labels, LaundroGraph is appropriate for a wide range of real-world financial applications that might benefit from graph-structured data,’ Cardoso explained.”

For those who are unfamiliar but curious (like me), navigate to this explanation of bipartite graphs. The future applications Cardoso envisions include detecting other financial crimes like fraud. Since the researchers intend to continue developing their tools, financial crimes may soon become much trickier to pull off.
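To make the bipartite idea concrete, here is a minimal standard-library sketch of a customer-transaction bipartite graph built from raw movement data, plus a crude “shared transaction” flag. The data and the heuristic are illustrative only; LaundroGraph itself applies a graph neural network and link prediction to such a graph, not this simple counting:

```python
from collections import defaultdict

# Toy raw financial movement data: (customer, transaction_id, amount).
movements = [
    ("alice", "t1", 120.0),
    ("alice", "t2", 9500.0),
    ("bob",   "t2", 9500.0),
    ("bob",   "t3", 40.0),
]

def build_bipartite(movements):
    # One node set for customers, one for transactions; an edge links a
    # customer to every transaction the raw data associates with them.
    cust_to_tx = defaultdict(set)
    tx_to_cust = defaultdict(set)
    for customer, tx, _amount in movements:
        cust_to_tx[customer].add(tx)
        tx_to_cust[tx].add(customer)
    return cust_to_tx, tx_to_cust

cust_to_tx, tx_to_cust = build_bipartite(movements)

# Transactions touching more than one customer are candidate hand-off points
# an analyst might inspect first.
shared = [tx for tx, custs in tx_to_cust.items() if len(custs) > 1]
```

The graph structure is the point: once customers and transactions are nodes rather than rows, patterns that span many accounts become local neighborhoods instead of needle-in-haystack searches.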

Cynthia Murrell, January 20, 2023

The Intelware Sector: In the News Again

January 13, 2023

It’s Friday the 13th. Bad luck day for Voyager Labs, an Israel-based intelware vendor. But maybe there is bad luck for Facebook or Meta or whatever the company calls itself. Will there be more bad luck for outfits chasing specialized software and services firms?

Maybe.

The number of people interested in the savvy software and systems which comprise Israel’s intelware industry is small. In fact, even among some of the law enforcement and intelligence professionals whom I have encountered over the years, awareness of the number of firms, their professional and social linkages, and the capabilities of these systems is modest. NSO Group became the poster company for how some of these systems can be used. Not long ago, the Brennan Center made available some documents obtained via legal means about a company called Voyager Labs.

Now the Guardian newspaper (now begging for dollars with blue and white pleas) has published “Meta Alleges Surveillance Firm Collected Data on 600,000 Users via Fake Accounts.” The main idea of the write up is that an intelware vendor created sock puppet accounts with phony names. Under these fake identities, the investigators gathered information. The write up refers to “fake accounts” and says:

The lawsuit in federal court in California details activities that Meta says it uncovered in July 2022, alleging that Voyager used surveillance software that relied on fake accounts to scrape data from Facebook and Instagram, as well as Twitter, YouTube, LinkedIn and Telegram. Voyager created and operated more than 38,000 fake Facebook accounts to collect information from more than 600,000 Facebook users, including posts, likes, friends lists, photos, comments and information from groups and pages, according to the complaint. The affected users included employees of non-profits, universities, media organizations, healthcare facilities, the US armed forces and local, state and federal government agencies, along with full-time parents, retirees and union members, Meta said in its filing.

Let’s think about this fake account thing. How difficult is it to create a fake account on a Facebook property? As a test about eight years ago, my team created a fake account for a dog. Not once in those eight years was any attempt made to verify the humanness or the dogness of the animal. The researcher (a special librarian, in fact) set up the account and demonstrated to others on my research team how the Facebook sign up system worked, or did not work, in this particular example. Once logged in, faithful and trusting Facebook seemed to keep our super user logged into the test computer. For all I know, Tess is still logged in, with Facebook doggedly tracking her every move. Here’s Tess:

[Photo: Tess, the research team’s canine Facebook user]

Tough to see that Tess is not a true Facebook type, isn’t it?

Is the accusation directed at Voyager Labs a big deal? From my point of view, no. The reason that intelware companies use Facebook is that Facebook makes it easy to create a fake account, exercises minimal administrative review of registered users, and prioritizes other activities.

I personally don’t know what Voyager Labs did or did not do. I don’t care. I do know that other firms providing intelware have the capability of setting up, managing, and automating some actions of accounts for either a real human, an investigative team, or another software component or system. (Sorry, I am not at liberty to name these outfits.)

Grab your Tums bottle and consider these points:

  1. What other companies in Israel offer similar alleged capabilities?
  2. Where and when were these alleged capabilities developed?
  3. What entities funded start ups to implement alleged capabilities?
  4. What other companies offer software and services which deliver similar alleged capabilities?
  5. When did Facebook discover that its own sign up systems had become a go-to source of social action for these intelware systems?
  6. Why did Facebook ignore the failings of its sign up procedures?
  7. Are other countries developing and investing in similar systems with these alleged capabilities? If so, can you name a company in England, France, China, Germany, or the US?

These one-shot “intelware is bad” stories chop indiscriminately. The vendors get slashed. The social media companies look silly for having little interest in “real” identification of registrants. The licensees of intelware look bad because investigations are somehow “wrong.” I think the media outlets reporting on intelware look silly because the depth of the information on which they craft stories strikes me as shallow.

I am pointing out that a bit more diligence is required to understand the who, what, why, when, and where of specialized software and services. Let’s do some heavy lifting, folks.

Stephen E Arnold, January 13, 2023
