NSO Group: Talking and Not Talking Is Quite a Trick

July 30, 2021

I read “A Tech Firm Has Blocked Some Governments from Using Its Spyware over Misuse Claims.” First, let’s consider the headline. If the headline is factual, the message I get is that NSO Group operates one or more servers through which Pegasus traffic flows. Thus, the Pegasus system includes one or more servers which have log files, uptime monitoring, and administrative tools which permit operations like filtering, updating, and the like. Thus, a systems administrator with authorized access to one or a fleet of NSO Group servers supporting Pegasus can do what some system administrators do: Check out what’s shakin’ with the distributed system. Is the headline accurate? I sure don’t know, but the implication of the headline (assuming it is not a Google SEO ploy to snag traffic) is that NSO Group is in a position to know — perhaps in real time via a nifty AWS-type dashboard — who is doing what, when, where, for how long, and other helpful details which a curious observer finds interesting, noteworthy, or suitable for assessing an upcharge. Money is important in zippy modern online systems in my experience.
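None of this requires exotic tooling. If Pegasus traffic transits servers the vendor operates, the “who, what, when” rollup is ordinary log analysis. Here is a toy sketch in Python; the log format and every field name are my invention for illustration, not anything drawn from an actual Pegasus deployment:

```python
from collections import defaultdict

# Hypothetical access-log lines: timestamp, client id, target id, action.
# Invented for illustration only.
LOG_LINES = [
    "2021-07-18T09:14:02Z client-07 target-3311 implant_check",
    "2021-07-18T09:20:45Z client-07 target-3311 exfil",
    "2021-07-19T11:02:10Z client-12 target-0042 implant_check",
]

def usage_by_client(lines):
    """Roll up who did what, and when -- the kind of report any
    operator with log access can generate."""
    report = defaultdict(list)
    for line in lines:
        timestamp, client, target, action = line.split()
        report[client].append((timestamp, target, action))
    return dict(report)

report = usage_by_client(LOG_LINES)
print(sorted(report))  # ['client-07', 'client-12']
```

A dozen lines, and the operator knows which client is busy and which is idle. Scale the idea up and you have the “nifty AWS-type dashboard.”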

My goodness. That headline was inspirational.

What about the write up itself from the real news outfit National Public Radio or NPR, once home to Bob Edwards, who was from Louisville, not far from the shack next to a mine run off pond outside my door? Ah, Louisville, mine drainage, and a person who finds this passage suggestive:

“There is an investigation into some clients. Some of those clients have been temporarily suspended,” said the source in the company, who spoke to NPR on condition of anonymity because company policy states that NSO “will no longer be responding to media inquiries on this matter and it will not play along with the vicious and slanderous campaign.”

So the company won’t talk to the media, but does talk to the media, specifically NPR. What do I think about that? Gee, I just don’t know. Perhaps I don’t understand the logic of NSO Group. But I don’t grasp what “unlimited” means when a US wireless provider assures customers that they have unlimited bandwidth. I am just stupid.

Next, I noted:

NSO says it has 60 customers in 40 countries, all of them intelligence agencies, law enforcement bodies and militaries. It says in recent years, before the media reports, it blocked its software from five governmental agencies, including two in the past year, after finding evidence of misuse. The Washington Post reported the clients suspended include Saudi Arabia, Dubai in the United Arab Emirates and some public agencies in Mexico. The company says it only sells its spyware to countries for the purpose of fighting terrorism and crime, but the recent reports claim NSO dealt with countries known to engage in surveillance of their citizens and that dozens of smartphones were found to be infected with its spyware.

Okay, if the headline is on the beam, then NSO Group, maybe some unnamed Israeli government agencies like the unit issuing export licenses for NSO Group-type software, and possibly some “trusted” third parties are going to prowl through the data about the usage of Pegasus by entities. Some of these agencies may be quite secretive. Imagine the meetings going on inside these secret agencies. What will the top dogs in these secret outfits tell their bosses about the risks of having NSO Group’s data sifted, filtered, and processed by Fancy Dan analytics systems? Yeah, that will test the efficacy of advanced degrees, political acumen, and possible fear.

And what’s NSO Group’s position? The information does not come from the NSO Group professional who does not talk to the media but sort of does. Here’s the word from the NSO Group’s lawyer:

Shmuel Sunray, who serves as general counsel to NSO Group, said the intense scrutiny facing the company was unfair considering its own vetting efforts.

“What we are doing is, what I think today is, the best standard that can be done,” Sunray told NPR. “We’re on the one hand, I think, the world leaders in our human rights compliance, and the other hand we’re the poster child of human rights abuse.”

I like this. We have the notion of NSO Group doing what it can do to the “best standard.” How many times has this situation faced an outfit in the intelware game, based in Herzliya, and under the scrutiny of an Israeli agency which says yes or no to an export license for a Pegasus-type system? Is this a new situation? Might be. If true, what NSO Group does will define the trajectory of intelware going forward, won’t it?

Next, I like the “world leaders” and “human rights compliance.” This line creates opportunities for some of what I would call Comedy Central comments. I will refrain and just ask you to consider the phrase in the context of the core functions and instrumentality of intelware. (If you want to talk in detail, write benkent2020 at yahoo dot com and one of my team will get back to you with terms and fees. If not, I am retired, so I don’t care.)

Exciting stuff, and the NSO Group ice cream melt is getting stickier by the day. And in Herzliya, the temperature is 29 C. “C” is the grade I would assign to this allegedly accurate statement from the article that NSO Group does not talk to the media. Get that story straight is my advice.

And, gentle NPR news professional, why not ask the lawyer about log file retention and access to data in Pegasus by an NSO system administrator?

Stephen E Arnold, July 30, 2021

Is a New Wave of Disintermediation Gaining Momentum?

July 9, 2021

Hacker News pointed to “We Replaced Rental Brokers with Software and Filled 200+ Vacant Apartments.” That real estate write up provides a good case example for using software to chop out the useless humanoids. Sound like an Amazon thing? I think so. Corporate special librarians were among the first to be allowed to find their future elsewhere. Other professions are finding ways to de-humanoid their business processes. How does that Ford Bronco get painted? Not by people with spray guns. Those made-for-TV car shows use humans. Real car makers don’t unless there is some compelling reason.

Now a start up is going to try to de-people Amazon AWS development and programming. Amazon is trying to train people to think Amazon for new T-shirts and super duper online cloud services. But the company’s efforts are mostly free education plays and zippy presentations at Amazon-sponsored events.

The disintermediation of the Amazon developer is now a start up’s goal. Digger.dev says:

Digger automatically generates infrastructure for your code in your cloud account. So you can build on AWS without having to learn it.

Disenchanted with the Lyft and Uber thing? Tired of collecting unemployment? Bored with your lawyering gig? Now you can become an entrepreneur:

Deploy anything. Containers, Serverless Lambda functions, webapps, databases, queues, load balancers, autoscaling – Digger supports it all.
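Digger’s internals are not public, so the following is only a sketch of the general pattern such tools follow: inspect an application descriptor and emit an infrastructure spec. Every field and name here is my invention, not Digger’s actual API:

```python
def generate_infra(app):
    """Toy infrastructure generator: map features of an app descriptor
    to cloud resources. The spec format is invented for illustration;
    real tools emit provider-specific templates (CloudFormation,
    Terraform, and the like)."""
    spec = {"service": app["name"], "resources": []}
    if app.get("handlers"):      # bare functions -> serverless
        spec["resources"].append({"type": "function", "entries": app["handlers"]})
    if app.get("dockerfile"):    # Dockerfile present -> container service
        spec["resources"].append({"type": "container", "image": app["name"]})
    if app.get("database"):      # declared dependency -> managed database
        spec["resources"].append({"type": "managed_db", "engine": app["database"]})
    return spec

spec = generate_infra({"name": "shop", "dockerfile": True, "database": "postgres"})
print([r["type"] for r in spec["resources"]])  # ['container', 'managed_db']
```

The pitch, in other words, is that the mapping from “what my code needs” to “what the cloud provisions” becomes a lookup table the developer never has to learn.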

If Digger.dev is successful, the certified Amazon professional may be looking for a new career. COBOL programmer maybe?

Stephen E Arnold, July 9, 2021

Amusing Confusing Wizards

July 7, 2021

More from the Redmond wizards’ humor generating machines.

Microsoft has found a way to deflect attention from yet another security issue. Do you print over the Internet? “Microsoft Acknowledges PrintNightmare Remote Code Execution Vulnerability Affecting Windows Print Spooler Service” says:

IT Admins are also invited to disable the Print Spooler service via Powershell commands, though this will disable the ability to print both locally and remotely. Another workaround is to disable inbound remote printing through Group Policy, which will block the remote attack vector while allowing local printing.

So what distracts one from a print nightmare? That’s easy. Just try to figure out whether your PC can run Windows 11. TPM, you say? Intel what?

PrintNightmare aptly characterizes Microsoft’s organizational acumen perhaps?

Stephen E Arnold, July 7, 2021

Has Google Smart Software Become the Sad Clown for AI?

April 20, 2021

“Is Google’s AI Research about to Implode?” raises an interesting question. The answer depends on whom one asks. For the high profile ethical AI Googlers who are now Xooglers (former Google employees), the answer is probably along the lines of “About. Okay, boomer, it has imploded.” Ask a Googler who still has a job at the GOOG and received a bonus for his or her work in smart software and the answer is probably more like, “Dude, we are AI.” With matters Googley, I am not sure where the truth exists.

The write up states:

in making certain “corrections” to large datasets, for example removing references to sex, the voices of LGBTQ people will be given less prominence. The lack of transparency and accountability in the data makes these models useless for anything other than generating amusing Guardian articles (my words, not the authors). But they have substantial negative consequences: in producing reams of factually incorrect texts and requiring computing resources that can have a major environmental impact.

Ah, ha, the roots of bias.

Google has not made enough progress in making its models neutral. Thus, human fiddling is required. And where there are humans fiddling, there are discordant notes.

The write up concludes with this statement:

What concerns me is that when Google’s own researchers start to produce novel ideas then the company perceives these as a threat. So much of a threat that they fire their most innovative researcher and shut down the groups that are doing truly novel work.

Right now, I think the Google wants to squelch talk about algorithmic “issues.” Smart software appears to be related to maximizing efficiency. The idea is that efficiency yields lower costs. Lower costs provide more cash to incentivize employees to find ways to improve, for example, ad auction efficiency. Ethics are not an emergent phenomenon of this type of system. The result is algorithmic road kill, a major PR problem, a glimpse of the inner Google, and writers who are skeptical about the world’s largest online ad vendor’s use of “smart” technology.

Stephen E Arnold, April 20, 2021

Software Development: Big Is the One True Way

April 13, 2021

I read an essay called “Everyone Is Still Terrible At Creating Software At Scale.” I am often skeptical about categorical affirmatives. Sometimes a sweeping statement captures an essential truth. This essay in Marginally Interesting has illuminated software development in a useful way.

I found this passage thought provoking:

I’ve seen a few e-commerce companies from the inside, and while their systems are marvel of technologies able to handle thousands of transactions per second, it does not feel like this, but things like the app and the website are very deeply entangled with the rest. Even if you wanted, you couldn’t create a completely new app or website.

After I read this, I thought about rotational velocity. I also thought about the idea of how easy it is to break something. Users want a software component to work and be usable. Software often appears fluid. What’s clear is that outages at big vendors and security lapses are seemingly the stuff of daily headlines. Big outfits deliver one thing; users get another.

Here’s another statement I circled:

My recommendation is to look at structures and ask yourself, how hard is it for any one “unit” in your “system” to get stuff done. Everything that cuts across areas of responsibility adds complexity.

Complexity is an interesting idea. Does Google “change” how the Page Rank method is implemented, or is Google in the software wrapper business? Can Microsoft plug security gaps when those gaps are the fabric of core Azure and Windows 10 processes? Can Facebook actually change feedback loops which feed its content processes? Is it possible for an outfit like Honda to change how it makes automobiles? In theory, a Honda-type operation can change, but the enemies are time, Tesla-like disruptions, Covid, and money.

Like the big ship which managed to get stuck in the Suez Canal, altering a method once underway is tricky.

The essay ends with this observation:

Unless you take care everyone has different understanding of the problem, and there is no focus on information gathering and constructive creativity.

But big is the way, right?

Stephen E Arnold, April 13, 2021

Business Process Management Is The New Buzzword

March 21, 2021

How does one “fix” the SolarWinds’ misstep? BPM. GovWizely will present a webinar addressing remediation of SolarWinds’ issues on March 25, 2021. You can sign up at this url: https://www.govwizely.com/contact/. The program is free and pre-registration is required.

If you have never heard of business process management (BPM), it is the practice of discovering and controlling an organization’s processes so they align with business goals as the company evolves. BPM software is the next phase of business intelligence software for enterprises. CIO explains what to expect from BPM software in the article: “What Is Business Process Management? The Key To Enterprise Agility.”

BPM software maps definitions to existing processes, defines steps to carry out tasks, and offers tips for streamlining and improving practices. Organizations are constantly shifting to meet their goals, and BPM software is advertised as the best way to refine and control changing environments. All good BPM software should do the following: align the firm’s resources, increase discipline in daily operations, and clarify strategic direction. While most organizations want flexibility, they lack it:

“A company can only be as flexible, efficient, and agile as the interaction of its business processes allow. Here’s the problem: Many companies develop business processes in isolation from other processes they interact with, or worse, they don’t “develop” business processes at all. In many cases, processes simply come into existence as “the way things have always been done,” or because software systems dictate them. As a result, many companies are hampered by their processes, and will continue to be so until those processes are optimized.”

When selecting BPM software, look for integrations, analytics, collaboration, form generation, a business rules engine, and workflow management.

BPM sounds like the next phase of big data, where hidden insights are uncovered in unstructured data. BPM takes these insights, then merges them with an organization’s goals. Business intelligence improves business processes, big data discovers insights, and BPM organizes all of it.

Whitney Grace, March 21, 2021

The Microsoft Supply Chain Works Even Better Going Backwards

March 4, 2021

Do you remember the character KIR-mit? He once allegedly said:

Yeah, well, I’ve got a dream too, but it’s about singing and dancing and making people happy. That’s the kind of dream that gets better the more people you share it with.

I am not talking about Jim Henson’s memorable character. That frog spelled its name Kermit. This is KIR-mit, an evil doppelgänger from another universe called Redmonium.


This KIR-mit is described in “Microsoft Is Using Known Issue Rollback (KIR) to Fix Problems Caused by Windows 10 Updates.” I learned that KIR

enables Microsoft to rollback changes introduced by problematic patches rolled out through Windows Update. KIR only applies to non-security updates.

Does the method expand the attack surface for bad actors? Will weird calls to senior citizens increase with offers to assist with KIR-mit modifications? Will questionable types provide links to download KIRs which are malware? Yes, yes, and yes.

The article points out:

Known Issue Rollback is an important Windows servicing improvement to support non-security bug fixes, enabling us to quickly revert a single, targeted fix to a previously released behavior if a critical regression is discovered.
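Mechanically, KIR amounts to shipping both code paths and flipping a remotely settable flag, pushed to consumer machines via Windows Update and to managed machines via Group Policy. A toy version of the pattern in Python; the flag name is hypothetical, and nothing here is Microsoft’s implementation:

```python
# Known-issue-rollback pattern: old and new code paths both ship in the
# binary; a flag set by an update service (or Group Policy) picks which runs.
ROLLBACK_FLAGS = set()

def render_job_old(doc):
    return f"old-path:{doc}"      # previously released behavior

def render_job_new(doc):
    return f"new-path:{doc}"      # patched (possibly regressed) behavior

def render_job(doc):
    # "print_fix_rollback" is a made-up flag name; real KIR identifiers
    # are opaque and delivered out of band.
    if "print_fix_rollback" in ROLLBACK_FLAGS:
        return render_job_old(doc)
    return render_job_new(doc)

print(render_job("a.pdf"))                  # new-path:a.pdf
ROLLBACK_FLAGS.add("print_fix_rollback")    # regression discovered: revert
print(render_job("a.pdf"))                  # old-path:a.pdf
```

Note what this implies: the “fix” for a bad patch is not removing code but toggling it off remotely, which is exactly why the old behavior must keep shipping alongside the new.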

KIR is something users have said they wanted. Plus Microsoft has had this capability for a long time. I recall reading that Microsoft had a method for verifying the “digital birth certificate” of software in order to identify and deal with the SolarWinds-type of supply chain hack. I point this out in my upcoming lecture for a law enforcement entity. Will my audience find the statement and link interesting? I have a hunch the cyber officers will perk up their ears. Even the JEDI fans will catch my drift.

Just regular users may become woozy from too much KIR in the system. Plus, enterprise users will be “in charge of things.” Wonderful. Users at home are one class of customers; enterprise users are another. In between, attack surface the size of the moon.

Several questions:

  • Why not improve the pre release quality checks?
  • Why not adopt the type of practices spelled out by In Toto and other business method purveyors?
  • Why not knock off the crazy featuritis and deliver stable software in a way that does not obfuscate, mask, and disguise what’s going on?

And the answer to these questions is, “The cloud is more secure.”

Got it. By the way, a “kir” is a French cocktail. Some Microsoft customers may need a couple of these to celebrate Microsoft’s continuous improvement of its outstanding processes.


As KIR-mit said, “It’s about making people happy.” That includes bad actors, malefactors, enemies of the US, criminals, and Microsoft professionals like Eric Vernon and Vatsan Madhava, the lucky explainers of KIR-mit’s latest adventure.

Stephen E Arnold, March 4, 2021

Facebook Found Lax in Enforcement of Own Privacy Rules. Surprised?

March 4, 2021

Facebook is refining its filtering AI for app data after investigators at New York’s Department of Financial Services found the company was receiving sensitive information it should not have received. The Jakarta Post reports, “Facebook Blocks Medical Data Shared by Apps.” Facebook regularly accepts app-user information and feeds it to an analysis tool that helps developers improve their apps. It never really wanted responsibility for safeguarding medical and other sensitive data, but did little to block it until now. The write-up quotes state financial services superintendent Linda Lacewell:

“Facebook instructed app developers and websites not to share medical, financial, and other sensitive personal consumer data but took no steps to police this rule. By continuing to do business with app developers that broke the rule, Facebook put itself in a position to profit from sensitive data that it was never supposed to receive in the first place.”

Facebook is now stepping up its efforts to block sensitive information from reaching its databases. We learn:

“Facebook created a list of terms blocked by its systems and has been refining artificial intelligence to more adaptively filter sensitive data not welcomed in the analytics tool, according to the report. The block list contains more than 70,000 terms, including diseases, bodily functions, medical conditions, and real-world locations such as mental health centers, the report said.”
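At its core, the mechanism described is term matching against a block list before an event reaches the analytics store. A minimal sketch follows; the sample terms are mine, and plain substring matching stands in for whatever adaptive filtering Facebook actually runs:

```python
# Sample block-list entries (invented); the real list reportedly
# exceeds 70,000 terms.
BLOCK_LIST = {"chemotherapy", "diabetes", "mental health center"}

def scrub_event(event, block_list=BLOCK_LIST):
    """Return the event unchanged, or None if its payload mentions any
    blocked term -- so the event never reaches the analytics tool."""
    text = event.get("payload", "").lower()
    if any(term in text for term in block_list):
        return None
    return event

print(scrub_event({"payload": "Clicked pricing page"}) is None)        # False
print(scrub_event({"payload": "Searched chemotherapy options"}) is None)  # True
```

The catch, of course, is that a static list only blocks what someone thought to put on it, which is presumably why the company is layering a learned filter on top.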

A spokesperson says the company is also “doing more to educate advertisers on how to set-up and use our business tools.” We shall see whether these efforts will be enough to satisfy investigators next time around.

Cynthia Murrell, March 4, 2021

Quantum Computing: A Nasty Business

March 3, 2021

In a PhD program, successful candidates push the boundaries of knowledge and change the world for the better. Sometimes. One illustration of this happy outcome is the case of Zak Romaszko at the University of Sussex, who contributed to the school’s ion trap quantum computer project. Romaszko is now working at his professor’s spin-off company Universal Quantum on commercialization of the tech to create large-scale quantum computers. Bravo!

Unfortunately, not all PhD programs are crucibles of such success stories. One in particular appears to be just the opposite, as described in “A Dishonest, Indifferent, and Toxic Culture” posted at the Huixiang Voice. The blog is dedicated to covering the heartbreaking experience of PhD candidate Huixiang Chen, who was studying at the University of Florida’s department of Electrical and Computer Engineering when he took his own life. The note Chen left behind indicated the reason, at least in part, was the pressure put on him by his advisor to go along with a fraudulent peer-review process.

We learn:

“It has been 20 months since the tragedy that a Ph.D. candidate from the University of Florida committed suicide, accusing his advisor coerce him into academic misconduct. Our latest article dropped a bump into the academic world by exposing the evidence of those academic misconduct. The Nature Index followed up with an in-depth report with comments from scientists and academic organizations worldwide expressing their shock and deep concerns about this scandal that happened at the University of Florida.”

A joint committee of the academic publisher Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) investigated the matter and found substance in the allegations. ACM has imposed a 15-year ban on participation in any ACM Conference or Publication on the offenders, the most severe penalty the organization has ever imposed. The post continues:

“The conclusion finally confirmed two important accusations listed in Huixiang Chen’s suicide note that:
1) The review process for his ISCA-2019 paper was broken, and most of the reviewers of the paper are ‘friends’ of his advisor Dr. Tao Li. The review process became organized and colluded academic fraud:
2)After recognizing that there are severe problems in his ISCA-2019 paper, Huixiang Chen was coerced by his advisor Dr. Tao Li to proceed with a submission despite that Huixiang Chen repeatedly expressed concerns about the correctness of the results reported in work, which led to a strong conscience condemnation and caused the suicide.
“Finally, the paper with academic misconduct got retracted by ACM as Huixiang’s last wish.”

Chen hoped the revelations he left behind would lead to a change in the world; perhaps they will. The problem, though, is much larger than the culture at one university. Peer reviewed publications have become home to punitive behavior, non-reproducible results, and bureaucratic pressure. Perhaps it is time to find another way to review and share academic findings? Google’s AI ethics department may have some thoughts on academic scope and research reviews.

Cynthia Murrell, March 3, 2021

MIT Report about Deloitte Omits One Useful Item of Information

February 1, 2021

This is no big deal. Big government software project does not work. Yo, anyone remember DCGS, the Obama-era health site, the reinvigoration of the IRS systems, et al? Guess not. The outfit which accepted money from Mr. Epstein and is explaining how a faculty member could possibly be ensnared in an international intellectual incident is now putting Deloitte in its place.

Yeah, okay. A blue chip outfit takes a job and – surprise – the software does not work. Who is the bad actor? The group which wrote the statement of work, the COTR, the assorted government and Deloitte professionals trying to make government software super duper? Why not toss in the 18F, the Googler involved in government digitization, and the nifty oversight board for the CDC itself?

The write up “What Went Wrong with America’s $44 Million Vaccine Data System?” analyzes this all-too-common standard operating result from big technology projects. I noted:

So early in the pandemic, the CDC outlined the need for a system that could handle a mass vaccination campaign, once shots were approved. It wanted to streamline the whole thing: sign-ups, scheduling, inventory tracking, and immunization reporting. In May, it gave the task to consulting company Deloitte, a huge federal contractor, with a $16 million no-bid contract to manage “Covid-19 vaccine distribution and administration tracking.” In December, Deloitte snagged another $28 million for the project, again with no competition. The contract specifies that the award could go as high as $32 million, leaving taxpayers with a bill between $44 and $48 million. Why was Deloitte awarded the project on a no-bid basis? The contracts claim the company was the only “responsible source” to build the tool.

Yep, the fault was the procurement process. That’s a surprise?

The MIT write up relishes its insights about government procurement; for example:

“Nobody wants to hear about it, because it sounds really complicated and boring, but the more you unpeel the onion of why all government systems suck, the more you realize it’s the procurement process,” says Hana Schank, the director of strategy for public-interest technology at the think tank New America.  The explanation for how Deloitte could be the only approved source for a product like VAMS, despite having no direct experience in the field, comes down to onerous federal contracting requirements, Schank says. They often require a company to have a long history of federal contracts, which blocks smaller or newer companies that might be a better fit for the task.

And the fix? None offered. That’s helpful.

There is one item of information missing from the write up; specifically the answer to this question:

How many graduates of MIT worked on this project?

My hunch is that the culprit begins with the education and expertise of the individuals involved. The US government procurement process is a challenge, but aren’t institutions training the people in consulting firms and working in government agencies supposed to recognize a problem and provide an education to remediate the issue? Sure, it takes time, but government procurement has been a tangle for decades, yet outfits like MIT are eager to ignore the responsibility they have to turn out graduates who solve problems, not create them.

Now about that Epstein and Chinese alleged double dipping thing? Oh, right. Not our job?

Consistent, just like government procurement processes it seems to me.

Stephen E Arnold, February 1, 2021
