Kill Off the Dinobabies and Get Younger, Bean Counter-Pleasing Workers. Sound Familiar?
September 21, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Google, Meta, Amazon Hiring Low-Paid H1B Workers after US Layoffs: Report.” Is it accurate? Who knows? In the midst of a writers’ strike in Hollywood, I thought immediately about endless sequels to films like “Batman 3: Deleting Robin” and “Halloween 8: The Night of the Dinobaby Purge.”
The write up reports a management method similar to the one the high school science club implemented when its request for a field trip to the morgue was turned down. The school’s boiler suffered a mysterious malfunction and school was dismissed for a day. Heh heh heh.
I noted this passage:
Even as global tech giants are carrying out mass layoffs, several top Silicon Valley companies are reportedly looking to hire lower-paid tech workers from foreign countries. Google, Meta, Amazon, Microsoft, Zoom, Salesforce and Palantir have applied for thousands of H1B worker visas this year…
I heard a rumor that IBM used a similar technique. Would Big Blue replace older, highly paid employees with GenX professionals not born in the US? Of course not! The term “dinobabies” was a product of spontaneous innovation, not from a personnel professional located in a suburb of New York City. Happy bean counters indeed. Saving money with good enough work. I love the phrase “minimum viable product” for “minimally viable” work environments.
There are so many ways to allow people to find their futures elsewhere. Shelf stockers are in short supply, I hear.
Stephen E Arnold, September 21, 2023
Thomson Reuters, Where Is Your Large Language Model?
April 3, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I have to give the lovable Bloomberg a pat on the back. Not only did the company explain its large language model for finance; the end notes to the research paper are fascinating. One cited document has 124 authors. Why am I mentioning the end notes? The essay is 65 pages in length, and the notes consume 25 pages. Even more interesting is that the “research” apparently involved Nvidia and everyone’s favorite online bookstore, Amazon, and its Web services. No Google. No Microsoft. No Facebook. Just Bloomberg and the tenure-track researcher’s best friend: the end notes.
The article with a big end (note, that is) presents this title: “BloombergGPT: A Large Language Model for Finance.” I would have titled the document with its chunky equations “A Big Headache for Thomson Reuters,” but I know most people are not “into” the terminal rivalry, the analytics rivalry, the Thomson Reuters fancy dancing with Palantir Technologies, or the “friendly” competition in which the two firms have engaged for decades.
The smart software score appears to be Bloomberg 1, Thomson Reuters zippo. (Am I incorrect? Of course, but this beefy effort, the mind-boggling end notes, and the presence of Johns Hopkins make it clear that Thomson Reuters has some marketing to do.) What Microsoft Bing has done to the Google may be exactly what Bloomberg wants to do to Thomson Reuters: make money on the next big thing and marginalize a competitor. Bloomberg obviously wants more than the aging terminal business and the fame achieved on free TV’s Bloomberg TV channels.
What is the Bloomberg LLM or large language model? Here’s what the paper asserts. Please, keep in mind that essays stuffed with mathy stuff and researchy data are often non-reproducible. Heck, even the president of Stanford University took shortcuts. Plus, more than half of the research results my team has tried to reproduce end up in Nowheresville, which is not far from my home in rural Kentucky:
we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks.
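As an aside, the quoted data mix is easy to picture in code. Below is a minimal sketch, using only the token counts from the abstract, of how a training stream weighted by corpus size might pick its next document. The sampling scheme is my illustration, not Bloomberg’s documented pipeline.

```python
# A minimal sketch of "mixed dataset training": draw each training
# document from a corpus with probability proportional to its token share.
# Token counts come from the quoted abstract; everything else is assumed.
import random

TOKEN_COUNTS = {"financial": 363e9, "general": 345e9}
total = sum(TOKEN_COUNTS.values())
WEIGHTS = {name: count / total for name, count in TOKEN_COUNTS.items()}

def next_source(rng: random.Random) -> str:
    # Pick which corpus supplies the next training document
    # (~51 percent financial, ~49 percent general).
    names = list(WEIGHTS)
    return rng.choices(names, weights=[WEIGHTS[n] for n in names])[0]

rng = random.Random(0)
draws = [next_source(rng) for _ in range(100_000)]
print(round(draws.count("financial") / len(draws), 2))  # ~0.51
```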
My interpretation of this quotation is:
- Lots of data
- Big model
- Informed financial decisions.
“Informed financial decisions” means to me that a crazed broker will give this Bloomberg thing a whirl in the hope of getting a huge bonus, a corner office which is never visited, and fame at the New York Athletic Club.
Will this happen? Who knows.
What I do know is that Thomson Reuters’ executives in London, New York, and Toronto are doing some humanoid-centric deep thinking about Bloomberg. And that may be what Bloomberg really wants because Bloomberg may be ahead. Imagine that: Bloomberg ahead of the “trust” outfit.
Stephen E Arnold, April 3, 2023
Google and OpenAI: The Big Dust Up
February 8, 2023
Let’s go back to high school English class and a very demanding spinster named Miss Drake. Her test question was, “Who wrote this line?”
O heaven! that one might read the book of fate, and see the revolution of the times. (Henry IV, Part 2 [~ 1597] Act 3)
Do you remember? I do. The answer is the Bard of Avon, and he allegedly spun the phrase from his brain cells or he ripped it off from another playwright. Yeah, the plagiarism thing has not been resolved, and it is unclear whether, and from whom, the Bard sucked in content and output a money-making play. Was the real Bard a primordial version of a creator on YouTube? Sure, why not draw that connection?
Now back to the quote. I like the idea of “the revolution of the times.”
The idea is that some wizard like Sundar or Prabhakar can check out the “Book of Fate,” which may or may not be among the works in the Google data warehouse, and see the future. Just like the Palantir seeing stone, which works so darned well, as those SPAC bets attest. Perhaps that’s what happened when Google declared a Code Red? Fear and a big bet that the GOOG can de-momentum ChatGPT.
When did OpenAI become a thing? I would suggest that it was in 2015, if one believes Sillycon Valley history. The next highlight took place in 2019, possibly earlier, when Microsoft plopped some Azure cycles on the OpenAI conference table. Two years later we get this:
Google at Code Red over ChatGPT As Teams Reassigned to Work on Competing AI Products
Almost coincident with the realization that people were genuinely excited about ChatGPT, Google noticed that Prabhakar’s hair had caught on fire. Sundar, the Sillycon Valley manager par excellence, called back the original Relevance Revolutionaries (Messrs. Brin and Page) after Microsoft made it evident to the well-fed at Davos that Softies were good at marketing. Maybe Microsoft fell short of the “Start Me Up” campaign for the outstandingly “good enough” Windows 95, but the ChatGPT deal is notable. To make sure the GOOG got the message that Microsoft was surfing on the waves created by ChatGPT, another bundle of billions was allocated to OpenAI and ChatGPT. The time was January 2023, and it was clear that millions of normies were interested in Microsoft’s use of ChatGPT in those “good enough” engineering marvels, Bing.com and the Google-infused Edge browser.
Where are we on Wednesday, February 8, 2023? How about this as a marker:
Google said Bard would be widely available to the public in the next few weeks. Source: MSN.com
Yep, the killer words are right there—”would be.” Not here, the conditional future, just a limited test. Not integrated into heaven knows how many apps like OpenAI. Not here like the collection of links generated by Matt Shumer. Not here like the YouTube videos explaining how to build an app from scratch with ChatGPT. Nope. Google is into the “to be” and “demo” mode.
Let’s do simple math on Google’s situational awareness:
- 2015 OpenAI and Elon Musk are visible
- 2019 Microsoft provides some “money” which may be a round trip to pay for Azure cycles
- 2022 (November) ChatGPT gets a million users in five days
- 2022 (December) Google feels the heat from burning hair and screeches “Code Red”
- 2023 (January) Davos talks about ChatGPT, not just the new world order, power, money, and where to eat dinner
- 2023 (February) Google says, “Our version of ChatGPT is coming… soon. Really, very soon.” But for now it’s a limited demo. And Microsoft? The ChatGPT thing turned up when one of my team ran a query on Tuesday, February 7, 2023. Yep, ready or not, there it was.
Several observations:
- The Sundar and Prabhakar duo missed the boat on the impact of ChatGPT for search. Err, you are supposed to be the search wizards, and you are standing on the platform waiting for the next train to arrive?
- The uptake of ChatGPT may be a reaction against the Google search system, not a reaction to the inherent greatness of ChatGPT. A certain search company with a 90 percent share and a genuine disdain for relevant responses to users’ queries may have boosted the interest in ChatGPT. If I am correct, this is an unintended consequence of being Googley.
- The Microsoft marketing move is [a] an outstanding bit of luck, [b] a superstar tactic that warrants a Grammy (oh, wait, the Grammys suck), or [c] a way to breathe new life into products which suffer from featuritis and a lack of differentiation.
Net net: Sundar and Prabhakar are destined for Harvard case study glory. Also, the dynamic duo may pull a marshmallow from the fire, but will it make a great S’more? Does sophomoric rhyme with Anthropic? And Code Red? The revolution of the times might be here, as the Bard wrote or obtained from a fellow playwright.
Stephen E Arnold, February 8, 2023
Smart Software: Just One Real Problem? You Wish
January 6, 2023
I read “The One Real Problem with Synthetic Media.” When consulting and publishing outfits offer the “one real problem” analysis, I get goose bumps. Am I cold? Nah, I am frightened. Write ups that propose the truth frighten me. Life is, no matter what mid-tier consulting outfits say, slightly more nuanced.
What is the one real problem? The write up asserts:
Don’t use synthetic media for your business in any way. Yes, use it for getting ideas, for learning, for exploration. But don’t publish words or pictures generated by AI — at least until there’s a known legal framework for doing so. AI-generated synthetic media is arguably the most exciting realm in technology right now. Some day, it will transform business. But for now, it’s a legal third rail you should avoid.
What’s the idea behind the shocking metaphor? The third rail provides electric power to a locomotive. I think the idea is that an individual who touches a live third rail will be electrocuted.
Okay.
Are there other issues beyond the legal murkiness?
Yes, let me highlight several which strike me as important.
First, the smart software can quickly and economically output weaponized information. Whom can one believe? A college professor funded by a pharmaceutical company or a robot explaining the benefits of an electric vehicle? The hosing of synthetic content and data into a society may prove more corrosive than human outputs alone. Many believe that humans are expert misinformation generators. I submit that smart software will blow the doors off the human content jalopies.
Second, smart software ingests data, whether right or wrong, human generated or machine generated, and outputs results based on these data. What happens when machine generated content reduces the human generated content to tiny rivulets? The machine output is as formidable as Hokusai’s wave. Those humans in the boats: Goners perhaps?
Third, my thought is that in some parts of the US the slacker culture is the dominant mode. Forget that crazy, old-fashioned industrial revolution 9-to-5 work day. Ignore the pressure to move up, earn more, and buy a Buick, not a Chevrolet. Slacker culture dwellers look for the easy way to accomplish what they want. Does this slacker thing explain some FTX-type behavior? What about Amazon’s struggles with third-party resellers’ products? What about Palantir Technologies buying advertising space in the Wall Street Journal to convince me that it is the leader in smart software? Yeah, slacker stuff in my opinion. These examples and others mean that the DALL-E and ChatGPT type of razzle dazzle will gain traction.
Where are the legal questions in these three issues? Sure, legal eagles will fly when there is an opportunity to bill.
I think the smart software thing is a good example of “technology is great” thinking. The one real problem is that it is not.
Stephen E Arnold, January 6, 2023
CNN Surfaces an Outstanding Quote from the Zuck
December 30, 2022
Tucked in “The Year That Brought Silicon Valley Back Down to Earth” was an outstanding quotation from the chief Meta professional, Mark (the Zucker) Zuckerberg. Here’s the quote:
“Unfortunately, this did not play out the way I expected.”
The CNN article revisits what are by now old tropes and saws.
When I spotted the title, I thought a handful of topics would be mentioned; for example:
- The medical testing fraud
- The crazy “value” of wild hair styles and digital currency, lawyer parents, and disappearing billions. Poof.
- Assorted security issues (Yes, I am thinking of Microsoft and poisoned open source libraries. Hey, isn’t GitHub part of the Softies’ empire?)
- Apple’s mystical interactions with China
- Taylor Swift’s impact on Congressional interest in online ticket excitement
- An annual update on Google’s progress in solving death
- Amazon’s interaction with trusted third party sellers (Yes, I am thinking of retail thefts)
- Tesla’s outer space thinking about self driving
- Palantir’s ads asserting that it is the leader in artificial intelligence.
None of these made the CNN story. However, that quote from the Zuck touches some of these fascinating 2022 developments.
Stephen E Arnold, December 30, 2022
The Cloud and Points of Failure: Really?
September 13, 2022
A professional affiliated with Syntropy points out one of my “laws” of online; namely, that centralization is inevitable. What’s interesting about “The Internet is Now So Centralized That One Company Can Break It” is that it does not explain much about Syntropy. In my opinion, there is zero information about the company. The firm’s Web site explains:
Unlocking the power of the world’s scientific data requires more than a new tool or method – it requires a catalyst for change and collaboration across industries.
The Web site continues:
We are committed to inspiring others around our vision — a world in which the immense power of a single source of truth in biomedical data propels us towards discoveries, breakthroughs and cures faster than ever before.
The company is apparently involved with Merck KGaA, which, as I recall from my Pharmaceutical News Index days, is not too keen on sharing its intellectual property, trial data, or staff biographies. Also, the company has some (maybe organic, maybe more diaphanous) connection with Palantir Technologies. Palantir, an interesting search and retrieval company morphing into search-based applications and consulting, is a fairly secretive outfit despite its being a publicly traded company. (The firm’s string of quarterly disappointments and its share price send a signal to some astute observers, I think.)
But what’s in the article by the individual identified at the foot of the essay as Domas Povilauskas, the top dog at Syntropy? Note that the byline for the article is Benzinga Contributor, which is not particularly helpful.
Hmmm. What’s up?
The write up recycles the notion that online leads to centralization. Okay. But centralization is a general feature of online information, and that’s not a particularly new idea either.
The author continues:
The problem with the modern Internet is that it is essentially a set of private networks run by individual internet service providers. Each has a network, and most connections occur between these networks…. Networks are only managed locally. Routing decisions are made locally by the providers via the BGP protocol. There’s no shared knowledge, and nobody controls the entire route of the connection. Using these public ISPs is like using public transport. You have no control over where it goes. Providers own the cables and everything else. In this system, there are no incentives for ISPs to provide a good service.
The setup of ISPs strikes me as a mix of centralization and whatever works. My working classification of ISPs and providers has three categories: Constrained Services (Amazon-type outfits), Boundary Operators (the TOR relay-type outfits), and Unconstrained ISPs and Providers (CyberBunker-type organizations). My view is that this is the opposite of centralization. In each category there are big and small outfits, but 90 percent of the action follows Arnold’s Law of Centralization. What’s interesting is that in each category (for instance, boundary operators) the centralization repeats, just on a smaller scale. AccessNow runs a conference. At this conference are many operators unknown to the general online user.
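The “routing decisions are made locally” point in the quoted passage deserves a concrete illustration. Here is a minimal sketch, not a real BGP implementation, of how each network ranks candidate paths with its own private policy, which is why no single party controls the end-to-end route:

```python
# A minimal sketch of local BGP-style path selection. Attribute names
# follow BGP convention; the two-step tie-break is a simplification of
# the full decision process real routers run.
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    as_path: tuple          # e.g. (64512, 174): ASes the route traverses
    local_pref: int = 100   # set by the local operator, invisible to others

def best_path(candidates):
    # Simplified decision: highest LOCAL_PREF wins; ties go to the
    # shortest AS path. Each network applies its own values here.
    return max(candidates, key=lambda r: (r.local_pref, -len(r.as_path)))

routes = [
    Route("203.0.113.0/24", as_path=(64512, 174), local_pref=200),
    Route("203.0.113.0/24", as_path=(64513,)),
]
print(best_path(routes).as_path)  # (64512, 174): local policy beats path length
```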
The author of the article says:
The only way to get a more reliable service is to pay ISPs a lot for high-speed private connections. That’s the only way big tech companies like Amazon run their data centers. But the biggest irony is that there is enough infrastructure to handle much more growth. 70% of Internet infrastructure isn’t utilized because nobody knows about these routes, and ISPs don’t have an excellent solution to monetize them on demand. They prefer to work based on fixed, predetermined contracts, which take a lot of time to negotiate and sign.
I think this is partially correct. As soon as one shifts the focus from what appear to be legitimate online activities to more questionable and possibly illegal activities, persistent online services which are difficult for law enforcement to take down thrive. CyberBunker generated millions and required more than two years to knock offline and rein in the owners. There is more dimensionality in the ISP/provider sector than the author of the essay considers.
The knock-offline idea sounds good. One can point to the outages and the pain caused by Microsoft Azure/Microsoft Cloud, Google Cloud, Amazon, and others as points of weakness with as many vulnerabilities as a five-legged Achilles would have.
The reality is that the generalizations about centralization sound good, seem logical, and appear to follow the Arnold Law that says online services tend toward centralization. Unfortunately, new technologies exist which make more subtle approaches to putting services online possible.
Plus, I am not sure how a company focused on a biomedical single source of truth fits into what is an emerging and diverse ecosystem of ISPs and service providers.
Stephen E Arnold, September 13, 2022
Machine Learning: Cheating Is a Feature?
August 9, 2022
I read “MIT Boffins Make AI Chips 1 Million Times Faster Than the Synapses in the Human Brain. Plus: Why ML Research Is Difficult to Produce – and Army Lab Extends AI Contract with Palantir.” I dismissed the first item as some of the quantum supremacy stuff output by high school science club types. I ignored the Palantir Technologies’ item because the US Army has to make a distributed common ground system work and leave resolution to the next team rotation. Good or bad, Palantir has the ball. But the middle item in the club sandwich article contains a statement I found particularly interesting.
If you have followed our comments about smart software, we have taken a pragmatic view of getting “AI/ML” systems to work in the 80 to 95 percent confidence range in a consistent way, even when new “content objects” are fed into the zeros and ones. To get off on the right foot, human subject matter experts assembled training data which reflected the content the system would be processing in the real world. The way smart software is expected to work is that it learns… on its own… sort of. It is very time consuming and very expensive to create hand-crafted training sets and then “update” the system with the affected module. What if the prior content had to be reprocessed? Well, not too many have the funds, time, resources, and patience for that.
Thus, today’s AI/ML forward-leaning, cost-conscious wizards want to use synthetic data, minimize the human SMEs’ cost and time, and do everything auto-magically. Sounds good. Yes, and the ideas make great PowerPoint decks too.
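For readers who want a concrete picture of that 80 to 95 percent operating mode, here is a minimal sketch: act automatically only on outputs whose confidence clears a floor, and route the rest back to the human SMEs. The function and threshold names are illustrative assumptions, not any vendor’s API.

```python
# A minimal sketch of confidence-based triage: model outputs above the
# floor are handled automatically; the rest go to human review.
def triage(predictions, floor=0.80):
    auto, needs_review = [], []
    for label, score in predictions:
        (auto if score >= floor else needs_review).append((label, score))
    return auto, needs_review

auto, needs_review = triage([("invoice", 0.97), ("contract", 0.62)])
print(auto)          # [('invoice', 0.97)]  -> processed automatically
print(needs_review)  # [('contract', 0.62)] -> back to the human SMEs
```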
The sentence in the article which caught my attention is this one:
Data leakage occurs when the data used to train an algorithm can leak into its testing; when its performance is assessed the model seems better than it actually is because it has already, in effect, seen the answers to the questions. Sometimes machine learning methods seem more effective than they are because they aren’t tested in more robust settings.
Here’s the link to “Leakage and the Reproducibility Crisis in ML-Based Science,” in which more details appear. Wowza if these experts are correct. Who goes swimming without a functioning snorkel? Maybe the Google?
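For the curious, the most common flavor of the leakage the quoted sentence describes is easy to reproduce. Here is a minimal sketch using standard scikit-learn calls and a synthetic dataset: fitting the preprocessing on the full dataset before the train/test split lets test-set statistics leak into training, which can flatter the measured score.

```python
# A minimal sketch of train/test leakage via preprocessing.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)

# Leaky: the scaler sees the test rows before the split.
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)
leaky = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# Correct: split first, fit the scaler on training rows only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clean = LogisticRegression().fit(scaler.transform(X_tr), y_tr).score(
    scaler.transform(X_te), y_te)

print(leaky, clean)  # the gap is small here; with target-derived features it balloons
```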
Stephen E Arnold, August 9, 2022
Google Management Insights: About Personnel Matters No Less
June 16, 2022
Google is an interesting company. Should we survive Palantir Technologies’ estimate of a 30 percent plus chance of a nuclear war, we can turn to Alphabet Google YouTube to provide management guidance. Keep in mind that the Google has faced some challenges in the human resource, people asset department in the past. Notable examples range from frisky attorneys to high profile terminations of individuals like Dr. Timnit Gebru. The lawyer thing was frisky; the Timnit thing was numbers about bias.
“Google’s CEO Says If Your Return to the Office Plan Doesn’t Include These 3 Things You’re Doing It Wrong. It’s All About What You Value” provides information about the human resource functionality of a very large online advertising bar room door. Selling, setting prices, auctioning, etc. flip flop as part of the design of the digital saloon. “Pony up them ad dollars, partner, or else” is ringing in my ears.
The conjunction of human resources and “value” is fascinating. How does one value one Timnit?
What are these management insights?
First, you must have purpose. The write up provides this explanatory quote:
A set of our workforce will be fully remote, but most of our workforce will be coming in three days a week. But I think we can be more purposeful about the time they’re in, making sure group meetings, collaboration, creative brainstorming, or community building happens then.
Okay, purpose seems to mean being more organized. But in the pre-Covid era, why did Google require multiple messaging apps? What about those social media plays going way back to Orkut?
Second, you must be flexible. Again the helpful expository statements appear in the write up:
At Google, that means giving people choices. Some employees will be back in the office full time. Others will adopt a hybrid approach where they work in the office three days a week, and from home the rest of the time. In other cases, employees might choose to relocate and work fully remotely for a period of time.
Flexibility also suggests being able to say one thing and then changing that thing. How will Google handle Googlers working in locations with lower costs of living? Maybe pay them less? Move them from one position to another in order to grow, or to avoid impeding their more productive in-office colleagues? Perhaps shift a full-timer to a contractor basis? That’s a good idea too. Flexibility is the key. For the worker? Sorry, we’re talking management, not finding a life partner.
Third, you must do something with choice. Let’s look at the cited article to figure out choice:
The sense of creating community, fostering creativity in the workplace collaboration all makes you a better company. I view giving flexibility to people in the same way, to be very clear. I do think we strongly believe in in-person connections, but I think we can achieve that in a more purposeful way, and give employees more agency and flexibility.
Okay, decide, Googler. No, not the employee, the team leader. If Googlers had choice, some of those who pushed back and paraded around the Google parking lot would be getting better personnel evaluation scores.
Stepping back, don’t these quotes sound like baloney? They do to me. And I won’t mention the Glass affair, the overdosed VP on his yacht, or the legal baby thing.
Wow. Not quite up to MIT-Epstein grade verbiage, but darned close. And what about “value”? Sort of clear, isn’t it, Dr. Gebru?
Stephen E Arnold, June 16, 2022
The UK National Health Service: The Search for a Silver Bullet
June 13, 2022
Modern health care is a bit of a muddle. The UK’s National Health Service has licensed, tested, tire-kicked, and tried every angle to manage its myriad activities.
According to the odd orange newspaper (the Financial Times), the often befuddled NHS may be ready to embrace the PowerPoint assertions of a US company. “Palantir Gears Up to Expand Its Reach into UK’s NHS” reports:
Over the next few months, Palantir will bid for the five-year £360mn contract for the proposed Federated Data Platform (FDP), a new data tool to connect and integrate patient and other data sources from across the health system, so real-time decisions can be made effectively by clinicians and bureaucrats.
How similar is delivering health care to analyzing information to win a battle or figure out what an adversary is likely to do?
I am not sure. I do know that many intelware companies (this is my term for firms providing specialized software and services to law enforcement, crime analysts, and intelligence professionals) find that commercial clients can become squeamish under these conditions:
- Question from potential customer: “Who are your customers?” Intelware vendor: “Sorry, that information is classified.”
- Question from potential customer: “Can you provide a specific example of how your system delivered fungible results?” Intelware vendor: “We are not permitted to disclose either the use or effect of our system.”
- Question from potential customer: “How much consulting and engineering are needed before we can provide access to the system?” Intelware vendor: “That depends.” Customer asks a follow up question: “Can you be more specific?” Intelware vendor: “That information is classified.”
You can see how the commercial outfits not engaged in fighting crimes against children, drug smuggling, terrorist actions, termination of adversaries, etc. can be a tough sell.
But one of the big issues is the question, “Is our data available to government entities in our country or elsewhere without our knowledge or permission?”
Every licensee wants to hear assurances that data are private, encrypted, protected by 20-somethings in Slough, or whatever is required to close the deal.
But there is the suspicion that when a company does quite a bit of work for certain government agencies in one or more countries, stuff happens: data mining, insider actions, or loss of data control due to bad actors’ behavior.
It will be interesting to see if this deal closes and how it plays out. Based on NHS’s track record with Google-type outfits and Smartlogic-type innovators, I have a hunch that the outcome will be a case study of modern business processes.
Palantir needs many big wins to regain some stock market momentum. At least the Financial Times did not reference Palantir’s estimate of a 30 percent chance of nuclear war. Undoubtedly such a terrible event would stretch NHS’s capabilities regardless of the technology vendor underpinning the outfit.
Stephen E Arnold, June 13, 2022
NSO Group: Here We Go Again
June 1, 2022
That Israeli outfit NSO Group has nailed the art of publicity. Positive PR? Nope. Not so positive? Yep. But as a wit allegedly said, “Any publicity is good publicity.”
Maybe.
“NSO’s Cash Dilemma: Miss Debt Repayment or Sell to Risky Customers” tries to explain some of NSO Group’s alleged activities. [This Financial Times’ article resides behind a paywall.] The write up states:
Hulio [one of NSO Group’s senior managers] said there was one option to bring in some cash quickly enough to pay salaries and service debt: reassemble a defunct internal committee and approve sales to customers flagged as “elevated risk” during due diligence.
Why would money pressures spark consideration of sales to nation states which may present some challenges to NSO Group, its managers and staff, and its investors?
My thought is that money must be followed.
A pursuit of money sparked some actions at other search and content processing centric companies. I mentioned this idea in my recent essay “Autonomy Business Details: Are These Relevant to Search- and Content Processing Type Outfits Today?”
The decision to generate revenues seems to open the door for many ideas. Some of these are okay; for example, selling more licenses to governments of NATO countries. A few may have been less well received; for example, relaxing the criteria used to determine what countries could license Israeli surveillance innovations.
US sanctions and the PR cyclone have created a number of business challenges for NSO Group. The path forward according to the Financial Times’ article looks like this:
In recent months, Hulio has come up with a new plan dubbed the “phoenix plan” by company insiders. The idea is to split NSO’s greatest assets from its greatest liabilities — this meant separating the code behind Pegasus and company engineers who are highly paid graduates of Israel’s elite military intelligence units, from the clients that have drawn the ire of the US and human rights groups. Hulio and a group of creditors hope that by spinning out a new entity that houses the code and engineers, it can sidestep the commerce department’s blacklist, especially if a new owner were a top US defence contractor.
What’s the outlook for NSO Group? Three possibilities strike me:
- Other companies will fill the gap. Just as Cellebrite has to deal with an upstart iPhone penetration solution, NSO Group will find that its methods provide a springboard to other innovators.
- NSO Group gets folded into a government agency. One can be sure it will not be a part of a nation state with negative thoughts about Israel.
- NSO Group folds its tent, and certain senior managers and engineers set up another company and move on.
I want to mention that the reason there is a glass ceiling for revenues from intelware and policeware is that there are a finite number of customers for the number of products and services on offer. Once that glass ceiling bumps the head of senior managers and stakeholders, then what I see as “drastic” actions kick in. Are Palantir’s comments about nuclear war an example of this?
I am certain about one thing: NSO Group is one of the most recognized brands of intelware in the world.
Stephen E Arnold, June 1, 2022