Can the Bezos Bulldozer Crush Temu, Shein, Regulators, and AI?

June 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The question, to be fair, should be, “Can the Bezos-less bulldozer crush Temu, Shein, Regulators, Subscriptions to Alexa, and AI?” The article, which appeared in the “real” news online service Venture Beat, presents an argument suggesting that the answer is, “Yes! Absolutely.”


Thanks, MSFT Copilot. Good bulldozer.

The write up “AWS AI Takeover: 5 Cloud-Winning Plays They’re [sic] Using to Dominate the Market” depends upon an Amazon Big Dog named Matt Wood, VP of AI products at AWS. The article strikes me as something drafted by a small group at Amazon and then polished to PR perfection. The reasons the bulldozer will crush Google, Microsoft, Hewlett Packard’s on-premises play, and the keep-on-searching IBM Watson, among others, are:

  1. Showcasing the names and logos of the AI companies in the “game”; for example, Anthropic, AI21 Labs, and other whale players
  2. Hitting up its partners, customers, and friends to get support for the Amazon AI wonderfulness
  3. Engineering AI to be itty bitty pieces one can use to build a giant AI solution capable of dominating D&B industry sectors like banking, energy, commodities, and any other multi-billion sector one cares to name
  4. Skipping the Google folly of dealing with consumers. Amazon wants the really big contracts with really big companies, government agencies, and non-governmental organizations.
  5. Amazon is just better at security. Those leaky S3 buckets are not Amazon’s problem. The customers failed to use Amazon’s stellar security tools. (A sketch of one such tool follows this list.)
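On that fifth point, the tooling Amazon would presumably point to does exist; whether customers turn it on is another matter. Here is a minimal, hypothetical sketch using boto3 (the bucket name is a placeholder) of enabling S3 Block Public Access, the guardrail aimed at the leaky-bucket problem:

    import boto3

    # Hypothetical bucket name. S3 Block Public Access is the bucket-level
    # guardrail for the "leaky bucket" problem: it rejects public ACLs and
    # blocks public bucket policies.
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket="example-customer-bucket",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

A one-call fix, which rather supports the “customers failed to use the tools” framing.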

Did these five points convince you?

If you did not embrace the spirit of the bulldozer, the Venture Beat article states:

Make no mistake, fellow nerds. AWS is playing a long game here. They’re not interested in winning the next AI benchmark or topping the leaderboard in the latest Kaggle competition. They’re building the platform that will power the AI applications of tomorrow, and they plan to power all of them. AWS isn’t just building the infrastructure, they’re becoming the operating system for AI itself.

Convinced yet? Well, okay. I am not on the bulldozer yet. I do hear its engine roaring, and I smell the no-longer-green emissions from the bulldozer’s data centers. Also, I am not sure Google, IBM, and Microsoft are ready to roll over and let the bulldozer crush them into the former rain forest’s red soil. I recall researching Sagemaker, which had some AI-type jargon applied to that “smart” service. Ah, you don’t know Sagemaker? Yeah. Too bad.

The rather positive-leaning Amazon write up points out that, as nifty as those five points about Amazon’s supremacy in the AI jungle are, the company has vision. Okay, it is not the customer-first idea from 1998 or so. But it is interesting. Amazon will have infrastructure. Amazon will provide model access. (I want to ask, “For how long?” but I won’t.) And Amazon will have app development.

The article includes a table providing detail about these three legs of the stool in the bulldozer’s cabin. There is also a rundown of Amazon’s recent media- and prospect-directed announcements. Too bad the article does not include hyperlinks to these documents. Oh, well.

And after about 3,300 words about Amazon, the article includes about 260 words about Microsoft and Google. That’s a good balance. Too bad IBM. You did not make the cut. And HP? Nope. You did not get an “Also participated” certificate.

Net net: Quite a document. And no mention of Sagemaker. The Bezos-less bulldozer just smashes forward. Success is in crushing. Keep at it. And that “they” in the Venture Beat article title: Shouldn’t “they” be an “it”?

Stephen E Arnold, June 27, 2024

Nerd Flame War: AI AI AI

June 27, 2024

The Internet is built on trolls and their boorish behavior. The worst of the trolls are self-confessed “experts” on anything. Every online community has its loitering trolls, and tech enthusiasts aren’t any different. In the old days of Internet lore, online verbal battles were dubbed “flame wars,” and XDA-Developers reports that OpenAI started one: “AI Has Thrown Stack Overflow Into Civil War.”

A huge argument in AI development is online content being harvested to train large language models (LLMs). Writers and artists were rightly upset when their work was used to train image and writing algorithms. OpenAI recently partnered with Stack Overflow to collect data, and the users aren’t happy. Stack Overflow is a renowned tech support community for sysadmins, developers, and programmers. Stack Overflow even brags that it is the world’s largest developer community.

Stack Overflow users are angry because they weren’t asked for permission to use their content in AI training models, and they don’t like the platform’s response to their protests. Users are deleting their posts or altering them to display incorrect information in protest. In response, Stack Overflow is restoring deleted and altered posts, temporarily suspending users who delete content, and hiding behind the terms of service. The entire situation is explained in the write up:

“Delving into discussion online about OpenAI and Stack Overflow’s partnership, there’s plenty to unpack. The level of hostility towards Stack Overflow varies, with some users seeing their answers as being posted online without conditions – effectively free for all to use, and Stack Overflow granting OpenAI access to that data as no great betrayal. These users might argue that they’ve posted their answers for the betterment of everyone’s knowledge, and don’t place any conditions on its use, similar to a highly permissive open source license.

Other users are irked that Stack Overflow is providing access to an open-resource to a company using it to build closed-source products, which won’t necessarily better all users (and may even replace the site they were originally posted on.) Despite OpenAI’s stated ambition, there is no guarantee that Stack Overflow will remain freely accessible in perpetuity, or that access to any AIs trained on this data will be free to the users who contributed to it.”

Reddit and other online communities are facing the same problems. Content from Stack Overflow and Reddit is used to train generative AI algorithms like ChatGPT. OpenAI’s ChatGPT is regarded as overblown because it continues to fail multiple tests. We know, however, that generative AI will improve with time. We also know that people will use the easiest solution, and generative AI chatbots will become those tools. It’s easier to ask or type a question than to search.

Whitney Grace, June 27, 2024

Can Anthropic Break Into the AI Black Box?

June 20, 2024

The inner workings of large language models have famously been a mystery, even to their creators. That is a problem for those who would like transparency around pivotal AI systems. Now, however, Anthropic may have found the solution. Time reports, “No One Truly Knows How AI Systems Work. A New Discovery Could Change That.” If the method pans out, this will be perfect for congressional hearings and antitrust testimony. Reporter Billy Perrigo writes:

“Researchers developed a technique for essentially scanning the ‘brain’ of an AI model, allowing them to identify collections of neurons—called ‘features’—corresponding to different concepts. And for the first time, they successfully used this technique on a frontier large language model, Anthropic’s Claude Sonnet, the lab’s second-most powerful system. In one example, Anthropic researchers discovered a feature inside Claude representing the concept of ‘unsafe code.’ By stimulating those neurons, they could get Claude to generate code containing a bug that could be exploited to create a security vulnerability. But by suppressing the neurons, the researchers found, Claude would generate harmless code. The findings could have big implications for the safety of both present and future AI systems. The researchers found millions of features inside Claude, including some representing bias, fraudulent activity, toxic speech, and manipulative behavior. And they discovered that by suppressing each of these collections of neurons, they could alter the model’s behavior. As well as helping to address current risks, the technique could also help with more speculative ones.”
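To make the stimulate-and-suppress idea concrete, here is a toy sketch in PyTorch. It is emphatically not Anthropic’s implementation (the published work relies on sparse autoencoders trained over a model’s activations to find features); it only illustrates the steering step of nudging one layer’s activations along a feature direction. All names, dimensions, and values are hypothetical:

    import torch

    # Treat a "feature" as a direction in one layer's activation space.
    hidden_dim = 768
    feature = torch.randn(hidden_dim)
    feature = feature / feature.norm()  # unit-length feature direction

    def steering_hook(strength):
        """Forward hook nudging a layer's output along `feature`.

        strength > 0 roughly "stimulates" the feature;
        strength < 0 suppresses it.
        """
        def hook(module, inputs, output):
            return output + strength * feature.to(output.dtype)
        return hook

    # Placeholder module standing in for a transformer block whose
    # output has shape (batch, seq, hidden_dim).
    layer = torch.nn.Linear(hidden_dim, hidden_dim)
    handle = layer.register_forward_hook(steering_hook(4.0))  # stimulate
    activations = layer(torch.randn(1, 8, hidden_dim))
    handle.remove()
    # Passing -4.0 instead would suppress the feature.

The hard part of the research is finding directions worth steering; the steering itself, as the sketch suggests, is cheap.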

The researchers hope their method will replace “red-teaming,” where developers chat with AI systems in order to uncover toxic or dangerous traits. On the as-yet theoretical chance an AI gains the capacity to deceive its creators, the more direct method would be preferred.

A happy side effect of the method could be better security. Anthropic states that being able to directly manipulate AI features may allow developers to head off AI jailbreaks. The research is still in the early stages, but Anthropic is singing an optimistic tune.

Cynthia Murrell, June 20, 2024

Great Moments in Smart Software: IBM Watson Gets to Find Its Future Elsewhere Again

June 19, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The smart software game is a tough one. Whip up some compute, download the models, and go go go. Unfortunately artificial intelligence is artificial and often not actually intelligent. I read an interesting article in Time Magazine (who knew it was still in business?). The story has a clickable title: “McDonald’s Ends Its Test Run of AI Drive-Throughs With IBM.” The juicy word AI, the big brand McDonald’s, and the pickle on top: IBM.


A college student tells the smart software system at a local restaurant that his order was misinterpreted. Thanks, MSFT Copilot. How’s your “recall” today? What about system security? Oh, that’s too bad.

The write up reports with the glee of a kid getting a happy meal:

McDonald’s automated order taker with IBM received scores of complaints in recent years, for example — with many taking to social media to document the chatbot misunderstanding their orders.

Consequently, the IBM fast food service has been terminated.

Time’s write up included a statement from Big Blue too:

In an initial statement, IBM said that “this technology is proven to have some of the most comprehensive capabilities in the industry, fast and accurate in some of the most demanding conditions,” but did not immediately respond to a request for further comment about specifics of potential challenges.

IBM suggested its technology could help fight cancer in Houston a few years ago. How did that work out? That smart software worker had an opportunity to find its future elsewhere. The career trajectory, at first glance, seems to run from medicine to grilling burgers. The path seems to be heading down to Sleepy Town.

What’s the future of the IBM smart software test? The write up points out:

Both IBM and McDonald’s maintained that, while their AI drive-throughs partnership was ending, the two would continue their relationship on other projects. McDonald’s said that it still plans to use many of IBM’s products across its global system.

But Ronald McDonald has to be practical. The article adds:

In December, McDonald’s launched a multi-year partnership with Google Cloud. In addition to moving restaurant computations from servers into the cloud, the partnership is also set to apply generative AI “across a number of key business priorities” in restaurants around the world.

Google’s smart software has been snagged in some food controversies too. The firm’s smart system advised users to put glue on pizza to make the cheese topping stick better. Yum.

Several observations seem to be warranted:

  1. Practical and money-saving applications of IBM’s smart software do not have the snap, crackle, and pop of OpenAI’s PR coup with Microsoft in January 2023. Time is writing about IBM, but the case example is not one that makes me crave this particular application. Customers want a sandwich, not something they did not order.
  2. Examples of reliable smart software applications which require spontaneous reaction to people ordering food or asking basic questions are difficult to find. Very narrow applications of smart software do result in positive case examples; in some law enforcement software (what I call policeware), the automatic processes of some vendors’ solutions work well. Automatic report generation in the Shadowdragon Horizon system is one example.
  3. Big companies spend money, catch attention, and then have to spend more money to remediate and clean up the negative publicity.

Net net: More small-scale testing and less publicity chasing seem to be two items to add to the menu. And, Watson, keep on trying. Google is.

Stephen E Arnold, June 19, 2024


Palantir: Fear Is Good. Fear Sells.

June 18, 2024

President Eisenhower may not have foreseen AI when he famously warned of the military-industrial complex, but certain software firms certainly fit the bill. One of the most successful, Palantir, is pursuing Madison Avenue-style marketing with a message of alarm. The company’s co-founder, Alex Karp, is quoted in the fear-mongering post at right-wing Blaze Media, “U.S. Prepares for War Amid Growing Tensions that China Could Invade Taiwan.”

After several paragraphs of panic over tensions between China and Taiwan, writer Collin Jones briefly admits “It is uncertain if and when the Chinese president will deploy an attack against the small country.” He quickly pivots to the scary AI arms race, intimating Palantir and company can save us as long as we let (fund) them. The post concludes:

“Palantir’s CEO and co-founder Alex Karp said: ‘The way to prevent a war with China is to ramp up not just Palantir, but defense tech startups that produce software-defining weapons systems that scare the living F out of our adversaries.’ Karp noted that the U.S. must stay ahead of its military opponents in the realm of AI. ‘Our adversaries have a long tradition of being not interested in the rule of law, not interested in fairness, not interested in human rights and on the battlefield. It really is going to be us or them. You do not want a world order where our adversaries try to define new norms. It would be very bad for the world, and it would be especially bad for America,’ Karp concluded.”

Wow. But do such scare tactics work? Of course they do. For instance, we learn from DefenseScoop, “Palantir Lands $480M Army Contract for Maven Artificial Intelligence Tech.” That article reports on not one but two Palantir deals: the titular Maven expansion and, we learn:

“The company was recently awarded another AI-related deal by the Army for the next phase of the service’s Tactical Intelligence Targeting Access Node (TITAN) ground station program, which aims to provide soldiers with next-generation data fusion and deep-sensing capabilities via artificial intelligence and other tools. That other transaction agreement was worth $178 million.”

Those are just two recent examples of Palantir’s lucrative government contracts, ones that have not, as of this writing, been added to this running tally. It seems the firm has found its winning strategy. Ramping up tensions between world powers is a small price to pay for significant corporate profits, apparently.

Cynthia Murrell, June 18, 2024

AI May Not Be Magic: The Salesforce Signal

June 10, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Salesforce has been a steady outfit. However, the company suffered a revenue miss, its first in about a quarter century. The news reports cited broad economic factors like “macro headwinds.” According to the firm’s chief people officer, the company has been experimenting with AI for “over a decade.” But the magic of AI was not able to ameliorate the company’s dip or add some chrome trim to its revenue guidance.


John Milton’s god character from Paradise Lost watches the antics of super-sophisticated artificial intelligence algorithms. This character quickly realizes that zeros and ones crafted by humans and enhanced by smart machines are definitely not an omniscient, omnipresent, and omnipotent character who knows everything before it happens, no matter what the PR firms or company spokespersons assert. Thanks, MSFT Copilot. Good enough.

Piecing together fragments of information, it appears that AI has added to the company’s administrative friction. Consider these administrative process examples from a Fortune interview recycled for MSN.com:

  • The company has deployed 50 AI tools.
  • Salesforce has an AI governance council.
  • There is an Office of Ethical and Humane Use, started in 2019.
  • Salesforce uses surveys to supplement its “robust listening strategies.”
  • There are phone calls and meetings.

Some specific uses of AI appear to address inherent design constraints in Salesforce software; for example, AI has:

saved employees 50,000 hours within one business quarter, and the bot answered nearly 370,000 employee questions, according to the company. Merging into Project Basecamp, the company’s project management platform, has resolved 88,000 worker requests, speeding up issue resolution from an average of 48 hours to just 30 minutes.

What’s the payoff to the bottom line? That information is scant. What we know is that Salesforce may not be benefiting from the additional AI investment, and AI’s bureaucratic processes impose friction on the company.

What’s this mean for those who predict that AI will change everything? I continue to think about the two ends of the spectrum: the “go fast and break things” crowd and the “stop AI” contingent.

First, the type of AI that does high school essay writing is easy to understand. These systems work as long as the subject matter clumps into piles of factoids which limit the craziness of the algorithms’ outputs. The topic “How to make a taco” is nailed down. The topic “How to decrypt Telegram’s encryption system” is not. Big brains can explain why the taco question is relatively hallucination free but not why the Telegram question generates useless drivel. I have, therefore, concluded, “Limited, narrow domain questions are okay for AI.”

Second, the current systems are presented as super wonderful. An example is the steady flow of PR about Google DeepMind’s contributions to biological science. Yet Google’s search system generates baloney. I think the difference is that whacking away at proteins is a repetitive combinatorial problem. Calling the methods AI is similar to describing Daylight Chemical Information Systems as a manifestation of the Oracle at Delphi; it is hogwash. PR erases important differences in critical lines of research. Does Google DeepMind feel shame? Let’s ask IBM Watson. That will be helpful. PR has a role; it is not AI.

Third, the desire for a silver bullet is deep-seated in many Peter Principle managers. These “leaders” of “leadership teams” don’t know what to do. Managing becomes figuring out risks. AI has legs, so let’s give that pony a chance to win the cart race. But pony cart races are trivial. The real races require winning three competitions. Few horses pull off that trick. I watch in wonder the launch, retreat, PR explanation, and next launch of some AI outfits. The focus seems to be on getting $20 per month. Degrading the service. Asking for more money. Then repeat.

The lack of AI innovation is becoming obvious. From the starter’s gun cracking in time with Microsoft’s AI announcement in January 2023, how much progress has been made?

We have the Salesforce financial report. We have the management craziness at OpenAI. We have Microsoft investing in or partnering with a number of technology outfits, including one in Paris. We have Google just doddering and fumbling. We have lawsuits. We have craziness like Adobe’s “owning” any image created with its software. We have start ups which bandy about the term “AI” like a shuttlecock in a high school badminton league in India. We have so many LinkedIn AI experts, I marvel that no one pins these baloney artists to a piece of white bread. We have the Dutch police emphasizing home-grown AI which helped make sense of the ANOM phone stings when the procedures are part of most policeware systems. Statistics, yes. AI, no. Clustering, yes. AI, no. Metadata assignment, yes. AI, no. The ANOM operation ran from about 2017 to its shutdown four years later. AI? Nope.
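The clustering jab is easy to make concrete. A minimal sketch with scikit-learn on made-up message metadata (all numbers hypothetical) shows that the kind of grouping policeware vendors ship is workhorse statistics, not “AI”:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-device metadata: (messages per day, distinct contacts).
    usage = np.array(
        [[120, 4], [115, 5], [8, 40], [10, 38], [60, 20], [9, 35]]
    )
    # Plain k-means: partition devices into two behavioral groups.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(usage)
    print(labels)  # e.g., an array of 0s and 1s marking the two groups

Useful for making sense of a pile of intercepted metadata; intelligent, no.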

What does the lack of financial payoff and revenue-generating AI solutions tell me? My answer to this question is:

  1. The costs of using, and letting prospects use, an AI system are high. Due to the lack of a Triple Crown contender, no company has the horse or can afford the costs of getting the nag ready to race and keeping the animal from keeling over dead.
  2. The tangible results are tough to express. Despite the talk about reducing the costs of customer service, the payoff is not evident to me once one counts the cost of the AI system and the need to have humans ride herd on what the crazed cattle-like algorithms yield. The Salesforce experience is that AI cannot fix the Slack system or make it generate oodles of cost savings or revenues from new, happy customers.
  3. The AI systems, particularly the services promoted via Product Hunt, are impossible for me to differentiate. Some do images, but the functions are similar. Some AI systems do text things. Okay. But what’s new? Money is being spent to produce endless variations and me-too services. Fun for some. But boring and a waste of time to a dinobaby like me.

Net net: With economic problems growing in numerous sectors, those with money, or a belief that garlic will kill Count Vampire, Baron of Revenue Loss, are in for a surprise. Sorry. No software equivalent to Milton’s eternal, all-knowing, omnipotent God. I won’t tell the PR people. That Salesforce signal is meaningful.

Stephen E Arnold, June 10, 2024

Selling AI with Scare Tactics

June 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Ah, another article with more assertions to make workers feel they must adopt the smart software that threatens their livelihoods. AI automation firm UiPath describes “3 Common Barriers to AI Adoption and How to Overcome Them.” Before marketing director Michael Robinson gets to those barriers, he tries to motivate readers who might be on the fence about AI. He writes:

“There’s a growing consensus about the need for businesses to embrace AI. McKinsey estimated that generative AI could add between $2.6 to $4.4 trillion in value annually, and Deloitte’s ’State of AI in the Enterprise’ report found that 94% of surveyed executives ‘agree that AI will transform their industry over the next five years.’ The technology is here, it’s powerful, and innovators are finding new use cases for it every day. But despite its strategic importance, many companies are struggling to make progress on their AI agendas. Indeed, in that same report, Deloitte estimated that 74% of companies weren’t capturing sufficient value from their AI initiatives. Nevertheless, companies sitting on the sidelines can’t afford to wait any longer. As reported by Bain & Company, a ‘larger wedge’ is being driven ‘between those organizations that have a plan [for AI] and those that don’t—amplifying advantage and placing early adopters into stronger positions.’”

Oh, no! What can the laggards do? Fret not, the article outlines the biggest hurdles: lack of a roadmap, limited in-house expertise, and security or privacy concerns. Curious readers can see the post for details about each. As it happens, software like UiPath’s can help businesses clear every one. What a coincidence.

Cynthia Murrell, June 6, 2024

Publication Founded by a Googler Cheers for Google AI Search

June 5, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

To understand the “rah rah” portion of this article, you need to know the backstory behind Search Engine Land, a news site about search and other technology. It was founded by Danny Sullivan, who pushed the SEO bandwagon. He did this because he was angling for a job at Google. He succeeded, and now he’s Google’s point person for SEO.

Another press release touting the popularity of Google search dropped: “Google CEO Says AI Overviews Are Increasing Search Usage.” Author Danny Goodwin remains skeptical about Google’s AI-driven popularity spike, despite the bias of Search Engine Land’s founder.

During the Q1 2024 Alphabet earnings call, Google/Alphabet CEO Sundar Pichai said that the search engine’s generative AI has been used for billions of queries and there are plans to develop the feature further. Pichai said positive things about AI, including that it increased user engagement, that it could answer more complex questions, and that there will be opportunities for monetization.

Goodwin wrote:

“All signs continue to indicate that Google is continuing its slow evolution toward a Search Generative Experience. I’m skeptical about user satisfaction increasing, considering what an unimpressive product AI overviews and SGE continues to be. But I’m not the average Google user – and this was an earnings call, where Pichai has mastered the art of using a lot of words to say a whole lot of nothing.”

AI is the next evolution of search and Google is heading the parade, but the technology still has tons of bugs. Who founded the publication? A Googler. Of course there is no interaction between the online ad outfit and an SEO mouthpiece. Uh-uh. No way.

Whitney Grace, June 5, 2024

So AI Is — Maybe, Just Maybe — Not the Economic Big Kahuna?

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I find it amusing how AI has become the go-to marketing word. I suppose if I were desperate, lacking an income, unsure about what will sell, and a follow-the-hyperbole type of person, I would shout, “AI.” Instead I vocalize, “Ai-Yai-Ai,” emulating the tones of a Central American death whistle. Yep, “Ai-Yai-Ai.”


Thanks, MSFT Copilot. A harbinger? Good enough.

I read “MIT Professor Hoses Down Predictions AI Will Put a Rocket under the Economy.” I won’t comment upon the fog of distrust which I discern around Big Name Universities, nor will I focus my adjustable Walgreen’s spectacles on MIT’s fancy dancing with the quite interesting and decidedly non-academic Jeffrey Epstein. Nope. Forget those two factoids.

The write up reports:

…Daron Acemoglu, professor of economics at Massachusetts Institute of Technology, argues that predictions AI will improve productivity and boost wages in a “blue-collar bonanza” are overly optimistic.

The good professor is rowing against the marketing current. According to the article, he identifies some wild and crazy forecasts. One of these is from an investment bank whose clients are unlikely to be what some one percenters perceive as non-masters of the universe.

That’s interesting. But it pales in comparison to the information in “Few People Are Using ChatGPT and Other AI Tools Regularly, Study Suggests.” (I love suggestive studies!) That write up reports on a study involving Thomson Reuters, the “trust” outfit:

Carried out by the Reuters Institute and Oxford University and involving 6,000 respondents from the U.S., U.K., France, Denmark, Japan, and Argentina, the researchers found that OpenAI’s ChatGPT is by far the most widely used generative-AI tool and is two or three times more widespread than the next most widely used products — Google Gemini and Microsoft Copilot. But despite all the hype surrounding generative AI over the last 18 months, only 1% of those surveyed are using ChatGPT on a daily basis in Japan, 2% in France and the UK, and 7% in the U.S. The study also found that between 19% and 30% of the respondents haven’t even heard of any of the most popular generative AI tools, and while many of those surveyed have tried using at least one generative-AI product, only a very small minority are, at the current time, regular users deploying them for a variety of tasks.

My hunch is that these contrarians want clicks. Well, the tactic worked for me. However, how many of those in AI-Land will take note? My thought is that these anti-AI findings are likely to be ignored until some of the Big Money folks lose their cash. Then the voices of negativity will be heard.

Several observations:

  1. The economics of AI seem similar to some early online ventures like Pets.com (not “all,” mind you, just some)
  2. Expertise in AI may not guarantee a job at a high-flying techno-feudalist outfit
  3. The difficulties Google appears to be having suggest that the road to AI-Land on the information superhighway may have some potholes. (If Google cannot pull AI off, how can Bob’s Trucking Company armed with Microsoft Word with Copilot?)

Net net: It will be interesting to monitor the frequency of “AI balloon deflating” analyses.

Stephen E Arnold, June 3, 2024


NSO Group: Making Headlines Again and Again and Again

May 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

NSO Group continues to generate news. One example is the company’s flagship sponsorship of an interesting conference going on in Prague from June 4th to the 6th. What does “interesting” mean? I think those who attend the conference are engaged in information-related activities connected in some way to law enforcement and intelligence. How do I know NSO Group ponied up big bucks to be the “lead sponsor”? Easy. I saw this advertisement on the conference organizer’s Web site. I know you want me to reveal the URL, but I will treat the organizer in a professional manner. Just use those Google Dorks, and you will locate the event. The ad:

[NSO Group’s conference advertisement]

What’s the ad from the “lead sponsor” say? Here are a few snippets from the marketing arm of NSO Group:

NSO Group develops and provides state-of-the-art solutions, designed to assist in preventing terrorism and crime. Our solutions address diverse strategical, tactical and operational needs and scenarios to serve authorized government agencies including intelligence, military and law enforcement. Developed by the top technology and data science experts, the NSO portfolio includes cyber intelligence, network and homeland security solutions. NSO Group is proud to help to protect lives, security and personal safety of citizens around the world.

Innocent stuff with the flavor jargon-loving Madison Avenue types prefer.


Citizen Lab is a bit like the mules in an old-fashioned grist mill. The researchers do not change what they think about. Source: Royal Mint Museum in the UK.

Just for some fun, let’s look at the NSO Group through a different lens. The UK newspaper The Guardian, which counts how many stories I look at a year, published “Critics of Putin and His Allies Targeted with Spyware Inside the EU.” Here’s a sample of the story’s view of NSO Group:

At least seven journalists and activists who have been vocal critics of the Kremlin and its allies have been targeted inside the EU by a state using Pegasus, the hacking spyware made by Israel’s NSO Group, according to a new report by security researchers. The targets of the hacking attempts – who were first alerted to the attempted cyber-intrusions after receiving threat notifications from Apple on their iPhones – include Russian, Belarusian, Latvian and Israeli journalists and activists inside the EU.

And who wrote the report?

Access Now, the Citizen Lab at the Munk School of Global Affairs & Public Policy at the University of Toronto (“the Citizen Lab”), and independent digital security expert Nikolai Kvantiliani

The Citizen Lab has been paying attention to NSO Group for years. The people surveilled or spied upon via the NSO Group’s Pegasus technology are anti-Russia; that is, none of the entities will be invited to a picnic at Mr. Putin’s estate near Sochi.

Obviously some outfit has access to the Pegasus software and its command-and-control system. It is unlikely that NSO Group provided the software free of charge. Therefore, one can conclude that NSO Group could reveal what country was using its software for purposes one might consider outside the bounds of the write up’s words cited above.

NSO Group remains one of the — if not the main — poster children for specialized software. The company continues to make headlines. Its technology remains one of the leaders in the type of software which can be used to obtain information from a mobile device. There are some alternatives, but NSO Group remains the Big Dog.

One wonders why Israel, presumably with the Pegasus tool, could not have obtained information relevant to the attack in October 2023. My personal view is that, even with Fancy Dan ways to get data from a mobile phone, human analysts still have to figure out what is important and what to flag as significant.

My point is that the hoo-hah about NSO Group and Pegasus may not be warranted. Without trained analysts and downstream software, an organization may have difficulty getting the information required to take a specific action. Israel’s intelligence lapse suggests that software alone cannot do the job. No matter what the marketing material says or how slick the slide deck used to brief those with a “need to know” appears — software is not intelligence.

Will NSO Group continue to make headlines? Probably. Those with access to Pegasus will make errors and disclose their ineptness. Citizen Lab will be at the ready. New reports will be forthcoming.

Net net: Is anyone surprised Mr. Putin is trying to monitor anti-Russia voices? Is Pegasus the only software pressed into service? My answer to this question is: “Mr. Putin will use whatever tool he can to achieve his objectives.” Perhaps Citizen Lab should look for other specialized software and expand its opportunities to write reports? When will Apple address the vulnerability which NSO Group continues to exploit?

Stephen E Arnold, May 31, 2024
