NSO Group Determines Public Officials Are Legitimate Targets

July 12, 2024

Well, that is a point worth making if one is the poster child of the specialized software industry.

NSO Group, makers of the infamous Pegasus spyware, makes a bold claim in a recent court filing: “Government and Military Officials Fair Targets of Pegasus Spyware in All Cases, NSO Group Argues,” reports cybersecurity news site The Record. The case at hand is Pegasus’ alleged exploitation of a WhatsApp vulnerability back in 2019. Reporter Suzanne Smalley cites former United Nations official David Kaye, who oversaw the right to free expression at that time. Smalley writes:

“Friday’s filing seems to suggest a broader purpose for Pegasus, Kaye said, pointing to NSO’s explanation that the technology can be used on ‘persons who, by virtue of their positions in government or military organizations, are the subject of legitimate intelligence investigations.’ ‘This appears to be a much more extensive claim than made in 2019, since it suggests that certain persons are legitimate targets of Pegasus without a link to the purpose for the spyware’s use,’ said Kaye, who was the U.N.’s special rapporteur on freedom of opinion and expression from 2014 to 2020. … The Israeli company’s statement comes as digital forensic researchers are increasingly finding Pegasus infections on phones belonging to activists, opposition politicians and journalists in a host of countries worldwide. NSO Group says it only sells Pegasus to governments, but the frequent and years-long discoveries of the surveillance technology on civil society phones have sparked a public uproar and led the U.S. government to crack down on the company and commercial spyware manufacturers in general.”

See the article for several examples of suspected targets around the world. We understand both the outrage and the crackdown. However, publicly arguing about the targets of spyware may have unintended consequences. Now everyone knows about mobile phone data exfiltration and how that information can be used to great effect.

As for the WhatsApp court case, it is proceeding at the sluggish speed of justice. In March 2024, a California federal judge ordered NSO Group to turn over its secret spyware code. What will be the verdict? When will it be handed down? And what about the firm’s senior managers?

Cynthia Murrell, July 12, 2024

OpenAI Says, Let Us Be Open: Intentionally or Unintentionally

July 12, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read a troubling but not too surprising write up titled “ChatGPT Just (Accidentally) Shared All of Its Secret Rules – Here’s What We Learned.” I have somewhat skeptical thoughts about how big-time organizations implement, manage, maintain, and enhance their security. It is more fun and interesting to think about moving fast, breaking things, and dominating a market sector. In my years of dinobaby experience, I can report this about senior management thinking about cyber security:

  1. Hire a big name and let that person figure it out
  2. Ask the bean counter and hear something like this, “Security is expensive, and its monetary needs are unpredictable and usually quite large and just go up over time. Let me know what you want to do.”
  3. The head of information technology will say, “I need to license a different third party tool and get those cyber experts from [fill in your own preferred consulting firm’s name].”
  4. How much is the ransom compared to the costs of dealing with our “security issue”? Just do what costs less.
  5. I want to talk right now about the meeting next week with our principal investor. Let’s move on. Now!

The captain of the good ship OpenAI asks a good question. Unfortunately, the situation seems to be somewhat problematic. Thanks, MSFT Copilot.

The write up reports:

ChatGPT has inadvertently revealed a set of internal instructions embedded by OpenAI to a user who shared what they discovered on Reddit. OpenAI has since shut down the unlikely access to its chatbot’s orders, but the revelation has sparked more discussion about the intricacies and safety measures embedded in the AI’s design. Reddit user F0XMaster explained that they had greeted ChatGPT with a casual "Hi," and, in response, the chatbot divulged a complete set of system instructions to guide the chatbot and keep it within predefined safety and ethical boundaries under many use cases.

Another twist to the OpenAI governance approach is described in “Why Did OpenAI Keep Its 2023 Hack Secret from the Public?” That is a good question, particularly for an outfit which is all about “open.” This article gives the wonkiness of OpenAI’s technology some dimensionality. The article reports:

Last April [2023], a hacker stole private details about the design of Open AI’s technologies, after gaining access to the company’s internal messaging systems. …

OpenAI executives revealed the incident to staffers in a company all-hands meeting the same month. However, since OpenAI did not consider it to be a threat to national security, they decided to keep the attack private and failed to inform law enforcement agencies like the FBI.

What’s more, with OpenAI’s commitment to security already being called into question this year after flaws were found in its GPT store plugins, it’s likely the AI powerhouse is doing what it can to evade further public scrutiny.

What these two separate items suggest to me is that the decider(s) at OpenAI decide to push out products which are not carefully vetted. Second, when something OpenAI does not find amusing surfaces, the company appears to zip its sophisticated lips. (That’s the opposite of divulging “secrets” via ChatGPT, isn’t it?)

Is the company OpenAI well managed? I certainly do not know from first-hand experience. However, it seems to me that the company is a trifle erratic. Imagine: a few months ago the Chief Technical Officer allegedly did not know whether YouTube data were used to train ChatGPT. Then came the breach and the silence about it. And, finally, the OpenAI customer who stumbled upon company secrets in a ChatGPT output.

Please, make your own decision about the company. Personally I find it amusing to identify yet another outfit operating with the same thrilling erraticism as other Sillycon Valley meteors. And security? Hey, let’s talk about August vacations.

Stephen E Arnold, July 12, 2024

Big Plays or Little Plays: The Key to AI Revenue

July 11, 2024

I keep thinking about the billions and trillions of dollars required to create a big AI win. A couple of snappy investment banks have edged toward the idea that AI might not pay off with tsunamis of money right away. The fix is to become a broker of GPU cycles or to humble-brag about how more money is needed to fund what venture people want to be the next big thing. Yep, AI: a couple of winners, and the rest are losers, at least in terms of a payoff scale whacked around like a hapless squash ball at the New York Athletic Club.

However, a radical idea struck me as I read a report from the news service that oozes “trust.” The Reuters story is “China Leads the World in Adoption of Generative AI, Survey Shows.” Do I trust surveys? Not really. Do I trust trusted “real” news outfits? Nope, not really. But the write up includes an interesting statement, and the report sparked what is, for me, a new idea.

First, here’s the passage I circled:

“Enterprise adoption of generative AI in China is expected to accelerate as a price war is likely to further reduce the cost of large language model services for businesses. The SAS report also said China led the world in continuous automated monitoring (CAM), which it described as “a controversial but widely-deployed use case for generative AI tools”.”

I interpreted this to mean:

  • Small and big uses of AI in somewhat mundane tasks
  • Lots of small uses with more big outfits getting with the AI program
  • AI allows nifty monitoring which is going to catch the attention of some Chinese government officials who may be able to repurpose these focused applications of smart software

With models like the nifty Meta Facebook Zuck concoction available as open source, big technology is within reach. Furthermore, the idea of applying smart software to small problems makes sense. The approach avoids the Godzilla lumbering associated with some outfits, and fast iteration with fast failures provides useful factoids for other developers.

The “real” news report does not provide numbers or much in the way of analysis. I think the idea of small-scale applications does not make sense when one is eating fancy food at a smart software briefing in midtown Manhattan. Small is not going to generate that big wave of money from AI. The money is needed to raise more money.

My thought is that the Chinese approach has value because it surfs on open source and on proprietary information known to Chinese companies solving, or trying to solve, a narrow problem. Also, the crazy pace of try-fail, try-fail enables acceleration of what works. Failures translate to lessons about which lousy paths to avoid.

Therefore, my reaction to the “real” news about the survey is that China may be in a position to do better, faster, and cheaper AI applications than the Godzilla outfits. The chase for big money exists, but in the US without big money, who cares? In China, big money may not be as large as the pile of cash some VCs and entrepreneurs argue is absolutely necessary.

So what? The “let many flowers bloom” idea applies to AI. That’s a strength possibly neither appreciated nor desired by the US AI crowd. Combined with China’s patent surge, my new thought translates to “oh, oh.”

Stephen E Arnold, July 11, 2024

Cloudflare, What Else Can You Block?

July 11, 2024

I spotted an interesting item in Silicon Angle. The article is “Cloudflare Rolls Out Feature for Blocking AI Companies’ Web Scrapers.” I think this is the main point:

Cloudflare Inc. today debuted a new no-code feature for preventing artificial intelligence developers from scraping website content. The capability is available as part of the company’s flagship CDN, or content delivery network. The platform is used by a sizable percentage of the world’s websites to speed up page loading times for users. According to Cloudflare, the new scraping prevention feature is available in both the free and paid tiers of its CDN.

Cloudflare is what I call an “enabler.” For example, when one tries to do some domain research, one often encounters Cloudflare, not the actual IP address of the service. This year I have been doing some talks for law enforcement and intelligence professionals about Telegram and its Messenger service. Guess what? Telegram is a Cloudflare customer. My team and I have encountered other interesting services which use Cloudflare the way Natty Bumppo’s sidekick used branches to obscure footprints in the forest.

Cloudflare has other capabilities too; for instance, the write up reports:

Cloudflare assigns every website visit that its platform processes a score of 1 to 99. The lower the number, the greater the likelihood that the request was generated by a bot. According to the company, requests made by the bot that collects content for Perplexity AI consistently receive a score under 30.

I wonder what less salubrious Web site operators score. Yes, there are some pretty dodgy outfits that may be arguably worse than an AI outfit.
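
For those who like to see the plumbing, here is a minimal sketch of how a site operator might act on that score from a Cloudflare Worker. It assumes a zone with Cloudflare’s Bot Management enabled, which exposes the score on the incoming request; the threshold and the 403 response are my illustrative choices, not Cloudflare’s recommended settings.

```typescript
// Minimal sketch, not production code: turn away traffic whose bot
// score falls in the range the article says Perplexity's crawler
// receives. Assumes Bot Management is enabled on the zone, which
// populates request.cf.botManagement.score (1 = almost certainly a
// bot, 99 = almost certainly human).
export default {
  async fetch(request: Request): Promise<Response> {
    const score = (request.cf as any)?.botManagement?.score;
    if (typeof score === "number" && score < 30) {
      // Illustrative policy: sub-30 scores get a 403 instead of content.
      return new Response("Automated scraping is not permitted.", { status: 403 });
    }
    return fetch(request); // pass human-looking traffic through to the origin
  },
};
```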

The information in this Silicon Angle write up raises a question, “What other content blocking and gatekeeping services can Cloudflare provide?”

Stephen E Arnold, July 11, 2024

Common Sense from an AI-Centric Outfit: How Refreshing

July 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In the wild and wonderful world of smart software, common sense is often tucked beneath a stack of PowerPoint decks and vaporized by jargon-spouting experts in artificial intelligence. I want to highlight “Interview: Nvidia on AI Workloads and Their Impacts on Data Storage.” An Nvidia poohbah named Charlie Boyle output some information that is often ignored by quite a few of those riding the AI pony to the pot of gold at the end of the AI rainbow.

The King Arthur of senior executives is confident that in his domain he is the master of his information. By the way, this person has an MBA, a law degree, and a CPA certification. His name is Sir Walter Mitty of Dorksford, near Swindon. Thanks, MSFT Copilot.  Good enough.

Here’s the pivotal statement in the interview:

… a big part of AI for enterprise is understanding the data you have.

Yes, the dwellers in carpetland typically operate with some King Arthur type myths galloping around the castle walls; specifically:

Myth 1: We have excellent data

Myth 2: We have a great deal of data and more arriving every minute our systems are online

Myth 3: Our data are available and in just a few formats. Processing the information is going to be pretty easy.

Myth 4: Our IT team can handle most of the data work. We may not need any outside assistance for our AI project.

Will companies map these myths to their reality? Nope.

The Nvidia expert points out:

…there’s a ton of ready-made AI applications that you just need to add your data to.

“Ready made”: just like a Betty Crocker cake mix my grandmother thought tasted fake, not as good as homemade. Granny’s comment could be applied to some of the AI tests my team has tracked; for example, the Big Apple’s chatbot outputting comments which violated city laws or the exciting McDonald’s smart ordering system. Sure, I like bacon on my on-again, off-again soft serve frozen dessert. Doesn’t everyone?

The Nvidia expert offers this comment about storage:

If it’s a large model you’re training from scratch you need very fast storage because a lot of the way AI training works is they all hit the same file at the same time because everything’s done in parallel. That requires very fast storage, very fast retrieval.

Is that a problem? Nope. Just crank up the cloud options. No big deal, except it is. There are costs and time to consider. But otherwise this is no big deal.
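
To make the quoted point concrete, here is a toy sketch, my own illustration and nothing from Nvidia, of the access pattern Mr. Boyle describes: many data-parallel workers issuing reads against the same shard at the same moment, so the slowest read gates every one of them. The shard path and worker count are made up.

```typescript
// Toy illustration: simulate N training workers hitting the same
// dataset shard at once, as data-parallel training does at the start
// of each step. With slow storage, the wall-clock time balloons and
// every (expensive) GPU sits idle while the reads drain.
import { readFile } from "node:fs/promises";

async function simulateStep(shardPath: string, workers: number): Promise<void> {
  const start = Date.now();
  // All reads are issued simultaneously; the slowest one gates the step.
  await Promise.all(Array.from({ length: workers }, () => readFile(shardPath)));
  console.log(`${workers} concurrent reads took ${Date.now() - start} ms`);
}

simulateStep("./dataset-00000.shard", 64).catch(console.error); // path is illustrative
```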

The article contains one gem and then wanders into marketing “don’t worry” territory.

From my point of view, the data issue is the big deal. Bad, stale, and incomplete information in oddball formats — these exist in organizations now. The mass of data may include 40 percent or more that has never been accessed. Other data are backups which contain versions of files with errors, copyright-protected data, and Boy Scout trip plans. (Yep, non-work information on “work” systems.)

Net net: The data issue is an important one to consider before getting into the “let’s deploy a customer support smart chatbot” phase. Will carpetland dwellers focus on the first step? Not too often. That’s why some AI projects get lost or just succumb to rising, uncontrollable costs. Moving data? No problem. Bad data? No problem. Useful AI system? Hmmm. How much does storage cost anyway? Oh, not much.

Stephen E Arnold, July 11, 2024

A Digital Walden Despond: Life without Social Media

July 11, 2024

Here is a refreshing post from Deep Work Culture and More about the author’s shift to an existence mostly offline, where he discovered … actual life. Upon marking one year without Facebook, Instagram, or Twitter / X, the blogger describes “Rediscovering Time and Relationships: The Impact of Quitting Social Media.” After a brief period of withdrawal, he learned to put his newly freed time and attention to good use. He writes:

“Hours previously lost to mindless scrolling were now available for activities that brought genuine enrichment. I rediscovered the joy of uninterrupted reading, long walks, and deep conversations. This newfound time became a fertile ground for hobbies that had languished in the shadows of digital distractions. The absence of the incessant need to document and share every moment of my life allowed me to be fully present in my experiences.”

Imagine that. The author states that more time for reflection and self-discovery, as well as abandoning the chase for likes and comments, provided clarity and opportunities for personal growth. He even rediscovered his love of books. He considers:

“Without the constant distractions of social media, I found myself turning to books more frequently and with greater enthusiasm. … My recent literary journey has been instrumental in fostering a deeper sense of empathy and curiosity, encouraging me to view the world through varied lenses and enhancing my overall cognitive and emotional well-being. Additionally, reading more has cultivated a more reflective mindset, allowing me to draw connections between my personal experiences and broader human themes. This has translated into a more nuanced approach to both my professional endeavors and personal relationships, as the wisdom gleaned from books has informed my decision-making, problem-solving, and communication skills.”

Enticing, is it not? Strangely, this freedom, time, and depth of experience are available to any of us. All we have to do is log out of social media once and for all. Are you ready, dear reader? Find a walled-in despond.

Cynthia Murrell, July 11, 2024

Oxygen: Keep the Bait Alive for AI Revenue

July 10, 2024

Andreessen Horowitz published “Who Owns the Generative AI Platform?” in January 2023. The rah-rah appeared almost at the same time as the Microsoft OpenAI deal marketing coup.  In that essay, the venture firm and publishing firm stated this about AI: 

…there is enough early data to suggest massive transformation is taking place. What we don’t know, and what has now become the critical question, is: Where in this market will value accrue?

Now a partial answer is emerging. 

The Information, an online information service with a paywall, revealed “Andreessen Horowitz Is Building a Stash of More Than 20,000 GPUs to Win AI Deals.” That report asserts:

The firm has secured thousands of AI chips, including Nvidia H100 graphics processing units, and is renting them to portfolio companies, according to a person who has discussed the initiative with the firm’s partners…. Andreessen Horowitz has told startup founders the initiative is called “oxygen.”

The initiative reflects what might be a way to hook promising AI outfits and plop them into the firm’s large foldable floating fish basket for live-caught gill-bearing vertebrate animals, sometimes called chum.

This factoid emerges shortly after a big Silicon Valley venture outfit raved about the oodles of opportunity AI represents. Plus, reports of Blue Chip consulting firms’ through-the-roof AI consulting revenue have encouraged a couple of the big outfits to offer AI services. In addition to opining and advising, the consulting firms are moving aggressively into the AI implementing and operating business.

The morphing of a venture firm into a broker of GPU cycles complements the thinking-for-money firms’ shifting gears to a more hands-on approach.

There are several implications from my point of view:

  • The fastest way to make money from the AI frenzy is to charge people so they can “do” AI
  • Without a clear revenue stream of sufficient magnitude to foot the bill for the rather hefty costs of “doing” AI with a chance of making cash, selling blue jeans to the miners makes sense. But changing business tactics can add an element of spice to an unfamiliar restaurant’s special of the day
  • The move from passive (thinking and waiting) to a more active (doing and charging for hardware and services) brings a different management challenge to the companies making the shift.

These factors suggest that the best way to cash in on AI is to provide what Andreessen Horowitz calls oxygen. It is a clear indication that the AI fish will die without some aggressive intervention. 

I am a dinobaby, sitting in my rocker on the porch of the rest home watching the youngsters scramble to make money from what was supposed to be a sure-fire winner. What we know from watching those lemonade stand operators is that success is often difficult to achieve. The grade school kids setting up shop in a subdivision where heat and fatigue take their toll give up and go inside where the air is cool and TikTok waits.

Net net: The Andreessen Horowitz revelation is one more indication that the costs of AI and the difficulty of generating sufficient revenue are starting to hit home. Therefore, advisors’ thoughts seem to be turning to actions designed to produce cash, magnetism, and success. Will the efforts produce the big payoffs? I wonder whether these tactical plays are brilliant moves or another neighborhood lemonade stand.

Stephen E Arnold, July 10, 2024

Microsoft Security: Big and Money Explain Some Things

July 10, 2024

I am heading out for a couple of days. I spotted this story in my newsfeed: “The President Ordered a Board to Probe a Massive Russian Cyberattack. It Never Did.” The main point of the write up, in my opinion, is captured in this statement:

The tech company’s failure to act reflected a corporate culture that prioritized profit over security and left the U.S. government vulnerable, a whistleblower said.

But there is another issue in the write up. I think it is:

The president issued an executive order establishing the Cyber Safety Review Board in May 2021 and ordered it to start work by reviewing the SolarWinds attack. But for reasons that experts say remain unclear, that never happened.

The one-two punch may help explain why some in other countries do not trust Microsoft, the US government, and the cultural forces in the US of A.

Let’s think about these three issues briefly.

A group of tomorrow’s leaders responding to their teacher’s request to pay attention and do what she is asking. One student expresses the group’s viewpoint. Thanks, MSFT Copilot. How’s the Recall today? What about those iPhones Mr. Ballmer disdained?

First, large technology companies use the word “trust”; for example, Microsoft apparently does not trust Android devices. On the other hand, China does not have trust in some Microsoft products. Can one trust Microsoft’s security methods? For some, trust has become a bit like artificial intelligence. The words do not mean much of anything.

Second, Microsoft, like other big outfits, needs big money. The easiest way to free up money is to not spend it. One can talk about investing in security and making security Job One. The reality is that talk is cheap. Cutting corners seems to be a popular concept in some corporate circles. One recent example is Boeing dodging trials with a deal. Why? Money maybe?

Third, the committee charged with looking into SolarWinds did not. For a couple of years after the breach became known, my SolarWinds misstep analysis was popular among some cyber investigators. I was one of the few people reviewing the “misstep.”

Okay, enough thinking.

The SolarWinds matter, the push for money and more money, and the failure of a committee to do what it was explicitly asked to do three times suggest:

  1. A need for enforcement with teeth and consequences is warranted
  2. Tougher procurement policies are necessary with parallel restrictions on lobbying which one of my clients called “the real business of Washington”
  3. Ostracism of those who do not follow requests from the White House or designated senior officials.

Enough of this high-vulnerability decision making. The problem is that, as I have witnessed in my decades of work in Washington, the system births, abets, and provides the environment for doing what is often the “wrong” thing.

There you go.

Stephen E Arnold, July 10, 2024

Market Research Shortcut: Fake Users Creating Fake Data

July 10, 2024

Market research can be complex and time consuming. It would save so much time if one could consolidate thousands of potential respondents into one model. A young AI firm offers exactly that, we learn from Nielsen Norman Group’s article, “Synthetic Users: If, When, and How to Use AI-Generated ‘Research.’”

But are the results accurate? Not so much, according to writers Maria Rosala and Kate Moran. The pair tested fake users from the young firm Synthetic Users and ones they created using ChatGPT. They compared responses to sample questions from both real and fake humans. Each group gave markedly different responses. The write-up notes:

“The large discrepancy between what real and synthetic users told us in these two examples is due to two factors:

  • Human behavior is complex and context-dependent. Synthetic users miss this complexity. The synthetic users generated across multiple studies seem one-dimensional. They feel like a flat approximation of the experiences of tens of thousands of people, because they are.
  • Responses are based on training data that you can’t control. Even though there may be proof that something is good for you, it doesn’t mean that you’ll use it. In the discussion-forum example, there’s a lot of academic literature on the benefits of discussion forums on online learning and it is possible that the AI has based its response on it. However, that does not make it an accurate representation of real humans who use those products.”

That seems obvious to us, but apparently some people need to be told. The lure of fast and easy results is strong. See the article for more observations. Here are a couple worth noting:

“Real people care about some things more than others. Synthetic users seem to care about everything. This is not helpful for feature prioritization or persona creation. In addition, the factors are too shallow to be useful.”

Also:

“Some UX [user experience] and product professionals are turning to synthetic users to validate product concepts or solution ideas. Synthetic Users offers the ability to run a concept test: you describe a potential solution and have your synthetic users respond to it. This is incredibly risky. (Validating concepts in this way is risky even with human participants, but even worse with AI.) Since AI loves to please, every idea is often seen as a good one.”

So as appealing as this shortcut may be, it is a fast track to incorrect results. Basing business decisions on “insights” from shallow, eager-to-please algorithms is unwise. The authors interviewed Synthetic Users’ cofounder Hugo Alves. He acknowledged the tools should only be used as a supplement to surveys of actual humans. However, the post points out, the company’s website seems to imply otherwise: it promises “User research. Without the users.” That is misleading, at best.
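
For readers curious what the generic technique boils down to mechanically, here is a hypothetical sketch: one persona prompt wrapped around one general-purpose model. The model name, prompt wording, and persona are my illustrative assumptions, not Synthetic Users’ actual pipeline.

```typescript
// Hypothetical sketch of a generic "synthetic user": role-play a
// persona with an LLM and ask it a research question. Whatever comes
// back is an average over training data, not the voice of a real,
// context-bound human, which is the flatness the researchers measured.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function askSyntheticUser(persona: string, question: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // illustrative model choice
    messages: [
      { role: "system", content: `You are ${persona}. Answer interview questions in the first person.` },
      { role: "user", content: question },
    ],
  });
  return response.choices[0].message.content ?? "";
}

askSyntheticUser(
  "a 34-year-old nurse who takes online courses at night",
  "How do discussion forums affect your learning?",
).then(console.log);
```

One prompt, one model, thousands of “respondents”: the shortcut is exactly as shallow as it looks.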

Cynthia Murrell, July 10, 2024

TV Pursues Nichification or 1 + 1 = Barrels of Money

July 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

What does an organization with a huge market, like the Boy Scouts and the Girl Scouts, do to remain relevant and have enough money to pay the overhead and salaries of the top dogs? They merge.

What does an old-school talking heads television channel do to remain relevant and have enough money to pay the overhead and salaries of the top dogs? They create niches.

A cheese maker who can’t sell his cheddar does some MBA-type thinking. Will his niche play work? Thanks, MSFT Copilot. How’s that Windows 11 update doing today?

Which path is the optimal one? I certainly don’t have a definitive answer. But if each “niche” is a new product, I remember hearing that the failure rate was of sufficient magnitude to make me think in terms of a regular job. Call me risk averse, but I prefer the rational dinobaby moniker, thank you.

“CNBC Launches Sports Vertical amid Broader Biz Shift” reports with “real” news seriousness:

The idea is to give sports business executives insights and reporting about sports similar to the data and analysis CNBC provides to financial professionals, CNBC President KC Sullivan said in a statement.

I admit. I am not a sports enthusiast. I know some people who are, but their love of sport is defined by gambling, gambling and drinking at the 19th hole, and dressing up in Little League outfits and hitting softballs in the Harrod’s Creek Park. Exciting.

The write up held one differentiator from the other seemingly endless sports programs like those featuring Pat McAfee-type personalities. Here’s the pivot upon which the nichification turns:

The idea is to give sports business executives insights and reporting about sports similar to the data and analysis CNBC provides to financial professionals…

Imagine the legions of viewers who are interested in dropping billions on a major sports franchise. For me, it is easier to visualize sports betting. One benefit of gambling is that it provides a source of “addicts” for rehabilitation centers.

I liked the wrap up for the article. Here it is:

Between the lines: CNBC has already been investing in live coverage of sports, and will double down as part of the new strategy.

  • CNBC produces an annual business of sports conference, Game Plan, in partnership with Boardroom.
  • Andrew Ross Sorkin, Carl Quintanilla and others will host coverage from the 2024 Olympic Games in Paris this summer.

Zoom out: Cable news companies are scrambling to reimagine their businesses for a digital future.

  • CNBC already sells digital subscriptions that include access to its live TV feed.
  • In the future, it could charge professionals for niche insights around specific verticals, or beats.

Okay, I like the double down, a gambling term. I like the conference angle, but the named entities do not resonate with me. I am a dinobaby, and nichification does not strike me as a sensible tactic for an outfit whose eyeballs are going elsewhere. The subscription idea is common. Isn’t there something called “subscription fatigue”? And the plan to charge for access to a sports portal is an interesting one. But if one has 1,000 people looking at content, the number who subscribe seems, based on my experience, to be in the less-than-one-to-two-percent range; that works out to 10 to 20 subscribers at best.

But what do I know? I am a dinobaby and I know about TikTok and other short form programming. Maybe that’s old hat too? Did CNBC talk to influencers?

Stephen E Arnold, July 10, 2024
