AI Hermeneutics: The Fire Fights of Interpretation Flame

March 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My hunch is that not too many of the thumb-typing, TikTok generation know what hermeneutics means. Furthermore, like most of their parents, these future masters of the phone-iverse don’t care. “Let software think for me” would make a nifty T shirt slogan at a technology conference.

This morning (March 12, 2024) I read three quite different write ups. Let me highlight each and then link the content of those documents to the problem of interpretation of religious texts.


Thanks, MSFT Copilot. I am confident your security team is up to this task.

The first write up is a news story called “Elon Musk’s AI to Open Source Grok This Week.” The main point for me is that Mr. Musk will put the label “open source” on his Grok artificial intelligence software. The write up includes an interesting quote; to wit:

Musk further adds that the whole idea of him founding OpenAI was about open sourcing AI. He highlighted his discussion with Larry Page, the former CEO of Google, who was Musk’s friend then. “I sat in his house and talked about AI safety, and Larry did not care about AI safety at all.”

The implication is that Mr. Musk does care about safety. Okay, let’s accept that.

The second story is an ArXiv paper called “Stealing Part of a Production Language Model.” The authors are nine Googlers, two ETH wizards, one University of Washington professor, one OpenAI researcher, and one McGill University smart software luminary. In short, the big outfits are making clear that closed or open, software is rising to the task of revealing some of the inner workings of these “next big things.” The paper states:

We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2…. For under $20 USD, our attack extracts the entire projection matrix of OpenAI’s ada and babbage language models.
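The mechanics behind that claim are worth a pause. Because the final layer of a transformer projects a hidden vector of modest size h into a much larger vocabulary-sized logit vector, every logit vector the API returns lives in an h-dimensional subspace, and linear algebra on enough of them exposes that structure. Here is a toy simulation of the idea in Python, not the authors’ attack code; the model, the dimensions, and the pretend “API” are invented for illustration:

```python
import numpy as np

# Toy stand-in for a production model's final layer: a hidden size h
# projected into a vocabulary of size V by matrix W (V x h).
# These numbers are invented for illustration only.
rng = np.random.default_rng(0)
V, h = 5000, 64
W = rng.normal(size=(V, h))

def query_logits(_prompt: int) -> np.ndarray:
    """Pretend API call: returns the full logit vector for one prompt."""
    hidden = rng.normal(size=h)   # unknown internal activation
    return W @ hidden             # logits observed by the attacker

# Attacker collects logit vectors for many prompts and stacks them.
n_queries = 200
L = np.stack([query_logits(i) for i in range(n_queries)])  # (n_queries, V)

# The logits span only an h-dimensional subspace of R^V, so the numerical
# rank of the stacked matrix reveals the model's hidden dimension.
singular_values = np.linalg.svd(L, compute_uv=False)
estimated_h = int((singular_values > 1e-6 * singular_values[0]).sum())
print(f"true hidden size: {h}, estimated from logits: {estimated_h}")

# The top right-singular vectors of L span the column space of W, which
# recovers the projection matrix up to an invertible transformation.
```

The attack described in the paper has to work with the limited logit access production APIs actually expose, which takes more care, but the subspace observation above is its kernel.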

The third item is “How Do Neural Networks Learn? A Mathematical Formula Explains How They Detect Relevant Patterns.” The main idea of this write up is that software can perform an X-ray type analysis of a black box and present some useful data about the inner workings of numerical recipes about which many AI “experts” feign total ignorance.
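The write up does not print the formula, but one widely discussed candidate for this sort of X-ray is the average gradient outer product: average, over many inputs, the outer product of the network’s input gradient with itself, and the dominant directions reveal which features the model actually uses. A minimal numpy sketch, assuming that reading and using an invented two-layer network in which only the first two input features matter:

```python
import numpy as np

rng = np.random.default_rng(1)
d, hidden = 10, 32

# Invented toy network: only the first two input features carry weight.
W1 = np.zeros((hidden, d))
W1[:, :2] = rng.normal(size=(hidden, 2))
w2 = rng.normal(size=hidden)

def f(x):
    """One-hidden-layer ReLU network."""
    return w2 @ np.maximum(W1 @ x, 0.0)

def input_gradient(x):
    """Analytic gradient of f with respect to the input x."""
    active = (W1 @ x > 0).astype(float)
    return W1.T @ (active * w2)

# Average gradient outer product over a batch of random probe inputs.
X = rng.normal(size=(500, d))
G = np.mean([np.outer(g, g) for g in (input_gradient(x) for x in X)], axis=0)

# The diagonal (or the top eigenvectors) of G highlights the features the
# network is sensitive to -- here, features 0 and 1 dominate.
print(np.round(np.diag(G), 2))
```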

Several observations:

  1. Open source software is available to download largely without encumbrances. Good actors and bad actors can use this software and its components to let users put on a happy face or bedevil the world’s cyber security experts. Either way, smart software is out of the bag.
  2. In the event that someone or some organization has secrets buried in its software, those secrets can be exposed. Once the secret is known, the good actors and the bad actors can surf on that information.
  3. The notion of an attack surface for smart software now includes the numerical recipes and the model itself. Toss in data poisoning (a toy example appears after this list), and vulnerability must be recast from a specific attack to a much larger class of exploitation.
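Data poisoning, for the record, requires no sophistication to demonstrate. The sketch below is a deliberately crude, invented example: flip a slice of training labels and watch a scikit-learn classifier degrade. Real poisoning is subtler, but the mechanism is the same.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented toy dataset; the point is the mechanism, not the numbers.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poison(fraction: float) -> float:
    """Flip `fraction` of the training labels and report test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), int(fraction * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"poisoned fraction {frac:.0%}: test accuracy {accuracy_with_poison(frac):.3f}")
```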

Net net: I assume the many committees, NGOs, and government entities discussing AI have considered these points and incorporated these articles into informed policies. In the meantime, the AI parade continues to attract participants. Who has time to fool around with the hermeneutics of smart software?

Stephen E Arnold, March 12, 2024

Microsoft and Security: A Rerun with the Same Worn-Out Script

March 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Marvel cinematic universe has spawned two dozen sequels. Microsoft’s security circus features are moving up fast in the reprise business. Unfortunately, there is no superhero who comes to the rescue of the giant American firm. The villains in these big screen stunners are a bit like those in the James Bond films. Microsoft seems to prefer to wrestle with the allegedly Russian Cozy Bear or at least convert a cartoon animal into the personification of evil.


Thanks, MSFT, you have nailed security theater and reruns of the same tired story.

What’s interesting about these security blockbusters is that each follows a Hollywood-style “you’ve seen this before, nudge nudge” approach to the entertainment. The sequence is a belated announcement that Microsoft security has been breached. The evil bad actors have stolen data, corrupted software, and by brute force foiled the norm cores in Microsoft World. Then come announcements about fixes the Microsoft customer must implement, along with admonitions to keep that MSFT software updated and warnings about using “old” computers, etc.

“Russian Hackers Accessed Microsoft Source Code” is the equivalent of a New York Times film review. The write up reports:

In January, Microsoft disclosed that Russian hackers had breached the company’s systems and managed to read emails belonging to senior executives. Now, the company has revealed that the breach was worse than initially understood and that the Russian hackers accessed Microsoft source code. Friday’s revelation — made in a blog post and a filing with the Securities and Exchange Commission — is the latest in a string of breaches affecting the company that have raised major questions in Washington about Microsoft’s security posture.

Well, that’s harsh. No mention of the estimable alleged monopoly’s releasing the information on March 7, 2024. I am capturing my thoughts on March 8, 2024. But with college basketball moving toward tournament time, who cares? I am not really sure anymore. And Washington? Does the name evoke a person, a committee, a committee consisting of the heads of security committees, someone in the White House, an “expert” at the suddenly famous National Bureau of Standards, or absolutely no one?

The write up asserts:

The company is concerned, however, that “Midnight Blizzard is attempting to use secrets of different types it has found,” including in emails between customers and Microsoft. “As we discover them in our exfiltrated email, we have been and are reaching out to these customers to assist them in taking mitigating measures,” the company said in its blog post. The company describes the incident as an example of “what has become more broadly an unprecedented global threat landscape, especially in terms of sophisticated nation-state attacks.” In response, the company has said it is increasing the resources and attention devoted to securing its systems.

Microsoft is “reaching out.” I can reach for a donut, but I do not grasp it and gobble it down. “Reach” is not the same as fixing the problems Microsoft caused.

Several observations:

  1. Microsoft is an alleged monopoly, and it is allowing its digital trains to set fire to the fields, homes, and businesses which have to use its tracks. Isn’t it time for purposeful action from the US government agencies with direct responsibility for cyber security and appropriate business conduct?
  2. Can Microsoft remediate its problems? My answer is, “No.” Vulnerabilities are engineered in because no one has the time, energy, or interest to chase down problems and fix them. There is an ageing programmer named Steve Gibson. His approach to software is the exact opposite of Microsoft’s. Mr. Gibson will never be a trillion dollar operation, but his software works. Perhaps Microsoft should consider adopting some of Mr. Gibson’s methods.
  3. Customers have to take a close look at the security breaches endlessly reported by cyber security companies. Some outfits’ software is on the list most of the time. Other companies’ software is an infrequent visitor to these breach parties. Is it time for customers to be looking for an alternative to what Microsoft provides?

Net net: A new security release will be coming to the computer near you. Don’t fail to miss it.

Stephen E Arnold, March 12, 2024


NSO Group: Pegasus Code Wings Its Way to Meta and Mr. Zuckerberg

March 7, 2024

This essay is the work of a dumb dinobaby. No smart software required.

NSO Group’s senior managers and legal eagles will have an opportunity to become familiar with an okay Brazilian restaurant and a waffle shop. That lovable leader of Facebook, Instagram, Threads, and WhatsApp may have put a stick in the now-ageing digital bicycle doing business as NSO Group. The company’s mark is Pegasus, which is a flying horse. Pegasus’s dad was Poseidon, and his mom was the knockout Gorgon Medusa, who did some innovative hair treatments. The mythical Pegasus helped out other gods until Zeus stepped in and acted with extreme prejudice. Quite a myth.


Poseidon decides to kill the mythical Pegasus, not for its software, but for its getting out of bounds. Thanks, MSFT Copilot. Close enough.

Life imitates myth. “Court Orders Maker of Pegasus Spyware to Hand Over Code to WhatsApp” reports that the hand over decision:

is a major legal victory for WhatsApp, the Meta-owned communication app which has been embroiled in a lawsuit against NSO since 2019, when it alleged that the Israeli company’s spyware had been used against 1,400 WhatsApp users over a two-week period. NSO’s Pegasus code, and code for other surveillance products it sells, is seen as a closely and highly sought state secret. NSO is closely regulated by the Israeli ministry of defense, which must review and approve the sale of all licenses to foreign governments.

NSO Group hired former DHS and NSA official Stewart Baker to fix up NSO Group’s gyro compass. Mr. Baker is a podcaster and is affiliated with the law firm Steptoe and Johnson. For more color about Mr. Baker, please scan “Former DHS/NSA Official Stewart Baker Decides He Can Help NSO Group Turn A Profit.”

A decade ago, Israel’s senior officials might have been able to prevent a social media company from getting a copy of the Pegasus source code. Not anymore. Israel’s home-grown intelware technology simply did not thwart, prevent, or warn about the Hamas attack in the autumn of 2023. If NSO Group were battling in court with the likes of Harris Corp. or Textron, I would not worry. Mr. Zuckerberg’s companies are not directly involved with national security technology. From what I have heard at conferences, Mr. Zuckerberg’s commercial enterprises are responsive to law enforcement requests when a bad actor uses Facebook for an allegedly illegal activity. But Mr. Zuckerberg’s managers are really busy with higher priority tasks. Some folks engaged in investigations of serious crimes must be patient. Presumably the investigators can pass their time scrolling through #Shorts. If the Guardian’s article is accurate, now those Facebook employees can learn how Pegasus works. Will any of those learnings stick? One hopes not.

Several observations:

  1. Companies which make specialized software guard their systems and methods carefully. Well, that used to be true.
  2. The reorganization of NSO Group has not lowered the firm’s public relations profile. NSO Group can make headlines, which may not be desirable for those engaged in national security.
  3. Disclosure of the specific Pegasus systems and methods will get a warm, enthusiastic reception from those who exchange ideas for malware and related tools on private Telegram channels, Dark Web discussion groups, or via one of the “stealth” communication services which pop up like mushrooms after rain in rural Kentucky.

Will the software Pegasus be terminated? I remain concerned that source code revealing how to perform certain tasks may lead to downstream, unintended consequences. Specialized software companies try to operate with maximum security. Now Pegasus may be flying away unless another legal action prevents this.

Where is Zeus when one needs him?

Stephen E Arnold, March 7, 2024

Security Debt: So Just Be a Responsible User / Developer

February 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Security appears to be one of the next big things. Smart software strapped onto cyber safeguard systems is a no-lose proposition for vendors. Does it matter that bolted-on AI may not work? Nope. The important point is to ride the opportunity wave.

What’s interesting is that security is becoming a topic discussed at 75-something bridge groups and at lunch gatherings in government agencies concerned about fish and trees. Can third-party security services, grandmothers chasing a grand slam, or an expert in river fowl address security problems? I would suggest that the idea that security is the user’s responsibility is an interesting way to dodge responsibility. The estimable 23andMe tried this play, and I am not too sure that it worked.


Can security debt become the invisible hand creating opportunities for bad actors? Has the young executive reached the point of no return for a personal debt crisis? Thanks, MSFT Copilot Bing, for a good enough illustration.

Who can address the security issues in the software people and organizations use today? “Why Software Security Debt Is Becoming a Serious Problem for Developers” states:

Over 70% of organizations have software containing flaws that have remained unfixed for longer than a year, constituting security debt,

Plus, the article asserts:

46% of organizations were found to have persistent, high-severity flaws that went unaddressed for over a year

Security issues exist. But the question is, “Who will address these flaws, gaps, and mistakes?”

The article cites an expert who opines:

“The further that you shift [security testing] to the developer’s desktop and have them see it as early as possible so they can fix it, the better, because number one it’s going to help them understand the issue more and [number two] it’s going to build the habits around avoiding it.”
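Wiring that “shift left” advice into a developer’s day can be as mundane as a pre-commit hook that runs a static analysis scan and refuses the commit when it finds problems. A minimal sketch, assuming the open source bandit scanner is installed and the code lives under a src directory; any scanner with an exit code would do:

```python
#!/usr/bin/env python3
"""Toy pre-commit hook: block the commit if the security scanner objects."""
import subprocess
import sys

# bandit exits non-zero when it reports issues; -q keeps the output terse.
result = subprocess.run(["bandit", "-r", "src", "-q"], capture_output=True, text=True)

if result.returncode != 0:
    print("Security scan failed -- fix the findings before committing:")
    print(result.stdout or result.stderr)
    sys.exit(1)   # a non-zero exit aborts the git commit

print("Security scan clean.")
```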

But who is going to fix the security problems?

In-house developers may not have the expertise or the access to the uncompiled code needed to identify and remediate flaws. Open source and other third-party software can change without notice, whether that serves its maintainers or the bad actors manipulating open source packages and “approved” apps available from a large technology company’s online store.

The article offers a number of suggestions, but none of these strike me as practical for some or most organizations.

Here’s the problem: Security is not a priority until a problem surfaces. Then when a problem becomes known, the delay between compromise, discovery, and public announcement can be — let’s be gentle — significant. Once a cyber security vendor “discovers” the problem or learns about it from a customer who calls and asks, “What has happened?”, the PR machines grind into action.

The “fixes” are typically rush jobs for these reasons:

  1. The vendor and the developer who made the zero a one do not earn money by fixing old code. Another factor is that the person or team responsible for the misstep is long gone, working as an Uber driver, or sitting in a rocking chair in a warehouse for the elderly.
  2. The complexity of “going back” and making a fix may create other problems. Because the dependencies are unknown, a fix just creates more problems. Writing a shim or wrapper code (see the sketch after this list) may be good enough to get the angry dogs to calm down and stop barking.
  3. The security flaw may be unfixable; that is, the original approach includes, and may even depend on, flaws accepted for performance, expediency, or some quite revenue-centric reason. No one wants to rebuild a Pinto that explodes in a rear-end collision. Let the lawyers deal with it. When it comes to code, lawyers are definitely equipped to resolve security problems.
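For readers who have not met one, the “shim or wrapper” in item 2 usually looks like the fragment below: an invented example in which a legacy routine nobody dares touch gets a validating wrapper bolted on in front of it. The underlying flaw stays put; the wrapper merely narrows the ways callers can reach it.

```python
import shlex
import subprocess

def legacy_run(command: str) -> str:
    """Old code nobody wants to rewrite: runs a shell command as given."""
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

ALLOWED = {"df", "uptime", "whoami"}   # invented allow-list for illustration

def safe_run(command: str) -> str:
    """Shim: validate the input, then delegate to the untouched legacy code."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED or len(parts) > 1:
        raise ValueError(f"command not permitted: {command!r}")
    return legacy_run(parts[0])
```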

The write up contains a number of statistics, but it makes one major point:

Security debt is mounting.

Like a young worker who lives by moving credit card debt from vendor to vendor, getting out of the debt hole may be almost impossible. But, hey, it is that individual’s responsibility, not the system’s. Just be responsible. That is easy to say, and it strikes me as somewhat hollow.

Stephen E Arnold, February 15, 2024

Ho-Hum Write Up with Some Golden Nuggets

January 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Anthropic Confirms It Suffered a Data Leak.” I know. I know. Another security breach involving an outfit working with the Bezos bulldozer and Googzilla. Snore. But in the write up, tucked away were a couple of statements I found interesting.


“Hey, pardner, I found an inconsistency.” Two tries for a prospector and a horse. Good enough, MSFT Copilot Bing thing. I won’t ask about your secure email.

Here these items are:

  1. Microsoft, Amazon and others are being asked by a US government agency “to provide agreements and rationale for collaborations and their implications; analysis of competitive impact; and information on any other government entities requesting information or performing investigations.” Regulatory scrutiny of the techno feudal champions?
  2. The write up asserts: “Anthropic has made a “long-term commitment” to provide AWS customers with “future generations” of its models through Amazon Bedrock, and will allow them early access to unique features for model customization and fine-tuning purposes.” Love at first sight?
  3. And a fascinating quote from a Googler. Note: I have put in bold some key words which I found interesting:

“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” Google Cloud CEO Thomas Kurian said in a statement on their relationship. “This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”

Yeah, but the article is called “Anthropic Confirms It Suffered a Data Leak.” What’s with the securely?

Ah, regulatory scrutiny and obvious inconsistency. Ho-hum with a good enough tossed in for spice.

Stephen E Arnold, January 30, 2024

Apple, Now Number One, But Maybe Not in Mobile Security?

January 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

MIT Professor Stuart E. Madnick allegedly discovered that iPhone data breaches tripled between 2013 and 2022. VentureBeat explains more in the article “Why Attackers Love To Target Misconfigured Clouds And Phones.”

Hackers use every method to benefit from misconfiguration, but ransomware is their favorite technique. Madnick discovered a near 50% increase in ransomware attacks on organizations in the first six months of 2023 compared to 2022. After breaching an organization, hackers then attack its mobile phone fleets. They freeze all communications until the ransom is paid.

Bad actors want to find the easiest ways into clouds. Unfortunately, organizations that do not monitor their networks remain unaware that attacks are happening:

Merritt Baer, Field CISO at Lacework, says that bad actors look first for an easy front door into misconfigured clouds and into the identities that grant access to entire fleets of mobile devices. “Novel exploits (zero-days) or even new uses of existing exploits are expensive to research and discover. Why burn an expensive zero-day when you don’t need to? Most bad actors can find a way in through the ‘front door’ – that is, using legitimate credentials (in unauthorized ways).”

Baer added, “This avenue works because most permissions are overprovisioned (they aren’t pruned down/least privileged as much as they could be), and because with legitimate credentials, it’s hard to tell which calls are authorized/done by a real user versus malicious/done by a bad actor.”

Almost 99% of cloud security breaches are due to incorrectly set manual controls. Also, nearly 50% of organizations unintentionally exposed storage, APIs, network segments, and applications. These breaches cost an average of $4 million to solve.
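Many of those incorrectly set manual controls are as mundane as a storage bucket left world-readable, and checking for that takes a few lines. A minimal sketch using boto3, assuming AWS credentials are already configured in the environment; other clouds expose equivalent calls:

```python
import boto3

# Grantee URIs that mean "everyone" or "any authenticated AWS user".
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    exposed = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS
    ]
    if exposed:
        print(f"{name}: publicly accessible ({', '.join(exposed)})")
```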

Organizations need to rely on more than encryption to protect their infrastructures. Most attacks occur because bad actors use authentic credentials. Unified endpoint management, passwordless multi-factor authentication, and mobile device management housed on a single platform are the best defense.

How about these possibly true revelations about Apple?

Whitney Grace, January 26, 2024

Cyber Security Investing: A Money Pit?

January 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Cyber security is a winner, a sure-fire way to take home the big bucks. Slam dunk. But the write up “Cybersecurity Startup Funding Hits 5-Year Low, Drops 50% from 2022” may signal that some money people have a fear of what might be called a money pit. The write up states:

In 2023, cyber startups saw only about a third of that, as venture funding dipped to its lowest total since 2018. Security companies raised $8.2 billion in 692 venture capital deals last year — per Crunchbase numbers — compared to $16.3 billion in 941 deals in 2022.


Have investors in cyber security changed their view of a slam-dunk investment? That winning hoop now looks like a stinking money pit perhaps? Thanks, MSFT Copilot Bing thing with security to boot. Good enough.

Let’s believe these data which are close enough for horseshoes. I also noted this passage:

“What we saw in terms of cybersecurity funding in 2023 were the ramifications of the exceptional surge of 2021, with bloated valuations and off-the-charts funding rounds, as well as the wariness of investors in light of market conditions,” said Ofer Schreiber, senior partner and head of the Israel office for cyber venture firm YL Ventures.

The reference to Israel is bittersweet. The Israeli cyber defenses failed to detect, alert, and thus protect those who were in harm’s way in October 2023. How, you might ask, could that happen when Israel is the go-to innovator in cyber security? Maybe the over-hyped, super-duper, AI-infused systems don’t work as well as the marketers’ promotional material asserts? Just a thought.

My views:

  1. Cyber security is difficult; for instance, Microsoft’s announcement that the Son of SolarWinds has shown up inside the Softies’ email
  2. Bad actors can use AI faster than cyber security firms can — and make the smart software avoid being dumb
  3. Cyber security requires ever-increasing investments because the cat-and-mouse game between good actors and bad actors is a variant of the cheerful 1950s’ arms race.

Do you feel secure with your mobile, your laptop, and your other computing devices? Do you scan QR codes in restaurants without wondering if the code is sandbagged? Are you an avid downloader? I don’t want to know, but you may want answers.

Stephen E Arnold, January 22, 2024

Microsoft Security: Are the Doors Falling Off?

January 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

“Microsoft Network Breached Through Password-Spraying by Russian-State Hackers” begs to be set to music. I am thinking about Chubby Checker and his hit “Let’s Twist Again.” One lyric change. Twist becomes “hacked.” So “let’s hack again like we did last summer.” Hit?


A Seattle-based quality and security engineer finds that his automobile door has fallen off. Its security system is silent. It must be the weather. Thanks, MSFT second class Copilot Bing thing. Good enough but the extra wheel is an unusual and creative touch.

The write up states:

Russia-state hackers exploited a weak password to compromise Microsoft’s corporate network and accessed emails and documents that belonged to senior executives and employees working in security and legal teams, Microsoft said [on January 19, 2024]. The attack, which Microsoft attributed to a Kremlin-backed hacking group it tracks as Midnight Blizzard, is at least the second time in as many years that failures to follow basic security hygiene has resulted in a breach that has the potential to harm customers.

The Ars Technica story noted:

A Microsoft representative said the company declined to answer questions, including whether basic security practices were followed.

Who did this? One of the Axis of Evil perhaps. Why hack Microsoft? Because it is a big, juicy target? Were the methods sophisticated, using artificial intelligence to outmaneuver state-of-the-art MSFT cyber defenses? Nope. It took seven weeks to detect the password guessing tactic.
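Password spraying leaves a recognizable fingerprint: a handful of failed logins against many different accounts from the same source, paced to dodge per-account lockouts. A minimal sketch of the kind of check that should not take seven weeks, using an invented log format:

```python
from collections import defaultdict

# Invented log format: (source_ip, username, success) per login attempt.
attempts = [
    ("203.0.113.7", "alice", False),
    ("203.0.113.7", "bob", False),
    ("203.0.113.7", "carol", False),
    ("203.0.113.7", "dave", False),
    ("198.51.100.2", "alice", True),
]

def flag_spraying(log, min_accounts: int = 3) -> list[str]:
    """Flag source IPs with failed logins across many distinct accounts."""
    failed_accounts = defaultdict(set)
    for ip, user, success in log:
        if not success:
            failed_accounts[ip].add(user)
    return [ip for ip, users in failed_accounts.items() if len(users) >= min_accounts]

print(flag_spraying(attempts))   # ['203.0.113.7']
```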

Did you ever wonder why doors fall off Seattle-linked aircraft and security breaches occur at Seattle’s big software outfit? A desire for profits, laziness, indifference, or some other factor is causing these rather high-profile issues. It must be the Seattle water or the rain. That’s it. The rain! No senior manager can do anything about the rain. Perhaps a solar wind will blow and make everything better?

Stephen E Arnold, January 22, 2024

Stretchy Security and Flexible Explanations from SEC and X

January 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Gizmodo presented an interesting write up about an alleged security issue involving the US Securities & Exchange Commission. Is this an important agency? I don’t know. “X Confirms SEC Hack, Says Account Didn’t Have 2FA Turned On” states:

Turns out that the SEC’s X account was hacked, partially because it neglected a very basic rule of online security.


“Well, Pa, that new security fence does not seem too secure to me,” observes the farmer’s wife. Flexibility and security with give are not the optimal ways to protect the green. Thanks, MSFT Copilot Bing thing. Four tries and something good enough. Yes!

X.com — now known by some as the former Twitter or the Fail Whale outfit — puts the blame on the US SEC. That’s a familiar tactic in Silicon Valley. The users are at fault. Some people believe Google’s incognito mode is secret, and others assume that Apple iPhones do not have a backdoor. Wow, I believe these companies, don’t you?

The article reports:

[The] hacking episode temporarily threw the web3 community into chaos after the SEC’s compromised account made a post falsely claiming that the SEC had approved the much anticipated Bitcoin ETFs that the crypto world has been obsessed with of late. The claims also briefly sent Bitcoin on a wild ride, as the asset shot up in value temporarily, before crashing back down when it became apparent the news was fake.

My question is, “How stretchy and flexible are security systems available from outfits like Twitter (now X)?” Another question is, “How secure are government agencies?”

The apparent answer is, “Good enough.” That’s the high water mark in today’s world. Excellence? Meh.
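For the record, the “very basic rule” in question is not exotic. A minimal sketch using the pyotp library shows the time-based one-time-password check any account of consequence should sit behind; the names and the secret handling are simplified for illustration:

```python
import pyotp

# Enrollment: generate a secret and hand the user a provisioning URI
# to scan into an authenticator app. The names below are invented examples.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="press@example.gov", issuer_name="X account"))

# Login: in addition to the password, require the current six-digit code.
submitted_code = input("Enter the code from your authenticator app: ")
if totp.verify(submitted_code):
    print("Second factor accepted.")
else:
    print("Second factor rejected -- no post for you.")
```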

Stephen E Arnold, January 18, 2024

Cybersecurity AI: Yet Another Next Big Thing

January 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Not surprisingly, generative AI has boosted the cybersecurity arms race. As bad actors use algorithms to more efficiently breach organizations’ defenses, security departments can only keep up by using AI tools. At least that is what VentureBeat maintains in, “How Generative AI Will Enhance Cybersecurity in a Zero-Trust World.” Writer Louis Columbus tells us:

Deep Instinct’s recent survey, Generative AI and Cybersecurity: Bright Future or Business Battleground?, quantifies the trends VentureBeat hears in CISO interviews. The study found that while 69% of organizations have adopted generative AI tools, 46% of cybersecurity professionals feel that generative AI makes organizations more vulnerable to attacks. Eighty-eight percent of CISOs and security leaders say that weaponized AI attacks are inevitable. Eighty-five percent believe that gen AI has likely powered recent attacks, citing the resurgence of WormGPT, a new generative AI advertised on underground forums to attackers interested in launching phishing and business email compromise attacks. Weaponized gen AI tools for sale on the dark web and over Telegram quickly become best sellers. An example is how quickly FraudGPT reached 3,000 subscriptions by July.

That is both predictable and alarming. What should companies do about it? The post warns:

“Businesses must implement cyber AI for defense before offensive AI becomes mainstream. When it becomes a war of algorithms against algorithms, only autonomous response will be able to fight back at machine speeds to stop AI-augmented attacks,” said Max Heinemeyer, director of threat hunting at Darktrace.

Before AI is mainstream? Better get moving. We’re told the market for generative AI cybersecurity solutions is already growing, and Forrester divides it into three use cases: content creation, behavior prediction, and knowledge articulation. Of course, Columbus notes, each organization will have different needs, so adaptable solutions are important. See the write-up for some specific tips and links to further information. The tools may be new but the dynamic is a constant: as bad actors up their game, so too must security teams.

Cynthia Murrell, January 15, 2024
