Fancy Cyber Methods Are Useless Against Insider Threats

August 2, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

In my lectures to law enforcement and intelligence professionals, I end the talks with one statement: “Do not assume. Do not reduce costs by firing experienced professionals. Do not ignore human analyses of available information. Do not take shortcuts.” Cyber security companies are often like the mythical kids of the village shoemaker. Those who can afford to hire the shoemaker have nifty kicks and slides. Those without resources have almost useless footwear.

Companies in the security business often have an exceptionally high opinion of their capabilities and expertise. I think of this as the Google Syndrome or what some have called by less salubrious names. The idea is that one is just so smart, nothing bad can happen here. Yeah, right.

An executive answers questions about a slight security misstep. Thanks, Microsoft Copilot. You have been there and done that, I assume.

I read “North Korean Hacker Got Hired by US Security Vendor, Immediately Loaded Malware.” The article is a reminder that outfits in the OSINT, investigative, and intelligence business can make incredibly interesting decisions. Some of these lead to quite significant consequences. This particular case example illustrates how a hiring process using humans who are really smart and dedicated can be fooled, duped, and bamboozled.

The write up explains:

KnowBe4, a US-based security vendor, revealed that it unwittingly hired a North Korean hacker who attempted to load malware into the company’s network. KnowBe4 CEO and founder Stu Sjouwerman described the incident in a blog post yesterday, calling it a cautionary tale that was fortunately detected before causing any major problems.

I am a dinobaby, and I translated the passage to mean: “We hired a bad actor but, by the grace of the Big Guy, we avoided disaster.”

Sure, sure, you did.

I would suggest you know you trapped one instance of the person’s behavior. You may never know what that individual told a colleague in North Korea or another country, or what the bad actor said or emailed from a coffee shop using a contact’s computer. You may never know what business processes the person absorbed, converted to an encrypted message, and forwarded via a burner phone to a pal in a nation-state whose interests are not aligned with America’s.

In short, the cyber security company dropped the ball. It need not feel too bad. One of the companies I worked for early in my 60-year working career hired a person who dumped top secrets into journalists’ laps. Last week a person I knew was complaining about Delta Airlines, which was shown to be quite addled in the wake of the CrowdStrike misstep.

What’s the fix? Go back to how I end my lectures. Those in the cyber security business need to be extra vigilant. The idea that “we are so smart, we have the answer” is an example of a mental shortcut. The fact is that the company KnowBe4 did not have the answer. It is lucky it KnewAtAll. Some tips:

  1. Seek and hire vetted experts
  2. Question procedures and processes in “before action” and “after action” incidents
  3. Do not rely on assumptions
  4. Do not believe the outputs of smart software systems
  5. Invest in security instead of fancy automobiles and vacations

Do these suggestions run counter to your business goals and your image of yourself? Too bad. Life is tough. Cyber crime is the growth business. Step up.

Stephen E Arnold, August 2, 2024

Survey Finds Two-Thirds of Us Believe Chatbots Are Conscious

August 2, 2024

Well, this is enlightening. TechSpot reports, “Survey Shows Many People Believe AI Chatbots Like ChatGPT Are Conscious.” And by many, writer Rob Thubron means two-thirds of those surveyed by researchers at the University of Waterloo. Two-thirds! We suppose it is no surprise the general public has this misconception. After all, even an AI engineer was famously convinced his company’s creation was sentient. We learn:

“The survey asked 300 people in the US if they thought ChatGPT could have the capacity for consciousness and the ability to make plans, reason, feel emotions, etc. They were also asked how often they used OpenAI’s product. Participants had to rate ChatGPT responses on a scale of 1 to 100, where 100 would mean absolute confidence that ChatGPT was experiencing consciousness, and 1 absolute confidence it was not. The results showed that the more someone used ChatGPT, the more they were likely to believe it had some form of consciousness. ‘These results demonstrate the power of language,’ said Dr. Clara Colombatto, professor of psychology at Waterloo’s Arts faculty, ‘because a conversation alone can lead us to think that an agent that looks and works very differently from us can have a mind.’”

That is a good point. And these “agents” will only get more convincing even as more of us interact with them more often. It is encouraging that some schools are beginning to implement AI Literacy curricula. These programs include important topics like how to effectively work with AI, when to double-check its conclusions, and a rundown of ethical considerations. More to the point here, they give students a basic understanding of what is happening under the hood.

But it seems we need a push for adults to educate themselves, too. Even a basic understanding of machine learning and LLMs would help. It will take effort to thwart our natural tendency to anthropomorphize, which is reinforced by AI hype. That is important, because when we perceive AI to think and feel as we do, we change how we interact with it. The write-up notes:

“The study, published in the journal Neuroscience of Consciousness, states that this belief could impact people who interact with AI tools. On the one hand, it may strengthen social bonds and increase trust. But it may also lead to emotional dependence on the chatbots, reduce human interactions, and lead to an over-reliance on AI to make critical decisions.”

Soon we might even find ourselves catering to the perceived needs of our software (or the actual goals of the firms that make it) instead of using it as an inanimate tool. Is that a path we really want to go down? Is it too late to avoid it?

Cynthia Murrell, August 2, 2024

Yep, the Old Internet Is Gone. Learn to Love the New Internet

August 1, 2024

This essay is the work of a dumb humanoid. No smart software required.

The market has given the Google the green light to restrict information. The information highway has a new on ramp. If you want content created by people who were not compensated, you have to use Google search. Toss in the advertising system and that good old free market is going to deliver bumper revenue to stakeholders.

Online search is a problem. Here’s an old timer like me who broke his leg. The young wizard who works at a large online services firm explains that I should not worry. By the time my leg heals, I will be dead. Happy thoughts from one of those Gen somethings. Thanks, MSFT Copilot. How are your security systems today?

What about users? The reality is that with Google as the default search system on Apple iPhones and a brand that has redefined search and retrieval to mean “pay to play,” what’s the big deal?

Years ago I explained in numerous speeches and articles in publications like Online Magazine that online fosters the creation of centralized monopolistic information services. Some information professionals dismissed my observation as stupid. The general response was that online would generate benefits. I agree. But there were a few downsides. I usually pointed to the duopoly in online for-fee legal information. I referenced the American Chemical Society’s online service Chemical Abstracts. I even pointed out that outfits like Predicasts and the New York Times would have a very, very tough time creating profitable information-centric standalone businesses. The centralization or magnetic pull of certain online services would make generating profits very expensive.

So where are we now? I read “Reddit, Google, and the Real Cost of the AI Data Rush.” The article is representative of “real” journalists’, pundits’, and some regulators’ understanding of online information. The write up says:

Google, like Reddit, owes its existence and success to the principles and practices of the open web, but exclusive arrangements like these mark the end of that long and incredibly fruitful era. They’re also a sign of things to come. The web was already in rough shape, reduced over the last 15 years by the rise of walled-off platforms, battered by advertising consolidation, and polluted by a glut of content from the AI products that used it for training. The rise of AI scraping threatens to finish the job, collapsing a flawed but enormously successful, decades-long experiment in open networking and human communication to a set of antagonistic contracts between warring tech firms.

I want to point out that Google bought rights to Reddit. If you want to search Reddit, you use Google. Because Reddit is a high traffic site, users have to use Google. Guess what? Most online users do not care. Search means Google. Information access means Google. Finding a restaurant means Google. Period.

Google has become a center of gravity in the online universe. One can argue that Google is the Internet. In my monograph Google Version 2.0: The Calculating Predator, that is exactly what some Googlers envisioned for the firm. Once a user accesses Google, Google controls the information world. One can argue that Meta and TikTok are going to prevent that. Some folks suggest that one of the AI start-ups will neutralize Google’s centralized gravitational force. Google is a distributed outfit. Think of it as the background radiation in our universe. It is just there. Live with it.

Google has converted content created by people who were not compensated into zeros and ones that will enhance its magnetic pull on users.

Several observations:

  1. Users were so enamored of a service which could show useful results from the quite large and very disorganized pools of digital information that it sucked the life out of its competitors.
  2. Once a funding source got the message through to the Backrub boys that they had to monetize, the company obtained inspiration from the Yahoo pay to play model which Yahoo acquired from Overture.com, formerly GoTo.com. That pay to play thing produces lots of money when there is traffic. Google was getting traffic.
  3. Regulators ignored Google’s slow but steady march to information dominance. In fact, some regulatory professionals with whom I spoke thought Google was the cat’s pajamas and asked me if I could get them Google T-shirts for their kids. Google was not evil; it was fun; it was success.
  4. Almost the entire world’s intelligence professionals rely on Google for OSINT. If you don’t know what that means, forget the term. Knowing the control Google can exert by filtering information on a topic will probably give you a tummy ache.

The future is going to look exactly like the world of online in the year 1980. Google and maybe a couple of smaller also-rans will control access to digital information. To get advertising-free results and to have a shot at bias-free answers to online queries, users will have to pay. The currency will be watching advertising or subscribing to a premium service. The business model of Dialog Information Services, SDC, DataStar, and Dialcom is coming back. The prices will inflate. Control of information will be easy. And shaping or weaponizing content flow from these next generation online services will be too profitable to resist. Learn to love Google. It is more powerful than a single country’s government. If a country gets too frisky for Google’s liking, the company has ways to evade issues that make it uncomfortable.

The cartoon in this blog post summarizes my view of the situation. A fix will take a long time. I will be pushing up petunias before the problems of online search and the Information Superhighway are remediated.

Stephen E Arnold, August 1, 2024

Every Cloud Has a Silver Lining: Cyber Security Software from Israel

August 1, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I wonder if those lucky Delta passengers have made it to their destinations yet? The CrowdStrike misstep caused a bit of a problem for some systems and for humans too. I saw a notice that CrowdStrike, founded by a Russian I believe, offered $10 to each person troubled by the teeny-tiny mistake. Isn’t that too much for something which cannot be blamed on any one person, just on an elusive machine-centric process that had a bad hair day? Why pay anything?

And there is a silver lining to the CrowdStrike cloud! I read “CrowdStrike’s Troubles Open New Doors for Israeli Cyber Companies.” [Note that this source document may be paywalled. Just a heads up, gentle reader.] The write up asserts:

For the Israeli cyber sector, CrowdStrike’s troubles are an opportunity.

Yep, opportunity.

The write up adds:

Friday’s [July 26, 2024] drop in CrowdStrike shares reflects investor frustration and the expectation that potential customers will now turn to competitors, strengthening the position of Israeli companies. This situation may renew interest in smaller startups and local procurement in Israel, given how many institutions were affected by the CrowdStrike debacle.

The write up uses the term platformization, which is a marketing concept of the Palo Alto Networks cyber security firm. The idea is that a typical company is a rat’s nest of cyber security systems. No one is able to keep the features, functions, and flaws of several systems in mind. When something misfires or a tiny stumble occurs, Mr. Chaos, the friend of every cyber security professional, strolls in and asks, “Planning on a fun weekend, folks?”

The sales person makes reality look different. Thanks, Microsoft Copilot. Your marketing would never distort anything, right?

Platformization sounds great. I am not sure that any cyber security magic wand works. My econo-box automobile runs, but I would not say, “It works.” I can ponder this conundrum as I wait for the mobile repair fellow to arrive and then ride in an Uber back to my office in rural Kentucky. The rides are evidence that “just works” is not exactly accurate. Your mileage may vary.

I want to point out that the write up is a bit of content marketing for Palo Alto Networks. Furthermore, I want to bring up a point which irritates some of my friends; namely, the Israeli cyber security systems, infrastructure, and smart software did not work in October 2023. Sure, there are lots of explanations. But which is more of a problem? CrowdStrike or the ineffectiveness of multiple systems?

Your call. The solution to cyber issues resides in informed professionals, adequate resources like money, and a commitment to security. Assumptions, marketing lingo, and fancy trade show booths simply prove that overpromising and under-delivering are standard operating procedure at this time.

Stephen E Arnold, August 1, 2024

A Reliability Test for General-Purpose AI

August 1, 2024

A team of researchers has developed a valuable technique: “How to Assess a General-Purpose AI Model’s Reliability Before It’s Deployed.” The ScienceDaily article begins by defining foundation models—the huge, generalized deep-learning models that underpin generative AI like ChatGPT and DALL-E. We are reminded these tools often make mistakes, and that sometimes these mistakes can have serious consequences. (Think self-driving cars.) We learn:

“To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed to a specific task. They do this by considering a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable. When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of downstream classification tasks. Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. This could be especially useful when datasets may not be accessible due to privacy concerns, like in health care settings. In addition, the technique could be used to rank models based on reliability scores, enabling a user to select the best one for their task.”

Great! See the write-up for the technical details behind the technique. This breakthrough can help companies avoid mistakes before they launch their products. That is, if they elect to use it. Will organizations looking to use AI for cost cutting go through these processes? Sadly, we suspect that, if costs go down and lawsuits are few and far between, the AI is deemed good enough. But thanks for the suggestion, MIT.
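
Stripped to its essentials, the technique is measurable agreement among near-duplicate models. Here is a minimal Python sketch of that consistency check; the callable models, the anchor-based alignment, and the scoring formula are our illustrative assumptions, not the MIT team’s published algorithm:

```python
import numpy as np

def relative_representation(vec, anchor_vecs):
    """Describe an embedding by its cosine similarity to shared anchor
    points, so vectors from models with different embedding spaces
    become comparable."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return np.array([cos(vec, a) for a in anchor_vecs])

def reliability_score(models, x, anchor_inputs):
    """Mean pairwise agreement among an ensemble of models on one test
    point. `models` are callables mapping an input to an embedding.
    Higher score = more consistent representations = more reliable."""
    reps = []
    for m in models:
        anchors = [m(a) for a in anchor_inputs]
        reps.append(relative_representation(m(x), anchors))
    sims = [
        float(reps[i] @ reps[j] /
              (np.linalg.norm(reps[i]) * np.linalg.norm(reps[j])))
        for i in range(len(reps))
        for j in range(i + 1, len(reps))
    ]
    return float(np.mean(sims))
```

The anchor step matters because each model’s embedding space has its own coordinate system; describing a point by its similarities to shared anchor inputs makes the spaces comparable before agreement is scored.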

Cynthia Murrell, August 1, 2024

Google and Its Smart Software: The Emotion Directed Use Case

July 31, 2024

This essay is the work of a dumb humanoid. No smart software required.

How different are the Googlers from those smack in the middle of a normal curve? Some evidence is provided to answer this question in the Ars Technica article “Outsourcing Emotion: The Horror of Google’s ‘Dear Sydney’ AI Ad.” I did not see the advertisement. The volume of messages flooding through my channels each day has allowed me to develop what I call “ad blindness.” I don’t notice them; I don’t watch them; and I don’t care about the crazy content presentation which I struggle to understand.

A young person has to write a sympathy card. The smart software encourages using the word “feel.” This is a word foreign to the individual who wants to work for big tech someday. Thanks, MSFT Copilot. Do you have your hands full with security issues today?

Ars Technica watches TV and the Olympics. The write up reports:

In it, a proud father seeks help writing a letter on behalf of his daughter, who is an aspiring runner and superfan of world-record-holding hurdler Sydney McLaughlin-Levrone. “I’m pretty good with words, but this has to be just right,” the father intones before asking Gemini to “Help my daughter write a letter telling Sydney how inspiring she is…” Gemini dutifully responds with a draft letter in which the LLM tells the runner, on behalf of the daughter, that she wants to be “just like you.”

What’s going on? The father wants something personal written on behalf of his progeny. A Hallmark card may never be delivered from the US to France. The solution is an emessage. That makes sense. Essential services like delivering snail mail are, like most major systems, not working particularly well.

Ars Technica points out:

But I think the most offensive thing about the ad is what it implies about the kinds of human tasks Google sees AI replacing. Rather than using LLMs to automate tedious busywork or difficult research questions, “Dear Sydney” presents a world where Gemini can help us offload a heartwarming shared moment of connection with our children.

I find the article’s negative reaction to a Mad Ave-type of message play somewhat insensitive. Let’s look at this use of smart software from the point of view of a person who is at the right-hand tail of the normal distribution. The factors in this curve are compensation, cleverness as measured in a Google interview, and intelligence as determined by either what school a person attended, achievements when a person was in his or her teens, or solving one of the Courant Institute of Mathematical Sciences brain teasers. (These are shared at cocktail parties or over coffee. If you can’t answer, you pay the bill and never get invited back.)

Let’s run down the use of AI from this hypothetical right-tail viewpoint:

  1. What’s with this assumption that a Google-type person has experience with human interaction? Why not send a text even though your co-worker is at the next desk? Why waste time and brain cycles trying to emulate a Hallmark greeting card contractor’s phraseology? The use of AI is simply logical.
  2. Why criticize an alleged Googler or Googler-by-the-gig for using the company’s outstanding, quantumly supreme AI system? This outfit spends millions on running AI tests which allow the firm’s smart software to perform in an optimal manner in the messaging department. This is “eating the dog food one has prepared.” Think of it as quality testing.
  3. The AI system, running in the Google Cloud on Google technology, is faster than even a quantumly supreme Googler when it comes to generating feel-good platitudes. The technology works well. Evaluate this message in terms of the effectiveness of the messaging generated by Google leadership with regard to the Dr. Timnit Gebru matter. Upper quartile of performance, far beyond the dead-center-of-the-bell-curve humanoids.

My view is that there is one positive from this use of smart software to message a partially-developed and not completely educated younger person. The Sundar & Prabhakar Comedy Act has been recycling jokes and bits for months. Some find them repetitive. I do not. I am fascinated by the recycling. The S&P Show has its fans just as Jack Benny does decades after his demise. But others want new material.

By golly, I think the Google ad showing Google’s smart software generating a parental note is a hoot and a great demo. Plus look at the PR the spot has generated.

What’s not to like? Not much if you are Googley. If you are not Googley, sorry. There’s not much that can be done except shove ads at you whenever you encounter a Google product or service. The ad illustrates the mental orientation of Google. Learn to love it. Nothing is going to alter the trajectory of the Google for the foreseeable future. Why not use Google’s smart software to write a sympathy note to a friend when his or her parent dies? Why not use Google to write a note to the dean of a college arguing that your child should be admitted? Why not let Google think for you? At least that decision would be intentional.

Stephen E Arnold, July 31, 2024

Spotting Machine-Generated Content: A Work in Progress

July 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Some professionals want to figure out if a chunk of content is real, fabricated, or fake. In my experience, making that determination is difficult. For those who want to experiment with identifying weaponized, generated, or AI-assisted content, you may want to review the tools described in “AI Tools to Detect Disinformation – A Selection for Reporters and Fact-Checkers.” The article groups tools into categories. For example, there are utilities for text, images, video, and five bonus tools. There is a suggestion to address the bot problem. The write up is intended for “journalists,” a category which I find increasingly difficult to define.

The big question is, of course, do these systems work? I tried to test the tool from FactiSearch, and the link 404ed. The service is available, but a bit of clicking is involved. I tried the Exorde tool and was greeted with a request to register for a free trial.

I plugged some machine-generated text produced with the You.com “Genius” LLM system into GPT Radar (not in the cited article’s list, by the way). That system happily reported that the sample copy was written by a human.

The test content was not. I then plugged in some text I wrote myself, and the system reported that three items in my own writing were written by a large language model. I don’t know whether to be flattered or horrified.
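
How do such detectors decide? Many lean on statistical signals like perplexity: text a language model finds highly predictable gets flagged as machine-written. Here is a rough sketch of that approach, assuming the Hugging Face transformers library and GPT-2 as the scoring model; these are our illustrative choices, since GPT Radar does not document its internals:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2: how
    'predictable' the model finds the writing."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

# The threshold is a guess, and that is the weakness: polished human
# prose can score as "predictable" as LLM output, producing exactly the
# false positives described above.
THRESHOLD = 30.0

def looks_machine_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD
```

The hard-coded threshold is the weak point. Practiced human writers also produce low-perplexity prose, which may be exactly why this dinobaby’s handiwork tripped the alarm.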

The bottom line is that systems designed to identify machine-generated content are a work in progress. My view is that as soon as a bright young spark rolls out a new detection system, the LLM output becomes better. So a cat-and-mouse game ensues.

Stephen E Arnold, July 31, 2024

No Llama 3 for EU

July 31, 2024

Frustrated with European regulators, Meta is ready to take its AI ball and go home. Axios reveals, “Scoop: Meta Won’t Offer Future Multimodal AI Models in EU.” Reporter Ina Fried writes:

“Meta will withhold its next multimodal AI model — and future ones — from customers in the European Union because of what it says is a lack of clarity from regulators there, Axios has learned. Why it matters: The move sets up a showdown between Meta and EU regulators and highlights a growing willingness among U.S. tech giants to withhold products from European customers. State of play: ’We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment,’ Meta said in a statement to Axios.”

So there. And Meta is not the only firm petulant in the face of privacy regulations. Apple recently made a similar declaration. So governments may not be able to regulate AI, but AI outfits can try to regulate governments. Seems legit. The EU’s stance is that Llama 3 may not feed on European users’ Facebook and Instagram posts. Does Meta hope FOMO will make the EU back down? We learn:

“Meta plans to incorporate the new multimodal models, which are able to reason across video, audio, images and text, in a wide range of products, including smartphones and its Meta Ray-Ban smart glasses. Meta says its decision also means that European companies will not be able to use the multimodal models even though they are being released under an open license. It could also prevent companies outside of the EU from offering products and services in Europe that make use of the new multimodal models. The company is also planning to release a larger, text-only version of its Llama 3 model soon. That will be made available for customers and companies in the EU, Meta said.”

The company insists EU user data is crucial to be sure its European products accurately reflect the region’s terminology and culture. Sure. That is almost a plausible excuse.

Cynthia Murrell, July 31, 2024

Which Outfit Will Win? The Google or Some Bunch of Busy Bodies

July 30, 2024

This essay is the work of a dumb humanoid. No smart software required.

It may not be the shoot out at the OK Corral, but the dust up is likely to be a fan favorite. It is possible that some crypto outfit will find a way to issue an NFT and host pay-per-view broadcasts of the committee meetings, lawyer news conferences, and pundits recycling press releases. On the other hand, maybe the shoot out is a Hollywood deal. Everyone knows who is going to win before the real action begins.

“Third Party Cookies Have Got to Go” reports:

After reading Google’s announcement that they no longer plan to deprecate third-party cookies, we wanted to make our position clear. We have updated our TAG finding Third-party cookies must be removed to spell out our concerns.

A great debate is underway. Who or what wins? Experience suggests that money has an advantage in this type of disagreement. Thanks, MSFT. Good enough.

Who is making this draconian statement? A government regulator? A big-time legal eagle representing an NGO? Someone running for president of the United States? A member of the CCP? Nope, the World Wide Web Consortium or W3C. This group was set up by Tim Berners-Lee, who wanted to find and link documents at CERN. The outfit wants to cook up Web standards, much to the delight of online advertising interests and certain organizations monitoring Web traffic. Rules invite crafty workarounds that circumvent their intent and enable the magical world of the modern Internet. How is that working out? I thought the big technology companies set standards like no “soft 404s” or “sorry, Chrome created a problem. We are really, really sorry.”

The write up continues:

We aren’t the only ones who are worried. The updated RFC that defines cookies says that third-party cookies have “inherent privacy issues” and that therefore web “resources cannot rely upon third-party cookies being treated consistently by user agents for the foreseeable future.” We agree. Furthermore, tracking and subsequent data collection and brokerage can support micro-targeting of political messages, which can have a detrimental impact on society, as identified by Privacy International and other organizations. Regulatory authorities, such as the UK’s Information Commissioner’s Office, have also called for the blocking of third-party cookies.

I understand, but the Google seems to be doing one of those “let’s just dump this loser” moves. Revenue is more important than the silly privacy thing. Users who want privacy should take control of their technology.
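
Taking control is easier with a picture of how simple the mechanism is. Here is a minimal sketch of a cross-site tracking pixel; Flask, the tracker.example domain, and the uid cookie are our illustrative assumptions, not any particular vendor’s code:

```python
# Run on tracker.example; the same /pixel.gif is embedded as an <img>
# on site-a.com, site-b.com, and so on.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/pixel.gif")
def pixel():
    # The browser sends this cookie with every request to our domain,
    # no matter which site embedded the pixel.
    uid = request.cookies.get("uid") or str(uuid.uuid4())
    referer = request.headers.get("Referer", "unknown")
    print(f"user {uid} seen on {referer}")  # the cross-site join
    resp = make_response(b"GIF89a")  # a throwaway image payload
    # SameSite=None; Secure is what allows the cookie to ride along on
    # third-party (cross-site) requests in the first place.
    resp.set_cookie("uid", uid, samesite="None", secure=True)
    return resp
```

Because the browser attaches the uid cookie to every request to the tracker’s domain, one snippet embedded across thousands of sites yields a cross-site browsing profile. That is the “inherent privacy issue” the RFC language points at.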

The W3C points out:

The unfortunate climb-down will also have secondary effects, as it is likely to delay cross-browser work on effective alternatives to third-party cookies. We fear it will have an overall detrimental impact on the cause of improving privacy on the web. We sincerely hope that Google reverses this decision and re-commits to a path towards removal of third-party cookies.

Now the big question: “Who is going to win this shoot out?”

Normal folks might compromise or test a number of options to determine which makes the most sense at a particularly interesting point in time. There is post-Covid weirdness, the threat of escalating armed conflict in what, six, 27, or 95 countries, and financial brittleness. That anti-fragile handwaving is not getting much traction in my opinion.

At one end of the corral are the sleek technology wizards. These normcore folks have phasers, AI, and money. At the other end of the corral are the opponents, who look like a random selection of Café de Paris customers. Place your bets.

Stephen E Arnold, July 30, 2024

One Legal Stab at CrowdStrike Liability

July 30, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “CrowdStrike Will Be Liable for Damages in France, Based on the OVH Precedent.” OVH is a provider of hosting and what I call “enabling services” to organizations in France, Europe, and other countries. The write up focuses on a modest problem OVH experienced in 2021. A fire consumed four of OVH’s data centers. Needless to say, the customers of one of the largest online services providers in Europe were not too happy, for two reasons: backups were not available, and the affected organizations were knocked offline.

Two astronauts look down at earth from the soon to be decommissioned space station. The lights and power on earth just flicked off. Thanks, Microsoft Copilot. No security meetings today?

The article focuses on the French courts’ decision that OVH was liable for damages. A number of details about the legal logic appear in the write up. For those of you who still watch Perry Mason reruns on Sling, please navigate to the cited article for the details. I boiled the OVH tale down to a single dot point from the excellent article:

The court ruled the OVH backup service was not operated to a reasonable standard and failed at its purpose.

This means that in France and probably the European Union those technology-savvy CrowdStrike wizards will be writing checks. The firm’s lawyers will get big checks for a number of years. Then the falconers of cyber threats will be scratching out checks to the customers and probably to some of the well-heeled downstream claimants: the airport lounge sleepers, the families of patients who died because surgeries could not be performed, and a kettle of seething government agencies whose emergency call services were dead.

The write up concludes with this statement:

Customers operating in regulated industries like healthcare, finance, aerospace, transportation, are actually required to test and stage and track changes. CrowdStrike claims to have a dozen certifications and standards which require them to follow particular development practices and carry out various level of testing, but they clearly did not. The simple fact that CrowdStrike does not do any of that and actively refuses to, puts them in breach of compliance, which puts customers themselves in breach of compliance by using CrowdStrike. All together, there may be sufficient grounds to unilaterally terminate any CrowdStrike contracts for any customer who wishes to.

The key phrase is “in breach of compliance”. That’s going to be an interesting bit of lingo for lawyers involved in the dead Falcon affair to sort out.
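
The quoted point about staging changes is worth making concrete. In a regulated shop, staging usually means ring deployment: push the update to a sliver of the fleet, watch telemetry, and widen only if things stay healthy. Here is a minimal sketch; the ring sizes, crash budget, and Host interface are our assumptions, not CrowdStrike’s actual process:

```python
import random
import time

RINGS = [0.001, 0.01, 0.10, 1.00]  # fraction of the fleet per stage
CRASH_BUDGET = 0.001               # halt if >0.1% of updated hosts crash
SOAK_SECONDS = 5                   # hours in real life; seconds for a demo

class Host:
    """Hypothetical endpoint; install/rollback stand in for real APIs."""
    def __init__(self):
        self.crashed = False
    def install(self, update):
        self.crashed = random.random() < update.get("defect_rate", 0.0)
    def rollback(self, update):
        self.crashed = False

def staged_deploy(update, fleet):
    """Push `update` through widening rings, halting on bad telemetry."""
    random.shuffle(fleet)
    done = 0
    for frac in RINGS:
        target = max(done + 1, int(len(fleet) * frac))
        for host in fleet[done:target]:
            host.install(update)
        done = target
        time.sleep(SOAK_SECONDS)  # soak: let crash telemetry accumulate
        crash_rate = sum(h.crashed for h in fleet[:done]) / done
        if crash_rate > CRASH_BUDGET:
            for host in fleet[:done]:
                host.rollback(update)
            raise RuntimeError(
                f"halted at ring {frac:.1%}: {crash_rate:.2%} crashed")
    return done

# A defective update dies in the first ring instead of taking out the fleet:
# staged_deploy({"defect_rate": 1.0}, [Host() for _ in range(10_000)])
```

Pushing a content update to the entire fleet at once, by contrast, guarantees the first defect reaches every machine before any telemetry arrives. That difference is what the compliance argument turns on.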

Several observations:

  1. Will someone in the post-Falcon mess raise the question, “Could this be a recipe for a bad actor to emulate?” Could friends of one of the founders who has some ties to Russia be asked questions?
  2. What about that outstanding security of the Microsoft servers? How will the smart software outfit fixated on putting ads for a browser in an operating system respond? Those blue screens are not what I associate with my Apple Mini servers. I think our Linux boxes display a somewhat ominous black screen. Blue is who?
  3. Will this incident be shoved around until absolutely no one knows who signed off on the code modules which contributed to this somewhat interesting global event? My hunch is that it could be a person working as a contractor from a yurt somewhere northeast of Armenia. What’s your best guess?

Net net: It is definite that a cyber attack aimed at the heart of Microsoft’s software can create global outages. How many computer science students in Bulgaria are thinking about this issue? Will bad actors’ technology wizards rethink what can be done with a simple pushed update?

Stephen E Arnold, July 30, 2024
