Is This Incident the Price of Marketing: A Lesson for Specialized Software Companies

April 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A comparatively small number of firms develop software and provide specialized services to analysts, law enforcement, and intelligence entities. When I started work at a nuclear consulting company, these firms were low profile. In fact, if one tried to locate the names of the companies in one of those almost-forgotten reference books (remember telephone books?), the job was a tough one. First, the firms had names which meant zero; for example, Rice Labs or Gray & Associates. Next, if one were to call, a human (often a person with a British accent) would politely inquire, “To whom did you wish to speak?” The answer had to conform to a list of acceptable responses. Third, if you were to hunt up the address, you might find yourself in Washington, DC, staring at the second floor of a nondescript building once used to bake pretzels.


Decisions, decisions. Thanks, MSFT Copilot. Good enough. Does that phrase apply to one’s own security methods?

Today, the world is different. A country now engaged in a controversial dust up in the Eastern Mediterranean has specialized firms with Web sites that publicize their capabilities as mechanisms to know your customer or make sense of big data. The outfits have trade show presences. One outfit, despite being the poster child for going off the rails, gives lectures and provides previews of its technologies at public events. How times have changed since I began working on commercial and government projects in the early 1970s.

Every company, including those engaged in the development and deployment of specialized policeware and intelware, is into marketing. One reason is cultural. Madison Avenue is the whoo-whoo part of doing something quite interesting and wanting to talk about the activity. The other reason is financial. Cracking tough technical problems costs money, and those who have the requisite skills are in demand. The trick, from my point of view, is to operate with a public presence while doing the less visible, often secret work required of these companies. The evolution of the specialized software business has been similar to figuring out how to walk a high wire over a circus crowd. Stay on the wire and the outfit is visible and applauded. Fall off the wire and fail big time. But more and more specialized software vendors decide to become visible and get recognition for their balancing act. I think the optimal approach is to stay out of the big tent and avoid the temptations of fame, bright lights, and a fatal fall.

“Why CISA Is Warning CISOs about a Breach at Sisense” provides a good example of public visibility and falling off the high wire. The write up says:

New York City based Sisense has more than a thousand customers across a range of industry verticals, including financial services, telecommunications, healthcare and higher education. On April 10, Sisense Chief Information Security Officer Sangram Dash told customers the company had been made aware of reports that “certain Sisense company information may have been made available on what we have been advised is a restricted access server (not generally available on the internet.)”

Let me highlight one other statement in the write up:

The incident raises questions about whether Sisense was doing enough to protect sensitive data entrusted to it by customers, such as whether the massive volume of stolen customer data was ever encrypted while at rest in these Amazon cloud servers. It is clear, however, that unknown attackers now have all of the credentials that Sisense customers used in their dashboards.

This firm enjoys some visibility because it markets itself using the hot button “analytics.” The function of some of the Sisense technology is to integrate “analytics” into other products and services. Thus it is an infrastructure company, but one that may have more capabilities than other types of firms. The company has non-commercial customers as well. If one wants to get “inside” data, Sisense has done a good job of marketing itself. That visibility makes the company easy to watch. Someone with skills and a motive can put grease on the high wire. The article explains what happens when the performer slips: “more than a thousand customers” are affected.

How can a specialized software company avoid a breach? One step is to avoid visibility. Another is to curtail dreams of big money. Redefine success, because those in your peer group won’t care much about you with or without big bucks. That is just not part of the game plan of many specialized software companies today. Each time I visit a trade show featuring specialized software firms as speakers and exhibitors, I marvel at the razz-ma-tazz the firms bring to the show. Yes, there is competition. But when specialized software companies, particularly those in the policeware and intelware business, market to both commercial and non-commercial organizations, that marketing increases their visibility. The visibility attracts bad actors the way Costco roasted chicken makes my French bulldog shiver with anticipation. Tibby wants that chicken. But he is not a bad actor and will not get out of bounds. Others do get out of bounds. The fix is to move the chicken, then put it in the fridge. Tibby will turn his attention elsewhere. He is a dog.

Net net: Less blurring of commercial and specialized customer services might be useful. Fewer blogs, podcasts, crazy marketing programs, and oddly detailed marketing write ups to government agencies. (Yes, these documents can be FOIAed by the Brennan folks, for instance. Yes, those brochures and PowerPoints can find their way to public repositories.) Less marketing. More judgment. Increased security attention, please.

Stephen E Arnold, April 12, 2024

Information: Cheap, Available, and Easy to Obtain

April 9, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I worked in Sillycon Valley and learned a few factoids I found somewhat new. Let me highlight three. First, a person with whom my firm had a business relationship told me, “Chinese people are Chinese for their entire life.” I interpreted this to mean that a person from China might live in Mountain View, but that individual had ties to his native land. That makes sense but, if true, the statement has interesting implications. Second, another person told me that there was a young person who could look at a circuit board and then reproduce it in sufficient detail to draw a schematic. This sounded crazy to me, but my contact took this savant to meetings, discussed his company’s interest in upcoming products, and asked for briefings. With the delightful human copying machine in tow, my contact would have information about forthcoming hardware, specifically video and telecommunications devices. And, finally, via a colleague I learned of an individual who was a naturalized citizen and worked at a US national laboratory. That individual swapped hard drives in photocopy machines and provided them to a family member in his home town of Wuhan. Were these anecdotes true or false? I assumed each held a grain of truth because technology adepts from China and other countries comprised a significant percentage of the professionals I encountered.


Information flows freely in US companies and other organizational entities. Some people bring buckets and collect fresh, pure data. Thanks, MSFT Copilot. If anyone knows about security, you do. Good enough.

I thought of these anecdotes when I read an allegedly accurate “real” news story called “Linwei Ding Was a Google Software Engineer. He Was Also a Prolific Thief of Trade Secrets, Say Prosecutors.” The subtitle is a bit more spicy:

U.S. officials say some of America’s most prominent tech firms have had their virtual pockets picked by Chinese corporate spies and intelligence agencies.

The write up, which may be shaped by art history majors on a mission, states:

Court records say he had others badge him into Google buildings, making it appear as if he were coming to work. In fact, prosecutors say, he was marketing himself to Chinese companies as an expert in artificial intelligence — while stealing 500 files containing some of Google’s most important AI secrets…. His case illustrates what American officials say is an ongoing nightmare for U.S. economic and national security: Some of America’s most prominent tech firms have had their virtual pockets picked by Chinese corporate spies and intelligence agencies.

Several observations about these allegedly true statements are warranted this fine spring day in rural Kentucky:

  1. Some managers assume that when an employee or contractor signs a confidentiality agreement, the employee will abide by that document. The problem arises when the person shares information with a family member, a friend from school, or with a company paying for information. That assumption underscores what might be called “uninformed” or “naive” behavior.
  2. The language barrier and certain cultural norms lock out many people who assume idle chatter and obsequious behavior signal respect and conformity with what some might call “US business norms.” Cultural “blindness” is not uncommon.
  3. Individuals may possess technical expertise unknown to colleagues and to contracting firms offering body shop services. For someone armed with knowledge of the photocopiers in certain US government entities, swapping out a hard drive is no big deal. A failure to appreciate someone’s ability to reproduce a circuit board from memory leads to similar carelessness when discussing confidential information.

America operates in a relatively open manner. I have lived and worked in other countries, and that openness often allows information to flow. Assumptions about behavior are not based on an understanding of the cultural norms of other countries.

Net net: The vulnerability is baked in. Therefore, information is often easy to get, difficult to keep privileged, and its leakage is often aided by companies and government agencies. Is there a fix? No, not without a bit more managerial rigor in the US. Money talks, moving fast and breaking things makes sense to many, and information seeps, maybe floods, from the resulting cracks. Whom does one trust? My approach: Not too many people regardless of background, what people tell me, or what I believe as an often clueless American.

Stephen E Arnold, April 9, 2024

Another Bottleneck Issue: Threat Analysis

April 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My general view of software is that it is usually good enough. You just cannot get ahead of the problems. For example, I recall doing a project to figure out why Visio (an early version) simply did not do what the marketing collateral said it did. We poked around, and in short order, we identified features that were not implemented or did not work as advertised. Were we surprised? Nah. That type of finding holds for consumer software as well as enterprise software. I recall waiting for someone who worked at Fast Search & Transfer in North Carolina to figure out why hit boosting was not functioning. The reason, if memory serves, was that no one had completed the code. What about security of the platform? Not discussed. The enthusiastic worker in North Carolina turned his attention to the task, but it took time to address the issue. The intrepid engineer encountered “undocumented dependencies.” These are tough to resolve when coders disappear, change jobs, or don’t know how to make something work. These functional issues stack up, and many are never resolved. Many are not considered in terms of security. Even worse, the fix applied by a clueless intern fascinated with Foosball screws something up because… the “leadership team” consists of former consultants, accountants, and lawyers. Not too many professionals with MBAs, law degrees, and expertise in SEC accounting requirements are into programming, security practices, and technical details. These stellar professionals gain technical expertise watching engineers give PowerPoint presentations. The meetings feature this popular question: “Where’s the lunch menu?”


The person in the row boat is going to have a difficult time dealing with software flaws and cyber security issues which emulate the gusher represented in the Microsoft Copilot illustration. Good enough image, just like good enough software security.

I read “NIST Unveils New Consortium to Operate National Vulnerability Database.” The focus is on software which invites bad actors to the Breach Fun Park. The write up says:

In early March, many security researchers noticed a significant drop in vulnerability enrichment data uploads on the NVD website that had started in mid-February. According to its own data, NIST has analyzed only 199 Common Vulnerabilities and Exposures (CVEs) out of the 2957 it has received so far in March. In total, over 4000 CVEs have not been analyzed since mid-February. Since the NVD is the most comprehensive vulnerability database in the world, many companies rely on it to deploy updates and patches.

The backlog is more than 3,800 vulnerability issues. The original fix was to shut down the US National Vulnerability Database. Yep, this action was kicked around at the exact same time as cyber security fires were blazing in a certain significant vendor providing software to the US government and when embedded exploits in open source software were making headlines.
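
For readers who want to see the backlog for themselves, here is a minimal sketch of mine (not NIST's code) that polls the public NVD 2.0 REST endpoint and tallies CVEs still waiting for enrichment. It assumes the documented vulnStatus field and the pubStartDate / pubEndDate parameters behave as publicly described; a production script would add an API key, paging, and rate limiting.

```python
# Sketch: count recent CVEs the NVD has not yet analyzed.
# Assumes the public NVD 2.0 API and its documented vulnStatus field.
import collections
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def tally_statuses(pub_start: str, pub_end: str) -> collections.Counter:
    """Return a count of vulnStatus values for CVEs published in the window."""
    params = {
        "pubStartDate": pub_start,   # ISO 8601, e.g. "2024-02-15T00:00:00.000"
        "pubEndDate": pub_end,
        "resultsPerPage": 2000,      # API maximum per page; no paging in this sketch
    }
    response = requests.get(NVD_URL, params=params, timeout=60)
    response.raise_for_status()
    records = response.json().get("vulnerabilities", [])
    return collections.Counter(item["cve"]["vulnStatus"] for item in records)

if __name__ == "__main__":
    counts = tally_statuses("2024-02-15T00:00:00.000", "2024-03-31T23:59:59.999")
    backlog = counts.get("Awaiting Analysis", 0) + counts.get("Received", 0)
    print("Status breakdown:", dict(counts))
    print("Still waiting on NIST enrichment:", backlog)
```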

How does one solve the backlog problem? In the examples I mentioned in the first paragraph of this essay, there was a single player and a single engineer who was supposed to solve the problem. Forget dependencies, just make the feature work in a manner that is good enough. Where does a government agency get a one-engineer-to-one-issue set up?

Answer: Create a consortium, a voluntary one to boot.

I have a number of observations to offer, but I will skip them. The point is that software vulnerabilities have overwhelmed a government agency. The commercial vendors issue news releases about each new “issue” a specific team or, in the case of Microsoft, a specific individual has identified. However, vendors rarely stumble upon the same issue. We identified a vector for ransomware which we will explain in our April 24, 2024, National Cyber Crime Conference lecture.

Net net: Software vulnerabilities illustrate the backlog problem associated with any type of content curation or software issue. The volume is overwhelming available resources. What’s the fix? (You will love this answer.) Artificial intelligence. Yep, sure.

Stephen E Arnold, April 8, 2024

Who Is Responsible for Security Problems? Guess, Please

March 28, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In my opinion, “Zero-Days Exploited in the Wild Jumped 50% in 2023, Fueled by Spyware Vendors” is a semi-sophisticated chunk of content marketing and an example of information shaping. The source of the “report” is Google. The article appears in what was a Google- and In-Q-Tel-backed company publication. The company is named “Recorded Future” and appears to be owned in whole or in part by a financial concern. In a separate transaction, Google purchased a cyber security outfit called Mandiant which provides services to government and commercial clients. This is an interesting collection of organizations and of each group’s staff of technical professionals.


The young players are arguing about whose shoulders will carry the burden of the broken window. The batter points to the fielder. The fielder points to the batter. Watching are the coaches and teammates. Everyone, it seems, is responsible. So whom will the automobile owner hold responsible? That’s a job for the lawyer retained by the entity with the deepest pockets and an unfettered communications channel. Nice work, MSFT Copilot. Is this scenario one with which you are familiar?

The article contains what seems to me quite shocking information; that is, companies providing specialized services to government agencies like law enforcement and intelligence entities are compromising the security of mobile phones. What’s interesting is that Google’s Android software is one of the more widely used “enablers” of what is now a ubiquitous computing device.

I noted this passage:

Commercial surveillance vendors (CSVs) were the leading culprit behind browser and mobile device exploitation, with Google attributing 75% of known zero-day exploits targeting Google products as well as Android ecosystem devices in 2023 (13 of 17 vulnerabilities). [Emphasis added. Editor.]

Why do I find the article intriguing?

  1. This “revelatory” write up can be interpreted to mean that spyware vendors have to be put in some type of quarantine, possibly similar to those odd boxes in airports where people who smoke can partake of a potentially harmful habit. In the special “room,” these folks can be monitored, perhaps?
  2. The number of exploits parallels the massive number of security breaches created by widely used laptop, desktop, and server software systems. Bad actors have been attacking for many years, and now the sophistication and volume of cyber attacks seem to be increasing. Every few days cyber security vendors alert me to a new threat; for example, entering hotel rooms with Unsaflok. It seems that security problems are endemic.
  3. The “fix” or “remedial” steps involve users, government agencies, and industry groups. I interpret the argument as suggesting that companies developing operating systems need help and possibly cannot be responsible for these security problems.

The article can be read as a summary of recent developments in the specialized software sector and its careless handling of its technology. However, I think the article is suggesting that the companies building and enabling mobile computing are just victimized by bad actors, lousy regulations, and sloppy government behaviors.

Maybe? I believe I will tilt toward the content marketing purpose of the write up. The argument “Hey, it’s not us” is not convincing me. I think it will complement other articles that blur responsibility the way faces are blurred in some videos.

Stephen E Arnold, March 28, 2024

Can Ma Bell Boogie?

March 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

AT&T provides numerous communication and information services to the US government and companies. People see the blue and white trucks with the obligatory orange cones and think nothing about their presence. Decades after Judge Greene rained on the AT&T monopoly parade, the company has regained some of its market chutzpah. The old-line Bell heads knew that would happen. One reason was the simple fact that communications services have a tendency to pool; that is, online, for instance, wants to be a monopoly. Like water, online and communication services seek the lowest level. One can grouse about a leaking basement, but one is complaining about a basic fact. Complain away, but the water pools. Similarly, AT&T benefits and knows how to make the best of this pooling, consolidating, and collecting reality.

I do miss the “old” AT&T. Say what you will about today’s destabilizing communications environment, just don’t forget that the pre-Judge Greene world produced useful innovations, provided hardware that worked, and made it possible for some government functions to work much better than they do today.


Thanks, MSFT, it seems you understand ageing companies which struggle in the midst of the cyber whippersnappers.

But what’s happened?

In February 2024, AT&T experienced an outage. The redundant, fail-safe, state-of-the-art infrastructure failed. “AT&T Cellular Service Restored after Daylong Outage; Cause Still Unknown” reported:

AT&T said late Thursday [February 22, 2024] that based on an initial review, the outage was “caused by the application and execution of an incorrect process used as we were expanding our network, not a cyber attack.” The company will continue to assess the outage.

What do we publicly know about this remarkable event a month ago? Not much. I am not going to speculate how a single misstep can knock out AT&T, but it raises some questions about AT&T’s procedures, its security, and, yes, its technical competence. The AT&T Ashburn data center is an interesting cluster of facilities. Could it be “knocked offline”? My concern is that the answer to this question is, “You bet your bippy it could.”

A second interesting event surfaced as well. AT&T suffered a mysterious breach which appears to have compromised data about millions of “customers.” And “AT&T Won’t Say How Its Customers’ Data Spilled Online.” Here’s a statement from the report of the breach:

When reached for comment, AT&T spokesperson Stephen Stokes told TechCrunch in a statement: “We have no indications of a compromise of our systems. We determined in 2021 that the information offered on this online forum did not appear to have come from our systems. This appears to be the same dataset that has been recycled several times on this forum.”

Leaked data are no big deal, apparently, and the incident remains unexplained. The AT&T system went down essentially in one fell swoop. Plus there is no explanation which resonates with my understanding of the Bell “way.”

Some questions:

  1. What has AT&T accomplished by its lack of public transparency?
  2. Has the company lost its ability to manage a large, dynamic system due to cost cutting?
  3. Is a lack of training and perhaps capable staff undermining what I think of as “mission critical capabilities” for business and government entities?
  4. What are US regulatory authorities doing to address what is, in my opinion, a threat to the economy of the US and the country’s national security?

Couple the AT&T events with emerging technology like artificial intelligence, and one has to ask: Will the company make appropriate decisions, or will it create vulnerabilities typically associated with a dominant software company?

Not a positive set up in my opinion. Ma Bell, are you too old and fat to boogie?

Stephen E Arnold, March 25, 2024

AI Hermeneutics: The Fire Fights of Interpretation Flame

March 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My hunch is that not too many of the thumb-typing, TikTok generation know what hermeneutics means. Furthermore, like most of their parents, these future masters of the phone-iverse don’t care. “Let software think for me” would make a nifty T shirt slogan at a technology conference.

This morning (March 12, 2024) I read three quite different write ups. Let me highlight each and then link the content of those documents to the problem of interpretation of religious texts.


Thanks, MSFT Copilot. I am confident your security team is up to this task.

The first write up is a news story called “Elon Musk’s AI to Open Source Grok This Week.” The main point for me is that Mr. Musk will put the label “open source” on his Grok artificial intelligence software. The write up includes an interesting quote; to wit:

Musk further adds that the whole idea of him founding OpenAI was about open sourcing AI. He highlighted his discussion with Larry Page, the former CEO of Google, who was Musk’s friend then. “I sat in his house and talked about AI safety, and Larry did not care about AI safety at all.”

The implication is that Mr. Musk does care about safety. Okay, let’s accept that.

The second story is an ArXiv paper called “Stealing Part of a Production Language Model.” The authors are nine Googlers, two ETH wizards, one University of Washington professor, one OpenAI researcher, and one McGill University smart software luminary. In short, the big outfits are making clear that closed or open, software is rising to the task of revealing some of the inner workings of these “next big things.” The paper states:

We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2…. For under $20 USD, our attack extracts the entire projection matrix of OpenAI’s ada and babbage language models.
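
The linear-algebra intuition behind that claim is simple enough to show with synthetic data. The toy sketch below is my illustration, not the paper's code and not a call to any real API: it builds a fake model whose final projection matrix maps a small hidden dimension up to a large vocabulary. Because every logit vector lies in that low-dimensional subspace, a singular value decomposition over enough query responses reveals the hidden width, which is the first step toward recovering the projection matrix itself.

```python
# Toy illustration of why full-logit outputs leak the hidden width of a model.
# Synthetic data only; not the paper's attack on a production API.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 4096      # pretend vocabulary size (logit dimension)
HIDDEN = 256      # pretend hidden width we want to recover
QUERIES = 512     # number of prompts whose full logit vectors we "observe"

# The model's final projection: hidden state -> logits. Unknown to the attacker.
W = rng.normal(size=(VOCAB, HIDDEN))

# Each query produces some hidden state; the "API" returns W @ h (plus tiny noise).
hidden_states = rng.normal(size=(HIDDEN, QUERIES))
logits = W @ hidden_states + 1e-6 * rng.normal(size=(VOCAB, QUERIES))

# Every column of `logits` lives in the HIDDEN-dimensional column space of W,
# so the singular values collapse after the first HIDDEN of them.
singular_values = np.linalg.svd(logits, compute_uv=False)
estimated_hidden = int(np.sum(singular_values > 1e-3 * singular_values[0]))

print("true hidden width:", HIDDEN)
print("estimated from logits alone:", estimated_hidden)
```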

The third item is “How Do Neural Networks Learn? A Mathematical Formula Explains How They Detect Relevant Patterns.” The main idea of this write up is that software can perform an X-ray type analysis of a black box and present some useful data about the inner workings of numerical recipes about which many AI “experts” feign total ignorance.

Several observations:

  1. Open source software is available to download largely without encumbrances. Good actors and bad actors can use this software and its components to let users put on a happy face or bedevil the world’s cyber security experts. Either way, smart software is out of the bag.
  2. In the event that someone or some organization has secrets buried in its software, those secrets can be exposed. Once the secret is known, the good actors and the bad actors can surf on that information.
  3. The notion of an attack surface for smart software now includes the numerical recipes and the model itself. Toss in the notion of data poisoning, and the notion of vulnerability must be recast from a specific attack to a much larger type of exploitation.

Net net: I assume the many committees, NGOs, and government entities discussing AI have considered these points and incorporated these articles into informed policies. In the meantime, the AI parade continues to attract participants. Who has time to fool around with the hermeneutics of smart software?

Stephen E Arnold, March 12, 2024

Microsoft and Security: A Rerun with the Same Worn-Out Script

March 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Marvel cinematic universe has spawned two dozen sequels. Microsoft’s security circus features are moving up fast in the reprise business. Unfortunately, there is no superhero who comes to the rescue of the giant American firm. The villains in these big screen stunners are a bit like those in the James Bond films. Microsoft seems to prefer to wrestle with the allegedly Russian Cozy Bear or at least to convert a cartoon animal into the personification of evil.


Thanks, MSFT, you have nailed security theater and reruns of the same tired story.

What’s interesting about these security blockbusters is that each follows a Hollywood style “you’ve seen this before, nudge nudge” approach to the entertainment. The sequence begins with a belated announcement that Microsoft security has been breached. The evil bad actors have stolen data, corrupted software, and by brute force foiled the norm cores in Microsoft World. Then come announcements about fixes that the Microsoft customer must implement, along with admonitions to keep that MSFT software updated and warnings about using “old” computers, etc. etc.

“Russian Hackers Accessed Microsoft Source Code” is the equivalent of a New York Times film review. The write up reports:

In January, Microsoft disclosed that Russian hackers had breached the company’s systems and managed to read emails belonging to senior executives. Now, the company has revealed that the breach was worse than initially understood and that the Russian hackers accessed Microsoft source code. Friday’s revelation — made in a blog post and a filing with the Securities and Exchange Commission — is the latest in a string of breaches affecting the company that have raised major questions in Washington about Microsoft’s security posture.

Well, that’s harsh. No mention of the estimable alleged monopoly’s releasing the information on March 7, 2024. I am capturing my thoughts on March 8, 2024. But with college basketball moving toward tournament time, who cares? I am not really sure anymore. And Washington? Does the name evoke a person, a committee, a committee consisting of the heads of security committees, someone in the White House, an “expert” at the suddenly famous National Bureau of Standards, or absolutely no one?

The write up asserts:

The company is concerned, however, that “Midnight Blizzard is attempting to use secrets of different types it has found,” including in emails between customers and Microsoft. “As we discover them in our exfiltrated email, we have been and are reaching out to these customers to assist them in taking mitigating measures,” the company said in its blog post. The company describes the incident as an example of “what has become more broadly an unprecedented global threat landscape, especially in terms of sophisticated nation-state attacks.” In response, the company has said it is increasing the resources and attention devoted to securing its systems.

Microsoft is “reaching out.” I can reach for a donut, but that does not mean I grasp it and gobble it down. “Reaching out” is not the same as fixing the problems Microsoft caused.

Several observations:

  1. Microsoft is an alleged monopoly, and it is allowing its digital trains to set fire to the fields, homes, and businesses which have to use its tracks. Isn’t it time for purposeful action from the US government agencies with direct responsibility for cyber security and appropriate business conduct?
  2. Can Microsoft remediate its problems? My answer is, “No.” Vulnerabilities are engineered in because no one has the time, energy, or interest to chase down problems and fix them. There is an ageing programmer named Steve Gibson. His approach to software is the exact opposite of Microsoft’s. Mr. Gibson will never be a trillion dollar operation, but his software works. Perhaps Microsoft should consider adopting some of Mr. Gibson’s methods.
  3. Customers have to take a close look at the security breaches endlessly reported by cyber security companies. Some outfits’ software is on the list most of the time. Other companies’ software is an infrequent visitor to these breach parties. Is it time for customers to be looking for an alternative to what Microsoft provides?

Net net: A new security release will be coming to a computer near you. Don’t fail to miss it.

Stephen E Arnold, March 12, 2024


NSO Group: Pegasus Code Wings Its Way to Meta and Mr. Zuckerberg

March 7, 2024

This essay is the work of a dumb dinobaby. No smart software required.

NSO Group’s senior managers and legal eagles will have an opportunity to become familiar with an okay Brazilian restaurant and a waffle shop. That lovable leader of Facebook, Instagram, Threads, and WhatsApp may have put a stick in the spokes of the now-ageing digital bicycle doing business as NSO Group. The company’s mark is Pegasus, a flying horse. Pegasus’s dad was Poseidon, and his mom was the knockout Gorgon Medusa, who did some innovative hair treatments. The mythical Pegasus helped out other gods until Zeus stepped in and acted with extreme prejudice. Quite a myth.


Poseidon decides to kill the mythical Pegasus, not for its software, but for its getting out of bounds. Thanks, MSFT Copilot. Close enough.

Life imitates myth. “Court Orders Maker of Pegasus Spyware to Hand Over Code to WhatsApp” reports that the hand over decision:

is a major legal victory for WhatsApp, the Meta-owned communication app which has been embroiled in a lawsuit against NSO since 2019, when it alleged that the Israeli company’s spyware had been used against 1,400 WhatsApp users over a two-week period. NSO’s Pegasus code, and code for other surveillance products it sells, is seen as a closely and highly sought state secret. NSO is closely regulated by the Israeli ministry of defense, which must review and approve the sale of all licenses to foreign governments.

NSO Group hired former DHS and NSA official Stewart Baker to fix up the NSO Group gyro compass. Mr. Baker is a podcaster and is affiliated with the law firm Steptoe and Johnson. For more color about Mr. Baker, please scan “Former DHS/NSA Official Stewart Baker Decides He Can Help NSO Group Turn A Profit.”

A decade ago, Israel’s senior officials might have been able to prevent a social media company from getting a copy of the Pegasus source code. Not anymore. Israel’s home-grown intelware technology simply did not thwart, prevent, or warn about the Hamas attack in the autumn of 2023. If NSO Group were battling in court with Harris Corp. or Textron, I would not worry. Mr. Zuckerberg’s companies are not directly involved with national security technology. From what I have heard at conferences, Mr. Zuckerberg’s commercial enterprises are responsive to law enforcement requests when a bad actor uses Facebook for an allegedly illegal activity. But Mr. Zuckerberg’s managers are really busy with higher priority tasks. Some folks engaged in investigations of serious crimes must be patient. Presumably the investigators can pass their time scrolling through #Shorts. If the Guardian’s article is accurate, those Facebook employees can now learn how Pegasus works. Will any of those learnings stick? One hopes not.

Several observations:

  1. Companies which make specialized software guard their systems and methods carefully. Well, that used to be true.
  2. The reorganization of NSO Group has not lowered the firm’s public relations profile. NSO Group can make headlines, which may not be desirable for those engaged in national security.
  3. Disclosure of the specific Pegasus systems and methods will get a warm, enthusiastic reception from those who exchange ideas for malware and related tools on private Telegram channels, Dark Web discussion groups, or via one of the “stealth” communication services which pop up like mushrooms after rain in rural Kentucky.

Will the software Pegasus be terminated? I remain concerned that source code revealing how to perform certain tasks may lead to downstream, unintended consequences. Specialized software companies try to operate with maximum security. Now Pegasus may be flying away unless another legal action prevents this.

Where is Zeus when one needs him?

Stephen E Arnold, March 7, 2024

Security Debt: So Just Be a Responsible User / Developer

February 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Security appears to be one of the next big things. Smart software strapped onto cyber safeguard systems is a no-lose proposition for vendors. Does it matter that bolted-on AI may not work? Nope. The important point is to ride the opportunity wave.

What’s interesting is that security is becoming a topic discussed at 75-something bridge groups and at lunch gatherings in government agencies concerned about fish and trees. Can third-party security services, grandmothers chasing a grand slam, or an expert in river fowl address security problems? I would suggest that the idea that security is the user’s responsibility is an interesting way to dodge responsibility. The estimable 23andMe tried this play, and I am not too sure that it worked.


Can security debt become the invisible hand creating opportunities for bad actors? Has the young executive reached the point of no return for a personal debt crisis? Thanks, MSFT Copilot Bing, for a good enough illustration.

Who can address the security issues in the software people and organizations use today? “Why Software Security Debt Is Becoming a Serious Problem for Developers” states:

Over 70% of organizations have software containing flaws that have remained unfixed for longer than a year, constituting security debt,

Plus, the article asserts:

46% of organizations were found to have persistent, high-severity flaws that went unaddressed for over a year

Security issues exist. But the question is, “Who will address these flaws, gaps, and mistakes?”

The article cites an expert who opines:

“The further that you shift [security testing] to the developer’s desktop and have them see it as early as possible so they can fix it, the better, because number one it’s going to help them understand the issue more and [number two] it’s going to build the habits around avoiding it.”

But who is going to fix the security problems?

In-house developers may not have the expertise, or access to the uncompiled source code, needed to identify and remediate flaws. Open source and other third-party software can change without notice, whether because maintainers do what is best for themselves or because bad actors manipulate open source projects and the “approved” apps available from a large technology company’s online store.
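
What might the “developer’s desktop” check the expert recommends actually look like? Here is a minimal, purely illustrative sketch of mine (the article offers no code). It assumes the open source pip-audit tool is installed and simply refuses to proceed when known-vulnerable dependencies turn up; a real shop would pick the scanner that matches its stack and wire the same gate into CI.

```python
# Sketch of a "shift left" gate: refuse to proceed if the project's pinned
# dependencies have known vulnerabilities. Assumes the open source pip-audit
# tool is installed (pip install pip-audit); adapt to whatever scanner you use.
import subprocess
import sys

def audit_dependencies(requirements_file: str = "requirements.txt") -> bool:
    """Run pip-audit against a requirements file; return True if it comes back clean."""
    result = subprocess.run(
        ["pip-audit", "-r", requirements_file],
        capture_output=True,
        text=True,
    )
    # pip-audit exits non-zero when it finds vulnerable packages (or fails to run).
    if result.returncode != 0:
        print(result.stdout)
        print(result.stderr, file=sys.stderr)
        return False
    return True

if __name__ == "__main__":
    if not audit_dependencies():
        print("Known-vulnerable dependencies found; fix them before shipping.")
        sys.exit(1)
    print("Dependency audit clean.")
```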

The article offers a number of suggestions, but none of these strike me as practical for some or most organizations.

Here’s the problem: Security is not a priority until a problem surfaces. Then when a problem becomes known, the delay between compromise, discovery, and public announcement can be — let’s be gentle — significant. Once a cyber security vendor “discovers” the problem or learns about it from a customer who calls and asks, “What has happened?”, the PR machines grind into action.

The “fixes” are typically rush jobs for these reasons:

  1. The vendor and the developer who made the zero a one do not earn money by fixing old code. Another factor is that the person or team responsible for the misstep is long gone, working as an Uber driver, or sitting in a rocking chair in a warehouse for the elderly.
  2. The complexity of “going back” and making a fix may create other problems. The dependencies are often unknown, so a fix may simply spawn new issues. Writing a shim or wrapper code may be good enough to get the angry dogs to calm down and stop barking.
  3. The security flaw may be unfixable; that is, the original approach includes, and may even depend on, the flaws for performance, expediency, or some quite revenue-centric reason. No one wants to rebuild a Pinto that explodes in a rear-end collision. Let the lawyers deal with it. When it comes to code, lawyers are definitely equipped to resolve security problems.

The write up contains a number of statistics, but it makes one major point:

Security debt is mounting.

Like a young worker who lives by moving credit card debt from issuer to issuer, the industry may find that getting out of the debt hole is almost impossible. But, hey, it is that individual’s responsibility, not the system’s. Just be responsible. That is easy to say, and it strikes me as somewhat hollow.

Stephen E Arnold, February 15, 2024

Ho-Hum Write Up with Some Golden Nuggets

January 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Anthropic Confirms It Suffered a Data Leak.” I know. I know. Another security breach involving an outfit working with the Bezos bulldozer and Googzilla. Snore. But in the write up, tucked away were a couple of statements I found interesting.


“Hey, pardner, I found an inconsistency.” Two tries for a prospector and a horse. Good enough, MSFT Copilot Bing thing. I won’t ask about your secure email.

Here are those items:

  1. Microsoft, Amazon and others are being asked by a US government agency “to provide agreements and rationale for collaborations and their implications; analysis of competitive impact; and information on any other government entities requesting information or performing investigations.” Regulatory scrutiny of the techno feudal champions?
  2. The write up asserts: “Anthropic has made a “long-term commitment” to provide AWS customers with “future generations” of its models through Amazon Bedrock, and will allow them early access to unique features for model customization and fine-tuning purposes.” Love at first sight?
  3. And a fascinating quote from a Googler. Note: I have put in bold some key words which I found interesting:

“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” Google Cloud CEO Thomas Kurian said in a statement on their relationship. “This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”

Yeah, but the article is called “Anthropic Confirms It Suffered a Data Leak.” What’s with the securely?

Ah, regulatory scrutiny and obvious inconsistency. Ho-hum with a good enough tossed in for spice.

Stephen E Arnold, January 30, 2024
