Microsoft: Security Debt and a Cooked Goose

May 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Microsoft has a deputy security officer. Who is it? For reasons of security, I don’t know. What I do know is that our test VPNs no longer work. That’s a good way to enforce reduced security: Just break Windows 11. (Oh, the pushed messages work just fine.)


Is Microsoft’s security goose cooked? Thanks, MSFT Copilot. Keep following your security recipe.

I read “At Microsoft, Years of Security Debt Come Crashing Down.” The idea is that technical debt has little hidden chambers, in this case, security debt. The write up says:

…negligence, misguided investments and hubris have left the enterprise giant on its back foot.

How has Microsoft responded? With a great financial report and this type of news:

… in early April, the federal Cyber Safety Review Board released a long-anticipated report which showed the company failed to prevent a massive 2023 hack of its Microsoft Exchange Online environment. The hack by a People’s Republic of China-linked espionage actor led to the theft of 60,000 State Department emails and gained access to other high-profile officials.

Bad? Not as bad as this reminder that there are some concerning issues.

What is interesting is that big outfits, government agencies, and start ups just use Windows. It’s ubiquitous, relatively cheap, and good enough. Apple’s software is fine, but it is different. Linux has its fans, but it is work. Therefore, hello Windows and Microsoft.

The article states:

Just weeks ago, the Cybersecurity and Infrastructure Security Agency issued an emergency directive, which orders federal civilian agencies to mitigate vulnerabilities in their networks, analyze the content of stolen emails, reset credentials and take additional steps to secure Microsoft Azure accounts.

The problem is that Microsoft has been successful in becoming, for many government and commercial entities, the only game in town. This warrants several observations:

  1. The Microsoft software ecosystem may be impossible to secure due to its size and complexity
  2. Government entities from America to Zimbabwe find the software “good enough”
  3. Security — despite the chit chat — is expensive and often given cursory attention by system architects, programmers, and clients.

The hope is that smart software will identify, mitigate, and choke off the cyber threats. At cyber security conferences, I wonder if the attendees are paying attention to Emily Dickinson (the sporty nun of Amherst), who wrote:

Hope is the thing with feathers
That perches in the soul
And sings the tune without the words
And never stops at all.

My thought is that more than hope may be necessary. Hope in AI is the cute security trick of the day. Instead of a happy bird, we may end up with a cooked goose.

Stephen E Arnold, May 3, 2024

Security Conflation: A Semantic Slippery Slope to Persistent Problems

May 2, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

My view is that secrets can be useful. When discussing who has what secret, I think it is important to understand who the players / actors are. When I explain how to perform a task to a contractor in the UK, my transfer of information is a secret; that is, I don’t want others to know the trick to solve a problem that can take others hours or days to resolve. The context is an individual knows something and transfers that specific information so that it does not become a TikTok video. Other secrets are used by bad actors. Some are used by government officials. Commercial enterprises — for example, pharmaceutical companies wrestling with an embarrassing finding from a clinical trial — have their secrets too. Blue-chip consulting firms are bursting with information which is unknown by all but a few individuals.


Good enough, MSFT Copilot. After “all,” you are the expert in security.

I read “Hacker Free-for-All Fights for Control of Home and Office Routers Everywhere.” I am less interested in the details of shoddy security and how it is exploited by individuals and organizations. What troubles me is the use of these words: “All” and “Everywhere.” Categorical affirmatives are problematic in today’s datasphere. The write up conflates any entity working for a government with any bad actor intent on committing a crime, treating both as cut from the same cloth.

The write up makes two quite different types of behavior identical. The impact of such conflation, in my opinion, is to suggest:

  1. Government entities are criminal enterprises, using techniques and methods which are in violation of the “law”. I assume that the law is a moral or ethical instruction emitted by some source and known to be a universal truth. For the purposes of my comments, let’s assume the essay’s analysis is responding to some higher authority and anchored on that “universal” truth. (Remember the danger of all and everywhere.)
  2. Bad actors break laws just like governments, and, therefore, both are criminals. If true, these people and entities must be punished.
  3. Some higher authority — not identified in the write up — must step in and bring these evil doers to justice.

The problem is that there is a substantive difference among the conflated bad actors. Those engaged in enforcing laws or protecting a nation state are, one hopes, acting within that specific context; that is, the laws, rules, and conventions of that nation state. When one investigator or analyst seeks “secrets” from an adversary, the reason for the action is, in my opinion, easy to explain: The actor followed the rules spelled out by the context / nation state for which the actor works. If one doesn’t like how France runs its railroad, move to Saudi Arabia. In short, find a place to live where the behaviors of the nation state match up with one’s individual perceptions.

When a bad actor — for example a purveyor of child sexual abuse material on an encrypted messaging application operating in a distributed manner from a country in the Middle East — does his / her business, government entities want to shut down the operation. Substitute any criminal act you want, and the justification for obtaining information to neutralize the bad actor is at least understandable to the child’s mother.

The write up dances into the swamp of conflation in an effort to make clear that the system and methods of good and bad actors are the same. That’s the way life is in the datasphere.

The real issue, however, is not the actors who exploit the datasphere. In my view, the problems begin with:

  • Shoddy, careless, or flawed security created and sold by commercial enterprises
  • Lax, indifferent, and false economies of individuals and organizations when dealing with the security of their operating environment
  • Failure of regulatory authorities to certify that specific software and hardware meet requirements for security.

How does the write up address fixing the conflation problem, the true root of security issues, and the fact that exploited flaws persist for years? I noted this passage:

The best way to keep routers free of this sort of malware is to ensure that their administrative access is protected by a strong password, meaning one that’s randomly generated and at least 11 characters long and ideally includes a mix of letters, numbers, or special characters. Remote access should be turned off unless the capability is truly needed and is configured by someone experienced. Firmware updates should be installed promptly. It’s also a good idea to regularly restart routers since most malware for the devices can’t survive a reboot. Once a device is no longer supported by the manufacturer, people who can afford to should replace it with a new one.
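
The “randomly generated and at least 11 characters” advice is easy to act on. Here is a minimal sketch using Python’s standard library; it is my illustration, not something from the article, and the 16-character length is an arbitrary choice above the suggested minimum.

import secrets
import string

def strong_password(length: int = 16) -> str:
    # Draw from letters, digits, and punctuation using a cryptographic RNG.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that contain every character class.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(strong_password())  # paste the result into the router's admin settings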

Right. Blame the individual user. But that individual is just one part of the “problem.” The damage done by conflation and by failing to focus on the root causes remains. Therefore, we live in a compromised environment. Muddled thinking makes life easier for bad actors and harder for those who are charged with enforcing rules and regulations. Okay, mom, change your password.

Stephen E Arnold, May 2, 2024

A High-Tech Best Friend and Campfire Lighter

May 1, 2024

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

A dog is allegedly man’s best friend. I have a French bulldog, Tibby, and I am not 100 percent sure that’s an accurate statement. But I have a way to get the pal I have wanted for years.

 Ars Technica reports “You Can Now Buy a Flame-Throwing Robot Dog for Under $10,000” from Ohio-based maker Throwflame. See the article for footage of this contraption setting fire to what appears to be a forest. Terrific. Reporter Benj Edwards writes:

“Thermonator is a quadruped robot with an ARC flamethrower mounted to its back, fueled by gasoline or napalm. It features a one-hour battery, a 30-foot flame-throwing range, and Wi-Fi and Bluetooth connectivity for remote control through a smartphone. It also includes a LIDAR sensor for mapping and obstacle avoidance, laser sighting, and first-person view (FPV) navigation through an onboard camera. The product appears to integrate a version of the Unitree Go2 robot quadruped that retails alone for $1,600 in its base configuration. The company lists possible applications of the new robot as ‘wildfire control and prevention,’ ‘agricultural management,’ ‘ecological conservation,’ ‘snow and ice removal,’ and ‘entertainment and SFX.’ But most of all, it sets things on fire in a variety of real-world scenarios.”

And what does my desired dog look like? The GenY Tibby asleep at work? Nope.

[Image: Throwflame’s Thermonator flame-throwing robot dog]

I hope my Thermonator includes an AI at the controls. Maybe that will be an add-on feature in 2025? Unitree, maker of the robot base mentioned above, once vowed (along with five other robotics firms) to oppose the weaponization of its products. Perhaps Throwflame won them over with assertions that their device is not technically a weapon, since flamethrowers are not considered firearms by federal agencies. It is currently legal to own this mayhem machine in 48 states; certain restrictions apply in Maryland and California. How many crazies can get their hands on that kind of power for a mere $9,420 plus tax? Even factoring in the cost of napalm (sold separately), probably quite a few.

Cynthia Murrell, May 1, 2024

From the Cyber Security Irony Department: We Market and We Suffer Breaches. Hire Us!

April 24, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Irony, according to You.com, means:

Irony is a rhetorical device used to express an intended meaning by using language that conveys the opposite meaning when taken literally. It involves a noticeable, often humorous, difference between what is said and the intended meaning. The term “irony” can be used to describe a situation in which something which was intended to have a particular outcome turns out to have been incorrect all along. Irony can take various forms, such as verbal irony, dramatic irony, and situational irony. The word “irony” comes from the Greek “eironeia,” meaning “feigned ignorance”

I am not sure I understand the definition, but let’s see if these two “communications” capture the smart software’s definition.

The first item is an email I received from the cyber security firm Palo Alto Networks. The name evokes the green swards of Stanford University, the wonky mall, and the softball games (co-ed, of course). Here’s the email solicitation I received on April 15, 2024:

[Image: the Palo Alto Networks email invitation]

The message is designed to ignite my enthusiasm because the program invites me to:

Join us to discover how you can harness next-generation, AI-powered security to:

  • Solve for tomorrow’s security operations challenges today
  • Enable cloud transformation and deployment
  • Secure hybrid workforces consistently and at scale
  • And much more.

I liked the “much more.” Most cyber outfits do road shows. Will I drive from outside Louisville, Kentucky, to Columbus, Ohio? I was thinking about it until I read:

“Major Palo Alto Security Flaw Is Being Exploited via Python Zero-Day Backdoor.”

Maybe it is another Palo Alto outfit. When I worked in Foster City (home of the original born-dead mall), I think there was a Palo Alto Pizza. But my memory is fuzzy and Plastic Fantastic Land does blend together. Let’s look at the write up:

For weeks now, unidentified threat actors have been leveraging a critical zero-day vulnerability in Palo Alto Networks’ PAN-OS software, running arbitrary code on vulnerable firewalls, with root privilege. Multiple security researchers have flagged the campaign, including Palo Alto Networks’ own Unit 42, noting a single threat actor group has been abusing a vulnerability called command injection, since at least March 26 2024.

Yep, seems to be the same outfit wanting me to “solve for tomorrow’s security operations challenges today.” The only issue is that the exploit was discovered a couple of weeks ago. If the write up is accurate, the exploit remains unfixed.
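
For readers unfamiliar with the vulnerability class Unit 42 named, command injection is the generic pattern in which untrusted input reaches a shell and runs as additional commands. A minimal Python sketch of the pattern follows; it is my generic illustration and has nothing to do with PAN-OS internals.

import subprocess

def ping_unsafe(host: str) -> str:
    # Vulnerable pattern: the string is handed to a shell, so input such as
    # "8.8.8.8; cat /etc/passwd" runs an attacker-chosen second command.
    return subprocess.run(f"ping -c 1 {host}", shell=True,
                          capture_output=True, text=True).stdout

def ping_safe(host: str) -> str:
    # Safer pattern: no shell; the host is passed as a single argument and
    # cannot be reinterpreted as extra commands.
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True).stdout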

Perhaps this is an example of irony? However, I think it is a better example of the over-the-top yip yap about smart software and the efficacy of cyber security systems. Yes, I know it is a zero day, but it is a zero day only to Palo Alto. The bad actors who found the problem and exploited it already know the company has a security issue.

I mentioned in articles about some intelware that the developers do one thing, the software project manager does another, and the marketers output what amounts to hoo hah, baloney, and Marketing 101 hyperbole.

Yep, ironic.

Stephen E Arnold, April 24, 2024

Is This Incident the Price of Marketing: A Lesson for Specialized Software Companies

April 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A comparatively small number of firms develop software and provide specialized services to analysts, law enforcement, and intelligence entities. When I started work at a nuclear consulting company, these firms were low profile. In fact, if one tried to locate the names of the companies in one of those almost-forgotten reference books (remember telephone books?), the job was a tough one. First, the firms would have names which meant zero; for example, Rice Labs or Gray & Associates. Next, if one were to call, a human (often a person with a British accent) would politely inquire, “To whom did you wish to speak?” The answer had to conform to a list of acceptable responses. Third, if you were to hunt up the address, you might find yourself in Washington, DC, staring at the second floor of a non-descript building once used to bake pretzels.


Decisions, decisions. Thanks, MSFT Copilot. Good enough. Does that phrase apply to one’s own security methods?

Today, the world is different. Specialized firms in a country now engaged in a controversial dust up in the Eastern Mediterranean have Web sites, publicize their capabilities as mechanisms to know your customer or make sense of big data, and maintain trade show presences. One outfit, despite being the poster child for going off the rails, gives lectures and provides previews of its technologies at public events. How times have changed since I began commercial and government work in the early 1970s.

Every company, including those engaged in the development and deployment of specialized policeware and intelware, is into marketing. The reason is cultural. Madison Avenue is the whoo-whoo part of doing something quite interesting and wanting to talk about the activity. The other reason is financial. Cracking tough technical problems costs money, and those who have the requisite skills are in demand. The trick, from my point of view, is to try to operate with a public presence while doing the less visible, often secret work required of these companies. The evolution of the specialized software business has been similar to figuring out how to walk a high wire over a circus crowd. Stay on the wire and the outfit is visible and applauded. Fall off the wire and fail big time. But more and more specialized software vendors make the decision to try to become visible and get recognition for their balancing act. I think the optimal approach is to stay out of the big tent and avoid the temptations of fame, bright lights, and falling to one’s death.

“Why CISA Is Warning CISOs about a Breach at Sisense” provides a good example of public visibility and falling off the high wire. The write up says:

New York City based Sisense has more than a thousand customers across a range of industry verticals, including financial services, telecommunications, healthcare and higher education. On April 10, Sisense Chief Information Security Officer Sangram Dash told customers the company had been made aware of reports that “certain Sisense company information may have been made available on what we have been advised is a restricted access server (not generally available on the internet.)”

Let me highlight one other statement in the write up:

The incident raises questions about whether Sisense was doing enough to protect sensitive data entrusted to it by customers, such as whether the massive volume of stolen customer data was ever encrypted while at rest in these Amazon cloud servers. It is clear, however, that unknown attackers now have all of the credentials that Sisense customers used in their dashboards.
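
The encryption-at-rest question in that passage is concrete enough to illustrate. A minimal sketch, assuming an AWS S3 bucket and the boto3 library; the bucket and key names are hypothetical, and this is a generic pattern, not anything Sisense actually deployed.

import boto3

s3 = boto3.client("s3")

# Turn on default server-side encryption (SSE-KMS) for a bucket so that
# objects written to it are encrypted at rest without per-upload flags.
s3.put_bucket_encryption(
    Bucket="example-analytics-exports",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-data-key",  # hypothetical key alias
                }
            }
        ]
    },
)

Encrypting at rest does not, of course, help once valid credentials walk out the door, which is the other half of the Sisense story.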

This firm enjoys some visibility because it markets itself using the hot button “analytics.” The function of some of the Sisense technology is to integrate “analytics” into other products and services. Thus it is an infrastructure company, but one that may have more capabilities than other types of firms. The company has non-commercial customers as well. If one wants to get “inside” data, Sisense has done a good job of marketing. The visibility makes it easy to watch. Someone with skills and a motive can put grease on the high wire. The article explains what happens when the actor slips up: “More than a thousand customers.”

How can a specialized software company avoid a breach? One step is to avoid visibility. Another is to curtail dreams of big money. Redefine success because those in your peer group won’t care much about you with or without big bucks. But that is just not part of the game plan of many specialized software companies today. Each time I visit a trade show featuring specialized software firms as speakers and exhibitors I marvel at the razz-ma-tazz the firms bring to the show. Yes, there is competition. But when specialized software companies, particularly those in the policeware and intelware business, market to both commercial and non-commercial customers, that outreach increases their visibility. The visibility attracts bad actors the way Costco roasted chicken makes my French bulldog shiver with anticipation. Tibby wants that chicken. But he is not a bad actor and will not get out of bounds. Others do get out of bounds. The fix is to move the chicken, then put it in the fridge. Tibby will turn his attention elsewhere. He is a dog.

Net net: Less blurring of commercial and specialized customer services might be useful. Fewer blogs, podcasts, crazy marketing programs, and oddly detailed marketing write ups to government agencies. (Yes, these documents can be FOIAed by the Brennan folks, for instance. Yes, those brochures and PowerPoints can find their way to public repositories.) Less marketing. More judgment. Increased security attention, please.

Stephen E Arnold, April 12, 2024

Information: Cheap, Available, and Easy to Obtain

April 9, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I worked in Sillycon Valley and learned a few factoids I found somewhat new. Let me highlight three. First, a person with whom my firm had a business relationship told me, “Chinese people are Chinese for their entire life.” I interpreted this to mean  that a person from China might live in Mountain View, but that individual had ties to his native land. That makes sense but, if true, the statement has interesting implications. Second, another person told me that there was a young person who could look at a circuit board and then reproduce it in sufficient detail to draw a schematic. This sounded crazy to me, but the individual took this person to meetings, discussed his company’s interest in upcoming products, and asked for briefings. With the delightful copying machine in tow, this person would have information about forthcoming hardware, specifically video and telecommunications devices. And, finally, via a colleague I learned of an individual who was a naturalized citizen and worked at a US national laboratory. That individual swapped hard drives in photocopy machines and provided them to a family member in his home town in Wuhan. Were these anecdotes true or false? I assumed each held a grain of truth because technology adepts from China and other countries comprised a significant percentage of the professionals I encountered.


Information flows freely in US companies and other organizational entities. Some people bring buckets and collect fresh, pure data. Thanks, MSFT Copilot. If anyone knows about security, you do. Good enough.

I thought of these anecdotes when I read an allegedly accurate “real” news story called “Linwei Ding Was a Google Software Engineer. He Was Also a Prolific Thief of Trade Secrets, Say Prosecutors.” The subtitle is a bit more spicy:

U.S. officials say some of America’s most prominent tech firms have had their virtual pockets picked by Chinese corporate spies and intelligence agencies.

The write up, which may be shaped by art history majors on a mission, states:

Court records say he had others badge him into Google buildings, making it appear as if he were coming to work. In fact, prosecutors say, he was marketing himself to Chinese companies as an expert in artificial intelligence — while stealing 500 files containing some of Google’s most important AI secrets…. His case illustrates what American officials say is an ongoing nightmare for U.S. economic and national security: Some of America’s most prominent tech firms have had their virtual pockets picked by Chinese corporate spies and intelligence agencies.

Several observations about these allegedly true statements are warranted this fine spring day in rural Kentucky:

  1. Some managers assume that when an employee or contractor signs a confidentiality agreement, the employee will abide by that document. The problem arises when the person shares information with a family member, a friend from school, or with a company paying for information. That assumption underscores what might be called “uninformed” or “naive” behavior.
  2. The language barrier and certain cultural norms lock out many people who assume idle chatter and obsequious behavior signals respect and conformity with what some might call “US business norms.” Cultural “blindness” is not uncommon.
  3. Individuals may possess technical expertise unknown to colleagues and contracting firms offering body shop services. For a person with knowledge of the photocopiers in certain US government entities, swapping out a hard drive is no big deal. A failure to appreciate an ability to draw a circuit leads to similar ineptness when discussing confidential information.

America operates in a relatively open manner. I have lived and worked in other countries, and that openness often allows information to flow. Assumptions about behavior are not based on an understanding of the cultural norms of other countries.

Net net: The vulnerability is baked in. Therefore, information is often easy to get, difficult to keep privileged, and often aided by companies and government agencies. Is there a fix? No, not without a bit more managerial rigor in the US. Money talks, moving fast and breaking things makes sense to many, and information seeps, maybe floods, from the resulting cracks.  Whom does one trust? My approach: Not too many people regardless of background, what people tell me, or what I believe as an often clueless American.

Stephen E Arnold, April 9, 2024

Another Bottleneck Issue: Threat Analysis

April 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My general view of software is that it is usually good enough. You just cannot get ahead of the problems. For example, I recall doing a project to figure out why Visio (an early version) simply did not do what the marketing collateral said it did. We poked around, and in short order, we identified features that were not implemented or did not work as advertised. Were we surprised? Nah. That type of finding holds for consumer software as well as enterprise software. I recall waiting for someone who worked at Fast Search & Transfer in North Carolina to figure out why hit boosting was not functioning. The reason, if memory serves, was that no one had completed the code. What about security of the platform? Not discussed: The enthusiastic worker in North Carolina turned his attention to the task, but it took time to address the issue. The intrepid engineer encountered “undocumented dependencies.” These are tough to resolve when coders disappear, change jobs, or don’t know how to make something work. These functional issues stack up, and many are never resolved. Many are not considered in terms of security. Even worse, the fix applied by a clueless intern fascinated with Foosball screws something up because… the “leadership team” consists of former consultants, accountants, and lawyers. Not too many professionals with MBAs, law degrees and expertise in SEC accounting requirements are into programming, security practices, and technical details. These stellar professionals gain technical expertise watching engineers with PowerPoint presentations. The meetings feature this popular question: “Where’s the lunch menu?”


The person in the row boat is going to have a difficult time dealing with software flaws and cyber security issues which emulate the gusher represented in the Microsoft Copilot illustration. Good enough image, just like good enough software security.

I read “NIST Unveils New Consortium to Operate National Vulnerability Database.” The focus is on software which invites bad actors to the Breach Fun Park. The write up says:

In early March, many security researchers noticed a significant drop in vulnerability enrichment data uploads on the NVD website that had started in mid-February. According to its own data, NIST has analyzed only 199 Common Vulnerabilities and Exposures (CVEs) out of the 2957 it has received so far in March. In total, over 4000 CVEs have not been analyzed since mid-February. Since the NVD is the most comprehensive vulnerability database in the world, many companies rely on it to deploy updates and patches.
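
Anyone who wants to watch the backlog can pull the numbers directly. Here is a minimal sketch against the public NVD CVE API 2.0; the field names (totalResults, vulnerabilities, vulnStatus) reflect my reading of the API documentation, so verify them against the current schema, and note that the service rate-limits unauthenticated callers.

import requests
from collections import Counter

# Public NVD CVE API 2.0 endpoint.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def status_counts(pub_start: str, pub_end: str) -> Counter:
    # Count CVEs published in the window by their vulnStatus value
    # (e.g. "Analyzed" versus "Awaiting Analysis").
    counts = Counter()
    start_index, total = 0, None
    while total is None or start_index < total:
        resp = requests.get(
            NVD_URL,
            params={
                "pubStartDate": pub_start,   # ISO-8601; add a UTC offset if the service requires one
                "pubEndDate": pub_end,
                "resultsPerPage": 2000,
                "startIndex": start_index,
            },
            timeout=60,
        )
        resp.raise_for_status()
        data = resp.json()
        total = data["totalResults"]
        for item in data["vulnerabilities"]:
            counts[item["cve"].get("vulnStatus", "Unknown")] += 1
        if data["resultsPerPage"] == 0:
            break
        start_index += data["resultsPerPage"]
    return counts

# Roughly the window the article discusses (the API caps date ranges at 120 days).
print(status_counts("2024-02-12T00:00:00.000", "2024-03-31T23:59:59.999"))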

The backlog is more than 3,800 vulnerability issues. The original fix was to shut down the US National Vulnerability Database. Yep, this action was kicked around at the exact same time as cyber security fires were blazing in a certain significant vendor providing software to the US government and when embedded exploits in open source software were making headlines.

How does one solve the backlog problem? In the examples I mentioned in the first paragraph of this essay, there was a single player and a single engineer who was supposed to solve the problem. Forget dependencies; just make the feature work in a manner that is good enough. Where does a government agency get a one-engineer-to-one-issue set up?

Answer: Create a consortium, a voluntary one to boot.

I have a number of observations to offer, but I will skip these. The point is that software vulnerabilities have overwhelmed a government agency. The commercial vendors issue news releases about each new “issue” a specific team or a specific individual (in the case of Microsoft) has identified. However, vendors rarely stumble upon the same issue. We identified a vector for ransomware which we will explain in our April 24, 2024, National Cyber Crime Conference lecture.

Net net: Software vulnerabilities illustrate the backlog problem associated with any type of content curation or software issue. The volume is overwhelming available resources. What’s the fix? (You will love this answer.) Artificial intelligence. Yep, sure.

Stephen E Arnold, April 8, 2024

Who Is Responsible for Security Problems? Guess, Please

March 28, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In my opinion, “Zero-Days Exploited in the Wild Jumped 50% in 2023, Fueled by Spyware Vendors” is a semi-sophisticated chunk of content marketing and an example of information shaping. The source of the “report” is Google. The article appears in what was a Google- and In-Q-Tel-backed company publication. The company is named “Recorded Future” and appears to be owned in whole or in part by a financial concern. In a separate transaction, Google purchased a cyber security outfit called Mandiant, which provides services to government and commercial clients. This is an interesting collection of organizations and each group’s staff of technical professionals.


The young players are arguing about whose shoulders will carry the burden of the broken window. The batter points to the fielder. The fielder points to the batter. Watching are the coaches and teammates. Everyone, it seems, is responsible. So who will the automobile owner hold responsible? That’s a job for the lawyer retained by the entity with the deepest pockets and an unfettered communications channel. Nice work, MSFT Copilot. Is this scenario one with which you are familiar?

The article contains what seems to me quite shocking information; that is, companies providing specialized services to government agencies like law enforcement and intelligence entities are compromising the security of mobile phones. What’s interesting is that Google’s Android software is one of the more widely used “enablers” of what is now a ubiquitous computing device.

I noted this passage:

Commercial surveillance vendors (CSVs) were the leading culprit behind browser and mobile device exploitation, with Google attributing 75% of known zero-day exploits targeting Google products as well as Android ecosystem devices in 2023 (13 of 17 vulnerabilities). [Emphasis added. Editor.]

Why do I find the article intriguing?

  1. This “revelatory” write up can be interpreted to mean that spyware vendors have to be put in some type of quarantine, possibly similar to those odd boxes in airports where people who smoke can partake of a potentially harmful habit. In the special “room,” these folks can be monitored, perhaps?
  2. The number of exploits parallels the massive number of security breaches created by widely used laptop, desktop, and server software systems. Bad actors have been attacking for many years, and now the sophistication and volume of cyber attacks seem to be increasing. Every few days cyber security vendors alert me to a new threat; for example, entering hotel rooms with Unsaflok. It seems that security problems are endemic.
  3. The “fix” or “remedial” steps involve users, government agencies, and industry groups. I interpret the argument as suggesting that companies developing operating systems need help and possibly cannot be responsible for these security problems.

The article can be read as a summary of recent developments in the specialized software sector and its careless handling of its technology. However, I think the article is suggesting that the companies building and enabling mobile computing are just victimized by bad actors, lousy regulations, and sloppy government behaviors.

Maybe? I believe I will tilt toward the content marketing purpose of the write up. The argument “Hey, it’s not us” is not convincing me. I think it will complement other articles that blur responsibility the way faces are blurred in some videos.

Stephen E Arnold, March 28, 2024

Can Ma Bell Boogie?

March 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

AT&T provides numerous communication and information services to the US government and companies. People see the blue and white trucks with obligatory orange cones and think nothing about their presence. Decades after Judge Greene rained on the AT&T monopoly parade, the company has regained some of its market chutzpah. The old-line Bell heads knew that would happen. One reason was the simple fact that communications services have a tendency to pool; that is, online, for instance, wants to be a monopoly. Like water, online and communication services seek the lowest level. One can grouse about a leaking basement, but one is complaining about a basic fact. Complain away, but the water pools. Similarly, AT&T benefits and knows how to make the best of this pooling, consolidating, and collecting reality.

I do miss the “old” AT&T. Say what you will about today’s destabilizing communications environment, just don’t forget that the pre-Judge Greene world produced useful innovations, provided hardware that worked, and made it possible for some government functions to work much better than those operations perform today.


Thanks, MSFT, it seems you understand ageing companies which struggle in the midst of the cyber whippersnappers.

But what’s happened?

In February 2024, AT&T experienced an outage. The redundant, fail-safe, state-of-the-art infrastructure failed. “AT&T Cellular Service Restored after Daylong Outage; Cause Still Unknown” reported:

AT&T said late Thursday [February 22, 2024] that based on an initial review, the outage was “caused by the application and execution of an incorrect process used as we were expanding our network, not a cyber attack.” The company will continue to assess the outage.

What do we publicly know about this remarkable event a month ago? Not much. I am not going to speculate how a single misstep can knock out AT&T, but it raises some questions about AT&T’s procedures, its security, and, yes, its technical competence. The AT&T Ashburn data center is an interesting cluster of facilities. Could it be “knocked offline”? My concern is that the answer to this question is, “You bet your bippy it could.”

A second interesting event surfaced as well. AT&T suffered a mysterious breach which appears to have compromised data about millions of “customers.” And “AT&T Won’t Say How Its Customers’ Data Spilled Online.” Here’s a statement from the report of the breach:

When reached for comment, AT&T spokesperson Stephen Stokes told TechCrunch in a statement: “We have no indications of a compromise of our systems. We determined in 2021 that the information offered on this online forum did not appear to have come from our systems. This appears to be the same dataset that has been recycled several times on this forum.”

Leaked data are no big deal, and the incident remains unexplained. The AT&T system went down essentially in one fell swoop. Plus there is no explanation which resonates with my understanding of the Bell “way.”

Some questions:

  1. What has AT&T accomplished by its lack of public transparency?
  2. Has the company lost its ability to manage a large, dynamic system due to cost cutting?
  3. Is a lack of training and perhaps capable staff undermining what I think of as “mission critical capabilities” for business and government entities?
  4. What are US regulatory authorities doing to address what is, in my opinion, a threat to the economy of the US and the country’s national security?

Couple the AT&T events with emerging technology like artificial intelligence, and the question becomes: Will the company make appropriate decisions, or will it create vulnerabilities typically associated with a dominant software company?

Not a positive set up in my opinion. Ma Bell, are you too old and fat to boogie?

Stephen E Arnold, March 26, 2024

AI Hermeneutics: The Fire Fights of Interpretation Flame

March 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My hunch is that not too many of the thumb-typing, TikTok generation know what hermeneutics means. Furthermore, like most of their parents, these future masters of the phone-iverse don’t care. “Let software think for me” would make a nifty T shirt slogan at a technology conference.

This morning (March 12, 2024) I read three quite different write ups. Let me highlight each and then link the content of those documents to the problem of interpretation of religious texts.


Thanks, MSFT Copilot. I am confident your security team is up to this task.

The first write up is a news story called “Elon Musk’s AI to Open Source Grok This Week.” The main point for me is that Mr. Musk will put the label “open source” on his Grok artificial intelligence software. The write up includes an interesting quote; to wit:

Musk further adds that the whole idea of him founding OpenAI was about open sourcing AI. He highlighted his discussion with Larry Page, the former CEO of Google, who was Musk’s friend then. “I sat in his house and talked about AI safety, and Larry did not care about AI safety at all.”

The implication is that Mr. Musk does care about safety. Okay, let’s accept that.

The second story is an ArXiv paper called “Stealing Part of a Production Language Model.” The authors are nine Googlers, two ETH wizards, one University of Washington professor, one OpenAI researcher, and one McGill University smart software luminary. In short, the big outfits are making clear that closed or open, software is rising to the task of revealing some of the inner workings of these “next big things.” The paper states:

We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2…. For under $20 USD, our attack extracts the entire projection matrix of OpenAI’s ada and babbage language models.
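
The core of that attack, as I read the paper, is plain linear algebra: the final layer maps a hidden state of dimension d to vocabulary-sized logits, so every logit vector lies in a d-dimensional subspace, and an SVD over enough API responses reveals d and the projection matrix up to a rotation. A toy numpy sketch of that idea follows; it is my illustration, not the authors’ code, and no real API is involved.

import numpy as np

# Toy setup: a "black box" model's final layer maps a hidden state h of
# dimension d to vocabulary-sized logits via W @ h. Every logit vector
# therefore lies in the d-dimensional column space of W, so an SVD of
# stacked logit responses reveals d even though d is never disclosed.
rng = np.random.default_rng(0)
vocab_size, hidden_dim, n_queries = 2000, 128, 512

W = rng.normal(size=(vocab_size, hidden_dim))   # the secret projection matrix
H = rng.normal(size=(hidden_dim, n_queries))    # hidden states for 512 prompts
logits = W @ H                                  # what the API would hand back

singular_values = np.linalg.svd(logits, compute_uv=False)
estimated_dim = int(np.sum(singular_values > 1e-6 * singular_values[0]))
print(f"True hidden dimension: {hidden_dim}, recovered estimate: {estimated_dim}")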

The third item is “How Do Neural Networks Learn? A Mathematical Formula Explains How They Detect Relevant Patterns.” The main idea of this write up is that software can perform an X-ray type analysis of a black box and present some useful data about the inner workings of numerical recipes about which many AI “experts” feign total ignorance.
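
The formula in that third write up centers, as I understand the underlying research, on the average gradient outer product: average the outer product of the network’s input gradients over the data, and the large diagonal entries point to the input features the model actually uses. A toy numpy sketch (my illustration, not the researchers’ code):

import numpy as np

rng = np.random.default_rng(1)
d_in, d_hidden, n_samples = 10, 64, 2000

# A tiny one-hidden-layer network whose output truly depends only on the
# first two input coordinates: columns 2..9 of W1 are zero.
W1 = np.zeros((d_hidden, d_in))
W1[:, :2] = rng.normal(size=(d_hidden, 2))
w2 = rng.normal(size=d_hidden)

def grad_f(x):
    # Gradient of f(x) = w2 . relu(W1 x) with respect to the input x.
    pre = W1 @ x
    relu_grad = (pre > 0).astype(float)
    return W1.T @ (relu_grad * w2)

X = rng.normal(size=(n_samples, d_in))

# Average gradient outer product: M = mean over samples of grad f(x) grad f(x)^T.
M = np.mean([np.outer(g, g) for g in (grad_f(x) for x in X)], axis=0)

# Large diagonal entries mark the input coordinates the network is sensitive to.
print(np.round(np.diag(M), 3))  # big for coordinates 0 and 1, zero elsewhere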

Several observations:

  1. Open source software is available to download largely without encumbrances. Good actors and bad actors can use this software and its components to let users put on a happy face or bedevil the world’s cyber security experts. Either way, smart software is out of the bag.
  2. In the event that someone or some organization has secrets buried in its software, those secrets can be exposed. Once the secret is known, the good actors and the bad actors can surf on that information.
  3. The notion of an attack surface for smart software now includes the numerical recipes and the model itself. Toss in the notion of data poisoning, and the notion of vulnerability must be recast from a specific attack to a much larger type of exploitation.

Net net: I assume the many committees, NGOs, and government entities discussing AI have considered these points and incorporated these articles into informed policies. In the meantime, the AI parade continues to attract participants. Who has time to fool around with the hermeneutics of smart software?

Stephen E Arnold, March 12, 2024
