Stretchy Security and Flexible Explanations from SEC and X
January 18, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Gizmodo presented an interesting write up about an alleged security issue involving the US Securities & Exchange Commission. Is this an important agency? I don’t know. “X Confirms SEC Hack, Says Account Didn’t Have 2FA Turned On” states:
Turns out that the SEC’s X account was hacked, partially because it neglected a very basic rule of online security.
“Well, Pa, that new security fence does not seem too secure to me,” observes the farmer’s wife. Flexibility and security with give are not optimal ways to protect the green. Thanks, MSFT Copilot Bing thing. Four tries and something good enough. Yes!
X.com — now known by some as the former Twitter or the Fail Whale outfit — puts the blame on the US SEC. That’s a familiar tactic in Silicon Valley. The users are at fault. Some people believe Google’s incognito mode is secret, and others assume that Apple iPhones do not have a backdoor. Wow, I believe these companies, don’t you?
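For the record, the neglected “very basic rule” — two-factor authentication — is cheap to adopt. Here is a minimal sketch, assuming the third-party pyotp library, of the time-based one-time password (TOTP) check behind most 2FA prompts. Illustration only; X’s actual 2FA stack is not public, and the account name is a placeholder.

```python
# A minimal TOTP (time-based one-time password) sketch using the
# third-party pyotp library. Illustrative only; not X's 2FA code.
import pyotp

secret = pyotp.random_base32()   # shared secret stored server-side at enrollment
totp = pyotp.TOTP(secret)

# The URI an authenticator app scans as a QR code (placeholder names).
print(totp.provisioning_uri(name="press@example.gov", issuer_name="ExampleCo"))

# At login: the six-digit code the user types must match the code derived
# from the shared secret and the current 30-second time window.
user_code = totp.now()           # stand-in for what the user would type
print("2FA passed?", totp.verify(user_code))
```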
The article reports:
[The] hacking episode temporarily threw the web3 community into chaos after the SEC’s compromised account made a post falsely claiming that the SEC had approved the much anticipated Bitcoin ETFs that the crypto world has been obsessed with of late. The claims also briefly sent Bitcoin on a wild ride, as the asset shot up in value temporarily, before crashing back down when it became apparent the news was fake.
My question is, “How stretchy and flexible are security systems available from outfits like Twitter (now X)?” Another question is, “How secure are government agencies?”
The apparent answer is, “Good enough.” That’s the high water mark in today’s world. Excellence? Meh.
Stephen E Arnold, January 18, 2024
Cybersecurity AI: Yet Another Next Big Thing
January 15, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Not surprisingly, generative AI has boosted the cybersecurity arms race. As bad actors use algorithms to more efficiently breach organizations’ defenses, security departments can only keep up by using AI tools. At least that is what VentureBeat maintains in “How Generative AI Will Enhance Cybersecurity in a Zero-Trust World.” Writer Louis Columbus tells us:
“Deep Instinct’s recent survey, Generative AI and Cybersecurity: Bright Future or Business Battleground? quantifies the trends VentureBeat hears in CISO interviews. The study found that while 69% of organizations have adopted generative AI tools, 46% of cybersecurity professionals feel that generative AI makes organizations more vulnerable to attacks. Eighty-eight percent of CISOs and security leaders say that weaponized AI attacks are inevitable. Eighty-five percent believe that gen AI has likely powered recent attacks, citing the resurgence of WormGPT, a new generative AI advertised on underground forums to attackers interested in launching phishing and business email compromise attacks. Weaponized gen AI tools for sale on the dark web and over Telegram quickly become best sellers. An example is how quickly FraudGPT reached 3,000 subscriptions by July.”
That is both predictable and alarming. What should companies do about it? The post warns:
“‘Businesses must implement cyber AI for defense before offensive AI becomes mainstream. When it becomes a war of algorithms against algorithms, only autonomous response will be able to fight back at machine speeds to stop AI-augmented attacks,’ said Max Heinemeyer, director of threat hunting at Darktrace.”
Before AI is mainstream? Better get moving. We’re told the market for generative AI cybersecurity solutions is already growing, and Forrester divides it into three use cases: content creation, behavior prediction, and knowledge articulation. Of course, Columbus notes, each organization will have different needs, so adaptable solutions are important. See the write-up for some specific tips and links to further information. The tools may be new but the dynamic is a constant: as bad actors up their game, so too must security teams.
Cynthia Murrell, January 15, 2024
Canada and Mobile Surveillance: Is It a Reality?
January 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
It appears a baker’s dozen of Canadian agencies are ignoring a longstanding federal directive on privacy protections. Yes, Canada. According to CBC/Radio-Canada, “Tools Capable of Extracting Personal Data from Phones Being Used by 13 Federal Departments, Documents Show.” The trend surprised even York University associate professor Evan Light, who filed the original access-to-information request. Reporter Brigitte Bureau shares:
“Tools capable of extracting personal data from phones or computers are being used by 13 federal departments and agencies, according to contracts obtained under access to information legislation and shared with Radio-Canada. Radio-Canada has also learned those departments’ use of the tools did not undergo a privacy impact assessment as required by federal government directive. The tools in question can be used to recover and analyze data found on computers, tablets and mobile phones, including information that has been encrypted and password-protected. This can include text messages, contacts, photos and travel history. Certain software can also be used to access a user’s cloud-based data, reveal their internet search history, deleted content and social media activity. Radio-Canada has learned other departments have obtained some of these tools in the past, but say they no longer use them. … ‘I thought I would just find the usual suspects using these devices, like police, whether it’s the RCMP or [Canada Border Services Agency]. But it’s being used by a bunch of bizarre departments,’ [Light] said.”
To make matters worse, none of the agencies had conducted the required Privacy Impact Assessments. A federal directive issued in 2002 and updated in 2010 requires such PIAs to be filed with the Treasury Board of Canada Secretariat and the Office of the Privacy Commissioner before any new activity that involves collecting or handling personal data. Light is concerned that agencies flat-out ignoring the directive means digital surveillance of citizens has become normalized. Join the club, Canada.
Cynthia Murrell, January 12, 2024
British Library: The Math of Can Kicking Security Down the Road
January 9, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read a couple of blog posts about the security issues at the British Library. I am not currently working on projects in the UK. Therefore, I noted the issue and moved on to more pressing matters. Examples range from writing about the antics of the Google to keeping my eye on the new leader of the highly innovative PR magnet, the NSO Group.
Two well-educated professionals kick a security can down the road. Why bother to pick it up? Thanks, MSFT Copilot Bing thing. I gave up trying to get you to produce a big can and big shoe. Sigh.
I read “British Library to Burn Through Reserves to Recover from Cyber Attack.” The weird orange newspaper usually has semi-reliable, actual factual information. The write up reports or asserts (the FT is a newspaper, after all):
The British Library will drain about 40 per cent of its reserves to recover from a cyber attack that has crippled one of the UK’s critical research bodies and rendered most of its services inaccessible.
I won’t summarize what the bad actors took down. Instead, I want to highlight another passage in the article:
Cyber-intelligence experts said the British Library’s service could remain down for more than a year, while the attack highlighted the risks of a single institution playing such a prominent role in delivering essential services.
A couple of themes emerge from these two quoted passages:
- Whatever cash the library has, spitting distance of half is going to be spent “recovering,” not improving, enhancing, or strengthening. Just “recovering.”
- The attack killed off “most” of the British Library’s services. Not a few. Not one or two. Just “most.”
- Concentration for efficiency leads to failure for downstream services. But concentration makes sense, right? Just ask library patrons.
My view of the situation is familiar if you have read other blog posts about Fancy Dan, modern methods. Let me summarize to brighten your day:
First, cyber security is a function that marketers exploit without addressing security problems. Those purchasing cyber security don’t know much. Therefore, the procurement officials are what a falcon might label “easy prey.” Bad for the chihuahua sometimes.
Second, when security issues are identified, many professionals don’t know how to listen. Therefore, a committee decides. Committees are outstanding bureaucratic tools. Obviously the British Library’s managers and committees may know about manuscripts. Security? Hmmm.
Third, a security failure can consume considerable resources in order to return to the status quo. One can easily imagine a scenario months or years in the future when the cost of recovery is too great. Therefore, the security breach kills the organization. Termination can be rationalized by a committee, probably affiliated with a bureaucratic structure further up the hierarchy.
I think the idea of “kicking the security can” down the road is a widespread characteristic of many organizations. Is the situation improving? No. Marketers move quickly to exploit weaknesses of procurement teams. Bad actors know this. Excitement ahead.
Stephen E Arnold, January 9, 2024
Cyber Security Software and AI: Man and Machine Hook Up
January 8, 2024
This essay is the work of a dumb dinobaby. No smart software required.
My hunch is that 2024 is going to be quite interesting with regard to cyber security. The race among policeware vendors to add “artificial intelligence” to their systems began shortly after Microsoft’s ChatGPT moment. Smart agents, predictive analytics coupled to text sources, and real-time alerts from smart image monitoring systems are three application spaces getting AI boosts. The efforts are commendable if over-hyped. One high-profile firm’s online webinar presented jargon and buzzwords but zero evidence of the conviction or closure value of the smart enhancements.
The smart cyber security software system outputs alerts which the system manager cannot escape. Thanks, MSFT Copilot Bing thing. You produced a workable illustration without slapping my request across my face. Good enough too.
Let’s accept as a working premise that everyone from my French bulldog to my neighbor’s ex-wife wants smart software to bring back the good old, pre-Covid, go-go days. Also, I stipulate that one should ignore the fact that smart software is a demonstration of how numerical recipes can output “good enough” data. Hallucinations, errors, and close-enough-for-horseshoes are part of the method. What’s the likelihood the door of a commercial aircraft would be removed from an aircraft in flight? Answer: Well, most flights don’t lose their doors. Stop worrying. Those are the rules for this essay.
Let’s look at “The I in LLM Stands for Intelligence.” I grant the title may not be the best one I have spotted this month, but here’s the main point of the article in my opinion. Writing about automated threat and security alerts, the essay opines:
When reports are made to look better and to appear to have a point, it takes a longer time for us to research and eventually discard it. Every security report has to have a human spend time to look at it and assess what it means. The better the crap, the longer time and the more energy we have to spend on the report until we close it. A crap report does not help the project at all. It instead takes away developer time and energy from something productive. Partly because security work is consider[ed] one of the most important areas so it tends to trump almost everything else.
The idea is that strapping on some smart software can increase the outputs from a security alerting system. Instead of helping the overworked and often reviled cyber security professional, the smart software makes it more difficult to figure out what a bad actor has done. The essay includes this blunt section heading: “Detecting AI Crap.” Enough said.
The idea is that more human expertise is needed. The smart software becomes a problem, not a solution.
I want to shift attention to the managers or the employee who caused a cyber security breach. In what is another zinger of a title, let’s look at this research report, “The Immediate Victims of the Con Would Rather Act As If the Con Never Happened. Instead, They’re Mad at the Outsiders Who Showed Them That They Were Being Fooled.” Okay, this is the ostrich method. Deny stuff by burying one’s head in digital sand like TikToks.
The write up explains:
The immediate victims of the con would rather act as if the con never happened. Instead, they’re mad at the outsiders who showed them that they were being fooled.
Let’s assume the data in this “Victims” write up are accurate, verifiable, and unbiased. (Yeah, I know that is a stretch.)
What do these two articles do to influence my view that cyber security will be an interesting topic in 2024? My answers are:
- Smart software will allegedly detect, alert, and warn of “issues.” The flow of “issues” may overwhelm or numb staff who must decide what’s real and what’s a fakeroo. Burdened staff can make errors, thus increasing security vulnerabilities or missing ones that are significant.
- Managers, like the staffer who lost a mobile phone with company passwords in a plain-text note file or an email called “passwords,” will blame whoever blows the whistle. The result is the willful refusal to talk about what happened, why, and the consequences. Examples range from big libraries in the UK to can-kicking hospitals in a flyover state like Kentucky.
- Marketers of remediation tools will have a banner year. Marketing collateral becomes a closed deal, making the art history majors who write the copy secure in their jobs at cyber security companies.
Will bad actors pay attention to smart software and the behavior of senior managers who want to protect share price or their own job? Yep. Close attention.
Stephen E Arnold, January 8, 2024
23AndMe: The Genetics of Finger Pointing
January 4, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Well, well, another Silicon Valley outfit with Google-type DNA relies on its hard-wired instincts. What’s the situation this time? “23andMe Tells Victims It’s Their Fault That Their Data Was Breached” relates a now well-known game plan for handling security problems. What’s the angle? Here’s what the story in TechCrunch asserts:
Some rhetorical tactics are exemplified by children who blame one another for knocking the birthday cake off the counter. Instinct for self-preservation creates these all-too-familiar situations. Are Silicon Valley-type outfits childish? Thanks, MSFT Copilot Bing thing. I had to change my image request three times to avoid the negative filter for arguing children. Your approach is good enough.
Facing more than 30 lawsuits from victims of its massive data breach, 23andMe is now deflecting the blame to the victims themselves in an attempt to absolve itself from any responsibility…
And the consequences? The US legal processes will determine what’s going to happen.
I particularly liked this statement from the TechCrunch article:
After disclosing the breach, 23andMe reset all customer passwords, and then required all customers to use multi-factor authentication, which was only optional before the breach. In an attempt to pre-empt the inevitable class action lawsuits and mass arbitration claims, 23andMe changed its terms of service to make it more difficult for victims to band together when filing a legal claim against the company. Lawyers with experience representing data breach victims told TechCrunch that the changes were “cynical,” “self-serving” and “a desperate attempt” to protect itself and deter customers from going after the company.
Several observations:
- I particularly like the angle that cyber security is not the responsibility of the commercial enterprise. The customers are responsible.
- The lack of consequences for corporate behaviors creates opportunities for some outfits to do some very fancy dancing. Since a company is a “Person,” Maslow’s hierarchy of needs kicks in.
- The genetics of some firms function with little regard for what some might call social responsibility.
The result is the situation which not even the original creative team for the 1980 film Airplane! (Flying High!) could have concocted.
Stephen E Arnold, January 4, 2024
Exploit Lets Hackers Into Google Accounts, PCs Even After Changing Passwords
January 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Google must be so pleased. The Register reports, “Google Password Resets Not Enough to Stop these Info-Stealing Malware Strains.” In October a hacker going by PRISMA bragged they had found a zero-day exploit that allowed them to log into Google users’ accounts even after the user had logged off. They could then use the exploit to generate a new session token and go after data in the victim’s email and cloud storage. It was not an empty boast, and it gets worse. Malware developers have since used the hack to create “info stealers” that infiltrate victims’ local data. (Mostly Windows users.) Yes, local data. Yikes. Reporter Connor Jones writes:
“The total number of known malware families that abuse the vulnerability stands at six, including Lumma and Rhadamanthys, while Eternity Stealer is also working on an update to release in the near future. They’re called info stealers because once they’re running on some poor sap’s computer, they go to work finding sensitive information – such as remote desktop credentials, website cookies, and cryptowallets – on the local host and leaking them to remote servers run by miscreants. Eggheads at CloudSEK say they found the root of the Google account exploit to be in the undocumented Google OAuth endpoint ‘MultiLogin.’ The exploit revolves around stealing victims’ session tokens. That is to say, malware first infects a person’s PC – typically via a malicious spam or a dodgy download, etc – and then scours the machine for, among other things, web browser session cookies that can be used to log into accounts. Those session tokens are then exfiltrated to the malware’s operators to enter and hijack those accounts. It turns out that these tokens can still be used to login even if the user realizes they’ve been compromised and change their Google password.”
So what are Google users to do when changing passwords is not enough to circumvent this hack? The company insists stolen sessions can be thwarted by signing out of all Google sessions on all devices. It is, admittedly, kind of a pain but worth the effort to protect the data on one’s local drives. Perhaps the company will soon plug this leak so we can go back to checking our Gmail throughout the day without logging in every time. Google promises to keep us updated. I love promises.
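What does killing a stolen session look like in practice? Here is a minimal sketch, assuming Python and the third-party requests library, that posts a token to Google’s documented OAuth 2.0 revocation endpoint. The token below is a placeholder, not a real credential, and this illustrates the general revocation mechanism, not Google’s specific fix for the MultiLogin abuse.

```python
# Sketch: invalidating a Google OAuth token via the documented /revoke
# endpoint. Assumes the third-party requests library is installed.
import requests

def revoke_google_token(token: str) -> bool:
    """POST the token to Google's revocation endpoint; True on success."""
    response = requests.post(
        "https://oauth2.googleapis.com/revoke",
        params={"token": token},
        headers={"content-type": "application/x-www-form-urlencoded"},
    )
    return response.status_code == 200  # Google answers HTTP 200 when revoked

# Usage with a placeholder token, not a real credential:
# revoke_google_token("ya29.EXAMPLE_TOKEN")
```

Signing out of every session from the Google account security page accomplishes the same thing without any code.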
Cynthia Murrell, January 3, 2024
AI: Are You Sure You Are Secure?
December 19, 2023
This essay is the work of a dumb dinobaby. No smart software required.
North Carolina State University published an interesting article. Are the data in the write up reproducible? I don’t know. I wanted to highlight the report in the hopes that additional information will be helpful to cyber security professionals. The article is “AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought.”
I noted this statement in the article:
Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.
A corporate decision maker looks at a point of vulnerability. One of his associates moves a sign which explains that smart software protects the castle and its crown jewels. Thanks, MSFT Copilot. Numerous tries, but I finally got an image close enough for horseshoes.
What is the specific point of alleged weakness?
At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system in order to confuse it.
The example presented in the article is that a bad actor manipulates data provided to the smart software; for example, causing an image or content to be deleted or ignored. Another use case is that a bad actor could cause an X-ray machine to present altered information to the analyst.
The write up includes a description of software called QuadAttacK. The idea is to watch how a network responds to “clean” data and then work out how those data can be manipulated to fool the system. Four different networks were tested. The report includes a statement from Tianfu Wu, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. He allegedly said:
“We were surprised to find that all four of these networks were very vulnerable to adversarial attacks,” Wu says. “We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.”
You can download the vulnerability testing tool at this link.
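The researchers’ QuadAttacK reportedly works by watching how networks respond to clean data; for a feel of the basic “adversarial attack” idea described above, here is a minimal sketch, assuming PyTorch and torchvision are installed, of the classic fast gradient sign method. It is not the researchers’ tool, just an illustration of how a tiny, targeted nudge to input data can flip a model’s decision.

```python
# A minimal FGSM-style adversarial perturbation, for illustration only.
# Not QuadAttacK: this is the classic fast gradient sign method, which
# nudges each input value in the direction that increases the model's loss.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.01) -> torch.Tensor:
    """Return a perturbed copy of `image` (shape [1, 3, H, W], values in [0, 1]).
    `label` is the true class index as a LongTensor of shape [1]."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong is the model now?
    loss.backward()                              # gradient of loss w.r.t. pixels
    # Step every pixel by +/- epsilon toward higher loss, then re-clip.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage: a perturbation this small is typically invisible to a person
# but can change the predicted class.
# adv = fgsm_perturb(clean_image, true_label)
```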
Here are the observations my team and I generated at lunch today (Friday, December 14, 2023):
- Poisoned data is one of the weak spots in some smart software
- The free tool gives bad actors with access to certain smart systems a way to identify points of vulnerability
- AI, at this time, may be better at marketing than protecting its reasoning systems.
Stephen E Arnold, December 19, 2023
Stressed Staff Equals Security Headaches
December 14, 2023
This essay is the work of a dumb dinobaby. No smart software required.
How many times does society need to say that happy employees mean a better, more profitable company? The world is apparently not getting the memo, because employees, especially IT workers, are overworked, stressed, exhausted, and burnt out like a blackened match. While zombie employees are bad for productivity, they’re even worse for cyber security. BetaNews reports on a survey from Adarma, a detection and response specialist company: “Stressed Staff Put Enterprises At Risk Of Cyberattack.”
The overworked IT person says, “Are these sticky notes your passwords?” The stressed out professional service worker replies, “Hey, buddy, did I ask you if your company’s security system actually worked? Yeah, you are one of those cyber security experts, right? Next!” Thanks, MSFT Copilot. I don’t think you had a human intervene to create this image like you know who.
The survey respondents believe they’re at greater risk of cyberattack due to the poor condition of their employees. Five hundred cybersecurity professionals from UK companies with over 2,000 employees were surveyed, and 51% believed their IT security teams are dead inside. This puts them at risk of digital danger. Over 40% of the cybersecurity leaders felt their skills were too limited to understand the threats they face. An additional 43% had little or zero expertise to detect or respond to threats to their enterprises.
IT people really love computers and technology, but when they’re working in an office environment and dealing with people, stress happens:
“‘Cybersecurity professionals are typically highly passionate people, who feel a strong personal sense of duty to protect their organization and they’ll often go above and beyond in their roles. But, without the right support and access to resources in place, it’s easy to see how they can quickly become victims of their own passion. The pressure is high and security teams are often understaffed, so it is understandable that many cybersecurity professionals are reporting frustration, burnout, and unsustainable stress. As a result, the potential for mistakes being made that will negatively impact an organization increases. Business leaders should identify opportunities to ease these gaps, so that their teams can focus on the main task at hand, protecting the organization,’ says John Maynard, Adarma’s CEO.”
The survey demonstrates why it’s important to diversify the cybersecurity talent pool. But wait, is this in regard to ethnicity and biological sex? Is Adarma advocating for a DEI quota in cybersecurity, or is the organization advocating for a diverse talent pool with varied experience to offer different perspectives?
While it is important to have different educational backgrounds and experience, hiring someone simply based on DEI quotas is stupid. It’s failing in the US and does more harm than good.
Whitney Grace, December 14, 2023