Cyber Security Crumbles When Staff Under Stress

December 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

How many times does society need to say that happy employees mean a better, more profitable company? The world is apparently not getting the memo, because employees, especially IT workers, are overworked, stressed, exhausted, and burnt out like a blackened match. While zombie employees are bad for productivity, they’re even worse for cyber security. BetaNews reports on a survey by Adarma, a detection and response specialist firm, in “Stressed Staff Put Enterprises At Risk Of Cyberattack.”

The survey respondents believe they’re at a greater risk of cyberattack due to the poor condition of their employees. Five hundred cybersecurity professionals from UK companies with over 2,000 employees were surveyed, and 51% believed their IT security teams are dead inside. This puts them at risk of digital danger. Over 40% of the cybersecurity leaders felt their teams’ skills were too limited to understand threats. An additional 43% had little or no expertise to detect or respond to threats to their enterprises.

IT people really love computers and technology but when they’re working in an office environment and dealing with people, stress happens:

“‘Cybersecurity professionals are typically highly passionate people, who feel a strong personal sense of duty to protect their organization and they’ll often go above and beyond in their roles. But, without the right support and access to resources in place, it’s easy to see how they can quickly become victims of their own passion. The pressure is high and security teams are often understaffed, so it is understandable that many cybersecurity professionals are reporting frustration, burnout, and unsustainable stress. As a result, the potential for mistakes being made that will negatively impact an organization increases. Business leaders should identify opportunities to ease these gaps, so that their teams can focus on the main task at hand, protecting the organization,’ says John Maynard, Adarma’s CEO.”

The survey demonstrates why it’s important to diversify the cybersecurity talent pool? Wait, is this in regard to ethnicity and biological sex? Is Adarma advocating for a DEI quota in cybersecurity, or is the organization advocating for a diverse talent pool with varied experience to offer different perspectives?

While it is important to have different educational backgrounds and experience, hiring someone simply to fill DEI quotas is stupid. The approach is failing in the US and does more harm than good.

Whitney Grace, December 22, 2023

AI: Are You Sure You Are Secure?

December 19, 2023

This essay is the work of a dumb dinobaby. No smart software required.

North Carolina State University published an interesting article. Are the data in the write up reproducible? I don’t know. I wanted to highlight the report in the hope that additional information will be helpful to cyber security professionals. The article is “AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought.”

I noted this statement in the article:

Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.

A corporate decision maker looks at a point of vulnerability. One of his associates moves a sign which explains that smart software protects the castle and its crown jewels. Thanks, MSFT Copilot. Numerous tries, but I finally got an image close enough for horseshoes.

What is the specific point of alleged weakness?

At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system in order to confuse it.

The example presented in the article is that a bad actor manipulates data provided to the smart software; for example, causing an image or content to be deleted or ignored. Another use case is that a bad actor could cause an X-ray machine to present altered information to the analyst.

The write up includes a description of software called QuadAttacK. The idea is to test how a trained network responds when “clean” data are manipulated. Four different networks were tested. The report includes a statement from Tianfu Wu, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. He allegedly said:

“We were surprised to find that all four of these networks were very vulnerable to adversarial attacks,” Wu says. “We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.”

You can download the vulnerability testing tool at this link.
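The article does not spell out how QuadAttacK constructs its perturbations, but the general shape of an adversarial attack can be sketched with the classic fast gradient sign method (FGSM) applied to a toy linear model. Everything below (the weights, the input, and the epsilon) is an illustrative assumption, not anything taken from the paper or the tool:

```python
# Toy linear "network": the sign of the dot product decides the class.
# Weights, input, and epsilon are illustrative assumptions, not from the paper.
W = [1.0, -1.0]

def classify(x):
    score = sum(wi * xi for wi, xi in zip(W, x))
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(x, epsilon):
    """FGSM-style step: for a linear model, the gradient of the score w.r.t. the
    input is just W, so moving each feature against sign(W) lowers the score."""
    return [xi - epsilon * sign(wi) for xi, wi in zip(x, W)]

x_clean = [0.3, 0.1]          # the model classifies this as class 1
x_adv = fgsm(x_clean, 0.25)   # a small, bounded nudge flips the label to 0
```

The point mirrors the researchers’ finding: a perturbation too small for a human to care about can flip the model’s decision, because the attacker moves exactly along the direction the model is most sensitive to.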

Here are the observations my team and I generated at lunch today (Friday, December 14, 2023):

  1. Poisoned data is one of the weak spots in some smart software.
  2. The free tool will give bad actors with access to certain smart systems a way to identify points of vulnerability.
  3. AI, at this time, may be better at marketing than at protecting its reasoning systems.

Stephen E Arnold, December 19, 2023

Allegations That Canadian Officials Are Listening

December 13, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Widespread Use of Phone Surveillance Tools Documented in Canadian Federal Agencies

It appears a baker’s dozen of Canadian agencies are ignoring a longstanding federal directive on privacy protections. Yes, Canada. According to CBC/Radio-Canada, “Tools Capable of Extracting Personal Data from Phones Being Used by 13 Federal Departments, Documents Show.” The trend surprised even York University associate professor Evan Light, who filed the original access-to-information request. Reporter Brigitte Bureau shares:

Many people, it seems, are listening to Grandma’s conversations in a suburb of Calgary. (Nice weather in the winter.) Thanks, MSFT Copilot. I enjoyed the flurry of messages that you were busy creating my other image requests. Just one problemo. I had only one image request.

“Tools capable of extracting personal data from phones or computers are being used by 13 federal departments and agencies, according to contracts obtained under access to information legislation and shared with Radio-Canada. Radio-Canada has also learned those departments’ use of the tools did not undergo a privacy impact assessment as required by federal government directive. The tools in question can be used to recover and analyze data found on computers, tablets and mobile phones, including information that has been encrypted and password-protected. This can include text messages, contacts, photos and travel history. Certain software can also be used to access a user’s cloud-based data, reveal their internet search history, deleted content and social media activity. Radio-Canada has learned other departments have obtained some of these tools in the past, but say they no longer use them. … ‘I thought I would just find the usual suspects using these devices, like police, whether it’s the RCMP or [Canada Border Services Agency]. But it’s being used by a bunch of bizarre departments,’ [Light] said.”

To make matters worse, none of the agencies had conducted the required privacy impact assessments. A federal directive issued in 2002 and updated in 2010 requires such PIAs to be filed with the Treasury Board of Canada Secretariat and the Office of the Privacy Commissioner before any new activity that involves collecting or handling personal data. Light is concerned that agencies’ flat-out ignoring of the directive means digital surveillance of citizens has become normalized. Join the club, Canada.

Cynthia Murrell, December 13, 2023

23andMe: Fancy Dancing at the Security Breach Ball

December 11, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Here’s a story I found amusing. Very Sillycon Valley. Very high school science clubby. Navigate to “23andMe Moves to Thwart Class-Action Lawsuits by Quietly Updating Terms.” The main point of the write up is that the firm’s security was breached. How? Probably those stupid customers or a cyber security vendor installing smart software that did not work.

How some influential wizards work to deflect actions hostile to their interests. In the cartoon, the Big Dog tells a young professional, “Just change the words.” Logical, right? Thanks, MSFT Copilot. Close enough for horseshoes.

The article reports:

Following a hack that potentially ensnared 6.9 million of its users, 23andMe has updated its terms of service to make it more difficult for you to take the DNA testing kit company to court, and you only have 30 days to opt out.

I have spit in a 23andMe tube. I’m good at least for this most recent example of hard-to-imagine security missteps. The article cites other publications but drives home what I think is a useful insight into the thought process of big-time Sillycon Valley firms:

customers were informed via email that “important updates were made to the Dispute Resolution and Arbitration section” on Nov. 30 “to include procedures that will encourage a prompt resolution of any disputes and to streamline arbitration proceedings where multiple similar claims are filed.” Customers have 30 days to let the site know if they disagree with the terms. If they don’t reach out via email to opt out, the company will consider their silence an agreement to the new terms.

No more neutral arbitrators, please. To make the firm’s intentions easier to understand, the cited article concludes:

The new TOS specifically calls out class-action lawsuits as prohibited. “To the fullest extent allowed by applicable law, you and we agree that each party may bring disputes against the only party only in an individual capacity, and not as a class action or collective action or class arbitration” …

I like this move for three reasons:

  1. It provides another example of how certain Information Highway contractors view the Rules of the Road. In a word, “flexible.” In another word, “malleable.”
  2. The maneuver is one that seems to be — how shall I phrase it — elephantine, not dainty and subtle.
  3. The “fix” for the problem is to make the estimable company less likely to get hit with massive claims in a court. Courts, obviously, are not to be trusted in some situations.

I find the entire maneuver chuckle invoking. Am I surprised at the move? Nah. You can’t kid this dinobaby.

Stephen E Arnold, December 11, 2023

How about Fear and Paranoia to Advance an Agenda?

December 6, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I thought sex sells. I think I was wrong. Fear seems to be the barn burner at the end of 2023. And why not? We have the shadow of another global pandemic. We have wars galore. We have craziness on US airplanes. We have a Cybertruck which spells the end for anyone hit by the behemoth.

I read (but did not shake like the delightful female in the illustration) “AI and Mass Spying.” The author is a highly regarded “public interest technologist,” an internationally renowned security professional, and a security guru. For me, the key factoid is that he is a fellow at the Berkman Klein Center for Internet & Society at Harvard University and a lecturer in public policy at the Harvard Kennedy School. Mr. Schneier is a board member of the Electronic Frontier Foundation and the most, most interesting organization AccessNow.

Fear speaks clearly to those in retirement communities, elder care facilities, and those who are uninformed. Let’s say, “Grandma, you are going to be watched when you are in the bathroom.” Thanks, MSFT Copilot. I hope you are sending data back to Redmond today.

I don’t want to make too much of the Harvard University connection. I feel it is important to note that the esteemed educational institution got caught with its ethical pants around its ankles, not once, but twice in recent memory. The first misstep involved an ethics expert on the faculty who allegedly made up information. The second is the current hullabaloo about a whistleblower allegation. The AP slapped this headline on that report: “Harvard Muzzled Disinfo Team after $500 Million Zuckerberg Donation.” (I am tempted to mention the Harvard professor who is convinced he has discovered tangible proof of alien technology.)

So what?

The article “AI and Mass Spying” is a baffler to me. The main point of the write up strikes me as:

Summarization is something a modern generative AI system does well. Give it an hourlong meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you.

I interpret the passage to mean that smart software in the hands of law enforcement, intelligence operatives, investigators in one of the badge-and-gun agencies in the US, or a cyber lawyer is really, really bad news. Smart surveillance has arrived. Smart software can process masses of data. Plus, the outputs may be wrong. I think this means the sky is falling. The fear one is supposed to feel is the way a chicken feels when it sees the Chick-fil-A butcher truck pull up to the barn.

Several observations:

  1. Let’s assume that smart software grinds through whatever information is available to something like a spying large language model. Are those engaged in law enforcement unaware that smart software generates baloney along with the Kobe beef? Will investigators knock off the verification processes because a new system has been installed at a fusion center? The answer to these questions is, “Fear advances the agenda of using smart software for certain purposes; specifically, enforcement of rules, regulations, and laws.”
  2. I know that the idea that “all” information can be processed is a jazzy claim. Google made it, and those familiar with Google search results know that Google does not even come close to all. It can barely deliver useful results from the Railway Retirement Board’s Web site. “All” covers a lot of ground, and it is unlikely that a policeware vendor will be able to do much more than process a specific collection of data believed to be related to an investigation. “All” is for fear, not illumination. Save the categorical affirmatives for the marketing collateral, please.
  3. The computational cost for applying smart software to large domains of data — for example, global intercepts of text messages — is fun to talk about over lunch. But the costs are quite real. Then the costs of the computational infrastructure have to be paid. Then come the costs of the downstream systems and the people who have to figure out if the smart software is hallucinating or delivering something useful. I would suggest that Israel’s surprise at the unhappy events from October 2023 to the present day unfolded despite the baloney about smart security software, a great intelligence apparatus, and the tons of marketing collateral handed out at law enforcement conferences. News flash: The stuff did not work.

In closing, I want to come back to fear. Exactly what is accomplished by using fear as the pointy end of the stick? Is it insecurity about smart software? Are there other messages framed in a different way to alert people to important issues?

Personally, I think fear is a low-level technique for getting one’s point across. But when those affiliated with an outfit saddled with the ethics matter and now the payola approach to information resort to it, how about putting on the big boy pants and selecting a rhetorical trope that does something other than remind people that the Covid thing could have killed us all? Err. No. And what is the agenda fear advances?

So, strike the sex sells trope. Go with fear sells.

Stephen E Arnold, December 6, 2023

23andMe: Those Users and Their Passwords!

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Silicon Valley and health are a match fabricated in heaven. Not long ago, I learned about the estimable management of Theranos. Now I find out that “23andMe confirms hackers stole ancestry data on 6.9 million users.” If one follows the logic of some Silicon Valley outfits, the data loss is the fault of the users.

“We have the capability to provide the health data and bioinformation from our secure facility. We have designed our approach to emulate the protocols implemented by Jack Benny and his vault in his home in Beverly Hills,” says the enthusiastic marketing professional from a Silicon Valley success story. Thanks, MSFT Copilot. Not exactly Jack Benny, Ed, and the foghorn, but I have learned to live with “good enough.”

According to the peripatetic Lorenzo Franceschi-Bicchierai:

In disclosing the incident in October, 23andMe said the data breach was caused by customers reusing passwords, which allowed hackers to brute-force the victims’ accounts by using publicly known passwords released in other companies’ data breaches.

Users!
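The mechanism 23andMe describes is usually called credential stuffing: replay email-and-password pairs leaked from other companies’ breaches against a new target, and every reused password becomes a working login. A minimal sketch (all emails, passwords, and the unsalted toy hash are invented for illustration):

```python
import hashlib

def toy_hash(pw):
    # Real services should use a salted, slow KDF (bcrypt, scrypt, Argon2);
    # plain SHA-256 is used here only to keep the sketch short.
    return hashlib.sha256(pw.encode()).hexdigest()

# Credentials leaked from *other* companies' breaches (invented examples).
leaked = [("alice@example.com", "hunter2"), ("bob@example.com", "letmein")]

# The target service's password store. Alice reused her password; Bob did not.
accounts = {
    "alice@example.com": toy_hash("hunter2"),
    "bob@example.com": toy_hash("fresh-unique-pw"),
}

def credential_stuffing(leaked, accounts):
    """Replay leaked pairs against another service; reused passwords 'hit'."""
    return sorted(email for email, pw in leaked
                  if email in accounts and accounts[email] == toy_hash(pw))
```

No vulnerability in the target’s own software is needed; the attack works entirely off user behavior, which is why rate limiting, breached-password checks, and multi-factor authentication are the usual countermeasures.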

What’s more interesting is that 23andMe provided estimates of the number of customers (users) whose data somehow magically flowed from the firm into the hands of bad actors. In fact, the numbers, when added up, totaled almost seven million users, not the original estimate of 14,000 23andMe customers.

I find the leak estimate inflation interesting for three reasons:

  1. Smart people in Silicon Valley appear to struggle with simple concepts like adding and subtracting numbers. This gap in one’s education becomes notable when the discrepancy is off by millions. I think “close enough for horseshoes” is a concept which is wearing out my patience. The difference between 14,000 and almost seven million is not horseshoe scoring.
  2. The concept of “security” continues to suffer some setbacks. “Security,” one may ask?
  3. The intentional dribbling of information reflects another facet of what I call high school science club management methods. The logic in the case of 23andMe in my opinion is, “Maybe no one will notice?”

Net net: Time for some regulation, perhaps? Oh, right, it’s the users’ responsibility.

Stephen E Arnold, December 5, 2023 

Cyber Security Responsibility: Where It Belongs at Last!

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I want to keep this item brief. Navigate to “CISA’s Goldstein Wants to Ditch ‘Patch Faster, Fix Faster’ Model.”

CISA means the US government’s Cybersecurity and Infrastructure Security Agency. The “Goldstein” reference points to Eric Goldstein, the executive assistant director of CISA.

The main point of the write up is that big technology companies have to be responsible for cleaning up their cyber security messes. The write up reports:

Goldstein said that CISA is calling on technology providers to “take accountability” for the security of their customers by doing things like enabling default security controls such as multi-factor authentication, making security logs available, using secure development practices and embracing memory safe languages such as Rust.
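Of the controls CISA names, multi-factor authentication is the most concrete. One common building block is the time-based one-time password (TOTP) of RFC 6238, which needs nothing beyond a shared secret and a clock. A minimal sketch using only the Python standard library (the base32 secret in the test is the RFC’s published test key, not anything from CISA):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    followed by the standard dynamic truncation."""
    key = base64.b32decode(secret_b32)
    counter = int((t if t is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

An authenticator app and the server each compute this independently from the shared secret, so a stolen password alone no longer opens the account. That is exactly the kind of default Goldstein argues vendors should ship turned on.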

I may be incorrect, but I picked up a signal that the priorities of some techno feudalists are not security. Perhaps these firms’ goals are maximizing profit, market share, and power over their paying customers. Security? Maybe it is easier to describe in a slide deck or a short YouTube video?

The use of a parental mode seems appropriate for a child? Will it work for techno feudalists who have created a digital mess in kitchens throughout the world? Thanks, MSFT Copilot. You must have ingested some “angry mommy” data when you were but a wee sprout.

Will this approach improve the security of mission-critical systems? Will the enjoinder make a consumer’s mobile phone more secure?

My answer? Without meaningful consequences, security is easier to talk about than deliver. Therefore, minimal change in the near future. I wish I were wrong.

Stephen E Arnold, December 5, 2023

Omegle: Hasta La Vista

November 30, 2023

This essay is the work of a dumb dinobaby. No smart software required.

In the Internet’s early days, users could sign into chatrooms and talk with strangers. While chatrooms have fallen out of favor, the idea of talking with strangers hung on, but now it’s accompanied by video. Chatroulette and Omegle are popular chatting applications that allow users to video chat with random individuals. The apps are notorious for pranks and NSFW content, including child sexual abuse material. The Independent shared a story about one of the two: “Omegle Anonymous Chat App Shuts Down After 14 Years.”

Omegle had a simple concept: sign in, be connected to another random person, and video chat for as long as you like. Leif K-Brooks launched the chat platform with good intentions in 2009, but it didn’t take long for bad actors to infiltrate it. K-Brooks tried to stop criminal activities on Omegle with features, such as the “monitored chats” with moderators. They didn’t work, and Omegle continued to receive flak. K-Brooks doesn’t want to deal with the criticism anymore:

“The intensity of the fight over use of the site had forced him to decide to shut it down, he said, and it will stop working straight away. ‘As much as I wish circumstances were different, the stress and expense of this fight – coupled with the existing stress and expense of operating Omegle, and fighting its misuse – are simply too much. Operating Omegle is no longer sustainable, financially nor psychologically. Frankly, I don’t want to have a heart attack in my 30s,’ wrote Leif K-Brooks, who has run the website since founding it.”

Omegle’s popularity rose during the pandemic. The sudden popularity surge highlighted the criminal acts on the video chat platform. K-Brooks believes that his critics used fear to shut down the Web site. He also acknowledged that people are quicker to attack and slower to recognize shared humanity. He theorizes that social media platforms are being labeled negatively because of small groups of bad actors.

Whitney Grace, November 30, 2023

Who Benefits from Advertising Tracking Technology? Teens, Bad Actors, You?

November 23, 2023

This essay is the work of a dumb humanoid. No smart software required.

Don’t get me wrong. I absolutely love advertising. When I click to Sling’s or Tubi’s free TV, a YouTube video about an innovation in physics, or visit the UK’s Daily Mail, I see just a little bit of content. The rest, it seems to this dinobaby, is advertising. For some reason, YouTube this morning (November 17, 2023) is showing me ads for a video game or undergarments for a female-oriented person before I can watch an update on the solemnity of Judge Engoron’s courtroom.

However, there are some people who are not “into” advertising. I want to point out that these individuals are in the minority; otherwise, people flooded with advertising would not disconnect or navigate to a somewhat less mercantile souk. Yes, a few exist; for example, government Web sites. (I acknowledge that some governments’ Web sites are advertising, but there’s not much I can do about that fusion of pitches and objective information about the location of a nation’s embassy.)

But to the matter at hand. I read a PDF titled “Europe’s Hidden Security Crisis.” The document is a position paper, a white paper, or a special report. The terminology varies depending on the entities involved in the assembly of the information. The group apparently nudging the intrepid authors to reveal the “hidden security crisis” could be the Irish Council for Civil Liberties. I know zero about the group, and I know even less about the authors, Dr. Johnny Ryan and Wolfie Christl. Dr. Ryan has written for the newspaper which looks like a magazine, and Mr. Christl is a principal of Cracked Labs.

So what’s the “hidden security crisis”? There is a special operation underway in Ukraine. The open source information crowd is documenting assorted facts and developments on X.com. We have the public Telegram channels outputting a wealth of information about the special operation and the other unhappy circumstances in Europe. We have the Europol reports about cyber crime, takedowns, and multi-nation operations. I receive in my newsfeed pointers to “real” news about a wide range of illegal activities. In short, what’s hidden?

An evil Web bug is capturing information about a computer user. She is not afraid. She is unaware… apparently. Thanks, Microsoft Bing. Ooops. Strike that. Thanks, Copilot. Good Grinch. Did you accidentally replicate a beloved character or just think it up?

The report focuses on what I have identified as my first love — commercial messaging aka advertising.

The “hidden”, I think, refers to data collected when people navigate to a Web site and click, drag a cursor, or hover on a particular region. Those data along with date, time, browser used, and similar information are knitted together into a tidy bundle. These data can be used to have other commercial messages follow a person to another Web site, trigger an email urging the surfer to buy more or just buy something, or populate one of the cross tabulation companies’ databases.

The write up uses the lingo RTB or real time bidding to describe the data collection. The report states:

Our investigation highlights a widespread trade in data about sensitive European personnel and leaders that exposes them to blackmail, hacking and compromise, and undermines the security of their organizations and institutions.  These data flow from Real-Time Bidding (RTB), an advertising technology that is active on almost all websites and apps. RTB involves the broadcasting of sensitive data about people using those websites and apps to large numbers of other entities, without security measures to protect the data. This occurs billions of times a day. Our examination of tens of thousands of pages of RTB data reveals that EU military personnel and political decision makers are targeted using RTB.
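The broadcast the report describes can be pictured as a nested record fanned out to every bidding participant. A sketch, with field names loosely modeled on the OpenRTB convention and every value invented for illustration:

```python
# Illustrative RTB bid request. Field names loosely follow OpenRTB conventions;
# all values are invented. A copy of this blob goes to each bidding participant,
# whether or not it wins the auction.
bid_request = {
    "id": "req-123",
    "device": {
        "ip": "203.0.113.7",
        "geo": {"lat": 50.85, "lon": 4.35},
        "ifa": "advertising-id-abc",
    },
    "user": {"id": "partner-user-789", "segments": ["decision_makers", "defence"]},
    "site": {"domain": "news.example", "page": "https://news.example/article-42"},
}

def leak_surface(node):
    """Count the leaf values broadcast to every bidder -- a rough exposure metric."""
    if isinstance(node, dict):
        return sum(leak_surface(v) for v in node.values())
    if isinstance(node, list):
        return sum(leak_surface(v) for v in node)
    return 1
```

The point of the sketch is the fan-out: device identifiers, location, audience segments, and the page being read all travel together, billions of times a day, to parties who never serve an ad.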

In the US, the sale of data gathered via advertising cookies, beacons, and related technologies is a business with nearly 1,000 vendors offering data. I am not sure about the “hidden” idea, however. If the term applies to an average Web user, most of those folks do not know about changing defaults. That is not a hidden function; that is an indication of how much the user knows about specific software.

If you are interested in the report, navigate to this link. You may find the “security crisis” interesting. If not, keep in mind that one can eliminate such tracking with fairly straightforward preventative measures. For me, I love advertising. I know the beacons and bugs want to do the right thing: Capture and profile me to the nth degree. Advertising! It is wonderful and its data exhaust informative and useful.

Stephen E Arnold, November 23, 2023
