British Library: The Math of Kicking the Security Can Down the Road

January 9, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read a couple of blog posts about the security issues at the British Library. I am not currently working on projects in the UK. Therefore, I noted the issue and moved on to more pressing matters. Examples range from writing about the antics of the Google to keeping my eye on the new leader of the highly innovative PR magnet, the NSO Group.

Two well-educated professionals kick a security can down the road. Why bother to pick it up? Thanks, MSFT Copilot Bing thing. I gave up trying to get you to produce a big can and big shoe. Sigh.

I read “British Library to Burn Through Reserves to Recover from Cyber Attack.” The weird orange newspaper usually has semi-reliable, actual factual information. The write up reports or asserts (the FT is a newspaper, after all):

The British Library will drain about 40 per cent of its reserves to recover from a cyber attack that has crippled one of the UK’s critical research bodies and rendered most of its services inaccessible.

I won’t summarize what the bad actors took down. Instead, I want to highlight another passage in the article:

Cyber-intelligence experts said the British Library’s service could remain down for more than a year, while the attack highlighted the risks of a single institution playing such a prominent role in delivering essential services.

A couple of themes emerge from these two quoted passages:

  1. Whatever cash the library has, spitting distance of half is going to be spent “recovering,” not improving, enhancing, or strengthening. Just “recovering.”
  2. The attack killed off “most” of the British Library’s services. Not a few. Not one or two. Just “most.”
  3. Concentration for efficiency leads to failure for downstream services. But concentration makes sense, right? Just ask library patrons.

My view of the situation is familiar if you have read other blog posts about Fancy Dan, modern methods. Let me summarize to brighten your day:

First, cyber security is a function that marketers exploit without addressing security problems. Those purchasing cyber security don’t know much. Therefore, the procurement officials are what a falcon might label “easy prey.” Bad for the chihuahua sometimes.

Second, when security issues are identified, many professionals don’t know how to listen. Therefore, a committee decides. Committees are outstanding bureaucratic tools. Obviously the British Library’s managers and committees may know about manuscripts. Security? Hmmm.

Third, a security failure can consume considerable resources in order to return to the status quo. One can easily imagine a scenario months or years in the future when the cost of recovery is too great. Therefore, the security breach kills the organization. Termination can be rationalized by a committee, probably affiliated with a bureaucratic structure further up the hierarchy.

I think the idea of “kicking the security can” down the road is a widespread characteristic of many organizations. Is the situation improving? No. Marketers move quickly to exploit weaknesses of procurement teams. Bad actors know this. Excitement ahead.

Stephen E Arnold, January 9, 2024

Cyber Security Software and AI: Man and Machine Hook Up

January 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My hunch is that 2024 is going to be quite interesting with regards to cyber security. The race among policeware vendors to add “artificial intelligence” to their systems began shortly after Microsoft’s ChatGPT moment. Smart agents, predictive analytics coupled to text sources, real-time alerts from smart image monitoring systems are three application spaces getting AI boosts. The efforts are commendable if over-hyped. One high-profile firm’s online webinar presented jargon and buzzwords but zero evidence of the conviction or closure value of the smart enhancements.

The smart cyber security software system outputs alerts which the system manager cannot escape. Thanks, MSFT Copilot Bing thing. You produced a workable illustration without slapping my request across my face. Good enough too.

Let’s accept as a working premise that everyone from my French bulldog to my neighbor’s ex-wife wants smart software to bring back the good old, pre-Covid, go-go days. Also, I stipulate that one should ignore the fact that smart software is a demonstration of how numerical recipes can output “good enough” data. Hallucinations, errors, and close-enough-for-horseshoes are part of the method. What’s the likelihood the door of a commercial aircraft would come off in flight? Answer: Well, most flights don’t lose their doors. Stop worrying. Those are the rules for this essay.

Let’s look at “The I in LLM Stands for Intelligence.” I grant the title may not be the best one I have spotted this month, but here’s the main point of the article in my opinion. Writing about automated threat and security alerts, the essay opines:

When reports are made to look better and to appear to have a point, it takes a longer time for us to research and eventually discard it. Every security report has to have a human spend time to look at it and assess what it means. The better the crap, the longer time and the more energy we have to spend on the report until we close it. A crap report does not help the project at all. It instead takes away developer time and energy from something productive. Partly because security work is considered one of the most important areas so it tends to trump almost everything else.

The idea is that strapping on some smart software can increase the outputs from a security alerting system. Instead of helping the overworked and often reviled cyber security professional, the smart software makes it more difficult to figure out what a bad actor has done. The essay includes this blunt section heading: “Detecting AI Crap.” Enough said.

The idea is that more human expertise is needed. The smart software becomes a problem, not a solution.

I want to shift attention to the managers or the employee who caused a cyber security breach. In what is another zinger of a title, let’s look at this research report, “The Immediate Victims of the Con Would Rather Act As If the Con Never Happened. Instead, They’re Mad at the Outsiders Who Showed Them That They Were Being Fooled.” Okay, this is the ostrich method. Deny stuff by burying one’s head in digital sand like TikToks.

The write up explains:

The immediate victims of the con would rather act as if the con never happened. Instead, they’re mad at the outsiders who showed them that they were being fooled.

Let’s assume the data in this “Victims” write up are accurate, verifiable, and unbiased. (Yeah, I know that is a stretch.)

What do these two articles do to influence my view that cyber security will be an interesting topic in 2024? My answers are:

  1. Smart software will allegedly detect, alert, and warn of “issues.” The flow of “issues” may overwhelm or numb staff who must decide what’s real and what’s a fakeroo. Burdened staff can make errors, thus increasing security vulnerabilities or missing ones that are significant. (A small triage sketch appears after this list.)
  2. Managers, like the staffer who lost a mobile phone with company passwords in a plain text note file or an email titled “passwords,” will blame whoever blows the whistle. The result is the willful refusal to talk about what happened, why, and the consequences. Examples range from big libraries in the UK to can-kicking hospitals in a flyover state like Kentucky.
  3. Marketers of remediation tools will have a banner year. Marketing collateral becomes a closed deal, keeping the art history majors who write the copy secure in their jobs at cyber security companies.
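
To make the “overwhelm or numb” point concrete, here is a minimal sketch of one coping tactic: deduplicating repeated alerts so humans see only novel ones. It is a toy under stated assumptions, not anything from the articles above; the rule and host field names and the one-hour window are inventions for illustration.

```python
import hashlib
import time
from collections import defaultdict

SUPPRESS_WINDOW = 3600  # seconds; an assumed window for this sketch

last_seen: dict = {}
counts: defaultdict = defaultdict(int)

def fingerprint(alert: dict) -> str:
    # Collapse near-identical alerts: the same rule firing on the same
    # host maps to one fingerprint, regardless of timestamp or wording.
    return hashlib.sha256(f"{alert['rule']}|{alert['host']}".encode()).hexdigest()

def triage(alert: dict) -> str:
    fp = fingerprint(alert)
    counts[fp] += 1
    now = time.time()
    if fp in last_seen and now - last_seen[fp] < SUPPRESS_WINDOW:
        return "suppressed"   # a repeat inside the window: nobody gets paged
    last_seen[fp] = now
    return "escalate"         # novel (or stale) alert: a human takes a look

# The second identical alert within the hour is suppressed.
alert = {"rule": "impossible-travel-login", "host": "mail-gw-01"}
print(triage(alert), triage(alert))  # escalate suppressed
```

The catch is the one the curl essay identifies: when the crap is machine-generated and each report is worded differently, cheap fingerprints stop matching, and a human has to read every one.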

Will bad actors pay attention to smart software and the behavior of senior managers who want to protect share price or their own job? Yep. Close attention.

Stephen E Arnold, January 8, 2024

23AndMe: The Genetics of Finger Pointing

January 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Well, well, another Silicon Valley outfit with Google-type DNA relies on its hard-wired instincts. What’s the situation this time? “23andMe Tells Victims It’s Their Fault That Their Data Was Breached” relates a now well-known game plan for handling security problems. What’s the angle? Here’s what the story in Techcrunch asserts:

Some rhetorical tactics are exemplified by children who blame one another for knocking the birthday cake off the counter. Instinct for self preservation creates these all-too-familiar situations. Are Silicon Valley-type outfits childish? Thanks, MSFT Copilot Bing thing. I had to change my image request three times to avoid the negative filter for arguing children. Your approach is good enough.

Facing more than 30 lawsuits from victims of its massive data breach, 23andMe is now deflecting the blame to the victims themselves in an attempt to absolve itself from any responsibility…

I particularly liked this statement from the Techcrunch article:

After disclosing the breach, 23andMe reset all customer passwords, and then required all customers to use multi-factor authentication, which was only optional before the breach. In an attempt to pre-empt the inevitable class action lawsuits and mass arbitration claims, 23andMe changed its terms of service to make it more difficult for victims to band together when filing a legal claim against the company. Lawyers with experience representing data breach victims told TechCrunch that the changes were “cynical,” “self-serving” and “a desperate attempt” to protect itself and deter customers from going after the company.

And the consequences? The US legal processes will determine what’s going to happen.

Several observations:

  1. I particularly like the angle that cyber security is not the responsibility of the commercial enterprise. The customers are responsible.
  2. The lack of consequences for corporate behaviors creates opportunities for some outfits to do some very fancy dancing. Since a company is a “Person,” Maslow’s hierarchy of needs kicks in.
  3. The genetics of some firms function with little regard for what some might call social responsibility.

The result is a situation which not even the original creative team for the 1980 film Airplane! (Flying High!) could have concocted.

Stephen E Arnold, January 4, 2024

Exploit Lets Hackers Into Google Accounts, PCs Even After Changing Passwords

January 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Google must be so pleased. The Register reports, “Google Password Resets Not Enough to Stop these Info-Stealing Malware Strains.” In October a hacker going by PRISMA bragged they had found a zero-day exploit that allowed them to log into Google users’ accounts even after the user had logged off. They could then use the exploit to generate a new session token and go after data in the victim’s email and cloud storage. It was not an empty boast, and it gets worse. Malware developers have since used the hack to create “info stealers” that infiltrate victims’ local data (mostly on Windows machines). Yes, local data. Yikes. Reporter Connor Jones writes:

“The total number of known malware families that abuse the vulnerability stands at six, including Lumma and Rhadamanthys, while Eternity Stealer is also working on an update to release in the near future. They’re called info stealers because once they’re running on some poor sap’s computer, they go to work finding sensitive information – such as remote desktop credentials, website cookies, and cryptowallets – on the local host and leaking them to remote servers run by miscreants. Eggheads at CloudSEK say they found the root of the Google account exploit to be in the undocumented Google OAuth endpoint ‘MultiLogin.’ The exploit revolves around stealing victims’ session tokens. That is to say, malware first infects a person’s PC – typically via a malicious spam or a dodgy download, etc – and then scours the machine for, among other things, web browser session cookies that can be used to log into accounts. Those session tokens are then exfiltrated to the malware’s operators to enter and hijack those accounts. It turns out that these tokens can still be used to login even if the user realizes they’ve been compromised and change their Google password.”

So what are Google users to do when changing passwords is not enough to circumvent this hack? The company insists stolen sessions can be thwarted by signing out of all Google sessions on all devices. It is, admittedly, kind of a pain but worth the effort to protect the data on one’s local drives. Perhaps the company will soon plug this leak so we can go back to checking our Gmail throughout the day without logging in every time. Google promises to keep us updated. I love promises.
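
For readers who want mechanics rather than hand-wringing: Google documents an OAuth 2.0 token revocation endpoint, and calling it is the programmatic cousin of the “sign out of all sessions” advice (the consumer sign-out itself is a web UI action, not an API call). A minimal sketch, assuming the third-party requests library is installed:

```python
import requests  # third-party HTTP library (pip install requests)

def revoke_google_token(token: str) -> bool:
    # Google's documented OAuth 2.0 revocation endpoint. A successful
    # call invalidates the access or refresh token server-side, so a
    # stolen copy of that token stops working.
    resp = requests.post(
        "https://oauth2.googleapis.com/revoke",
        params={"token": token},
        headers={"content-type": "application/x-www-form-urlencoded"},
    )
    return resp.status_code == 200  # 200 = revoked; 400 = bad or expired token
```

Whether revocation also kills the browser session cookies the info stealers exfiltrate is exactly the question the article raises; per the write up, Google’s answer is the sign-out-everywhere step.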

Cynthia Murrell, January 3, 2024

AI: Are You Sure You Are Secure?

December 19, 2023

This essay is the work of a dumb dinobaby. No smart software required.

North Carolina State University published an interesting article. Are the data in the write up reproducible? I don’t know. I wanted to highlight the report in the hopes that additional information will be helpful to cyber security professionals. The article is “AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought.”

I noted this statement in the article:

Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.

A corporate decision maker looks at a point of vulnerability. One of his associates moves a sign which explains that smart software protects the castle and its crown jewels. Thanks, MSFT Copilot. Numerous tries, but I finally got an image close enough for horseshoes.

What is the specific point of alleged weakness?

At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system in order to confuse it.

The example presented in the article is that a bad actor manipulates data provided to the smart software; for example, causing an image or content to be deleted or ignored. Another use case is that a bad actor could cause an X-ray machine to present altered information to the analyst.
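
The canonical form of this manipulation is the fast gradient sign method (FGSM). QuadAttacK itself goes after the ranking of a network’s top outputs; the sketch below shows the simpler classic attack only to make the mechanism concrete. The toy model and data are placeholders, not anything from the NC State work.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Fast Gradient Sign Method: move every input value a small, fixed
    # step in the direction that most increases the classifier's loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in range

# Toy demonstration with a stand-in "network."
model = torch.nn.Linear(784, 10)         # placeholder classifier
x = torch.rand(1, 784)                   # one fake 28x28 image, flattened
y = torch.tensor([3])                    # its supposed label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())           # perturbation never exceeds epsilon
```

The perturbation is typically too small for a human to notice, which is why the X-ray scenario above is unsettling.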

The write up includes a description of software called QuadAttacK. The idea is to feed a trained network “clean” data and learn how that data can be manipulated until the network is fooled. Four different networks were tested. The report includes a statement from Tianfu Wu, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. He allegedly said:

“We were surprised to find that all four of these networks were very vulnerable to adversarial attacks,” Wu says. “We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.”

You can download the vulnerability testing tool at this link.

Here are the observations my team and I generated at lunch today (Friday, December 14, 2023):

  1. Poisoned data is one of the weak spots in some smart software
  2. The free tool will allow bad actors with access to certain smart systems a way to identify points of vulnerability
  3. AI, at this time, may be better at marketing than protecting its reasoning systems.

Stephen E Arnold, December 19, 2023

Stressed Staff Equals Security Headaches

December 14, 2023

This essay is the work of a dumb dinobaby. No smart software required.

How many times does society need to say that happy employees mean a better, more profitable company? The world is apparently not getting the memo, because employees, especially IT workers, are overworked, stressed, exhausted, and burnt out like a blackened match. While zombie employees are bad for productivity, they’re even worse for cyber security. BetaNews reports on a survey by Adarma, a detection and response specialist company: “Stressed Staff Put Enterprises At Risk Of Cyberattack.”

The overworked IT person says, “Are these sticky notes your passwords?” The stressed-out professional service worker replies, “Hey, buddy, did I ask you if your company’s security system actually worked? Yeah, you are one of those cyber security experts, right? Next!” Thanks, MSFT Copilot. I don’t think you had a human intervene to create this image like you know who.

The survey respondents believe they’re at a greater risk of cyberattack due to the poor condition of their employees. Five hundred cybersecurity professionals from UK companies with over 2,000 employees were surveyed, and 51% believed their IT security teams are dead inside. This puts them at risk of digital danger. Over 40% of the cybersecurity leaders felt that their skills were too limited to understand threats. An additional 43% had little or zero expertise to detect or respond to threats to their enterprises.

IT people really love computers and technology but when they’re working in an office environment and dealing with people, stress happens:

“‘Cybersecurity professionals are typically highly passionate people, who feel a strong personal sense of duty to protect their organization and they’ll often go above and beyond in their roles. But, without the right support and access to resources in place, it’s easy to see how they can quickly become victims of their own passion. The pressure is high and security teams are often understaffed, so it is understandable that many cybersecurity professionals are reporting frustration, burnout, and unsustainable stress. As a result, the potential for mistakes being made that will negatively impact an organization increases. Business leaders should identify opportunities to ease these gaps, so that their teams can focus on the main task at hand, protecting the organization,’ says John Maynard, Adarma’s CEO.”

The survey, we are told, demonstrates why it’s important to diversify the cybersecurity talent pool. Wait, is this in regard to ethnicity and biological sex? Is Adarma advocating for a DEI quota in cybersecurity, or is the organization advocating for a diverse talent pool with varied experience to offer different perspectives?

While it is important to have different educational backgrounds and experience, hiring someone simply based on DEI quotas is stupid. It’s failing in the US and does more harm than good.

Whitney Grace, December 14, 2023

Allegations That Canadian Officials Are Listening

December 13, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Widespread Use of Phone Surveillance Tools Documented in Canadian Federal Agencies

It appears a baker’s dozen of Canadian agencies are ignoring a longstanding federal directive on privacy protections. Yes, Canada. According to CBC/Radio-Canada, “Tools Capable of Extracting Personal Data from Phones Being Used by 13 Federal Departments, Documents Show.” The trend surprised even York University associate professor Evan Light, who filed the original access-to-information request. Reporter Brigitte Bureau shares:

Many people, it seems, are listening to Grandma’s conversations in a suburb of Calgary. (Nice weather in the winter.) Thanks, MSFT Copilot. I enjoyed the flurry of messages that you were busy creating my other image requests. Just one problemo. I had only one image request.

“Tools capable of extracting personal data from phones or computers are being used by 13 federal departments and agencies, according to contracts obtained under access to information legislation and shared with Radio-Canada. Radio-Canada has also learned those departments’ use of the tools did not undergo a privacy impact assessment as required by federal government directive. The tools in question can be used to recover and analyze data found on computers, tablets and mobile phones, including information that has been encrypted and password-protected. This can include text messages, contacts, photos and travel history. Certain software can also be used to access a user’s cloud-based data, reveal their internet search history, deleted content and social media activity. Radio-Canada has learned other departments have obtained some of these tools in the past, but say they no longer use them. … ‘I thought I would just find the usual suspects using these devices, like police, whether it’s the RCMP or [Canada Border Services Agency]. But it’s being used by a bunch of bizarre departments,’ [Light] said.

To make matters worse, none of the agencies had conducted the required Privacy Impact Assessments. A federal directive issued in 2002 and updated in 2010 requires such PIAs to be filed with the Treasury Board of Canada Secretariat and the Office of the Privacy Commissioner before launching any new activities that involve collecting or handling personal data. Light is concerned that agencies flat-out ignoring the directive means digital surveillance of citizens has become normalized. Join the club, Canada.

Cynthia Murrell, December 13, 2023

23andMe: Fancy Dancing at the Security Breach Ball

December 11, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Here’s a story I found amusing. Very Sillycon Valley. Very high school science clubby. Navigate to “23andMe Moves to Thwart Class-Action Lawsuits by Quietly Updating Terms.” The main point of the write up is that the firm’s security was breached. How? Probably those stupid customers or a cyber security vendor installing smart software that did not work.

How some influential wizards work to deflect actions hostile to their interests. In the cartoon, the Big Dog tells a young professional, “Just change the words.” Logical, right? Thanks, MSFT Copilot. Close enough for horseshoes.

The article reports:

Following a hack that potentially ensnared 6.9 million of its users, 23andMe has updated its terms of service to make it more difficult for you to take the DNA testing kit company to court, and you only have 30 days to opt out.

I have spit in a 23andMe tube. I’m good, at least, for this most recent example of hard-to-imagine security missteps. The article cites other publications but drives home what I think is a useful insight into the thought process of big-time Sillycon Valley firms:

customers were informed via email that “important updates were made to the Dispute Resolution and Arbitration section” on Nov. 30 “to include procedures that will encourage a prompt resolution of any disputes and to streamline arbitration proceedings where multiple similar claims are filed.” Customers have 30 days to let the site know if they disagree with the terms. If they don’t reach out via email to opt out, the company will consider their silence an agreement to the new terms.

No more neutral arbitrators, please. To make the firm’s intentions easier to understand, the cited article concludes:

The new TOS specifically calls out class-action lawsuits as prohibited. “To the fullest extent allowed by applicable law, you and we agree that each party may bring disputes against the other party only in an individual capacity, and not as a class action or collective action or class arbitration” …

I like this move for three reasons:

  1. It provides another example of the way certain Information Highway contractors view the Rules of the Road. In a word, “flexible.” In another word, “malleable.”
  2. The maneuver is one that seems to be — how shall I phrase it — elephantine, not dainty and subtle.
  3. The “fix” for the problem is to make the estimable company less likely to get hit with massive claims in a court. Courts, obviously, are not to be trusted in some situations.

I find the entire maneuver chuckle invoking. Am I surprised at the move? Nah. You can’t kid this dinobaby.

Stephen E Arnold, December 11, 2023

How about Fear and Paranoia to Advance an Agenda?

December 6, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I thought sex sells. I think I was wrong. Fear seems to be the barn burner at the end of 2023. And why not? We have the shadow of another global pandemic. We have wars galore. We have craziness on US airplanes. We have a Cybertruck which spells the end for anyone hit by the behemoth.

I read (but did not shake like the delightful female in the illustration) “AI and Mass Spying.” The author is a highly regarded “public interest technologist,” an internationally renowned security professional, and a security guru. For me, the key factoid is that he is a fellow at the Berkman Klein Center for Internet & Society at Harvard University and a lecturer in public policy at the Harvard Kennedy School. Mr. Schneier is a board member of the Electronic Frontier Foundation and the most, most interesting organization AccessNow.

Fear speaks clearly to those in retirement communities, elder care facilities, and those who are uninformed. Let’s say, “Grandma, you are going to be watched when you are in the bathroom.” Thanks, MSFT Copilot. I hope you are sending data back to Redmond today.

I don’t want to make too much of the Harvard University connection. I feel it is important to note that the esteemed educational institution got caught with its ethical pants around its ankles, not once, but twice in recent memory. The first misstep involved an ethics expert on the faculty who allegedly made up information. The second is the current hullabaloo about a whistleblower allegation. The AP slapped this headline on that report: “Harvard Muzzled Disinfo Team after $500 Million Zuckerberg Donation.” (I am tempted to mention the Harvard professor who is convinced he has discovered fungible proof of alien technology.)

So what?

The article “AI and Mass Spying” is a baffler to me. The main point of the write up strikes me as:

Summarization is something a modern generative AI system does well. Give it an hourlong meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you.

I interpret the passage to mean that smart software in the hands of law enforcement, intelligence operatives, investigators in one of the badge-and-gun agencies in the US, or a cyber lawyer is really, really bad news. Smart surveillance has arrived. Smart software can process masses of data. Plus the outputs may be wrong. I think this means the sky is falling. The fear one is supposed to feel is going to be the way a chicken feels when it sees the Chick-fil-A butcher truck pull up to the barn.
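
To see why the claim is plausible, here is a minimal sketch of the “organize them by topic” step at toy scale, using the OpenAI Python SDK as a stand-in for whatever a policeware vendor would actually deploy. The model name and the prompt are assumptions; the point is only that the loop is a few lines, not a research program.

```python
from collections import defaultdict
from openai import OpenAI  # assumes the openai v1 Python SDK and an API key

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def topic_of(conversation: str) -> str:
    # Ask the model for a one- or two-word topic label for one transcript.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption; any chat model works
        messages=[{
            "role": "user",
            "content": "Give a one- or two-word topic label for this "
                       f"conversation. Reply with the label only.\n\n{conversation}",
        }],
    )
    return resp.choices[0].message.content.strip().lower()

# Bucket a pile of transcripts by topic.
buckets = defaultdict(list)
for convo in ["...transcript one...", "...transcript two..."]:
    buckets[topic_of(convo)].append(convo)
print({topic: len(items) for topic, items in buckets.items()})
```

The loop is the easy part. My observations below suggest why everything around the loop is not.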

Several observations:

  1. Let’s assume that smart software grinds through whatever information is available to something like a spying large language model. Are those engaged in law enforcement unaware that smart software generates baloney along with the Kobe beef? Will investigators knock off the verification processes because a new system has been installed at a fusion center? The answer to these questions is, “Fear advances the agenda of using smart software for certain purposes; specifically, enforcement of rules, regulations, and laws.”
  2. I know that the idea that “all” information can be processed is a jazzy claim. Google made it, and those familiar with Google search results know that Google does not even come close to all. It can barely deliver useful results from the Railway Retirement Board’s Web site. “All” covers a lot of ground, and it is unlikely that a policeware vendor will be able to do much more than process a specific collection of data believed to be related to an investigation. “All” is for fear, not illumination. Save the categorical affirmatives for the marketing collateral, please.
  3. The computational cost of applying smart software to large domains of data — for example, global intercepts of text messages — is fun to talk about over lunch. But the costs are quite real. The computational infrastructure has to be paid for, and then the downstream systems and the people who figure out whether the smart software is hallucinating or delivering something useful have to be paid for too. I would suggest that Israel’s surprise at the unhappy events from October 2023 to the present day unfolded despite the baloney about smart security software, a great intelligence apparatus, and the tons of marketing collateral handed out at law enforcement conferences. News flash: The stuff did not work.

In closing, I want to come back to fear. Exactly what is accomplished by using fear as the pointy end of the stick? Is it insecurity about smart software? Are there other messages framed in a different way to alert people to important issues?

Personally, I think fear is a low-level technique for getting one’s point across. But when it comes from people affiliated with an outfit tangled in the ethics matter and now the payola approach to information, how about putting on the big boy pants and selecting a rhetorical trope that does something other than remind people that the Covid thing could have killed us all? Err. No. And what is the agenda fear advances?

So, strike the sex sells trope. Go with fear sells.

Stephen E Arnold, December 6, 2023
