A Convenient Deep-Fake Time Saver

February 1, 2023

There are some real concerns about deepfakes, and identifying AI imposters remains a challenge. Amid the excitement, there is one outfit determined to put the troublesome tech to use for average folks. We learn about a recent trial run in Motherboard’s piece, “Researcher Deepfakes His Voice, Uses AI to Demand Refund from Wells Fargo.”

Yes, among other things, Do Not Pay is working to take the tedium out of wrangling with customer service. Writer Joseph Cox describes a video posted on Twitter by founder Joshua Browder in which he uses an AI copy of his voice to request a refund for certain wire transfer fees. In the clip, the tool appears to successfully negotiate with a live representative, though a Wells Fargo spokesperson claims this was not the case and the video was doctored. Browder vigorously insists it was not. We are told Motherboard has requested a recording of the call from Wells Fargo’s side, but the bank had apparently not supplied one as of this writing. Cox writes:

“‘Hi, I’m calling to get a refund for wire transfer fees,’ the fake Browder says around halfway through the clip. The customer support worker then asks for the caller’s first and last name, which the bot dutifully provides. For a while, the bot and worker spar back and forth on which wire transfer fees the bot is calling about, before settling on the fees for the past three months. In a tweet, Browder said the tool was built from a combination of Resemble.ai, a site that lets users create their own AI voices, GPT-J, an open source causal language model, and Do Not Pay’s own AI models for the script. Do Not Pay has previously used AI-powered bots to negotiate Comcast bills. The conversation from this latest bot is very unnatural. There are long pauses where the bot processes what the customer support worker has said, and works on its response. You can’t help but feel bad for the Wells Fargo worker who had to sit silently while the bot slowly did its thing. But in this case, the bot was effective and did manage to secure the refunds, judging by the video.”
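
The architecture Browder describes (a voice clone, a language model, and scripting glue) can be pictured as a simple hear-think-speak loop. The sketch below is purely speculative; every function is a stand-in stub I made up, not a real Resemble.ai or GPT-J API:

```python
# Speculative sketch of the bot loop the article describes.
# All three functions are stubs; the real system would call a
# speech-to-text service, a language model, and a voice cloner.

def transcribe(audio: bytes) -> str:
    """Stub for speech-to-text on the support agent's last utterance."""
    return "May I have your first and last name?"

def generate_reply(history: list[str]) -> str:
    """Stub for the language model that drafts the caller's next line."""
    return "Sure, it's Joshua Browder."

def synthesize(text: str) -> bytes:
    """Stub for the voice-cloning step that speaks the reply aloud."""
    return text.encode("utf-8")

# One turn of the conversation: hear, think, speak.
history = ["Hi, I'm calling to get a refund for wire transfer fees."]
agent_line = transcribe(b"...agent audio...")
history.append(agent_line)
reply = generate_reply(history)
history.append(reply)
audio_out = synthesize(reply)
```

The long pauses Cox mentions would fall between `transcribe` and `synthesize`, while the model works on its response.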

Do Not Pay does plan to make this time-saving tool available to the public, though equipping it with one’s own voice will be a premium option. As uses for deep fake technology go, this does seem like one of the least nefarious. Corporations like Wells Fargo, however, may disagree.

Cynthia Murrell, February 1, 2023

Social Media Scam-A-Rama

January 26, 2023

The Internet is a virtual playground for scam artists.  While it is horrible that bad actors can get away with their crimes, the depth and creativity they bring to the pursuit of “easy money” is also impressive.  Fortune shares the soap-opera-worthy saga: “Social Media Influencers Are Charged With Feeding Followers ‘A Steady Diet Of Misinformation’ In A Pump And Dump Stock Scheme That Netted $100 Million.”

The US Justice Department and the Securities and Exchange Commission (SEC) busted eight social media influencers who specialized in stock market trading advice.  From 2020 to April 2022, they tricked their amateur-investor audience of over 1.5 million Twitter users into funding a “pump-and-dump” scheme.  The scheme worked as follows:

“Seven of the social-media influencers promoted themselves as successful traders on Twitter and in Discord chat rooms and encouraged their followers to buy certain stocks, the SEC said. When prices or volumes of the promoted stocks would rise, the influencers ‘regularly sold their shares without ever having disclosed their plans to dump the securities while they were promoting them,’ the agency said. ‘The defendants used social media to amass a large following of novice investors and then took advantage of their followers by repeatedly feeding them a steady diet of misinformation,’ said the SEC’s Joseph Sansone, chief of the SEC Enforcement Division’s Market Abuse Unit.”

The ring’s eighth member hosted a podcast that promoted the co-conspirators as experts.  The entire group posted about their luxury lifestyles to fool their audiences further about their stock market expertise.

All of the bad actors could face maximum penalties of ten to twenty-five years in prison for fraud and/or unlawful monetary transactions.  The SEC is also cracking down on cryptocurrency schemes, given the large number of celebrities hired to promote them.  The celebrities claim innocence because they were paid to promote a product and were not aware of any scam.

However, how innocent are these people when they use their status to make more money off their fans?  They should follow Shaq’s example and research the products they are associated with before accepting a check…unless they are paid in cryptocurrency.   That would be poetic justice!

Whitney Grace, January 26, 2023

The LaundroGraph: Bad Actors Be On Your Toes

January 20, 2023

Now here is a valuable use of machine learning technology. India’s DailyHunt reveals, “This Deep Learning Technology Is a Money-Launderer’s Worst Nightmare.” The software, designed to help disrupt criminal money laundering operations, is the product of financial data-science firm Feedzai of Portugal. We learn:

“The Feedzai team developed LaundroGraph, a self-supervised model that might reduce the time-consuming process of assessing vast volumes of financial interactions for suspicious transactions or monetary exchanges, in a paper presented at the 3rd ACM International Conference on AI in Finance. Their approach is based on a graph neural network, which is an artificial neural network or ANN built to process vast volumes of data in the form of a graph.”

The AML (anti-money laundering) software simplifies the job of human analysts, who otherwise must manually peruse entire transaction histories in search of unusual activity. The article quotes researcher Mario Cardoso:

“Cardoso explained, ‘LaundroGraph generates dense, context-aware representations of behavior that are decoupled from any specific labels.’ ‘It accomplishes this by utilizing both structural and feature information from a graph via a link prediction task between customers and transactions. We define our graph as a customer-transaction bipartite graph generated from raw financial movement data.’ Feedzai researchers put their algorithm through a series of tests to see how well it predicted suspicious transfers in a dataset of real-world transactions. They discovered that it had much greater predictive power than other baseline measures developed to aid anti-money laundering operations. ‘Because it does not require labels, LaundroGraph is appropriate for a wide range of real-world financial applications that might benefit from graph-structured data,’ Cardoso explained.”

For those who are unfamiliar but curious (like me), navigate to this explanation of bipartite graphs. The future applications Cardoso envisions include detecting other financial crimes like fraud. Since the researchers intend to continue developing their tools, financial crimes may soon become much trickier to pull off.
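
For the concretely minded, here is a tiny hand-rolled sketch (mine, not Feedzai’s code; the node names are invented) of a customer-transaction bipartite graph, with a crude neighborhood-overlap score standing in for the paper’s learned link-prediction task:

```python
# A bipartite graph has two node sets (here: customers and transactions);
# every edge joins a customer to a transaction, never customer-to-customer.
edges = [
    ("cust_A", "txn_1"), ("cust_A", "txn_2"),
    ("cust_B", "txn_2"), ("cust_B", "txn_3"),
    ("cust_C", "txn_4"),
]

def neighbors(node):
    """All nodes sharing an edge with `node`."""
    return {b if a == node else a for a, b in edges if node in (a, b)}

def jaccard(u, v):
    """Crude relatedness score: overlap of two nodes' neighborhoods."""
    nu, nv = neighbors(u), neighbors(v)
    return len(nu & nv) / len(nu | nv) if nu | nv else 0.0

# Customers A and B share txn_2, so even this naive score ranks them
# as more related than A and C, who share nothing.
print(jaccard("cust_A", "cust_B"))  # 0.3333333333333333
print(jaccard("cust_A", "cust_C"))  # 0.0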

Cynthia Murrell, January 20, 2023

The Intelware Sector: In the News Again

January 13, 2023

It’s Friday the 13th. Bad luck day for Voyager Labs, an Israel-based intelware vendor. But maybe there is bad luck for Facebook or Meta or whatever the company calls itself. Will there be more bad luck for outfits chasing specialized software and services firms?

Maybe.

The number of people interested in the savvy software and systems which comprise Israel’s intelware industry is small. In fact, even among some of the law enforcement and intelligence professionals whom I have encountered over the years, awareness of the number of firms, their professional and social linkages, and the capabilities of these systems is modest. NSO Group became the poster company for how some of these systems can be used. Not long ago, the Brennan Center made available some documents obtained via legal means about a company called Voyager Labs.

Now the Guardian newspaper (now begging for dollars with blue and white pleas) has published “Meta Alleges Surveillance Firm Collected Data on 600,000 Users via Fake Accounts.” The main idea of the write up is that an intelware vendor created sock puppet accounts with phony names. Under these fake identities, the investigators gathered information. The write up refers to “fake accounts” and says:

The lawsuit in federal court in California details activities that Meta says it uncovered in July 2022, alleging that Voyager used surveillance software that relied on fake accounts to scrape data from Facebook and Instagram, as well as Twitter, YouTube, LinkedIn and Telegram. Voyager created and operated more than 38,000 fake Facebook accounts to collect information from more than 600,000 Facebook users, including posts, likes, friends lists, photos, comments and information from groups and pages, according to the complaint. The affected users included employees of non-profits, universities, media organizations, healthcare facilities, the US armed forces and local, state and federal government agencies, along with full-time parents, retirees and union members, Meta said in its filing.

Let’s think about this fake account thing. How difficult is it to create a fake account on a Facebook property? About eight years ago, as a test, my team created a fake account for a dog. Not once in those eight years was any attempt made to verify the humanness or the dogness of the animal. The researcher (a special librarian, in fact) set up the account and demonstrated to others on my research team how the Facebook sign up system worked, or did not work, in this particular example. Once logged in, faithful and trusting Facebook seemed to keep our super user logged into the test computer. For all I know, Tess is still logged in, with Facebook doggedly tracking her every move. Here’s Tess:

[Image: Tess the dog]

Tough to see that Tess is not a true Facebook type, isn’t it?

Is the accusation directed at Voyager Labs a big deal? From my point of view, no. The reason that intelware companies use Facebook is that Facebook makes it easy to create a fake account, exercises minimal administrative review of registered users, and prioritizes other activities.

I personally don’t know what Voyager Labs did or did not do. I don’t care. I do know that other firms providing intelware have the capability of setting up, managing, and automating some actions of accounts for either a real human, an investigative team, or another software component or system. (Sorry, I am not at liberty to name these outfits.)

Grab your Tums bottle and consider these points:

  1. What other companies in Israel offer similar alleged capabilities?
  2. Where and when were these alleged capabilities developed?
  3. What entities funded start ups to implement alleged capabilities?
  4. What other companies offer software and services which deliver similar alleged capabilities?
  5. When did Facebook discover that its own sign up systems had become a go to source of social action for these intelware systems?
  6. Why did Facebook ignore its sign up procedures failings?
  7. Are other countries developing and investing in similar systems with these alleged capabilities? If so, name a company in England, France, China, Germany, or the US?

These one-shot “intelware is bad” stories chop indiscriminately. The vendors get slashed. The social media companies look silly for having little interest in “real” identification of registrants. The licensees of intelware look bad because investigations are somehow “wrong.” I think the media outlets reporting on intelware look silly because the depth of the information on which they craft stories strikes me as shallow.

I am pointing out that a bit more diligence is required to understand the who, what, why, when, and where of specialized software and services. Let’s do some heavy lifting, folks.

Stephen E Arnold, January 13, 2023

Spammers, Propagandists, and Phishers Rejoice: ChatGPT Is Here

January 12, 2023

AI-generated art is already receiving tons of backlash from the artistic community, and now writers should tread lightly too, because, according to No Film School, “You Will Be Impacted By AI Writing…Here Is How.” Hollywood is not a friendly place, but it is certainly weird. Scriptwriters deal with all personalities, especially bad actors, who comment on their work. Now AI algorithms will offer notes on their scripts too.

ChatGPT is a new AI tool that blurs the line between art and aggregation because it can “help” scriptwriters with their work, aka make writers obsolete:

“ChatGPT, and programs like it, scan the internet to help people write different prompts. And we’re seeing it begin to be employed by Hollywood as well. Over the last few days, people have gone viral on Twitter asking the AI interface to write one-act plays based on sentences you type in, as well as answer questions….This is what the program spat back out at me:

‘There is concern among some writers and directors in Hollywood that the use of AI in the entertainment industry could lead to the creation of content that is indistinguishable from human-generated content. This could potentially lead to the loss of jobs for writers and directors, as AI algorithms could be used to automate the process of creating content. Additionally, there is concern that the use of AI in Hollywood could result in the creation of content that is formulaic and lacks the creativity and uniqueness that is typically associated with human-generated content.’”

Egads, that is some good copy! AI automation, however, lacks the spontaneity of human creativity. But the machine generated prose is good enough for spammers, propagandists, phishers, and college students.

Humans are still needed to break the formulaic status quo, but Hollywood bigwigs only see dollar signs, not art. AI creates laughable stories, but it is getting better all the time. AI could and might automate the industry, but the human factor is still needed. The bigger question is: How will humanity’s role change in entertainment?

Whitney Grace, January 12, 2023

Cyber Investigators: Feast, Famine, or Poisoned Data in 2023

January 11, 2023

At this moment in time, the hottest topic among some cyber investigators is open source intelligence or OSINT. In 2022, the number of free and for-fee OSINT tools and training sessions grew significantly. Plus, each law enforcement and intelligence conference I attended in 2022 was awash with OSINT experts, exhibitors, and investigators eager to learn about useful sites, Web and command line techniques, and intelware solutions combining OSINT information with smart software. I anticipate that 2023 will be a bumper year for DYOR or do your own research. No collegial team required, just a Telegram group or a Twitter post with comments. The Ukraine-Russia conflict has become the touchstone for the importance of OSINT.

Over pizza, my team and I have been talking about how the OSINT “revolution” will unwind in 2023. On the benefit side of the cyber investigative ledger, OSINT is going to become even more important. After 30 years in the background, OSINT has become the next big thing for investigators, intelligence professionals, entrepreneurs, and Beltway bandits. Systems developed in the US, Israel, and other countries continue to bundle sophisticated analytics plus content. The approach is to migrate basic investigative processes into workflows. A button click automates certain tasks. Some of the solutions have proven themselves to be controversial. Voyager Labs and the Los Angeles Police Department generated attention in late 2021. The Brennan Center released a number of once-confidential documents revealing the capabilities of a modern intelware system. Many intelware vendors have regrouped and appear to be ready to return to aggressive marketing of their systems, built-in data, and smart software. These tools are essential for certain types of investigations, whether in US agencies like Homeland Security or in financial crime investigations at FINCEN. Even state and city entities have embraced the mantra of better, faster, easier, and, in some cases, cheaper investigations.

Another development in 2023 will be more tension between skilled human investigators and increasingly smarter software. The bean counters (accountants) see intelware as a way to reduce the need for headcount (full time equivalents) and up the amount of smart software and OSINT information. Investigators will face an increase in cyber crime. Some involved in budgeting will emphasize smart software instead of human officers. The crypto imbroglio is just one facet of the factors empowering online criminal behavior. Some believe that the Dark Web, CSAM, and contraband have faded from the scene. That’s a false idea. In the last year or so, what my team and I call the “shadow Web” has become a new, robust, yet hard-to-penetrate infrastructure for cyber crime. Investigators now face an environment into which a digital Miracle-Gro has been injected. Its components are crypto, encryption, and specialized software that moves Web sites from Internet host to Internet host in the click of a mouse. Chasing shadows is a task even the most recent intelware systems find difficult to accomplish.

However, my team and I believe there is another downside for law enforcement and a major upside for bad actors: the wide availability of smart software capable of generating misinformation in the form of text, videos, and audio. Unfortunately, today’s intelware is not yet able to flag and filter weaponized information in real time or in a reliable way. OSINT advocates and marketers unfamiliar with the technical challenges of filtering “fake” information downplay the risk of weaponized or poisoned information. A smart software system ingesting masses of digital information can, at this time, learn from bogus data and, therefore, output misleading or incorrect recommendations. In 2023, poisoned data will continue to derail many intelware systems as well as traditional investigations when insufficient staff are available to determine provenance and accuracy. Our research has identified 10 widely used mathematical procedures particularly sensitive to bogus information. Few want to discuss these out-of-sight sinkholes in public forums. Hopefully the reluctance to talk about OSINT blindspots will fade in 2023.
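
To make the poisoning risk concrete, here is a toy illustration of my own (the numbers are made up, and this is not one of the 10 procedures mentioned above): a handful of bogus training points shifts a simple centroid-based classifier enough to flip a verdict.

```python
# Toy data-poisoning demo: a few planted "legitimate" examples drag the
# legit-class centroid toward the suspicious cluster, flipping a verdict.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def classify(x, legit, suspicious):
    """Assign x to whichever class centroid is closer."""
    if abs(x - centroid(legit)) <= abs(x - centroid(suspicious)):
        return "legit"
    return "suspicious"

# Clean training data: legitimate activity clusters near 100,
# suspicious activity near 1000 (the feature could be, say, amount).
legit = [90, 100, 110]
suspicious = [900, 1000, 1100]
query = 600  # a borderline case

before = classify(query, legit, suspicious)

# An adversary slips three bogus "legitimate" examples into the feed.
poisoned_legit = legit + [900, 950, 1000]
after = classify(query, poisoned_legit, suspicious)

print(before, after)  # suspicious legit
```

The same failure mode, scaled up, is why a system that ingests unvetted OSINT can learn from bogus data and emit confident but wrong recommendations.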

The feast? Smart software. Masses of information.

The famine? Funds to expand the hiring of full time (not part time) investigators and the money needed to equip these professionals with high-value, timely instruction about tools, sources, pitfalls, and methods for verification of data.

The poison? The ChatGPT and related tools which can make anyone with basic scripting expertise into a volcano of misinformation.

Let me suggest four steps to begin to deal with the feast, famine, and poison challenges.

First, individuals, trade groups, and companies marketing intelware to law enforcement and intelligence entities must stick to the facts about their systems. The flowery language and the truth-stretching lingo must be dialed back. Why do intelware vendors experience brutal churn among licensees? The distance between the reality of the system and the assertions made to sell it.

Second, procurement processes and procurement professionals must become advocates for reform. Vendors often provide “free” trials and then work to get “on the budget.” The present procurement methods can lead to wasted time, money, and contracting missteps. Outside-the-box ideas like a software sandbox require consideration. (If you want to know more about this, message me.)

Third, consulting firms which are often quick to offer higher salaries to cyber investigators need to evaluate the impact of their actions on investigative units. There is no regulatory authority monitoring the behavior of these firms. The Wild West of cyber investigator poaching hampers some investigations. Legislation perhaps? More attention from the Federal Trade Commission maybe? Putting the needs of the investigators ahead of the needs of the partners in the consulting firms?

Fourth, a stepped up recruitment effort is needed to attract investigators to the agencies engaged in dealing with cyber crime. In my years of work for the US government and related entities, I learned that government units are not very good at identifying, enlisting, and retaining talent. This is an administrative function that requires more attention from individuals with senior administrative responsibilities. Perhaps 2023 will generate some progress in this core personnel function.

Don’t get me wrong. I am optimistic about smart software. I believe techniques to identify and filter weaponized information can be enhanced and improved. I am confident that forward leaning professionals in government agencies can have a meaningful impact on institutionalized procedures and methods associated with fighting cyber crime.

My team and I are committed to conducting research and sharing our insights with law enforcement and intelligence professionals in 2023. My hope is that others will adopt a similar “give back” and “pay it forward” approach in 2023 in the midst of feasts, famines, and poisoned data.

Thank you for reading. — Stephen E Arnold, January 11, 2023

Google: Do Small Sites Need Anti Terrorism Help or Is the Issue Better Addressed Elsewhere?

January 3, 2023

Are “little sites” really in need of Google’s anti-terrorism tool? Oh, let me be clear. Google is — according to “Google Develops Free Terrorism-Moderation Tool for Smaller Websites” — in the process of creating Googley software. This software will be:

a free moderation tool that smaller websites can use to identify and remove terrorist material, as new legislation in the UK and the EU compels Internet companies to do more to tackle illegal content.

And what institutions are working with Google on this future software? The article reports:

The software is being developed in partnership with the search giant’s research and development unit Jigsaw and Tech Against Terrorism, a UN-backed initiative that helps tech companies police online terrorism.

What’s interesting to me is that this to-be software or filtering system is merely in development. The software, it seems, does not yet exist.

Why would Google issue statements about vaporware?

The article provides a clue:

The move comes as Internet companies will be forced to remove extremist content from their platforms or face fines and other penalties under laws such as the Digital Services Act in the EU, which came into force in November, and the UK’s Online Safety bill, which is expected to become law this year.

I understand. Google’s management understands that regulation and fines are not going away in 2023. It is logical, therefore, to get in front of the problem. How does Google propose to do this?

Yep, vaporware. (I have a hunch there is a demonstration available.) Nevertheless, the genuine article is not yet available to small Web sites, which need help in coping with terrorism-related content.

How will the tool work? The article states:

Jigsaw’s tool aims to tackle the next step of the process and help human moderators make decisions on content flagged as dangerous and illegal. It will begin testing with two unnamed sites at the beginning of this year.

Everything sounds good when viewed from the top of Mount Public Relations, where the vistas are clear and the horizons are unlimited.

I want to make one modest observation: Small Web sites run on hosting services. These hosting services are, in my opinion, more suitable locations for filtering software. The problem is that hosting providers comprise a complex and diverse group of enterprises. In fact, I have yet to receive from my research team a count of service providers that is accurate and comprehensive.

Pushing the responsibility to the operator of a single Web site strikes me as a non-functional approach. Would it make sense for Google’s tool to be implemented by service providers instead? The content resides on the service providers’ equipment or co-located hardware and in the data streams of virtual private systems or virtual private servers. The terrorism-related content would be easier to block there.

Let’s take a reasonable hosting service; for example, Hetzner in Germany or OVHcloud in France. The European Union could focus on these enabling nodes and implement either the Google system, if and when it becomes available and actually works, or an alternative filtering method devised by a European team. (I would suggest that Europol or similar entities can develop the needed filters, test them, and maintain them.) (Google has a tendency to create or talk about solutions and then walk away after a period of time. Remember Google’s Web Accelerator?)
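
The mechanics of the Jigsaw tool are not public, but moderation at the hosting layer is often built on shared lists of digests of known illegal content. Here is a minimal sketch of that idea (the function and variable names are mine, and the “payload” is a placeholder string):

```python
# Illustrative only: a hosting provider checks each upload's digest
# against a shared blocklist of known bad content.
import hashlib

# Digests of content already identified as terrorist material
# (in practice these lists are maintained and shared by consortia).
known_bad_hashes = {
    hashlib.sha256(b"example extremist payload").hexdigest(),
}

def should_block(uploaded_bytes: bytes) -> bool:
    """Flag an upload whose SHA-256 digest matches the blocklist."""
    return hashlib.sha256(uploaded_bytes).hexdigest() in known_bad_hashes

print(should_block(b"example extremist payload"))  # True
print(should_block(b"harmless cat photo"))         # False
```

Real deployments tend to use perceptual hashes rather than exact digests, so trivially altered copies of a file still match; the exact-match version above is the simplest possible form of the idea.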

Based on our research for an upcoming presentation to a group of investigators focused on cyber crime, service providers (what I call enablers) should be the point of attention in an anti-terrorism action. Furthermore, these enablers are also pivotal in facilitating certain types of online crime. Examples abound. These range from right-wing climate activists using services in Romania to child pornography hosted on what we call “shadow ISPs.” These shadow enablers operate specialized services specifically to facilitate illegal activities within specialized software like The Onion Router and other obfuscation methods.

For 2023, I advocate ignoring PR-motivated “to be” software. I think the efforts of national and international law enforcement should be directed at the largely unregulated and often reluctant “enablers.” I agree that some small Web site operators could do more. But I think it is time to take a closer look at enablers operating from vacant lots in the Seychelles and to hold responsible the service providers running cyber fraud operations.

Fixing the Internet requires consequences. Putting the focus on small Web sites is a useful idea. But turning up the enforcement and regulatory heat on the big outfits will deliver more heat where being “chill” has allowed criminal activity to flourish. I have not mentioned the US and Canada. I have not forgotten that there are enablers operating in plain sight in such places as Detroit and Québec City. Google’s PR play is a way to avoid further legal and financial hassles.

It is time to move from “to be” software to “taking purposeful, intentional action.”

Stephen E Arnold, January 3, 2023

Identity Theft Made Easy: Why?

December 30, 2022

Some automobiles are lemons, aka money holes, because they have defects that keep surfacing. Many services are like that as well, including rental car insurance, extended warranties on electronics, and identity theft protection. Lifehacker explains why identity theft protection services are a scam in the story: “Identity Theft Protection Is Mostly Bullshit.”

Most Americans receive emails or physical letters from their place of work, medical offices, insurance agencies, etc. informing them that their personal information was involved in a data breach. As a token of atonement, victims are given free identity theft protection (ITP), aka a useless service. These services promise to monitor the Internet and Dark Web for your personal information, anything from your credit cards to your Social Security number. Identity theft victims deal with ruined credit scores and possibly stolen funds. Identity theft protection services seem like a good idea, until you realize that you can do the monitoring yourself for free.

ITP services monitor credit reports, social media accounts, the Dark Web, and personal financial accounts. Some of these services such as credit reports and your financial accounts will alert you when there is suspicious activity. You can do the following for free:

“You can access your credit reports for free once a year. And you should! It’s a fast and pretty straightforward operation, and at a glance you can see if someone has opened a credit card or taken out a loan in your name. In fact, the number one best way to stop folks from stealing your identity is to freeze your credit, which prevents anyone—even if they have your personal information—from getting a new credit card or loan. While this doesn’t protect you from every single kind of fraud out there, it removes the most common vectors that identity thieves use.”

The US government also maintains a Web site to assist identity theft victims. It is wise to remember that ITP services are different from identity theft insurance. The latter works like regular insurance, except it is meant to cover costs when your identity is stolen.

Practice good identity hygiene by monitoring your accounts and not posting too much personal information online.

Why is identity theft like a chicken wing left on a picnic table? Careless human or indifferent maintenance worker?

Whitney Grace, December 30, 2022

Need a Human for Special Work? Just Buy One Maybe?

December 29, 2022

Is it possible to purchase a person? Judging from the rumors I have heard in rural Romania, outside the airport in Khartoum, and in a tavern in Tirana — I would suggest that the answer is “possibly.” The Times of London is not into possibilities if the information in “Maids Trafficked and Sold to Wealthy Saudis on Black Market” is accurate. Keep in mind that I am aware of what I call open source information blindspots. Shaped, faked, and weaponized information is now rampant.

The article focuses on an ecommerce site called Haraj.sa. The article explains:

[The site] Saudi Arabia’s largest online marketplace, through which a Times investigation shows that hundreds of domestic workers are being illegally trafficked and sold to the highest bidders.

Furthermore, the Times adds:

The app, which had 2.5 million visits last year — more than Amazon or AliExpress within the kingdom — is still available on the Apple and Google Play stores despite being criticised by the UN’s Special Rapporteurs in 2020 for facilitating modern slavery.

If true, the article is likely to make for some uncomfortable days for several parties as the world swings into 2023; specifically:

  1. The Saudi government
  2. Apple
  3. Google
  4. Assorted law enforcement professionals.

If the information in the write up is accurate, several of the newspaper’s solicitors will be engaged in conversations with other parties’ solicitors. I assume that there will be some conversations in Mayfair and Riyadh about the article. Will Interpol become curious? Probably.

Let’s step back and ask some different questions. I am assuming that some of the information in the article is “correct”; that is, one can verify screenshots or chase down the source of the information. Maybe the lead journalist will consent to an interview on a true crime podcast. Whatever.

Consider these questions:

  1. Why release the story at the peak of some countries’ holiday season? Is the timing designed to minimize or emphasize the sensitive topic of alleged slavery, the Kingdom’s conventions, or the apparent slipshod app review process at controversial US high technology companies?
  2. What exactly did or do Apple and Google know about the app for the Haraj marketplace? If the Times’ story is accurate, what management issue exists at each of these large, but essential to some, companies?
  3. Is the ecommerce site operating within the Kingdom’s cultural norms or is the site itself breaking outside legal guidelines? What does Saudi Arabia say about this site?

To sum up, human trafficking is a concern for many individuals, government entities, and non-governmental organizations. I keep coming back to the question “Why now?” The article states:

Apple said: “We strictly prohibit the solicitation or promotion of illegal behaviour, including human trafficking and child exploitation, in the App Store and across every part of our business. We take any accusations or claims around this behaviour very seriously.” Google declined to comment. Haraj, Saudi Arabia’s human rights commission and the government have been contacted for a response.

Perhaps taking more time to obtain comments would have been useful? What’s the political backstory for the disclosure of the allegedly accurate information during the holiday season? Note that the story is behind a paywall, which further limits its diffusion.

Net net: Many questions have I.

Stephen E Arnold, December 29, 2022

Cyber Security: Is It Time for a Brazen Bull?

December 28, 2022

The cyber security industry has weathered Covid, mergers, acquisitions, system failures, and — excuse the lousy pun — solar winds. The flow of exploits with increasingly poetic names continues; for example, Azov, Zerobot, Killnet, etc. However, the cyber defense systems suffer from what one might call a slight misalignment. Bad actors find ways to [a] compromise humans to get user names and passwords, and [b] exploit what is now the industry standard for excellence (MVP or minimum viable product, good-enough engineering, and close-enough-for-horseshoes technology) in any gizmo or process connected to something connected to a public-facing network. The list of “bad” actors is a lengthy one. It includes bird-owning individuals in the UK, assorted government agencies hostile to the US, students in computer science class or hanging out in a coffee shop, and double agents with computing know-how.

To add to the pain of cyber security, there are organizations which do great marketing but deliver less great systems. “What’s in a PR Statement: LastPass Breach Explained” discusses a serious problem which underscores a number of issues.

LastPass is a product with a past reaching back more than a decade. The software made it easier for a user to keep track of which user name and password were whipped up to log into an online service or software. Over the years, PC Magazine found the password manager excellent. (Software can be excellent? Who knew?) Wikipedia has a list of “issues” the security software has faced over the years. You can find that information here. More amusing is security expert Steve Gibson’s positive review of LastPass. Should you have the time, you can read about that expert’s conclusions from 2010 here.
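For readers wondering why a stolen vault is such a problem: password managers of this kind encrypt the vault client-side with a key derived from the master password, so an attacker who steals the encrypted blob can guess master passwords offline, limited only by the cost of the key derivation. A minimal sketch of that derivation step, using Python’s standard library (the function name and iteration count are illustrative assumptions, not LastPass’s actual implementation):

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 600_000) -> bytes:
    """Derive a symmetric key from a master password via PBKDF2-HMAC-SHA256.

    The iteration count is the main defense once an encrypted vault is
    stolen: every guess an offline attacker makes must pay this cost.
    """
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode("utf-8"), salt, iterations
    )

salt = os.urandom(16)           # per-user salt stored alongside the vault
key = derive_vault_key("correct horse battery staple", salt)
# key is a 32-byte value suitable for AES-256 vault encryption
```

The sketch also shows why breach write-ups fixate on iteration counts and master password strength: with a weak master password or a low iteration setting, the derivation above is cheap enough to brute-force against a stolen vault.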

But what does the PR statement article say? Here are a couple of snippets from the cited December 26, 2022, essay:

Snippet 1: Right before the holiday season, LastPass published an update on their breach. As people have speculated, this timing was likely not coincidental but rather intentional to keep the news coverage low. …Their statement is also full of omissions, half-truths and outright lies.

Harsh.

Snippet 2: Again, it seems that LastPass attempts to minimize the risk of litigation (hence alerting businesses) while also trying to prevent a public outcry (so not notifying the general public). Priorities…

My take on LastPass is that the company is doing what other cyber security firms do: Manage information about problems.

Let’s talk about cyber security on a larger stage. How does a global scale sound?

First, security is defined by [a] what bad actors have been discovered to do and [b] marketing. A breach occurs. A fix — ideally one enabled by artificial intelligence and chock full of predictive analytics — is created and marketed. Does the fix work? How about those Exchange Server exploits or those 24×7 phishing attacks? The point for me is that cyber security seems to be reactive; that is, dictated by what bad actors do.

Second, the “fix” is verified by whom and what? In the US there are Federal cyber groups. There are state cyber groups. There are cyber associations. There are specialty labs in fun places like Quantico. For a LastPass incident, which cowpoke moves the cow along? The point: Bureaucracy, friction, artificial barriers, time, expertise, money, and more.

Third, technical layoffs and time mean that cyber crime may be an attractive business opportunity for some.

Considering these three points, I want to hazard several observations:

  1. Cyber security may be an oxymoron
  2. Bad actors have the advantages granted by good enough software and systems, tools, talent, and time
  3. Users and customers who purchase security may be faced with a continual flow of surprises

What’s the fix? May I suggest that we consider bringing back the Bull of Phalaris, aka the brazen bull.

The “bull” is fabricated of a suitable metal; for example, bronze. The inside of the bull is hollow. A trapdoor allows access to the interior space. When the trapdoor is closed, there is an opening from the interior to the bull’s nose. The malefactor — let’s say a venture firm’s managing director who is rolling up cyber security companies with flawed software — is placed inside the bull. A fire is built beneath the bull, and the shouts and possibly other noises are emitted from the opening in the bull’s nose.

The use of the brazen bull for software developers pumping out “good enough” cyber security solutions can be an option as well. Once law enforcement snags the head of a notorious hacking gang, the bull will be pressed into duty. Keep in mind that Microsoft blamed 1,000 cyber warriors working in a country hostile to the US for the SolarWinds misstep. This would necessitate more bulls, which would provide meaningful work to some.

I would advocate that marketer types who sell cyber security systems which don’t work be included in the list of individuals who can experience the thrill of the brazen bull.

My thought is that the use of the brazen bull with clips released as short videos would capture some attention.

What’s going on now is not getting through. More robust measures are necessary. No bull.

Stephen E Arnold, December 28, 2022
