AI Is a Rainmaker for Bad Actors
November 16, 2023
This essay is the work of a dumb dinobaby. No smart software required.
How has smart software, readily available as open source code and low-cost online services, affected cyber crime? Please, select from one of the following answers. No cheating allowed.
[a] Bad actors love smart software.
[b] Criminals are exploiting smart orchestration and business process tools to automate phishing.
[c] Online fraudsters have found that launching repeated breaching attempts is faster and easier when AI is used to adapt to server responses.
[d] Finding mules for drug and human trafficking is easier than ever because social media requests for interested parties can be cranked out at high speed 24×7.
“Well, Slim, your idea to use that new fangled smart software to steal financial data is working. Sittin’ here counting the money raining down on us is a heck of a lot easier than robbing old ladies in the Trader Joe’s parking lot,” says the bad actor with the coffin nail of death in his mouth and the ill-gotten gains in his hands. Thanks, Copilot, you are producing nice cartoons today.
And the correct answer is … a, b, c, and d.
For some supporting information, navigate to “Deepfake Fraud Attempts Are Up 3000% in 2023. Here’s Why.” The write up reports:
Face-swapping apps are the most common example. The most basic versions crudely paste one face on top of another to create a “cheapfake.” More sophisticated systems use AI to morph and blend a source face onto a target, but these require greater resources and skills. The simple software, meanwhile, is easy to run and cheap or even free. An array of forgeries can then be simultaneously used in multiple attacks.
I like the phrase “cheapfake.”
Several observations:
- Bad actors, unencumbered by bureaucracy, can download, test, tune, and deploy smart criminal actions more quickly than law enforcement can thwart them
- Existing cyber security systems are vulnerable to some smart attacks because AI can adapt and try different avenues
- Large volumes of automated content can be created and emailed without the hassle of manual content creation
- Cyber security vendors operate in “react mode”; that is, once a problem is discovered, the good actors develop a defense. The advantage goes to those with a good offense, not a good defense.
Net net: 2024 will be fraught with security issues.
Stephen E Arnold, November 17, 2023
Cyberwar Crimes? Yep and Prosecutions Coming Down the Pike
November 15, 2023
This essay is the work of a dumb humanoid. No smart software required.
Existing international law has appeared hamstrung in the face of cyber-attacks for years, with advocates calling for new laws to address the growing danger. It appears, however, that step will no longer be necessary. Wired reports, “The International Criminal Court Will Now Prosecute Cyberwar Crimes.” The Court’s lead prosecutor, Karim Khan, acknowledged in an article published by Foreign Policy Analytics that cyber warfare inflicts serious harm in the real world. Attacks on critical infrastructure like medical facilities and power grids may now be considered “war crimes, crimes against humanity, genocide, and/or the crime of aggression” as defined in the 1998 Rome Statute. That is great news, but why now? Writer Andy Greenberg tells us:
“Neither Khan’s article nor his office’s statement to WIRED mention Russia or Ukraine. But the new statement of the ICC prosecutor’s intent to investigate and prosecute hacking crimes comes in the midst of growing international focus on Russia’s cyberattacks targeting Ukraine both before and after its full-blown invasion of its neighbor in early 2022. In March of last year, the Human Rights Center at UC Berkeley’s School of Law sent a formal request to the ICC prosecutor’s office urging it to consider war crime prosecutions of Russian hackers for their cyberattacks in Ukraine—even as the prosecutors continued to gather evidence of more traditional, physical war crimes that Russia has carried out in its invasion. In the Berkeley Human Rights Center’s request, formally known as an Article 15 document, the Human Rights Center focused on cyberattacks carried out by a Russian group known as Sandworm, a unit within Russia’s GRU military intelligence agency. Since 2014, the GRU and Sandworm, in particular, have carried out a series of cyberwar attacks against civilian critical infrastructure in Ukraine beyond anything seen in the history of the internet.”
See the article for more details of Sandworm’s attacks. Greenberg consulted Lindsay Freeman, the Human Rights Center’s director of technology, law, and policy, who expects the ICC is ready to apply these standards well beyond the war in Ukraine. She notes the 123 countries that signed the Rome Statute are obligated to detain and extradite convicted war criminals. Another expert, Strauss Center director Bobby Chesney, points out Khan paints disinformation as a separate, “gray zone.” Applying the Rome Statute to that tactic may prove tricky, but he might make it happen. Khan seems determined to hold international bad actors to account as far as the law will possibly allow.
Cynthia Murrell, November 15, 2023
The Risks of Smart Software in the Hands of Fullz Actors and Worse
November 7, 2023
This essay is the work of a dumb humanoid. No smart software required.
The ChatGPT and Sam AI-Man parade is getting more acts. I spotted some thumbs up from Satya Nadella about Sam AI-Man and his technology. The news service Techmeme provided me with dozens of links and enticing headlines about enterprise this and turbo that GPT. Those trumpets and tubas were pumping out the digital version of Funiculì, Funiculà.
I want to highlight one write up and point out an issue with smart software that appears to have been ignored or overlooked, and which, like the iceberg that sank the RMS Titanic, may be a heck of a lot more dangerous than Captain Edward Smith appreciated.
The crowd is thrilled with the new capabilities of smart software. Imagine automating mundane, mindless work. Over the oom-pah of the band, one can sense the excitement of the Next Big Thing getting Bigger and more Thingier. In the crowd, however, are real or nascent bad actors. They are really happy too. Imagine how easy it will be to automate processes designed to steal personal financial data or other chinks in humans’ armor!
The article is “How OpenAI Is Building a Path Toward AI Agents.” The main idea is that one can type instructions into Sam AI-Man’s GPT “system” and have smart software hook together discrete functions. These functions can then deliver an output requiring the actions of different services.
The write up approaches this announcement or marketing assertion with some prudence. The essay points out that “customer chatbots aren’t a new idea.” I agree. Connecting services has been one of the basic ideas of the use of software. Anyone who has used notched cards to retrieve items related to one another is going to understand the value of automation. And now, if the Sam AI-Man announcements are accurate, that capability no longer requires old-fashioned learning of the ropes.
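Connecting discrete functions is easy to illustrate. Here is a minimal sketch of the “orchestration” idea: chaining small steps so one output feeds the next. Every function name below is a hypothetical illustration invented for this example, not a real OpenAI, calendar, or email API.

```python
# Sketch of "orchestration": discrete functions chained so that each
# step's output feeds the next. All names here are hypothetical
# illustrations, not real OpenAI or productivity-suite APIs.

def find_meeting(calendar, topic):
    # Pick the first calendar entry whose title mentions the topic.
    return next(e for e in calendar if topic in e["title"])

def draft_email(meeting, recipients):
    # Produce a plain-text email announcing the meeting.
    body = f"Reminder: '{meeting['title']}' at {meeting['time']}."
    return {"to": recipients, "subject": meeting["title"], "body": body}

def attach_document(email, doc_name):
    # Record a document reference on the drafted email.
    email["attachments"] = [doc_name]
    return email

# Orchestrate: calendar -> email -> document, one output feeding the next.
calendar = [{"title": "New marketing initiative", "time": "Tue 10:00"}]
email = attach_document(
    draft_email(find_meeting(calendar, "marketing"), ["team@example.com"]),
    "marketing-brief.docx",
)
print(email["subject"], email["attachments"])
```

The point of the sketch is that nothing in the chain cares what the steps actually do; swap in different functions and the same plumbing automates a very different workflow, which is exactly the double-edged quality discussed below.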
The cited write up about building a path asserts:
Once you start enabling agents like the ones OpenAI pointed toward today, you start building the path toward sophisticated algorithms manipulating the stock market; highly personalized and effective phishing attacks; discrimination and privacy violations based on automations connected to facial recognition; and all the unintended (and currently unimaginable) consequences of infinite AIs colliding on the internet.
Fear, uncertainty, and doubt are staples of advanced technology. And the essay makes clear that the rule maker in chief is Sam AI-Man; to wit the essay says:
After the event, I asked Altman how he was thinking about agents in general. Which actions is OpenAI comfortable letting GPT-4 take on the internet today, and which does the company not want to touch? Altman’s answer is that, at least for now, the company wants to keep it simple. Clear, direct actions are OK; anything that involves high-level planning isn’t.
Let me introduce my observations about the Sam AI-Man innovations and the type of explanations about the PR and marketing event which has whipped up pundits, poohbahs, and Twitter experts (perhaps I should say X-spurts?).
First, the Sam AI-Man announcements strike me as making orchestration a service that is easy to use and widely available. Bad things won’t be allowed. But the core idea of what I call “orchestration” is where the parade is marching. I hear the refrain “Some think the world is made for fun and frolic.” But I don’t agree, I don’t agree. Because as advanced tools become widely available, the early adopters are not exclusively those who want to link a calendar to an email to a document about a meeting to talk about a new marketing initiative.
Second, the ability of Sam AI-Man to determine what’s in bounds and out of bounds is different from refereeing a pickleball game. Some of the players will be nation states with an adversarial view of the US of A. Furthermore, there are bad actors who have a knack for linking automated information to online extortion. These folks will be interested in cost cutting and efficiency. More problematic, some of these individuals will be more active in testing how orchestration can facilitate their human trafficking activities or drug sales.
Third, government entities and people like Sam AI-Man are, by definition, now in reactive mode. What I mean is that with the announcement and the chatter about automating the work required to create a snappy online article is not what a bad actor will do. Individuals will see opportunities to create new ways to exploit the cluelessness of employees, senior citizens, and young people. The cheerful announcements and the parade tunes cannot drown out the low frequency rumbles of excitement now rippling through the bad actor grapevines.
Net net: Crime propelled by orchestration is now officially a thing. The “regulations” of smart software, like the professionals who will have to deal with the downstream consequences of automation, are out of date. Am I worried? For me personally, no, I am not worried. For those who have to enforce the laws which govern a social construct? Yep, I have a bit of concern. Certainly more than those who are laughing and enjoying the parade.
Stephen E Arnold, November 7, 2023
Social Media: A No-Limits Zone for Scammers
November 6, 2023
This essay is the work of a dumb humanoid. No smart software required.
Scams have plagued social media since its inception, and the problem is only getting worse. The FTC described the current state of social media scams in “Social Media: A Golden Goose For Scammers.” Scammers and other bad actors are hiding in plain sight on popular social media platforms. The FTC’s Consumer Sentinel Network reported that one in four people who lost money to fraud said the scam began on social media. In total, people reported losing $2.7 billion to social media scams, but the real number could be greater because most cases aren’t reported.
It’s sobering the way bad actors target victims:
“Social media gives scammers an edge in several ways. They can easily manufacture a fake persona, or hack into your profile, pretend to be you, and con your friends. They can learn to tailor their approach from what you share on social media. And scammers who place ads can even use tools available to advertisers to methodically target you based on personal details, such as your age, interests, or past purchases. All of this costs them next to nothing to reach billions of people from anywhere in the world.”
Scammers do not discriminate by age. Surprisingly, younger groups lost the most to bad actors. Among people aged 18 to 19 who reported fraud in the first six months of 2023, 47% of those reports began on social media, compared with 38% for people aged 20 to 29. The shares decrease with age, in line with older generations’ lighter social media use.
The most commonly reported scams were related to online shopping, usually involving people who tried to buy something marketed on social media; these accounted for 44% of reports from January through June 2023. Fake investment opportunities, most of them cryptocurrency operations, generated the largest share of dollar losses at 53%. Romance scams produced the second highest losses for victims. These encounters start innocuously enough but end with love bombing and requests for money.
Take precautions: make your social media profiles private, investigate if a friend suddenly asks you for money, don’t instantly fall in love with random strangers, and research companies before you make investments. It’s all old, yet sagacious, advice for the digital age.
Whitney Grace, November 6, 2023
Europol Focuses on Child Centric Crime
October 16, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Children are the most vulnerable and exploited population in the world. The Internet unfortunately aids bad actors by allowing them to distribute child sexual abuse material, aka CSAM, while evading censors. Europol (the European Union’s law enforcement agency) wants to end CSAM by overriding Europeans’ privacy rights. Tech Dirt explores the idea in the article, “Europol Tells EU Commission: Hey, When It Comes To CSAM, Just Let Us Do Whatever We Want.”
Europol wants unfiltered access to an EU-proposed AI algorithm, and its data, programmed to scan online content for CSAM. The police agency also wants to use the same AI to detect other crimes. This information came from a July 2022 high-level meeting that involved Europol Executive Director Catherine de Belle and the European Commission’s Director-General for Migration and Home Affairs Monique Pariat. Europol pitched this idea when the EU believed it would mandate client-side scanning on service providers.
Privacy activists and EU member nations vetoed the idea because it would allow anyone to eavesdrop on private conversations; they also found it violated privacy rights. Europol used the common refrain “for the children” or “save the children” to justify the proposal. Law enforcement, politicians, religious groups, and parents have spouted that rhetoric for years; it makes anyone who raises more nuanced objections appear to side with pedophiles.
“It shouldn’t work as well as it does, since it’s been a cliché for decades. But it still works. And it still works often enough that Europol not only demanded access to combat CSAM but to use this same access to search for criminal activity wholly unrelated to the sexual exploitation of children… Europol wants a police state supported by always-on surveillance of any and all content uploaded by internet service users. Stasi-on-digital-steroids. Considering there’s any number of EU members that harbor ill will towards certain residents of their country, granting an international coalition of cops unfiltered access to content would swiftly move past the initial CSAM justification to governments seeking out any content they don’t like and punishing those who dared to offend their elected betters.”
There’s also evidence that law enforcement officials and politicians work in the public sector to enforce anti-privacy laws, then leave for the private sector. Once there, they work at companies that sell surveillance technology to governments. Is that a type of insider trading or nefarious influence?
Whitney Grace, October 16, 2023
India: Okay, No More CSAM or Else the Cash Register Will Ring
October 12, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
X (the Tweeter thing), YouTube, and Telegram get a tough assignment. India wants child sexual abuse material (CSAM, for those who favor acronym speak) scrubbed from content and services delivered in the great nation of India. There are some interesting implications for these US technology giants. First, the outfits are accustomed to just agreeing and not doing much to comply with government suggestions. In fact, most of the US high-tech firms offer promises, and those can be slippery fish. Second, determining what is and what is not CSAM can be a puzzler as well. Bad actors are embracing smart software and generating some realistic images and videos without having to find, coerce, film, and pay off the humans involved in the distasteful but lucrative business. Questions about the age of a synthetic child porno star are embarrassing to ask and debate. Remember the need for a diverse group to deliberate about such matters. Also, the advent of smart software invites orchestration, so that text prompts can be stuffed into a system. The system happily outputs videos with more speed than a human adult-industry star speeding to a shoot after a late call. Zeros and ones are likely to take over CSAM because … efficiency.
“India Tells X, YouTube, Telegram to Remove Any Child Sexual Abuse Material from Platforms” reports:
The companies could be stripped of their protection from legal liability if they don’t comply, the government said in a statement. The notices, sent by the federal Ministry of Electronics and Information Technology (MEITY), emphasized the importance of prompt and permanent removal of any child sexual abuse material on these platforms.
My dinobaby perspective is that [a] these outfits cannot comply because neither smart software nor legions of human content curators can keep up with the volume of videos and images pumped by these systems. [b] India probably knows that the task is a tough one and may be counting on some hefty fines to supplement other sources of cash for a delightful country. [c] Telegram poses a bit of a challenge because bad actors use Dark Web and Clear Web lures to attract CSAM addicts and then point to a private Telegram group to pay for and get delivery of the digital goods. That encryption thing may be a sticky wicket.
Net net: Some high-tech outfits may find doing business in India hotter than a Chettinad masala.
Stephen E Arnold, October 13, 2023
Cognitive Blind Spot 4: Ads. What Is the Big Deal Already?
October 11, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Last week, I presented a summary of Dark Web Trends 2023, a research update my team and I prepare each year. I showed a visual of the ads on a Dark Web search engine. Here’s an example of one of my illustrations:
The TorLanD service, when it is accessible via Tor, displays a search box and advertising. What is interesting about this service and a number of other Dark Web search engines is the ads. The search results are so-so, vastly inferior to those information retrieval solutions offered by intelware vendors.
Some of the ads appear on other Dark Web search systems as well; for example, Bobby and DarkSide, among others. The advertisements offer a range of interesting content. The TorLanD screenshot pitches carding, porn, drugs, gadgets (skimmers and software), and other illegal goods. I pointed out that the ads on TorLanD looked a lot like the ads on Bobby; for instance:
I want to point out that the Silk Road 4.0 and the Gadgets, Docs, Fakes ads are identical. Notice also that TorLanD advertises on Bobby. The Helsinki Drug Marketplace on the Bobby search system offers heroin.
Most of these ads are trade outs. The idea is that one Dark Web site will display an ad for another Dark Web site. There are often links to Dark Web advertising agencies as well. (For this short post, I won’t be listing these vendors, but if you are interested in this research, contact benkent2020 at yahoo dot com. One of my team will follow up and explain our for-fee research policy.)
The point of these two examples is to make clear that advertising has become normalized, even among bad actors. Furthermore, few are surprised that bad actors (or alleged bad actors) communicate, pat one another on the back, and support an ecosystem for buying and selling space on the increasingly small Dark Web. Please note that advertising appears in public and private Telegram groups focused on the topics referenced in these Dark Web ads.
Can you believe the ads? Some people do. Users of the Clear Web and the Dark Web are conditioned to accept ads and to believe that these are true, valid, useful, and intended to make it easy to break the law and buy a controlled substance or CSAM. Some ads emphasize “trust.”
People trust ads. People believe ads. People expect ads. In fact, one can poke around and identify advertising and PR agencies touting the idea that people “trust” ads, particularly those with brand identity. How does one build brand? Give up? Advertising and weaponized information are two ways.
The cognitive bias at work is that people embrace advertising. Look at a page of Google results: which are labeled as ads, and which are ads that go unidentified? What happens when ads are indistinguishable from plausible messages? Some online companies offer stealth ads. On the Dark Web pages illustrating this essay, some ads are placed by law enforcement agencies masquerading as bad actors. Can you identify one such ad? What about messages on Twitter designed to be difficult to spot as paid messages or weaponized content? For one take on Twitter technology, read “New Ads on X Can’t Be Blocked or Reported, and Aren’t Labeled as Advertisements.”
Let me highlight some of the functions on online ads like those on the Dark Web sites. I will ignore the Clear Web ads for the purposes of this essay:
- Click on the ad and receive malware
- Visit the ad and explore the illegal offer so that the site operator can obtain information about you
- Sell you a product and obtain the identifiers you provide, a deliver address (either physical or digital), or plant a beacon on your system to facilitate tracking
- Gather emails for phishing or other online initiatives
- Blackmail.
I want to highlight advertising as a vector of weaponization for three reasons: [a] People believe ads. I know it sounds silly, but ads work. People suspend disbelief when an ad on a service offers something that sounds too good to be true; [b] many people do not question the legitimacy of an ad or its message. Ads are good; ads are everywhere; and [c] ads are essentially unregulated.
What happens when everything drifts toward advertising? The cognitive blind spot kicks in and one cannot separate the false from the real.
Public service note: Before you explore Dark Web ads or click links on social media services like Twitter, consider that these are vectors which can point to quite surprising outcomes. Intelligence agencies outside the US use Dark Web sites as a way to harvest useful information. Bad actors use ads to rip off unsuspecting people, like the doctor who once lived two miles from my office and ordered a Dark Web hitman to terminate an individual.
Ads are unregulated and full of surprises. But the cognitive blind spot for advertising guarantees that the technique will flourish and gain technical sophistication. Are those objective search results useful information or weaponized? Will the Dark Web vendor really sell you valid stolen credit cards? Will the US postal service deliver an unmarked envelope chock full of interesting chemicals?
Stephen E Arnold, October 11, 2023
Savvy GenZs: Scammers Love Those Kids
October 3, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Many of us assumed the generation that has grown up using digital devices would be the most cyber-crime savvy. Apparently not. Vox reports, “Gen Z Falls for Online Scams More than their Boomer Grandparents Do.” Writer A.W. Ohlheiser cites a recent Deloitte survey that found those born between 1997 and 2012 were three times more likely to fall victim to an online scam than Boomers, twice as likely to have their social media accounts hacked, and more likely to have location information misused than any other generation.
One might think they should know better and, apparently, they do: the survey found Gen Z respondents to be quite aware of cybersecurity issues. The problem may instead lie in the degree to which young people are immersed in the online world(s). We learn:
“There are a few theories that seem to come up again and again. First, Gen Z simply uses technology more than any other generation and is therefore more likely to be scammed via that technology. Second, growing up with the internet gives younger people a familiarity with their devices that can, in some instances, incentivize them to choose convenience over safety. And third, cybersecurity education for school-aged children isn’t doing a great job of talking about online safety in a way that actually clicks with younger people’s lived experiences online.”
So one thing we might do is adjust our approach to cybersecurity education in schools. How else can we persuade Gen Z to accept hassles like two-factor authentication in the interest of safety? Maybe that is the wrong question. Ohlheiser consulted 21-year-old Kyla Guru, a Stanford computer science student and founder of a cybersecurity education organization. The article suggests:
“Instead, online safety best practices should be much more personalized to how younger people are actually using the internet, said Guru. Staying safer online could involve switching browsers, enabling different settings in the apps you use, or changing how you store passwords, she noted. None of those steps necessarily involve compromising your convenience or using the internet in a more limited way. Approaching cybersecurity as part of being active online, rather than an antagonist to it, might connect better with Gen Z, Guru said.”
Guru also believes learning about online bad actors and their motivations may help her peers be more attentive to the issue. The write-up also points to experts who insist apps and platforms bear at least some responsibility to protect users, and there is more they could be doing. For example, social media platforms could send out test phishing emails, as many employers do, then send educational resources to anyone who bites. And, of course, privacy settings could be made much easier to access and understand. Those steps, in fact, could help users of all ages.
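The test-phishing suggestion is simple to sketch. Here is a minimal illustration of the flow a platform might run: send a simulated lure, record who clicks, and route training material to anyone who bites. The function and URL below are invented for this example; no real platform API or employer tool is implied.

```python
# Sketch of a benign phishing-awareness test, the kind employers run:
# a simulated lure goes out, clicks are recorded, and anyone who "bit"
# gets educational follow-up. All names here are hypothetical.

def run_awareness_test(users, clicked_ids, training_url):
    """Return per-user results: who clicked and what follow-up they get."""
    results = {}
    for user in users:
        if user in clicked_ids:
            # This user fell for the simulated lure; send training.
            results[user] = {"clicked": True, "follow_up": training_url}
        else:
            # No click; no follow-up needed.
            results[user] = {"clicked": False, "follow_up": None}
    return results

users = ["alice", "bob", "carol"]
clicked = {"bob"}  # bob clicked the simulated phishing link
report = run_awareness_test(users, clicked, "https://example.com/phishing-101")
print(sum(1 for r in report.values() if r["clicked"]))  # number who clicked
```

The design point is that the test teaches at the moment of failure, when the lesson is most likely to stick, rather than in an abstract classroom module.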
Cynthia Murrell, October 3, 2023
Good News and Bad News: Smart Software Is More Clever Than Humanoids
September 11, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
After a quick trip to Europe, I will be giving a lecture about fake data. One of the case examples concerns the alleged shortcuts taken by Frank Financial in its efforts to obtain about $175 million from JPMorgan Chase. I like to think of JPMC as “the smartest guys in the room” when it comes to numbers related to money. I suppose wizards at Goldman or McKinsey would disagree. But the interesting swizzle on the JPMC story is that the alleged fraudster was a graduate of Wharton.
That’s good news for getting an education in moral probity at a prestigious university.
A big, impressive university’s smart software beats smart students at Tic Tac Toe. Imagine what these wizards will be able to accomplish when smart software innovates and assists the students with financial fancy dancing. Thanks, Mother MJ. Deep on the gradient descent, please.
Flash forward to the Murdoch real news story “M.B.A. Students vs. ChatGPT: Who Comes Up With More Innovative Ideas?” [The Rupert toll booth is operating.] The main idea of the write up is that humanoid Wharton students were less “creative,” “innovative,” and “inventive” than smart software. What does this say about the future of financial fraud? Mere humanoids like those now in the spotlight at the Southern District of New York show may become more formidable with the assistance of smart software. The humanoids were caught; granted, it took JPMC a few months after the $175 million check was cashed, but JPMC did figure it out via a marketing test.
Imagine. Wharton grads with smart software. How great will that be for the targets of financial friskiness? Let’s hope JPMC gets its own cyber fraud detecting software working. In late 2022, the “smartest guys in the room” were not smart enough to spot synthetic and faked data. Will smart software be able to spot smart software scams?
That’s the bad news: No.
Stephen E Arnold, September 11, 2023
Why Encrypted Messaging Is Getting Love from Bad Actors
August 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The easier it is to break the law or circumvent regulations, the more people will give in to their darker nature. Yes, this is another of Arnold’s Laws of Online, along with “online data flows erode ethical behavior.” I suppose the two “laws” go together like Corvettes and fuel stops, tattoos and body art, or Barbie and Ken dolls.
“Banks Hit with $549 Million in Fines for Use of Signal, WhatsApp to Evade Regulators’ Reach” explains a behavior I noticed when I was doing projects for a hoop-de-do big time US financial institution.
Let’s jump back in time to 2005: I arrived for a meeting with the bank lugging my lecture equipment. As I recall, I had a couple of laptops, my personal LCD projector, a covey of connectors, and a couple of burner phones and SIMs from France and the UK.
“What are you looking at?” queries the young financial analyst on the sell side. I had interrupted a young, whip-smart banker who was organizing her off-monitoring client calls. I think she was deciding which burner phone and pay-as-you-go SIM to use to pass a tip about a major financial deal to a whale. Thanks, MidJourney. It only took three times for your smart software to show mobile phones. Outstanding C minus work. Does this MBA CFA look innocent to you? She does to me. Doesn’t every banker have multiple mobile phones?
One bright bank type asked upon entering the meeting room as I was stowing and inventorying my gear after a delightful taxi ride from the equally thrilling New York Hilton, “Why do you have so many mobile phones?” I explained that I used the burners in my talks about cyber crime. The intelligent young person asked, “How do you connect them?” I replied, “When I travel, I buy SIMs in other countries. I also purchase them if I see a US outfit offering a pay-as-you-go SIM.” She did not ask how I masked my identity when acquiring SIMs, and I did not provide any details like throwing the phone away after one use.
Flash forward two months. This time it was a different conference room. My client had his assistant and the bright young thing popped into the meeting. She smiled and said, “I have been experimenting with the SIMs and a phone I purchased on Lexington Avenue from a phone repair shop.”
“What did you learn?” I asked.
She replied, “I can do regular calls on the mobile the bank provides. But I can do side calls on this other phone.”
I asked, “Do you call clients on the regular phone or the other phone?”
She said, “I use the special phone for special clients.”
Remember this was late 2005.
The article, dated August 8, 2023, appeared 18 years after I learned how quickly bright young things can suck in an item of information and apply it to transferring information supposedly regulated by a US government agency. That’s when I decided to make my Arnold Law about people breaking the law when it is really easy one of my go-to sayings.
The write up stated:
U.S. regulators on Tuesday announced a combined $549 million in penalties against Wells Fargo and a raft of smaller or non-U.S. firms that failed to maintain electronic records of employee communications. The Securities and Exchange Commission disclosed charges and $289 million in fines against 11 firms for “widespread and longstanding failures” in record-keeping, while the Commodity Futures Trading Commission also said it fined four banks a total of $260 million for failing to maintain records required by the agency.
How long has a closely regulated sector like banking been “regulated”? A long time.
I want to mention that I have been talking about getting around regulations which require communication monitoring for a long time. In fact, I will do so again in October 2023 at the Massachusetts / New York Association of Crime Analysts conference. In my keynote, I will update my remarks about Telegram and its expanding role in cyber and regular crime. I will also point out how these encrypted messaging apps have breathed new, more secure life into certain criminal activities. We have an organic ecosystem of online-facilitated crime; crime that is global, not a local stick-up at a convenience store at 3 am on a rainy Thursday morning.
What does this news story say about regulatory action? What does it make clear about behavior in financial services firms?
I, of course, have no idea. Just like some of the regulatory officers at financial institutions and some regulatory agencies.
Stephen E Arnold, August 17, 2023