AI Silly Putty: Squishes Easily, Impossible to Remove from Hair
December 29, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I like happy information. I navigated to “Meta’s Chief AI Scientist Says Terrorists and Rogue States Aren’t Going to Take Over the World with Open Source AI.” Happy information. Terrorists and the Axis of Evil outfits are just going to chug along. Open source AI is not going to give these folks a super weapon. I learned from the write up that the trustworthy outfit Zuckbook has a Big Wizard in artificial intelligence. That individual provided some cheerful words of wisdom for me. Here’s an example:
It won’t be easy for terrorists to take over the world with open-source AI.
Obviously there’s a caveat:
they’d need a lot of money and resources just to pull it off.
That’s my happy thought for the day.
“Wow, getting this free silly putty out of your hair is tough,” says the scout mistress. The little scout asks, “Is this similar to coping with open source artificial intelligence software?” Thanks, MSFT Copilot. After a number of weird results, you spit out one that is good enough.
Then I read “China’s Main Intel Agency Has Reportedly Developed An AI System To Track US Spies.” Oh, oh. Unhappy AI information. China, I assume, has the open source AI software. It probably has in its 1.4 billion population a handful of AI wizards comparable to the Zuckbook’s line up. Plus, despite economic headwinds, China has money.
The write up reports:
The CIA and China’s Ministry of State Security (MSS) are toe to toe in a tense battle to beat one another’s intelligence capabilities that are increasingly dependent on advanced technology… , the NYT reported, citing U.S. officials and a person with knowledge of a transaction with contracting firms that apparently helped build the AI system. But, the MSS has an edge with an AI-based system that can create files near-instantaneously on targets around the world complete with behavior analyses and detailed information allowing Beijing to identify connections and vulnerabilities of potential targets, internal meeting notes among MSS officials showed.
Not so happy.
Several observations:
- The smart software is a cat out of the bag
- There are intelligent people who are not pals of the US who can and will use available tools to create issues for a perceived adversary
- The AI technology is like silly putty: Easy to get, free or cheap, and tough to get out of someone’s hair.
What’s the deal with silly putty? Cheap, easy, and tough to remove from hair, carpet, and seat upholstery. Just like open source AI software in the hands of possibly questionable actors. How are those government guidelines working?
Stephen E Arnold, December 29, 2023
Microsoft Snags Cyber Criminal Gang: Enablers Finally a Target
December 14, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Earlier this year at the National Cyber Crime Conference, we shared some of our research about “enablers.” The term is our shorthand for individuals, services, and financial outfits providing the money, services, and management support to cyber criminals. Online crime comes, like Baskin-Robbins ice cream, in a mind-boggling range of “flavors.” To make big bucks, cyber criminals need funding and infrastructure, in part because of amped-up enforcement from the US Federal Bureau of Investigation, Europol, and cooperating law enforcement agencies. The cyber crime “game” is a variation of a cat-and-mouse game. With each technological advance, bad actors try out the latest and greatest. Then enforcement agencies respond and neutralize the advantage. The bad actors then scan the technology horizon, innovate, and law enforcement responds again. There are many implications of this innovate-react-innovate cycle. I won’t go into those in this short essay. Instead I want to focus on a Microsoft blog post called “Disrupting the Gateway Services to Cybercrime.”
Industrialized cyber crime uses existing infrastructure providers. That’s a convenient, easy, and economical means of hiding. Modern obfuscation technology adds to law enforcement’s burden. Perhaps some oversight and regulation of these nearly invisible commercial companies is needed? Thanks, MSFT Copilot. Close enough, and I liked the investigators on the roof of a typical office building.
Microsoft says:
Storm-1152 [the enabler?] runs illicit websites and social media pages, selling fraudulent Microsoft accounts and tools to bypass identity verification software across well-known technology platforms. These services reduce the time and effort needed for criminals to conduct a host of criminal and abusive behaviors online.
What moved Microsoft to take action? According to the article:
Storm-1152 created for sale approximately 750 million fraudulent Microsoft accounts, earning the group millions of dollars in illicit revenue, and costing Microsoft and other companies even more to combat their criminal activity.
Just 750 million? One question struck me: “With the updating, the telemetry, and the bits and bobs of Microsoft’s ‘security’ measures, how could nearly a billion fake accounts be allowed to invade the ecosystem?” I thought a smaller number might have been the tipping point.
Another interesting point in the essay is that Microsoft identifies the third party Arkose Labs as contributing to the action against the bad actors. The company is one of the firms engaged in cyber threat intelligence and mitigation services. The questions I had were, “Why are the other threat intelligence companies not picking up signals about such a large, widespread criminal operation?” and “What is Arkose Labs doing that other sophisticated companies and OSINT investigators are not?” Google and In-Q-Tel invested in Recorded Future, a go-to threat intelligence outfit. I don’t recall seeing confirmation, but I have heard that Microsoft invested in the company, joining SoftBank’s Vision Fund and PayPal, among others.
I am delighted that “enablers” have become a more visible target of enforcement actions. More must be done, however. Poke around in ISP land and what do you find? As my lecture pointed out, “Respectable companies in upscale neighborhoods harbor enablers, so one doesn’t have to travel to Bulgaria or Moldova to do research. Silicon Valley is closer and stocked with enablers; the area is a hurricane of crime.”
In closing, I ask, “Why are discoveries of this type of industrialized criminal activity unearthed by one outfit?” And, “What are the other cyber threat folks chasing?”
Stephen E Arnold, December 14, 2023
23andMe: Those Users and Their Passwords!
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Silicon Valley and health are a match fabricated in heaven. Not long ago, I learned about the estimable management of Theranos. Now I find out that “23andMe confirms hackers stole ancestry data on 6.9 million users.” If one follows the logic of some Silicon Valley outfits, the data loss is the fault of the users.
“We have the capability to provide the health data and bioinformation from our secure facility. We have designed our approach to emulate the protocols implemented by Jack Benny and his vault in his home in Beverly Hills,” says the enthusiastic marketing professional from a Silicon Valley success story. Thanks, MSFT Copilot. Not exactly Jack Benny, Ed, and the foghorn, but I have learned to live with “good enough.”
According to the peripatetic Lorenzo Franceschi-Bicchierai:
In disclosing the incident in October, 23andMe said the data breach was caused by customers reusing passwords, which allowed hackers to brute-force the victims’ accounts by using publicly known passwords released in other companies’ data breaches.
Users!
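The attack described in the quote is called credential stuffing: replay username/password pairs leaked from one company’s breach against another company’s login system, and every reused password becomes a free account takeover. A minimal sketch, with an entirely hypothetical user table and breach dump (and plain SHA-256 only to keep the toy short; real systems should use a slow, salted hash such as bcrypt or argon2):

```python
import hashlib

def sha256(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Hypothetical target site database: usernames mapped to password hashes.
site_db = {
    "alice": sha256("correct horse battery staple"),
    "bob": sha256("hunter2"),
    "carol": sha256("xK9#mQ2!unique"),
}

# Hypothetical credentials leaked in some *other* company's breach.
breach_dump = [
    ("alice", "password123"),
    ("bob", "hunter2"),          # bob reused this password
    ("dave", "qwerty"),
]

def stuff_credentials(db, dump):
    """Replay breached username/password pairs against the target site;
    any pair that authenticates marks a compromised account."""
    compromised = []
    for user, password in dump:
        if user in db and db[user] == sha256(password):
            compromised.append(user)
    return compromised

print(stuff_credentials(site_db, breach_dump))  # ['bob']
```

The sketch shows why "users reused passwords" shifts so little blame: the defense (rate limiting, breached-password screening, mandatory second factors) sits entirely on the server side.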
What’s more interesting is that 23andMe provided estimates of the number of customers (users) whose data somehow magically flowed from the firm into the hands of bad actors. In fact, the numbers, when added up, totaled almost seven million users, not the original estimate of 14,000 23andMe customers.
I find the leak estimate inflation interesting for three reasons:
- Smart people in Silicon Valley appear to struggle with simple concepts like adding and subtracting numbers. This gap in one’s education becomes notable when the discrepancy is off by millions. I think “close enough for horseshoes” is a concept which is wearing out my patience. The difference between 14,000 and almost 7 million is not horseshoe scoring.
- The concept of “security” continues to suffer some setbacks. “Security,” one may ask?
- The intentional dribbling of information reflects another facet of what I call high school science club management methods. The logic in the case of 23andMe in my opinion is, “Maybe no one will notice?”
Net net: Time for some regulation, perhaps? Oh, right, it’s the users’ responsibility.
Stephen E Arnold, December 5, 2023
Deepfakes: Improving Rapidly with No End in Sight
December 1, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The possible applications of AI technology are endless, and we’ve barely imagined the opportunities. While tech experts mainly focus on the benefits of AI, bad actors are concentrating on how to use them for illegal activities. The Next Web explains how bad actors are using AI for scams, “Deepfake Fraud Attempts Are Up 3000% In 2023 — Here’s Why.” Bad actors are using cheap and widely available AI technology to create deepfake content for fraud attempts.
Onfido, an ID verification company in London, reports that deepfake scams increased 31-fold in 2023, a 3000% year-on-year gain. The AI tool of choice for bad actors is face-swapping apps. They range in quality from a bad copy-and-paste job to sophisticated, blockbuster-quality fakes. While the crude attempts are laughable, it only takes one successful facial identity verification for fraudsters to win.
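A 31-fold rise and a 3000% gain are the same statistic stated two ways, since percentage increase is (new ÷ old − 1) × 100. A quick check:

```python
def pct_increase(multiple: float) -> float:
    """Percentage increase implied by an N-fold rise."""
    return (multiple - 1) * 100

print(pct_increase(31))  # 3000.0 -- a 31x rise is a 3000% increase
print(pct_increase(2))   # 100.0  -- doubling is a 100% increase
```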
The bad actors concentrate on quantity over quality; these high-volume attempts accounted for 80.3% of attacks in 2023. Biometric information is a key component in stopping fraudsters:
“Despite the rise of deepfake fraud, Onfido insists that biometric verification is an effective deterrent. As evidence, the company points to its latest research. The report found that biometrics received three times fewer fraudulent attempts than documents. The criminals, however, are becoming more creative at attacking these defenses. As GenAI tools become more common, malicious actors are increasingly producing fake documents, spoofing biometric defenses, and hijacking camera signals.”
Onfido suggests using “liveness” biometrics in verification technology. Liveness determines if a user is actually present instead of a deepfake, photo, recording, or masked individual.
As AI technology advances, so will the bad actors and their scams.
Whitney Grace, December 1, 2023
Speeding Up and Simplifying Deep Fake Production
November 24, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Remember the good old days when creating a deep fake required having multiple photographs, maybe a video clip, and minutes of audio? Forget those requirements. To whip up a deep fake, one needs only a short audio clip and a single picture of the person.
The pace of innovation in deep fake production is speeding along. Bad actors will find it easier than ever to produce interesting videos for vulnerable grandparents worldwide. Thanks, MidJourney. It was a struggle, but you produced a race scene that is good enough, the modern benchmark for excellence.
Researchers at Nanyang Technological University have blasted through the old-school requirements. The team’s software can generate realistic videos. These can show facial expressions and head movements. The system is called DIRFA, a tasty acronym for Diverse yet Realistic Facial Animations. One notable achievement of the researchers is that the video is produced in 3D.
The report “Realistic Talking Faces Created from Only an Audio Clip and a Person’s Photo” includes more details about the system and links to demonstration videos. If the story is not available, you may be able to see the video on YouTube at this link.
Stephen E Arnold, November 24, 2023
A Rare Moment of Constructive Cooperation from Tech Barons
November 23, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Platform-hopping is one way bad actors have been able to cover their tracks. Now several companies are teaming up to limit that avenue for one particularly odious group. TechNewsWorld reports, “Tech Coalition Launches Initiative to Crackdown on Nomadic Child Predators.” The initiative is named Lantern, and the Tech Coalition includes Discord, Google, Mega, Meta, Quora, Roblox, Snap, and Twitch. Such cooperation is essential to combat a common tactic for grooming and/or sextortion: predators engage victims on one platform, then move the discussion to a more private forum. Reporter John P. Mello Jr. describes how Lantern works:
“Participating companies upload ‘signals’ to Lantern about activity that violates their policies against child sexual exploitation identified on their platform.
Signals can be information tied to policy-violating accounts like email addresses, usernames, CSAM hashes, or keywords used to groom as well as buy and sell CSAM. Signals are not definitive proof of abuse. They offer clues for further investigation and can be the crucial piece of the puzzle that enables a company to uncover a real-time threat to a child’s safety.
Once signals are uploaded to Lantern, participating companies can select them, run them against their platform, review any activity and content the signal surfaces against their respective platform policies and terms of service, and take action in line with their enforcement processes, such as removing an account and reporting criminal activity to the National Center for Missing and Exploited Children and appropriate law enforcement agency.”
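The signal-matching step the quote describes can be pictured as a hash-set lookup. The sketch below is an illustration only, not Lantern’s actual design: the normalization, the hash choice, and the sample identifiers are all assumptions. The key idea is that a platform shares hashes of identifiers tied to policy-violating accounts, so raw personal data never leaves the originating service, and other platforms check their own accounts against the shared set.

```python
import hashlib

def signal_hash(value: str) -> str:
    """Normalize an identifier (e.g., an email address) and hash it
    before sharing, so only the digest crosses platform boundaries."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# Platform A uploads hashed signals tied to policy-violating accounts.
shared_signals = {
    signal_hash("bad-actor@example.com"),
    signal_hash("throwaway9@example.com"),
}

# Platform B checks its own accounts against the shared signal set.
platform_b_accounts = [
    "alice@example.com",
    "Bad-Actor@example.com",   # same identity, different letter case
    "carol@example.com",
]

flagged = [acct for acct in platform_b_accounts
           if signal_hash(acct) in shared_signals]
print(flagged)  # ['Bad-Actor@example.com']
```

As the quoted passage stresses, a match is a lead for human review under the receiving platform’s own policies and terms of service, not automatic proof of abuse.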
The visually oriented can find an infographic of this process in the write-up. We learn Lantern has been in development for two years. Why did it take so long to launch? Part of it was designing the program to be effective. Another part was to ensure it was managed responsibly: The project was subjected to a Human Rights Impact Assessment by the Business for Social Responsibility. Experts on child safety, digital rights, advocacy of marginalized communities, government, and law enforcement were also consulted. Finally, we’re told, measures were taken to ensure transparency and victims’ privacy.
In the past, companies hesitated to share such information lest they be considered culpable. However, some hope this initiative represents a perspective shift that will extend to other bad actors, like those who spread terrorist content. Perhaps. We shall see how much tech companies are willing to cooperate. They wouldn’t want to reveal too much to the competition just to help society, after all.
Cynthia Murrell, November 23, 2023
Why Suck Up Health Care Data? Maybe for Cyber Fraud?
November 20, 2023
This essay is the work of a dumb humanoid. No smart software required.
In the US, medical care is an adventure. Last year, my “wellness” check-up required a visit to another specialist. I showed up at the appointed place on the day and time my printed form stipulated. I stood in line for 10 minutes as two “intake” professionals struggled to match those seeking examinations with the information available to the check-in desk staff. The intake professional called my name and said, “You are not a female.” I said, “That is correct.” The intake professional replied, “We have the medical records from your primary care physician for a female named Tina.” Nice Health Insurance Portability and Accountability Act compliance, right?
A moose in Maine learns that its veterinary data have been compromised by bad actors, probably from a country in which the principal language is not moose grunts. With those data, the shocked moose can be located using geographic data in his health record. Plus, the moose’s credit card data is now on the loose. If the moose in Maine is scared, what about the humanoids with the fascinating nasal phonemes?
That same health care outfit reported that it was compromised and was a victim of a hacker. The health care outfit floundered around and now, months later, struggles to update prescriptions and keep appointments straight. How’s that for security? In my book, that’s about par for health care managers who [a] know zero about confidentiality requirements and [b] even less about system security. Horrified? You can read more about this one-horse travesty in “Norton Healthcare Cyber Attack Highlights Record Year for Data Breaches Nationwide.” I wonder if the grandparents of the Norton operation were participants on Major Bowes’ Amateur Hour radio show?
Norton Healthcare was a poster child for the Commonwealth of Kentucky. But the great state of Maine (yep, the one with moose, lovable black flies, and citizens who push New York real estate agents’ vehicles into bays) managed to lose the personal data for 2,192,515 people. You can read about that “minor” security glitch in the Office of the Maine Attorney General’s Data Breach Notification.
What possible use is health care data? Let me identify a handful of bad actor scenarios enabled by inept security practices. Note, please, that these are worse than being labeled a girl or failing to protect the personal information of what could be most of the humans and probably some of the moose in Maine.
- Identity theft. Records for newborns and entries identified as deceased can be converted into synthetic personas for a range of applications, like applying for Social Security numbers, passports, or government benefits
- Access to bank accounts. With a complete array of information, a bad actor can engage in a number of maneuvers designed to withdraw or transfer funds
- Bundle up the biological data and sell it via one of the private Telegram channels focused on such useful information. Bioweapon researchers could find some of the data fascinating.
Why am I focusing on health care data? Here are the reasons:
- Enforcement of existing security guidelines seems to be lax. Perhaps it is time to conduct audits and penalize those outfits which find security easy to talk about but difficult to do?
- Should one or more Inspector Generals’ offices conduct some data collection into the practices of state and Federal health care security professionals, their competencies, and their on-the-job performance? Some humans and probably a moose or two in Maine might find this idea timely.
- Should the vendors of health care security systems demonstrate to one of the numerous Federal cyber watch dog groups the efficacy of their systems and then allow one or more of the Federal agencies to probe those systems to verify that the systems do, in fact, actually work?
Without meaningful penalties for security failures, it may be easier to post health care data on a Wikipedia page and quit the crazy charade that health information is secure.
Stephen E Arnold, November 20, 2023
Smart Software for Cyber Security Mavens (Good and Bad Mavens)
November 17, 2023
This essay is the work of a dumb humanoid. No smart software required.
One member of my research team (who wishes to maintain a low profile) called my attention to the “Awesome GPTs (Agents) for Cybersecurity” list. The list on GitHub says:
The "Awesome GPTs (Agents) Repo" represents an initial effort to compile a comprehensive list of GPT agents focused on cybersecurity (offensive and defensive), created by the community. Please note, this repository is a community-driven project and may not list all existing GPT agents in cybersecurity. Contributions are welcome – feel free to add your own creations!
Open source cyber security tools and smart software can be used by good actors to make people safe. The tools can be used by less good actors to create some interesting situations for cyber security professionals, the elderly, and clueless organizations. Thanks, Microsoft Bing. Does MSFT use these tools to keep people safe or unsafe?
When I viewed the list, it contained more than 30 items. Let me highlight three, and invite you to check out the other 30 at the link to the repository:
- The Threat Intel Bot. This is a specialized GPT for advanced persistent threat intelligence
- The Message Header Analyzer. This dissects email headers for “insights.”
- Hacker Art. The software generates hacker art and nifty profile pictures.
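The Message Header Analyzer idea is easy to picture with Python’s standard `email` module: parse a raw message and walk its Received chain, which records (in newest-first order) the servers a message claims to have passed through. The sample message below is invented for illustration; a real analyzer would also cross-check the claimed hosts and IP addresses.

```python
from email import message_from_string

# A made-up two-hop message; Received headers stack newest-first.
raw = """\
Received: from mail.example.net (mail.example.net [203.0.113.7])
    by mx.example.com with ESMTP; Fri, 17 Nov 2023 10:02:11 -0500
Received: from [192.0.2.50] (unknown [192.0.2.50])
    by mail.example.net with SMTP; Fri, 17 Nov 2023 10:02:08 -0500
From: sender@example.net
To: analyst@example.com
Subject: Invoice attached

Body text.
"""

msg = message_from_string(raw)

# Walk the chain oldest-first to reconstruct the claimed delivery path.
for hop, header in enumerate(reversed(msg.get_all("Received")), start=1):
    unfolded = " ".join(header.split())       # join wrapped header lines
    print(f"hop {hop}: {unfolded.split(';')[0]}")
```

Mismatches between hops, such as a claimed sending host whose IP belongs to someone else, are the “insights” a header analyzer surfaces.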
Several observations:
- More tools and services will be forthcoming; thus, the list will grow
- Bad actors and good actors will find software to help them accomplish their objectives.
- A for-fee bundle of these will be assembled and offered for sale, probably on eBay or Etsy. (Too bad fr0gger.)
Useful list!
Stephen E Arnold, November 17, 2023
AI Is a Rainmaker for Bad Actors
November 16, 2023
This essay is the work of a dumb dinobaby. No smart software required.
How has smart software, readily available as open source code and low-cost online services, affected cyber crime? Please select one of the following answers. No cheating allowed.
[a] Bad actors love smart software.
[b] Criminals are exploiting smart orchestration and business process tools to automate phishing.
[c] Online fraudsters have found that launching repeated breaching attempts is faster and easier when AI is used to adapt to server responses.
[d] Finding mules for drug and human trafficking is easier than ever because social media requests for interested parties can be cranked out at high speed 24×7.
“Well, Slim, your idea to use that newfangled smart software to steal financial data is working. Sittin’ here counting the money raining down on us is a heck of a lot easier than robbing old ladies in the Trader Joe’s parking lot,” says the bad actor with the coffin nail of death in his mouth and the ill-gotten gains in his hands. Thanks, Copilot, you are producing nice cartoons today.
And the correct answer is … a, b, c, and d.
For some supporting information, navigate to “Deepfake Fraud Attempts Are Up 3000% in 2023. Here’s Why.” The write up reports:
Face-swapping apps are the most common example. The most basic versions crudely paste one face on top of another to create a “cheapfake.” More sophisticated systems use AI to morph and blend a source face onto a target, but these require greater resources and skills. The simple software, meanwhile, is easy to run and cheap or even free. An array of forgeries can then be simultaneously used in multiple attacks.
I like the phrase “cheap fakes.”
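The “crudely paste one face on top of another” approach from the quote is, in code terms, nothing more than a block copy of pixels. A toy NumPy sketch (the tiny arrays stand in for real images, which are purely illustrative):

```python
import numpy as np

def crude_face_paste(target, source_face, top, left):
    """The 'cheapfake' approach: drop a source face region onto a target
    image with no blending, no color matching, no geometry correction."""
    h, w = source_face.shape[:2]
    out = target.copy()
    out[top:top + h, left:left + w] = source_face
    return out

# Toy 'images': an 8x8 target of zeros and a 3x3 'face' of ones.
target = np.zeros((8, 8), dtype=np.uint8)
face = np.ones((3, 3), dtype=np.uint8)

fake = crude_face_paste(target, face, top=2, left=2)
print(int(fake.sum()))  # 9 -- only the pasted 3x3 block is nonzero
```

The sophisticated systems the quote contrasts with replace that one-line block copy with learned warping and blending, which is why they demand far more resources and skill.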
Several observations:
- Bad actors, unencumbered by bureaucracy, can download, test, tune, and deploy smart criminal actions more quickly than law enforcement can thwart them
- Existing cyber security systems are vulnerable to some smart attacks because AI can adapt and try different avenues
- Large volumes of automated content can be created and emailed without the hassle of manual content creation
- Cyber security vendors operate in “react mode”; that is, once a problem is discovered then the good actors will develop a defense. The advantage goes to those with a good offense, not a good defense.
Net net: 2024 will be fraught with security issues.
Stephen E Arnold, November 16, 2023
Cyberwar Crimes? Yep and Prosecutions Coming Down the Pike
November 15, 2023
This essay is the work of a dumb humanoid. No smart software required.
Existing international law has appeared hamstrung in the face of cyber-attacks for years, with advocates calling for new laws to address the growing danger. It appears, however, that step will no longer be necessary. Wired reports, “The International Criminal Court Will Now Prosecute Cyberwar Crimes.” The Court’s lead prosecutor, Karim Khan, acknowledged in an article published by Foreign Policy Analytics that cyber warfare perpetrates serious harm in the real world. Attacks on critical infrastructure like medical facilities and power grids may now be considered “war crimes, crimes against humanity, genocide, and/or the crime of aggression” as defined in the 1998 Rome Statute. That is great news, but why now? Writer Andy Greenberg tells us:
“Neither Khan’s article nor his office’s statement to WIRED mention Russia or Ukraine. But the new statement of the ICC prosecutor’s intent to investigate and prosecute hacking crimes comes in the midst of growing international focus on Russia’s cyberattacks targeting Ukraine both before and after its full-blown invasion of its neighbor in early 2022. In March of last year, the Human Rights Center at UC Berkeley’s School of Law sent a formal request to the ICC prosecutor’s office urging it to consider war crime prosecutions of Russian hackers for their cyberattacks in Ukraine—even as the prosecutors continued to gather evidence of more traditional, physical war crimes that Russia has carried out in its invasion. In the Berkeley Human Rights Center’s request, formally known as an Article 15 document, the Human Rights Center focused on cyberattacks carried out by a Russian group known as Sandworm, a unit within Russia’s GRU military intelligence agency. Since 2014, the GRU and Sandworm, in particular, have carried out a series of cyberwar attacks against civilian critical infrastructure in Ukraine beyond anything seen in the history of the internet.”
See the article for more details of Sandworm’s attacks. Greenberg consulted Lindsay Freeman, the Human Rights Center’s director of technology, law, and policy, who expects the ICC is ready to apply these standards well beyond the war in Ukraine. She notes the 123 countries that signed the Rome Statute are obligated to detain and extradite convicted war criminals. Another expert, Strauss Center director Bobby Chesney, points out Khan paints disinformation as a separate, “gray zone.” Applying the Rome Statute to that tactic may prove tricky, but he might make it happen. Khan seems determined to hold international bad actors to account as far as the law will possibly allow.
Cynthia Murrell, November 15, 2023