Deepfakes: Improving Rapidly with No End in Sight

December 1, 2023

This essay is the work of a dumb dinobaby. No smart software required.

The possible applications of AI technology are endless, and we’ve barely imagined the opportunities. While tech experts mainly focus on the benefits of AI, bad actors are concentrating on how to use it for illegal activities. The Next Web explains how bad actors are using AI for scams in “Deepfake Fraud Attempts Are Up 3000% in 2023. Here’s Why.” Bad actors are using cheap and widely available AI technology to create deepfake content for fraud attempts.

Onfido, an ID verification company in London, reports that deepfake fraud attempts increased thirty-one-fold in 2023, a 3,000% year-on-year gain. The AI tool of choice for bad actors is the face-swapping app. These apps range in quality from a bad copy-and-paste job to sophisticated, blockbuster-quality fakes. While the crude attempts are laughable, it only takes one successful facial identity verification for fraudsters to win.

The bad actors concentrate on quantity over quality: these crude “cheapfakes” accounted for 80.3% of attacks in 2023. Biometric information is a key component in stopping fraudsters:

“Despite the rise of deepfake fraud, Onfido insists that biometric verification is an effective deterrent. As evidence, the company points to its latest research. The report found that biometrics received three times fewer fraudulent attempts than documents. The criminals, however, are becoming more creative at attacking these defenses. As GenAI tools become more common, malicious actors are increasingly producing fake documents, spoofing biometric defenses, and hijacking camera signals.”

Onfido suggests using “liveness” biometrics in verification technology. Liveness determines whether a user is actually present, rather than a deepfake, a photo, a recording, or a masked individual.
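
For the curious, here is a minimal Python sketch of the challenge-response idea behind a liveness check. The challenge list, the motion heuristic, and the threshold are all invented for illustration; commercial systems such as Onfido’s rely on far more sophisticated models.

```python
import random
import statistics

# Hypothetical challenge-response liveness gate. A static photo shows near-zero
# motion; a replayed clip shows motion uncorrelated with the challenge timing.
CHALLENGES = ["blink", "turn_head_left", "turn_head_right", "smile"]

def frame_motion_scores(frames):
    """Mean absolute pixel difference between consecutive frames.
    `frames` is a list of equal-length grayscale pixel lists (values 0-255)."""
    return [
        sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        for prev, cur in zip(frames, frames[1:])
    ]

def passes_liveness(frames_before, frames_after, min_delta=5.0):
    """Declare the user 'live' only if motion rises markedly right after the
    random challenge is issued."""
    base = statistics.mean(frame_motion_scores(frames_before))
    reaction = statistics.mean(frame_motion_scores(frames_after))
    return (reaction - base) >= min_delta

if __name__ == "__main__":
    print("Challenge issued:", random.choice(CHALLENGES))
    photo = [[128] * 64 for _ in range(10)]                # no motion at all
    live = [[128 + 10 * (i % 5)] * 64 for i in range(10)]  # visible reaction
    print("Photo passes?", passes_liveness(photo, photo))    # False
    print("Live user passes?", passes_liveness(photo, live))  # True
```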

As AI technology advances, so will bad actors’ scams.

Whitney Grace, December 1, 2023

Speeding Up and Simplifying Deep Fake Production

November 24, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Remember the good old days when creating a deep fake required having multiple photographs, maybe a video clip, and minutes of audio? Forget those requirements. To whip up a deep fake, one needs only a short audio clip and a single picture of the person.

The pace of innovation in deep fake production is speeding along. Bad actors will find it easier than ever to produce interesting videos for vulnerable grandparents worldwide. Thanks, MidJourney. It was a struggle, but you produced a race scene that is good enough, the modern benchmark for excellence.

Researchers at Nanyang Technological University have blasted through the old-school requirements. The team’s software can generate realistic videos showing facial expressions and head movements. The system is called DIRFA, a tasty acronym for Diverse yet Realistic Facial Animations. One notable achievement of the researchers is that the video is produced in 3D.

The report “Realistic Talking Faces Created from Only an Audio Clip and a Person’s Photo” includes more details about the system and links to demonstration videos. If the story is not available, you may be able to see the video on YouTube at this link.

Stephen E Arnold, November 24, 2023

A Rare Moment of Constructive Cooperation from Tech Barons

November 23, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Platform-hopping is one way bad actors have been able to cover their tracks. Now several companies are teaming up to limit that avenue for one particularly odious group. TechNewsWorld reports, “Tech Coalition Launches Initiative to Crackdown on Nomadic Child Predators.” The initiative is named Lantern, and the Tech Coalition includes Discord, Google, Mega, Meta, Quora, Roblox, Snap, and Twitch. Such cooperation is essential to combat a common tactic for grooming and/or sextortion: predators engage victims on one platform, then move the discussion to a more private forum. Reporter John P. Mello Jr. describes how Lantern works:

“Participating companies upload ‘signals’ to Lantern about activity that violates their policies against child sexual exploitation identified on their platform.

Signals can be information tied to policy-violating accounts like email addresses, usernames, CSAM hashes, or keywords used to groom as well as buy and sell CSAM. Signals are not definitive proof of abuse. They offer clues for further investigation and can be the crucial piece of the puzzle that enables a company to uncover a real-time threat to a child’s safety.

Once signals are uploaded to Lantern, participating companies can select them, run them against their platform, review any activity and content the signal surfaces against their respective platform policies and terms of service, and take action in line with their enforcement processes, such as removing an account and reporting criminal activity to the National Center for Missing and Exploited Children and appropriate law enforcement agency.”
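
Stripped to its essentials, the process described above is a shared indicator registry plus local matching. Here is a minimal Python sketch of that flow. The Signal fields and the normalization rules are my own guesses, since the Tech Coalition has not published Lantern’s actual schema.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Signal:
    """A guess at a Lantern-style indicator; the real schema is not public."""
    kind: str   # e.g. "email", "username", "media_hash", "keyword"
    value: str  # normalized or hashed indicator, depending on kind

def normalize(kind: str, raw: str) -> str:
    """Emails and usernames are case-folded; media is matched by digest so
    no raw content ever crosses platform boundaries."""
    if kind == "media_hash":
        return hashlib.sha256(raw.encode()).hexdigest()
    return raw.strip().lower()

def upload_signal(registry: set, kind: str, raw: str) -> None:
    registry.add(Signal(kind, normalize(kind, raw)))

def match_account(registry: set, account: dict) -> list:
    """Run a local account record against the shared registry. A hit is a
    lead for human review, not proof of abuse (as the article stresses)."""
    return [
        kind
        for kind, raw in account.items()
        if Signal(kind, normalize(kind, raw)) in registry
    ]

if __name__ == "__main__":
    shared = set()
    upload_signal(shared, "email", "Offender@Example.com")  # from platform A
    suspect = {"email": "offender@example.com", "username": "newname"}
    print(match_account(shared, suspect))  # ['email'] -> escalate to review
```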

The visually oriented can find an infographic of this process in the write-up. We learn Lantern has been in development for two years. Why did it take so long to launch? Part of it was designing the program to be effective. Another part was to ensure it was managed responsibly: The project was subjected to a Human Rights Impact Assessment by the Business for Social Responsibility. Experts on child safety, digital rights, advocacy of marginalized communities, government, and law enforcement were also consulted. Finally, we’re told, measures were taken to ensure transparency and victims’ privacy.

In the past, companies hesitated to share such information lest they be considered culpable. However, some hope this initiative represents a perspective shift that will extend to other bad actors, like those who spread terrorist content. Perhaps. We shall see how much tech companies are willing to cooperate. They wouldn’t want to reveal too much to the competition just to help society, after all.

Cynthia Murrell, November 23, 2023

Why Suck Up Health Care Data? Maybe for Cyber Fraud?

November 20, 2023

This essay is the work of a dumb humanoid. No smart software required.

In the US, medical care is an adventure. Last year, my “wellness” check-up required a visit to another specialist. I showed up at the appointed place on the day and time my printed form stipulated. I stood in line for 10 minutes as two “intake” professionals struggled to match those seeking examinations with the information available to the check-in desk staff. The intake professional called my name and said, “You are not a female.” I said, “That is correct.” The intake professional replied, “We have the medical records from your primary care physician for a female named Tina.” Nice Health Insurance Portability and Accountability Act compliance, right?

A moose in Maine learns that its veterinary data have been compromised by bad actors, probably from a country in which the principal language is not moose grunts. With those data, the shocked moose can be located using geographic data in his health record. Plus, the moose’s credit card data is now on the loose. If the moose in Maine is scared, what about the humanoids with the fascinating nasal phonemes?

That same health care outfit reported that it had been compromised by a hacker. The health care outfit floundered around and now, months later, struggles to update prescriptions and keep appointments straight. How’s that for security? In my book, that’s about par for health care managers who [a] know zero about confidentiality requirements and [b] even less about system security. Horrified? You can read more about this one-horse travesty in “Norton Healthcare Cyber Attack Highlights Record Year for Data Breaches Nationwide.” I wonder if the grandparents of the Norton operation were participants on Major Bowes’ Amateur Hour radio show?

Norton Healthcare was a poster child for the Commonwealth of Kentucky. But the great state of Maine (yep, the one with moose, lovable black flies, and citizens who push New York real estate agents’ vehicles into bays) managed to lose the personal data for 2,192,515 people. You can read about that “minor” security glitch in the Office of the Maine Attorney General’s Data Breach Notification.

What possible use is health care data? Let me identify a handful of bad actor scenarios enabled by inept security practices. Note, please, that these are worse than being labeled a girl or failing to protect the personal information of what could be most of the humans and probably some of the moose in Maine.

  1. Identity theft. Newborns and entries identified as deceased can be converted into usable personas for a range of applications, like applying for Social Security numbers, passports, or government benefits.
  2. Access to bank accounts. With a complete array of information, a bad actor can engage in a number of maneuvers designed to withdraw or transfer funds.
  3. Biological data can be bundled up and sold via one of the private Telegram channels focused on such useful information. Bioweapon researchers could find some of the data fascinating.

Why am I focusing on health care data? Here are the reasons:

  1. Enforcement of existing security guidelines seems to be lax. Perhaps it is time to conduct audits and penalize those outfits which find security easy to talk about but difficult to do?
  2. Should one or more Inspectors General conduct some data collection into the practices of state and Federal health care security professionals, their competencies, and their on-the-job performance? Some humans and probably a moose or two in Maine might find this idea timely.
  3. Should the vendors of health care security systems demonstrate their systems’ efficacy to one of the numerous Federal cyber watchdog groups and then allow one or more Federal agencies to probe those systems to verify that they do, in fact, work?

Without meaningful penalties for security failures, it may be easier to post health care data on a Wikipedia page and quit the crazy charade that health information is secure.

Stephen E Arnold, November 20, 2023

Smart Software for Cyber Security Mavens (Good and Bad Mavens)

November 17, 2023

This essay is the work of a dumb humanoid. No smart software required.

One member of my research team (who wishes to maintain a low profile) called my attention to “Awesome GPTs (Agents) for Cybersecurity.” The list on GitHub says:

The "Awesome GPTs (Agents) Repo" represents an initial effort to compile a comprehensive list of GPT agents focused on cybersecurity (offensive and defensive), created by the community. Please note, this repository is a community-driven project and may not list all existing GPT agents in cybersecurity. Contributions are welcome – feel free to add your own creations!

Open source cyber security tools and smart software can be used by good actors to make people safe. The tools can be used by less good actors to create some interesting situations for cyber security professionals, the elderly, and clueless organizations. Thanks, Microsoft Bing. Does MSFT use these tools to keep people safe or unsafe?

When I viewed the list, it contained more than 30 items. Let me highlight three and invite you to check out the rest at the repository link:

  1. The Threat Intel Bot. This is a specialized GPT for advanced persistent threat intelligence.
  2. The Message Header Analyzer. This dissects email headers for “insights.” (A sketch of what such a tool automates appears after this list.)
  3. Hacker Art. The software generates hacker art and nifty profile pictures.
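
To make item 2 concrete, here is a rough Python sketch of the kind of header checks such an agent automates, using only the standard library. The sample message and the heuristics are invented; a GPT-based analyzer layers narrative “insights” on top of exactly this sort of parsing.

```python
from email import message_from_string
from email.utils import parseaddr

# Invented sample message with two classic phishing tells.
RAW = """\
Received: from mail.example.net (203.0.113.7) by mx.victim.com
From: "IT Support" <helpdesk@examp1e.net>
Reply-To: attacker@freemail.example
Subject: Password expires today
To: user@victim.com

Click here immediately.
"""

def header_insights(raw: str) -> list:
    """Flag suspicious header patterns. Heuristics are illustrative only."""
    msg = message_from_string(raw)
    notes = []
    from_domain = parseaddr(msg.get("From", ""))[1].rsplit("@", 1)[-1]
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rsplit("@", 1)[-1]
    if reply_domain and reply_domain != from_domain:
        notes.append(f"Reply-To domain ({reply_domain}) differs from "
                     f"From domain ({from_domain})")
    if "1" in from_domain or "0" in from_domain:
        notes.append(f"Possible look-alike domain: {from_domain}")
    if not msg.get("Received"):
        notes.append("No Received chain: header may be synthetic")
    return notes

if __name__ == "__main__":
    for note in header_insights(RAW):
        print("-", note)
```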

Several observations:

  • More tools and services will be forthcoming; thus, the list will grow.
  • Bad actors and good actors will find software to help them accomplish their objectives.
  • A for-fee bundle of these will be assembled and offered for sale, probably on eBay or Etsy. (Too bad, fr0gger.)

Useful list!

Stephen E Arnold, November 17, 2023

AI Is a Rainmaker for Bad Actors

November 16, 2023

This essay is the work of a dumb dinobaby. No smart software required.

How has smart software, readily available as open source code and low-cost online services, affected cyber crime? Please select one of the following answers. No cheating allowed.

[a] Bad actors love smart software.

[b] Criminals are exploiting smart orchestration and business process tools to automate phishing.

[c] Online fraudsters have found that launching repeated breaching attempts is faster and easier when AI is used to adapt to server responses.

[d] Finding mules for drug and human trafficking is easier than ever because social media requests for interested parties can be cranked out at high speed 24×7.

“Well, Slim, your idea to use that new fangled smart software to steal financial data is working. Sittin’ here counting the money raining down on us is a heck of a lot easier than robbing old ladies in the Trader Joe’s parking lot,” says the bad actor with the coffin nail of death in his mouth and the ill-gotten gains in his hands. Thanks, Copilot, you are producing nice cartoons today.

And the correct answer is … a, b, c, and d.

For some supporting information, navigate to “Deepfake Fraud Attempts Are Up 3000% in 2023. Here’s Why.” The write up reports:

Face-swapping apps are the most common example. The most basic versions crudely paste one face on top of another to create a “cheapfake.” More sophisticated systems use AI to morph and blend a source face onto a target, but these require greater resources and skills.  The simple software, meanwhile, is easy to run and cheap or even free. An array of forgeries can then be simultaneously used in multiple attacks.

I like the phrase “cheapfake.”

Several observations:

  1. Bad actors, unencumbered by bureaucracy, can download, test, tune, and deploy smart criminal actions more quickly than law enforcement can thwart them.
  2. Existing cyber security systems are vulnerable to some smart attacks because AI can adapt and try different avenues.
  3. Large volumes of automated content can be created and emailed without the hassle of manual content creation.
  4. Cyber security vendors operate in “react mode”; that is, once a problem is discovered, the good actors develop a defense. The advantage goes to those with a good offense, not a good defense.

Net net: 2024 will be fraught with security issues.

Stephen E Arnold, November 16, 2023

Cyberwar Crimes? Yep and Prosecutions Coming Down the Pike

November 15, 2023

This essay is the work of a dumb humanoid. No smart software required.

Existing international law has appeared hamstrung in the face of cyber-attacks for years, with advocates calling for new laws to address the growing danger. It appears, however, that step will no longer be necessary. Wired reports, “The International Criminal Court Will Now Prosecute Cyberwar Crimes.” The Court’s lead prosecutor, Karim Khan, acknowledged in an article published by Foreign Policy Analytics that cyber warfare inflicts serious harm in the real world. Attacks on critical infrastructure like medical facilities and power grids may now be considered “war crimes, crimes against humanity, genocide, and/or the crime of aggression” as defined in the 1998 Rome Statute. That is great news, but why now? Writer Andy Greenberg tells us:

“Neither Khan’s article nor his office’s statement to WIRED mention Russia or Ukraine. But the new statement of the ICC prosecutor’s intent to investigate and prosecute hacking crimes comes in the midst of growing international focus on Russia’s cyberattacks targeting Ukraine both before and after its full-blown invasion of its neighbor in early 2022. In March of last year, the Human Rights Center at UC Berkeley’s School of Law sent a formal request to the ICC prosecutor’s office urging it to consider war crime prosecutions of Russian hackers for their cyberattacks in Ukraine—even as the prosecutors continued to gather evidence of more traditional, physical war crimes that Russia has carried out in its invasion. In the Berkeley Human Rights Center’s request, formally known as an Article 15 document, the Human Rights Center focused on cyberattacks carried out by a Russian group known as Sandworm, a unit within Russia’s GRU military intelligence agency. Since 2014, the GRU and Sandworm, in particular, have carried out a series of cyberwar attacks against civilian critical infrastructure in Ukraine beyond anything seen in the history of the internet.”

See the article for more details of Sandworm’s attacks. Greenberg consulted Lindsay Freeman, the Human Rights Center’s director of technology, law, and policy, who expects the ICC is ready to apply these standards well beyond the war in Ukraine. She notes the 123 countries that signed the Rome Statute are obligated to detain and extradite indicted war criminals. Another expert, Strauss Center director Bobby Chesney, points out Khan paints disinformation as a separate “gray zone.” Applying the Rome Statute to that tactic may prove tricky, but he might make it happen. Khan seems determined to hold international bad actors to account as far as the law will possibly allow.

Cynthia Murrell, November 15, 2023

The Risks of Smart Software in the Hands of Fullz Actors and Worse

November 7, 2023

This essay is the work of a dumb humanoid. No smart software required.

The ChatGPT and Sam AI-Man parade is getting more acts. I spotted some thumbs up from Satya Nadella about Sam AI-Man and his technology. The news service Techmeme provided me with dozens of links and enticing headlines about enterprise this and turbo that GPT. Those trumpets and tubas were pumping out the digital version of Funiculì, Funiculà.

I want to highlight one write-up and point out an issue with smart software that appears to have been ignored or overlooked, or that, like the iceberg that sank the RMS Titanic, is a heck of a lot more dangerous than Captain Edward Smith appreciated.

The crowd is thrilled with the new capabilities of smart software. Imagine automating mundane, mindless work. Over the oom-pah of the band, one can sense the excitement of the Next Big Thing getting Bigger and more Thingier. In the crowd, however, are real or nascent bad actors. They are really happy too. Imagine how easy it will be to automate processes designed to steal personal financial data or other chinks in humans’ armor!

The article is “How OpenAI Is Building a Path Toward AI Agents.” The main idea is that one can type instructions into Sam AI-Man’s GPT “system” and have smart software hook together discrete functions. These functions can then deliver an output requiring the actions of different services.

The write up approaches this announcement or marketing assertion with some prudence. The essay points out that “customer chatbots aren’t a new idea.” I agree. Connecting services has been one of the basic ideas of the use of software. Anyone who has used notched cards to retrieve items related to one another is going to understand the value of automation. And now, if the Sam AI-Man announcements are accurate, that capability no longer requires learning the ropes the old-fashioned way.
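
For readers who want the abstraction pinned down: “agents” here means a planner that converts an instruction into an ordered list of tool calls which a dispatcher executes. The sketch below fakes the planner with keyword matching rather than calling any real OpenAI API, so every name in it is illustrative, but the wiring is the pattern this essay worries about.

```python
# Minimal sketch of tool orchestration: a planner maps an instruction to an
# ordered list of (tool, argument) steps, and a dispatcher executes them.
# All tool names and outputs here are invented for illustration.

def check_calendar(date: str) -> str:
    return f"calendar: free slots on {date} are 10:00 and 14:00"

def summarize_doc(name: str) -> str:
    return f"summary of {name}: three open questions on the launch plan"

def draft_email(topic: str) -> str:
    return f"email draft: 'Meeting about {topic}: does 10:00 work?'"

TOOLS = {"calendar": check_calendar, "doc": summarize_doc, "email": draft_email}

def plan(instruction: str) -> list:
    """Stand-in for the model. A real agent would have an LLM emit these
    (tool, argument) steps; keyword matching keeps the sketch self-contained."""
    if "meeting" in instruction.lower():
        return [
            ("doc", "marketing_plan.docx"),
            ("calendar", "2023-11-09"),
            ("email", "the new marketing initiative"),
        ]
    return []

def run(instruction: str) -> None:
    # Each step's output could feed the next step's argument; that chaining
    # is precisely the "orchestration" discussed below.
    for tool, arg in plan(instruction):
        print(TOOLS[tool](arg))

run("Set up a meeting to discuss the marketing plan")
```

Swap the keyword planner for a model that emits the same (tool, argument) tuples and nothing else in the dispatcher changes; that is why the same plumbing serves meeting scheduling and, in less scrupulous hands, automated scams.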

The cited write up about building a path asserts:

Once you start enabling agents like the ones OpenAI pointed toward today, you start building the path toward sophisticated algorithms manipulating the stock market; highly personalized and effective phishing attacks; discrimination and privacy violations based on automations connected to facial recognition; and all the unintended (and currently unimaginable) consequences of infinite AIs colliding on the internet.

Fear, uncertainty, and doubt are staples of advanced technology. And the essay makes clear that the rule maker in chief is Sam AI-Man; to wit, the essay says:

After the event, I asked Altman how he was thinking about agents in general. Which actions is OpenAI comfortable letting GPT-4 take on the internet today, and which does the company not want to touch? Altman’s answer is that, at least for now, the company wants to keep it simple. Clear, direct actions are OK; anything that involves high-level planning isn’t.

Let me offer my observations about the Sam AI-Man innovations and the type of explanations offered at the PR and marketing event which has whipped up pundits, poohbahs, and Twitter experts (perhaps I should say X-spurts?).

First, the Sam AI-Man announcements strike me as making orchestration-as-a-service easy to use and widely available. Bad things won’t be allowed. But the core idea of what I call “orchestration” is where the parade is marching. I hear the refrain “Some think the world is made for fun and frolic.” But I don’t agree, I don’t agree. As advanced tools become widely available, the early adopters are not exclusively those who want to link a calendar to an email to a document about a meeting to talk about a new marketing initiative.

Second, the ability of Sam AI-Man to determine what’s in bounds and out of bounds is different from refereeing a pickleball game. Some of the players will be nation states with an adversarial view of the US of A. Furthermore, there are bad actors who have a knack for linking automated information to online extortion. These folks will be interested in cost cutting and efficiency. More problematic, some of these individuals will be more active in testing how orchestration can facilitate their human trafficking activities or drug sales.

Third, government entities and people like Sam AI-Man are, by definition, now in reactive mode. What I mean is that the announcements and the chatter focus on automating the work required to create a snappy online article; that is not what a bad actor will do. Individuals will see opportunities to create new ways to exploit the cluelessness of employees, senior citizens, and young people. The cheerful announcements and the parade tunes cannot drown out the low-frequency rumbles of excitement now rippling through the bad actor grapevines.

Net net: Crime propelled by orchestration is now officially a thing. The “regulations” of smart software, like the professionals who will have to deal with the downstream consequences of automation, are out of date. Am I worried? For me personally, no, I am not worried. For those who have to enforce the laws which govern a social construct? Yep, I have a bit of concern. Certainly more than those who are laughing and enjoying the parade.

Stephen E Arnold, November 7, 2023

Social Media: A No-Limits Zone for Scammers

November 6, 2023

This essay is the work of a dumb humanoid. No smart software required.

Scams have plagued social media since its inception, and the problem is only getting worse. The FTC described the current state of social media scams in “Social Media: A Golden Goose for Scammers.” Scammers and other bad actors are hiding in plain sight on popular social media platforms. The FTC’s Consumer Sentinel Network reported that one in four people who lost money to scams said the scam began on social media. In total, people reported losing $2.7 billion to social media scams, but the true figure could be greater because most cases aren’t reported.

The way bad actors target victims is sobering:

“Social media gives scammers an edge in several ways. They can easily manufacture a fake persona, or hack into your profile, pretend to be you, and con your friends. They can learn to tailor their approach from what you share on social media. And scammers who place ads can even use tools available to advertisers to methodically target you based on personal details, such as your age, interests, or past purchases. All of this costs them next to nothing to reach billions of people from anywhere in the world.”

Scammers don’t discriminate by age, and surprisingly, younger groups lost the most to bad actors. In the first six months of 2023, social media was the contact method in 47% of fraud loss reports from people aged 18-19, compared with 38% for people aged 20-29. The percentages decrease with age, tracking older generations’ lighter use of social media.

The biggest category of reported scams was online shopping, usually people who tried to buy something marketed on social media; such cases made up 44% of reported losses from January through June 2023. Fake investment opportunities, mostly cryptocurrency operations, grossed the largest share for scammers at 53% of dollars lost. Romance scams produced the second-highest losses for victims. These encounters start innocuously enough but always end with love bombing and money requests.

Take precautions: make your social media profiles private, verify directly with friends who suddenly ask you for money, don’t instantly fall in love with random strangers, and research companies before you invest. It’s all old, yet sagacious, advice for the digital age.

Whitney Grace, November 6, 2023

Europol Focuses on Child Centric Crime

October 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Children are the most vulnerable and exploited population in the world. The Internet unfortunately aids bad actors by letting them distribute child sexual abuse material, aka CSAM, while evading censors. Europol (the European Union’s law enforcement agency) wants to end CSAM by overriding Europeans’ privacy rights. Tech Dirt explores the idea in the article, “Europol Tells EU Commission: Hey, When It Comes To CSAM, Just Let Us Do Whatever We Want.”

Europol wants unfiltered access to a proposed EU AI system, and the data it gathers, which would be programmed to scan online content for CSAM. The police agency also wants to use the same AI to detect other crimes. This information came from a July 2022 high-level meeting between Europol Executive Director Catherine De Bolle and the European Commission’s Director-General for Migration and Home Affairs Monique Pariat. Europol pitched the idea when the EU believed it would mandate client-side scanning by service providers.

Privacy activists and EU member nations vetoed the idea because it would allow anyone to eavesdrop on private conversations, and they found it violated privacy rights. Europol justified the proposal with the familiar “for the children” or “save the children” moniker. Law enforcement, politicians, religious groups, and parents have spouted that rhetoric for years; it works because it makes anyone with a more nuanced view appear to side with pedophiles.

“It shouldn’t work as well as it does, since it’s been a cliché for decades. But it still works. And it still works often enough that Europol not only demanded access to combat CSAM but to use this same access to search for criminal activity wholly unrelated to the sexual exploitation of children… Europol wants a police state supported by always-on surveillance of any and all content uploaded by internet service users. Stasi-on-digital-steroids. Considering there’s any number of EU members that harbor ill will towards certain residents of their country, granting an international coalition of cops unfiltered access to content would swiftly move past the initial CSAM justification to governments seeking out any content they don’t like and punishing those who dared to offend their elected betters.”

There’s also evidence that law enforcement officials and politicians enforce anti-privacy laws while in the public sector, then leave for the private sector to work at companies that sell surveillance technology to governments. Is that a type of insider trading or nefarious influence?

Whitney Grace, October 16, 2023
