Viruses Get Intelligence Upgrade When Designed With AI

March 21, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Viruses are still a common problem on the Internet despite all the PSAs, firewalls, antiviral software, and other precautions users take to protect their technology and data. Intelligent and adaptable viruses have remained a concept of science-fiction but bad actors are already designing them with AI. It’s only going to get worse. Tom’s Hardware explains that an AI virus is already wreaking havoc: “AI Worm Infects Users Via AI-Enabled Email Clients-Morris II Generative AI Worm Steals Confidential Data As It Spreads.”

The Morris II Worm was designed by researchers Ben Nassi of Cornell Tech, Ron Button from Intuit, and Stav Cohen from the Israel Institute of Technology. They built the worm to understand how to better combat bad actors. The researchers named it after the first computer worm Morris. The virus is a generative AI for that steals data, spams with email, spreads malware, and spreads to multiple systems.

Morris II attacks AI apps and AI-enabled email assistants that use generative text and image engines like ChatGPT, LLaVA, and Gemini Pro. It also uses adversarial self-replicating prompts. The researchers described Morris II’s attacks:

“ ‘The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images).’”

The worm continues to harvest information and update it in databases. The researchers shared their information with OpenAI and Google. OpenAI responded by saying the organization will make its systems more resilient and advises designers to watch out for harmful inputs. The advice is better worded as “sleep with one eye open.”

Whitney Grace, March 21, 2024


Comments are closed.

  • Archives

  • Recent Posts

  • Meta