AI Has An Invisible Language. Bad Actors Will Learn It

October 28, 2024

Do you remember those Magic Eyes back from the 1990s? You needed to cross your eyes a certain way to see the pony or the dolphin. The Magic Eyes were a phenomenon of early computer graphics and it was like an exclusive club with a secret language. There’s a new secret language on the Internet generated by AI and it could potentially sneak in malicious acts says Ars Technica: “Invisible Text That AI Chatbots Understand And Humans Can’t? Yep, It’s A Thing.”

The secret text could potentially include harmful instructions into AI chatbots and other code. The purpose would be to steal confidential information and conduct other scams all without a user’s knowledge:

“The invisible characters, the result of a quirk in the Unicode text encoding standard, create an ideal covert channel that can make it easier for attackers to conceal malicious payloads fed into an LLM. The hidden text can similarly obfuscate the exfiltration of passwords, financial information, or other secrets out of the same AI-powered bots. Because the hidden text can be combined with normal text, users can unwittingly paste it into prompts. The secret content can also be appended to visible text in chatbot output.”

The steganographic framework is built into a text encoding network and LLMs and read it. Researcher Johann Rehberger ran two proof-of-concept attacks with the hidden language to discover potential risks. He ran the tests on Microsoft 365 Copilot to find sensitive information. It worked:

“When found, the attacks induced Copilot to express the secrets in invisible characters and append them to a URL, along with instructions for the user to visit the link. Because the confidential information isn’t visible, the link appeared benign, so many users would see little reason not to click on it as instructed by Copilot. And with that, the invisible string of non-renderable characters covertly conveyed the secret messages inside to Rehberger’s server.”

What is nefarious is that the links and other content generated by the steganographic code is literally invisible. Rehberger and his team used a tool to decode the attack. Regular users are won’t detect the attacks. As we rely more on AI chatbots, it will be easier to infiltrate a person’s system.

Thankfully the Big Tech companies are aware of the problem, but not before it will probably devastate some people and companies.

Whitney Grace, October 28, 2024

Comments

Got something to say?





  • Archives

  • Recent Posts

  • Meta