Another Stellar Insight about AI
October 17, 2024
Because AI we think AI is the most advanced technology, we believe it is impenetrable to attack. Wrong. While AI is advanced, the technology is still in its infancy and is extremely vulnerable, especially to smart bad actors. One of the worst things about AI and the Internet is that we place too much trust in it and bad actors know that. They use their skills to manipulate information and AI says ArsTechnica in the article: “Hacker Plants False Memories In ChatGPT To Steal User Data In Perpetuity.”
Johann Rehberger is a security researcher who discovered that ChatGPT is vulnerable to attackers. The vulnerability allows bad actors to leave false information and malicious instructions in a user’s long-term memory settings. It means that they could steal user data or cause more mayhem. OpenAI didn’t take Rehmberger serious and called the issue a safety concern aka not a big deal.
Rehberger did not like being ignored, so he hacked ChatGPT in a “proof-of-concept” to perpetually exfiltrate user data. As a result, ChatGPT engineers released a partial fix.
OpenAI’s ChatGPT stores information to use in future conversations. It is a learning algorithm to make the chatbot smarter. Rehberger learned something incredible about that algorithm:
“Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations. These false memories could be planted by storing files in Google Drive or Microsoft OneDrive, uploading images, or browsing a site like Bing—all of which could be created by a malicious attacker.”
Bad attackers could exploit the vulnerability for their own benefits. What is alarming is that the exploit was as simple as having a user view a malicious image to implement the fake memories. Thankfully ChatGPT engineers listened and are fixing the issue.
Can’t anything be hacked one way or another?
Whitney Grace, October 17, 2024
Comments
Got something to say?