AI Chatbots Now Learning Russian Propaganda
May 6, 2025
Gee, who would have guessed? Forbes reports, “Russian Propaganda Has Now Infected Western AI Chatbots—New Study.” Contributor Tor Constantino cites a recent NewsGuard report as he writes:
“A Moscow-based disinformation network known as ‘Pravda’ — the Russian word for ‘truth’ — has been flooding search results and web crawlers with pro-Kremlin falsehoods, causing AI systems to regurgitate misleading narratives. The Pravda network, which published 3.6 million articles in 2024 alone, is leveraging artificial intelligence to amplify Moscow’s influence at an unprecedented scale. The audit revealed that 10 leading AI chatbots repeated false narratives pushed by Pravda 33% of the time. Shockingly, seven of these chatbots directly cited Pravda sites as legitimate sources. In an email exchange, NewsGuard analyst Isis Blachez wrote that the study does not ‘name names’ of the AI systems most susceptible to the falsehood flow but acknowledged that the threat is widespread.”
Blachez believes a shift is underway from Russian operatives directly targeting readers to manipulation of AI models. Much more efficient. And sneaky. We learn:
“One of the most alarming practices uncovered is what NewsGuard refers to as ‘LLM grooming.’ This tactic is described as the deliberate deception of datasets that AI models — such as ChatGPT, Claude, Gemini, Grok 3, Perplexity and others — train on by flooding them with disinformation. Blachez noted that this propaganda pile-on is designed to bias AI outputs to align with pro-Russian perspectives. Pravda’s approach is methodical, relying on a sprawling network of 150 websites publishing in dozens of languages across 49 countries.”
AI firms can try to block propaganda sites from their models' training data, but the operation is so large and elaborate it may be impossible. Besides, how would they know if they had succeeded? Nevertheless, Blachez encourages them to try. Otherwise, she warns, tech firms are destined to become conduits for the Kremlin's agenda.
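In principle, the blocking Blachez recommends amounts to filtering candidate training documents against a domain blocklist before ingestion. Here is a minimal sketch of that idea; the domain names and function names are hypothetical illustrations, not drawn from any real AI firm's pipeline or from NewsGuard's data:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of disinformation domains (illustrative only).
BLOCKLIST = {"pravda-example.ru", "news-pravda-example.com"}

def is_blocked(url: str, blocklist: set[str]) -> bool:
    """True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return any(host == d or host.endswith("." + d) for d in blocklist)

def filter_corpus(docs: list[dict], blocklist: set[str]) -> list[dict]:
    """Keep only documents whose source URL is not on the blocklist."""
    return [doc for doc in docs if not is_blocked(doc["url"], blocklist)]
```

The catch, as the article notes, is scale: a network of 150 sites publishing in dozens of languages across 49 countries can spin up new domains faster than a static list can track them, which is exactly why this kind of filtering may never be complete.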
Of course, the rest of us have a responsibility here as well. We can and should double-check information served up by AI. NewsGuard suggests its own Misinformation Fingerprints, a catalog of provably false claims it has found online. Or here is an idea: maybe do not turn to AI for information in the first place. After all, the tools are notoriously unreliable. And that is before Russian operatives get involved.
Cynthia Murrell, May 6, 2025