Identifying Misinformation: A Task Not Yet Mastered
January 8, 2025
This is an official dinobaby post. No smart software involved in this blog post.
On New Year’s Eve, the US Department of the Treasury issued a news release about Russian interference in the recent US presidential election. Tucked into the document “Treasury Sanctions Entities in Iran and Russia That Attempted to Interfere in the U.S. 2024 Election” was this passage:
GRU-AFFILIATED ENTITY USES ARTIFICIAL INTELLIGENCE TOOLS TO INTERFERE IN THE U.S. 2024 ELECTION
The Moscow-based Center for Geopolitical Expertise (CGE), founded by OFAC-designated [Office of Foreign Assets Control — Editor] Aleksandr Dugin, directs and subsidizes the creation and publication of deepfakes and circulated disinformation about candidates in the U.S. 2024 general election. CGE personnel work directly with a GRU unit that oversees sabotage, political interference operations, and cyberwarfare targeting the West. Since at least 2024, a GRU officer and CGE affiliate directed CGE Director Valery Mikhaylovich Korovin (Korovin) and other CGE personnel to carry out various influence operations targeting the U.S. 2024 presidential election.

At the direction of, and with financial support from, the GRU, CGE and its personnel used generative AI tools to quickly create disinformation that would be distributed across a massive network of websites designed to imitate legitimate news outlets to create false corroboration between the stories, as well as to obfuscate their Russian origin. CGE built a server that hosts the generative AI tools and associated AI-created content, in order to avoid foreign web-hosting services that would block their activity. The GRU provided CGE and a network of U.S.-based facilitators with financial support to: build and maintain its AI-support server; maintain a network of at least 100 websites used in its disinformation operations; and contribute to the rent cost of the apartment where the server is housed. Korovin played a key role in coordinating financial support from the GRU to his employees and U.S.-based facilitators. In addition to using generative AI to construct and disseminate disinformation targeting the U.S. electorate in the lead up to the U.S. 2024 general election, CGE also manipulated a video it used to produce baseless accusations concerning a 2024 vice presidential candidate in an effort to sow discord amongst the U.S. electorate.

Today, OFAC is designating CGE and Korovin pursuant to E.O. 13848 for having directly or indirectly engaged in, sponsored, concealed, or otherwise been complicit in foreign malign influence in the 2024 U.S. election. Additionally, OFAC is designating CGE pursuant to E.O. 13694, as amended, E.O. 14024, and section 224 of the Countering America’s Adversaries Through Sanctions Act of 2017 (CAATSA) for being owned or controlled by, or having acted or purported to act for or on behalf of, directly or indirectly, the GRU, a person whose property and interests in property are blocked pursuant to E.O. 13694, as amended, E.O. 14024, and section 224 of CAATSA. OFAC is also designating Korovin pursuant to E.O. 14024 for being or having been a leader, official, senior executive officer, or member of the board of directors of CGE, a person whose property and interests in property are blocked pursuant to E.O. 14024.
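The release’s description of “false corroboration,” in which the same AI-generated story is replicated across a network of look-alike outlets, is concrete enough to sketch in code. Below is a minimal, illustrative check that flags near-verbatim stories appearing on nominally independent domains using Jaccard similarity over word shingles. The domain names and article snippets are hypothetical, and this is a sketch of the idea, not anyone’s production detector.

```python
# Minimal sketch: flag "false corroboration", i.e. the same story republished
# across nominally independent sites. Assumes article text has already been
# scraped; all domains and snippets below are hypothetical.
from itertools import combinations

def shingles(text: str, k: int = 5) -> set:
    """Return the set of overlapping k-word windows in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of intersection over size of union."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_corroboration(articles: dict, threshold: float = 0.6) -> list:
    """Return domain pairs whose articles are suspiciously similar."""
    sets = {domain: shingles(text) for domain, text in articles.items()}
    flagged = []
    for d1, d2 in combinations(sets, 2):
        score = jaccard(sets[d1], sets[d2])
        if score >= threshold:
            flagged.append((d1, d2, round(score, 2)))
    return flagged

# Hypothetical scraped articles from three "independent" outlets.
articles = {
    "daily-ledger.example": "the candidate was seen leaving the building "
                            "after the vote was counted in the disputed district",
    "metro-wire.example": "the candidate was seen leaving the building after "
                          "the vote was counted in the disputed district late tuesday",
    "city-beacon.example": "local officials announced road closures for the "
                           "holiday parade downtown this weekend",
}

for d1, d2, score in flag_corroboration(articles):
    print(f"possible false corroboration: {d1} <-> {d2} (similarity {score})")
```

Even this naive check hints at why detection is hard: lightly paraphrasing the generated text from site to site pushes the similarity score below any threshold, and generative AI tools make that paraphrasing trivial to do at scale.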
Several questions arise:
- Was the smart software open source or commercial? What model or models powered the misinformation effort?
- What functions could intermediaries and service providers add to their existing systems to identify and block the actions of an adversary’s operatives? (Existing software for identifying “fake” content obviously does not work particularly well.)
- What safeguard standards can prevent misuse of smart software? Are such standards even possible, or are they too difficult to implement in a “run fast and break things” setting?
- What procedures and specialized software would give security professionals a reliable early warning system? This interference demonstrates that the much-hyped cyber alert services are not accurate enough to cope with willful misinformation “factories.” (One crude heuristic is sketched below.)
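On the last point, the release offers one concrete tell: at least 100 sites were run off a single GRU-funded server. A crude early-warning heuristic, sketched below on the assumption that an analyst already holds a watchlist of suspect domains (the names are hypothetical, and resolving them requires network access), is to group domains by the host they resolve to and flag clusters.

```python
# Minimal sketch of an infrastructure-clustering heuristic: many
# "independent" news domains resolving to the same host suggests shared
# infrastructure, like the single server described in the Treasury release.
# Watchlist entries are hypothetical; running this needs network access.
import socket
from collections import defaultdict

def group_by_ip(domains: list[str]) -> dict[str, list[str]]:
    """Resolve each domain and bucket the names by IPv4 address."""
    buckets: defaultdict[str, list[str]] = defaultdict(list)
    for domain in domains:
        try:
            ip = socket.gethostbyname(domain)
        except socket.gaierror:
            continue  # unresolvable domains are skipped, not flagged
        buckets[ip].append(domain)
    return dict(buckets)

watchlist = ["daily-ledger.example", "metro-wire.example", "city-beacon.example"]

for ip, names in group_by_ip(watchlist).items():
    if len(names) > 1:
        print(f"shared host {ip}: {', '.join(names)}")
```

A shared IP address is weak evidence on its own, since CDNs and commodity hosting put unrelated sites on the same address. It is a triage signal for a human analyst, not proof of coordination, which is exactly why the early warning problem remains open.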
Stephen E Arnold, January 8, 2025