LLMs Are Dangerous Propaganda Tools

February 13, 2025

AI chatbots are in their infancy. While they have been tested for a number of years, they are still prone to bias and other serious mistakes. Big business and other organizations aren't waiting for the technology to improve. Instead, they're incorporating chatbots and other AI tools into their infrastructures. Baldur Bjarnason warns about the dangers of AI, especially when it comes to LLMs and censorship:

“Poisoning For Propaganda: Rising Authoritarianism Makes LLMs More Dangerous.”

Large language models (LLMs) are the AI systems trained on vast collections of text that power today's chatbots. Bjarnason warns that using any LLM, even one run locally, is dangerous.

Why?

LLMs are statistical language models shaped by the parameters and training choices of their developers. Those choices are prone to error because they are made by humans, which is one reason AI outputs are untrustworthy. Models can also be deliberately tuned to favor specific opinions; in other words, they can become propaganda machines. Bjarnason warns that LLMs are being used in the lawless takeover of the United States. He also says that corporations, in order to maintain their power, won't hesitate to remove information from or add it to their LLMs if the US government asks them to.

This is another type of censorship:

“The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place. That’s the job it does. You won’t notice when the censorship kicks in… The alternative approach to censorship, fine-tuning the model to return a specific response, is more costly than keyword blocking and more error-prone. And resorting to prompt manipulation or preambles is somewhat easily bypassed but, crucially, you need to know that there is something to bypass (or “jailbreak”) in the first place. A more concerning approach, in my view, is poisoning.”
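The keyword blocking Bjarnason mentions is worth making concrete, because it shows why the user "won't notice when the censorship kicks in." A minimal sketch, with an invented blocklist and refusal message purely for illustration, might wrap a model's reply like this:

```python
# Illustrative sketch of keyword blocking as a censorship layer.
# The blocklist and refusal text are hypothetical examples, not
# taken from any real product.

BLOCKLIST = {"forbidden-topic", "banned-name"}  # hypothetical terms

def filter_response(prompt: str, model_reply: str) -> str:
    """Return a canned refusal if the prompt or reply hits the blocklist."""
    text = (prompt + " " + model_reply).lower()
    if any(term in text for term in BLOCKLIST):
        # The user sees only a generic refusal, never the reason,
        # which is why this form of censorship is hard to detect.
        return "I can't help with that."
    return model_reply
```

Because the check sits outside the model, it costs almost nothing compared with fine-tuning, and the canned reply gives the user no hint that a filter, rather than the model itself, produced the answer.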

Corporations paired with governments (and not just the United States) are "poisoning" LLMs with propagandized sentiments. It's a subtle way of transforming perspectives without loud indoctrination campaigns, comparable to subliminal messages in commercials or teaching only one viewpoint.

Controls seem unlikely.

Whitney Grace, February 13, 2025
