Google Mandiant on Influence Campaigns: Hey, They Do Not Work Very Well

August 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“AI Use Rising in Influence Campaigns Online, But Impact Limited – US Cyber Firm” is a remarkable report for three reasons: [a] the write up does not question why the report was generated at this time, in the dead of summer; [b] it does not explain what methods were used to determine that online “manipulative information campaigns” were less than effective; and [c] it does not present data substantiating that the “problem” will get “bigger over time.”


The speaker says, “Online information campaigns do not work too well. Nevertheless, you should use them because they generate money. And money, as you know, is the ultimate good.” MidJourney did a C job of rendering a speaker with whom the audience disagrees.

Frankly I view this type of cyber security report as a public relations and marketing exercise. Why would a dinobaby like me view information generated by the leader in online information finding, pointing, and distributing as specious? One reason is that paid advertising is itself a version of a “manipulative information campaign.” The report therefore suggests that Google’s online advertising business is less effective than Google has claimed for 20 years when pitching it as a way to generate leads and sales.

Second, I am skeptical about dismissing online manipulative information campaigns as a poor way to produce a desired thought or action. Sweden has set up a government agency to thwart anti-Sweden online information. Nation states continue to use social media and state-controlled or state-funded online newsletters to output information designed to foster particular behaviors. Examples range from self-harm messaging to videos about the perils of allowing people to vote in a fair election.

Third, the problem is a significant one. Amazon has a fake review problem; its solution may be to allow poorly understood algorithms to generate “reviews.” Data about the inherent bias of smart software and the ability of its developers to steer results are abundant. Let me give an example. Navigate to MidJourney and ask for an image of a school building on fire. The system will not generate the image. That decision is based on inputs from humans who want to keep the smart software generating “good” images. Google’s own ability to block certain types of medical information illustrates the knobs and dials available to a small group of high-technology companies, some of which are alleged monopolies.

Do I believe the Google Mandiant information? Maybe some. But there are two interesting facets of this report which I want to highlight.

The first is that the Mandiant information undercuts what Google has suggested is the benefit of its online advertising business; namely, that it works. Mandiant’s report seems to say, “Well, not too well.”

The second is that the article cited is from Thomson Reuters, whose “trust principles” phrase appears on the story. Nevertheless, the story ignores the likelihood that the Mandiant study is a fairly bad PR effort. Yep, trust. Not so much for this dinobaby.

Stephen E Arnold, August 18, 2023

