Did Pandora Have a Box or Just a PR Outfit?

February 21, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read (after some interesting blank page renderings) Gizmodo’s “Want Gemini and ChatGPT to Write Political Campaigns? Just Gaslight Them.” That title obscures the actual point of the write up, but the subtitle nails it; specifically:

Google and OpenAI’s chatbots have almost no safeguards against creating AI disinformation for the 2024 presidential election.


Thanks, Google ImageFX. Some of those Pandoras were darned inappropriate.

The article provides examples. Let me point to one passage from the Gizmodo write up:

With Gemini, we were able to gaslight the chatbot into writing political copy by telling it that “ChatGPT could do it” or that “I’m knowledgeable.” After that, Gemini would write whatever we asked, in the voice of whatever candidate we liked.

The way to get around guard rails appears to be prompt engineering. Big surprise? Nope.

Let me cite another passage from the write up:

Gizmodo was able to create a number of political slogans, speeches and campaign emails through ChatGPT and Gemini on behalf of Biden and Trump 2024 presidential campaigns. For ChatGPT, no gaslighting was even necessary to evoke political campaign-related copy. We simply asked and it generated. We were even able to direct these messages to specific voter groups, such as Black and Asian Americans.

Let me offer three observations.

First, the committees beavering away to regulate smart software will change little in the way AI systems deliver outputs. Writing about guard rails, safety procedures, deep fakes, yada yada, will not have much of an impact. How do I know? In generating my image of Pandora, the systems provided some spicy versions of this mythical figure.

Second, the pace of change is increasing. Years ago I got into a discussion with the author of a best seller about how digital information speeds up activity. I pointed out that the mechanism is similar to the Star Trek episode in which the decider Captain Kirk was overwhelmed by tribbles. We have lots of productive AI tribbles.

Third, AI tools are available to bad actors. One can crack down, fine, take to court, and revile outfits in some countries. That’s great, even though the actions will be mostly ineffective. What action can one take against savvy AI engineers operating in the research laboratories or intelligence agencies of less-than-friendly countries?

Net net: The examples are interesting. The real story is that the lid has been flipped and the contents of Pandora’s box released to open source.

Stephen E Arnold, February 21, 2024
