OpenAI Dips Its Toe in Dark Waters

October 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Facebook, TikTok, YouTube, Instagram, and other social media platforms have exacerbated woke and PC culture. It has gotten to the point where everyone and everything is viewed as offensive. Even AI assistants, a.k.a. chatbots, are being programmed with censorship. OpenAI designed the ChatGPT assistant, and the organization is constantly upgrading its generative text models. OpenAI released a paper about the latest upgrade to GPT-4: “GPT-4V(ision) System Card.”

GPT-4V relies on a large language model (LLM) to expand its knowledge base and tackle new problems and prompts. OpenAI used publicly available data and licensed sources to train GPT-4V, then refined it with human feedback. The paper explains that, while GPT-4V was proficient in many areas, it fell well short when it came to presenting accurate factual information.

OpenAI tested GPT-4V’s ability to handle scientific and medical information. Unfortunately, GPT-4V continued to stereotype and to offer ungrounded inferences from text and images, as AI algorithms have been shown to do in many cases. The biggest concern is that ChatGPT’s latest upgrade will be used to spread disinformation:

“As noted in the GPT-4 system card, the model can be used to generate plausible realistic and targeted text content. When paired with vision capabilities, image and text content can pose increased risks with disinformation since the model can create text content tailored to an image input. Previous work has shown that people are more likely to believe true and false statements when they’re presented alongside an image, and have false recall of made up headlines when they are accompanied with a photo. It is also known that engagement with content increases when it is associated with an image.”

When GPT-4V was tested on multiple tasks, it failed to convey information accurately. GPT-4V has learned to interpret data through a warped cultural lens and is a reflection of the Internet. It lacks the nuance to understand gray areas, despite OpenAI’s attempts to enhance the AI’s capabilities.

OpenAI is implementing censorship protocols to block harmful prompts; that is, GPT-4V won’t respond to sexist and racist requests. It is similar to how YouTube blocks videos that contain trigger or “stop” words: gun, death, etc. OpenAI is proactively preventing bad actors from using ChatGPT as a misinformation tool. But bad actors are smart and will design their own AI chatbots to skirt the censorship. They will see it as a personal challenge and will revel when they succeed.
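To see why stop-word style blocking is so easy to skirt, consider a minimal sketch of a naive keyword filter. This is not OpenAI’s or YouTube’s actual moderation logic; the word list and function name are hypothetical, and the example exists only to show how trivial obfuscation slips past literal matching.

```python
# A minimal sketch of naive stop-word filtering, of the kind described above.
# Not OpenAI's or YouTube's real moderation pipeline; the word list and the
# function name are hypothetical illustrations.

STOP_WORDS = {"gun", "death"}  # hypothetical trigger words

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any literal stop word."""
    tokens = prompt.lower().split()
    return any(token.strip(".,!?") in STOP_WORDS for token in tokens)

if __name__ == "__main__":
    print(is_blocked("How does a gun work?"))    # True  -- caught by the filter
    print(is_blocked("How does a g.u.n work?"))  # False -- trivially bypassed
```

A filter like this catches only exact matches; a determined user routes around it with punctuation, misspellings, or paraphrase, which is the gap bad actors exploit.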

Then what will OpenAI do?

Whitney Grace, October 20, 2023
