OpenAI and Its Alignment Pipeline

July 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Yep, alignment pipeline. No, I have zero clue what that means. I came across this felicitous phrase in “OpenAI Co-Founder Warns Superintelligent AI Must Be Controlled to Prevent Possible Human Extinction.” The “real news” story focuses on the PR push for Sam AI-Man’s OpenAI outfit. The idea for the story strikes me as a PR confection, but I am a dinobaby. Dinobabies can be skeptical.

An OpenAI professional explains to some of his friends that smart software may lead to human extinction. Maybe some dogs and cockroaches will survive. He points out that his company may save the world with an alignment pipeline. The crowd seems to be getting riled up. Someone asks, “What’s an alignment pipeline?” A happy honk from the ArnoldIT logo to the ever-creative MidJourney system. (Will it be destroyed too?)

The write up reports a quote from one of Sam AI-Man’s colleagues; to wit:

“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a Tuesday blog post, saying they believe such advancements could arrive as soon as this decade.

There you go. Global warming, the threat of nuclear discharges in Japan and Ukraine, post-Covid hangover, and human extinction. Okay.

What’s interesting to this dinobaby is that OpenAI decided to make its cloud service available. OpenAI hooked up with the thoughtful, kind, and humane Microsoft. OpenAI forced the somewhat lethargic Googzilla to shift into gear and respond.

The Murdoch article presents another OpenAI wizard output:

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us and so our current alignment techniques will not scale to superintelligence,” they wrote. “We need new scientific and technical breakthroughs.”
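For readers who do not speak wizard, the “reinforcement learning from human feedback” the quote leans on works roughly like this: the model proposes answers, a human picks the better of two, and a reward model learns which answers to favor. Here is a toy Python sketch of that loop. Every name in it is my own invention for illustration; it is not OpenAI’s code.

```python
import random

# Toy RLHF loop: answers are compared by a human (or a stand-in),
# a reward model learns the preferences, and the "policy" samples
# answers in proportion to learned reward. Illustration only.

ANSWERS = ["helpful answer", "rude answer", "evasive answer"]

def human_preference(a: str, b: str) -> str:
    """Stand-in for a human labeler who prefers the helpful answer."""
    return a if a == "helpful answer" else b

# Reward model: one learned score per answer.
reward = {answer: 0.0 for answer in ANSWERS}

for _ in range(1000):
    a, b = random.sample(ANSWERS, 2)
    winner = human_preference(a, b)
    loser = b if winner == a else a
    reward[winner] += 0.1   # preferred answers drift up
    reward[loser] -= 0.1    # dispreferred answers drift down

def policy() -> str:
    """Sample an answer, weighted toward higher learned reward."""
    weights = [max(reward[answer], 0.01) for answer in ANSWERS]
    return random.choices(ANSWERS, weights=weights)[0]

print(sorted(reward.items(), key=lambda kv: -kv[1]))
print("Policy says:", policy())
```

The quote’s own argument is visible even in the toy: the whole loop hinges on human_preference() being able to judge the answers. If the model is smarter than the judge, the loop has nothing to stand on.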

This type of jibber jabber is fascinating. I wonder why the OpenAI folks did not do a bit of that “what if” thinking before making the service available. Yeah, woulda, shoulda, coulda. It sounds to me like a driver saying to a police officer, “I didn’t mean to run over Grandma Wilson.”

How does that sound to the grandchildren, Grandma’s insurance company, and the judge?

Sounds good, but someone ran over Grandma Wilson, right, Mr. OpenAI wizards? Answer the question, please.

The OpenAI geniuses have an answer, and I quote:

To solve these problems, within a period of four years, they said they’re leading a new team and dedicating 20% of the compute power secured to date to this effort. “While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” they said.

Now the capstone:

Its goal is to devise a roughly human-level automated alignment researcher, using vast amounts of compute to scale it and “iteratively align superintelligence.” In order to do so, OpenAI will develop a scalable training method, validate the resulting model and then stress test its alignment pipeline.
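What would “develop a scalable training method, validate the resulting model and then stress test” look like as actual machinery? Nobody outside OpenAI knows, so take this back-of-the-napkin Python rendering of the three steps for what it is: my own guesswork, with every function name and the toy pass/fail flag invented for illustration.

```python
# The three advertised steps, rendered as stubs. All names and the
# toy "aligned" flag are invented for illustration; OpenAI has not
# published code for its alignment pipeline.

def train_alignment_researcher(compute_fraction: float) -> dict:
    """Step 1: 'develop a scalable training method' (hand-waved)."""
    return {"aligned": compute_fraction >= 0.20}  # toy stand-in

def validate(model: dict) -> bool:
    """Step 2: check the resulting model against held-out tests."""
    return model["aligned"]

def stress_test(model: dict) -> bool:
    """Step 3: adversarial probing of the pipeline itself."""
    return model["aligned"]  # a real test would try hard to break it

# The promised 20% of compute, run through the pipeline once.
model = train_alignment_researcher(compute_fraction=0.20)
if validate(model) and stress_test(model):
    print("Pipeline reports: aligned.")
else:
    print("Back to the drawing board.")
```

Note that in the stub, as in the press release, success is whatever the pipeline says success is.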

Yes, the alignment pipeline. What a crock of high school science club yip yap. Par for the course today. Nice thinking, PR people. One final thought: Grandma is dead. CYA words may not impress some people. To a high school science club type, the logic and the committee make perfect sense. Good work, Mr. AI-Men.

Stephen E Arnold, July 12, 2023
