Big Tech and AI: Trust Us. We Just Ooze Trust

May 28, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Amid rising concerns, The Register reports, “Top AI Players Pledge to Pull the Plug on Models that Present Intolerable Risk” at the recent AI Seoul Summit. How do they define “intolerable”? That little detail has yet to be determined. The non-binding declaration was signed by OpenAI, Anthropic, Microsoft, Google, Amazon, and other AI heavyweights. Reporter Laura Dobberstein writes:

“The Seoul Summit produced a set of Frontier AI Safety Commitments that will see signatories publish safety frameworks on how they will measure risks of their AI models. This includes outlining at what point risks become intolerable and what actions signatories will take at that point. And if mitigations do not keep risks below thresholds, the signatories have pledged not to ‘develop or deploy a model or system at all.’”

We also learn:

“Signatories to the Seoul document have also committed to red-teaming their frontier AI models and systems, sharing information, investing in cyber security and insider threat safeguards in order to protect unreleased tech, incentivizing third-party discovery and reporting of vulnerabilities, AI content labelling, prioritizing research on the societal risks posed by AI, and to use AI for good.”

Promises, promises. And where are these frameworks so we can hold companies accountable? Hang tight, the check is in the mail. The summit produced a document full of pretty words, but as the article notes:

“All of that sounds great … but the details haven’t been worked out. And they won’t be, until an ‘AI Action Summit’ to be staged in early 2025.”

If then. After all, there’s no need to hurry. We are sure we can trust these AI bros to do the right thing. Eventually. Right?

Cynthia Murrell, May 28, 2024

