Who Will Ultimately Control AI?

September 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In the Marvel comics universe, a being called The Watcher lives on Earth’s moon. He observes humanity but is not supposed to interfere in its affairs. Marvel’s Watcher brings to mind the old adage, “Who watches the watcher?” While comic book lore offers endless answers to that question, the current controversy over AI regulation and who will watch AI does not. Time delves into the conversation in “The Heated Debate Over Who Should Control Access To AI.”

In May 2023, the CEOs of three AI companies (OpenAI, Google DeepMind, and Anthropic) signed a statement warning that AI could be harmful to humanity and as dangerous as nuclear weapons or a pandemic. AI experts and leaders are calling for restrictions on certain AI models to prevent bad actors from using them to spread disinformation, launch cyber attacks, make bioweapons, and cause other harm.

Not all of the experts and leaders agree, including the folks at Meta. US Senators Josh Hawley and Richard Blumenthal, Ranking Member and Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, don’t like that Meta is sharing powerful AI models.

“The disagreement between Meta and the Senators is just the beginning of a debate over who gets to control access to AI, the outcome of which will have wide-reaching implications. On one side, many prominent AI companies and members of the national security community, concerned by risks posed by powerful AI systems and possibly motivated by commercial incentives, are pushing for limits on who can build and access the most powerful AI systems. On the other is an unlikely coalition of Meta, and many progressives, libertarians, and old-school liberals, who are fighting for what they say is an open, transparent approach to AI development.”

OpenAI published a paper titled “Frontier Model Regulation,” co-authored by researchers and academics from OpenAI, DeepMind, and Google, with suggestions for how to control AI. Developing safety standards and giving regulators visibility are no-brainers. Other ideas, such as requiring AI developers to acquire a license to train and deploy powerful AI models, sparked arguments. Licensing might be a good idea in the future, but it is a poor fit for today’s world.

Meta releases its AI models as open source, with paid licenses for its more robust models. Meta’s CEO did say something idiotic:

“Meta’s leadership is also not convinced that powerful AI systems could pose existential risks. Mark Zuckerberg, co-founder and CEO of Meta, has said that he doesn’t understand the AI doomsday scenarios, and that those who drum up these scenarios are ‘pretty irresponsible.’ Yann LeCun, Turing Award winner and chief AI scientist at Meta, has said that fears over extreme AI risks are ‘preposterously stupid.’”

The remainder of the article delves into how regulations limit innovation, how surveillance would be Orwellian in nature, and how bad-actor countries wouldn’t follow the rules. It’s once again the same old arguments repackaged with an AI sticker.

Who will control AI? Gee, maybe the same outfits controlling information and software right this minute?

Whitney Grace, September 27, 2023
