Infohazards: Another 2020 Requirement

October 20, 2020

New technologies that become societal staples carry risks and require policies to rein in potential dangers. Artificial intelligence is one such developing technology, and governing policies have yet to catch up with it. Experts in computer science, government, and other controlling organizations need to discuss how to control AI, says Vanessa Kosoy in the LessWrong blog post “Needed: AI Infohazard Policy.”

Kosoy opens her case for an AI information policy with the standard science fiction warning argument: “AI risk is that AI is a danger, and therefore research into AI might be dangerous.” Drawing caution from science fiction to prevent real-world disaster is sensible, but it is not enough on its own. Experts must develop a governing body of AI guidelines that determines which findings should be shared and how to handle results that are not published.

Individuals and single organizations cannot make these decisions alone, even if they have their own internal policies. Organizations and individuals must coordinate their knowledge of AI and develop consensus policies to control AI information. Kosoy suggests that any AI infohazard policy should consider the following:

• “Some results might have implications that shorten the AI timelines, but are still good to publish since the distribution of outcomes is improved.

• Usually we shouldn’t even start working on something which is in the should-not-be-published category, but sometimes the implications only become clear later, and sometimes dangerous knowledge might still be net positive as long as it’s contained.

• In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself. Therefore, safe AGI is a broad collective effort and we should expect most results to be published. In the endgame, it might become likely for a given group to make it all the way to safe AGI. In this case, incentives for secrecy become stronger.

• The policy should not fail to address extreme situations that we only expect to arise rarely, because those situations might have especially major consequences.”

She continues that any AI information policy should determine the criteria for what information is published, what channels should be consulted to determine publication, and how to handle potentially dangerous information.

These questions apply to any technology whose information carries potential hazards. Specific technological policies, however, weed out pedantic bickering and set standards for everyone, individuals and organizations alike. The problem is getting everyone to agree on the policies.

Whitney Grace, October 20, 2020
