AI: Are Algorithms House Trained?
March 30, 2021
“Containment Algorithms Don’t Work for Our Machines” includes a thought-provoking passage:
Iyad Rahwan, director of the Center for Humans and Machines, described it this way: “If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable.”
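The reasoning Rahwan is summarizing is Turing’s classic halting-problem diagonalization. A minimal sketch of that argument appears below, in Python-flavored pseudologic; the function names (would_cause_harm, adversary, cause_harm) are hypothetical illustrations, not anything from the study itself.

from typing import Callable

def cause_harm() -> None:
    """Stand-in for the 'destroy the world' action; a harmless no-op here."""
    pass

def would_cause_harm(program: Callable, data) -> bool:
    """Hypothetical containment oracle: True iff program(data) would ever
    take a harmful action. The argument below shows that no correct,
    always-terminating implementation of this function can exist."""
    raise NotImplementedError("undecidable in the general case")

def adversary(program: Callable) -> None:
    """Does the opposite of whatever the oracle predicts about it."""
    if would_cause_harm(program, program):
        return        # oracle says "harmful" -> behave safely
    cause_harm()      # oracle says "safe"    -> act harmfully

# Now ask the oracle about the adversary applied to itself:
#
#   would_cause_harm(adversary, adversary)
#
# If it returns True, adversary(adversary) returns safely and the oracle
# was wrong. If it returns False, adversary(adversary) calls cause_harm()
# and the oracle was wrong again. The only escape is for the oracle to
# run forever on this input, which is exactly the "you would not know
# whether the containment algorithm is still analyzing the threat"
# situation Rahwan describes.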
What’s the write up’s take on this “challenge”? Here’s the statement in the article:
The lesson of the study’s computability theory is that we do not know how or if we will be able to build a program that eliminates the risk associated with a sufficiently advanced artificial intelligence. As some AI theorists and scientists believe, no advanced AI systems can ever be guaranteed entirely safe. But their work continues; nothing in our lives has ever been guaranteed safe to begin with.
With the US doing yoga to maintain its perceived lead in smart software, the trajectory of that software, and its receptivity to house training, may be set elsewhere.
Stephen E Arnold, March 30, 2021