Let Technology Solve the Problem: Ever Hear of Russell and His Paradox?
September 21, 2022
I read “You Can’t Solve AI Security Problems with More AI.” The main idea, in my opinion, is that Russell’s Paradox is alive and well. The article states:
When you’re engineering for security, a solution that works 99% of the time is no good. You are dealing with adversarial attackers here. If there is a 1% gap in your protection they will find it—that’s what they do!
Obvious? Yep. That one percent is an issue. But the belief that technology can solve a problem is more of a delusional, marketing-oriented approach to reality. Some informed people are confident that one percent does not make much of a difference. Maybe? But what about a smart software system that generates problematic outputs at rates greater than one percent? Can technology address these issues? The answer offered by some is, "Sure, we have added this layer, that process, and these procedures to deliver accuracy in the 85, 90, or 95 percent range." Yes, that's "confidence."
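A back-of-the-envelope sketch shows why that one percent matters when the other side gets to keep trying. The block rates and the hundred-try figure below are hypothetical, picked only to illustrate how the gap compounds:

# Chance that at least one of n independent attempts slips past a filter
# that blocks each attempt with probability p_block. All rates are illustrative.
def breach_probability(p_block: float, attempts: int) -> float:
    return 1.0 - p_block ** attempts

for p_block in (0.99, 0.95, 0.90, 0.85):
    # An attacker scripting 100 prompt variations against the protection
    print(f"block rate {p_block:.0%}: breach chance over 100 tries = "
          f"{breach_probability(p_block, 100):.1%}")

Even the 99 percent filter gets beaten roughly 63 percent of the time once the attacker automates a hundred tries; at 85 percent, a breach is essentially guaranteed.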
The write up points out:
Trying to prevent AI attacks with more AI doesn’t work like this. If you patch a hole with even more AI, you have no way of knowing if your solution is 100% reliable. The fundamental challenge here is that large language models remain impenetrable black boxes. No one, not even the creators of the model, has a full understanding of what they can do.
Eeep.
The article has what I think is a quite helpful suggestion; to wit:
There may be systems that should not be built at all until we have a robust solution.
What if we generalize beyond the issue of cyber security? What if we think about the smart software “fixing up” the problems in today’s zippy digitized world?
Rethink, go slow, and remember Russell's Paradox? Not a chance.
Stephen E Arnold, September 21, 2022