AI Neural Networks: A Mathy Explanation of Close Enough for Horseshoes
March 31, 2022
If you are comfortable with math, you will find the information in “The Difficulty of Computing Stable and Accurate Neural Networks: On the Barriers of Deep Learning and Smale’s 18th Problem” interesting. If you have a snapshot of Steve Smale, a fellow who interacted with one of my relatives, you may be familiar with his problems. Smale’s contribution to Vladimir Arnold’s request was supposed to be the 21st-century equivalent of Hilbert’s problems. The cited paper focuses on problem 18. The idea is to address the limits of computational intelligence.
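To make the paper’s worry concrete, here is a minimal sketch (my own illustration, not anything from the paper) of the instability the authors study: a classifier can be accurate on its inputs yet flip its answer under a perturbation far too small for a human to notice. The weights, inputs, and perturbation below are made up for the demonstration.

```python
import numpy as np

def classify(x, w):
    """Toy one-layer 'network': the sign of a linear score."""
    return 1 if float(w @ x) >= 0.0 else -1

w = np.array([1.0, -1.0])          # fixed, hypothetical weights
x = np.array([0.50005, 0.5])       # an input sitting close to the decision boundary
delta = np.array([-0.0002, 0.0])   # a perturbation of size 2e-4

print(classify(x, w))              # -> 1
print(classify(x + delta, w))      # -> -1: a tiny nudge flips the label
```

The paper’s point, roughly, is that for some problems no algorithm can construct a network that avoids this kind of flip, even when a stable and accurate network provably exists.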
If you are into SAIL, Snorkel, and Google’s efficiency approach based on oodles of data and synthetic data, you will find Smale’s 18th problem a bit annoying. (The same, I have been told, was a characteristic of my relative who once labored for the somewhat quirky Andrey Kolmogorov.) Smale’s sniveling 18th is not an issue for the Googley DeepMind.
If you are not familiar with this group of people and their mathy concerns, you probably will find an NCAA basketball game a more enjoyable way to spend an hour or two.
Here’s my summary: Good enough is okay. For gooder enough, human (for now) interventions may be required. What if the outputs are off the mark? Close enough for horseshoes, and it keeps costs down.
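If you want to see what “human (for now) interventions” might look like in practice, here is a minimal sketch, assuming a hypothetical model scoring function: let the machine answer when it is confident, and push the shaky cases to a person. Everything here (the threshold, the toy scorer) is invented for illustration.

```python
from typing import Callable, List, Tuple

def route_prediction(
    predict_proba: Callable[[List[float]], Tuple[str, float]],
    item: List[float],
    threshold: float = 0.9,
) -> str:
    """Return the model's label when confidence clears the bar,
    otherwise flag the item for human review."""
    label, confidence = predict_proba(item)
    if confidence >= threshold:
        return label                   # close enough for horseshoes
    return "NEEDS_HUMAN_REVIEW"        # the 'gooder enough' path

# Hypothetical stand-in for a real model's scoring function.
def toy_predict_proba(item: List[float]) -> Tuple[str, float]:
    score = sum(item) / (len(item) or 1)
    return ("relevant", score) if score >= 0.5 else ("irrelevant", 1 - score)

print(route_prediction(toy_predict_proba, [0.99, 0.97]))  # confident -> label
print(route_prediction(toy_predict_proba, [0.55, 0.52]))  # shaky -> human
```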
Stephen E Arnold, March 31, 2022