DeepMind Studies Math
June 27, 2019
It’s like magic! ExtremeTech reports, “Google Fed a Language Algorithm Math Equations. It Learned How to Solve New Ones.” While Google’s DeepMind is, indeed, used as a language AI, its neural-network approach enables it to perform myriad tasks, like beating humans at games from Go to Capture the Flag. Writer Adam Dachis describes how researchers taught DeepMind to teach itself math:
“For training data, DeepMind received a series of equations along with their solutions—like a math textbook, only without any explanation of how those solutions can be reached. Google then created a modular system to procedurally generate new equations to solve, with a controllable level of difficulty, and instructed the AI to provide answers in any form. Without any structure, DeepMind had to intuit how to solve new equations solely based on seeing a limited number of completed examples. Challenging existing deep learning algorithms with modular math presents a very difficult challenge to an AI, and existing neural network models performed at relatively similar levels of accuracy. The best-performing model, known as Transformer, managed to provide correct solutions 50 percent of the time, and it was designed for the purpose of natural language understanding—not math. When only judging Transformer on its ability to answer questions that utilized numbers seen in the training data, its accuracy shot up to 76 percent.”
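To make that setup concrete, here is a minimal Python sketch of what procedurally generated question-and-answer training pairs with a difficulty knob might look like. The generate_pair function and its difficulty parameter are illustrative assumptions on our part, not DeepMind’s actual modular system, which covers far more than basic arithmetic:

```python
import operator
import random

# Hypothetical operator table for a toy question generator.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def generate_pair(difficulty: int, rng: random.Random):
    """Return one (question, answer) string pair.

    Higher difficulty means larger operands. Illustrative only --
    not DeepMind's actual generation code.
    """
    bound = 10 ** difficulty  # difficulty 1 -> operands in [-10, 10], etc.
    a = rng.randint(-bound, bound)
    b = rng.randint(-bound, bound)
    symbol = rng.choice(list(OPS))
    question = f"What is {a} {symbol} {b}?"
    answer = str(OPS[symbol](a, b))
    return question, answer

rng = random.Random(0)
for _ in range(3):
    q, ans = generate_pair(difficulty=2, rng=rng)
    print(q, "->", ans)  # prints three question -> answer pairs
```

The key property such a generator shares with the experiment described above is that the model only ever sees questions and final answers, never worked solutions, while the difficulty can be dialed up at will.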
Furthermore, Dachis writes, DeepMind’s approach to math suggests a solution to a persistent problem facing anyone who would program computers to do math: while our mathematics is built on a base-10 system, software “thinks” in binary. The article goes into detail, with illustrations, about why this is such a headache. See the write-up for those details, but here is the upshot: computers cannot represent every possible number on the number line, so they rely on strategic rounding to get as close as they can. Usually this works out fine, but on occasion it produces a significant rounding error. Dachis hopes analysis of the Transformer language model will point the way toward greater accuracy, through both changes to the algorithm and new training data. Perhaps.
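The classic demonstration of that rounding problem takes only a few lines of Python: 0.1 and 0.2 have no exact binary representation, so their sum misses 0.3 by a tiny margin.

```python
# Binary floating point cannot represent 0.1 exactly, so errors creep in.
print(0.1 + 0.2)         # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)  # False

# Standard workarounds: decimal arithmetic, or tolerant comparison.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 exactly

import math
print(math.isclose(0.1 + 0.2, 0.3))     # True
```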
Cynthia Murrell, June 27, 2019