Neural Networks Finally Have Their Day
May 11, 2015
The Toronto Star offers a thoughtful piece about deep learning titled, “How a Toronto Professor’s Research Revolutionized Artificial Intelligence.” Professor Geoffrey Hinton was instrumental in the development of neural network-based AI long before the concept was popular. Lately, though, this “deep learning” approach has taken off, launching many a product, corporate division, and startup. Reporter Kate Allen reveals whom we can credit for leading neural networks through the shadows of doubt:
“Ask anyone in machine learning what kept neural network research alive and they will probably mention one or all of these three names: Geoffrey Hinton, fellow Canadian Yoshua Bengio and Yann LeCun, of Facebook and New York University.
“But if you ask these three people what kept neural network research alive, they are likely to cite CIFAR, the Canadian Institute for Advanced Research. The organization creates research programs shaped around ambitious topics. Its funding, drawn from both public and private sources, frees scientists to spend more time tackling those questions, and draws experts from different disciplines together to collaborate.”
Hooray for CIFAR! The detailed article describes what gives deep learning the edge, explains why “machine learning” is a better term than “AI”, and gives several examples of ways deep learning is being used today, including Hinton’s current work at Google and the University of Toronto. Allen also traces the history of the neural network from its conceptualization in 1958 by Frank Rosenblatt, through an era of skepticism, to its recent warm embrace by the AI field. I recommend interested parties check out the full article. We’re reminded:
“In 2006, Hinton and a PhD student, Ruslan Salakhutdinov, published two papers that demonstrated how very large neural networks, once too slow to be effective, could work much more quickly than before. The new nets had more layers of computation: they were ‘deep,’ hence the method’s rebranding as deep learning. And when researchers began throwing huge data sets at them, and combining them with new and powerful graphics processing units originally built for video games, the systems began beating traditional machine learning systems that had been tweaked for decades. Neural nets were back.”
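The “more layers of computation” Allen describes can be pictured with a toy forward pass: each layer multiplies its inputs by a weight matrix, adds a bias, and applies a nonlinearity, and a “deep” network simply stacks several such layers. The sketch below is a hypothetical illustration in plain Python (random weights, made-up dimensions), not code from the article or from Hinton’s research:

```python
import random

random.seed(0)

def relu(x):
    # nonlinearity applied between layers
    return [max(0.0, v) for v in x]

def dense(x, W, b):
    # one dense layer: W[j] holds the input weights for output unit j
    return [sum(xi * wji for xi, wji in zip(x, W[j])) + b[j]
            for j in range(len(W))]

def forward(x, layers):
    # a "deep" network is just dense layers stacked with nonlinearities
    for W, b in layers:
        x = relu(dense(x, W, b))
    return x

# three stacked layers: 4 inputs -> 8 -> 8 -> 2 outputs (arbitrary sizes)
dims = [4, 8, 8, 2]
layers = [([[random.uniform(-0.5, 0.5) for _ in range(n_in)]
            for _ in range(n_out)],
           [0.0] * n_out)
          for n_in, n_out in zip(dims, dims[1:])]

out = forward([1.0, 0.5, -0.2, 0.3], layers)
print(out)
```

Training such a network means adjusting those weights from data; the 2006 breakthrough and the later GPU speedups were about making that adjustment practical for networks with many layers and huge data sets.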
What detailed discussion of machine learning would be complete without a nod to concerns that we develop AI at our peril? Allen takes some time to sketch out both sides of that debate, and summarizes:
“Some in the field believe that artificial intelligence will augment, not replace: algorithms will free us from rote tasks like memorizing reams of legal precedents and allow us to pursue the higher-order thinking our massive brains are capable of. Others think the only tasks machines can’t do better are creative ones.”
I suppose the answers to those debates will present themselves eventually. Personally, I’m more excited than scared by the possibilities. How about you, dear reader?
Cynthia Murrell, May 11, 2015