The Secret AI Sauce: Blending Recipes

September 13, 2019

What is next for AI? According to PC Magazine, a union of sorts. Their headline declares, “The AI Breakthrough Will Require Researchers Burying Their Hatchets.” Though the piece may overstate the “rivalry” between rule-based AI (symbolism) and neural networks (connectionism), it presents an interesting perspective. Writer Ben Dickson begins with a little background: symbolic AI was the dominant approach until 2012, when a breakthrough at the University of Toronto made neural-network AIs much more practical. Since then, he asserts, the field has been abuzz about that approach, leaving symbolism in the dust. Now, though, Dickson writes:

“Seven years into the deep-learning revolution, we’ve seen that deep learning is not a perfect solution and has distinct weaknesses that limit its applications. One group of researchers at MIT and IBM believe the next breakthrough in AI might come from putting an end to the rivalry between symbolic AI and neural networks. In a paper presented at the International Conference on Learning Representations (ICLR) earlier this month, these researchers presented a concept called Neuro-Symbolic Concept Learner, which brings symbolic AI and neural networks together. This hybrid approach can create AI that is more flexible than the traditional models and can solve problems that neither symbolic AI nor neural networks can solve on their own.”

The article delves a bit into the limitations of deep learning and how a return to some symbolic AI tools can help, so navigate to the write-up for those details. Dickson presents this example of combining the two approaches:

“The MIT and IBM researchers used the Neuro-Symbolic Concept Learner (NSCL) to solve VQA [visual question-answering] problems. The NSCL uses neural networks to process the image in the VQA problem and then to transform it into a tabular representation of the objects it contains. Next, it uses another neural network to parse the question and transform it into a symbolic AI program that can run on the table of information produced in the previous step.”

We see the logic here. Researchers tested NSCL on an image dataset called CLEVR and achieved 99.8 percent accuracy with much less data than required to train a stand-alone neural network to do the same things. IBM’s David Cox reports that incorporating symbolism also makes it much easier to see what the AI is doing under the hood. Though, as Dickson points out, voices on each side have spoken out against the other, the way forward may be to tap into the strengths of each approach. Seems logical.
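The quoted pipeline can be sketched as a toy program. Note the hedge: everything below is a hypothetical stand-in, not code from the MIT/IBM paper. The hard-coded scene table plays the role of the perception network's output, and the fixed question-to-program lookup plays the role of the parser network; only the symbolic executor over the table reflects the actual division of labor being described.

```python
# Toy sketch of an NSCL-style pipeline (illustrative only; not the paper's code).

# Step 1 (stand-in for the perception network): the image has already been
# turned into a tabular representation of its objects.
scene_table = [
    {"shape": "cube", "color": "red", "size": "large"},
    {"shape": "sphere", "color": "blue", "size": "small"},
    {"shape": "cube", "color": "blue", "size": "small"},
]

# Step 2 (stand-in for the parser network): map a question to a symbolic
# program, here just a list of (operation, args...) tuples.
def parse_question(question):
    programs = {
        "How many blue objects are there?": [("filter", "color", "blue"), ("count",)],
        "What shape is the large object?": [("filter", "size", "large"), ("query", "shape")],
    }
    return programs[question]

# Step 3: a symbolic executor runs the program against the object table.
def execute(program, table):
    objs = table
    for op in program:
        if op[0] == "filter":          # keep objects whose attribute matches
            _, attr, value = op
            objs = [o for o in objs if o[attr] == value]
        elif op[0] == "count":         # terminal op: how many objects remain
            return len(objs)
        elif op[0] == "query":         # terminal op: read an attribute off
            _, attr = op               # the single remaining object
            return objs[0][attr]
    return objs

print(execute(parse_question("How many blue objects are there?"), scene_table))  # 2
print(execute(parse_question("What shape is the large object?"), scene_table))   # cube
```

The point of the hybrid design shows up even in this toy: the executor's steps are inspectable symbolic operations, which is consistent with David Cox's remark about seeing what the AI is doing under the hood.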

Cynthia Murrell, September 13, 2019
