Artificial Intelligence Competition Reveals Need for More Learning

March 3, 2016

The capabilities of robots are growing but, on the whole, have not yet surpassed a middle school education. The article "Why AI Can Still Hardly Pass an Eighth Grade Science Test" from Motherboard shares insights into the current state of artificial intelligence as revealed in a recent AI competition. Chaim Linhart, a researcher from the Israeli startup TaKaDu, received the first-place prize of $50,000. However, the winner scored only 59.3 percent on a series of tasks tougher than the conventional Turing Test. The article describes how the winners utilized machine learning models:

“Tafjord explained that all three top teams relied on search-style machine learning models: they essentially found ways to search massive test corpora for the answers. Popular text sources included dumps of Wikipedia, open-source textbooks, and online flashcards intended for studying purposes. These models have anywhere between 50 to 1,000 different “features” to help solve the problem—a simple feature could look at something like how often a question and answer appear together in the text corpus, or how close words from the question and answer appear.”
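To make the quoted description concrete, here is a minimal, hypothetical sketch of one such "feature": counting how often words from a question appear near words from a candidate answer in a text corpus. The function name, tokenization, and window size are all illustrative assumptions, not the competitors' actual code.

```python
import re

def cooccurrence_feature(question, answer, corpus, window=10):
    """Hypothetical co-occurrence feature: count how often a word from
    the question appears within `window` tokens of a word from the
    candidate answer in the corpus. Higher scores suggest the answer
    is discussed alongside the question's topic."""
    tokenize = lambda s: re.findall(r"[a-z]+", s.lower())
    q_words = set(tokenize(question))
    a_words = set(tokenize(answer))
    tokens = tokenize(corpus)
    hits = 0
    for i, tok in enumerate(tokens):
        if tok in q_words:
            # Look at a window of tokens around each question-word match.
            nearby = tokens[max(0, i - window): i + window + 1]
            hits += sum(1 for t in nearby if t in a_words)
    return hits
```

A real system would combine dozens to hundreds of such signals (the article cites 50 to 1,000) as inputs to a trained ranking model, rather than relying on any single count.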

The second- and third-place finishers scored just about one percent behind Linhart's entry. This may suggest a competitive market when the time comes. Or perhaps, as the article suggests, nothing very groundbreaking has been developed quite yet. Will search-based machine learning models continue to be expanded and built upon, or will another paradigm be necessary for AI to earn an A?

Megan Feil, March 3, 2016

Sponsored by, publisher of the CyberOSINT monograph

