Facebook AI Explainer

June 10, 2016

Facebook posted a partial explanation of its artificial intelligence system. You can review the document “Introducing DeepText: Facebook’s Text Understanding Engine” and decide whether Facebook or IBM is winning the smart software race. The Facebook document states:

In traditional NLP approaches, words are converted into a format that a computer algorithm can learn. The word “brother” might be assigned an integer ID such as 4598, while the word “bro” becomes another integer, like 986665. This representation requires each word to be seen with exact spellings in the training data to be understood. With deep learning, we can instead use “word embeddings,” a mathematical concept that preserves the semantic relationship among words. So, when calculated properly, we can see that the word embeddings of “brother” and “bro” are close in space. This type of representation allows us to capture the deeper semantic meaning of words. Using word embeddings, we can also understand the same semantics across multiple languages, despite differences in the surface form. As an example, for English and Spanish, “happy birthday” and “feliz cumpleaños” should be very close to each other in the common embedding space. By mapping words and phrases into a common embedding space, DeepText is capable of building models that are language-agnostic.
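The contrast Facebook draws between integer IDs and embeddings can be sketched in a few lines. The vectors below are hand-picked toys, not anything DeepText actually learns (real embeddings are trained from data and run to hundreds of dimensions), but they show why “brother” and “bro” can end up close in space while an unrelated word does not:

```python
import math

# Toy 3-dimensional "embeddings" chosen by hand for illustration only.
# A real system like DeepText learns these vectors from training data.
embeddings = {
    "brother":    [0.90, 0.80, 0.10],
    "bro":        [0.85, 0.75, 0.15],
    "carburetor": [0.10, 0.05, 0.95],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "brother" and "bro" point in nearly the same direction...
print(cosine_similarity(embeddings["brother"], embeddings["bro"]))
# ...while "brother" and "carburetor" do not.
print(cosine_similarity(embeddings["brother"], embeddings["carburetor"]))
```

With integer IDs (4598 vs. 986665), no such distance exists: two spellings are either identical or unrelated. In an embedding space, nearness is a number, which is also what lets “happy birthday” and “feliz cumpleaños” land close together when both languages are mapped into one common space.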

Due to Facebook’s grip on the 18 to 35 demographic, its approach may have more commercial impact than the methods in use at other firms. Just ask IBM Watson.

Stephen E Arnold, June 10, 2016
