AWS AI Improves Its Accuracy According to Amazon

January 31, 2020

An interesting bit of jargon creeps into “On Benchmark Data Set, Question-Answering System Halves Error Rate.” That word is “transfer.” Amazon, it seems, is trying to figure out how to reuse data, threshold settings, and workflow outputs.

Think about IBM’s Deep Blue defeating Garry Kasparov in 1997, or IBM Watson allegedly defeating Ken Jennings in 2011 without any help from post production or judicious video editing. Two IBM systems and zero “transfer,” or, in more Ivory Tower jargon, “transference.”

Humans learn via transfer. Artificial intelligence, despite marketers’ assurances, does not transfer very well. One painful and expensive fact of life which many venture funding outfits ignore is that most AI innovations start from ground zero for each new application of a particular AI technology mash-up.

Imagine if Deep Blue had been able to transfer its “learnings” to Watson. IBM might have avoided becoming a poster child for inept technology marketing. Watson is now a collection of software modules, but these do not transfer particularly well. Hand crafting, retraining, testing, tweaking, and tuning are required, and then must be reapplied as data drift causes “accuracy” scores to erode like a 1971 Vega.

Amazon suggests that it is making progress on the smart software transference challenge. The write up states:

Language models can be used to compute the probability of any given sequence (even discontinuous sequences) of words, which is useful in natural-language processing. The new language models are all built atop the Transformer neural architecture, which is particularly good at learning long-range dependencies among input data, such as the semantic and syntactic relationships between individual words of a sentence.
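To make the sequence-probability idea concrete, here is a minimal sketch, not Amazon’s code. Amazon’s models are BERT-style Transformers; this example leans on the publicly available GPT-2 checkpoint via the Hugging Face transformers package simply because a left-to-right model makes scoring a word sequence easy to show.

```python
# Minimal sketch: score a word sequence with a pretrained Transformer
# language model. GPT-2 stands in for the BERT-style models Amazon uses.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sequence_log_prob(text: str) -> float:
    """Return the summed log probability the model assigns to the token sequence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy over
        # the predicted tokens; multiplying by the number of predictions
        # recovers the summed negative log likelihood.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

print(sequence_log_prob("The cat sat on the mat."))
print(sequence_log_prob("Mat the on sat cat the."))  # typically scores lower
```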

DarkCyber has dubbed some of these efforts Bert and Ernie exercises, but that point of view is DarkCyber’s, not that of those with skin in the AI game.

Amazon adds:

Our approach uses transfer learning, in which a machine learning model pretrained on one task — here, word sequence prediction — is fine-tuned on another — here, answer selection. Our innovation is to introduce an intermediate step between the pre-training of the source model and its adaptation to new target domains.

Yikes! A type of AI learning. The Amazon approach is named Tanda (transfer and adapt), not Ernie, thankfully. Here’s a picture of how Tanda works:

[Image: Amazon’s diagram of the Tanda transfer-and-adapt sequence]

The write up reveals more about how the method functions.
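For those who want more than a picture, the transfer-then-adapt sequence might look roughly like the sketch below. This is an assumption-heavy illustration, not Amazon’s released code: the bert-base-uncased checkpoint, the fine_tune helper, and the toy question/answer triples are placeholders standing in for ASNQ-scale transfer data and a real target-domain corpus.

```python
# Sketch of transfer-then-adapt: fine-tune a pretrained Transformer on a
# large, general answer-selection corpus, then fine-tune the same weights
# again on a small target-domain corpus. All data below are placeholders.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

class QAPairs(Dataset):
    """Toy stand-in for a corpus of (question, candidate answer, label) triples."""
    def __init__(self, triples, tokenizer):
        self.enc = tokenizer([q for q, _, _ in triples],
                             [a for _, a, _ in triples],
                             truncation=True, padding=True)
        self.labels = [y for _, _, y in triples]
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

def fine_tune(model, dataset, output_dir):
    """One fine-tuning pass over labeled question/candidate pairs."""
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=1,
                             per_device_train_batch_size=8)
    Trainer(model=model, args=args, train_dataset=dataset).train()
    return model

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # label 1 = candidate answers the question

# Step 1 -- "transfer": a large, general answer-selection corpus.
transfer_data = QAPairs(
    [("When was Deep Blue built?", "IBM completed Deep Blue in the 1990s.", 1),
     ("When was Deep Blue built?", "Chess is a board game.", 0)], tokenizer)
model = fine_tune(model, transfer_data, "checkpoints/transfer")

# Step 2 -- "adapt": a small target-domain corpus (WikiQA, TREC-QA, or a
# customer's own data).
adapt_data = QAPairs(
    [("What does Alexa run on?", "Alexa runs on AWS infrastructure.", 1),
     ("What does Alexa run on?", "The weather is pleasant today.", 0)], tokenizer)
model = fine_tune(model, adapt_data, "checkpoints/adapt")
```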

The key part of the write up, in DarkCyber’s opinion, is the “accuracy” data; to wit:

On WikiQA and TREC-QA, our system’s MAP was 92% and 94.3%, respectively, a significant improvement over the previous records of 83.4% and 87.5%. MRR for our system was 93.3% and 97.4%, up from 84.8% and 94%, respectively.
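For readers who do not juggle retrieval metrics daily: MAP averages precision over the ranks where correct answers appear, and MRR averages one over the rank of the first correct answer. A small sketch with made-up rankings shows the arithmetic:

```python
# MAP and MRR for ranked candidate answers. Each inner list holds 1 for a
# correct sentence and 0 otherwise, already in the system's ranked order.
# The rankings are invented for illustration.

def average_precision(relevance):
    """Precision averaged over the ranks where a correct answer appears."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0

def reciprocal_rank(relevance):
    """1 / rank of the first correct answer, or 0 if none is correct."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

ranked_runs = [[0, 1, 0, 1], [1, 0, 0, 0], [0, 0, 1, 0]]
print("MAP:", sum(average_precision(r) for r in ranked_runs) / len(ranked_runs))
print("MRR:", sum(reciprocal_rank(r) for r in ranked_runs) / len(ranked_runs))
```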

If true, Amazon has handed a problem to Google, Microsoft, and the other outfits working to reduce the cost of training machine learning systems while delivering many wonderful services.

Most smart systems are fortunate to hit 85 percent accuracy in carefully controlled lab settings. Amazon is nosing into an accuracy range few humans can consistently deliver when indexing, classifying, or determining whether a picture that looks like a dog is actually a dog.

DarkCyber generally doubts data produced by a single research team. That rule holds for these data. Since the author of the report works on Alexa search, maybe Alexa will be able to answer this question, “Will Amazon overturn Microsoft’s JEDI contract award?”

Jargon is one thing. Real world examples are another.

Stephen E Arnold, January 31, 2020
