The TREC 2011 Results and Predictive Whatevers

July 20, 2012

Law.com reports in “Technology-Assisted Review Boosted in TREC 2011 Results” that technology-assisted review may be capable of claiming predictive coding’s title. The TREC Legal Track is an annual, government-sponsored project (the 2012 edition was canceled) that examines document review methods. The 2011 TREC results speak in favor of technology-assisted review, but the method may still have a way to go:

“As such, ‘There is still plenty of room for improvement in the efficiency and effectiveness of technology-assisted review efforts, and, in particular, the accuracy of intra-review recall estimation tools, so as to support a reasonable decision that ‘enough is enough’ and to declare the review complete. Commensurate with improvements in review efficiency and effectiveness is the need for improved external evaluation methodologies,’ the report states.”
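The “recall estimation” the report worries about comes down to simple arithmetic: of all the responsive documents in the collection, how many has the review actually found? Here is a minimal sketch of one common sample-based approach. This is our illustration, not code or methodology from the TREC report, and the names are ours:

    import random

    def estimate_recall(found_relevant, unreviewed, judge, sample_size=500):
        """Estimate review recall by sampling the unreviewed pile.

        found_relevant: count of responsive documents already identified
        unreviewed: list of documents not yet reviewed
        judge: a human reviewer's yes/no call on a sampled document
        """
        if not unreviewed:
            return 1.0
        sample = random.sample(unreviewed, min(sample_size, len(unreviewed)))
        hits = sum(1 for doc in sample if judge(doc))
        # Extrapolate the sample's responsiveness rate to the whole pile.
        estimated_missed = hits / len(sample) * len(unreviewed)
        total = found_relevant + estimated_missed
        return found_relevant / total if total else 0.0

Small samples and inconsistent human judgments can swing such an estimate widely, which is presumably why the report flags the accuracy of these tools as an open problem.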

The 2011 TREC asked participants to respond to three document review requests, but unlike past years the rules were more demanding: participants had to rank the documents by likely responsiveness, not merely flag which ones were responsive. The extra requirement meant that researchers could test hypothetical situations, such as stopping a review at different points in the ranking (a sketch of the idea appears after the quote below), but there were some downsides:

“TREC 2011 had its share of controversy. ‘Some participants may have conducted an all-out effort to achieve the best possible results, while others may have conducted experiments to illuminate selected aspects of document review technology. … Efficacy must be interpreted in light of effort,’ the report authors wrote. They noted that six teams devoted 10 or fewer hours for document review during individual rounds, two took 20 hours, one used 48 hours, and one, Recommind, invested 150 hours in one round and 500 in another.”
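As an aside, the ranking requirement is what makes the hypothetical testing possible: a ranked submission lets evaluators ask how a review would have fared had it stopped after the top k documents, without rerunning anything. A minimal sketch, again our illustration with made-up names, assuming a ranked list of document ids and a set of ids assessed as responsive:

    def recall_at_cutoffs(ranked_docs, responsive, cutoffs=(1000, 5000, 10000)):
        """Recall a review would achieve if it stopped at each cutoff.

        ranked_docs: document ids, ordered most to least likely responsive
        responsive: nonempty set of ids assessed as responsive
        """
        results = {}
        for k in cutoffs:
            found = sum(1 for doc in ranked_docs[:k] if doc in responsive)
            results[k] = found / len(responsive)
        return results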

We noticed this passage in the write-up as well:

“‘It is inappropriate – and forbidden by the TREC participation agreement – to claim that the results presented here show that one participant’s system or approach is generally better than another’s. It is also inappropriate to compare the results of TREC 2011 with the results of past TREC Legal Track exercises, as the test conditions as well as the particular techniques and tools employed by the participating teams are not directly comparable. One TREC 2011 Legal Track participant was barred from future participation in TREC for advertising such invalid comparisons,’ the report states.”

TREC is sensitive to participants who use the data for commercial purposes. We wonder which vendor allegedly stepped over the line. We also wonder if TREC is breaking out of the slump into which traditional indexing seems to have relaxed. Is “predictive” the future of search? We are not sure about the TREC results. We do have an opinion, however. Predictive works in certain situations. For others, there are other, more reliable tools. We also believe that there is a role for humans, particularly when there is a risk of an algorithm going off the rails. A goof in placing an ad on a Web page is one thing. An error predicting more significant events? Well, we are more cautious. Marketers are afoot. We prefer the pragmatic approach of outfits like Ikanow, and we avoid the high fliers, whom we will not name.

Stephen E Arnold, July 20, 2012

Sponsored by Polyspot

