Enterprise Search: Still Crazy after All These Years

November 20, 2020

This is not old wine in new bottles. This is wine in those weird clay jars with the nifty moniker “amphora” filled with Oak Leaf Vineyards Sauvignon Blanc White Wine. Cough, cough.

CMSWire gets it correct when it declares, “Scanning and Selecting Enterprise Search Results: Not as Easy as it Looks.” The article doesn’t even approach the formation of a query (finding the right wording, then tweaking filters and facets to produce a manageable list). Here we are only looking at the next step. Though the task seems simple on its surface—scan a list of results and select the most relevant ones—writer Martin White explains why it is not so straightforward.

First is scanning results. Users’ perceptual speed differs, so for some folks (like those who are dyslexic, for example) the process can be so tedious as to make searching pointless. White tells us that inconvenient fact is often overlooked in discussions of search functionality. Also under-considered is the issue of snippet length. A bit of research has been performed, but it involved web pages, which are more easily scanned and assessed than content found in enterprise databases. Those documents are often several hundred pages long, so ranking algorithms have trouble picking out a helpful snippet. Some platforms serve up a text sequence that contains the query term, others create computer-generated summaries of documents, and others reproduce the first few lines of each document. Each of these approaches is imperfect. Still others produce a thumbnail of a whole page that contains the search term, and that probably helps many users. However, there are accessibility problems with that method.
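To see why the query-term approach can misfire on long documents, here is a minimal sketch of that technique (an illustration only, not any vendor’s actual algorithm): grab a window of text around the first match, and fall back to the document’s opening lines when the term is absent, which is exactly where the snippet stops being informative.

```python
def make_snippet(text: str, query: str, width: int = 80) -> str:
    """Return a short snippet centered on the first occurrence of query.

    Falls back to the opening of the document when the term is not found,
    mimicking the 'first few lines' strategy some platforms use.
    """
    idx = text.lower().find(query.lower())
    if idx == -1:
        # Fallback: reproduce the start of the document.
        return text[: 2 * width].strip() + "..."
    start = max(0, idx - width)
    end = min(len(text), idx + len(query) + width)
    prefix = "..." if start > 0 else ""
    suffix = "..." if end < len(text) else ""
    return prefix + text[start:end].strip() + suffix
```

Note the weakness White points at: in a several-hundred-page document, the first match may sit in a footnote or boilerplate, so the window around it tells the user little about relevance.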

White concludes:

“We know from recent research that people may make different decisions from the information they perceive initially as relevant based on their expertise. Equally, most search metrics are based around the notional relevance of the results being presented in response to a query. If the true value of relevance cannot be well judged from the snippet, that calls any metrics associated with query performance (especially precision) into question.

“There are no easy solutions to the issues raised in this column. In the quest for achieving an acceptable user experience the points to consider are:

* Are the techniques used by the search application to create snippets appropriate to the types of content being searched?

* Can the format of snippets be customized by the user?

* How easy is it to scan and assess results from a federated search?

“In the final analysis, it doesn’t matter how sophisticated the search technology is (in terms of semantic analysis, etc.). What matters is if the user can make an informed judgment of which piece of content in the results serves their information requirement, reinforces their trust in the application and maintains the highest possible level of overall search satisfaction.”

Sigh. It seems the more developers work on enterprise search, the more complicated it becomes to operate effectively. The field has been at it for 50 years and is still trying to deliver something useful. Still crazy after all these years, too.

PS. Our esteemed check writer (Stephen E Arnold) wrote a book about enterprise search with the author of the source document. No wonder this essay seemed weirdly familiar. I had to proofread what turned out to be prose that made the Oak Leaf stuff welcome at the end of an editing day. Cough, cough, eeep. 

Cynthia Murrell, November 20, 2020

