Enterprise Search, Knowledge Management, & Customer Service: Some of the Study Stuff Ups Evident?

October 27, 2014

One of my two or three readers sent me a link to “The 10 Stuff Ups We All Make When Interpreting Research.” The article walks through some common mistakes individuals make when “interpreting research.” I don’t agree with the “all” in the title.

This article arrived as I was reading a recent study about search. As an exercise on a surprisingly balmy Sunday afternoon in Kentucky, I jotted down the 10 “stuff ups” presented in the Interpreting Research article. Here they are in my words, paraphrased to sidestep plagiarism, copyright, and Google duplication finder issues:

  1. One study, not a series of studies. In short, an anomaly report.
  2. One person’s notion of what is significant may be irrelevant.
  3. Mixing up risk and the Statistics 101 notion of “number needed to treat” gets the cart before the horse.
  4. Trends may not be linear.
  5. Humans find what they want to find; that is, pre-existing bias or cooking the study.
  6. Ignore the basics and layer cake the jargon.
  7. Numbers often require context. Context in the form of quotes from one-on-one interviews requires numbers.
  8. Models and frameworks do not match reality; that is, a construct is not what is.
  9. Specific situations do matter.
  10. Inputs from colleagues may not identify certain study flaws.

To test the article’s premises, I turned to a study sent to me by a persona named Alisa Lipzen. Its title is “The State of Knowledge Management: 2014. Growing role & Value of Unified Search in Customer Service.” (If the link does not work for you, you will have to contact either of the sponsors, the Technology Services Industry Association or Coveo, an enterprise search vendor based in Canada.) You may have to pay for the report. My copy was free. Let’s do a quick pass through the document to see if it avoids the “stuff ups.”

First, the scope of the report is broad:

1. Knowledge management. Although I write a regular column for KMWorld, I must admit that I am not able to define exactly what this concept means. Like many information access buzzwords, the shotgun marriage of “knowledge” and “management” glues together two abstractions. In most usages, knowledge management refers to figuring out what a person “knows” and making that information available to others in an organization. After all, when a person quits, having access to that person’s “knowledge” has a value. But “knowledge” is as difficult to nail down as “management.” I suppose one knows it when one encounters it.

2. Unified search. The second subject is “unified search.” This is the idea that a person can use a single system to locate information germane to a query from a single search box. Unified suggests that widely disparate types of information are presented in a useful manner. For me, the telling point is that Google, arguably the best resourced information access company, has been unable to deliver unified search. Note that Google calls its goal “universal search.” In the 1980s, Fulcrum Technologies (Ottawa, Canada) offered a version of federated search. In 2014, Google requires that a user run a query across different silos of information; for example, if I require information about NGFW, I have to run the query across Google’s Web index, Google scholarly articles, Google videos, Google books, Google blogs, and Google news. This is not very universal. (A sketch of the fan-out-and-merge idea appears after this list.) Most “unified” search solutions are marketing razzle dazzle for financial, legal, technical, and other reasons. Therefore, organizations have to have different search systems.

3. Customer service. This is a popular bit of jargon. The meaning of customer service, for me, boils down to cost savings. Few companies have the appetite to pay for expensive humans to deal with the problems paying customers experience. Last week, I spent one hour on hold with an outfit called Wellcare. The insurance company’s automated system reassured me that my call was important. The call was never answered. What did I learn? Neither my call nor my status as a customer was important. Most information access systems applied to “customer service” are designed to drive the cost of support and service as low as possible.
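Since “federated” and “unified” get used loosely, here is a minimal sketch (mentioned in item two above) of the fan-out-and-merge pattern the marketing literature describes. It is a Python illustration under my own assumptions; the silo names, the fake scoring, and the function names are mine, not Google’s, Coveo’s, or the report’s.

```python
# Minimal sketch of "federated" search: fan a query out to several content
# silos, then merge the hits. Everything here is a hypothetical illustration.
from concurrent.futures import ThreadPoolExecutor

def search_silo(silo_name, query):
    """Stand-in for a call to one silo's own search API (web index,
    document repository, ticketing system, and so on)."""
    return [{"silo": silo_name,
             "title": f"{query} result {i} from {silo_name}",
             "score": 1.0 / (i + 1)} for i in range(2)]

def federated_search(query, silos):
    """Send the query to every silo in parallel, then merge by score."""
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda s: search_silo(s, query), silos))
    merged = [hit for hits in result_lists for hit in hits]
    # Naive merge: relevance scores from different engines are rarely
    # comparable, which is one reason "unified" result lists disappoint.
    return sorted(merged, key=lambda hit: hit["score"], reverse=True)

if __name__ == "__main__":
    for hit in federated_search("NGFW", ["web", "scholar", "videos", "news"]):
        print(hit["silo"], "-", hit["title"])
```

The hard part is not the fan out; it is making the scores, security rules, and metadata from different silos line up, which is where the legal, technical, and financial hurdles come into play.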

[Image: switchboard_thumb.png]

“Get rid of these expensive humans,” says the MBA. “I want my annual bonus.”

I was not familiar with the TSIA. What is its mission? According to the group’s Web site:

TSIA is organized around six major service disciplines that address the major service businesses found in a typical technology company.

Each service discipline has its own membership community led by a seasoned research executive. Additionally, each service discipline has the following:

In addition, we have a research practice on Service Technology that spans across all service discipline focus areas.

My take is that TSIA is a marketing-oriented organization for its paying members.

Now let’s look at some of the report’s key findings:

The people, process, and technology components of technology service knowledge management (KM) programs. This year’s survey examined core metrics and practices related to knowledge capture, sharing, and maintenance, as well as forward-looking elements such as video, crowd sourcing, and expertise management. KM is no longer just of interest to technical support and call centers. The survey was open to all TSIA disciplines, and 50% of the 400-plus responses were from groups other than support services, including 24% of responses from professional services organizations.

It is not clear if the sample is one that is “convenient” (based on membership and membership supplied contacts) or assembled according to the stodgy rules of Statistics 101.

The Executive Summary presents a projection for “Planned Spending 2014-2015.” As with Amazon’s financials, certain data are not presented. The percentages suggest that knowledge management and intelligent search (an undefined niche) will be flat except for “support services,” another undefined niche.

After spending, the survey data—according to TSIA—reveal that the survey respondents take quite different approaches to information access. Silo operations are characteristic of 79 percent of the sample. This begs the question about the value of “unified search.”

Not surprisingly, the sample suggests that established methods of providing information to customers are in favor. It is no surprise that a search function and a library of frequently asked questions are popular. More surprising is the surfacing of real time suggestions. I assume this is supposed to evoke the Google type search suggestions. The flagging of a decision tree approach to finding information was interesting. Does this suggest that the sample is skewed?

I found the data (again with the Amazon approach to presenting numbers) puzzling. Here’s the chart that gave me pause:

[Image: tsiachart_thumb.png]

Where is the sample or “n”? “Where are the numbers? Percentage of what?” I ask.

The tall blue and red columns show that “knowledgebase” searches are the cat’s pajamas in more than half of the companies in the sample. The baffling distinction between “federated” and “unified” search shows that, in the sample, these approaches are almost evenly divided among the outfits not searching a knowledgebase. What if a knowledgebase is an XML repository like those possible with MarkLogic’s data management system? This question raises a number of questions about the terminology used in the survey instrument. I know that I would be hard pressed to make distinctions if the wordage in this report were used to ask me questions. I would ask many questions about definitions. In my experience, individuals have quite a bit of trouble differentiating among the buzzwords used to describe information access. This problem is a natural consequence of the three metaphors used to explain the research.

I urge you to work through the survey report. Your close reading may reveal gaps in my understanding.

I do want to comment on the ROI or return on investment section. As I have written before, ROI is a financial measurement. You can get the type of definition bandied about at Booz, Allen & Hamilton when I worked at the firm:

Return on investment (ROI) is the ratio of a profit or loss made in a fiscal year expressed in terms of an investment. For example, if you invested $100 in a share of stock and its value rises to $110 by the end of the fiscal year, the return on the investment is a healthy 10%, assuming no dividends were paid.
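To make the arithmetic explicit, here is the business school calculation in a few lines of Python. The figures are simply the ones from the example above; nothing here comes from the TSIA report.

```python
# Business school ROI: gain (or loss) divided by the amount invested.
def roi(initial_investment, ending_value, dividends=0.0):
    """Return on investment for one fiscal year, as a fraction."""
    gain = (ending_value + dividends) - initial_investment
    return gain / initial_investment

# The example above: $100 invested, worth $110 at year end, no dividends.
print(f"ROI: {roi(100, 110):.0%}")  # -> ROI: 10%
```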

In the TSIA report, ROI is calculated using “nearly 200 operational, quality, and financial metrics.” Okay. Why not use the business school definition? Perhaps the data do not support the desired “assertion”?

In my view, converting an information access system to a financial return is pretty tough. I had this task when I did a study about the US government’s licensing of a now defunct search system. The available data did not map to any cost savings whatsoever. Information access was a cost, and there were no metrics which could be tied in a meaningful way to paybacks. I tried and tried. But when no money comes from search, the task is subjective. Google, on the other hand, can tie queries to ads and thus to revenue. Not so in the organizations with which I am familiar.

What does TSIA say about ROI? Here’s a snippet:

Overall increases in productivity contribute to multiple areas, from increased retention of formerly frustrated agents (turnover) to requirements for fewer agents during peak traffic times (capacity) to increases in first call resolution (FCR), and the time to solve even challenging issues (AHT). To calculate additional cost savings from improving knowledge management, identify measures accounting for time savings outside of those previously listed. These may be specific to your organization. As an example, agents may have more time to focus on value-added tasks instead of administrative tasks.
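The passage describes turning time savings into cost savings. Here is a back-of-the-envelope sketch of that conversion; every figure below (agent count, call volume, wage, and the handle-time reduction) is my own invention for illustration, not a number from the report.

```python
# Hypothetical conversion of an average-handle-time (AHT) reduction into
# annual dollars. All inputs are assumptions for illustration only.
agents = 50                     # support agents on staff
calls_per_agent_per_day = 40
working_days_per_year = 250
aht_minutes_saved = 1.5         # claimed reduction per call
loaded_cost_per_hour = 45.0     # fully loaded agent cost, in dollars

hours_saved = (agents * calls_per_agent_per_day * working_days_per_year
               * aht_minutes_saved) / 60.0
annual_savings = hours_saved * loaded_cost_per_hour

print(f"Hours saved per year: {hours_saved:,.0f}")        # 12,500
print(f"Implied annual savings: ${annual_savings:,.0f}")  # $562,500
```

The “savings” become real only if the freed hours translate into fewer agents or more resolved cases, which is exactly the dollar figure the report never supplies.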

Then this interesting comment with a reference to the study’s “partner”:

Some examples from Coveo customers include: 67% reduction in time to identify duplicate issues, and an average of 40% to 60% less time spent searching multiple systems or content repositories one at a time.

But where are the dollars? Where are the hard numbers?

The questions that flashed through my mind were:

  1. Was this an objective study or was it a ruse to present an argument that Coveo delivers what Google cannot do?
  2. Is this study one which would struggle to meet the wonky guidelines of the “stuff ups” Web write up?
  3. Why is there none of the data that would be required of a beginning student in Statistics 101?
  4. Why is a dollar-free ROI presented in these references: Productivity, self service case resolution, support center capacity, escalated incidents, first contact resolution, average handling time, and my personal favorite, human resource metrics?

Like many enterprise search-related surveys, the report falls short of the mark. The main point of the study, in my opinion, is:

Coveo can deliver information retrieval for customer support personnel and users.

This may be true, but the report is content marketing. Unlike the AIIM report which I wrote about here, this study focuses on one company, not a sweeping business sector.

So let’s look at the 10 points in the stuff up article and see how this search study fares:

  1. One study, not a series of studies. In short, an anomaly report. This apparently is a second study. Where are the comparative year-on-year data?
  2. One person’s notion of what is significant may be irrelevant. The TSIA study advances a single company’s point of view. Not quite a single person but definitely narrow.
  3. Mixing up risk and the Statistics 101 notion of “number needed to treat” gets the cart before the horse. The lack of numerical data deals a body blow to the study’s credibility in my view.
  4. Trends may not be linear. No trends are supported with hard data. The percentages suggest that in key areas like funding, the status quo persists.
  5. Humans find what they want to find; that is, pre-existing bias or cooking the study. I would suggest that this report goes to great lengths to justify the Coveo software for customer support.
  6. Ignore the basics and layer cake the jargon. The report’s jargon speaks for itself.
  7. Numbers often require context. Context in the form of quotes from one-on-one interviews requires numbers. The assertion is global. The data seem quite narrow. Despite that, where are the quotes and specific case examples? Their absence makes the report appear to float free.
  8. Models and frameworks do not match reality; that is, a construct is not what is. The survey instrument is not included, so I have no way to determine whether the questions elicit useful data about the reality of customer support information access. Potentially interesting information about the persistence of silos loses momentum without hard data.
  9. Specific situations do matter. I agree. The report is general but does not anchor “knowledge management,” “unified search,” or “services” in a helpful, informative manner.
  10. Inputs from colleagues may not identify certain study flaws. I believe several people worked on the report, but the document is attributed to organizations, not individuals who can be held accountable for the words, observations, and assertions.

For anyone looking for an information retrieval solution specifically for customer support, there are many choices.

Legal, technical, and financial hurdles create silos of content and tend to protect those silos regardless of ill-advised, quixotic efforts to remove them.

The report is a good example of enterprise search and content processing vendors struggling to make a financial case for their products.

Bottom line? The business school definition of ROI is the only ROI that makes sense to a business person. My hunch is that verifiable financial data about the payback value of search are not available or, if available, are not likely to support the vendors’ assertions about value. The core of the Hewlett Packard Autonomy dust-up pivots on this type of dollar payback.

Talking about value, knowledge management, unified search, or customer support does not make certain truths self-evident. Data do. The marketing of search does not equate to quantification of the value of information access. Google talks about search in terms of advertising revenue. Many search vendors talk about everything except payback dollars.

For this reason I am skeptical of many search-centric reports. Stuff ups are not confined to surveys. Stuff ups appear in most information about the value of information access.

Stephen E Arnold, October 26, 2014

