Is Customer Support a Revenue Winner for Search Vendors?

February 26, 2011

In a word, “Maybe.” Basic search is now widely available at low or no cost.

InQuira has been a player in customer support for a number of years. The big dogs in customer support are outfits like RightNow, Pega, and a backpack full of offshore outfits. In the last couple of weeks, we have snagged news releases that suggest search vendors are moving into the customer support business.

Two firms have generated somewhat similar news releases. Coveo, based in Canada, was covered in Search CRM in a story titled “2011 Customer Service Trends: The Mobile Revolution.” The passage that caught our attention was:

The most sophisticated level of mobile enablement includes native applications, such as iPhone applications available from Apple’s App Store, which have been tested and approved by the device manufacturer. Not only do these applications offer the highest level of usability, they allow integration with other device applications. For example, Coveo’s mobile interface for the company’s Customer Information Access Solutions allows you to take action on items in a list of search returns, such as reply to an email or add a comment to a Salesforce.com incident. Like any hot technology trend, when investing in mobile enablement it is important to prioritize projects based on potential return on investment, not “cool” factor.

Okay, mobile for customer support.

Then we saw a few days later “Vivisimo Releases New Customer Experience Optimization Solution” in Destination CRM. Originally a vendor of on-the-fly clustering, Vivisimo has become a full service content processing firm specializing in “information optimization.” The passage that caught our attention was:

Vivisimo has begun to address the needs of these customer-facing professionals with the development of its Customer Experience Optimization (CXO) solution, which gives customer service representatives and account managers quick access to all the information about a customer, no matter where that information is housed and managed—inside or outside a company’s systems, and regardless of the source or type. The company’s products are a hybrid of enterprise search, text-based search, and business intelligence solutions. CXO also targets the $1.4 trillion problem of lost worker productivity fueled by employees losing time looking for information. “All content comes through a single search box,” Calderwood says, “which reduces the amount of time to find information.” CXO works with an enterprise search platform that indexes unstructured data, and a display mechanism that uses analytics to find the data. It sits on top of all the systems and applications a company can have—even hosted applications—and pulls data from them all. It can sync up with major systems from Remedy, Siebel, SAP, Oracle, Microsoft, Salesforce.com, and many others.
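The “single search box” idea in that passage is essentially federated search: fan one query out to many back-end systems and merge the hits into one ranked list. A minimal sketch of the concept, with toy connector functions standing in for the Remedy, Siebel, or Salesforce.com integrations the article names (this is my illustration, not Vivisimo’s actual API):

```python
def federated_search(query, connectors):
    """Fan one query out to several back-end systems and merge the hits
    into a single ranked list. Hypothetical sketch, not Vivisimo's code."""
    hits = []
    for source, search_fn in connectors.items():
        for doc, score in search_fn(query):
            hits.append({"source": source, "doc": doc, "score": score})
    # One result list, highest-scoring items first, regardless of origin
    return sorted(hits, key=lambda h: h["score"], reverse=True)

# Toy connectors standing in for real CRM and email indexes
connectors = {
    "crm":   lambda q: [("incident 1042", 0.91)],
    "email": lambda q: [("message re: renewal", 0.74)],
}
print(federated_search("acme renewal", connectors)[0]["source"])  # crm
```

The hard part in practice is not the fan-out but normalizing relevance scores across systems that rank very differently.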

So, customer support and customer relationship management it is.


Promises are easy to make and sometimes difficult to keep. Source: http://dwellingintheword.wordpress.com/2009/12/29/172-numbers-30-and-31/

I have documented the changes that search and content processing companies have made in the last year. There have been significant executive changes at Lucid Imagination, MarkLogic, and Sinequa. Companies like Attensity and JackBe have shifted from a singular focus on serving a specific business sector to a broader commercial market. Brainware is pushing into document processing and medical information. Recommind has moved from eDiscovery into enterprise search. Palantir, the somewhat interesting visualization and analytics operation, is pushing into financial services, not just government intelligence sectors. There are numerous examples of search vendors looking for revenue love in various market sectors.

So what?

I see four factors influencing search and content processing vendors. I am putting the finishing touches on a “landscape report” in conjunction with Pandia.com about enterprise search. I dipped into the reference material for that study and noted these points:

  1. Repositioning is becoming standard operating procedure for most search and content processing vendors. Even giants like Google are trying to find ways to lash their indexing technology to new words in hopes of increasing revenue. So wordsmithing is the order of the day. Do these firms have technology that will deliver on the repositioned capability? I am not sure, but I have ample evidence that plain old search is now a commodity. Search does not generate much excitement among some organizations.
  2. The niches getting attention—customer support, marketers interested in social content, and business intelligence—are themselves in flux. The purpose of customer support is to reduce costs, not put me in touch with an expert who can answer my product question. The social content bandwagon is speeding along, but it is unclear if “social media” is useful across a wide swath of business types. Consumer products, yes. Specialty metals, not so much.
  3. A “herd” mentality seems to be operating. Search vendors who once chased “one size fits all” buyers now look at niches. The problem is that some niches like eDiscovery and customer support have quite particular requirements. Consultative selling Endeca-style may be needed, but few search vendors have as many MBA types as Endeca and a handful of other firms. Engineers are not so good at MBA-style tailoring, but with staff additions the gap can be closed, just not overnight. Thus, the herd charges into a sector, but there may not be enough grazing to feed everyone.
  4. Significant marketing forces are now at work. You have heard of Watson, I presume. When a company like IBM pushes into search and content processing with a consumer assault, other vendors have to differentiate themselves. Google and Microsoft are also marketing their corporate hearts into the 150-beats-per-minute range. That type of noise forces smaller vendors to amp up their efforts. The result is the type of shape shifting that made the liquid metal terminator so fascinating. But that was a motion picture. Selling information retrieval is real life.

I am confident that the smaller vendors of search and content processing will be moving through a repositioning cycle. The problem for some firms is that their technology is, at the end of the day, roughly equivalent to Lucene/Solr. This means that unless higher value solutions can be delivered, an open source solution may be good enough. Simply saying that a search and retrieval system can deliver eDiscovery, customer support, business intelligence, medical fraud detection, or knowledge management may not be enough to generate needed revenue.

In fact, I think the hunt for revenue is driving the repositioning. Basic search has crumbled away as a money maker. But keyword retrieval backed with some categorization is not what makes a customer support solution or one of the other “search positioning plays” work. Each of these niches has specific needs and incumbents who are going to fight back.

Enterprise search and its many variants remain a fascinating niche to monitor. There are changes afoot which are likely to make the known outfits sweat bullets in an effort to find a way to break through the revenue ceilings that seem to be imposed on many vendors of information retrieval technology. Even Google has a challenge, and it has lots of money and smart people. If Google can’t get off its one trick pony, what does that imply for search vendors with fewer resources?

It is easy to say one has a solution. It is quite another to deliver that solution to an organization with a very real, very large, and very significant problem.

Stephen E Arnold, February 26, 2011

What Is a Fresh Start, Alex?

February 26, 2011

With the mounting anticipation of the man versus machine episode of Jeopardy! set to air on February 14, 15, and 16, 2011, it is hard to ignore the buzz over Watson.  If you’ve been locked in a closet for the last month, Watson is IBM’s supercomputing experiment in AI.  Recent articles in The Pittsburgh Post-Gazette and USA Today can bring you up to speed if necessary.  In previous weeks we covered Watson’s win in the game’s practice round.  But trivialities aside, what does Watson actually mean for IBM?

Well, Watson won.

The head gander in Harrod’s Creek maintains that IBM is pulling a PR stunt, considering the company’s long history of work in the search field without ever impacting the market in a significant way.  Omnifind 9.1 for Lucene has not been met with much fanfare, at least not here at Beyond Search, largely due to its convoluted web of features (or corresponding fixes) and lack of support.

Yes, it is all happening on a game show, so the possibility of rigging exists.  But how would introducing a fraud over national airwaves benefit IBM or what appears to be a quest to remind the public of its ingenuity?  Watson performs so well due to refined natural language processing (NLP) and question answering (QA) technology, two facets of search that are likely to be important players in the future.  So if all goes according to plan, rather than typing a query into a search engine and waiting for a list of results out of which you must dig your own answer, the quandary will be resolved automatically.  That is the claim of Watson’s power: it accurately plucks answers out of information stores, and the range of applications is huge.  This could be the next step in search, and IBM could once again be a great innovator in the foreground.  Even though IBM processors are in nearly every gadget on the market, it’s been a while since IBM has had any real recognition.  That is why it does not surprise me that they chose to roll out their new tech on a prime-time television show, making advanced technology more palatable and memorable to the average consumer.

Perhaps I am being naïve or am too huge a fan of the science fiction novel, but I can’t help but be in Watson’s corner.  Hey, Arthur C. Clarke got satellite communication right; maybe HAL 9000 is on its way.

Now Watson is headed to health care. Stay tuned.

Sarah Rogers, February 26, 2011

Freebie

Google and Search Tweaks

February 25, 2011

Chatter blizzard! There is a flurry of commentary about Google’s change to cope with outfits that generate content to attract traffic, get a high Google ranking, and deliver information to users! You can read the Google explanation in “Finding More High-Quality Sites in Search” and learn about the tweaks. I found this passage interesting:

We can’t make a major improvement without affecting rankings for many sites. It has to be that some sites will go up and some will go down. Google depends on the high-quality content created by wonderful websites around the world, and we do have a responsibility to encourage a healthy web ecosystem. Therefore, it is important for high-quality sites to be rewarded, and that’s exactly what this change does.

Google faces increasing scrutiny for its display of content from some European Web sites. In fact, one of the companies affected has filed an antitrust complaint against Google. You can read about the 1PlusV matter and the legal information site EJustice at this link (at least for a while; news has a tendency to disappear these days).


Source: http://www.mentalamusement.com/our%20store/poker/casino_accessories.htm

Why did I find this passage interesting?

Well, it seems that when Google makes a fix, some sites go up or down in the results list. Interesting, because as I understand the 1PlusV issue, the site arbitrarily disappeared and then reappeared. On one hand, human intervention doesn’t work very well. On the other, if 1PlusV is correct, human intervention does work pretty well.

Which is it? An algorithm that adapts, or a human or two doing their thing, independently or as the fingers of a committee?

I don’t know. My interest in how Google indexed Web sites diminished when I realized that Google results were deteriorating over the last few years. Now my queries are fairly specialized, and most of the information I need appears in third party sources. Google’s index, for me, is useful, but it is now just another click on a series of services I must use to locate information.

A good example is trying to locate information about a specific US government program. The lineup of services I had to use to locate the specific item of information I sought included:

I also enlisted the help of two specialists, one in Israel and one here in the United States. As you can see, Google’s two services made up about one tenth of my bibliographic research.

Why?

First, Google’s Web index appears larger to me, but it seems to return hits distorted by search engine optimization tricks such as auto-generated pages. These are mostly useless to me, as are links to sites that contain incorrect information and Web pages for which the link is dead and the content no longer in the Google cache.

In my experience, this happens frequently when running queries for certain government agencies such as Health and Human Services or the documents for a US Congressional hearing. Your mileage may differ because the topics for which I want information are far from popular.

Second, I need coverage that does not arbitrarily stop after following links a couple of levels deep. Some services like Exalead do a better job of digging into the guts of large sites, particularly for certain European sources.

Third, the Blekko folks are doing a pretty good job of keeping older information easily identifiable. This date tagging is important to me, and I appreciate either seeing an explicit date or having a link to a page that displays a creation date.
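The date tagging described above can be approximated crudely. The sketch below is my illustration of the idea, not Blekko’s method: it just pulls the first ISO-style date out of a page’s text.

```python
import re

# Matches ISO-style dates such as 2011-02-25
DATE_PAT = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def crude_creation_date(text):
    """Return the first ISO-style date found in a page's text, or None.
    A real crawler would also inspect HTTP headers and page markup."""
    m = DATE_PAT.search(text)
    return m.group(0) if m else None

print(crude_creation_date("Posted 2011-02-25 by the editor"))  # 2011-02-25
```

Pages rarely state their dates this cleanly, which is why explicit date tagging by the engine is worth something.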


Sampling Information for eDiscovery

February 25, 2011

One way to manage data is to use sampling.  Clearwell Solutions has an article discussing the uses and mathematical principles behind the art of sampling: “How Do You Sample Electronically Stored Information (ESI) in E-Discovery?”  The article mentions useful sources for understanding sampling and how it is being used in the eDiscovery field.  It first mentions the Electronic Discovery Search Group and its EDRM Search Guide, which offers a general glimpse of eDiscovery and its importance for attorneys.  The Sedona Conference’s Working Group commentary “Achieving Quality in the E-Discovery Process” delves further into sampling, its applications and purposes, and court cases in which it was used.  It also explains how eDiscovery teams can shift sampling responsibilities to the requesting party.  That might not be a good thing.

The Sedona Conference paper distinguishes between what it calls “judgmental sampling” and statistical sampling. A key point:

“…Wherein the practitioner has a general sense of which of the several custodians and date range is most likely to offer the greatest yield. As judgmental sampling becomes more widely adopted as a way of controlling costs, electronic discovery sampling can embrace the benefits of statistical sampling as well.  One area where statistical sampling has an advantage is that quantifiable measures of error and confidence intervals are possible, while judgmental sampling has no such formal measurement.”
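The quantifiable error the commentary credits to statistical sampling comes from standard sample-size arithmetic. As a sketch (this is the textbook formula for estimating a proportion, not something drawn from the Sedona paper itself):

```python
import math

def sample_size(confidence_z: float = 1.96, margin: float = 0.05,
                p: float = 0.5) -> int:
    """Minimum random sample size to estimate a proportion (e.g. the
    fraction of responsive documents) within +/- margin at the given
    confidence level. p=0.5 is the worst-case variance assumption."""
    n = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n)

# 95% confidence, +/-5% margin: ~385 documents, regardless of corpus size
print(sample_size())              # 385
# A tighter +/-2% margin demands far more review effort
print(sample_size(margin=0.02))   # 2401
```

Judgmental sampling offers no such numbers, which is exactly the trade-off the quoted passage is describing.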

Sampling is a good way to process unstructured e-mail data, but what will you do when the information you need isn’t in the sample?  It should be used to achieve a general understanding of the data, not all the details.

Whitney Grace, February 25, 2011

Freebie

The Need for Granular Search

February 25, 2011

Companies that use taxonomies for their business to business (B2B) sites could be in big trouble. That is, if we can believe results of a survey conducted by e-commerce application provider Endeca.

As reported in “The Evolution of E-Commerce,” the survey showed that people who shop B2B sites now expect the same personalized experience that shoppers of B2C (business to consumer) sites expect. Neither group wants to sort through the generalized search results taxonomies tend to return.

The solution? According to John Andrews with Endeca:

“Websites … need to make use of much more granular approaches to tagging content in order [to] maximize the Web customer experience. That will require in many cases new content management systems that make managing the relationship between all those tags a lot simpler. The increased nuances of e-commerce [are] going to push more companies to embrace a SaaS model rather than try to build it themselves.”

We tend to agree that there are better ways of structuring an e-commerce site than using taxonomies. We also agree that it makes sense for most companies to outsource the development of a content management system rather than tackling this in-house.

We have no problem with Endeca, but we feel that some of its competitors, such as MarkLogic, should also be considered.

Robin Broyles, February 25, 2011

Freebie

SEO: The Dark Art

February 25, 2011

Many firms provide legitimate search engine optimization (SEO) services to companies in order to improve their ranking in search results. However, cheating the system is frowned upon, as JCPenney recently learned. And Overture. Ouch!

With so many players manipulating the field, David Dorf asks “Can you Trust Search?” Government intervention isn’t the answer, but how is fairness maintained without oligarchic rule by the chosen few: Google, Yahoo!, Bing, etc.?

He goes on,

“Now Google is incorporating more social aspects into their search results. For example, when Google knows it’s me . . . search results will be influenced by my Twitter network . . . the blogs and re-tweeted articles from my network will be higher in the search results than they otherwise would be. So in the case of product searches, things discussed in my network will rise to the top.”
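The network-influenced ranking Foremski describes can be pictured as a simple score boost for results someone in your network has shared. The function below is my illustration of that idea, not Google’s actual algorithm:

```python
def social_rerank(results, shared_by_network, boost=0.3):
    """results: list of (url, base_score) pairs. Lift any result that a
    network contact has shared, then re-sort. Hypothetical sketch only."""
    boosted = [
        (url, score * (1 + boost) if url in shared_by_network else score)
        for url, score in results
    ]
    return sorted(boosted, key=lambda t: t[1], reverse=True)

ranked = social_rerank([("review-site", 1.0), ("friend-blog", 0.9)],
                       shared_by_network={"friend-blog"})
print(ranked[0][0])  # friend-blog (0.9 * 1.3 outranks 1.0)
```

Note how even a modest boost can flip the order, which is why SEO players care so much about getting into your network in the first place.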

SEO services will continue to grow, especially in the realm of social media, and everyone will be expected to play by the rules. The question remains: who makes the rules?

Emily Rae Aldridge, February 25, 2011

Online Video Platform Tussle: Kaltura vs Brightcove

February 25, 2011

As video becomes the new way to communicate, online video is getting significant attention.

Kaltura, an Israeli startup founded in 2006, has developed an open-source video platform to compete with industry leader Brightcove, which has been pioneering the market since 2004. In its third round of financing, reported in “Open-Source Video Platform Co Kaltura Raises $20m,” Kaltura raised $20 million; previous rounds had raised $14 million.

The open-source video platform:

“enables any website to incorporate video, picture, and voice functionalities, including video management, searching, uploading, importing, editing, annotating, remixing, sharing, and advertising. The platform supports desktop computers, tablet computers, television, and other devices.”

Already in use by high-profile companies such as Fox, Paramount, HBO, and Warner Brothers, Kaltura offers an attractive and flexible alternative to Brightcove at a lower price. Time will tell if the recent injection of capital will be enough to overcome the entrenched competition.

Emily Rae Aldridge, February 25, 2011

Protected: SharePoint Best Bets

February 25, 2011


Who Defriended Google?

February 24, 2011

Did Facebook defriend Google? Did Google defriend Facebook? With Xooglers making up about 20 percent of the Facebook staff, the questions are not innocuous. The fate of Google’s new social play may hang in the balance. What are friends for?

Meow.

There’s something catty about how Google has snubbed Facebook in the latest iteration of Google Social.  The official blog post announcing the new improvements says not one word about Facebook, the elephant in the room.  In “Analysis: Google Social Search Is All About Blocking Facebook/Twitter Search,” Tom Foremski’s take is that this

“Google[’s] move is better understood as a blocking measure to stop people from asking their social network directly.”

Will it work?  Let’s think about it.

Google Social has been around since 2009, but these latest improvements take results that were at the bottom of the screen and place them high in the search results, add notes for links your connections have shared, and expand the ways you can connect your accounts.  Google, of course, always tries to act like it’s taking the high road when it comes to Facebook, stressing that Facebook is a closed system while Google is as open and free as the air we breathe.  Personally, I think public data is overrated, and I suspect many other people do too.  Why else is there a huge backlash every time Facebook tries to sneak more openness into its users’ profiles?


What happens when the big dogs set up a pack without a little dog? Answer: Bowling alone.

When I look at Google Social, I have to ask myself if people would choose this over Facebook.  Facebook, of course, has momentum on its side since nearly everyone and his grandmother is on Facebook already and accessing it frequently.  Another question is how can Google know whose opinion I actually care about when giving me search results?


Invention Machine Embraces NLP

February 24, 2011

Natural Language Processing (NLP) is not a new science, but it has yet to be perfected. A wealth of corporate and common knowledge lies in unstructured text documents, yet most NLP-assisted search retrieves piles of seemingly disconnected documents. Users need precise, relevant results, but NLP has yet to get there. Invention Machine’s Goldfire claims to know “How Natural Language Processing Can Solve the Knowledge Retrieval Problem.” The article notes:

“Goldfire’s Natural Language query interface enables the user to put a question in a free text format, which would be the same format as if the question were given to another person. And, once relevant knowledge has been retrieved, Goldfire presents the results in a way that makes their meaning readily apparent.”

Claiming to have found a way for computer-aided knowledge extraction to overcome the natural language obstacles of semantics, syntax, and context, Goldfire marries high-level concept extraction and problem-solving capabilities. Despite such improvements, the problem of how we structure and retrieve information is unlikely to be solved anytime soon.

Emily Rae Aldridge, February 24, 2011
