Does Anything Matter Other Than the Interface?

August 7, 2014

I read what I thought was a remarkable public relations story. You will want to check the write up out for two reasons. First, it demonstrates how content marketing converts an assertion into what a company believes will generate business. And, second, it exemplifies how an interface fix is presented as the answer to complex issues in information access. You may, like Archimedes, exclaim, “I have found it.”

The title and subtitle of the “news” are:

NewLane’s Eureka! Search Discovery Platform Provides Self-Servicing Configurable User Interface with No Software Development. Eureka! Delivers Outstanding Results in the Cloud, Hybrid Environments, and On Premises Applications.

My reaction was, “What?”

The guts of the NewLane “search discovery platform” are explained this way:

Eureka! was developed from the ground up as a platform to capture all the commonalities of what a search app is and allows for the easy customization of what a company’s search app specifically needs.

I am confused. I navigated to the company’s Web site and learned:

Eureka! empowers key users to configure and automatically generate business applications for fast answers to new question that they face every day. http://bit.ly/V0E8pI

The Web site explains:

Need a solution that provides a unified view of available information housed in multiple locations and formats? Finding it hard to sort among documents, intranet and wiki pages, and available reporting data? Create a tailored view of available information that can be grouped by source, information type or other factors. Now in a unified, organized view you can search for a project name and see results for related documents from multiple libraries, wiki pages from collaboration sites, and the profiles of project team members from your company’s people directory or social platform.

“Unified information access” is a buzzword used by Attivio and PolySpot, among other search vendors. The Eureka! approach seems to be an interface tool for “key users.”

Here’s the Eureka technology block diagram:

image

Notice that Eureka! has connectors to access the indexes in Solr, the Google Search Appliance, Google Site Search, and a relational database. The content sources these indexing and search systems can access include Documentum, Microsoft SharePoint, OpenText LiveLink, IBM FileNet, file shares, databases (presumably NoSQL and XML data management systems as well), and content in “the cloud.”

For me the diagram makes clear that NewLane’s Eureka is an interface tool. A “key user” can create an interface to access content of interest to him or her. I think there are quite a few people who do not care where data come from or what academic nitpicking went on to present information. The focus is on giving a harried professional, like an MBA who has to make a decision “now,” the information he or she needs.
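To make the “interface layer over existing indexes” idea concrete, here is a minimal sketch of a thin federating layer. The Solr URL, database file, and field names are hypothetical placeholders, not NewLane’s code or schema.

```python
import sqlite3
import requests  # assumes the requests library is installed

def search_solr(query, solr_url="http://localhost:8983/solr/docs/select"):
    """Query a hypothetical Solr core and return (title, source) pairs."""
    resp = requests.get(solr_url, params={"q": query, "wt": "json", "rows": 10})
    resp.raise_for_status()
    return [(doc.get("title", ""), "solr") for doc in resp.json()["response"]["docs"]]

def search_database(query, db_path="projects.db"):
    """Query a hypothetical local database of project records."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT name FROM projects WHERE name LIKE ?", (f"%{query}%",)
    ).fetchall()
    conn.close()
    return [(name, "database") for (name,) in rows]

def unified_view(query):
    """Merge hits from both sources and group them by origin for display."""
    grouped = {}
    for title, source in search_solr(query) + search_database(query):
        grouped.setdefault(source, []).append(title)
    return grouped

if __name__ == "__main__":
    print(unified_view("Project Falcon"))
```

The point of the sketch is how little is happening: the layer reformats queries, merges results, and groups them. The heavy lifting still sits in the indexes underneath.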

image

Archimedes allegedly jumped from his bath, ran into the street, and shouted “Eureka.” He was reacting, I learned from a lousy math teacher, to a mathematical insight about displacement. The teacher did not tell me that Archimedes was killed because he was working on a math problem and ignored a Roman soldier’s command to quit calculating. Image source: http://blocs.xtec.cat/sucdecocu/category/va-de-cientifics/

I find interfaces a bit like my wife’s questions about the color of paint to use for walls. She shows me antique ivory and then parchment. For me, both are white. But for her, the distinctions are really important. She knows nothing about paint chemistry, paint cost, and application time. She is into the superficial impact the color has for her. To me, the colors are indistinguishable. I want to know about durability, how many preparation steps the painter must go through between brands, and the cost of getting the room painted off white.

Interfaces for “key users” work like this in my experience. The integrity of the underlying data, the freshness of the indexes, the numerical recipes used to prioritize the information in a report are niggling details of zero interest to many system users. An answer—any answer—may be good enough.

Eureka! makes it easier to create interfaces. My view is that a layer on top of connectors, on top of indexing and content processing systems, on top of wildly diverse content is interesting. However, I see the interfaces as a type of paint. The walls look good but the underlying structure may be deeply flawed. The interface my wife uses for her walls does not address the fact that the wallboard has to be replaced BEFORE she paints again. When I explain this to her when she wants to repaint the garage walls, she says, “Why can’t we just paint it again?” I don’t know about you, but I usually roll over, particularly if it is a rental property.

Now what does the content marketing-like “news” story tell me about Eureka!?

I found this statement yellow highlight worthy:

Seth Earley, CEO of Earley and Associates, describes the current global search environment this way, “What many executives don’t realize is that search tools and technologies have advanced but need to be adapted to the specific information needed by the enterprise and by different types of employees accomplishing their tasks. The key is context. Doing this across the enterprise quickly and efficiently is the Holy Grail. Developing new classes of cloud-based search applications are an essential component for achieving outstanding results.”

Yep, context is important. My hunch is that the context of the underlying information is more important. Mr. Earley, who sponsored an IDC study by an “expert” named Dave Schubmehl on what I call information saucisson, is an expert on the quasi academic “knowledge quotient” jargon. He, in this quote, seems to be talking about a person in shipping or a business development professional being able to use Eureka! to get the interface that puts needed information front and center. I think that shipping departments use dedicated systems whose data typically do not find their way into enterprise information access systems. I also think that business development people use Google, whatever is close at hand, and enterprise tools if there is time. When time is short, concise reports can be helpful. But what if the data on which the reports are based are incorrect, stale, incomplete, or just wrong? Well, that is not a question germane to a person focused on the “Holy Grail.”

I also noted this statement from Paul Carney, president and founder of NewLane:

The full functionality of Eureka! enables understaffed and overworked IT departments to address the immediate search requirements as their companies navigate the choppy waters of lessening their dependence on enterprise and proprietary software installations while moving critical business applications to the Cloud. Our ability to work within all their existing systems and transparently find content that is being migrated to the Cloud is saving time, reducing costs and delivering immediate business value.

The point is similar to what Google has used to sell licenses for its Google Search Appliance. Traditional information technology departments can be disintermediated.

If you want to know more about NewLane, navigate to the company’s Web site. Keep a bathrobe handy if you review the site while relaxing in a pool or hot tub. Like Archimedes, you may have an insight, jump from the water, and run through the streets to tell others about it.

Stephen E Arnold, August 7, 2014

Data Augmentation: Is a Step Missing or Mislocated?

August 6, 2014

I read “Data Warehouse Augmentation, Part 4.” You can find the write up at http://ibm.co/1obWXDh. There are other sections of the write up, but I want to focus on the diagrams in this fourth chapter/section.

IBM is working overtime to generate additional revenues. Some of the ideas are surprising; for example, positioning Vivisimo’s metasearch function as a Big Data solution or buying Cybertap and then making the quite valuable technology impossible to find unless one is an intelligence procurement official. Then there is Watson, and I am just not up to commenting on this natural language processing system.

To the matter at hand. There is basic information in this write up about specific technical components of a Big Data solution. The words, for the most part, will not surprise anyone who has looked at marketing collateral from any of the Big Data vendors/integrators.

What is fascinating about the write up is the wealth of diagrams in the document. I worked through the text and the diagrams and I noticed that one task is not identified as important; specifically, the conversion of source content into a file type or form that the content processing system can process.

Here’s an example. First the IBM diagram:

image

Source: IBM, Data Warehouse Augmentation, 2014.

Notice that after “staging”, there is a function described in time-honored database speak, “ETL.” Now “extract, transform, and load” is a very important step. But is there a step that precedes ETL?

image

How can one extract from disparate content if a connector is not available or the source system cannot support file transfers, direct access, or reports that reflect in memory data?

In my experience, there will be different methods of acquiring content to process. There are internal systems. If there is an ancient AS/400, some work is required to generate outputs that provide the data required. Due to the nature of the AS/400’s outstanding memory system, direct interaction requires some care to get the data, and the updates not yet written to disc, without corrupting the in-memory information. We have addressed this “memory fragility” by using a standalone machine that accepts an output from the AS/400 and then disconnects. The indexing system then connects to the standalone machine to pick up the AS/400 outputs. Clunky? You bet. But there are some upsides. To learn about the excitement of direct interaction with an AS/400, just do some real time data acquisition. Let me know how this works out for you.
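A minimal sketch of the pickup step, assuming the staging machine exposes the AS/400 export as files in a hypothetical directory, might look like this. It illustrates the pattern, not the system we actually built.

```python
import shutil
from pathlib import Path

STAGING_DIR = Path("/mnt/as400_staging")   # hypothetical share exposed by the standalone machine
QUEUE_DIR = Path("/var/indexer/incoming")  # hypothetical pickup folder watched by the indexing system

def pick_up_exports():
    """Copy completed export files from the staging share into the indexer's queue."""
    QUEUE_DIR.mkdir(parents=True, exist_ok=True)
    for export in sorted(STAGING_DIR.glob("*.done")):
        shutil.copy2(export, QUEUE_DIR / export.name)  # copy first so a failed transfer cannot lose data
        export.unlink()                                # then clear the staging copy
        print(f"queued {export.name}")

if __name__ == "__main__":
    pick_up_exports()
```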

The same type of care is often needed with the content assembled for the data warehouse pipeline. Let me illustrate this. Assume the data warehouse will obtain data from these sources: internal legacy systems, third party providers, custom crawls with the content residing on a hosted service, and direct data acquisition from mobile devices that feed information into a collection point parked at Amazon.

Now each of these content streams has different feathers in its war bonnet. Some of the data will be well formed XML. Some will be JSON. Some will be a proprietary format unique to the source. For each file type, there will be examples of content objects that are different, due to a vendor format change or a glitch.

These disparate content objects, therefore, have to be processed before extraction can occur. So has IBM put ETL in the wrong place in this diagram, or has IBM omitted the pre-processing (normalization) operation?

In our experience, content that cannot be processed is not available to the system. If big chunks of content end up in the exceptions folder, the resulting content processing may be flawed. One of the data points that must be checked is the number of content objects that can be normalized in a pre-processing stream. We have encountered situations like these. Your mileage may vary:

  1. Entire streams of certain types of content are exceptions, so the resulting indexing does not contain the data. Example: outputs from certain intercept systems.
  2. Streams of content skip non-processable content without writing exceptions to a file due to configuration or resource availability.
  3. Streams of content are automatically “capped” when the processing system cannot keep pace. When the system accepts more content, it does not pull information from a cache or storage pool. The system just ignores the information it was unable to process.
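A minimal sketch of a pre-processing (normalization) pass that makes the exception count visible, assuming hypothetical input formats and folder names, might look like this:

```python
import json
import xml.etree.ElementTree as ET
from pathlib import Path

EXCEPTIONS_DIR = Path("exceptions")  # hypothetical folder for content that cannot be normalized

def normalize(path):
    """Convert one source object into a common dict, or return None if it cannot be parsed."""
    try:
        if path.suffix == ".json":
            record = json.loads(path.read_text())
            return {"id": record.get("id"), "text": record.get("body", "")}
        if path.suffix == ".xml":
            root = ET.parse(path).getroot()
            return {"id": root.findtext("id"), "text": root.findtext("body") or ""}
    except (json.JSONDecodeError, ET.ParseError):
        return None
    return None  # proprietary or unknown formats fall through to the exceptions folder

def run(source_dir="incoming"):
    """Normalize a folder of mixed content and report how much ended up as exceptions."""
    EXCEPTIONS_DIR.mkdir(exist_ok=True)
    ok, failed = 0, 0
    for path in Path(source_dir).iterdir():
        record = normalize(path)
        if record is None:
            failed += 1
            path.rename(EXCEPTIONS_DIR / path.name)  # keep the object; do not silently drop it
        else:
            ok += 1
            # hand the normalized record to the ETL stage here
    print(f"normalized {ok}, exceptions {failed}")

if __name__ == "__main__":
    run()
```

The counter at the end is the point. If the exceptions number is large or growing, the downstream indexes are quietly losing content.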

There are fixes for each of these situations. What we have learned is that this pre processing function can be very expensive, have an impact on the reliability of the outputs from the data warehousing system when queried, and generate a bottleneck that affects downstream processes.

After decades of data warehousing refinement, why does this problem keep surfacing?

The answer is that recycling traditional thinking about content processing is much easier than figuring out what causes a complex system to derail itself. I think that may be part of the reason the IBM diagram may be misleading.

Pre-processing can be time consuming, hungry for machine resources, and very expensive to implement.

Stephen E Arnold, August 6, 2014

The March of IBM Watson: From Kitchen to Executive Suite

August 5, 2014

Watson, fresh from its recipe innovations at Bon Appétit, is on the move…again. From the game show to the hospital, Watson has been demonstrating its expertise in the most interesting venues.

I read “A Room Where Executives Go to Get Help from IBM’s Watson.” The subtitle is an SEO dream: “Researchers at IBM are testing a version of Watson designed to listen and contribute to business meetings.” I know IBM has loads of search and content processing capability. In addition to the gems cranked out by Dr. Jon Kleinberg and Dr. Ramanathan Guha, IBM has oodles of acquisitions in the search and content processing sector. Do you know about Clementine? Are you familiar with iPhrase? Have you explored Cybertap’s indexing and search function with your local IBM representative? What about Vivisimo? What about the search functions in DB2, FileNet, and OmniFind regardless of its incarnation? Whew. That’s a lot of search and content processing horsepower. I think most of that power remains in the barn.

Watson is not in the barn. Watson is a raging bull. Watson is, I believe, something special. Based on open source technology plus home brew wizardry, Watson is a next-generation information retrieval world beater. The idea is that Watson is trained in a manner similar to the approach used by Autonomy in 1996. Then that indexed content is whipped into a question answering system. Hapless chefs, litigation wary physicians, and now risk averse MBAs can use Watson to make better decisions or answer really tough questions.

I know this to be true because Technology Review tells me so. Whatever MIT-tinged Technology Review says is pretty darned solid. Here’s a passage I noted:

Everything said in the room can be instantly transcribed, providing a detailed record of any meeting, and allowing the system to listen out for commands addressed to “Watson.” Those commands can be simple requests for information of the kind you might type into a search box. But Watson can also take a more active role in a discussion. In a live demonstration, it helped researchers role-playing as executives to generate a short list of companies to acquire.

The write up explains that a little bit of preparation is required. There’s the pesky training, which is particularly annoying when the topic of the meeting is, “The DOJ attorneys are here to discuss the depositions” or “We have a LOCA at the reactor. Everyone to my conference room now.” I suppose most business meetings are even more exciting.

Technology Review points out that the technology has a tough time converting executive speech to text. Watson uses the text as fodder for the indexing and parsing required to pass queries to the internal subsystems which then tap into Watson for answers. The natural language query and automatic query refinement functions seem to work well for game show questions and for discerning uses of tamarind. For a LOCA meeting or discussion of a deposition, Watson may need a bit more work.

I find the willingness of major “real” news outlets to describe Watson in juicy write ups an indication of the esteem in which IBM is held. My view is a bit different. I am not sure the Watson group at IBM knows how to generate substantial revenues. The folks have to make some progress toward $1 billion in revenue and then grow that revenue to a modest $10 billion in five or six years.

The fact that outfits in search and content processing have failed to hit more modest benchmarks for decades is irrelevant. The only search company that I know has generated billions is Google. Keep in mind that those billions come from online advertising. HP bought Autonomy for $11 billion in the hopes of owning a Klondike. IBM wisely went with open source technology and home grown code.

But the eventual effect of both HP’s and IBM’s approach will be more modest revenues. HP makes a name for itself via litigation and IBM is making a name for itself with demonstrations and some recipes.

Search and content processing, whether owned by a large company or a small one, faces some credibility, marketing, revenue, technology, and profit challenges. I am not sure a business triathlete can complete the course at this time. Talk is just so much easier than getting over or around the course intact.

Stephen E Arnold, August 5, 2014

Attensity Leverages Biz360 Invention

August 4, 2014

In 2010, Attensity purchased Biz360. The Beyond Search comment on this deal is at http://bit.ly/1p4were. One of the goslings reminded me that I had not instructed a writer to tackle Attensity’s July 2014 announcement “Attensity Adds to Patent Portfolio for Unstructured Data Analysis Technology.” PR-type “stories” can disappear, but for now you can find a description of “Attensity Adds to Patent Portfolio for Unstructured Data Analysis Technology” at http://reut.rs/1qU8Sre.

My researcher showed me a hard copy of 8,645,395, and I scanned the abstract and claims. The abstract, like many search and content processing inventions, seemed somewhat similar to other text parsing systems and methods. The invention was filed in April 2008, two years before Attensity purchased Biz360, a social media monitoring company. Attensity, as you may know, is a text analysis company founded by Dr. David Bean. Dr. Bean employed various “deep” analytic processes to figure out the meaning of words, phrases, and documents. My limited understanding of Attensity’s methods suggested to me that Attensity’s Bean-centric technology could process text to achieve a similar result. I had a phone call from AT&T regarding the utility of certain Attensity outputs. I assume that the Bean methods required some reinforcement to keep pace with customers’ expectations about Attensity’s Bean-centric system. Neither the goslings nor I are patent attorneys. So after you download 395, seek out a patent attorney and get him/her to explain its mysteries to you.

The abstract states:

A system for evaluating a review having unstructured text comprises a segment splitter for separating at least a portion of the unstructured text into one or more segments, each segment comprising one or more words; a segment parser coupled to the segment splitter for assigning one or more lexical categories to one or more of the one or more words of each segment; an information extractor coupled to the segment parser for identifying a feature word and an opinion word contained in the one or more segments; and a sentiment rating engine coupled to the information extractor for calculating an opinion score based upon an opinion grouping, the opinion grouping including at least the feature word and the opinion word identified by the information extractor.

This invention tackles the Mean Joe Greene of content processing from the point of view of a quite specific type of content: a review. Amazon has quite a few reviews, but the notion of a “shaped” review is a thorny one. (See, for example, http://bit.ly/1pz1q0V.) The invention’s approach identifies words with different roles; some words are “opinion words” and others are “feature words.” By hooking a “sentiment engine” to this indexing operation, the Biz360 invention can generate an “opinion score.” The system uses item, language, training model, feature, opinion, and rating modifier databases. These, I assume, are either maintained by subject matter experts (expensive), smart software working automatically (often evidencing “drift” so results may not be on point), or a hybrid approach (humans cost money).

image

The Attensity/Biz360 system relies on a number of knowledge bases. How are these updated? What is the latency between identifying new content and updating the knowledge bases to make the new content available to the user or a software process generating an alert or another type of report?

The 20 claims embrace the components working as a well oiled content analyzer. The claim I noted is that the system’s opinion score uses a positive and negative range. I worked on a sentiment system that made use of a stop light metaphor: red for negative sentiment and green for positive sentiment. When our system could not figure out whether the text was positive or negative we used a yellow light.

image

The approach, used for a US government project a decade ago, relied on a very simple metaphor to communicate a situation without scores, values, and scales. Image source: http://bit.ly/1tNvkT8
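Here is a minimal sketch of that kind of feature/opinion scoring with a stop light output. The tiny lexicons are hypothetical stand-ins, not the Biz360 knowledge bases.

```python
# Hypothetical lexicons; a production system maintains much larger, curated knowledge bases.
FEATURE_WORDS = {"battery", "screen", "price", "delivery"}
OPINION_WORDS = {"excellent": 2, "great": 1, "good": 1, "poor": -1, "terrible": -2}

def opinion_score(review):
    """Sum opinion word values for segments that also mention a feature word."""
    score = 0
    for segment in review.lower().replace("!", ".").split("."):
        words = segment.split()
        if FEATURE_WORDS.intersection(words):
            score += sum(OPINION_WORDS.get(w, 0) for w in words)
    return score

def stop_light(score):
    """Map the numeric score onto the red/yellow/green metaphor."""
    if score > 0:
        return "green"
    if score < 0:
        return "red"
    return "yellow"

review = "The screen is excellent. The battery is poor. Shipping was slow."
score = opinion_score(review)
print(score, stop_light(score))  # prints: 1 green
```

Note what the sketch leaves out: the item, language, training model, and rating modifier databases the patent describes, which is exactly where the maintenance costs show up.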

Attensity said, according to the news story cited above:

By splitting the unstructured text into one or more segments, lexical categories can be created and a sentiment-rating engine coupled to the information can now evaluate the opinions for products, services and entities.

Okay, but I think that the splitting of text into segments was a function of iPhrase and of search vendors converting unstructured text into XML and then indexing the outputs.

Jonathan Schwartz, General Counsel at Attensity, is quoted in the news story as asserting:

“The issuance of this patent further validates the years of research and affirms our innovative leadership. We expect additional patent issuances, which will further strengthen our broad IP portfolio.”

Okay, this sounds good, but the invention took place prior to Attensity’s owning Biz360. Attensity, therefore, purchased the invention of folks who did not work at Attensity in the period prior to the filing in 2008. I understand that companies buy other companies to get technology and people. I find it interesting that a purchased invention “validates” Attensity’s research and “affirms” Attensity’s “innovative leadership.”

I would word what the patent delivers and what Attensity contributed differently. I am no legal eagle or sentiment expert. I would like less marketing razzle dazzle, but I am in the minority on this point.

Net net: Attensity is an interesting company. Will it be able to deliver products that make the licensees’ sentiment scores move in a direction that leads to sustainable revenue and generous profits? With the $90 million in funding the company received in 2014, the 14-year-old company will have some work to do to deliver a healthy return to its stakeholders. Expert System, Lexalytics, and others are racing down the same quarter mile drag strip. Which firm will be the winner? Which will blow an engine?

Stephen E Arnold, August 4, 2014

Training Your Smart Search System

August 2, 2014

With the increasing chatter about smart software, I want to call to your attention this article, “Improving the Way Neural Networks Learn.” Keep in mind that some probabilistic search systems have to be trained on content that closely resembles the content the system will index. The training is important, and training can be time consuming. The licensee has to create a training set of data that is similar to what the software will index. Then the training process is run, a human checks the system outputs, and makes “adjustments.” If the training set is not representative, the indexing will be off. If the human makes corrections that are wacky, then the indexing will be off. When the system is turned loose, the resulting index may return outputs that are not what the user expected or the outputs are incorrect. Whether the system user knows enough to recognize incorrect results varies from human to human.

If you want to have a chat with your vendor regarding the time required to train or re-train a search system relying on sample content, print out this article. If the explanation does not make much sense to you, you can document off-point query result sets, complain to the search system vendor, or initiate a quick fix. Note that quick fixes involve firing humans believed to be responsible for the system, initiating a new search procurement, or pretending that the results are just fine. I suppose there are other options, but I have encountered these three approaches seasoned with either legal action or verbal grousing to the vendor. Even when the automated indexing is tuned within an inch of its life, accuracy is likely to start out in the 85 to 90 percent range and then degrade.

Training can be a big deal. Ignoring the “drift” that occurs when the smart software has been taught or learned something that distorts the relevance of results can produce some sharp edges.
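One way a licensee can watch for this degradation is a periodic spot check against a small, hand-labeled query set. A minimal sketch, assuming hypothetical labeled data and whatever search client the system exposes, might look like:

```python
# Hypothetical labeled sample: query -> set of document ids a human judged relevant.
LABELED_QUERIES = {
    "warranty claim form": {"doc14", "doc87"},
    "q3 sales by region": {"doc3"},
}

def precision_at_k(results, relevant, k=5):
    """Fraction of the top k results that the human judges marked relevant."""
    top = results[:k]
    if not top:
        return 0.0
    return sum(1 for doc_id in top if doc_id in relevant) / len(top)

def spot_check(search_fn, threshold=0.85):
    """Run the labeled queries through the search system and flag possible drift."""
    scores = []
    for query, relevant in LABELED_QUERIES.items():
        results = search_fn(query)  # search_fn wraps whatever client the vendor provides
        scores.append(precision_at_k(results, relevant))
    average = sum(scores) / len(scores)
    print(f"average precision@5: {average:.2f}")
    if average < threshold:
        print("warning: accuracy below threshold; review the training set and index freshness")

# Example with a stand-in search function:
spot_check(lambda q: ["doc14", "doc2", "doc87", "doc9", "doc1"])
```

The numbers matter less than the trend. If the average drops month over month, the drift has started.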

Stephen E Arnold, August 2, 2014

More Knowledge Quotient Silliness: The Florida Gar of Search Marketing

August 1, 2014

I must be starved for intellectual Florida Gar. Nibble on this fish’s lateral line and get nauseous or dead. Knowledge quotient as a concept applied to search and retrieval is like a largish Florida gar. Maybe a Florida gar left too long in the sun.

image

Lookin’ yummy. Looks can be deceiving in fish and fishing for information. A happy quack to https://www.flmnh.ufl.edu/fish/Gallery/Descript/FloridaGar/FloridaGar.html

I ran a query on one of the search systems that I profile in my lectures for the police and intelligence community. With a bit of clicking, I unearthed some interesting uses of the phrase “knowledge quotient.”

What surprised me is that the phrase is a favorite of some educators. The use of the term as a synonym for plain old search seems to be one of those marketing moments of magic. A group of “experts” with degrees in home economics, early childhood education, or political science sit around and try to figure out how to sell a technology that is decades old. Sure, the search vendors make “improvements” with ever increasing speed. As costs rise and sales fail to keep pace, the search “experts” gobble a cinnamon latte and innovate.

In Dubai earlier this year, I saw a reference to a company engaged in human resource development. I think this means “body shop,” “lower cost labor,” or “mercenary registry,” but I could be off base. The company is called Knowledge Quotient FZ LLC. If one tries to search for the company, the task becomes onerous. Google is giving some love to the recent IDC study by an “expert” named Dave Schubmehl. As you may know, this is the “professional” who used my information and then sold it on Amazon until July 2014 without paying me for my semi-valuable name. For more on this remarkable approach to professional publishing, see http://wp.me/pf6p2-auy.

Also, in Dubai is a tutoring outfit called Knowledge Quotient which delivers home tutoring to the children of parents with disposable income. The company explains that it operates a place where learning makes sense.

Companies in India seem to be taken with the phrase “knowledge quotient.” Consider Chessy Knowledge Quotient Private Limited. In West Bengal, one can find one’s way to Mukherjee Road and engage the founders with regard to an “effective business solution.” See http://chessygroup.co.in. Please, do not confuse Chessy with KnowledgeQ, the company operating as Knowledge Quotient Education Services India Pvt Ltd. in Bangalore. See http://www.knowledgeq.org.

What’s the relationship between these companies operating as “knowledge quotient” vendors and search? For me, appropriating names and applying them to enterprise search contributes to the low esteem in which many search vendors are held.

Why is Autonomy IDOL such a problem for Hewlett Packard? This is a company that bought a mobile operating system and stepped away from it. This is a company that brought out a tablet and abandoned it in a few months. This is a company that wrote off billions and then blamed the seller for not explaining how the business worked. In short, Autonomy, which offers a suite of technology that performs as well or better than any other search system, has become a bit of Florida gar in my view. Autonomy is not a fish. Autonomy is a search and content processing system. When properly configured and resourced, it works as well as any other late 1990s search system. I don’t need meaningless descriptions like “knowledge quotient” to understand that the “problem” with IDOL is little more than HP’s expectations exceeding what a decades old technology can deliver.

Why is Fast Search & Transfer an embarrassment to many who work in the search sector? Perhaps the reason has to do with the financial dealings of the company. In addition to fines and jail terms, the Fast Search system drifted from its roots in Web search into publishing, smart software, and automatic functions. The problem was that when customers did not pay, the company did not suck it up, fix the software, and renew its efforts to deliver effective search. Nah, Fast Search became associated with a quick sale to Microsoft, subsequent investigations by Norwegian law enforcement, and the culminating decision to ban one executive from working in search. Yep, that is a story that few want to analyze. Search marketers promised, and the technology did not deliver, could not deliver given Fast Search’s circumstances.

What about Excalibur/Convera? This company managed to sell advanced search and retrieval to Intel and the NBA. In a short time, both of these companies stepped away from Convera. The company then focused on a confection called “vertical search” based on indexing the Internet for customers who wanted narrow applications. Not even the financial stroking of Allen & Co. could save Convera. In an interesting twist, Fast Search purchased some of Convera’s assets in an effort to capture more US government business. Who digs into the story of Excalibur/Convera? Answer: No one.

What passes for analysis in enterprise search, information retrieval, and content processing is the substitution of baloney for fact-centric analysis. What is the reason that so many search vendors need multiple injections of capital to stay in business? My hunch is that companies like Antidot, Attivio, BA Insight, Coveo, Sinequa, and Palantir, among others, are in the business of raising money, spending it in an increasingly intense effort to generate sustainable revenue, and then going once again to capital markets for more money. When the funding sources dry up or just cut off the company, what happens to these firms? They fail. A few are rescued like Autonomy, Exalead, and Vivisimo. Others just vaporize as Delphes, Entopia, and Siderean did.

When I read a report from a mid tier consulting firm, I often react as if I had swallowed a chunk of Florida gar. An example in my search file is basic information about “The Knowledge Quotient: Unlocking the Hidden Value of Information.” You can buy this outstanding example of ahistorical analysis from IDC.com, the employer of Dave Schubmehl. (Yep, the same professional who used my research without bothering to issue me a contract or get permission from me to fish with my identity. My attorney, if I understand his mumbo jumbo, says this action was not identity theft, but Schubmehl’s actions between May 2012 and July 2014 strike me as untoward.)

Net net: I wonder if any of the companies using the phrase “knowledge quotient” are aware of brand encroachment. Probably not. That may be due to the low profile search enjoys in some geographic regions where business appears to be more healthy than in the US.

Can search marketing be compared to Florida gar? I want to think more about this.

Stephen E Arnold, August 1, 2014

The IHS Invention Machine: US 8,666,730

July 31, 2014

I am not an attorney. I consider this a positive. I am not a PhD with credentials as impressive as those of Vladimir Igorevich Arnold, my distant relative. He worked with Andrey Kolmogorov, who was able to hike in some bare essentials AND do math at the same time. Kolmogorov and Arnold—both interesting, if idiosyncratic, guys. Hiking in the wilderness with some students, anyone?

Now to the matter at hand. Last night I sat down with a copy of US 8,666,730 B2 (hereinafter I will use this shortcut for the patent, 730), filed in an early form in 2009, long before Information Handling Services wrote a check to the owners of The Invention Machine.

The title of the system and method is “Question Answering System and Method Based on Semantic Labeling of Text Documents and User Questions.” You can get your very own copy at www.uspto.gov. (Be sure to check out the search tips; otherwise, you might get a migraine dealing with the search system. I heard that technology was provided by a Canadian vendor, which seems oddly appropriate if true. The US government moves in elegant, sophisticated ways.)

Well, 730 contains some interesting information. If you want to ferret out more details, I suggest you track down a friendly patent attorney and work through the 23 page document word by word.

My analysis is that of a curious old person residing in rural Kentucky. My advisors are the old fellows who hang out at the local bistro, Chez Mine Drainage. You will want to keep this in mind as I comment on this invention by James Todhunter (Framingham, Mass), Igor Sovpel (Minsk, Belarus), and Dzianis Pastanohau (Minsk, Belarus). Mr. Todhunter is described as “a seasoned innovator and inventor.” He was the Executive Vice President and Chief Technology Officer for Invention Machine. See http://bit.ly/1o8fmiJ, LinkedIn at (if you are lucky) http://linkd.in/1ACEhR0, and this YouTube video at http://bit.ly/1k94RMy. Igor Sovpel, co-inventor of 730, has racked up some interesting inventions. See http://bit.ly/1qrTvkL. Mr. Pastanohau was on the 730 team, and he also helped invent US 8,583,422 B2, “System and Method for Automatic Semantic Labeling of Natural Language Texts.”

The question answering invention is explained this way:

A question-answering system for searching exact answers in text documents provided in the electronic or digital form to questions formulated by user in the natural language is based on automatic semantic labeling of text documents and user questions. The system performs semantic labeling with the help of markers in terms of basic knowledge types, their components and attributes, in terms of question types from the predefined classifier for target words, and in terms of components of possible answers. A matching procedure makes use of mentioned types of semantic labels to determine exact answers to questions and present them to the user in the form of fragments of sentences or a newly synthesized phrase in the natural language. Users can independently add new types of questions to the system classifier and develop required linguistic patterns for the system linguistic knowledge base.

The idea, as I understand it, is that I can craft a question without worrying about special operators like AND or field labels like CC=. Presumably I can submit this type of question to a search system based on 730 and its related inventions like the automatic indexing in 422.

The references cited for this 2009 or earlier invention are impressive. I recognized Mr. Todhunter’s name, that of a person from Carnegie Mellon, and one of the wizards behind the tagging system in use at SAS, the statistics outfit loved by graduate students everywhere. There were also a number of references to Dr. Liz Liddy, Syracuse University. I associated her with the mid to late 1990s system marketed then as DR LINK (Document Retrieval Linguistic Knowledge). I have never been comfortable with the notion of “knowledge” because it seems to require that subject matter experts and other specialists update, edit, and perform various processes to keep the “knowledge” from degrading into a ball of statistical fuzz. When someone complains that a search system using Bayesian methods returns off point results, I look for the humans who are supposed to perform “training,” updates, remapping, and other synonyms for “fixing up the dictionaries.” You may have other experiences which I assume are positive and have garnered you rapid promotion for your search system competence. For me, maintaining knowledge bases usually leads to lots of hard work, unanticipated expenses, and the customary termination of a scapegoat responsible for the search system.

I am never sure how to interpret extensive listings of prior art. Since I am not qualified to figure out if a citation is germane, I will leave it to you to wade through the full page of US patents, foreign patent documents, and other publications. Who wants to question the work of the primary examiner and the Faegre Baker Daniels “attorney, agent, or firm” tackling 730?

On to the claims. The patent lists 28 claims. Many of them refer to operations within the world of what the inventors call expanded Subject-Action-Object or eSAO. The idea is that the system figures out parts of speech, looks up stuff in various knowledge bases and automatically generated indexes, and presents the answer to the user’s question. The lingo of the patent is sufficiently broad to allow the system to accommodate an automated query in a way that reminded me of Ramanathan Guha’s massive semantic system. I cover some of Dr. Guha’s work in my now out of print monograph, Google Version 2.0, published by one of the specialist publishers that perform Schubmehl-like maneuvers.
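The subject-action-object idea can be approximated with an off-the-shelf dependency parser. Here is a minimal sketch using spaCy as a stand-in; it illustrates the general technique, not the eSAO method claimed in 730.

```python
import spacy

# Assumes the small English model is installed: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_sao(text):
    """Pull rough (subject, action, object) triples from parsed sentences."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for subj in subjects:
                for obj in objects:
                    triples.append((subj.text, token.lemma_, obj.text))
    return triples

print(extract_sao("The pump transfers coolant to the reactor core."))
# Expected output along the lines of: [('pump', 'transfer', 'coolant')]
```

The patent layers question classification, answer patterns, and a linguistic knowledge base on top of this kind of extraction, which is where the maintenance burden I worry about comes in.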

My first pass through 730’s claims left me with a sense of déjà vu, which is obviously not correct. The invention has been awarded the status of a “patent”; therefore, the invention is novel. Nevertheless, these concepts pecked away at me with the repetitiveness of the woodpecker outside my window this morning:

  1. Automatic semantic labeling, which I interpreted as automatic indexing
  2. Natural language processing, which I understand suggests the user takes the time to write a question that is neither too broad nor too narrow. Like the children’s story, the query is “just right.”
  3. Assembly of bits and chunks of indexed documents into an answer. For me the idea is that the system does not generate a list of hits that are probably germane to the query. The Holy Grail of search is delivering to the often lazy, busy, or clueless user an answer. Google does this for mobile users by looking at a particular user’s behavior and the clusters to which the user belongs in the eyes of Google math, and just displaying the location of the pizza joint or the fact that a parking garage at the airport has an empty space.
  4. The system figures out parts of speech, various relationships, and who-does-what-to-whom. Parts of speech tagging has been around for a while, and it works as long as the text processed is not in the argot of a specialist group plotting some activity in a favela in Rio.
  5. The system performs the “e” function. I interpreted the “e” to mean a variant of synonym expansion; a simple form of the idea appears in the sketch after this list. DR LINK, for example, was able in 1998 to process the phrase white house and display content relevant to presidential activities. I don’t recall how the expansion moved from the bound phrase to presidential to Clinton. I do recall that DR LINK had what might be characterized as a healthy appetite for computing resources to perform its expansions during indexing and during query processing. This stuff is symmetrical. What happens to source content has to happen during query processing in some way.
  6. Relevance ranking takes place. Various methods are in use by search and content processing vendors. Some are based on statistical methods. Others are based on numerical recipes that the developer knows can be computed within the limits of the computer systems available today. No P=NP, please. This is search.
  7. There are linguistic patterns. When I read about linguistic patterns I recall the wild and crazy linguistic methods of Delphes, for example. Linguistics is in demand today, and so are specialist vendors like Bitext in Madrid, Spain. English, Chinese, and Russian are widely used languages. But darned useful information is available in other languages. Many of these are kept fresh via neologisms and slang. I often asked my intelligence community audiences, “What does teddy bear mean?” The answer is NOT a child’s toy. The clue is the price tag suggested on sites like eBay auctions.
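As promised in point five, here is a minimal sketch of symmetric synonym expansion applied at both index and query time. The tiny synonym table is a hypothetical stand-in for the knowledge bases a system like DR LINK maintained.

```python
# Hypothetical synonym table; real systems maintain far larger, curated knowledge bases.
SYNONYMS = {
    "white house": {"president", "executive branch"},
    "car": {"automobile", "vehicle"},
}

def expand_text(text):
    """Return the tokens of a text plus synonyms for any phrase the text contains."""
    text = text.lower()
    terms = set(text.split())
    for phrase, alternates in SYNONYMS.items():
        if phrase in text:
            terms.update(alternates)
    return terms

def index_document(doc_id, text, index):
    """Index a document under its expanded term set."""
    for term in expand_text(text):
        index.setdefault(term, set()).add(doc_id)

def search(query, index):
    """Expand the query with the same routine and collect matching documents."""
    hits = set()
    for term in expand_text(query):
        hits |= index.get(term, set())
    return hits

index = {}
index_document("doc1", "White House briefing schedule", index)
print(search("president", index))  # the expansion links a 'president' query to the document
```

The symmetry is the cost driver: whatever work the expansion does to documents at indexing time, something equivalent happens to every query, which is why DR LINK’s appetite for hardware did not surprise me.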

The interesting angle in 730 is the causal relationship. When applied to processes in the knowledge bases, I can see how a group of patents can be searched for a process. The result list could display ways to accomplish a task. NOTting out patents for which a royalty is required leaves the searcher with systems and methods that can be used, ideally without any hassles from attorneys or licensing agents.

Several questions popped into my mind as I reviewed the claims. Let me highlight three of these:

First, there is the computational load when large numbers of new documents and changed content have to be processed. The indexes have to be updated. For small domains of content like 50,000 technical reports created by an engineering company, I think the system will zip along like a 2014 Volkswagen Golf.

image

Source: US8666730, Figure 1

When terabytes of content arrive every minute, then the functions set forth in the block diagram for 730 have to be appropriately resourced. (For me, “appropriately resourced” means lots of bandwidth, storage, and computational horsepower.)

Second, the knowledge base, as I thought about it when I first read the patent, has to be kept in tip top shape. For scientific, technical, and medical content, this is a more manageable task. However, when processing intercepts in slang filled Pashto, there is a bit more work required. In general, high volumes of non technical lingo become a bottleneck. The bottleneck can be resolved, but none of the solutions are likely to make a budget conscious senior manager enjoy his lunch. In fact, the problem of processing large flows of textual content is acute. Short cuts are put in place and few of those in the know understand the impact of trimming on the results of a query. Don’t ask. Don’t tell. Good advice when digging into certain types of content processing systems.

Third, the reference to databases begs this question, “What is the amount of storage required to reduce index latency to less than 10 seconds for new and changed content?” Another question, “What is the gap that exists for a user asking a mission critical question between new and changed content and the indexes against which the mission critical query is passed?” This is not system response time, which as I recall for DR LINK era systems was measured in minutes. The user sends a query to the system. The new or changed information is not yet in the index. The user makes a decision (big or small, significant or insignificant) based on incomplete, incorrect, or stale information. No big problem if one is researching a competitor’s new product. Big problem when trying to figure out what missile capability exists now in a region of conflict.

My interest is enterprise search. IHS, a professional publishing company that is in the business of licensing access to its for fee data, seems to be moving into the enterprise search market. (See http://bit.ly/1o4FyL3.) My researchers (an unreliable bunch of goslings) and I will be monitoring the success of IHS. Questions of interest to me include:

  1. What is the fully loaded first year cost of the IHS enterprise search solution? For on premises installations? For cloud based deployment? For content acquisition? For optimization? For training?
  2. How will the IHS system handle flows of real time content into its content processing system? What is the load time for 100 terabytes of text content with an average document size of 50 Kb? What happens to attachments, images, engineering drawings, and videos embedded in the stream as native files or as links to external servers?
  3. What is the response time for a user’s query? How does the user modify a query in a manner so that result sets are brought more in line with what the user thought he was requesting?
  4. How do answers make use of visual outputs which are becoming increasingly popular in search systems from Palantir, Recorded Future, and similar providers?
  5. How easy is it to scale content processing and index refreshing to keep pace with the doubling of content every six to eight weeks that is becoming increasingly commonplace for industrial strength enterprise search systems? How much reengineering is required for log scale jumps in content flows and user queries?

Take a look at 730 and others in the Invention Machine (IHS) patent family. My hunch is that if IHS is looking for a big bucks return from enterprise search sales, IHS may find that its narrow margins will be subjected to increased stress. Enterprise search has never been nor is now a license to print money. When a search system does pump out hundreds of millions in revenue, it seems that some folks are skeptical. Autonomy and Fast Search & Transfer are companies with some useful lessons for those who want a digital Klondike.

Stephen E Arnold, July 31, 2014

IHS Enterprise Search: Semantic Concept Lenses Are Here

July 29, 2014

I pointed out in http://bit.ly/X9d219 that IDC, a mid tier consulting firm that has marketed my information without permission on Amazon of all places, has rolled out a new report about content processing. The academic sounding title is “The Knowledge Quotient: Unlocking the Hidden Value of Information.” Conflating knowledge and information is not logically satisfying to me. But you may find the two words dusted with “value” just the ticket to career success.

I have not read the report, but I did see a list of the “sponsors” of the study. The list, as I pointed out, was an eclectic group, including huge firms struggling for credibility (HP and IBM) down to consulting firms offering push ups for indexers.

One company on my list caused me to go back through my archive of search information. The firm that sparked my interest is Information Handling Services or IHS or Information Handling Service. The company is publicly traded and turning a decent profit. The revenue of IHS has moved toward $2 billion. If the global economy perks up and the defense sector is funded at pre-drawdown levels, IHS could become a $2 billion company.

IHS is a company with an interesting history and extensive experience with structured and unstructured search. Few of those with whom I interacted when I was working full time considered IHS a competitor to the likes of Autonomy, Endeca, and Funnelback.

In the 2013 10-K on page 20, IHS presents its “cumulative total return” in this way:

image

The green line looks like money. Another slant on the company’s performance can be seen in a chart available from Google Finance.

The Google chart shows that revenue is moving upwards, but operating margins are drifting downward and operating income is suppressed. As at Amazon, the costs of operating an information centric company are difficult to control. Amazon seems to have thrown in the towel. IHS is managing like the Dickens to maintain a profit for its stakeholders. For those stakeholders, the hope is that hefty profits will be forthcoming.

image

Source: Google Finance

My initial reaction was, “Is IHS trying to find new ways to generate higher margin revenue?”

Like Thomson Reuters and Reed Elsevier, IHS required different types of content processing plumbing to deliver its commercial databases. Technical librarians and the competitive intelligence professionals monitoring the defense sector are likely to know about IHS’ different products. The company provides access to standards documents, regulatory information, and Jane’s military hardware information services. (Yep, Jane’s still has access to retired naval officers with mutton chop whiskers and interesting tweed outfits. I observed these experts when I visited the company in England prior to IHS’s purchase of the outfit.)

The standard descriptions of IHS peg the company’s roots to a trade magazine outfit called Rogers Publishing. My former boss at Booz, Allen & Hamilton loved some of the IHS technical services. He was, prior to joining Booz, Allen, the head of research at Martin Marietta, an IHS customer in the 1970s. Few remember that IHS was once tied in with Thyssen Bornemisza. (For those with an interest in history, there are some reports about the Baron that are difficult to believe. See http://bit.ly/1qIylne.)

Large professional publishing companies were early, if somewhat reluctant, supporters of SGML and XML. Running a query against a large collection of structured textual information could be painfully slow when one relied on traditional relational database management systems in the late 1980s. Without SGML/XML, repurposing content required humans. With scripts hammering on SGML/XML, creating new information products like directories and reports eliminated the expensive humans for the most part. Fewer expensive humans in the professional publishing business reduces costs…for a while at least.

IHS climbed on the SGML/XML diesel engine and began working to deliver snappy online search results. As profit margins for professional publishers were pressured by increasing marketing and technology costs, IHS followed the path of other information centric companies. IHS began buying content and services companies that, in theory, would give the professional publishing company a way to roll out new, higher margin products. Even secondary players in the professional publishing sector like Ebsco Electronic Publishing wanted to become billion dollar operations and then get even bigger. Rah, rah.

These growth dreams electrify many information companies’ executives. The thought that every professional publishing company and every search vendor are chasing finite or constrained markets does not get much attention. Moving from dreams to dollars is getting more difficult, particularly in professional publishing and content processing businesses.

My view is that packaging up IHS content and content processing technology got a boost when IHS purchased the Invention Machine in mid 2012.

Years ago I attended a briefing by the founders of the Invention Machine. The company demonstrated that an engineer looking for a way to solve a problem could use the Invention Machine search system to identify candidate systems and methods from the processed content. I recall that the original demonstration data set was US patents and patent applications. My thought was that an engineer looking for a way to implement a particular function for a system could, if the Invention Machine system worked as presented, generate a patent result set. That result set could be scanned to eliminate any patents still in force. The resulting set of patents might yield a procedure that the person looking for a method could implement without having to worry about an infringement allegation. The original demonstration was okay, but like most “new” search technologies, Invention Machine faced funding, marketing, and performance challenges. IHS acquired Invention Machine, its technologies, its Eastern European developers, and embraced the tagging, searching, and reporting capabilities of the Invention Machine.

The Goldfire idea is that an IHS client can license certain IHS databases (called “knowledge collections”) and then use Goldfire / Invention Machine search and analytic tools to get the knowledge “nuggets” needed to procure a missile guidance component.

The jargon for this finding function is “semantic concept lenses.” If the licensee has content in a form supported by Goldfire, the licensee can search and analyze IHS information along with information the client has from its own sources. A bit more color is available at http://bit.ly/WLA2Dp.

The IHS search system is described in terms familiar to a librarian and a technical analyst; for example, here are the attributes for Goldfire “cloud” from an IHS 2013 news release:

  • “Patented semantic search technology providing precise access to answers in documents. [Note: IHS has numerous patents but it is not clear what specific inventions or assigned inventions apply directly to the search and retrieval solution(s)]
  • Access to more than 90 million scientific and technical “must have” documents curated by IHS. This aggregated, pre-indexed collection spans patents, premium IHS content sources, trusted third-party content providers, and the Deep Web.
  • The ability to semantically index and research across any desired web-accessible information such as competitive or supplier websites, social media platforms and RSS feeds – turning these into strategic knowledge assets.
  • More than 70 concept lenses that promote rapid research, browsing and filtering of related results sets thus enabling engineers to explore a concept’s definitions, applications, advantages, disadvantages and more.
  • Insights into consumer sentiment giving strategy, product management and marketing teams the ability to recognize customer opinions, perceptions, attitudes, habits and expectations – relative to their own brands and to those of their partners’ and competitors’ – as expressed in social media and on the Web.”

Most of these will resonate with those familiar with the assertions of enterprise search and content processing vendors. The spin, which I find notable, is that IHS delivers both content and information retrieval. Most enterprise search vendors provide technology for finding and analyzing data. The licensee has to provide the content unless the enterprise search vendor crawls the Web or other sources, creates an archive or a basic index, and then provides an interface that is usually positioned as indexing “all content” for the user.

According to Virtual Strategy Magazine (which presumably does not cover “real” strategy), I learned that US 8666730:

covers the semantic concept “lenses” that IHS Goldfire uses to accelerate research. The lenses correlate with the human knowledge system, organizing and presenting answers to engineers’ or scientists’ questions – even questions they did not think to ask. These lenses surface concepts in documents’ text, enabling users to rapidly explore a concept’s definitions, applications, advantages, disadvantages and more.

The key differentiator is claimed to move IHS Goldfire up a notch. The write up states:

Unlike today’s textual, question-answering technologies, which work as meta-search engines to search for text fragments by keyword and then try to extract answers similar to the text fragment, the IHS Goldfire approach is entirely unique – providing relevant answers, not lists of largely irrelevant documents. With IHS Goldfire, hundreds of different document types can be parsed by a semantic processor to extract semantic relationships like subject-action-object, cause-and-effect and dozens more. Answer-extraction patterns are then applied on top of the semantic data extracted from documents and answers are saved to a searchable database.
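That description boils down to extracting relations at index time and answering from them at query time. Here is a minimal sketch of the pattern, using a small SQLite table as a stand-in for whatever answer database Goldfire actually builds.

```python
import sqlite3

# In-memory stand-in for an answer database of extracted relations.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE relations (subject TEXT, action TEXT, object TEXT, source TEXT)")

# Hypothetical triples, as if produced by a semantic processor at indexing time.
conn.executemany(
    "INSERT INTO relations VALUES (?, ?, ?, ?)",
    [
        ("centrifugal pump", "transfers", "coolant", "doc-0042"),
        ("seal failure", "causes", "coolant leak", "doc-0108"),
    ],
)

def answer(question_subject):
    """Answer a 'what does X do?' style question by lookup instead of returning a list of documents."""
    rows = conn.execute(
        "SELECT action, object, source FROM relations WHERE subject = ?",
        (question_subject.lower(),),
    ).fetchall()
    return [f"{question_subject} {action} {obj} (see {source})" for action, obj, source in rows]

print(answer("centrifugal pump"))
```

The marketing question is not whether such a pipeline can be built; it is what it costs to keep the extraction and the answer database current at enterprise scale.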

According to Igor Sovpel, IHS Goldfire:

“Today’s engineers and technical professionals are underserved by traditional Internet and enterprise search applications, which help them find only the documents they already know exist,” said Igor Sovpel, chief scientist for IHS Goldfire. “With this patent, only IHS Goldfire gives users the ability to quickly synthesize optimal answers to a variety of complex challenges.”

Is IHS’ new marketing push in “knowledge” and related fields likely to have an immediate and direct impact on the enterprise search market? Perhaps.

There are several observations that occurred to me as I flipped through my archive of IHS, Thyssen, and Invention Machine information.

First, IHS has strong brand recognition in what I would call the librarian and technical analyst for engineering demographic. Outside of lucrative but quite niche markets for petrochemical information or silhouettes and specifications for the SU 35, IHS suffers the same problem as Thomson Reuters and Wolters Kluwer. Most senior managers are not familiar with the company or its many brands. Positioning Goldfire as an enterprise search or enterprise technical documentation/data analysis tool will require a heck of a lot of effective marketing. Will positioning IHS cheek by jowl with IBM and a consulting firm that teaches indexing address this visibility problem? The odds could be long.

Second, search engine optimization folks can seize on the name Goldfire and create some dissonance for IHS in the public Web search indexes. I know that companies like Attivio and Microsoft use the phrase “beyond search” to attract traffic to their Web sites. I can see the same thing happening here. IHS competes with other professional publishing companies looking for a way to address their own marketing problems. A good SEO name like “Goldfire” could come under attack, and quickly. I can envision lesser competitors usurping IHS’ value claims, which may delay some sales or further confuse an already uncertain prospect.

Third, enterprise search and enterprise content analytics are proving to be a difficult market from which to wring profitable, sustainable revenue. If IHS is successful, the third party licensees of IHS data who resell that information to their online customers might move to renegotiate their revenue sharing contracts. IHS would then have to ramp up its enterprise search revenues to keep pace with or outpace the revenues from those third party licensees. Addressing this problem could be interesting for the managers responsible for the negotiations.

Finally, enterprise search has a lot of companies planning to generate millions or billions from search. There can be only one prom queen and a small number of “close but no cigar” runners up. Which company will snatch the crown?

This IHS search initiative will be interesting to watch.

Stephen E Arnold, July 29, 2014

HP Autonomy Opens IDOL APIs to App Developers

July 29, 2014

App developers can now work with HP Autonomy’s Intelligent Data Operating Layer engine through the company’s new API program. We learned about the initiative from eWeek’s “HP Autonomy’s IDOL OnDemand APIs Nurture Apps Ecosystem.” The piece by Darryl K. Taft presents a slide show with examples of those APIs being put to use. He writes:

“IDOL OnDemand delivers Web service APIs that allow developers to tap into the explosive growth of unstructured information to build a new generation of apps…. IDOL OnDemand APIs include a growing portfolio of APIs within the format conversion, image analysis, indexing, search, and text analysis categories. Through an early access program, hackathons and several TopCoder challenges, some great apps have emerged. During the weekend of June 7-8, developers participated in an IDOL OnDemand Hackathon in San Francisco, where participants built apps using IDOL OnDemand Web service APIs. This slide show covers several of the early apps to emerge from these events. Enterprise developers are also adopting the IDOL OnDemand platform, with big names such as PwC and HP taking advantage of the developer-friendly technology to accelerate their development projects using the API’s.”
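
As an aside, here is a rough idea of what calling one of these Web service APIs looks like from Python. This is a minimal sketch, not official sample code: the endpoint path, version, and parameter names are assumptions based on the synchronous REST pattern HP described at the time and may have changed, and an API key from the program is required.

    import requests  # third party HTTP client

    # Illustrative only: the endpoint path, version, and parameter names are assumptions
    # and may not match the current IDOL OnDemand documentation.
    API_KEY = "your-idol-ondemand-api-key"  # issued when you join the program
    URL = "https://api.idolondemand.com/1/api/sync/analyzesentiment/v1"

    response = requests.get(URL, params={
        "text": "The new release is fast and the indexing is painless.",
        "apikey": API_KEY,
    })
    response.raise_for_status()
    print(response.json())  # aggregate sentiment plus the phrases that drove the score

Swapping in a different API name would, presumably, give access to the other categories Taft lists, such as format conversion, indexing, and search.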

See the slide show for a look at 12 of these weekend projects. Developers should then check out the IDOL OnDemand site for more information. Founded in 1996, Autonomy grew from research originally performed at Cambridge University. Its solutions help prominent organizations around the world manage large amounts of data. Tech giant HP famously purchased the company in 2011.

Cynthia Murrell, July 29, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Google and Findability without the Complexity

July 28, 2014

Shortly after I wrote the first draft of Google: The Digital Gutenberg, “Enterprise Findability without the Complexity” became available on the Google Web site. You can find this eight page polemic at http://bit.ly/1rKwyhd, or you can search for the title on—what else?—Google.com.

Six years after the document became available, the points Google’s anonymous marketer/writer raised about enterprise search remain interesting. The document appeared just as the enterprise search sector was undergoing another major transformation. Fast Search & Transfer had struggled to deliver robust revenues, and a few months before the Google document became available, Microsoft paid $1.2 billion for what turned out to be another enterprise search flame out. As you may recall, in 2008 Convera was essentially non operational as an enterprise search vendor. In 2005, Autonomy had bought the once high flying Verity and was exerting its considerable management talent to become the first enterprise search vendor to top $500 million in revenues. Endeca was flush with Intel and SAP cash, having passed on other types of financial instruments due to the economic downturn. Endeca lagged behind Autonomy in revenues, and there was little hope it could close the gap.

Secondary enterprise search companies were struggling to generate robust top line revenues. Enterprise search was not a popular term. Companies from Coveo to Sphinx sought to describe their information retrieval systems in terms of functions like customer support or database access to content stored in MySQL. Vivisimo donned a variety of descriptions, culminating in its “reinvention” as a Big Data tool rather than a metasearch system with a nifty on the fly clustering algorithm. IBM was becoming more infatuated with open source search as a way to shift development and bug fixes to a “community” working for the benefit of other like minded developers.

image

Google’s depiction of the complexity of traditional enterprise search solutions. The GSA is, of course, less complex—at least on the surface exposed to an administrator.

Google’s Findability document identified a number of important problems associated with traditional enterprise search solutions. To Google’s credit, the company did not point out that the majority of enterprise search vendors (regardless of the verbal plumage used to describe information retrieval) were either losing money or engaged in a somewhat frantic quest for financing and sales.

Here are the issues Google highlighted:

  • Users of search systems are frustrated
  • Enterprise search is complex. Google used the word “daunting”, which was and still is accurate
  • Few systems handle file shares, Intranets, databases, content management systems, and real time business applications with aplomb. Of course, Google asserted, its own enterprise search solution delivers on these points.

Furthermore, Google emphasized “integrated search results”: structured and unstructured information from different sources presented in a single, unified set of results.

Google also emphasized a personalized experience. Due to the marketing nature of the Findability document, Google did not point out that personalization was already a feature of information retrieval systems lashed to an alert and work flow component. Fulcrum Technologies offered a clumsy option for personalization. iPhrase improved on the approach. Even Endeca supported roles, important for the company’s work at Fidelity Investments in the UK. But in Google’s telling, most enterprise search systems did not personalize with Google aplomb.

Google then trotted out the old chestnuts gleaned from lunch discussions with other Googlers and from sifting competitors’ assertions, consultants’ pronouncements, and beliefs about search that seemed to be self-evident truths; for example:

  • Improved customer service
  • Speeding innovation
  • Reducing information technology costs
  • Accelerating adoption of search by employees who don’t get with the program.

Google concluded the Findability document with what has become a touchstone for the value of the Google Search Appliance. Kimberly Clark, “a global health and hygiene company,” reduced administrative costs for indexing 22 million documents. The costs of the Google Search Appliance, the consultant fees, and the extras like GSA fail over provisions were not mentioned. Hard numbers, even for Google, are not part of the important stuff about enterprise search.

One interesting semantic feature caught my attention. Google does not use the word “knowledge” in this 2008 document.

Several questions:

  1. Was Google unaware of the fusion of information retrieval and knowledge?
  2. Does the Google Search Appliance deliver a laundry list of results, not knowledge? (A GSA user has to scan the results, click on links, and figure out what’s important to the matter at hand, so the word “knowledge” is inappropriate.)
  3. Why did Google sidestep providing concrete information about costs, productivity, and the value of indexing more content that is allegedly germane to a “personalized” search experience? Are there data to support the implicit assertion that “more is better”? Returning more results may mean that the poor user has to do more digging to find useful information. What about a few, on point results? Well, that’s not what today’s technology delivers. It is a fiction about which vendors and customers seem to suspend disbelief.

With a few minor edits—for example, a genuflection to “knowledge”—this 2008 Findability essay is as fresh today as it was when Google output its PDF version.

Several observations:

First, the freshness of the Findability paper underscores the staleness and stasis of enterprise search over the past six years. If you scan the free search vendor profiles at www.xenky.com/vendor-profiles, you will see that explanations of the benefits and functions of search from the 1980s are still applicable today. Search, the enterprise variety, seems to be like a Grecian urn which “time cannot wither.”

Second, the assertions about the strengths and weaknesses of search were and still are presented without supporting facts. Everyone in the enterprise search business recycles the same cant. The approach reminds me of my experience questioning a member of a sect. The answer “It just is…” is simply not good enough.

Third, the Google Search Appliance has become a solution that costs as much as, if not more than, other big dollar systems. Just run a query for the Google Search Appliance on www.gsaadvantage.gov and check out the options and pricing. Little wonder that low cost solutions—whether they are better or worse than expensive systems—are in vogue. Elasticsearch and Searchdaimon can be downloaded without charge. A hosted version is available from Qbox.com and is relatively free of headaches and seven figure charges.
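
To make the cost contrast concrete, here is a minimal sketch of indexing and querying one document against a freshly downloaded Elasticsearch node over its REST interface, using Python. The index name, type name, and sample document are invented for the example; the type-in-the-URL style matches the Elasticsearch 1.x releases current as I write this, and the node is assumed to be running with default settings on localhost.

    import json

    import requests  # third party HTTP client; assumes an Elasticsearch node on localhost:9200

    BASE = "http://localhost:9200"  # default address for an out-of-the-box node

    # Index one document; the "reports" index and the field names are invented for this example.
    # refresh=true makes the document searchable immediately (fine for a demo, not for bulk loads).
    doc = {"title": "Pump maintenance report", "body": "Seal wear caused the failure."}
    resp = requests.put(BASE + "/reports/report/1",
                        params={"refresh": "true"},
                        data=json.dumps(doc),
                        headers={"Content-Type": "application/json"})
    resp.raise_for_status()

    # Query it back with a simple URI search.
    hits = requests.get(BASE + "/reports/_search", params={"q": "failure"}).json()
    print(hits["hits"]["total"])  # count of matching documents

That is the whole round trip: no appliance, no connector licenses, and no consulting engagement for a basic index.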

Net net: Enterprise search is going to have to come up with some compelling arguments to gain momentum in a world of Big Data, open source, and once burned, twice shy buyers. I wonder why venture / investment firms continue to pump money into what is the same old search packaged with decades old lingo.

I suppose the idea that a venture funded operation like Attivio, BA Insight, Coveo, or any other company pitching information access will become the next Google is powerful. The problem is that Google does not seem capable of making its own enterprise search solution into another Google.

This is indeed interesting.

Stephen E Arnold, July 28, 2014
