Indexing Rah Rah Rah!

May 4, 2015

Enterprise search is one of the most important features for enterprise content management systems, and there is a huge industry for designing and selling taxonomies.  The key selling features for taxonomies are their diversity, accuracy, and quality.  The categories within taxonomies make it easier for people to find their content, but TechTarget’s Search Content Management blog says there is room for improvement in the post “Search-Based Applications Need The Engine Of Taxonomy.”

Taxonomies are used for faceted search, allowing users to expand and limit their search results.  Faceted search gives users a selection of ways to refine their results, including file type, keywords, and the ever popular content categories. Users usually do not work with the categories directly; they operate behind the scenes, and the aggregated results appear on the dashboard.
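
The mechanics are easy to demo. Below is a minimal sketch of the facet request behind such a dashboard, assuming a local Solr instance and hypothetical file_type and category fields (the endpoint and field names are illustrative, not any vendor’s actual setup):

```python
import requests

# Hypothetical local Solr collection; endpoint and field names are
# assumptions for illustration, not a specific product's schema.
SOLR_URL = "http://localhost:8983/solr/documents/select"

params = {
    "q": "quarterly report",                    # the user's keywords
    "facet": "true",                            # turn faceting on
    "facet.field": ["file_type", "category"],   # taxonomy-backed facets
    "rows": 10,
}

response = requests.get(SOLR_URL, params=params).json()

# facet_fields holds flat lists alternating value and count,
# e.g. {"file_type": ["pdf", 120, "docx", 45, ...]} -- the numbers a
# UI displays next to each facet for expanding or limiting results.
print(response["facet_counts"]["facet_fields"])
```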

Taxonomies, however, take their information from more than what the user provides:

“We are now able to assemble a holistic view of the customer based on information stored across a number of disparate solutions. Search-based applications can also include information about the customer that was inferred from public content sources that the enterprise does not own, such as news feeds, social media and stock prices.”

Whether you know it or not, taxonomies are vital to enterprise search.  Companies that have difficulty finding their content need to consider creating a taxonomy plan or investing in category lists from a proven company.

Whitney Grace, May 4, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

BA Insight: More Auto Classification for SharePoint

April 30, 2015

I thought automatic indexing and classifying of content was a slam dunk. One could download Elastic and Carrot2 or just use Microsoft’s tools to whip up a way to put accounting tags on accounting documents and planning tags on strategic management documents.
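
To be fair, a bare bones version of that “whip up” really is a few lines. A minimal sketch of a home brew tagger, with an invented training set and invented labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set: a few labeled snippets per tag.
docs = [
    "invoice ledger accounts payable quarterly balance",
    "depreciation schedule audit reconciliation",
    "five year roadmap market positioning objectives",
    "swot analysis competitive strategy initiatives",
]
tags = ["accounting", "accounting", "planning", "planning"]

# TF-IDF features feeding a naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(docs, tags)

# Tag an unseen document; real corpora need far more training data.
print(model.predict(["audit of accounts payable balances"])[0])
```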

There are a number of SharePoint centric “automated solutions” available, and now there is one more.

I noticed on the BA Insight Web site this page:

[Screenshot: BA Insight automatic classification page]

There was some rah rah in US and Australian publications. But the big point is that either SharePoint administrators have a problem that existing solutions cannot solve or the competitors’ solutions don’t work particularly well.

My hunch is that automatic indexing and classifying in a wonky SharePoint setup is a challenge. The indexing can be done by humans and be terrible. Alternatively, the tagging can be done by an automated system and be terrible.

The issues range from entity resolution (remember the different spellings of Al Qaeda) to “drift.” In my lingo, “drift” means that the starting point for automated indexing just wanders as more content flows through the system and the administrator does not provide the time consuming and often expensive tweaking to get the indexing back on track.
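
Drift can be watched with nothing fancier than comparing the tag distribution of freshly processed content against a trusted baseline. A sketch with invented numbers and an arbitrary threshold:

```python
from math import log

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence between two tag distributions."""
    return sum(p_i * log((p_i + eps) / (q.get(tag, 0.0) + eps))
               for tag, p_i in p.items())

# Invented tag frequencies: last quarter's baseline vs. this month.
baseline = {"accounting": 0.40, "planning": 0.35, "hr": 0.25}
current = {"accounting": 0.15, "planning": 0.20, "hr": 0.65}

drift = kl_divergence(current, baseline)
if drift > 0.2:  # threshold is arbitrary; tune against your own corpus
    print(f"Drift detected (KL = {drift:.2f}); retuning needed")
```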

There are smarter systems than some of those marketed to the struggling SharePoint licensees. I profile a number of NGIA systems in my new monograph CyberOSINT: Next Generation Information Access.

The SharePoint folks are not featured in my study because the demands of real time, multi lingual content processing do not mesh with solutions from more traditional vendors.

On any given day, I am asked to sit through Webinars about concepts, semantics, and classification. If these solutions worked, the market for SharePoint add ins would begin to coalesce.

So far, dealing with the exciting world of SharePoint content processing remains a work very much in progress.

Stephen E Arnold, April 30, 2015

Ontotext Pursues Visibility

April 23, 2015

Do you know Ontotext? The company is making an effort to become more visible. Navigate to “Vassil Momtchev talks Insights with the Bloor Group.” The interview provides a snapshot of the company’s history, which dates from 2001. After 14 years, the interview reports, Ontotext “keeps its original company spirit.”

Other points from the write up:

  • The company’s technology makes use of semantic and ontology modeling
  • A knowledge base represents complex information and makes asking questions of it easier (see the sketch below)
  • Semantic technology can deliver complete applications.
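
Asking questions of such a knowledge base typically means SPARQL. A minimal sketch, assuming a hypothetical GraphDB repository and an invented ex: ontology (neither is Ontotext’s published schema):

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# The repository URL and the ex: ontology are assumptions for
# illustration only.
endpoint = SPARQLWrapper("http://localhost:7200/repositories/customers")
endpoint.setQuery("""
    PREFIX ex: <http://example.com/ontology#>
    SELECT ?customer ?mention WHERE {
        ?customer a ex:Customer ;
                  ex:mentionedIn ?mention .
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["customer"]["value"], "->", row["mention"]["value"])
```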

For more information about Ontotext and its “ontological” approach, visit the company’s Web site at www.ontotext.com.

Stephen E Arnold, April 23, 2015

Enterprise Search Is Important: But Vendor Survey Fails to Make Its Case

March 20, 2015

I read “Concept Searching Survey Shows Enterprise Search Rises in the Ranks of Strategic Applications.” Over the years, I have watched enterprise search vendors impale themselves on their swords. In a few instances, licensees of search technology loosed legal eagles to beat the vendors to the ground. Let me highlight a few of the milestones in enterprise search before commenting on this “survey says, it must be true” news release.

A Simple Question?

What do these companies have in common?

  • Autonomy
  • Convera
  • Fast Search & Transfer

I know from my decades of work in the information retrieval sector that financial doubts plagued these firms. Autonomy, as you know, is the focal point of on-going litigation over accounting methods, revenue, and its purchase price. Autonomy achieved significant revenues, and some financial firms wondered how the company generated its hundreds of millions. There was a report from Cazenove Capital I saw years ago, and it contained analyses suggesting search was not the money machine for the company.

And Convera? After Excalibur acquired the manual indexing outfit ConQuest Technologies, the document scanning and brute force search business morphed into Convera. Convera suggested that it could perform indexing magic on text and video. Intel dived in and so did the NBA. These two deals did not work out, and the company fell on hard times. With an investment from Allen & Company, Convera tried its hand at Web indexing. Finally, stakeholders lost faith, and Convera sold off its government sales operation and folded its tent. (Some of the principals cooked up another search company. This time the former Convera wizards got into the consulting engineering business.) Convera lives on in a sense as part of the Ntent system. Convera lost some money along the way. Lots of money as I recall.

And Fast Search? Microsoft paid $1.2 billion for Fast Search & Transfer. Now the 1998 technology lives on within Microsoft SharePoint. But Fast Search has the unique distinction of facing a financial investigation for fancy dancing with its profit and loss statement and of having its founder face a jail term. Fast Search ran into trouble when its marketers promised magic from the ESP system; when the pixie dust caused licensees to develop an allergic reaction, the scrambling began. Some managers fled the floundering Norwegian search ship and founded another search company. For those who struggle with Fast Search in its present guise, you understand the issues created by Fast Search’s “sell it today and program it tomorrow” approach.

Is There a Lesson in These Vendors’ Trajectories?

What do these three examples tell us? High flying enterprise search vendors seem to have run into some difficulties. Not surprisingly, the customers of these companies are often wary of enterprise search. Perhaps that is the reason so many enterprise search vendors do not use the words “enterprise search”, preferring euphemisms like customer support, business intelligence, and knowledge management?

The Rush to Sell Out before Drowning in Red Ink

Now a sidelight. Before open source search effectively became the go-to keyword search system, there were vendors whose products, for the most part, worked when installed to do basic information retrieval. These companies’ executives worked overtime to find buyers. The founders cashed out and left the new owners to figure out how to make sales, pay for research, and generate sufficient revenue to earn back the purchase price. Which companies are these? Here’s a short and incomplete list to help jog your memory:

  • Artificial Linguistics (sold to Oracle)
  • BRS Search (sold to OpenText)
  • EasyAsk (first to Progress Software and then to an individual investor)
  • Endeca to Oracle
  • Enginium (sold to Kroll and now out of business)
  • Exalead to Dassault
  • Fulcrum Technology to IBM (quite a story. See the Fulcrum profile at www.xenky.com/vendor-profiles)
  • InQuira to Oracle
  • Information Dimensions (sold to OpenText)
  • Innerprise (Microsoft centric, sold to GoDaddy)
  • iPhrase to IBM (iPhrase was a variant of Teratext’s approach)
  • ISYS Search Software to Lexmark (yes, a printer company)
  • RightNow to Oracle (RightNow acquired Dutch technology for its search function)
  • Schemalogic to Smartlogic
  • Stratify/Purple Yogi (sold to Iron Mountain and then to Autonomy)
  • Teratext to SAIC, now Leidos
  • TripleHop to Oracle
  • Verity to Autonomy and then HP bought Autonomy
  • Vivisimo to IBM (how clustering and metasearch magically became a Big Data system from the company that “invented” Watson).

The brand impact of these acquired search vendors is dwindling. The only “name” on the list which seems to have some market traction is Endeca.

Some outfits just did not make it or are in a very quiet, almost dormant, mode. Consider these search vendors:

  • Delphes (academic thinkers with linguistic leanings)
  • Edgee
  • Dieselpoint (structured data search)
  • DR LINK (Syracuse University and an investment bank)
  • Executive Search (not a headhunting outfit, an enterprise search outfit)
  • Grokker
  • Intrafind
  • Kartoo
  • Lextek International
  • Maxxcat
  • Mondosoft
  • Pertimm (reincarnated with Axel Springer money as Qwant, which, according to Eric Schmidt, is a threat to Google. Yeah, right.)
  • Siderean Software (semantic search)
  • Speed of Mind
  • Suggest (Weitkämper Technology)
  • Thunderstone

These lists are not comprehensive. I just wanted to lay out some facts about vendors who tilted at the enterprise search windmill. I think a reasonable person might conclude that enterprise search has been a tough sell. Of the companies that developed a brand, none was able to achieve sustainable revenues. The information highway is littered with the remains of vendors who pitched enterprise search as the killer app for anything to do with information.

Now the survey purports to reveal insights to which I have been insensitive in my decades of work in digital information access.

Here’s what the company sponsoring the survey offers:

Concept Searching [the survey promulgator], the global leader in semantic metadata generation, auto-classification, and taxonomy management software, and developer of the Smart Content Framework™, is compiling the statistics from its 2015 SharePoint and Office 365 Metadata survey, currently unpublished. One of the findings, gathered from over 360 responses, indicates a renewed focus on improving enterprise search.

The focus seems to be on SharePoint. I thought SharePoint was a mishmash of content management, collaboration, and contacts along with documents created by the fortunate SharePoint users. Question: Is enterprise search conflated with SharePoint?

I would not make this connection.

If I understand this, the survey makes clear that some of the companies in the “sample” (method of selection not revealed) want better search. I want better information access, not search per se.

Each day I use dozens of software applications which require information access activity.  I also have a number of “enterprise” search systems available to me. Nevertheless, the finding suggests to me that enterprise search is not and has not been particularly good. If I put on my SharePoint sunglasses, I see a glint of the notion that SharePoint search is not very good: the dying sparks of Fast Search technology smoldering in the fire at Camp DontWorkGud.

Images, videos, and audio content present me with a challenge. Enterprise search and metatagging systems struggle to deal with these content types. I also get odd ball file formats; for example, Framemaker, Quark, and AS/400 DB2 UDB files.

The survey points out that the problem with enterprise search is that indexing is not very good. That may be an understatement. But the remedy is not just indexing, is it?

After reading the news release, I formed the opinion that the fix is to use the type of system available from the survey sponsor Concept Searching. Is that a coincidence?

Frankly, I think the problems with search are more severe than bad indexing, whether performed by humans or traditional “smart” software.

According to the news release, my view is not congruent with the survey or the implications of the survey data:

A new focus on enterprise search can be viewed as a step forward in the management and use of unstructured content. Organizations are realizing that the issue isn’t going to go away and is now impacting applications such as records management, security, and litigation support. This translates into real business currency and increases the risk of non-compliance and security breaches. You can’t find, protect, or use what you don’t know exists. For those organizations that are using, or intend to deploy, a hybrid environment, the challenges of leveraging metadata across the entire enterprise can be daunting, without the appropriate technology to automate tagging.

Real business currency. Is that money?

Are system administrators still indexing human resource personnel records, in-process legal documents related to litigation, and data from research tests and trials in an enterprise search system? I thought a more fine-grained approach to indexing was appropriate. If an organization has a certain type of government work, knowledge of that work can only be made available to those with a need to know. Is indiscriminate and uncontrolled indexing in line with a “need to know” approach?

Information access has a bright future. Open source technology such as Lucene/Solr/Searchdaimon/SphinxSearch, et al. is a reasonable approach to keyword functionality.
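
A reminder of why keyword functionality is a utility: the core of it fits in a toy inverted index. Real systems such as Lucene add tokenization, stemming, and ranking, but the skeleton looks like this:

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {
    1: "enterprise search is a utility",
    2: "open source keyword search works",
    3: "metadata makes search better",
}
index = build_index(docs)

# AND query: documents containing every query term.
query = ["search", "utility"]
print(set.intersection(*(index[t] for t in query)))  # -> {1}
```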

Value-added content processing is also important but not as an add on. I think that the type of functionality available from BAE, Haystax, Leidos, and Raytheon is more along the lines of the type of indexing, metatagging, and coding I need. The metatagging is integrated into a more modern system and architecture.

For instance, I want to map geo-coordinates in the manner of Geofeedia to each item of data. I also want context. I need an entity (Barrerra) mapped to an image integrated with social media. And, for me, predictive analytics are essential. If I have the name of an individual, I want that name and its variants. I want the content to be multi-language.
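
Name variants (recall the Al Qaeda spellings mentioned earlier) can be sketched with the standard library, though serious systems lean on transliteration tables, phonetic keys, and curated alias lists. The threshold here is invented:

```python
from difflib import SequenceMatcher

def is_variant(name, candidate, threshold=0.75):
    # Crude character-level similarity; the 0.75 cutoff is a guess.
    ratio = SequenceMatcher(None, name.lower(), candidate.lower()).ratio()
    return ratio >= threshold

target = "al qaeda"
for mention in ["Al-Qaida", "al Qa'ida", "Alameda", "Al Qaeda"]:
    print(mention, is_variant(target, mention))
```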

I want what next generation information access systems deliver. I don’t want indexing and basic metatagging. There is a reason for Google’s investing in Recorded Future, isn’t there?

The future of buggy whip enterprise search is probably less of a “strategic application” and more of a utility. Microsoft may make money from SharePoint. But for certain types of work, SharePoint is a bit like Windows 3.11. I want a system that solves problems, not one that spawns new challenges on a daily basis.

Enterprise search vendors have been delivering so-so, flawed, and problematic functionality for 40 years. After decades of vendor effort to make information findable in an organization, has significant progress been made? DARPA doesn’t think search is very good. The agency is seeking better methods of information access.

What I see when I review the landscape of enterprise search is that today’s “leaders” (Attivio, BA Insight, Coveo, dtSearch, Exorbyte, among others) remind me of the buggy whip makers driving a Model T to lecture farmers that their future depends on the horse as the motive power for their tractor.

Enterprise search is a digital horse, and one that is approaching breakdown.

Enterprise search is a utility within more feature rich, mission critical systems. For a list of 20 companies delivering NGIA with integrated content processing, check out www.xenky.com/cyberosint.

Stephen E Arnold, March 20, 2015

Taxonomy Turmoil: Good Enough May Be Too Much

February 28, 2015

For years, I have posted a public indexing Overflight. You can examine the selected outputs at this Overflight link. (My non public system is more robust, but the public service is a useful temperature gauge for a slice of the content processing sector.)

When it comes to indexing, most vendors provide keyword, concept tagging, and entity extraction. But are these tags spot on? No, most are good enough.

A happy quack to Jackson Taylor for this “good enough” cartoon. The salesman makes it clear that good enough is indeed good enough in today’s marketing enabled world.

I chose about 50 companies that asserted their systems performed some type of indexing or taxonomy function. I learned that the taxonomy business is “about to explode.” I find that to be either an interesting investment tip or a statement that is characteristic of content processing optimists.

Like search and retrieval, plugging in “concepts” or other index terms is a utility function. Word-level indexing alone can miss the point: if one indexes each word in an article appearing in this blog, the tags may suggest the article is about another subject. For example, in this post, I am talking about Overflight, but the real topic is the broader use of metadata in information retrieval systems. I could assign the term “faceted navigation” to this article as a way to mark it as germane to point and click navigation systems.
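
The gap is easy to see in miniature: raw term weighting surfaces the words an article uses, not the concept a human indexer would assign. A toy sketch on an invented three document corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "overflight monitors indexing and taxonomy vendors",
    "metadata drives modern information retrieval systems",
    "faceted navigation supports point and click search",
]
vec = TfidfVectorizer(stop_words="english")
weights = vec.fit_transform(corpus).toarray()
terms = vec.get_feature_names_out()

# Top weighted words per document: serviceable keywords, but the
# broader concept a human indexer would assign never appears.
for row in weights:
    top = sorted(zip(row, terms), reverse=True)[:3]
    print([term for score, term in top if score > 0])
```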

If you examine the “reports” Overflight outputs for each of the companies, you will discover several interesting things as I did on February 28, 2015 when I assembled this short article.

  1. Mergers or purchases of failed vendors at fire sale prices are taking place. Examples include Lucidea’s purchase of Cuadra and InMagic. Both of these firms are anchored in traditional indexing methods and seemed to be within a revenue envelope until their sell out. Business Objects acquired Inxight, and then SAP acquired Business Objects. Bouvet acquired Ontopia. Teradata acquired Revelytix.
  2. Moving indexing into open source. Thomson Reuters acquired ClearForest and made most of the technology available as OpenCalais. OpenText, a rollup outfit, acquired Nstein. SAS acquired Teragram. Smartlogic acquired Schemalogic. (A free report about Schemalogic is available at www.xenky.com/vendor-profiles.)
  3. A number of companies just failed, shut down, or went quiet. These include Active Classification, Arikus, Arity, Forth ICA, MaxThink, Millennium Engineering, Navigo, Progris, Protege, punkt.net, Questans, Quiver, Reuse Company, and Sandpiper, among others.
  4. The indexing sector includes a number of companies my non public system monitors; for example, the little known Data Harmony with six figure revenues after decades of selling really hard to traditional publishers. Conclusion: Indexing is a tough business to keep afloat.

There are numerous vendors who assert their systems perform indexing, entity, and metadata extraction. More than 18 of these companies are profiled in CyberOSINT, my new monograph. Oracle owns Triple Hop, RightNow, and Endeca. Each of these acquired companies performs indexing and metadata operations. Even the mashed potatoes search solution from Microsoft includes indexing tools. The proprietary XML data management vendor MarkLogic asserts that it performs indexing operations on content stored in its repository. Conclusion: More cyber oriented firms are likely to capture the juicy deals.

So what’s going on in the world of taxonomies? Several observations strike me as warranted:

First, none of the taxonomy vendors are huge outfits. I suppose one could argue that IBM’s Lucene based system is a billion dollar baby, but that’s marketing peyote, not reality. Perhaps MarkLogic, which is struggling toward $100 million in revenue, is the largest of this group. But the majority of the companies in the indexing business are small. Think in terms of a few hundred thousand in annual revenue to $10 million with generous accounting assumptions.

What’s clear to me is that indexing, like search, is a utility function. If a good enough search system delivers good enough indexing, then why pay humans to slog through the content and make judgments? Why not let Google funded Recorded Future identify entities, assign geo codes, and extract meaningful signals? Why not rely on Haystax or RedOwl or any one of the more agile firms to deliver higher value operations?

I would assert that taxonomies and indexing are important to those who desire the accuracy of a human indexed system. This assumes that the humans are subject matter specialists, the humans are not fatigued, and the humans can keep pace with the flow of changed and new content.

The reality is that companies focused on delivering old school solutions to today’s problems are likely to lose contracts to companies that deliver what the customer perceives as a higher value content processing solution.

What can a taxonomy company do to ignite its engines of growth? Based on the research we performed for CyberOSINT, the future belongs to those who embrace automated collection, analysis, and output methods. Users may, if they so choose, provide guidance to the system. But the days of yore, when monks with varying degrees of accuracy created catalog sheets for the scriptoria, have been washed to the margin of the data stream by today’s content flows.

What’s this mean for the folks who continue to pump money into taxonomy centric companies? Unless the cyber OSINT drum beat is heeded, the failure rate of the Overflight sample is a wake up call.

Buying Apple bonds might be a more prudent financial choice. On the other hand, there is an opportunity for taxonomy executives to become “experts” in content processing.

Stephen E Arnold, February 28, 2015

Partition the Web to Manage It

February 22, 2015

I noted that the mid February 2015 Forbes article did not get much coverage. “US Defense Giant Raytheon: We Need To Divide The Web To Secure It” contains a suggestion that could, if implemented, force changes upon Bing, Google, and other Web indexing outfits.

Here’s the passage I highlighted in lovely ice blue:

But some, including Michael Daly, chief technology officer for cyber security at US defense giant Raytheon, believe that the web needs to be divided into communities. As more critical devices, from insulin pumps to cars, connect to the internet, the more likely a genuinely destructive digital attack will occur. To stop this from happening, some people just shouldn’t be allowed into certain corners of the web, according to Daly.

There are some interesting implications in this notion.

Stephen E Arnold, February 22, 2015

Yes, Watson, an Accurate Description

February 22, 2015

I noted a post in the Atlantic. The article is “IBM’s Watson Memorized the Entire Urban Dictionary, Then His Overlords Had to Delete It.” I am not sure how a machine memorizes, but that is an issue for another day. The point is that Watson responded to one researcher’s query with a one word curse. Key point: Watson is supposed to do marvelous smart things. Without context, Watson is more likely to perform just like any other Lucene and home brew script system. Here’s the passage I noted because it underscores the time and expense of making a unicorn work just like a real animal:

Watson couldn’t distinguish between polite language and profanity — which the Urban Dictionary is full of. Watson picked up some bad habits from reading Wikipedia as well. In tests it even used the word “bullshit” in an answer to a researcher’s query. Ultimately, Brown’s 35-person team developed a filter to keep Watson from swearing and scraped the Urban Dictionary from its memory.
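
For what it is worth, the remedy described, a filter bolted on after the fact, can be as simple as a post-processing pass over generated answers. A minimal sketch; the deny list and mask are placeholders, not IBM’s actual method:

```python
import re

PROFANITY = {"bullshit", "damn"}  # placeholder deny list

def scrub(answer, mask="[redacted]"):
    """Replace denied words in a generated answer before display."""
    def repl(match):
        word = match.group(0)
        return mask if word.lower() in PROFANITY else word
    return re.sub(r"[A-Za-z']+", repl, answer)

print(scrub("That answer is bullshit, frankly."))
# -> That answer is [redacted], frankly.
```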

IBM, you never fail to entertain me.

Stephen E Arnold, February 22, 2015

Mondeca: Joey Erodes a Brand

February 8, 2015

Protecting the “name” of a company is important. I pointed out that several content processing vendors were losing control of their name in terms of a Bing or Google query. I noticed another vendor finding itself in the same pickle.

Navigate to YouTube. Run a query for Mondeca. The query “mondeca” triggers a spate of videos featuring Joey Mondeca.

[Screenshot: YouTube search results for “mondeca”]

Now try the Twitter search for the string “mondeca”. Here’s what I see:

[Screenshot: Twitter search results for “mondeca”]

Mondeca, the French smart content outfit, may want to turn its attention to dealing with Joey Mondeca’s social media presence. On the other hand, maybe findability is not a priority.

A Google query for “mondeca” returns links to the company. But Joey is moving up in the results list. When vendors lose control of a name, as Brainware did, getting back that semantic traction is difficult. Augmentext has a solution.

Stephen E Arnold, February 9, 2015

Which Auto Classification Method Works?

February 1, 2015

Unfortunately A. Lancichinetti et al. do not deliver a consumer reports type analysis in “High Reproducibility and High Accuracy Method for Automated Topic Classification.” The paper does raise some issues that keyword search vendors with add on categorization do not often explain to licensees. In a nutshell, classification often chugs along with 65 to 80 percent accuracy. Close enough for horseshoes; Gustav Dirichlet is here. A couple of thoughts, with a small code sketch after the list:

  • Dirichlet died in 1859. Yep, another of the math guys from the past.
  • The method is described as “state of the art.”
  • Bounded content sets of sci tech information yield more accurate classification.
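
The method family at issue is Latent Dirichlet Allocation style topic modeling. A minimal sketch on an invented corpus; on real, messy text, results tend toward the 65 to 80 percent accuracy noted above:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented corpus: two biology documents, two finance documents.
corpus = [
    "protein folding gene expression sequencing",
    "genome sequencing rna expression profiles",
    "stock market trading volatility returns",
    "portfolio returns market risk hedging",
]
counts = CountVectorizer().fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # row i: P(topic | document i)

# Each document is "classified" into its highest-probability topic.
print(doc_topics.argmax(axis=1))
```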

How do these methods work on WhatsApp messages in gang slang?

Stephen E Arnold, February 1, 2015

Zaizi: Search and Content Consulting

January 13, 2015

I received a call about Zaizi and the company’s search and content services. The firm’s Web site is at www.zaizi.com. Based on the information in my files, the company appears to be open source centric and an integrator of Lucene/Solr solutions.

What’s interesting is that the company has embraced Mondeca/Smartlogic jargon; for example, content intelligence. I find the phrase interesting and an improvement over the Semantic Web lingo.

The idea is that via indexing, one can find and make use of content objects. I am okay with this concept; however, what’s being sold is indexing, entity extraction, and classification of content.

The issue facing Zaizi and the other content intelligence vendors is that “some” content intelligence and slightly “smarter” information access are not likely to generate the big bucks needed to compete.

Firms like BAE and Leidos as well as the Google/In-Q-Tel backed Recorded Future offer considerably more than indexing. The need is to process automatically, analyze automatically, and generate outputs automatically. The outputs are automatically shaped to meet the needs of one or more human consumers or one or more systems.

Think in terms of taking outputs of a next generation information access system and inputting the “discoveries” or “key items” into another system. The idea is that action can be taken automatically or provided to a human who can make a low risk, high probability decision quickly.
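
In code, the hand off is mundane: one system emits a structured discovery, another consumes it. A sketch with an invented payload and a hypothetical downstream endpoint:

```python
import json
import urllib.request

# Invented "discovery" from an upstream NGIA-style system.
discovery = {
    "entity": "Barrerra",
    "geo": {"lat": 19.43, "lon": -99.13},
    "confidence": 0.91,
    "sources": ["social", "news"],
}

# Hand it to a downstream system (assumed webhook) so a human --
# or another machine -- can act without running a search.
req = urllib.request.Request(
    "http://localhost:8080/alerts",
    data=json.dumps(discovery).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```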

The notion that a 20 something is going to slog through facets, keyword search, and the mind numbing scan-results, open-documents, look-for-info routine is decidedly old fashioned.

You can learn more about what the next big thing in information access is by perusing CyberOSINT: Next Generation Information Access at www.xenky.com/cyberosint.

Stephen E Arnold, January 14, 2015
