InQuira Antecedents: Answerfriend and Electric Knowledge

May 26, 2012

I have had to look up the antecedents for InQuira again. I wanted to create this post to make it easy to reference these two firms which were combined to create InQuira. InQuira was acquired by Oracle Corp. in that company’s push to address its long-standing search and content processing issues. I have in my Overflight system the 2006 InQuira marketing collateral which, I noticed, provides a crib sheet for the many enterprise search vendors piling into the customer support segment. What’s interesting is that customer support is one of the sectors where open source search is getting some attention.

The antecedents of InQuira were:

  • Answerfriend. The company had software which could understand text. In 2000, the company landed Accenture as a customer. Answerfriend's pitch pivoted on its natural language processing technology. Allegedly Answerfriend could handle both structured and unstructured data. Sound familiar in 2012?
  • Electric Knowledge Inc. This was also an NLP shop, with technology rooted in computational linguistics. The company licensed its technology to Bank of America, an outfit which has had a long history of trying to find a search system which meets its requirements.

InQuira was created in 2002. The notion of hooking together two separate vendors to do the 1+1=3 thing has been used more recently by Lexalytics and Attensity.

At one time, InQuira was the answer system used by Yahoo’s customer support service. I encountered this when I tried to cancel a Yahoo service. The InQuira service was not too helpful to me. I just killed the credit card and solved the problem.

The marketing pitch of InQuira is as fresh today as it was in 2002. How much progress has there been in search and content processing in the last decade? Could the marketing collateral for a 2002 Oldsmobile be used without any changes? Probably not. Search has a limited supply of jargon, and it gets recycled endlessly in my opinion.

Stephen E Arnold, May 26, 2012

Sponsored by PolySpot

ZyLAB Embraces Predictive and Concept Searching

May 25, 2012

The CodeZed blog recently reported on the automated classification of legal documents in the article “Technology Assisted Review, Concept Search and Predictive Coding: The Limitations & Risks.”

According to the article, artificial intelligence and machine learning have been around since the 1980s, but a recent US ruling regarding the use of machine learning technology in legal review has stirred up trouble in the eDiscovery community. As a result of this ruling, one can expect eDiscovery software buyers to increasingly make Predictive Coding, Concept Search, and other TAR capabilities a requirement.

When discussing some of the detriments of machine learning and artificial intelligence, the article states:

“Machine-learning requires significant set-up involving training and testing the quality of the classification model (aka the classifier), which is a time consuming and demanding task that requires at least the manual tagging and evaluation of both the training and the test set by more than one party (in order to prevent biased opinions). Testing has to be done according to best practice standards used in the information retrieval community (e.g. see the proceedings of the TREC conferences organized by the NIST). Deviation from such standards will be challenged in courts. This is time consuming and expensive and should be factored into the cost-benefit analysis for the approach.”
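To make the quoted workflow concrete, here is a minimal sketch of the train/test discipline using the scikit-learn Python library. The documents and "responsive" tags are invented stand-ins for a manually reviewed eDiscovery sample, not anyone's production pipeline.

```python
# A minimal sketch of the train/test workflow the quote describes, using
# scikit-learn. The documents and responsive/non-responsive tags below are
# hypothetical stand-ins for a manually reviewed eDiscovery sample.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

docs = [
    "Please shred the Q3 audit memo before the review.",
    "Lunch order for the team meeting on Friday.",
    "Forwarding the revised merger term sheet, keep confidential.",
    "Reminder: parking garage closes early on holidays.",
]
labels = [1, 0, 1, 0]  # 1 = responsive, 0 = non-responsive (manual tags)

# Hold out a test set so the classifier is evaluated on documents it
# never saw during training, as the best-practice standards require.
X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.5, random_state=42, stratify=labels)

vectorizer = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(X_train), y_train)

predictions = classifier.predict(vectorizer.transform(X_test))
print(classification_report(y_test, predictions))  # precision/recall per class
```

In a real matter, the tagging, the held-out evaluation, and the reporting would each involve more than one reviewer, which is exactly the cost the quoted passage warns about.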

So the short of it is this: before using Technology Assisted Review, do your research and figure out what is best for your business.

Jasmine Ashton, May 25, 2012

Sponsored by PolySpot

Google Progresses on Semantic Search

May 22, 2012

The keyword-free Web search may be on the horizon. Search Engine Journal reports on the progress in “Google Testing Semantic Search Update.” Though Google has said a full-fledged semantic Web search is several years away, the company seems to be trying out some changes.

Writer David Angotti describes Google’s plan:

“A team of software engineers has been working to develop mathematical formulas that will extract and organize data that is currently spread across the Internet. The combination of an acquisition and the extraction algorithms have provided Google with an index of over 200 million people, places, and things, which Google simply calls ‘entities.’ This index, which Google named the Knowledge Graph, will allow Google to move away from keyword-based results to true semantic search.

“Once the entities are properly organized, semantic search technology enables Google to measure the relationship and separation between two entities to determine search results and rankings.”

Angotti notices that Google seems to be testing some of this functionality. His example is the query, “who directed The Hunger Games.” The results successfully placed the correct answer (Gary Ross) at the top of the list, and for some users included related images down the right side where ads usually appear.
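For readers who want a feel for how an entity index can answer such a query, here is a toy sketch in Python. It is our illustration, not Google's method: a handful of subject-relation-object facts and a lookup over them.

```python
# A toy illustration (not Google's implementation) of answering an entity
# question from a small knowledge graph of (subject, relation, object)
# triples. The first fact is the one cited in the article.
triples = [
    ("Gary Ross", "directed", "The Hunger Games"),
    ("Gary Ross", "directed", "Seabiscuit"),
    ("Jennifer Lawrence", "acted_in", "The Hunger Games"),
]

def who(relation, obj):
    """Return every subject linked to `obj` by `relation`."""
    return [s for s, r, o in triples if r == relation and o == obj]

print(who("directed", "The Hunger Games"))  # ['Gary Ross']
```

The interesting engineering is in building and disambiguating the 200 million entities, not in the lookup; but the lookup shows why the answer can be a fact rather than ten blue links.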

When asked, a Google spokesperson had no information to share. More changes, though, are expected to arrive soon. We wonder: how will these revisions affect the rankings of millions of sites? Are keyword-reliant SEO pros anxious yet?

Cynthia Murrell, May 22, 2012

Sponsored by PolySpot

Inforbix: Semantic Technology for Manufacturing Information

May 21, 2012

Inforbix, a company whose focus is product data challenges in manufacturing, will be presenting at the 2012 Semantic Technology & Business Conference in San Francisco this June 3rd through 7th. CEO Oleg Shilovitsky’s presentation will share ways his company uses semantic technology to tackle the growing data complexity plaguing the manufacturing sector. Inforbix will also take part in the Start-Up Competition on the 5th with the pitch, “Solving the Problem of Engineering Data Complexity.”

Regarding the data challenges unique to their corner of the industry, Inforbix’s press release explains:

“Manufacturing companies generate vast amounts of data. These organizations are asking how they will survive tomorrow with such data complexity. Inforbix helps companies solve the problem of data complexity in a new and different way.

1. Inforbix uses smart components (product data crawlers) that scan on-premise data and give users access to data, no matter where it’s located or how it’s sourced. There is no data extraction involved, no data import, and no data conversion. The process is automatic and requires little to no effort to deploy and maintain.

2. Inforbix uses intelligent semantic modeling that infers relationships between disparate sources of data. It combines, links, and connects these data pieces, then exposes that data using product data applications.

3. Inforbix uses the power of the cloud to allow broad and cost-effective data access.”

Founded in 2010, Inforbix is based in Boston, MA. They help their manufacturing clients access mounds of data through a single tool; ease of use, speed, and efficiency are their hallmarks. Inforbix develops intelligent apps: simple tools that address specific product data tasks like searching and accessing product data, organizing and presenting product data, and visualizing product data trends and patterns.

Inforbix’s semantic technology underpins its groundbreaking apps. It automatically finds and infers relationships between disparate sources of structured and unstructured product data. By linking and connecting related product data, Inforbix provides users with the ability to locate and access product data quickly and thoroughly.
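To illustrate the general idea (not Inforbix's proprietary model), here is a hypothetical Python sketch that links records from two disparate sources by a shared part number. Every field name and value in it is invented.

```python
# A hypothetical sketch of the kind of relationship inference described
# above: linking records from two disparate sources (a CAD file listing
# and an ERP table) by a shared part number. Inforbix's actual semantic
# model is proprietary; all names and fields here are invented.
cad_files = ["bracket_PN-1042_revB.sldprt", "housing_PN-2210_revA.sldasm"]
erp_rows = [
    {"part_no": "PN-1042", "supplier": "Acme Metals", "cost": 3.17},
    {"part_no": "PN-2210", "supplier": "Borealis Plastics", "cost": 8.40},
]

# Infer links: any CAD file whose name contains an ERP part number is
# treated as related to that ERP record. No data is moved or converted.
links = []
for row in erp_rows:
    for fname in cad_files:
        if row["part_no"] in fname:
            links.append((fname, row))

for fname, row in links:
    print(f"{fname} -> {row['part_no']} ({row['supplier']}, ${row['cost']})")
```

The point of the sketch is the crawl-and-link pattern: the source files stay where they are, and only the inferred relationships are exposed to the user.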

While Product Data Management (PDM) systems offer search, their success depends on properly structured and consistent data formats, and those systems can only search within their own infrastructure. Inforbix is product data agnostic: it can access structured and unstructured data located anywhere in a manufacturing company. That’s a huge savings in time and trouble. Though smaller companies may be able to use Inforbix instead of a PDM, the solutions are intended to work with those systems.

Inforbix apps are cloud-based, fast to deploy, and require no data migration or maintenance. They also provide data security by leaving the on-premises data in place, neither touching it nor moving it into the cloud; that is wise.

The company introduced a mobile platform for the iPad for its apps this past January, and no training or prior experience is necessary to make the most of these apps. The software is priced affordably for any size manufacturing company to deploy companywide; a demo is available here. We highly recommend you check Inforbix out.

Cynthia Murrell, May 21, 2012

Sponsored by HighGainBlog

Semantic Search Demystified

May 21, 2012

Confused about Semantic Search? ExtremeTech seeks to explain the burgeoning technology in “Demystifying Semantic Search.” Writer Ed Oswald begins by defining the term and explaining why folks have high hopes for it. He then discusses who uses the technology, and how it will change search as we know it. He concludes by assessing some limits.

Oswald traces the yearning for semantic search back to Ask Jeeves, which launched in 1996 and famously prompted users to query with complete English sentences. The service was keyword based, but shaped the way we interact with search engines. Almost a decade later, Google’s Q&A tried to discern what users really meant. Then Bing in 2009 incorporated semantics, followed (and bested) by Wolfram Alpha.

Going forward, Oswald predicts serious problems for the keyword-reliant search engine optimization field, a prediction with which I agree wholeheartedly. In addition, he notes that users will interact with search differently—the search engine itself becomes a destination rather than a map, simplifying the search process.

The write up summarizes:

“Semantic search shows a lot of promise to change the way we search. For the webmaster, it changes the game of getting your site high up in search results. For the user, it will hopefully make our searches more relevant as it will attempt to guess our intent rather than a literal interpretation of every search term we type in. Will it also change the search giants’ stance against pay-for-play when it comes to search results? That remains to be seen, but the groundwork has certainly been laid.”

See this thorough article for more information if you’re still mystified (or just curious).

Cynthia Murrell, May 21, 2012

Sponsored by PolySpot

Is Knoodle the New PowerPoint?

April 29, 2012

Training and presentation tools are a necessary part of any business, and Revelytix has now upgraded the version of Knoodl it initially released in 2009. The MarketWatch article “Revelytix Releases Knoodl 3.0” gives prospective users a look inside what the company has to offer in today’s evolving market.

If you happen to be unfamiliar with Revelytix, they are a commercial software company that works with semantic standards. They provide tools enabling Enterprise Information Management so that companies can more readily deal with big data.

The basic upgrades include:

“This release includes many improvements in the security framework required to satisfy the stringent security requirement of the U.S. Department of Defense. Additional functionality includes a Google gadget container based on Shindig and the Open Social API.

A design interface has been incorporated, making it easy for Knoodl users to create dashboards for visualizing the results of SPARQL queries. All Knoodl gadgets comply with the Google container specification.”
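For the curious, here is a minimal sketch of the sort of SPARQL query a dashboard gadget might visualize, run with the rdflib Python library against a tiny in-memory graph. The vocabulary URI and data are invented for illustration; Knoodl itself runs such queries against its hosted ontologies.

```python
# A minimal sketch of a SPARQL query like those a Knoodl dashboard could
# chart, run here with rdflib against a tiny in-memory graph. The
# example.org vocabulary and the two facts are invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/vocab#")
g = Graph()
g.add((URIRef("http://example.org/projectA"), EX.status, Literal("active")))
g.add((URIRef("http://example.org/projectB"), EX.status, Literal("archived")))

query = """
PREFIX ex: <http://example.org/vocab#>
SELECT ?project ?status
WHERE { ?project ex:status ?status . }
"""
for project, status in g.query(query):
    print(project, status)  # rows a dashboard gadget could visualize
```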

This modernized version of Knoodle offers cloud-based social presentation, training, and learning management within a single platform, allowing for effective training and presentations that can be targeted to a specific audience. Those who utilize Knoodle can teach using slides, video, audio, images, surveys, tests, multiple delivery options, and data analytics. Basically, they have combined cloud technology with their own version of PowerPoint to compete in today’s market.

Jennifer Shockley, April 29, 2012

Sponsored by Ikanow

Q-Sensei 2.0

April 27, 2012

Q-Sensei adds features to its ontology-based search system, we learn in MarketWatch’s “Q-Sensei Enterprise V2.0 Unveiled to Rapidly Develop Tailored Search Applications for Big Data.” Prominently featured are ontology-based data processing and configuration and a new API to more efficiently handle big data.

What’s an ontology? We keep forgetting. The dictionary says it’s “the branch of metaphysics that studies the nature of existence or being as such.” Wait, that can’t be right. . . . Ok, in information system lingo, ontology “formally represents knowledge as a set of concepts within a domain, and the relationships between those concepts.” That’s better.

The press release says the newest version of Q-Sensei’s enterprise search platform is designed to tailor search-based applications quickly and flexibly to the needs of its clients, using data from Intranets, social media, third parties, and the Internet. We learn from the write up:

“With Q-Sensei Enterprise’s new ontology-based data processing, businesses can rapidly develop new, tailored search-based applications by using existing RDF and OWL resources such as database models, industry or domain-specific ontologies, process definitions and project configurations. This new processing approach also enables harmonization of semantics, components and functionality across business applications. It also improves the speed and efficiency of data process and indexing, increasing platform performance.”

Version 2.0 also boasts a semi-automatic, guided configuration and a new API that makes it easier to integrate Q-Sensei into other applications.

Q-Sensei was created in 2007 with the merger of the German Lalisio and the American QUASM, and now has offices in both Brooklyn and Erfurt, Germany. Q-Sensei focuses on multi-dimensional search, which it defines as combining full-text and dynamic faceted search with real-time content analysis. The company maintains that its solutions make it easy to find what you need, even if you don’t have the appropriate keywords on hand.
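As an illustration of what “multi-dimensional” search means in practice, here is a toy Python sketch (not Q-Sensei’s code) that combines a full-text keyword match with a dynamic facet count. The documents and facet fields are invented.

```python
# An illustrative sketch (not Q-Sensei's implementation) of
# "multi-dimensional" search: a full-text keyword match combined with
# dynamic facet counts for drill-down. Documents and fields are invented.
docs = [
    {"text": "turbine blade fatigue analysis", "year": 2011, "type": "report"},
    {"text": "fatigue testing of weld joints", "year": 2012, "type": "memo"},
    {"text": "blade coating supplier review",  "year": 2012, "type": "report"},
]

def search(term, **facets):
    """Full-text match on `term`, then filter by exact facet values."""
    hits = [d for d in docs if term in d["text"]]
    for field, value in facets.items():
        hits = [d for d in hits if d[field] == value]
    return hits

# Facet counts over the keyword hits let a UI offer drill-down choices
# even when the user lacks the right keywords for a narrower query.
counts = {}
for d in search("fatigue"):
    counts[d["year"]] = counts.get(d["year"], 0) + 1
print(counts)                        # {2011: 1, 2012: 1}
print(search("fatigue", year=2012))  # narrowed to the 2012 memo
```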

Cynthia Murrell, April 27, 2012

Sponsored by Ikanow

Are Semantic Technology Vendors in a Squeeze?

April 24, 2012

A new trend seems to be evolving in the ever-changing world of technology. Some established companies have started joining forces in an effort to meet the needs of growing businesses. One such example of this new comradeship is seen in the article “Semantic Web Company and punkt. netServices Have Merged” at The Semantic Puzzle.

Consolidation appears to be the new key:

“In 2004 Semantic Web Company was founded. punkt. netServices GmbH has offered services and products since 1998. The company is headquartered in Vienna/Austria and currently employs 15 persons. [We] bring the semantic web and linked data technologies closer to the needs of companies, consumers and the government sector. We have done a lot of basic research those past years, as well as project-pioneering with prospective customers and partners. Finally we have consolidated our knowledge and skills in that field. What was avant-garde in 2004 now has become bleeding edge technology in present days.”

Modern businesses are making evolutionary changes to the systems they run on. Companies now require more flexible software to accommodate big data. Therefore, corporations are seeking solutions that merge new applications into existing infrastructure.

The service providers are compelled to adapt, so it appears a new pattern is starting to emerge. It makes sense: by combining company efforts, providers can more readily deal with the needs and wants of prospective clients. Semantics vendors face new challenges, so perhaps a merger is a way to gain traction?

Jennifer Shockley, April 24, 2012

Sponsored by PolySpot

Calais Web Service

April 22, 2012

Entity extraction and other value-added tagging have been moving from center stage to the supporting cast of Analytics: The Next Big Thing. If you want to get a sense of how entity extraction and other semantic functions provide raw inputs to analytics programs, navigate to the Open Calais Viewer. Copy some text and paste it into the input box. I used the contents of “Judge Alsup Decides He, Not the Jury, Will Decide the Issue of API Copyrightability.” Here’s the output from Open Calais:

Open Calais output

Worth a look. Calais is a free Web service from Thomson Reuters.
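If you prefer the API to the viewer, the sketch below shows one way to post text to the Calais REST service with Python’s requests library. Caveat: the endpoint and header names are as we recall them from the service’s documentation of this era, so treat them as assumptions, check the current Thomson Reuters docs, and supply your own license key.

```python
# A hedged sketch of calling the Calais REST service with the requests
# library. The endpoint URL and header names below are assumptions based
# on the service's circa-2012 documentation; verify them before use.
import requests

CALAIS_URL = "http://api.opencalais.com/tag/rs/enrich"  # assumed endpoint
API_KEY = "YOUR-CALAIS-API-KEY"  # hypothetical placeholder, register for a key

text = "Judge Alsup decided he, not the jury, will rule on API copyrightability."

response = requests.post(
    CALAIS_URL,
    data=text.encode("utf-8"),
    headers={
        "x-calais-licenseID": API_KEY,            # assumed header name
        "Content-Type": "text/raw; charset=UTF-8",
        "Accept": "application/json",             # ask for JSON entity output
    },
)
print(response.json())  # entities, topics, and other tags for the posted text
```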

Stephen E Arnold, April 22, 2012

Sponsored by PolySpot

The Invisibility of Open Source Search

March 27, 2012

I was grinding through my files and I noticed something interesting. After I abandoned the Enterprise Search Report, I shifted my research from search and retrieval to text processing. With this blog, I tried to cover the main events in the post-search world. The coverage was more difficult than I anticipated, so we started Inteltrax, which focuses on systems, companies, and products which “find” meaning using numerical recipes. But that does not do enough, so we are contemplating two additional free information services about “findability.” I am not prepared to announce either of these at this time. We have set up a content production system with some talented professionals working on our particular approach to content. We are also producing some test articles.


Until we make the announcement, I want to reiterate a point I made in my talks in London in 2011 about open source search and content processing:

Most reports about enterprise search ignore open source search solution vendors. A quiet revolution is underway, and for many executives, the shift is all but invisible.

We think that the “invisible” nature of the open source search and content processing options is due to four factors:

Most of the poobahs, self-appointed experts, and former home economics majors have never installed, set up, or optimized an open source search system. Here at ArnoldIT we have that hands-on experience. And we can say that open source search and content processing solutions are moving from the desks of Linux wizards to more mainstream business professionals.

Next, we see more companies embracing open source, contributing to the overall community with bug fixes and new features and functions. At the same time, the commercial enterprises are “wrapping” open source with proprietary, value-added features and functions. The leader in this movement is IBM. Yep, good old Big Blue is an adherent of open source software. Why? We will try to answer this in our new information services.

Third, we think the financial pressure on organizations is greater than ever. CNBC and the Murdoch-outfitted Wall Street Journal are cheering for the new economic recovery. We think that most organizations are struggling to make sales, maintain margins, and generate new opportunities. Open source search and content solutions promise some operating efficiencies. We want to cover some of the business angles of the open source search and content processing shift. Yep, open source means money.

Finally, the big solutions vendors are under a unique type of pressure. Some of it comes from licensees who are not happy with the cost of “traditional” solutions. Other pressure comes from the data environment itself. Let’s face it: certain search systems, such as your old and dusty version of IBM STAIRS or Fulcrum, won’t do the job in today’s data- and information-rich environment. New tools are needed. Why not solve a new information problem without dragging along the costs, methods, and license restrictions of traditional enterprise software? We think change is in the wind, just like the smell of sweating horses a couple of months before the Kentucky Derby.

Our approach to information in our new services will be similar to that taken in Beyond Search. We want to provide pointers to useful write ups and offer some comments which put certain actions and events in a slightly different light. Will you agree with the information in our new services? We hope not.

Stephen E Arnold, March 27, 2012

Sponsored by Pandia.com
