June 12, 2012
Index Data co-founder Sebastian Hammer discusses the nuts and bolts of search systems in an interview with David Weinberger of the Harvard Library Innovation Lab in “Podcast: Sebastian Hammer on Federated Search.” Both the 23-minute podcast and the written transcript are available at the above link.
The interview begins by defining federated search (a single interface for multiple data sources) and explaining how it differs from search engines like Google (which gather information ahead of time and then pull query results from a unified database).
Hammer acknowledges that, in some situations, the federated approach is the only choice. For example, you’ll need it if the data you’re after changes frequently. However, federated searches can be terribly slow, all the data might not be available at the same time, and merging federated results can be problematic. On the other hand, building an index by pulling in everything you might possibly want to search can strain practicality. Hammer’s solution: a hybrid approach. He explains:
“So my notion is that you want to be able to gather stuff together in an index when it is practical and possible, and you want to be able to federate for the stuff where it’s not practical or possible. And you want to try to do both of those things as well as you possibly can and you want to try to somehow get the results of both of those types of searches back to the user as a single nice friendly merged search results.”
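Hammer's hybrid idea can be sketched in a few lines: search a local index where one exists, federate out to remote sources where it does not, and merge the two result streams into a single ranked list. This is a minimal illustration only; the function names and the `(doc, score)` convention are invented for the example and are not Index Data APIs.

```python
def search_local_index(query, index):
    """Search pre-gathered documents; fast and fully under our control."""
    return list(index.get(query, []))

def search_federated(query, sources):
    """Send the query to remote sources we cannot (or may not) index."""
    results = []
    for source in sources:
        results.extend(source(query))  # each source returns (doc, score) pairs
    return results

def hybrid_search(query, index, sources):
    # Merge both result sets and sort by score so the user sees one
    # "nice friendly merged" list, as Hammer puts it.
    merged = search_local_index(query, index) + search_federated(query, sources)
    return sorted(merged, key=lambda pair: pair[1], reverse=True)

# Tiny demonstration with canned data.
local_index = {"search": [("local-doc-1", 0.9), ("local-doc-2", 0.4)]}
remote_source = lambda q: [("remote-doc-1", 0.7)] if q == "search" else []

for doc, score in hybrid_search("search", local_index, [remote_source]):
    print(doc, score)
```

The hard parts Hammer alludes to (comparable relevance scores across sources, latency, partial availability) are exactly what this sketch glosses over.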
Simple, right? The interview goes into much greater depth on federated search now and in the future, as well as ways Hammer’s company strives to make the hybrid approach nice and friendly. I recommend checking it out.
Index Data has been creating discovery solutions for over 17 years. Based in Berlin, the company serves national libraries and consortia, government agencies, and businesses. It is a proud and significant contributor to the open source community, and it is happiest riding the cutting edge of its field.
Cynthia Murrell, June 12, 2012
Sponsored by PolySpot
April 25, 2012
Big data. Wow. That’s an angle only a public relations person with a degree in 20th century American literature could craft. Vivisimo is many things, but a big data system? News to me for sure.
IBM has been a strong consumer and integrator of open source search solutions. Watson, the game show winner, used Lucene with IBM wrapper software to keep the folks in Jeopardy post production on their toes.
A screen shot of the Vivisimo Velocity system displaying search results for the RAND organization. Notice the folders in the left hand panel. The interface reveals Vivisimo’s roots in traditional search and retrieval. The federating function operates behind the scenes. The newest versions of Velocity permit a user to annotate a search hit so the system will boost it in subsequent queries if the comment is positive. A negative rating on a result suppresses that result.
I learned that IBM allegedly purchased Vivisimo, a company which I have covered in my various monographs about search and content processing. Forbes ran a story which was at odds with my understanding of what the Vivisimo technology actually does. Here’s the Forbes title: “IBM To Buy Vivisimo; Expands Bet On Big Data Analytics.” Notice the phrase “big data analytics.”
Why do I point out the “big data” buzzword? The reasons include:
- Vivisimo has a clustering method which takes search results and groups them, placing similar results identified by the method in “folders”
- Vivisimo has a federating method which, like Bright Planet’s and Deep Web Technologies’, takes a user’s query and sends the query to two or more indexing systems, retrieves the results, and displays them to the user
- Vivisimo has a clever de-duplication method which collapses duplicate results into a single entry in the results list. This is important when one encounters a news story that appears on multiple Web sites.
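The de-duplication idea in the last bullet can be made concrete: when federated sources return the same syndicated story several times, the system keeps one entry. The key below (a whitespace- and case-normalized title) is a simplistic stand-in for illustration; Vivisimo's actual matching logic is not shown here, and real systems use fuzzier comparisons.

```python
def dedupe(results):
    """Keep only the first occurrence of each (normalized) title."""
    seen = set()
    unique = []
    for item in results:
        key = " ".join(item["title"].lower().split())  # normalize case and spacing
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return unique

hits = [
    {"title": "Big Merger Announced", "site": "siteA.com"},
    {"title": "big  merger announced", "site": "siteB.com"},  # syndicated copy
    {"title": "Unrelated Story", "site": "siteC.com"},
]
print(len(dedupe(hits)))  # 2
```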
The write up in Forbes, a “real” news outfit, frames the deal squarely in those big data terms.
Okay, but in Beyond Search we have documented that Vivisimo followed this trajectory in its sales and marketing efforts since the company opened for business in 2000. In fact, the Wikipedia write up about Vivisimo says this:
Vivisimo is a privately held enterprise search software company in Pittsburgh that develops and sells software products to improve search on the web and in enterprises. The focus of Vivisimo’s research thus far has been the concept of clustering search results based on topic: for example, dividing the results of a search for “cell” into groups like “biology,” “battery,” and “prison.” This process allows users to intuitively narrow their search results to a particular category or browse through related fields of information, and seeks to avoid the “overload” problem of sorting through too many results.
November 29, 2011
Nuances of enterprise search and the challenges some searchers face are discussed in “Why is Enterprise Search more complex than web or desktop search?”
“Access control to the data is a big difference between Enterprise search and the other two search types. On the Web, everybody is allowed to see the data. On your desktop you are allowed to see all data, because you are the owner. Web and desktop search can index all the data without taking access control into account.”
In an enterprise, access control is essential. Still, users would rather spend their time finding than searching. To get the results you want, you need the right solution, the right search structure, and the right support.
Access control is not an obstacle for Mindbreeze. Their search technology maintains user rights while searching all company-relevant information within the enterprise and in the cloud.
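The security-trimming idea behind enterprise search can be illustrated with a toy filter: before display, results are checked against the querying user's group memberships. The document fields and group names below are invented for the example; Mindbreeze's actual rights mechanism is not shown.

```python
def trim_results(results, user_groups):
    """Return only documents whose ACL shares at least one group with the user."""
    return [doc for doc in results if user_groups & set(doc["acl"])]

docs = [
    {"id": 1, "acl": ["finance"]},
    {"id": 2, "acl": ["everyone"]},
    {"id": 3, "acl": ["hr", "legal"]},
]

# A user in "everyone" and "hr" sees documents 2 and 3, but not the
# finance-only document 1.
visible = trim_results(docs, {"everyone", "hr"})
print([d["id"] for d in visible])  # [2, 3]
```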
Sara Wood, November 29, 2011
May 12, 2011
Top Hosting Service Information reveals that “Vivisimo Showcases Secure, Cross-domain Intelligence Solutions” at this week’s DoDIIS Worldwide conference in Detroit. Since Vivisimo serves the federal government, including the defense community, this is a welcome development.
“The defense and intelligence communities recognize the need to improve information sharing as a way to achieve true all-source analysis and deliver timely, objective, and actionable intelligence to our senior decision makers and war fighters,” says Bob Carter, vice president and general manager, federal, of Vivisimo. “In an era where spending cuts are being made to improve efficiencies, Vivisimo helps streamline operations and ultimately costs by allowing analysts significantly better access, processing and sharing of critical data necessary to the defense of the U.S.”
Assembling the myriad data gathered from around the globe into useful information is one of today’s biggest challenges for the intelligence community. Though the government often travels behind the curve in tech fields, it seems to be stepping up in this area.
Cynthia Murrell, May 12, 2011
February 23, 2011
We have noted a number of management changes in the search and content sector.
Now X1 Technologies has appointed a new leader for their eDiscovery division. The announcement, “X1 Technologies Appoints John Patzakis as President of eDiscovery,” cites his extensive background in eDiscovery and corporate compliance as well as his knowledge of the law.
“I am pleased to welcome someone as accomplished as John to the X1 team,” said John Waller, CEO of X1 Technologies. “John’s background as a senior software executive coupled with his deep understanding of compliance and discovery law make him a perfect fit to lead our efforts in the eDiscovery market.”
X1’s eDiscovery Search Suite allows users to search data stored in over 500 different file types and applications. This allows for quick retrieval of electronically stored information (ESI) for early case assessment. X1’s support of social media applications will be released this quarter. In Patzakis, X1 has found a leader with the experience and skill to push them forward in the eDiscovery sector.
Emily Rae Aldridge, February 23, 2011
February 13, 2011
So Google can be fooled. It’s not nice to fool Mother Google. The inverse, however, is not accurate. Mother Google can take some liberties. Any indexing system can. Objectivity is in the eye of the beholder or the person who pays for results.
Judging from the torrent of posts from “experts,” the big guns of search are saying, “We told you so.” The trigger for this outburst of criticism is the New York Times’s write up about JC Penney. You can try this link, but I expect that it and its SEO crunchy headline will go dark shortly. (Yep, the NYT is in the SEO game too.)
I am not sure how many years ago I wrote the “search sucks” article for Searcher Magazine. My position was clear long before the JC Penney affair and the slowly growing awareness that search is anything BUT objective.
In the good old days, database bias was set forth in the editorial policies for online files. You could disagree with what we selected for ABI/INFORM, but we made an effort to explain what we selected, why we selected certain items for the file, and how the decision affected assignment of index terms and classification codes. The point was that we were explaining the mechanism for making a database which we hoped would be useful. We were successful, and we tried to avoid the silliness of claiming comprehensive coverage. We had an editorial policy, and we shaped our work to that policy. Most people in 1980 did not know much about online. I am willing to risk this statement: I don’t think too many people in 2011 know about online and Web indexing. In the absence of knowledge, some remarkable actions occur.
You don’t know what you don’t know or the unknown unknowns. Source: http://dealbreaker.com/donald-rumsfeld/
Flash forward to the Web. Most users assume, incorrectly, that a search engine is objective. Baloney. Just as we set an editorial policy for ABI/INFORM, each crawler and content processing system has similar decisions beneath it.
The difference is that at ABI/INFORM we explained our bias. The modern Web and enterprise search engines don’t. If a system tries to explain what it does, most of the failed Web masters, English majors working as consultants, and unemployed lawyers turned search experts just don’t care.
Search and content processing are complicated businesses, and the gory details about certain issues hold zero interest for most professionals. Here’s a quick list of “decisions” that must be made for a basic search engine:
- How deep will we crawl? Most engines set a limit. No one, not even Google, has the time or money to follow every link.
- How frequently will we update? Most search engines have to allocate resources in order to get a reasonable index refresh. Sites that get zero traffic don’t get updated too often. Sites that are sprawling and deep may get three or four levels of indexing. The rest? Forget it.
- What will we index? Most people perceive the various Web search systems as indexing the entire Web. Baloney. Bing.com makes decisions about what to index and when, and I find that it favors certain verticals and trendy topics. Google does a bit better, but there are bluebirds, canaries, and sparrows. Bluebirds get indexed thoroughly and frequently. See Google News for an example. For Google’s Uncle Sam, a different schedule applies. In between, there are lots of sites and lots of factors at play, not the least of which is money.
- What is on the stop list? Yep, a list can kill index pointers, making the site invisible.
- When will we revisit a site with slow response time?
- What actions do we take when a site is owned by a key stakeholder?
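The decisions above can be sketched as a small crawl policy object: a depth limit, a stop list, and a refresh interval. Every value, class name, and site name here is illustrative only; no real engine's settings are implied.

```python
from datetime import datetime, timedelta

class CrawlPolicy:
    def __init__(self, max_depth=3, stop_list=(), refresh=timedelta(days=7)):
        self.max_depth = max_depth       # how deep will we crawl?
        self.stop_list = set(stop_list)  # sites we refuse to index
        self.refresh = refresh           # how frequently will we update?

    def should_crawl(self, site, depth, last_visit):
        if site in self.stop_list:       # a stop list kills index pointers
            return False
        if depth > self.max_depth:       # depth budget exhausted
            return False
        # Revisit only when the refresh interval has elapsed.
        return datetime.now() - last_visit >= self.refresh

policy = CrawlPolicy(max_depth=2, stop_list={"blocked.example"})
old = datetime.now() - timedelta(days=30)

print(policy.should_crawl("news.example", 1, old))     # crawl it
print(policy.should_crawl("blocked.example", 1, old))  # invisible by decree
print(policy.should_crawl("news.example", 5, old))     # too deep
```

Real policies also weigh the factors the list only hints at: response time, traffic, and, not least, money.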
February 7, 2011
We learned from one of our readers that Kartoo has turned out its lights. According to Wikipedia, the company shut down after a nine-year run. Kartoo relied on Flash to display search results. Novel? Yes. Useful? In some types of queries, yes.
If you are interested in visual search, you can check out Yometa.com. This is a federating search system which taps results from Bing, Google, and Yahoo. A query for “Stephen E Arnold” returned this display.
Yometa displays the most relevant search results based on a combination of the three search engines’ rankings, as determined by the Yometa algorithm.
The company developed its approach based on research showing that 97 percent of the results from the three search engines (Google, Yahoo, and Bing) differ, with only three percent overlap. The visual interface lets users see the results of Google, Bing, and Yahoo individually or in any combination on one screen. Results are displayed in a Venn diagram; the closer a result sits to the middle, the more relevant it is.
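The overlap claim can be made concrete with set arithmetic: given the result URLs from the three engines, the Venn "center" is their intersection. The sample result lists below are invented purely for illustration.

```python
# Hypothetical top results from each engine for the same query.
google = {"a.com", "b.com", "c.com", "d.com"}
bing   = {"c.com", "d.com", "e.com"}
yahoo  = {"d.com", "f.com"}

all_three = google & bing & yahoo  # the Venn center: agreed on by all engines
total     = google | bing | yahoo  # everything any engine returned
overlap_pct = 100 * len(all_three) / len(total)

print(sorted(all_three))   # ['d.com']
print(round(overlap_pct))  # 17
```

With real result lists, Yometa's research suggests that intersection would be far smaller, around three percent.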
For more information navigate to www.yometa.com/about/ .
Stephen E Arnold, February 7, 2011
August 11, 2010
Yippy, Inc. has good reason to rejoice. In “Yippy Releases Family Friendly Search For Nintendo Wii” http://www.tmcnet.com/usubmit/2010/07/28/4925824.htm VP Emily Parker says “the Yippy Wii search has been optimized for use with Nintendo Wii game controls and features Yippy content-blocking protocols.” The report also tells of a soon-to-be-released Yippy Wii Browser with cloud-based content management platforms.
Let’s not get ahead of ourselves. A family friendly search was the focus of The Point (Top 5% of the Internet), developed by Beyond Search’s Stephen E. Arnold, his son, Erik S. Arnold, and business partner Chris Kitze in 1993. The Point service sold to Lycos in 1996, and, alas, Lycos lost its way. Now, a 17-year-old idea is back, proving The Point was right on target almost two decades ago.
Brett Quinn, August 11, 2010
February 14, 2010
Abe Lederman (one of the founders of Verity) alerted me this morning that his company, Deep Web Technology, signed a deal and partnership agreement with SWETS. This Netherlands-based company is one of the world’s leading subscription services. SWETS helps government agencies and companies with subscriptions and related services. The firm has clients in over 160 countries and describes itself as “a long-talk powerhouse.”
Deep Web Technology provides the software and systems that fuel Science.gov, a US government search and retrieval project. Science.gov taps into a wide range of data and information related to science and technology. The invention of the Deep Web method was an outgrowth of Dr. Lederman’s experience in providing a user with access to a broad range of structured and unstructured data. In my various reports on enterprise and special purpose search, I have given Dr. Lederman’s method high marks, and I even let him buy me a taco in a restaurant in Santa Fe, after I finished a lecture at Los Alamos. Dr. Lederman contributed at Los Alamos prior to founding Deep Web as I recall.
The deal brings Dr. Lederman’s federation technology to the SwetsWise Searcher. This service will be powered by Deep Web Technology. SwetsWise is designed to help librarians and their users meet the challenge of searching and finding relevant results from the ever-increasing catalog of content available online. The search system simplifies access to an organization’s diverse and valuable resources, along with the open Web content users are accustomed to searching. SWETS will deliver search results through the Deep Web ranking engine, providing incremental results for fast response times, scalability and flexibility. SwetsWise Searcher performs a rapid parallel search of all available sources or selected sources in real-time, ensuring fresh information and that documents are retrieved the minute they are published into a collection’s database. A simple search box to cover all sources can be integrated into any web page, blog or Intranet homepage.
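The "rapid parallel search of all available sources" idea can be sketched with a thread-pool fan-out that yields each source's results as they arrive, giving the incremental, fastest-first behavior the write up describes. The source functions below are dummies; nothing here reflects SwetsWise or Deep Web internals.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def federate(query, sources):
    """Query all sources in parallel; yield results as each source completes."""
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        futures = [pool.submit(src, query) for src in sources]
        for future in as_completed(futures):  # incremental: fastest source first
            yield from future.result()

# Stand-in sources; a real system would hit remote databases here.
fast_source = lambda q: [f"{q}-fast-hit"]
slow_source = lambda q: [f"{q}-slow-hit"]

print(sorted(federate("swets", [fast_source, slow_source])))
```

Streaming results incrementally is what keeps response times acceptable when one source in the federation is slow: the user starts reading hits from the fast sources while the stragglers finish.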
A happy quack to Deep Web Technology. No more tacos in Santa Fe. I want a nuked burrito, a nod to our friends up the road.
Stephen E Arnold, February 14, 2010
No one paid me to write this. I do have a promise of a taco in Santa Fe, which I have just rejected. I will report this to the Food & Drug Administration.
October 13, 2009
I have found the Kartoo.com service useful and innovative. I learned today that the company has rolled out a new interface and links that make it easier to locate the company’s other content processing technology. The new interface provides thumbnails of the top hits. You can explore other results by clicking on the links on the page. The default interface for the query “text mining” appears below:
Other new features include:
- E-reputation tools
- Metasearch functions
- Support for anonymous search
- Support for French, English, and Dutch languages
If you have not explored the Kartoo service, give it a whirl.
Stephen Arnold, October 13, 2009, published because I like the French