September 14, 2013
Do we dare broach the subject of health care information and electronic medical records? Yes, we do, and we take into account “Dr. Karl Kochendorfer: Bridging The Knowledge Gap In Health Care” from Federated Search Blog. Dr. Karl Kochendorfer wants an official federated search for the national health care system. His idea is to connect health care professionals to authoritative information with instantaneous results. He notes that doctors and nurses rely on Wikipedia and Google searches rather than authorized databases, simply because it is faster. Notice the danger?
Dr. Kochendorfer mentions this fact in a TED talk he gave in April called “Seek And Ye Shall.” He presents the idea for a federated search in this discussion, along with more of these facts:
- “There are 3 billion terabytes of information out there.
- There are 700,000 articles added to the medical literature every year.
- Information overload was described 140 years ago by a German surgeon: “It has become increasingly difficult to keep abreast of the reports which accumulate day after day … one suffocates through exposure to the massive body of rapidly growing information.”
- With better search tools, 275 million improved decisions could be made.
- Clinicians spend 1/3 of their time looking for information.”
Dr. Kochendorfer’s idea is grand, but how many academic databases are lining up to offer their information for free or without a hefty subscription fee? Academia is already desperate for money, and asking publishers to share their wealth of knowledge without compensation will not go over well. Should there be a federated search with authoritative information and instantaneous results? Yes. Will it happen? Keep fixing the plumbing.
Whitney Grace, September 14, 2013
September 12, 2013
The general search engines available on the web are simply not adequate for healthcare professionals looking for the latest pertinent information (let alone personalized data on their patients). The Federated Search Blog shares an important TEDx Talk in its piece, “Dr. Karl Kochendorfer: Bridging the Knowledge Gap in Health Care,” which advocates the adoption of federated search for the healthcare industry. I recommend the video not only for those in the healthcare or search fields, but for anyone interested in getting the best care for themselves and their families. The write-up tells us:
“As a family physician and leader in the effort to connect healthcare workers to the information they need, Dr. Kochendorfer acknowledges what those of us in the federated search world already know – Google and the surface web contain so little of the critical information your doctor and his staff need to support important medical decision-making.”
The write-up summarizes highlights from the talk, including the statistic that says a third of clinicians’ time is spent hunting down information. No wonder doctors are spending less time with patients! The article continues:
“And, the most compelling reason to get federated search into healthcare is the sobering thought by Dr. Kochendorfer that doctors are now starting to use Wikipedia to get answers to their questions instead of the best evidence-based sources out there just because Wikipedia is so easy for them to use. Scary.”
Yes, scary is a good word for it. It is true that data reservoirs that feed federated searches can contain errors—a point Kochendorfer does not address in this video. Still, I have to agree with the write-up: the doctor makes a compelling case on this important issue. The video concludes with a call for listeners to support the development of federated healthcare search tools like MedSocket and open standards like Infobuttons. Sounds like a good idea to me.
Cynthia Murrell, September 12, 2013
June 16, 2013
Search Engine Watch recently re-posted an article aggressively critical of Google: “Google Should Kill or Radically Change Universal Search Results.” The message comes from Foundem, a UK price comparison firm that has rejected Google’s proposed web search concessions.
These concessions follow the European Commission’s ongoing antitrust investigation into Google’s search business. Foundem believes the proposed concessions will not lessen Google’s monopoly on web search.
The article tells us that the proposed concessions ignore Google’s monopoly on search:
“Instead, the concessions focus on minor alterations to Google’s ‘self-serving Universal Search inserts.’” According to Foundem’s report, any concessions must also address Google’s AdWords search capabilities. Foundem says AdWords will continue to give Google an unfair advantage until it is re-worked. The company says the current proposal fails to correct Google’s practice of favoring its own services in search results. Foundem believes that to truly slow Google’s search monopoly, Google would have to either eliminate universal search or change it drastically.
This report suggests a big question still hangs over federated search results, despite the fact that Google’s Universal Search initiative was announced back in 2007.
Megan Feil, June 16, 2013
October 10, 2012
Web sites that wish to use WordPress to build their content and SearchBlox for federated search will soon have an easier time uniting the two. On their blog, SearchBlox announces, “WordPress Plugin Makes It Easy to Integrate SearchBlox.” The post by Timo Selvarag reports:
“SearchBlox has released an updated WordPress plugin to search your WordPress site and integrate faceted search results into your site from the SearchBlox Server. Unlike the Solr Search Plugin, there are no fields to configure or schema to load. Simply install the plugin and follow the getting started guide to integrate search into your site. SearchBlox provides fast instant search results from the SearchBlox Server. You can also crawl and integrate external sites, feeds and file system based documents for searching within your WordPress site.”
There’s a demo of the plugin here. WordPress is an open source project licensed under the GPL. Begun as a blogging system in 2003, it has grown into a full content management system with thousands of plugins, widgets, and themes now available.
Cynthia Murrell, October 10, 2012
June 12, 2012
Index Data co-founder Sebastian Hammer discusses the nuts and bolts of search systems in an interview with David Weinberger of the Harvard Library Innovation Lab in “Podcast: Sebastian Hammer on Federated Search.” Both the 23-minute podcast and the written transcript are available at the above link.
The interview begins by defining federated search (a single interface for multiple data sources) and explaining how it differs from search engines like Google (which gather information first, then pull query results from a unified database).
Hammer acknowledges that, in some situations, the federated approach is the only choice. For example, you’ll need it if the data you’re after is subject to frequent change. However, federated searches can be terribly slow, and all the data might not be available at the same time. Also, merging federated results can be problematic. On the other hand, building an index by pulling in everything you might possibly want to search can strain practicality. Hammer’s solution: a hybrid approach. He explains:
“So my notion is that you want to be able to gather stuff together in an index when it is practical and possible, and you want to be able to federate for the stuff where it’s not practical or possible. And you want to try to do both of those things as well as you possibly can and you want to try to somehow get the results of both of those types of searches back to the user as a single nice friendly merged search results.”
Simple, right? The interview goes into much greater depth on federated search now and in the future, as well as ways Hammer’s company strives to make the hybrid approach nice and friendly. I recommend checking it out.
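For illustration, the hybrid model Hammer describes can be sketched in a few lines: query a fast local index and slower federated sources in parallel, then merge everything into one ranked list. The source functions, scores, and merge logic below are hypothetical stand-ins, not Index Data’s actual implementation.

```python
# Sketch of a hybrid search: local index plus live federated sources,
# merged into a single ranked result list for the user.
from concurrent.futures import ThreadPoolExecutor

def local_index_search(query):
    # Pre-gathered, indexed content: fast and always available.
    return [{"title": "Cached article on " + query, "score": 0.9}]

def remote_source_search(source, query):
    # Live federated call: fresher data, but slower and less reliable.
    return [{"title": f"{source} result for {query}", "score": 0.7}]

def hybrid_search(query, remote_sources):
    results = local_index_search(query)
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(remote_source_search, s, query)
                   for s in remote_sources]
        for f in futures:
            try:
                results.extend(f.result(timeout=5))
            except Exception:
                pass  # one slow or dead source should not sink the search
    # Present both kinds of results as one merged, ranked list.
    return sorted(results, key=lambda r: r["score"], reverse=True)

merged = hybrid_search("metadata", ["LibraryCatalog", "JournalFeed"])
for r in merged:
    print(r["title"])
```

The hard parts Hammer alludes to (relevance merging across sources, partial results, timeouts) are exactly where the toy scoring above would need real work.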
Index Data has been creating discovery solutions for over 17 years. Based in Berlin, the company serves national libraries and consortia, government agencies, and businesses. They are proud to contribute significantly to the open source community. The company is happiest when riding on the cutting edge of their field.
Cynthia Murrell, June 12, 2012
Sponsored by PolySpot
April 25, 2012
Big data. Wow. That’s an angle only a public relations person with a degree in 20th century American literature could craft. Vivisimo is many things, but a big data system? News to me for sure.
IBM has been a strong consumer and integrator of open source search solutions. Watson, the game show winner, used Lucene with IBM wrapper software to keep the folks in Jeopardy post production on their toes.
A screen shot of the Vivisimo Velocity system displaying search results for the RAND organization. Notice the folders in the left hand panel. The interface reveals Vivisimo’s roots in traditional search and retrieval. The federating function operates behind the scenes. The newest versions of Velocity permit a user to annotate a search hit so the system will boost it in subsequent queries if the comment is positive. A negative rating on a result suppresses that result.
I learned that IBM allegedly purchased Vivisimo, a company which I have covered in my various monographs about search and content processing. Forbes ran a story which was at odds with my understanding of what the Vivisimo technology actually does. Here’s the Forbes’ title: “IBM To Buy Vivisimo; Expands Bet On Big Data Analytics.” Notice the phrase “big data analytics.”
Why do I point out the “big data” buzzword? The reasons include:
- Vivisimo has a clustering method which takes search results and groups them, placing similar results identified by the method in “folders”
- Vivisimo has a federating method which, like Bright Planet’s and Deep Web Technologies’, takes a user’s query and sends the query to two or more indexing systems, retrieves the results, and displays them to the user
- Vivisimo has a clever de-duplication method which makes the results list present one item. This is important when one encounters a news story which appears on multiple Web sites.
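The de-duplication described in the last bullet can be sketched with a simple heuristic: normalize each headline and keep only the first occurrence, so a story syndicated across several sites appears once. This is an illustrative stand-in; Vivisimo’s actual method is proprietary and surely more sophisticated.

```python
# Collapse near-duplicate news results syndicated across multiple sites.
import re

def normalize(title):
    # Lowercase and strip punctuation/whitespace so near-identical
    # headlines compare equal.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def dedupe(results):
    seen = set()
    unique = []
    for r in results:
        key = normalize(r["title"])
        if key not in seen:
            seen.add(key)
            unique.append(r)  # keep the first copy of each story
    return unique

hits = [
    {"title": "IBM Buys Vivisimo", "site": "siteA.example"},
    {"title": "IBM buys Vivisimo!", "site": "siteB.example"},
    {"title": "Big Data Analytics Trends", "site": "siteC.example"},
]
print(len(dedupe(hits)))  # the two syndicated copies collapse to one entry
```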
The write up in Forbes, a “real” news outfit, frames the acquisition squarely as a bet on big data analytics.
Okay, but in Beyond Search we have documented that Vivisimo has followed this trajectory in its sales and marketing efforts since the company opened for business in 2000. In fact, the Wikipedia write up about Vivisimo says this:
Vivisimo is a privately held enterprise search software company in Pittsburgh that develops and sells software products to improve search on the web and in enterprises. The focus of Vivisimo’s research thus far has been the concept of clustering search results based on topic: for example, dividing the results of a search for “cell” into groups like “biology,” “battery,” and “prison.” This process allows users to intuitively narrow their search results to a particular category or browse through related fields of information, and seeks to avoid the “overload” problem of sorting through too many results.
November 29, 2011
Nuances of enterprise search and the challenges some searchers face are discussed in “Why is Enterprise Search more complex than web or desktop search?”
“Access control to the data is a big difference between Enterprise search and the other two search types. On the Web, everybody is allowed to see the data. On your desktop you are allowed to see all data, because you are the owner. Web and desktop search can index all the data without taking access control into account.”
In an enterprise, however, access control is very important. But users prefer to spend their time finding rather than searching. To get the results you want, you need the right solution, the right search structure, and support.
Access control is not an obstacle for Mindbreeze. Their search technology maintains user rights while searching all company-relevant information within the enterprise and in the cloud.
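To make the access-control point concrete, here is a minimal sketch of permission-trimmed search: each document carries an allowed-groups list, and results are filtered to what the querying user may see. The data model and matching are hypothetical illustrations, not Mindbreeze’s actual implementation.

```python
# Sketch of access-controlled enterprise search: results are trimmed
# to the groups the querying user belongs to.
DOCUMENTS = [
    {"title": "Public handbook", "groups": {"all"}},
    {"title": "Payroll report", "groups": {"hr", "finance"}},
    {"title": "Board minutes", "groups": {"executives"}},
]

def search(query, user_groups):
    # A real engine enforces ACLs at index or query time; here we do a
    # naive keyword match ("*" matches everything) and filter afterward.
    matches = [d for d in DOCUMENTS
               if query == "*" or query.lower() in d["title"].lower()]
    return [d for d in matches
            if d["groups"] & user_groups or "all" in d["groups"]]

print([d["title"] for d in search("*", {"hr"})])
```

The same query returns different result sets for different users, which is exactly why web-style indexing (everything visible to everyone) does not transfer to the enterprise.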
Sara Wood, November 29, 2011
May 12, 2011
Top Hosting Service Information reveals that “Vivisimo Showcases Secure, Cross-domain Intelligence Solutions” at this week’s DoDIIS Worldwide conference in Detroit. Since Vivisimo serves the federal government, including the defense community, this is a welcome development.
“The defense and intelligence communities recognize the need to improve information sharing as a way to achieve true all-source analysis and deliver timely, objective, and actionable intelligence to our senior decision makers and war fighters,” says Bob Carter, vice president and general manager, federal, of Vivisimo. “In an era where spending cuts are being made to improve efficiencies, Vivisimo helps streamline operations and ultimately costs by allowing analysts significantly better access, processing and sharing of critical data necessary to the defense of the U.S.”
Assembling the myriad of data gathered from around the globe into useful information is one of today’s biggest challenges for the intelligence community. Though the government often travels behind the curve in tech fields, it seems to be stepping up in this area.
Cynthia Murrell May 12, 2011
February 23, 2011
We have noted a number of management changes in the search and content sector.
Now X1 Technologies has appointed a new leader for its eDiscovery division. The announcement, “X1 Technologies Appoints John Patzakis as President of eDiscovery,” cites his extensive background in eDiscovery and corporate compliance as well as his knowledge of the law.
“I am pleased to welcome someone as accomplished as John to the X1 team,” said John Waller, CEO of X1 Technologies. “John’s background as a senior software executive coupled with his deep understanding of compliance and discovery law make him a perfect fit to lead our efforts in the eDiscovery market.”
X1’s eDiscovery Search Suite allows users to search data stored in over 500 different files types and applications. This allows for quick retrieval of electronically stored information (ESI) for early case assessment. X1’s support of social media applications will be released this quarter. In Patzakis, X1 has found a leader with the experience and skill to push them forward in the eDiscovery sector.
Emily Rae Aldridge, February 23, 2011
February 13, 2011
So Google can be fooled. It’s not nice to fool Mother Google. The inverse, however, is not accurate. Mother Google can take some liberties. Any indexing system can. Objectivity is in the eye of the beholder or the person who pays for results.
Judging from the torrent of posts from “experts”, the big guns of search are saying, “We told you so.” The trigger for this outburst of criticism is the New York Times’s write up about J.C. Penney. You can try this link, but I expect that it and its SEO crunchy headline will go dark shortly. (Yep, the NYT is in the SEO game too.)
I am not sure how many years ago I wrote the “search sucks” article for Searcher Magazine. My position was clear long before the J.C. Penney affair and the slowly growing awareness that search is anything BUT objective.
In the good old days, database bias was set forth in the editorial policies for online files. You could disagree with what we selected for ABI/INFORM, but we made an effort to explain what we selected, why we selected certain items for the file, and how the decision affected assignment of index terms and classification codes. The point was that we were explaining the mechanism for making a database which we hoped would be useful. We were successful, and we tried to avoid the silliness of claiming comprehensive coverage. We had an editorial policy, and we shaped our work to that policy. Most people in 1980 did not know much about online. I am willing to risk this statement: I don’t think too many people in 2011 know about online and Web indexing. In the absence of knowledge, some remarkable actions occur.
You don’t know what you don’t know or the unknown unknowns. Source: http://dealbreaker.com/donald-rumsfeld/
Flash forward to the Web. Most users assume incorrectly that a search engine is objective. Baloney. Just as we set an editorial policy for ABI/INFORM each crawler and content processing system has similar decisions beneath it.
The difference is that at ABI/INFORM we explained our bias. The modern Web and enterprise search engines don’t. If a system tries to explain what it does, most of the failed Web masters, English majors working as consultants, and unemployed lawyers turned search experts just don’t care.
Search and content processing are complicated businesses, and the gory details of certain issues are of zero interest to most professionals. Here’s a quick list of “decisions” that must be made for a basic search engine:
- How deep will we crawl? Most engines set a limit. No one, not even Google, has the time or money to follow every link.
- How frequently will we update? Most search engines have to allocate resources in order to get a reasonable index refresh. Sites that get zero traffic don’t get updated too often. Sites that are sprawling and deep may get three or four levels of indexing. The rest? Forget it.
- What will we index? Most people perceive the various Web search systems as indexing the entire Web. Baloney. Bing.com makes decisions about what to index and when, and I find that it favors certain verticals and trendy topics. Google does a bit better, but there are bluebirds, canaries, and sparrows. Bluebirds get indexed thoroughly and frequently. See Google News for an example. For Google’s Uncle Sam, a different schedule applies. In between, there are lots of sites and lots of factors at play, not the least of which is money.
- What is on the stop list? Yep, a list can kill index pointers, making the site invisible.
- When will we revisit a site with slow response time?
- What actions do we take when a site is owned by a key stakeholder?
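The editorial “decisions” listed above all end up as concrete crawler policy somewhere. Here is a sketch of what that policy might look like; every field name, tier label, and value is illustrative, not any real engine’s configuration.

```python
# Hypothetical crawler policy encoding the decisions above: crawl depth,
# refresh schedule by site tier, a stop list, and slow-site handling.
CRAWL_POLICY = {
    "max_depth": 4,              # how deep we crawl: follow links 4 levels
    "refresh_days": {            # how frequently we update, by site tier
        "bluebird": 1,           #   high-value sites: daily
        "canary": 7,             #   middling sites: weekly
        "sparrow": 30,           #   low-traffic sites: monthly at best
    },
    "stop_list": {"spam.example", "farm.example"},  # never index these
    "slow_site_retry_days": 14,  # revisit schedule for slow responders
}

def should_index(domain, tier, policy=CRAWL_POLICY):
    # The stop list can make a site invisible regardless of its tier.
    if domain in policy["stop_list"]:
        return False
    return tier in policy["refresh_days"]

print(should_index("spam.example", "bluebird"))   # stop-listed: False
print(should_index("news.example", "bluebird"))   # indexed daily: True
```

Every line of that configuration is a human judgment call, which is the point: the index reflects an editorial policy whether or not anyone writes it down.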