October 13, 2009
I have found the Kartoo.com service useful and innovative. I learned today that the company has rolled out a new interface and links that make it easier to locate the company’s other content processing technology. The new interface provides thumbnails of the top hits. You can explore other results by clicking on the links on the page. The default interface for the query “text mining” appears below:
Other new features include:
- E-reputation tools
- Metasearch functions
- Support for anonymous search
- Support for French, English, and Dutch languages.
If you have not explored the Kartoo service, give it a whirl.
Stephen Arnold, October 13, 2009, published because I like the French
October 8, 2009
Vivisimo, http://www.vivisimo.com, a company that works with email archiving, eDiscovery, and information management solutions, just released a revved-up version of the Velocity Enterprise Search Platform, which is used to build search-centric programs. The platform focuses on extensibility, scalability, and performance; Vivisimo is using it to accelerate into OEM and reseller markets. Those programs are designed to add value to existing applications and develop new solutions for sorting information assets; for example, the platform supports searching one billion emails on a single server. Vivisimo also says, “With Velocity 7.5, new traceable accuracy metrics can accurately prove and defend that all data has been crawled and identify any documents that were not indexed due to corrupt file types.” This can be a big plus for companies dealing with growing regulation. A happy quack for Vivisimo (tagline: “Search Done Right!”). Any progress that helps enterprise businesses advance search and make sense of unstructured data is a good thing.
Jessica Bratcher, October 8, 2009
October 1, 2009
I have been using Coveo’s products for years. I remember the first time I fired up the original desktop search program. I found the interface intuitive and the features in line with how I looked for information. I learned from the company yesterday (September 30, 2009) that a new version of the product is now available. I noticed that the company has added several new features to its Enterprise Desktop Search application; for example:
- Search of content on my netbook, my Outlook mail store, and other applications running in my Harrod’s Creek data center.
- A centralized index of all enterprise information, including the formerly risky and elusive, cross-enterprise PC and laptop content, which is useful when I am in a meeting and need a coding gosling to locate a particular item of information that I tucked away without telling anyone its location.
- Enhanced monitoring functions.
After installing the application, you will want to check out the built-in connectors, the faceted “point and click” search function, and the support for access from a BlackBerry device. Nifty indeed because RIM’s search function is not too useful in my opinion.
Coveo CEO and President Laurent Simoneau, who founded the company in 2005 after serving as COO of Copernic, told me:
With our roots dating to the early days of Copernic, a global leader in consumer desktop search, we were committed to building the cross-enterprise capability to index and provide unified access for employees to their desktop content, including their email. What we’ve done is elevate that access to a higher level, with unified search not only of their individual PCs and laptops, but of contextually relevant knowledge and information residing in any enterprise system, based on IT permissions. In so doing, we’ve placed control over cross-enterprise desktop content indexing, with complete security and access permissions, in the hands of IT.
The benefits of the new system struck me as reducing the time spent hunting for email. Larger organizations will be able to reduce costs and risks as well.
The Coveo Enterprise Desktop Search application is powered by the Coveo Enterprise Search 6.0 platform, which is scalable from hundreds of thousands to billions of documents, and requires approximately 20 percent of the server footprint of legacy enterprise search solutions. Our tests show that Coveo is one of the more modular and scalable enterprise search solutions. It ranks as one of the easiest to install and configure search solutions we have tested. Worth a look. Fill out the form and give it a spin.
Stephen Arnold, October 1, 2009
September 28, 2009
I thought I made Google’s intent clear in Google Version 2.0. The company provides a user with access to content within the Google index. The inventions reviewed briefly in The Google Legacy and in greater detail in Google Version 2.0 explain that information within the Google data management system can be sliced, diced, remixed, and output as new information objects. The analogy is similar to what an MBA does at Booz, McKinsey, or any other rental firm for semi-wizards. Intakes become high value outputs. I was delighted to read Erick Schonfeld’s “With Google Places, Concerns Rise that Google Just Wants to Link to Its Own Content.” The story makes clear that folks are now beginning to see that Google is a digital Gutenberg and is a different type of information company. Mr. Schonfeld wrote:
The concerns arise, however, back on Google’s main search page, where Google is indexing these Places pages. Since Google controls its own search index, it can push Google Places more prominently if it so desires. There isn’t a heck of a lot of evidence that Google is doing this yet, but the mere fact that Google is indexing these Places pages has the SEO world in a tizzy. And Google is indexing them, despite assurances to the contrary. If you do a search for the Burdick Chocolate Cafe in Boston, for instance, the Google Places page is the sixth result, above results from Yelp, Yahoo Travel, and New York Times Travel. This wouldn’t be so bad if Google wasn’t already linking to itself in the top “one Box” result, which shows a detail from Google Maps. So within the top ten results, two of them link back to Google content.
Directories are variants of vertical search. Google is much more than rich directory listings.
Let me give one example, and you are welcome to snag a copy of my three Google monographs for more examples.
Consider a deal between Google and a mobile telephone company. The users of the mobile telco’s service run a query. The deal makes it possible for the telco to use the content in the Google system. No query goes into the “world beyond Google”. The reason is that Google and the telco gain control over latency, content, and advertising. This makes sense. Let’s assume that this is a deal that Google crafts with an outfit like T Mobile. Remember: this is a hypothetical example. When I use my T Mobile device to get access to the T Mobile Internet service, the content comes from Google with its caches, distributed data centers, and proprietary methods for speeding results to a device. In this example, as a user, I just want fast access to content that is pretty routine; for example, traffic, weather, flight schedules. I don’t do much heavy lifting from my flakey BlackBerry or old person hostile iPhone / iTouch device. Google uses its magical ability to predict, slice, and dice to put what I want in my personal queue so it is ready before I know I need the info. Think “I am feeling doubly lucky”, a “real” patent application by the way. T Mobile wins. The user wins. The Google wins. The stuff not in the Google system loses.
Interesting? I think so. But the system goes well beyond directory listings. I have been writing about Dr. Guha, Simon Tong, Jeff Dean, and the Halevy team for a while. The inventions, systems and methods from this group have revolutionized information access in ways that reach well beyond local directory listings.
The Google has been pecking away for 11 years and I am pleased that some influential journalists / analysts are beginning to see the shape of the world’s first transnational information access company. Google is the digital Gutenberg and well into the process of moving info and data into a hyper state. Google is becoming the Internet. If one is not “in” Google, one may not exist for a certain sector of the Google user community. Googleo ergo sum.
Stephen Arnold, September 28, 2009
September 23, 2009
TechFlash ran an interesting article called “Windows Live Lost $560 Million in FY2009”. With revenues of $520 million, the loss works out to roughly $64,000 an hour, or about $1,065 a minute, 24×7 for 365 days. With Microsoft’s revenue in the $58 billion range, a $560 million loss is not such a big deal. In my opinion, profligate spending might work in the short term, but I wonder if the tactic will work over a longer haul on the information highway.
Stephen Arnold, September 23, 2009
September 7, 2009
When a company offers multiple software products to perform a similar function, I get confused. For example, I have a difficult time explaining to my 88-year-old father the differences among Notepad, WordPad, Microsoft Works’ word processing, Microsoft Word’s word processing, and the Microsoft Live Writer he watched me use to create this Web log post. I think it is an approach like the one the genius at Ragu spaghetti sauce used to boost sales of that condiment. When my wife sends me to the store to get a jar of Ragu spaghetti sauce, I have to invest many minutes figuring out what the heck is the one I need. Am I the only male who cannot differentiate between Sweet Tomato Basil and Margherita? I think Microsoft has taken a different angle of attack because when I acquired a Toshiba netbook, the machine came with Notepad, WordPad, and Microsoft Works installed. I added a version of Office and also the Live Writer blog tool. Some of these were “free,” and other products came with my MSDN subscription.
Now the same problem has surfaced with basic search. I read “FAST ESP versus MOSS 2007 / Microsoft Search Server” with interest. Frankly I could not recall if I had read this material before, but quite a bit seemed repetitive. I suppose when trying to explain the differences among word processors, the listener hears a lot of redundant information as well.
The write up begins:
It took me some time but I figured out some differences between Microsoft Search Server / MOSS 2007 and Microsoft FAST ESP. These differences are not coming from Microsoft or the FAST company. But it came to my notice that Microsoft and FAST will announce a complete and correct list with these differences between the two products at the conference in Las Vegas next week. These differences will help me and you to make the right decisions at our customers for implementing search and are based on business requirements.
Ah, what’s different is that this is a preview of the “real” list of differences. Given the fact that the search systems available for SharePoint choke and gasp when the magic number of 50 million documents is reached, I hope that the Fast ESP system can handle the volume of information objects that many organizations have on their systems at this time.
The list in the Bloggix post numbers 14. Three interested me:
- Scalability
- Faceted navigation
- Advanced federation.
First, scalability is an issue with most search systems. Some companies have made significant technical breakthroughs to make adding gizmos painless and reasonably economical. Other companies have made the process expensive, time consuming, and impossible for the average IT manager to perform. I heard about EMC’s purchase of Kazeon. I thought I heard that someone familiar with the matter pointed to problems with the Fast ESP architecture as one challenge for EMC. In order to address the issue, EMC bought Kazeon. I hope the words about “scalability” are backed up with the plumbing required to deliver. Scaling search is a tough problem, and throwing hardware at hot spots is, at best, a very costly dab of Neosporin.
Second, faceted navigation exists within existing MOSS implementations. I think I included screenshots of faceted navigation in the last edition of the Enterprise Search Report I wrote in 2006 and 2007. There was a blue interface and a green interface. Both of these made it possible to slice and dice results by clicking on an “expert” identified by counting the number of documents a person wrote with a certain word in them. There were other facets available as well, although most were more sophisticated than the “expert” function. I hope that the “new” Fast ESP implements a more useful approach for users of Fast ESP. Of course, identifying, tagging, and linking facets across processed content requires appropriate computing resources. That brings us back to scaling, doesn’t it? Sorry.
Third, federation is a buzzword that means many different things because vendors define the term in quite distinctive ways. For example, Vivisimo federates, and it is, or was at one time, a metasearch system. The query went to different indexing services, brought back the results, deduplicated them, put the results in folders on the fly, and generated a results list. Another type of federation surfaces in the descriptions of business intelligence systems offered by SAP. The system blends structured and unstructured data within the SAP “environment”. Others are floating around as well, including the repository solutions from TeraText, which federates disparate content into one XML repository. What I find interesting is that Microsoft is not content to deliver plain, undefined “federation”. Microsoft is, according to the Bloggix post, on the trail of “advanced federation”. What the heck does that mean? The explanation is:
FAST ESP supports advanced federation including sending queries to various web search APIs, mixing results, and shallow navigation. MOSS only supports federation without mixing of results from different sources and navigation components, but showing them separately.
Okay, Vivisimo and SAP style for Fast ESP; basic tagging for MOSS. Hmm.
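The Vivisimo-style “result mixing” federation described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual code: the two source functions are stand-ins for real search APIs, and deduplication here is done on URL alone.

```python
# Hypothetical sketch of metasearch-style federation: send the query to
# several sources, deduplicate the returned hits, and blend everything
# into one ranked list instead of showing each source separately.

def search_source_a(query):
    # Stand-in for one search API returning (title, url, score) tuples.
    return [("Doc 1", "http://example.com/1", 0.9),
            ("Doc 2", "http://example.com/2", 0.7)]

def search_source_b(query):
    # A second stand-in source; note the duplicate URL for Doc 2.
    return [("Doc 2 (copy)", "http://example.com/2", 0.8),
            ("Doc 3", "http://example.com/3", 0.6)]

def federate(query, sources):
    seen = set()
    merged = []
    for source in sources:
        for title, url, score in source(query):
            if url not in seen:  # deduplicate on URL
                seen.add(url)
                merged.append((title, url, score))
    # Mix results into a single blended ranking.
    return sorted(merged, key=lambda hit: hit[2], reverse=True)

results = federate("enterprise search", [search_source_a, search_source_b])
for title, url, score in results:
    print(f"{score:.1f}  {title}  {url}")
```

The “basic” federation attributed to MOSS would skip the merge-and-sort step and render each source’s list in its own panel; the mixing is what the Bloggix post seems to mean by “advanced.”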
To close, I think that the Fast ESP product is going to add a dose of complexity to the SharePoint environment. Despite Google’s clumsy marketing, the Google Search Appliance continues to gain traction in many organizations. Google’s solution is not cheap. People want it. I think Fast ESP is going to find itself in a tough battle for three reasons:
- Google is a hot brand, even within SharePoint shops
- Microsoft-certified search solutions are better than Fast ESP based on my testing of search systems over the past decade
- The cost savings pitch is only going to go so far. CFOs eventually will see the bills for staff time, consulting services, upgrades, and search related scaling. In a lousy financial environment, money will be a weak point.
I look forward to the official announcement about Fast ESP; the $1.2 billion Microsoft spent for this company now has to deliver. I find it unfortunate that the police investigation of alleged impropriety at Fast Search & Transfer has not been resolved. If a product is as good as Fast ESP was advertised to be, what went wrong with the company, its technology, and its customer relations prior to the Microsoft buy out? I guess I have to wait for more information on these matters. When you have a lot of different products with overlapping and similar services, the message I get is more like the Ragu marketing model, not the solving of customer problems in a clear, straightforward way. Sigh. Marketing, not technology, fuels enterprise search these days, I fear.
Stephen Arnold, September 7, 2009
August 25, 2009
I was exploring usage patterns via Alexa. I wanted to see how Silobreaker, a service developed by some savvy Scandinavians, was performing against the brand name business intelligence companies. Silobreaker is one of the next generation information services that processes a range of content, automatically indexing and filtering the stream, and making the information available in “dossiers”. A number of companies have attempted to deliver usable “at a glance” services. Silobreaker has been one of the systems I have relied upon for a number of client engagements.
I compared the daily reach of LexisNexis (a unit of the Anglo-Dutch outfit Reed Elsevier), Factiva (originally a Reuters Dow Jones “joint” effort in content and value added indexing, now rolled back into the Dow Jones mothership), Ebsco (the online arm of the Elton B. Stephens Co. subscription agency), and Dialog (a unit of the privately held database roll up company Cambridge Scientific Abstracts / ProQuest and some investors). Keep in mind that Silobreaker is a next generation system and I was comparing it to the online equivalent of the Smithsonian’s computer exhibit with the Univac and IBM key punch machine sitting side by side:
Silobreaker is the blue line, which is chugging right along despite the challenging financial climate. I ran the same query on Compete.com, and that data showed LexisNexis with a growth uptick and more traffic in June 2009. Your mileage may vary. These types of traffic estimates are indicative, not definitive. But Silobreaker is performing and growing. One could ask, “Why aren’t the big names showing stronger buzz?”
A better question may be, “Why haven’t the museum pieces performed?” I think there are three reasons. First, the commercial online services have not been able to bridge the gap between their older technical roots and the new technologies. When I poked under the hood in Silobreaker’s UK facility, I was impressed with the company’s use of next generation Web services technology. I challenged the R&D team regarding performance, and I was shown a clever architecture that delivers better performance than the museum piece services against which Silobreaker competes. I am quick to admit that performance and scaling remain problems for most online content processing companies, but I came away convinced that Silobreaker’s engineering was among the best I had examined in the real time content sector.
Second, I think the museum pieces – I could mention any of the services against which I compared Silobreaker – have yet to figure out how to deal with the gap between the old business model for online and the newer business models that exist. My hunch is that the museum pieces are reluctant to move quickly to embrace some new approaches because of the fear of [a] cannibalization of their for fee revenues from a handful of deep pocket customers like law firms and government agencies and [b] looking silly when their next generation efforts are compared to newer, slicker services from Yfrog.com, Collecta.com, Surchur.com, and, of course, Silobreaker.com.
Third, I think the established content processing companies are not in step with what users want. For example, when I visit the Dialog Web site here, I don’t have a way to get a relationship map. I like nifty methods of providing me with an overview of information. Who has the time or patience to handcraft a Boolean query and then pay money whether the dataset contains useful information or not? I just won’t play that “pay us to learn there is a null set” game any more. Here’s the Dialog splash page. Not too useful to me because it is brochureware, almost a 1998 approach to an online service. The search function only returns hits from the site itself. There is no compelling reason for me to dig deeper into this service. I don’t want a dialog; I want answers. What’s a ProQuest? Even the name leaves me puzzled.
I wanted to make sure that I was not too harsh on the established “players” in the commercial content processing sector. I tracked down Mats Bjore, one of the founders of Silobreaker. I interviewed him as part of my Search Wizards Speak series in 2008, and you may find that information helpful in understanding the new concepts in the Silobreaker service.
What are some of the changes that have taken place since we spoke in June 2008?
Mats Bjore: There are several new things and plenty more in the pipeline. The layout and design of Silobreaker.com have been redesigned to improve usability; we have added an Energy section to provide a more vertically focused service around both fossil fuels and alternative energy; we have released Widgets and an API that enable anyone to embed Silobreaker functionality in their own web sites; and we have improved our enterprise software to offer corporate and government customers “local” customizable Silobreaker installations, as well as a technical platform for publishers who’d like to “silobreak” their existing or new offerings with our technology. Industry-wise, the recent statements by media moguls like Rupert Murdoch make it clear that the big guys want to monetize their information. The problem is that charging for information does not solve the problem of a professional already drowning in information. This is like trying to charge a man who has fallen overboard for water instead of offering a life jacket. Wrong solution. The marginal loss of losing a few news sources is really minimal for the reader, as there are thousands to choose from anyway, so unless you are a “must-have” publication, I think you’ll find out very quickly that reader loyalty can be fickle or short-lived or both. Add to that that news reporting itself has changed dramatically. Blogs and other types of social media are already favoured over many newspapers, and we saw Twitter’s role during the election demonstrations in Iran. Citizen journalism of that kind (immediate, straight from the action, and free) is extremely powerful. But whether old or new media, Silobreaker remains focused on providing sense-making tools.
What is it going to be, free information or for fee information?
Mats Bjore: I think there will be free, for fee, and blended information, just like Starbucks coffee. The differentiators will be “smart software” like Silobreaker and some of the Google technology I have heard you describe. However, the future is not just lots of results. The services that generate value for the user will have multiple ways to make money. License fees, customization, and special processing services, to name just three, will differentiate what I can find on your Web log and what I can get from a Silobreaker “report”.
What can the museum pieces like Dialog and Ebsco do to get out of their present financial swamp?
Mats Bjore: That is a tough question. I also run a management consultancy, so let me put on my consultant hat for a moment. If I were Reed Elsevier, Dow Jones/Factiva, Dialog, Ebsco or owned a large publishing house, I must realize that I have to think out of the box. It is clear that these organizations define technology in a way that is different from many of the hot new information companies. Big information companies still define technology in terms of printing, publishing or other traditional processes. The newer companies define technology in terms of solving a user’s problem. The quick fix, therefore, ought to be to start working with new technology firms and see how they can add value for these big dragons today, not tomorrow.
What does Silobreaker offer a museum piece company?
Mats Bjore: The Silobreaker platform delivers access and answers without traditional searching. Users can spot what is hot and relevant. I would seriously look at solutions such as Silobreaker as a front end to reach new customers, capture revenue from ad-sponsored free content, and draw a wider audience to click through for premium content. (Most of us are unaware of the premium content that is out there, since the legacy contracts typically reach only big companies and organizations.) I am surprised that Google, Microsoft, and Yahoo have not moved more aggressively to deliver more than a laundry list of results with some pictures.
Is the US intelligence community moving more purposefully with access and analysis?
Mats Bjore: The interest in open source is rising. However, there is quite a bit of inertia when it comes to having one set of smart software pull information from multiple sources. I think there is a significant opportunity to improve the use of information with smart software like Silobreaker’s.
Stephen Arnold, August 25, 2009
May 27, 2009
My comments will be carried along on the flow of Twitter commentary today. This post is to remind me that at the end of May 2009, the Google era (lots of older Web content) has ended and the Twitter or real time search era has arrived. Granted, monetization, stability, maturity, and consumerization have not yet climbed on the real time search bandwagon. But I think these fellow travelers are stumbling toward the rocket pad.
Two articles mark this search shift. Sure, I know I need more data, but I want to outline some ideas here. I am not (in case you haven’t noticed) a real journalist. Save the carping for the folks who used to have jobs and are now trying to make a living with Web logs.
The first article is Michael Arrington’s “Topsy Search Launches: Retweets Are the New Currency of the Web” here. The key point for me was not the particular service. What hooked me were these two comments in the article:
- “Topsy is just a search engine that has a fundamentally new way of finding good results: Twitter users.” This is a very prescient statement.
- “Influence is gained when others retweet links you’ve sent out. And when you retweet others, you lose a little Influence. So the more people retweet you, the more Influence you gain. So, yes, retweets are the new currency on the Web.”
My thoughts on these two statements are:
- Topsy may not be the winner in this sector. The idea, however, is very good.
- The time interval between major shifts in determining relevance is now likely to decrease. Since Google’s entrance, there hasn’t been much competition for the Mountain View crowd. The GOOG will have to adapt or face the prospect of becoming another Microsoft or Yahoo.
- Now that Topsy is available, others will grab this notion and apply it to various content domains. Think federated retweeting across a range of services. The federated search systems have to raise the level of their game.
The second article was Steve Rubel’s “Visits to Twitter Search Soar, Indicating Social Search Has Arrived” here. I don’t have much to add to Mr. Rubel’s write up. The key point for me was:
I think there’s something fundamentally new that’s going on here: more technically savvy users (and one would assume this includes journalists) are searching Twitter for information. Presumably this is in a tiny way eroding searches from Google. Mark Cuban, for example, is one who is getting more traffic to his blog from Twitter and Facebook than Google.
For the purposes of this addled goose, the era of Googzilla seems to be in danger of drawing to a close. The Googlers will be out in force at their developers’ conference this week. I will be interested to see if the company will have an answer to the social search and real time search activity. With Google’s billions, it might be easier for the company to just buy tomorrow’s winners in real time search. Honk.
Stephen Arnold, May 27, 2009
May 12, 2009
I learned from my son (founder of Adhere Solutions) that his team and Perfect Search have a new product available, OBX. I was impressed with the OBX and the way in which the two companies explained their innovation.
The One Box Extender (OBX) allows users to search databases quickly and cost-effectively within the Google Search Appliance.
This product extends the Google Search Appliance to enable organizations to search their database content with blistering query speeds, all delivered seamlessly through the Google Search Appliance’s OneBox interface.
Presently, Google Search Appliance users search their database content by sending the query through the OneBox Connector to retrieve results from different systems. This approach places query load on the database(s), and slows down the speed of the search for the end users. The Perfect Search One Box Extender (OBX) for the Google Search Appliance enables rapid search of Oracle, Microsoft SQL, DB2, MySQL, and any other SQL compliant database without placing any additional load on these systems. The OBX integrates within the same Search Engine Results Page for database search through the Google Search Appliance’s OneBox API.
Perfect Search and Adhere Solutions… enabling hyper federation.
Traditionally, enterprise search solutions are expensive and can be challenging to implement. The Google Search Appliance with the Perfect Search OBX provides a cost effective, appliance-based solution to index valuable database content. Many current Google Search Appliance users leverage Google’s OneBox connectors as a way to avoid indexing database content purely for cost reasons. Now, these organizations can index their database content, increase speed and relevancy, and remove load from their databases, all at a low cost.
Features of the One Box Extender include:
- Integrates the power of database search with the Google Search Appliance OneBox
- Provides connectivity to Oracle, Microsoft, DB2, MySQL, and other JDBC databases
- Can be used to search Microsoft Exchange email records
- Can index millions, or even billions of database records at a fixed cost
- Removes load on existing database systems
- Provides better results than traditional SQL queries
- Results appear within the Google Search Engine Result Page instantly
- Much lower cost than traditional enterprise search software approaches
- Complies with database security policies
- Customizable database displays.
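The core idea behind the OBX, as I understand it, can be illustrated in miniature: crawl the database rows once into a search index, then answer queries from that index so the database sees no query load. This is a toy sketch, not Perfect Search’s actual technology; sqlite3 stands in for Oracle, DB2, or MySQL, and the index is a bare-bones in-memory inverted index.

```python
# Toy illustration: index database content once, then serve searches
# from the index so queries place no additional load on the database.
import sqlite3
from collections import defaultdict

# A stand-in "enterprise database" of email records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emails (id INTEGER, subject TEXT)")
conn.executemany("INSERT INTO emails VALUES (?, ?)",
                 [(1, "quarterly sales report"),
                  (2, "sales meeting moved"),
                  (3, "holiday schedule")])

# One-time crawl: read every row and build an inverted index.
index = defaultdict(set)
for row_id, subject in conn.execute("SELECT id, subject FROM emails"):
    for word in subject.lower().split():
        index[word].add(row_id)

def search(term):
    # Served entirely from the index; the database is not touched.
    return sorted(index.get(term.lower(), set()))

print(search("sales"))  # -> [1, 2]
```

Contrast this with the OneBox Connector approach described above, where each user query is translated into a live SQL query against the source system, adding latency and load.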
“Adhere Solutions was founded by a team of search industry veterans with the vision of extending the capabilities of the Google Search Appliance and meeting the demand for associated professional services. We provide Google Enterprise customers with support throughout installation and configuration as well as applications built exclusively for the Google Search Appliance,” said Erik Arnold, director, Adhere Solutions. “Through the partnership with Perfect Search we will be able to offer Google Search Appliance customers the ability to search indexed databases without a massive spike in costs.”
“We are thrilled to be able to partner with such an outstanding organization as Adhere Solutions,” states Tim Stay, CEO of Perfect Search Corporation. “They have deep expertise providing robust solutions utilizing Google’s applications for the enterprise. With their guidance, we have been able to integrate the speed and capacity of the Perfect Search indexing and search engine to the breadth and functionality of the Google Search Appliance.”
“The OBX extends the functionality of Google’s strong suite of enterprise applications to large content repositories such as massive databases and email archives,” states George Watanabe, VP of Business Development at Perfect Search. “Historically, searching these very large data sets has been very expensive, but today, Perfect Search and Adhere Solutions are providing a cost-effective search solution that works seamlessly through the Google OneBox interface.”
Adhere Solutions is a Google Enterprise Partner providing products and services that help organizations accelerate their adoption of Google technologies and cloud computing. Adhere Solutions’ team of consultants help customers leverage Google’s Enterprise Search products, Google Maps, and Google Apps to improve access to information, productivity, and collaboration.
Perfect Search Corporation is a software innovation company that specializes in development of search solutions, focusing on speed, scalability, stability, and savings. A total of eight patents have been applied for around the developing technology. The suite of search products is available on multiple platforms, from small mobile devices, to single servers, to large server farms. For more information, contact Perfect Search at www.perfectsearchcorp.com or +1.801.437.1100.
When I spoke with Perfect Search and got a description of the OBX, I concluded that Perfect Search and Adhere had moved beyond basic mash up and into a new territory. The phrase that was used to describe this product was “hyper federation.” This was the first time I heard this description, and I think that Perfect Search and Adhere have broken new ground and have a way to explain what their engineers have accomplished.
Stephen Arnold, May 12, 2009
April 12, 2009
I was asked about data virtualization last week. As I worked on a short report for the client, I reminded myself about Composite Software, a company with “data virtualization” as a tagline on its Web site. You can read about the company here. Quick take: the firm’s technology performs federation. Instead of duplicating data in a repository, Composite Software “uses data where it lives.” If you are a Cognos or BMC customer, you may have some Composite technology chugging away within those business intelligence systems. The company opened for business in 2002 and has found a customer base in financial services, military systems, and pharmaceuticals.
The angle that Composite Software takes is “four times faster and one quarter the cost.” The “faster” refers to getting data where it resides and as those data are refreshed. Repository approaches introduce latency. Keep in mind that no system is latency free, but Composite’s approach minimizes latency associated with more traditional approaches. The “cost” refers to the money saved by eliminating the administrative and storage costs of a replication approach.
The technology makes use of a server that handles querying and federating. The user interacts with the Composite server and sees a single-view of the available data. The system can operate as an enabling process for other enterprise applications, or it can be used as a business intelligence system. In my files, I located this diagram that shows a high level view of Composite’s technology acting as a data services layer:
A more detailed system schematic appears in the company’s datasheet “Composite Information Server 4.6,” available here. A 2009 explanation of the Composite virtualization process is also available from the same page as the information server document.
The system includes a visual programming tool. The interface makes it easy to point and click through SQL query build-up. I found the graphic touch for joins useful but a bit small for my aging eyeballs.
If you are a fan of mashups, Composite makes it possible to juxtapose analyzed data from diverse sources. The company makes available a white paper, written by Bloor Research, that provides a useful round up of some of the key players in the data discovery and data federation sector. You have to register before you can download the document. Start the registration process here.
Keep in mind that this sector does not include search and content processing companies. Nevertheless, Composite offers a proven method for pulling scattered, structured data together into one view.
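The data virtualization pattern described above can be sketched in a few lines. This is a hypothetical toy, not Composite’s product: two sqlite databases stand in for separate enterprise systems, and a small middle layer queries each source where it lives and presents one combined view, with no replicated repository in between.

```python
# Toy data-virtualization layer: query data where it lives and
# present a single unified view, rather than copying it into a
# repository first.
import sqlite3

# Stand-in "CRM" system.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")

# Stand-in "billing" system, living separately.
billing = sqlite3.connect(":memory:")
billing.execute("CREATE TABLE invoices (customer_id INTEGER, amount REAL)")
billing.execute("INSERT INTO invoices VALUES (1, 250.0)")

def unified_view():
    # Federate at query time: no replication, so results reflect the
    # sources as they are right now (avoiding the staleness that a
    # copied repository introduces).
    names = dict(crm.execute("SELECT id, name FROM customers"))
    return [(names.get(cid, "?"), amt)
            for cid, amt in billing.execute(
                "SELECT customer_id, amount FROM invoices")]

print(unified_view())  # -> [('Acme Corp', 250.0)]
```

The trade-off is the one noted earlier: the federated approach removes replication cost and staleness, but every query now depends on the source systems being reachable and responsive.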
Stephen Arnold, April 12, 2009