SAS: BI Giant Sends Mixed Signals

May 26, 2008

SAS Institute, one of the leaders in business analytics and the world’s largest privately-held software company, has taken several actions that signal changes at the Cary, North Carolina firm.

Prior to the company’s acquisition of Teragram, SAS moved forward with measured steps. Innovations made statisticians and business analysts giddy with excitement. The average employee dependent upon SAS reports and data noticed little if any significant change.

Now changes are coming at what appears to be a faster pace. First, the company announced that it was laying off employees in its educational division. I was introduced to SAS, or “sass” as my professor pronounced it, at university. Other schools indoctrinated their students with SPSS, SAS Institute’s Chicago-based competitor. If you took advanced statistics, you learned one or the other, and once learned, you stuck with that toolkit unless there was a boss who made you learn the other program.

Layoffs That Weren’t

When I saw the announcement of the layoffs, it struck me as odd. Then I saw this explanation by SAS in the Raleigh News & Observer here. Navigate to this story quickly. Traditional publishers pull articles or make them hard to find a day or two after these appear online.

If the terminations rumor were true, the shift seemed to mark a change in tactics in the company’s battle with SPSS. Tomorrow’s analysts use the tools learned in their university statistics and math classes. A pull-back in education said to me, “We’re looking for better ways to market.”

No. SAS is adding staff, and it is not for sale. In my experience, everything is for sale, but I will take SAS at its word. The mix-up in the local newspaper is peculiar. Experienced reporters don’t make these types of mistakes very often. The reporter heard something; otherwise, the story would not have found its way past the editor into the newspaper.

Teragram and Lucene

Then I came across information in CMSWire here saying that Lucene was picking up “some multilingual, natural language support with the recent integration of Teragram Linguistic Tools” from SAS’s Teragram text processing unit.

Lucene, which I describe in some detail here, is an Apache open source search engine. Lucene forms the basis of the IBM Yahoo “free” search, and it is used by many companies looking for a low-cost alternative for such basics as key word search.

With the integration of Teragram’s tools, Lucene appears to get a steroid injection. Beefed up, Lucene should be able to give a Lucene adept a way to use taxonomies and classification schemes. These are essential if you want to provide users with “See Also” and “Use For” suggestions. The popular point-and-click interfaces such as the one I showed in my talk at the Enterprise Search Summit for the Oracle Technology Network are what users crave instead of a laundry list of search results. (In that Oracle demo I showed the Siderean Software technology implemented by Oracle here.)
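
For readers who want to see the shape of the idea, here is a minimal sketch in Java against the Lucene 2.x API current as I write this. The tiny taxonomy map, field names, and sample text are my own illustrations, not Teragram’s tooling; the point is simply that a Lucene adept can expand a raw query with taxonomy variants and surface “See also” suggestions alongside the hits:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.RAMDirectory;

public class TaxonomySearch {
    // Toy taxonomy: a preferred term maps to its variants, feeding both
    // query expansion and "See also" prompts.
    static final Map<String, String[]> TAXONOMY = new HashMap<String, String[]>();
    static { TAXONOMY.put("automobile", new String[] { "car", "motor vehicle" }); }

    public static void main(String[] args) throws Exception {
        // Index one sample document in memory.
        RAMDirectory dir = new RAMDirectory();
        StandardAnalyzer analyzer = new StandardAnalyzer();
        IndexWriter writer = new IndexWriter(dir, analyzer, true);
        Document doc = new Document();
        doc.add(new Field("content", "The car market is slowing this quarter.",
                Field.Store.YES, Field.Index.TOKENIZED));
        writer.addDocument(doc);
        writer.close();

        // Expand the user's raw term with taxonomy variants before searching.
        String userTerm = "automobile";
        StringBuilder expanded = new StringBuilder(userTerm);
        for (String alt : TAXONOMY.get(userTerm)) {
            expanded.append(" OR \"").append(alt).append('"');
        }

        IndexSearcher searcher = new IndexSearcher(dir);
        Hits hits = searcher.search(
                new QueryParser("content", analyzer).parse(expanded.toString()));
        System.out.println("Hits: " + hits.length());
        System.out.println("See also: " + Arrays.toString(TAXONOMY.get(userTerm)));
        searcher.close();
    }
}
```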

If Teragram makes its bag of text processing tricks available, you will be able to provide some advanced features to blunt the rough edges of key word search. Teragram provides a spell check feature, and it supports different languages.
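
Lucene’s contrib module already offers a basic version of the spell check feature. A minimal sketch, assuming a one-word-per-line dictionary file at a placeholder path (“terms.txt” is my invention); Teragram’s multilingual spell checking is presumably far more sophisticated:

```java
import java.io.File;
import org.apache.lucene.search.spell.PlainTextDictionary;
import org.apache.lucene.search.spell.SpellChecker;
import org.apache.lucene.store.RAMDirectory;

public class DidYouMean {
    public static void main(String[] args) throws Exception {
        // Build a spelling index from a one-word-per-line dictionary file.
        // "terms.txt" is a placeholder path, not anything from the article.
        SpellChecker spell = new SpellChecker(new RAMDirectory());
        spell.indexDictionary(new PlainTextDictionary(new File("terms.txt")));

        // Offer up to five corrections for a misspelled query term.
        for (String s : spell.suggestSimilar("taxonmy", 5)) {
            System.out.println("Did you mean: " + s);
        }
    }
}
```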

In short, the SAS Teragram move may signal a shift in marketing for the Cary, North Carolina firm. SAS once meant proprietary. With the Lucene Teragram hook up, SAS is playing an open source card.

When I step back, these two unrelated events indicate to me that staid SAS is in the midst of change. There’s increasing competition in business analytics; see my HiQube and Infobright essays. The Teragram technology makes it possible for SAS to distance itself over time from the Inxight technology that SAS has integrated into its core platform. Inxight, as you may know, was a text processing tools vendor not unlike Teragram. Business Objects, a competitor to SAS, bought Inxight. Then Business Objects itself was acquired by SAP, further muddying the water for integrated business analytics.

The confusion over staff changes underscores a lack of coordination. SAS had previously done a good job managing its public face. A gaffe in the hometown newspaper means that communication signals were crossed, possibly due to pressures within SAS created by marketing shifts.

My view is that I now have to pay closer attention to SAS. For decades, the company hummed along like a Singer sewing machine. Now, the jerks and starts indicate that some changes are taking place. We don’t know what’s happening. We do know that competitive pressures in the once quiet business and text analytics market niche are becoming evident. Agree? Disagree? Let me know.

Stephen Arnold, May 25, 2008

Government High-Tech Investments: IN-Q-TEL

May 26, 2008

I received an email from a colleague new to the Federal sector. Her email included comments and links about US government funding of high technology companies. I was surprised because I assumed that most people knew of the IN-Q-TEL organization. As US government URLs go, IN-Q-TEL’s will baffle some people. First, the hyphens throw off some folks. Then the group’s use of the Dot Org domain is another stumbling block.

[Image: IN-Q-TEL splash page]

In a nutshell, IN-Q-TEL makes clear what it does and why:

IN-Q-TEL identifies, adapts, and delivers innovative technology solutions to support the missions of the Central Intelligence Agency and the broader US intelligence community.

I’m not interested in whether IN-Q-TEL is doing a great job or a lousy job. I’m not concerned about its mission, its funding, or its management team.

What I find fascinating is the organization’s choice of companies in which to invest. I don’t know the budget range of IN-Q-TEL, but my sources tell me that the investments stick close to $1 million, sometimes more, sometimes less. You can read more about IN-Q-TEL at these links:

  • The Wikipedia entry, and I am not vouching for the accuracy of this entry
  • The CIA’s own description here
  • KMWorld’s write up here. (I am a paid columnist for KMWorld, but I did not contribute to this story.)

The purpose of this feature is to provide a snapshot of the companies in which IN-Q-TEL has invested. I’ve identified more than 70 companies. This is too many to put in one posting, so I will break up the list and cover the period 2000 to 2003 here and do each subsequent year in additional Beyond Search postings.

In the period from 2000 to 2003, IN-Q-TEL invested in 25 companies. Keep in mind that I may have overlooked some in my research. If you know of a company I missed, please, use the comment section of this Web log to update my information. These appear in the table below:

Read more

HiQube: Another Business Intelligence System

May 25, 2008

Data management, not database, issues now dominate discussions about extracting information from log files, finding ways to manipulate data in financial transactions without delay, and making sense of telemetry data flows.

HiQube is a new high-performance business intelligence (BI) software solution that, the company says, quickly delivers in-depth business analysis capability and superior reporting. The HiQube technology is described as easy to use and as the first to combine hierarchical, relational, and multidimensional database technologies, providing users with unparalleled decision-making power. HiQube BI software solutions are available and supported worldwide.

In January 2007, Altair Engineering (Ann Arbor, Michigan) purchased HiCare, renamed the company, and began marketing the company’s technology more aggressively. Altair is a privately-held firm with an estimated $140 million in revenue in 2007.

The company’s official Web site is here. I found the pre-acquisition Web site more useful. It is here. Don’t let the Italian descriptions throw you. Google Translate makes short work of the language barrier.

What I find interesting is that innovations are coming from specialist firms often based outside the United States. I wrote about the Canadian outfit Infobright last week; now I want to talk about the Italian company HiQube (formerly HiCare).

The company’s technology is a proprietary database that combines three data management technologies in one system. You can manipulate data in a traditional relational form. You also can implement hierarchical data management. The approach I find most interesting is the company’s multidimensional (n-cube) data management system. If you are not sure of the differences among these types, let me offer a greatly simplified comment about each type (a toy sketch follows the list):

  • Relational–The Codd database. Think of a table in DB2, Oracle, or SQL Server. Data reside in columns and rows.
  • Hierarchical–This is a structure in which records are organized as a tree or parent-child relationship. Each child type is related to only one parent type.
  • Multidimensional–This is a data structure that has three or more independent dimensions; for instance, sales by region by product and by time.
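
Here is the promised toy sketch of the multidimensional idea, in plain Java rather than anything from HiQube’s actual (proprietary) engine: each cell of an n-cube is addressed by a tuple of dimension values, and a report is a roll-up across one or more dimensions. Every name and number is my own illustration:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MiniCube {
    public static void main(String[] args) {
        // Each cell is addressed by three independent dimensions:
        // region x product x time, as in the example above.
        Map<List<String>, Double> cube = new HashMap<List<String>, Double>();
        cube.put(Arrays.asList("EMEA", "Widgets", "2008-Q1"), 120.0);
        cube.put(Arrays.asList("EMEA", "Widgets", "2008-Q2"), 140.0);
        cube.put(Arrays.asList("APAC", "Widgets", "2008-Q1"), 75.0);

        // Roll up: total Widget sales across all regions and all quarters.
        double total = 0.0;
        for (Map.Entry<List<String>, Double> cell : cube.entrySet()) {
            if ("Widgets".equals(cell.getKey().get(1))) {
                total += cell.getValue();
            }
        }
        System.out.println("Widgets, all regions, all quarters: " + total); // 335.0
    }
}
```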

HiQube developed what it called the Lilith Enterprise and Web Server business intelligence software, a decision-making support system with unparalleled graphing and reporting capabilities for interactive visualization of information. Lilith, which reminded me of a character on a popular US television show, provides the ability to view and analyze captured data from multiple perspectives and user profiles. The system is available for on-premises installation. You can also use the technology as a Web service.

Read more

Google Is Proprietary

May 25, 2008

Computerworld has a story about Google called “Google Opens Up a Little on Its Search Algorithms”. The article by Linda Rosencrance has a summary of Udi Manber’s campaign for search quality. Navigate to the story quickly; some of the Computerworld articles can be tough to locate a day or two after they appear on the sprawling IDG Web sites. At the Gilbane Content Management Conference, Mr. Manber will talk about search quality, and I look forward to additional information.

The key point in Ms. Rosencrance’s story is this quote from a Silicon Valley pundit, Rob Enderle, who is a “principal analyst” at the Enderle Group. He offers, according to Ms. Rosencrance:

I think the irony that what we’ve seen with Google is that as time goes on, even though it’s positioned itself as the anti-Microsoft, it’s kind of been reading out of Microsoft’s playbook … In many ways, Google is much more proprietary than Microsoft is, and they actually used open source software to get there. So unlike Microsoft, which started off proprietary and has gradually been opening its stuff up, Google starts off getting other people’s open stuff, turns it proprietary and then makes money off it. It kind of redefines ‘pirate.’ I think Google is feeling a little bit of the heat because people are starting to focus on that a bit.

I am delighted that three years after the Infonortics’ study The Google Legacy, Computerworld and its sources are now perceiving Google as more than great lunches and quirky computer scientists. One reason Microsoft struggles with Google is that the Googlers have learned from Microsoft but added that special Googley touch. Redmond, facing a mirror of the Bill Gates and Paul Allen model, struggles to find an answer with marketing specialists and bureaucracy. The mismatch doesn’t seem fair. I await more insights from Computerworld and Mr. Enderle.

Stephen Arnold, May 25, 2008

Keeping Google AI in Touch with Reality

May 25, 2008

A remarkable post by Anand Rajaraman in Datawocky deserves your attention–close attention at that. You can read the article here. I anticipate that Techmeme will follow the discussion thread throughout the day. The Techmeme link is here.

The core argument in Mr. Rajaraman’s posting pivots on a question, “Are Machine Learning Models Prone to Catastrophic Failure?” To answer this question, Mr. Rajaraman spoke with Dr. Peter Norvig, a Google wizard and machine intelligence expert.

Google uses humans to ride herd on some algorithms. Mr. Rajaraman does a very good job of explaining that some processes just hum along; others require Googley monitoring. The answer to the question, then, is, “It depends.”
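
The general pattern is easy to illustrate. The sketch below is my own toy illustration, not Google’s mechanism: a rolling quality metric is tracked, and a human reviewer is paged when it drifts below a baseline. Every number in it is made up:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DriftGuard {
    public static void main(String[] args) {
        // Page a human reviewer when a model's rolling quality metric
        // slips below its historical baseline by a chosen margin.
        double baseline = 0.92;  // assumed historical accuracy
        double margin = 0.05;    // allowed slippage before escalation
        double[] dailyScores = { 0.93, 0.91, 0.90, 0.84, 0.82 }; // made-up feed

        Deque<Double> window = new ArrayDeque<Double>();
        for (double score : dailyScores) {
            window.addLast(score);
            if (window.size() > 3) window.removeFirst(); // keep a 3-day window
            double avg = 0.0;
            for (double s : window) avg += s;
            avg /= window.size();
            if (avg < baseline - margin) {
                System.out.println("Rolling average " + avg + ": page a human reviewer");
            }
        }
    }
}
```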

The most interesting part of this posting is what’s unsaid. For example, Mr. Rajaraman is an influential figure in the Silicon Valley world. He has had personal and business relationships with a number of Google professionals, including some of Google’s lowest-profile, highest-IQ individuals. In April 2008, Mr. Rajaraman posted an equally important article about Google. This essay, “The Story behind Google’s Crawler Upgrade”, struck me as a genuine scoop. You can read it here.

Mr. Rajaraman, who is involved with some Stanford University activities, is the next best thing to having lunch in the Google cafeteria. In fact, his posts are more insightful than the PR generated by Google itself or the “run the game plan” presentations by worker bee Googlers.

Google’s reliance on different types of computational intelligence is itself useful information. The fact that Google wizards monitor certain functions and make adjustments underscores the expensive actions Google takes to keep its system in tune. Google could let its “borg” run unattended. That’s not happening, and Google is willing to invest to ensure that its software machines–one of which is cleverly named the janitor–work the way they are supposed to operate.

Google’s willingness to invest in “fancy math” is equalled by its willingness to invest in Google engineers to keep the race car in tune. The approach stands in sharp contrast to companies whose systems are built via acquisition, so engineers can only use these, not adjust them in meaningful ways. Other companies are content to run certain functions as “black boxes” because the funding is not provided to get under the hood.

A couple of other points about Mr. Rajaraman:

  1. He is on the computer science faculty at Stanford, an institution with many ties to Google
  2. He founded the venture capital firm Cambrian Ventures
  3. He helped start Kosmix and Junglee, where some Googlers worked. Kosmix is a next-generation search engine. Junglee, acquired by Amazon, was a data management system with applicability to online retail.
  4. He’s a graduate of the Indian Institute of Technology and has a Ph.D. from Stanford
  5. According to the Cluuz.com relationship map, he has a tie to Aster Data, which I wrote about here. I will leave it to you to explore the other Google relationships that Mr. Rajaraman appears to have.

If you want to keep your finger on the pulse of Google, Datawocky is a must read. Sometimes I feel, without any hard evidence, that Google uses Mr. Rajaraman to put certain facts about the world’s largest application platform in circulation.

Stephen Arnold, May 25, 2008

The Economics of Dealing with Complex Information

May 24, 2008

Microsoft announced via its Live Search blog that its Live Search Books and Live Search Academic are being “taken down”. Google’s book digitization and journal project caused concern among the commercial database vendors. Google, with its generous cash flow and avowed goal of indexing “all the world’s information”, seemed to sign the death warrants of such companies as Dialog, Ebsco, and ProQuest, among others. A flap of the wings to Techmeme for its related links.

The economics of doing anything significant with complex information are not taught in the ivory towers at Harvard, Stanford, and Yale. Google–indifferent to the brutal economics that hobble commercial database publishers–has the cash to figure out how to use software to do tasks usually done by humans. For example, Google has figured out how to scan a book, have software determine what should be converted to ASCII, and generate a reasonably clean, searchable text file. The page images are mostly linked to the corresponding text references. Not so for most database producers. These decisions still require humans, often working in exotic locations where labor is less expensive than in Ann Arbor, Boston, and Denver.
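
One small piece of that trick, linking search hits in the OCR text back to the scanned page images, can be sketched in a few lines. This is my own illustration, not Google’s pipeline: record the character offset at which each page’s text begins, then map any hit offset back to its page image:

```java
import java.util.TreeMap;

public class PageLinkedText {
    public static void main(String[] args) {
        // pageTexts[i] stands in for the OCR output of page image i + 1.
        String[] pageTexts = {
            "hyperbolic geometry arose from the parallel postulate",
            "train schedules for the Oxford to London line"
        };

        // Concatenate the pages, remembering where each page begins.
        StringBuilder full = new StringBuilder();
        TreeMap<Integer, Integer> offsetToPage = new TreeMap<Integer, Integer>();
        for (int i = 0; i < pageTexts.length; i++) {
            offsetToPage.put(full.length(), i + 1);
            full.append(pageTexts[i]).append('\n');
        }

        // A hit in the searchable text maps straight back to its page image.
        int hit = full.indexOf("train schedules");
        if (hit >= 0) {
            int page = offsetToPage.floorEntry(hit).getValue();
            System.out.println("Match found on page image " + page);
        }
    }
}
```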

Google also has figured out how to take content, apply structure to it, create a variety of additional index terms (metadata), and convert the whole shebang into easily manipulated numerical representations. Not so with the mainstream commercial database publishers. Tagging, cross referencing, and content clean up still take expensive humans.

Manipulating the information in books and journals is, for commercial database producers, very expensive. Many costs are difficult to reduce. Google, on the other hand, has invested over the last decade to find software solutions to these intractable cost problems. Fortunately for the commercial database publishers, Google so far has been content to process books and journals. Google finds access to weighty tomes useful for a variety of purposes. I haven’t heard that these motive forces are related to revenue. Google appears to be casual about the cost of its books and journals project. If you aren’t familiar with Google Books, navigate to http://books.google.com. For Google Scholar, go to http://scholar.google.com.

Enter Microsoft. The company jumped to index books and journals. Now it is climbing out of the swamp of costs. Unlike Google, Microsoft faces–maybe for the first time in the company’s history–a need to focus its technical and financial resources. Google keeps on scanning and indexing documents about hyperbolic geometry. Microsoft can’t and no longer will.

For me the most telling statement in the announcement is:

Given the evolution of the Web and our strategy, we believe the next generation of search is about the development of an underlying, sustainable business model for the search engine, consumer, and content partner. For example, this past Wednesday we announced our strategy to focus on verticals with high commercial intent, such as travel, and offer users cash back on their purchases from our advertisers. With Live Search Books and Live Search Academic, we digitized 750,000 books and indexed 80 million journal articles. Based on our experience, we foresee that the best way for a search engine to make book content available will be by crawling content repositories created by book publishers and libraries. With our investments, the technology to create these repositories is now available at lower costs for those with the commercial interest or public mandate to digitize book content. We will continue to track the evolution of the industry and evaluate future opportunities.

Here’s how I read this. First, the reference to next-generation search is about making money with a business model. In short, next-generation search is not about moving beyond traditional metadata, pushing into data management, and creating new types of user experiences. Search at Microsoft means money.

Second, Microsoft wants to index what’s available. That’s certainly less costly than fiddling with the train schedules that Google has indexed at Oxford University. In my experience, indexing what is already available begs for applications that move beyond what I can do at my local library or with a search engine such as Exalead.com or a metasearch system such as Vivisimo’s Clusty.com.

Third, the notion of tracking and looking for future opportunities does not convince me that Microsoft knows what it will do tomorrow. And whatever the company does, by definition, will be reactive.

Microsoft’s termination of this service means that the status quo in the commercial database world will be subject to pressure from Google. More troubling is that Google’s technical papers and its patent documents reveal that the company is moving beyond key word search at an increasing pace. I think that it is significant that Microsoft is husbanding its resources. Now I want to read in a Microsoft Web log about an innovation path that will permit the company to leapfrog over Google. Send me a link to this information, and you will receive a gentle quack.

Stephen Arnold, May 24, 2008

Usability: A Must Read from the WSJ

May 24, 2008

I’m too nerdy to read the Wall Street Journal every day. A few minutes with most stories, and I feel as if an MBA consultant were telling me that investment bankers are really nice people. I made an exception this morning, and I strongly urge you to navigate to the Journal’s “Business Technology” Web log here.

The post “Business Software Vendor Finds Business Software Impossible to Use” carries a date of May 21, 2008, but I just saw it. Nevertheless, this is an important story. The main point for me is this statement:

IFS, a business-software vendor, sent us [the Wall Street Journal] … the results of a survey earlier this year of more than 1,000 respondents. Its findings: “A full 60 percent of respondents said their enterprise software was somewhat difficult, very difficult or almost impossible to use. Only 9 percent characterized their applications as very easy to use.” The biggest time wasters, IFS found, were the need to search through complex navigation systems to find information and the need to learn how to use many programs that all worked differently.

Why is this important? First, the data substantiate my research, Sinequa’s data, and Jane McConnell’s information about enterprise search. High dissatisfaction rates and wasted time–these are the cripplers of some organizations’ efficiency and decision making.

I’m going to try to get my hands on the full study from IFS. The Web log post doesn’t tell me how to get a copy. The WSJ provides this link to a study summary here. I filled in the IFS form here, but as of 7 am Eastern on May 24, 2008, I don’t have a copy. I want one. If you come across the full report, let me know.

Usability, not technology, is the key to success, it seems. Hasn’t this been Steve Jobs’s mantra for a long time? Good work, Vauhini Vara. A content quack to you from the Beyond Search goose.

Stephen Arnold, May 24, 2008

Ovum Says, ‘Microsoft Has a Plan’ for Search

May 24, 2008

Ovum, a British consultancy of high repute, asserts that Microsoft has its sights set on being “the king of search”. You can read its summary here. This article, penned by Mike Davis, is based upon a longer piece available to Ovum’s paying customers as part of the pundit shop’s Straight Talk service.

The Ovum conclusion, if I read Mr. Davis’ article correctly, is that Microsoft’s pay-for-traffic initiative is just one component of a far larger strategy to close the gap with Google. He writes:

The technology for the programme came from the acquisition of Jellyfish.com last year. The service is a different proposition to merchants than the usual ‘cost per click(s)’ such as used by Microsoft’s current nemesis Google. The payment model being used by Microsoft is called Cost Per Acquisition, and the advertiser only pays when the advertisement results in a purchase.

So, it’s not pay for traffic. It’s a rebate of three to 30 percent, requires a minimum balance of $5, and is designed to go after Amazon.com and eBay.com.
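
The mechanics are simple enough to sketch. Here is a toy model of the cash-back accrual using the three to 30 percent range and the $5 minimum cited above; the class and method names are my own inventions, not Microsoft’s system:

```java
public class CashBack {
    // Cost Per Acquisition sketch: the shopper accrues a rebate only when a
    // purchase happens, unlike cost per click. The rates and the $5 minimum
    // come from the figures cited above; the names are illustrative.
    static double accrued = 0.0;

    static void recordPurchase(double orderTotal, double rebateRate) {
        accrued += orderTotal * rebateRate; // rebateRate between 0.03 and 0.30
    }

    static double redeem() {
        if (accrued < 5.0) return 0.0; // below the $5 minimum balance
        double payout = accrued;
        accrued = 0.0;
        return payout;
    }

    public static void main(String[] args) {
        recordPurchase(40.0, 0.03); // $1.20 accrued, not yet redeemable
        System.out.println("Payout: $" + redeem()); // 0.0
        recordPurchase(60.0, 0.10); // $6.00 more; balance now $7.20
        System.out.println("Payout: $" + redeem()); // 7.2
    }
}
```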

The point that jumped out at me is that Mr. Davis tosses the Fast Search & Transfer acquisition into the mix. Mr. Davis sees the pay-for-traffic plan announced by William Gates at the Advance 08 advertising conference and the $1.2 billion deal for Fast Search as signs of Microsoft’s determination to be “king of search”.

Let’s assume that Ovum’s research and Mr. Davis are right on target. This means that:

  • The Jellyfish technology underpinning the cash back for search play will generate traffic and hence ad revenue for Microsoft.
  • The Fast Search technology will allow Microsoft to break through the 50 million document barrier that some SharePoint users encounter with native SharePoint search.
  • Consumers and advertisers will leap on the cash back bandwagon, and SharePoint licensees will pay for Fast ESP (enterprise search platform).

Each of these actions must take place quickly and produce gains for Microsoft.

How much traffic and revenue does Microsoft need to become “king of search”?

The gap between Microsoft and Google is a reasonably large one. Recent data from an admittedly uneven resource suggest that Google has about 62 percent of the US search traffic. Google’s share of the global market is higher. In the April 2008 period (you can read Mashable’s quite good analysis here), Microsoft lost search market share. If the ComScore data are accurate, Microsoft accounts for 9.1 percent of the search traffic. The month before, Microsoft’s search traffic was 9.4 percent. Google’s share is growing, if the ComScore data are correct; Microsoft’s share of search traffic is declining. Wow!

In order to close this gap, the pay-for-search scheme is going to have to reverse a declining trend, attract advertisers, and scale like the devil. I don’t think the pay-for-traffic scheme will work whether it is aimed at Amazon.com, eBay.com, Google.com, or Yahoo.com.

The Fast Search deal is going to have to show some sizzle. At the recent Enterprise Search Summit, I stopped by the Microsoft exhibit and asked about search. I was told SharePoint was quite good. I asked about Fast Search and I was told that Fast Search had a booth. I asked, “Please, show me the Fast ESP system running on a SharePoint system.” The nice Microsoft person said, “I don’t have that information.” So, no FAST logo in the Microsoft booth and no demo that I could see. Keep in mind that there were vendors such as BA-Insight, Coveo and ISYS Search Software, among others, showing potential buyers SharePoint search systems that worked, scaled, and delivered the nifty metatagging so much in demand.

I walked to the opposite side of the room where the Fast Search exhibit was. I asked to see the Fast ESP SharePoint demo. I was told, “Come back between sessions. We will have it up then.” I came back and was told, “We’ll walk you through the basic systems. SharePoint works the same way.” I asked, “Where’s your Microsoft logo?” The really friendly person told me, “We don’t have that logo yet. Leave your card, and I will get that information for you.” I said, “No. Your PR guy hassles me about not knowing anything about Fast Search despite my analysis of the system for the US federal government over a two-year period.”

Now putting the pay for traffic puzzle piece up against the Fast Search puzzle piece, Ovum sees a fit. I don’t. What I see is a very large organization faced with market push back on three separate war fighting fronts. A three-front conflict is complex, not tidy. And what are the three fronts?

First, Microsoft controls the desktops of 90 percent of computer users, and Internet Explorer, with a Microsoft Web search box built in, is their default browser. Google’s market share means that people are consciously navigating to Google to run queries even though the Microsoft search box is the default. Most people don’t change their default home page, so extra clicks and typing are required. People like easy, but when it comes to search, people go to Google anyway. I find this pretty amazing. The longer Microsoft persists in losing market share, the more deeply ingrained the Google habit becomes. In the history of online, user habits–once set–become very hard to change.

Second, the pay for clicks approach is a double-edged sword. Here’s why. There is a tremendous incentive for users to find ways to scam the system. Google has to work overtime to snuff out fraudulent clicks. Microsoft–lacking a high traffic site and the easy money of AdSense–will find that it must spend more to deal with tricky users. And if the pay for traffic play is really successful, Microsoft will have to scale its online system quickly. One edge is giving up some money by betting more traffic will yield cash. The other edge is that success means more scaling costs. The way I look at it, the pay for traffic play costs money and does not hold the promise of a way to lame the nimble Googzilla.
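
To make the fraud-fighting chore concrete, here is a crude sketch of one obvious screen: discard clicks from any source that exceeds a per-window threshold. Real systems are vastly more elaborate, and every number and name here is an arbitrary assumption of mine:

```java
import java.util.HashMap;
import java.util.Map;

public class ClickFilter {
    // Bill only the clicks from sources that stay under a per-window
    // threshold; everything over the line is treated as suspect.
    static final int MAX_CLICKS_PER_WINDOW = 3;

    public static void main(String[] args) {
        String[] clicks = { "1.2.3.4", "1.2.3.4", "5.6.7.8", "1.2.3.4", "1.2.3.4" };
        Map<String, Integer> counts = new HashMap<String, Integer>();
        int billable = 0;
        for (String ip : clicks) {
            int n = counts.containsKey(ip) ? counts.get(ip) + 1 : 1;
            counts.put(ip, n);
            if (n <= MAX_CLICKS_PER_WINDOW) billable++; // else: suspected fraud
        }
        System.out.println(billable + " of " + clicks.length + " clicks billed");
    }
}
```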

In fact, Google is adept at scaling quickly and at lower costs due to its use of commodity hardware and its “smart” extensions to Linux. Microsoft has yet to prove that it can scale without taking extreme measures such as complex tiering, using super-fast, expensive, high-end branded server gear from Hewlett Packard and other vendors, and dealing with the time and bandwidth issues imposed by Microsoft’s own 64-bit operating system and application overhead. Microsoft has to spend more to get the basic job done. My take? A huge success for Microsoft results in higher costs. In the short term, that’s not a problem. Over the longer term, higher costs can become a problem even for a deep-pockets giant like Microsoft. If performance lags or user trickery becomes evident, the gains may slip away, leaving puddles of red ink.

Third, the Ovum analysis says that the pay for traffic play is based on the Jellyfish acquisition. The enterprise search initiative is based on the Fast Search acquisition. These two key components were not invented at Microsoft, and I have a hunch that integrating these acquired technologies into the Windows-based systems is a work in progress. Again, more costs and increased chance for technical and managerial friction. Microsoft’s ingrained project manager system and its silo-type structure make feudal squabbles between digital princes a feature of Redmond life. To be “king of search”, these destructive hot spots have to be remedied. Google’s certainly not perfect, but it seems able to innovate without clashes over interface and technology popping up when I use its system.

I wish Microsoft well in its quest to become “king of search”. I know Ovum’s management wants its analyses to be accurate and generate consulting business for the firm’s analysts. I hope SharePoint users find search happiness from Fast ESP. I hope Web searchers find that Microsoft’s Web search initiatives deliver the goods.

Microsoft has to find a way to leapfrog ahead of Google. I’m not sure making acquisitions and paying for traffic fit together seamlessly. Furthermore, I disagree that these two initiatives mesh, have been fully integrated, and represent a significant challenge to the GOOG. Agree? Disagree? Let me know.

Stephen Arnold, May 24, 2008

EMC’s Upcoming Plans

May 23, 2008

Last year, IBM created InfoPrint. When Ricoh, the Japanese copier outfit, invested, InfoPrint became a $1.2 billion company. The idea is that an organization has information scattered in many different systems. IBM’s InfoPrint would make it possible for an organization to tap into these facts and data and generate an output, which could be a personalized invoice, a benefits statement, or a Web page. Viewed one way, InfoPrint is a virtual print shop. Viewed another, it was an IBM play to bring some sort of order to the crazy, poorly-disciplined world of content management, or CMS as its cheerleaders say.

Then a Lexington, Kentucky, company called Exstream Software was acquired by Hewlett-Packard earlier this year for $1.2 billion and change. Exstream became part of HP’s printer unit and marked a turning point in CMS; namely, the notion of software to produce a Web page became a tiny cog in a giant printing or output machine. The functionality in the HP model shifted from the department to a meta function.

Optio, an early entrant in this sector, struggled and then found a buyer called Bottomline. And, at the same time Swedish-based segment leader Streamserve found itself smack in the middle of a CMS revolution.

These changes underscore the Balkanized state of information management in most organizations. To fix a big problem, each of these companies offers a big solution.

Will the H-bomb approach to helping workers write, access, and repurpose information work? Probably not, but it certainly means that the CMS vendors have to respond to the sins of their past.

In New York yesterday, I learned that EMC (once a vendor of storage devices) has begun to reposition itself to become a more significant player in a changing and increasingly contentious market.

Here’s a rundown of what the storage company will do in 2008, if my source has her ear angled the right way.

First, EMC is going to be a player in the enterprise search market. Even though there are more than 300 vendors in this sector, EMC figures that there’s room for one more company. I’m not so sure, because EMC’s products often pose more challenges than they solve when it comes to finding a specific piece of information, whether it sits in one of EMC’s archives or lies buried in the bowels of its Documentum CMS.

Second, EMC is going to be a player in the eDiscovery business. Regulated industries have to save and be able to find information in archives. EMC reasons that this is a growing sector. If Autonomy (the number two company in enterprise search) can make a go of it with its Zantaz eDiscovery unit, EMC can certainly squeeze money from regulated or litigated entities. See my list of more than a dozen companies now in this search niche. EMC will have to find a way to sidestep some specialist companies and the aforementioned Autonomy, which is smaller and more than willing to engage in hand-to-hand combat.

Third, EMC is going to jump into the middle of the emerging enterprise publishing system market, where InfoPrint, Exstream Software, and other players have gained some key sales in the auto industry, insurance, and health care sectors. Remember, this is a market that established CMS vendors like Documentum, Ektron, Vignette, and a hundred others created because their systems were more problematic than panacea.

To top these ambitious plans off, EMC wants to enter the SaaS or cloud computing market. Cloud computing is an emerging trend. EMC is a company able to build high performance storage systems, but pulling off an Amazon or Google play is going to be an extra challenge for EMC.

You can read more about EMC’s plans for the next 12 months in the Computer Reseller News’ story about the company.

My thought is that EMC will need to set clear priorities and face the realities of competing with the likes of IBM, Ricoh, and HP in a sector created largely because CMS systems have been tarballs.

I think EMC has its work cut out for it. But once again, in today’s financial climate, some managers find it easier to assert that their company can provide a one-stop shop for anything that has to do with information. Customers are looking for new solutions, and I think there will be blood on the floor of the conference room and red ink in the company swimming pool for high-tech companies that think their engineers can solve any problem–even the ones their previous software created.

What are your thoughts about the CMS tarballs? Can giants like IBM and HP learn new tricks? Can companies with a core competency in storage transform themselves into cloud-based services companies?

Based on what information I have, EMC will have a tough time delivering in just one sector–for example, enterprise search. Hitting home runs in these other sectors is going to require more than PR puffery.

Stephen Arnold, May 22, 2008

SPSS Reveals Key Facts

May 23, 2008

Analytics and business intelligence giant SPSS revealed today some key facts about its business. You will want to read the Computerwire story that contains the text of an interview with SPSS’s Alex Kormushoff, a senior VP with the Chicago-based firm.

He disclosed:

  • SPSS has 250,000 customers
  • SPSS serves 95 percent of the Fortune 1000, the top 10 global brands, 21 of the top 25 retailers, and 24 of the top 25 research firms
  • Customers span industry, education, and government

The most interesting assertion he made, if I read the Computerwire story correctly, is:

According to Nucleus Research, which tracks the company, 94% of customers achieve a positive ROI in 10 months.

Pretty impressive. I have just two questions. If SPSS has such a dominant position in the market, why is SAS continuing to grow at a healthy clip? And why do firms offering alternatives such as Clarabridge and Megaputer tell me that their analytics businesses are booming?

My thought is that SPSS is painting a picture that has a touch too much of the bold acrylic lime green and signal red colorings.

Analytics is changing and changing quickly. Google looms as a disruptor of the first order. Megaputer’s tie-up with Moscow’s best and brightest mathematicians is not to be ignored. And SAS’s acquisition of Teragram puts that company in a position to challenge the SPSS unstructured information functions with a newer chunk of technology.

My hunch is that traditional analytics companies are in the same kettle of fish as traditional CMS and enterprise search vendors. The markets are shifting quickly, and these presumptive segment leaders are teetering on the brink of great financial and competitive pressure.

Agree? Disagree? Let me know. My bet is on outfits like Google and Megaputer. Just because you don’t know that these organizations are up to their eyeballs in state-of-the-art analytics services doesn’t mean the destabilization of the traditional vendors is not underway.

Stephen Arnold, May 23, 2008
