eBay Analysis: Overlooking the Ling-Temco-Vought Case

February 17, 2009

Not long ago, I wrote about a trophy generation whiz kid who analyzed company and product failure. I pointed out that a seminal study shed light on the “new” developments the whiz kid was pontificating about. The whiz kid pulled the post. I just read a post in a Web log that I find interesting most of the time. The author of “Why eBay Should Consider Breaking Itself Up” is Kevin Kelleher. You can read the story here. The premise of the story is one with which I agree. eBay has collected a number of eCommerce companies and failed to make any of them generate enough revenue to feed the cost maw of the parent. What made me shake my tail feathers was:

  1. The case studies of the Ling-Temco-Vought style roll ups do not warrant a mention in the article. Most business school students know the case, and more than a few investors were burned when LTV went south.
  2. The failure to integrate acquisitions is not a problem exclusive to eBay. Mr. Kelleher left me with the impression that eBay was a singleton. It is not. Other companies with a similar gaggle of entities and similar cost control problems include AOL, IAC (mentioned by Mr. Kelleher), Microsoft, and Yahoo. A key point is that the overhead associated with LTV style operations increases over time. Therefore, instead of getting better numbers, the LTV style roll up generates worsening numbers. In a lousy economy, the decline is accelerated and may not be reversible. There’s no sell off because time is running out on eBay and possibly on a couple of the other companies I mentioned.
  3. The impact of LTV type failures destabilizes other businesses in the ecosystem. I never liked the Vietnam era “domino theory”, but I think the possibility of a sequence of failures increases with the LTV type of failure.

You should read the story because the information about eBay is useful. If you have an MBA, you probably have LTV data at your fingertips. Historical context for me is important. Maybe next time?

Stephen Arnold, February 17, 2009

Ad Age Advises Yahoo: Startling Strategic Counsel

January 19, 2009

I read this weekend that top job openings require technical or scientific training. Imagine my surprise when Ad Age, a dead tree publication for the Liberal Arts and Master of Fine Arts crowd, published “Four Ways Yahoo Can Right Itself under New CEO Bartz.” You can read this remarkable article here. Keep in mind that Yahoo is a technology company. The products and services of Yahoo are based on software, systems, and other arcana that delight computer scientists and electrical engineers, leaving the art gallery and soft drink executives lost in a cloud of unknowing. Furthermore, if you have read my other commentaries about Yahoo, you know that the ills of Yahoo are a manifestation of a misalignment of technology and user needs. Fixing Yahoo, therefore, requires more than a public relations blitz and a handful of consultants to change the ad rate price schedule. Some of the Mad Ave ilk will point to the unsold Super Bowl TV spots and assert, “Yahoo needs to snap up these ad slots and make some brand impact.” Right, advertising online services during the Super Bowl will work just as well as it has for Ask.com’s NASCAR sponsorship.

Abbey Klaasen, the Ad Age journalist, identifies four strategies for the Yahooligans.

First, Yahoo has to hang on to search. I am a bit fuzzy about which “search” is referenced. Yahoo has a cartload of search systems. My hunch is that Ad Age is thinking about Web search and ignoring the Flickr and Delicious systems, which may have more sizzle than the so so Web search. There’s also mail search, the search on the personals section, and so on. Ad Age is aware of the sports and finance information, but I wonder how much analysis is going on at Ad Age. Anyway, the idea is to keep “search”. Let’s assume that Yahoo is to keep its various forms of search.

Second, the recommendation is for Yahoo to “combine search and display data.” I have to admit that I am not sure what this means. Yahoo lacks a homogeneous system; therefore, combining any cluster of services means normalizing, transforming, and manipulating data. Yahoo had a project underway to rationalize some of its disparate data, but I am not sure if that effort is still underway or if it swam onto the rocks. Advertisers have been asking for access to specific slices of Yahoo demographics across services for a while. Yahoo can’t deliver these types of audiences because of technical issues. Yahoo is a technology company. If a service is not available, there’s a technical reason, not a managerial reason. If the cost of “fixing up” the system is too high, the service will not be available. Yahoo has not been able to focus its resources on certain technical problems because it has a GM problem; that is, GM knows what Toyota and Honda do to make autos. GM can’t change the culture, nor can it amass the resources to implement the Toyota and Honda solutions. Yahoo’s engineers are smart. Some go to Google and become happy campers; for example, the Delicious.com founder. It’s not brains; it’s a fundamental technical problem exacerbated by cost and management.
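
To make the “combine search and display data” point concrete, here is a minimal sketch of the kind of normalization work involved. The service names, field names, and records below are hypothetical, not Yahoo’s actual systems; the point is only that disparate services describe the same user differently, and someone has to map them into one schema before any “combined” audience slice exists.

```python
# Hypothetical illustration of normalizing event data from two disparate
# services into one schema. Field names and records are invented; the real
# Yahoo systems and schemas are not known to me.

from datetime import datetime, timezone

def normalize_search_event(raw):
    # Hypothetical search-log format: epoch seconds, "uid", raw query string.
    return {
        "user_id": raw["uid"].lower(),
        "event_type": "search",
        "timestamp": datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        "detail": raw["query"],
    }

def normalize_display_event(raw):
    # Hypothetical display-ad format: ISO 8601 string, "UserID", campaign name.
    return {
        "user_id": raw["UserID"].lower(),
        "event_type": "display_impression",
        "timestamp": datetime.fromisoformat(raw["served_at"]),
        "detail": raw["campaign"],
    }

search_log = [{"uid": "U123", "ts": 1232000000, "query": "hybrid cars"}]
display_log = [{"UserID": "u123", "served_at": "2009-01-15T10:30:00+00:00",
                "campaign": "AutoMaker Spring"}]

combined = ([normalize_search_event(e) for e in search_log] +
            [normalize_display_event(e) for e in display_log])

# Only after this mapping can anyone ask "which searchers saw which ads?"
by_user = {}
for event in combined:
    by_user.setdefault(event["user_id"], []).append(event)

for user, events in by_user.items():
    print(user, [e["event_type"] for e in events])
```

Multiply this little exercise across dozens of services, formats, and privacy rules, and the cost of “fixing up” the system starts to explain why the audience slices advertisers want are not on the menu.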

Third, Ad Age wants Yahoo to sell to “the Unilevers of the world”. My hunch is that this is a play that will require fixing search and audience data first. It is going to be tough to repair the Yahoo-mobile unless one has the right parts. Yahoo is going to require the equivalent of a resto-mod rebuild on the jalopy before the Unilevers pump more cash into the Yahoo advertising opportunity.

Fourth, buy Hulu. Yahoo has been fooling around with video for a while. In case anyone missed the news, Google has managed to make YouTube.com the number two search engine. Hulu.com is also way behind the Googlers in terms of traffic. I grant that Hulu.com is better than Yahoo’s video services. Follow me on this line of reasoning: If Yahoo’s previous attempts at video have been less than stellar, why will Yahoo handle Hulu.com any better? Does anyone remember Finance Vision or the original content production push under Lloyd Braun here? So, I assert that Yahoo’s ability to integrate an acquisition is questionable. Yahoo took years to integrate the Yahoo photo site into Flickr. Let’s assume that Yahoo does buy Hulu. Can Yahoo contribute to the service? At this time, whatever management expertise Yahoo has will be stretched trying to deal with the existing Yahoo technology and financial problems.

In short, I find the Ad Age counsel pretty interesting. It’s not wrong as Mad Ave thinking goes; it’s just from another dimension. I will stick with the reality of the goose pond in Harrod’s Creek, Kentucky.

Stephen Arnold, January 19, 2009

Social Search in the Enterprise Gotcha

December 21, 2008

The cheerleaders for social search in the enterprise will want to read ComputerWorld UK’s “Firms Struggle with Access Management for New Systems” here. Leo King does a good job of explaining what has to be done to deal with the nifty new systems that Web 2.0 baloney artists are pushing into organizations. Most top brass managers don’t have a clue about the risks and costs associated with these largely marketing-hyperbole-based systems. One reality check is the problem of identity and access management for these webby wonders. If the link 404s, take it up with ComputerWorld’s Web master, not me. For me the most important comment in the very good article was:

…Organizations were having to make large changes to existing identity and access management software, in order to keep pace with what the business was doing… IAM product suites now have to deal with new requirements,

The article recycles information from the consulting firm Butler Group. A happy quack to this outfit for generating a finding that matches my own research from the hollow in the hills of Kentucky. I am still skeptical of the azure-tinted consulting firms, however, as a matter of principle.

A word to the enthusiastic but not too wise should be enough. This goose hopes so–before the discovery process begins or before the police seize an organization’s computers and data.

Stephen Arnold, December 21, 2008

Citation Metrics: Another Sign the US Is Lagging in Scholarship

August 31, 2008

Update: August 31, 2008. Mary Ellen Bates provides more color on the “basic cable” problem for professional information. Worth reading here. Econtent does an excellent job on these topics, by the way.

Original Post

A happy quack to the reader who called my attention to Information World Review’s “Numbers Game Hots Up.” This essay appeared in February 2008, and I overlooked it. For some reason, I am plagued by writers who use the word “hots” in their titles. I am certain Tracey Caldwell is a wonderful person and kind to animals. She does a reasonable job of identifying problems in citation analysis. Dr. Gene Garfield, the father of this technique, would be pleased to know that Ms. Caldwell finds his techniques interesting. The point of the long essay, which you can read here, is that some publishers’ flawed collections yield incorrect citation counts. For me, the most interesting point in the write up was this statement:

The increasing complexity of the metrics landscape should have at least one beneficial effect: making people think twice before bandying about misleading indicators. More importantly, it will hasten the development of better, more open metrics based on more criteria, with the ultimate effect of improving the rate of scientific advancement.

Unfortunately, traditional publishers are not likely to do much that is different from what the firms have been doing since commercial databases became available. The reason is money. Publishers long to make enough money from electronic services to enjoy the profit margins of the pre digital era. But digital information has a different cost basis from the 19th century publishing model. The result is reduced coverage and a reluctance to move too quickly to embrace content produced outside of the 19th century model.

Services that use other methods to determine link metrics exist in another world. If you analyze traditional commercial information, the Web dimension is either represented modestly or ignored. Ms. Caldwell’s analysis looks at the mountain tops, but it does not explore the valleys. In those crevices is another story; namely, researchers who rely on commercial databases are likely to find themselves lagging behind researchers in countries where commercial databases are simply too expensive for most to use. A researcher who relies on a US or European commercial database is likely to get only an incomplete picture.

Stephen Arnold, August 31, 2008

The ‘Search Is Dead’ Question

August 17, 2008

New Idea Engineering and I cooperate to produce a list of utilities helpful to those working with search and content processing. I want to build on the August 4, 2008, post “Enterprise Search Dead? Or Just Misunderstood?” Keep in mind that I don’t disagree with the points in the post. For me, the important point in the article was this statement about organizations having multiple search and content processing systems:

The real trick is to glue these technologies together not into a single giant searchable index, but to combine them together logically so the user does not need to know where to look for specific content.  We, like many others, call this Federated Search,
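
As a rough illustration of what that “glue” means in practice, here is a minimal federated search sketch: query several independent back ends in place and merge their hit lists for the user, rather than building one giant index. The engine names, result shapes, and scores below are hypothetical, not any particular vendor’s API.

```python
# Minimal federated search sketch: each back end keeps its own index; the
# federation layer fans the query out and merges the hit lists. Engine
# names, scores, and result shapes are hypothetical.

def search_intranet(query):
    # Stand-in for, say, an intranet or SharePoint search API.
    return [{"title": "HR policy on travel", "score": 0.72, "source": "intranet"}]

def search_crm(query):
    # Stand-in for a CRM or line-of-business system's search API.
    return [{"title": "Acme Corp travel account", "score": 0.64, "source": "crm"}]

def federated_search(query, backends):
    results = []
    for backend in backends:
        try:
            results.extend(backend(query))
        except Exception:
            # One slow or dead back end should not take down the whole result page.
            continue
    # Naive merge: sort by each engine's own score. Real federation has to
    # normalize scores across engines, which is one of the hard (and costly) parts.
    return sorted(results, key=lambda hit: hit["score"], reverse=True)

for hit in federated_search("travel", [search_intranet, search_crm]):
    print(f'{hit["score"]:.2f}  {hit["source"]:>8}  {hit["title"]}')
```

The score-normalization comment is where my cost concerns below come in: every extra back end adds another set of relevance assumptions, another administrator, and another bill.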

I am in favor of federation, aggregation, and simplification. My concern is that the costs associated with multiple systems, multiple “looks” at information, and multiple “cooks in the kitchen” will be difficult to control. Costs matter today. Tomorrow costs will be even more important. Here’s why:

  1. As search becomes pervasive, costs will chug along, controls will be lax, and then the bills arrive. Few managers can survive cost time bombs like those associated with search. A “time bomb” is a “do whatever it takes” weekend when the system goes down, or a cost review by a new chief financial officer who puts a ceiling on information technology expenditures and triggers a melt down.
  2. Multiple indexes of the same document are okay as long as the document is not undergoing rapid change. In certain organizations, change is frequent and often pretty darn wacky. Out of sync information retrieval systems can be a gold mine for legal discovery. Figuring out which index is the “right” one may be an issue in some situations.
  3. Multiple systems indexing content within the organization can choke the internal network. Running several systems to update an index may degrade network performance.

Most information technology managers assume that today’s software and hardware can handle any demand. The problem is that many of today’s systems increase complexity and risk. The ready availability of low cost, fire breathing servers removes inhibitions. The result is system promiscuity and projects that look great in a PowerPoint presentation but fail miserably in the crucible of doing everyday work.

If search is not dead, we need to retire it and move up a level. Let’s give users a way to access information that makes most users happy. That’s not what today’s systems deliver. Most users are unhappy with the search systems available to them for behind the firewall search.

Stephen Arnold, August 17, 2008

Microsoft Cloud Economics

August 17, 2008

Richi Jennings is an independent consultant and writer, specializing in email, spam, blogging, and Linux. His article “On Microsoft Online Services” is worth reading. You can find it here. His assertion is that Microsoft’s pricing for its online services will weaken the service. Mr. Jennings identifies information technology managers’ lack of knowledge about the cost of running machines and software on premises. He notes:

vendors would tell potential purchasers that they [the vendors] could provide the service for less money than it was currently costing to run it in-house, but when it came time to actually quote for the service, most IT managers simply didn’t believe it cost them that much.
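
A back-of-the-envelope sketch of why the “it can’t cost us that much” reaction happens. Every number below is invented for illustration; it only shows how a per-user, per-month figure adds up once hardware, licenses, and staff time are counted, not what any vendor or IT shop actually pays.

```python
# Hypothetical on-premises email cost per user per month. All figures are
# invented for illustration; plug in your own numbers.

users = 500
monthly_costs = {
    "server hardware (amortized over 36 months)": 20000 / 36,
    "software licenses and maintenance": 1500,
    "admin time (0.5 FTE at $75,000/year loaded)": 0.5 * 75000 / 12,
    "power, cooling, backup, storage": 800,
}

total = sum(monthly_costs.values())
per_user = total / users

for item, cost in monthly_costs.items():
    print(f"{item:<45} ${cost:8.2f}")
print(f"{'total per month':<45} ${total:8.2f}")
print(f"per user per month: ${per_user:.2f}")
# With these made-up figures the in-house cost lands around $12 per user per
# month, the sort of number an IT manager "simply doesn't believe".
```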

The point is that basic knowledge of what enterprise software costs may be a factor in the success or failure of cloud services. He contrasts Microsoft’s online service pricing with Google’s. Google is less expensive. A happy quack to Mr. Jennings for this analysis.

Stephen Arnold, August 17, 2008

Autonomy: A Pretty Good Position

August 16, 2008

Analyst reports are often difficult to figure out. Take, for example, the write up by the London investment outfit Cazenove. The company issued a report about Autonomy’s financial performance for the period ending June 30, 2008. I received a copy of this report from a Web log reader. My experience is that these documents are available to anyone with a big enough account at an investment bank or a financial manager with some clout. The wacky email address that sent me this July 23, 2008, Cazenove report “Autonomy” could be a signal to others in the enterprise search sector. I worked through the detailed analysis. You should read it as well.

On the whole, the document was stuffed full of useful data about Autonomy’s financial performance, which was quite good. Autonomy is on track to generate close to or more than $300 million in revenue this year. Compared to most vendors of search and content processing, Autonomy is doing a good job. Most of those vendors remind me of tuna fishermen who return to port with no fish. Autonomy’s vessel returns to port with its hold stuffed to the brim. Autonomy is at www.autonomy.com.

For me an interesting point in the Cazenove write up was this observation:

Autonomy management consistently mentioned the strength of its cash collection but we believe there is an issue related to cash conversion (i.e operating cash flow as a percentage of EBITDA). DSO’s (using trade receivables) decreased from 96 days to 91 and yet cash conversion did not improve. Autonomy provided some insight into the difference between commercial and government customers. For commercial accounts the DSO’s are around 30-40 days (and represents 75% of the revenue), which implies that for government customers DSO’s are c. 240 days (or c. 8 months).
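
A quick back-of-the-envelope check of the arithmetic behind that passage, using the figures in the quote; the revenue-weighted blend is my assumption about how the numbers combine, not Cazenove’s stated method.

```python
# Rough check of the Cazenove DSO arithmetic quoted above. The revenue-weighted
# blend is my assumption about how the figures combine.

blended_dso = 91          # days, from the report
commercial_share = 0.75   # share of revenue from commercial accounts
government_share = 1 - commercial_share

for commercial_dso in (30, 40):
    implied_government_dso = (blended_dso - commercial_share * commercial_dso) / government_share
    print(f"commercial DSO {commercial_dso} days -> implied government DSO "
          f"{implied_government_dso:.0f} days")
# Prints roughly 274 and 244 days, consistent with the report's "c. 240 days".
```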

As I stated, I have a tough time reading the tea leaves in this analyst’s report. Three thoughts went through my mind:

  1. The days sales outstanding was one of the factors that I had noticed in the 2007 Fast Search & Transfer financials. The growing days sales outstanding can contribute to a cash shortage. Money is not coming in but money keeps going out. A hitch in the git along can trigger a challenge at any company, even well managed ones.
  2. Autonomy has a number of lines of business. Some of these are search, like the Web site or library search deals the company has landed and described in news releases. Other lines of business use Autonomy technology but are not “pure search”; for example, fraud detection or video management. Autonomy is growing larger and may be evolving into a more generalized software company. This means marketing and sales costs may be subject to greater pressure. Autonomy has done a good job managing costs, but if the controls slip, a cost surge could occur.
  3. Autonomy has been able to land a number of big deals. The company’s Web site does a great job of identifying these “big tuna” wins. The question I have is, “For a big deal, does the customer, like a government agency or a big company, pay up front?” In my experience, big companies pay some and then hang on to the bulk of the money until the system is up, certified, and operational. As a result, a big deal in a news release may not translate into an immediate cash injection. Autonomy appears to be able to get the cash, or most of it, before the system is up and running. This is a management capability that some of Autonomy’s competitors cannot achieve.

Autonomy is definitely one of the high profile brands in search and content processing. I track more than 50 vendors of search, text processing, and content analytics. Only Google is in the same revenue sphere as Autonomy. The other vendors are far smaller, and if Autonomy can continue to grow, it may challenge Google and enterprise application vendors like Microsoft more sharply.

A happy quack to the Autonomy financial wizards.

Stephen Arnold, August 16, 2008

Hosted SharePoint Info

August 7, 2008

Network World’s Mitchell Ashley scooped most Microsoft watchers with “Microsoft Spills the Beans on Hosted Exchange / SharePoint”. Mr. Ashley tracked down Microsoft’s John Betz, Director Product Management for Microsoft Business Online Services. The conversation–available as a podcast here–provides useful information about hosted SharePoint. Mr. Ashley tossed some high, soft, easy to field questions, but several points jumped out at me. These were:

  1. The cloud play is “Hosted by Microsoft and sold by partners”. Infrastructure is going to be one key ingredient in this new service stew.
  2. Pricing has a ceiling of $15 per user. Most folks will pay less. These prices strike me as “pulled from the clouds.”
  3. Microsoft “will make it all work together.” Active Directory will communicate with hosted Exchange and SharePoint. A “new tool will be provided”.
  4. Trade off for hosted Exchange and SharePoint–give up some control. “We make assumptions and settings on your behalf…. If you want customization, you need on premises Exchange and SharePoint.”
  5. The Service Level Agreement is for uptime, not transit time or any other network function.
  6. “We absolutely rely on partners. This is a great opportunity to sell an online service today and get paid forever.” Reason: Support comes from partner or local information technology group. Online services are for organizations that have an IT person on staff. “We’re delivering meat and potatoes. Our partners can put an embellishment upon these services.”

This is a very interesting chunk of information. A happy quack to Mr. Ashley.

Stephen Arnold, August 6, 2008

Intel: Cloud Factoid

August 4, 2008

I tracked down an Intel presentation from 2006 that was also used in 2007. The link is to ZDNet here. The presentation offers some interesting insights into Intel’s data center problems or opportunities in mid 2006; namely:

  • Intel has 136 of these puppies with an average cost pegged in the $100 million to $200 million range.
  • Average idle capacity was about 200 million CPU hours against capacity of about 900 million CPU hours, give or take a few hundred thousand hours (a rough back-of-the-envelope on these figures appears after this list).
  • In 2006, 62 percent of the 136 data centers were 10 years old or older.
  • Plans in 2006 were to move to eight strategic hub centers.
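
Here is the rough math on those figures. The aggregate cost range simply multiplies the per-center figure by the count and is my extrapolation, not a number from the Intel presentation.

```python
# Back-of-the-envelope on the 2006 Intel figures listed above. The aggregate
# cost range is a straight multiplication and is my extrapolation, not Intel's.

data_centers = 136
cost_per_center = (100e6, 200e6)      # stated range, dollars
capacity_cpu_hours = 900e6
idle_cpu_hours = 200e6

fleet_cost_low = data_centers * cost_per_center[0]
fleet_cost_high = data_centers * cost_per_center[1]
idle_share = idle_cpu_hours / capacity_cpu_hours

print(f"implied fleet cost: ${fleet_cost_low/1e9:.1f}B to ${fleet_cost_high/1e9:.1f}B")
print(f"idle capacity: {idle_share:.0%} of available CPU hours")
# Roughly $13.6B to $27.2B of data center, running with about 22 percent idle.
```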

My initial reaction to this 2006 presentation was that Intel’s zippy new chips might find a place in Intel’s own data centers. It would be interesting to calculate the cost of power across the old data centers with the aging chips versus the newer “green” chips. I expect that the money flying out the air conditioning duct is trivial to a giant like Intel.

More on this issue appeared in Data Center Knowledge in 2007 here. In 2007, according to Data Center Knowledge, Google had about 93,000 servers in its data centers.

In April 2008, Travis Broughton, Intel, wrote here:

Our cost-cutting measures tend to be related to at least two of the three “R’s” – reducing what we consume, many times by reusing what we already have.

I’m not sure what this means in the context of the Cloud Two initiative, but I will keep poking around.

Stephen Arnold, August 4, 2008

Microsoft’s Browser Rank

July 26, 2008

I heard about Browser Rank a while ago. My take on the technology is a bit different from that of the experts, wizards, and pundits stressing the upside of the approach. To get the received “wisdom”, you will want to review these analyses of this technology:

  • Microsoft’s own summary of the technology here. The full paper is here. (Note: I have discovered that certain papers are no longer available from Microsoft.com; for example, the DNABlueprint document. Snag this document in a sprightly manner.)
  • Steve Shankland’s write up for CNet here. The diagram is a nice addition to the article.
  • Arnold Zafra’s description for Search Engine Journal here.

By the time you read this, there will be dozens of commentaries.

Here’s my take:

Microsoft has asserted that it has more than 20 billion pages in its index. However, indexing resources are tight, so Microsoft has been working to find ways to know exactly which pages to index and reindex without spidering the bulk of the Web pages each time. The answer is to let user behavior generate a short list of what must get indexed. The idea is to get maximum payoff from minimal indexing effort.

This is pretty standard practice. Most systems have a short list of “must index” frequently changing links. There is a vast middle ground which gets pinged and updated on a cycle; for example, every 30 days. Then there are sites like the Railroad Retirement Board’s, which get indexed on a relaxed schedule, which could be never.

Microsoft’s approach is to take a bunch of factors that can be snagged by monitoring user behavior and use these data to generate the index priority list. Dwell time is presented in the paper as radically new, but it isn’t. In fact, most of the features have been in use or tested by a number of search systems, including the now ancient system used by The Point (Top 5% of the Internet), which Chris Kitze, my son, and I crafted 15 years ago.
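
To make the idea concrete, here is a minimal sketch of behavior-driven crawl prioritization: score pages by the sort of user signals the paper discusses (visits, dwell time), then assign each page to an indexing tier like the “must index / periodic / relaxed” split described above. The weights, thresholds, URLs, and data are all hypothetical, not Microsoft’s actual Browser Rank formula.

```python
# Hypothetical sketch of user-behavior-driven crawl scheduling, not the
# actual Browser Rank algorithm. Weights, thresholds, and data are invented.

def priority_score(visits, avg_dwell_seconds):
    # More visits and longer dwell time push a page up the indexing queue.
    return visits * 1.0 + avg_dwell_seconds * 0.5

def indexing_tier(score):
    if score >= 500:
        return "must index daily"
    if score >= 50:
        return "reindex every 30 days"
    return "relaxed schedule (maybe never)"

observed = {
    "popular-news-site.example/home": (900, 45),
    "midsize-blog.example/post":      (60, 120),
    "rrb.example/obscure-report":     (2, 10),
}

for url, (visits, dwell) in observed.items():
    score = priority_score(visits, dwell)
    print(f"{url:<38} score={score:7.1f}  -> {indexing_tier(score)}")
```

The payoff is exactly what the paper promises: the crawler spends its budget where users actually go. The cost is the one I flag below: pages nobody visits fall off the radar, and so do the users who need them.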

We too needed a way to know which specific Web sites to index. Trying to index the entire Web was beyond our financial and technical resources. Our approach worked, and I think Microsoft’s approach worked. But keep in mind that “worked” means users looking for popular content will be well served. Users looking for more narrow content will be left to fend for themselves.

I applaud Microsoft’s team for bundling these factors to create a browser graph. The problem is that scale is going to make the difference in Web search, Web advertising, and Web content analytics. Big data returns more useful insights about who wants what in what situation. Context, therefore, not shortcuts to work around capacity limitations, is the next big thing.

Watch for the new IDC report authored by Sue Feldman and me on this topic. Keep in mind that this is my opinion. Let me know if you agree or disagree.

Stephen Arnold, July 26, 2008
