Web Traffic Metrics in a Muddle

August 24, 2010

I recently wrote about Vivisimo’s Web traffic spike. For that post, I relied on free data from Compete.com. If you have not looked at the service, point your browser thingy at www.compete.com. You can sign up for a free but restricted version of the service, or you can become a paying customer. I used the freebie. Hey, the blog is free. You expect chopped liver?

After I wrote the story, I heard from a high-output search vendor. The point of his somewhat urgent email was that the Compete.com data were wrong. This was a surprise? I know of one outfit that has the horsepower to count and analyze log data in a comprehensive manner. The rest of the outfits use different methods to cope with the ever-increasing volumes of data that must be crunched for horizontal and small-slice analyses. In short, most reports of traffic are subject to error. I have mentioned in my talks the volume of traffic that flowed to a Danish insurance company from Google. The insurance company itself was unaware of its dependence on Google. Google probably did not care about the Danish insurance company. It was clear that the Webmaster at the Danish insurance company had not looked at the log data very carefully before I got involved. So between reality and lousy metrics, most people don’t know much about the traffic and clicks on a Web site. Feel free to tell me I am incorrect. Just use the comments section of the blog, please. Don’t write me an email.

What caught my attention this morning (August 21, 2010) was a story from ClickZ called “New Comscore Methodology Reduces Search Market Share for Microsoft and Yahoo.” (How does this outfit spell its name? Comscore, comScore, something else?) There you have the guts of the problem. A change in methodology makes a winner into a loser, a loser into a bigger loser, and a bigger loser into a contributor to the swelling unemployment ranks in the US. Try to figure out these data.

[comScore chart, August 21, 2010]

What about the data themselves? Well, that’s part of the numerical recipe. If you reflect on your exciting moments in Statistics 101, you may recall that sample size has something to do with the confidence one can have in an output. The goose remembers this in a very simplified manner: Small sample, big error. Lousy sample, big error. Shortcuts anywhere, big error. Add up the errors and you get crappy outputs. But figuring this stuff out in real life is beyond the ken of most azurini, poobahs, and Web marketers.
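
To put the goose’s rule in concrete terms, here is a minimal sketch in Python. The two percent share and the panel sizes are invented numbers, not anyone’s real data; the point is only how the error band of a sampled estimate behaves as the panel grows.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a ~95% confidence interval for a share p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1.0 - p) / n)

# Hypothetical case: a panel pegs a site's share of users at 2%.
for n in (1_000, 10_000, 1_000_000):
    moe = margin_of_error(0.02, n)
    print(f"panel of {n:>9,}: 2.00% +/- {moe * 100:.2f} points")
```

With 1,000 panelists the band runs from roughly 1.1 percent to 2.9 percent, which is the difference between winner and loser in a market share table; at a million it all but vanishes. And the formula covers sampling error only. A lousy (biased) panel stays lousy no matter how large it gets, which is where the methodology changes come in.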

As a result, the data from any Web traffic or click counting service are at best indicative of a trend. Here’s how I check traffic for my own sites and for those I track.

  1. Take a look at the log analytics. We use AWStats, the baked-in reports from our hosting company, and the Urchin (Google) analytics outputs. Do these agree? Nope. Not even close, but the trends are easily identified.
  2. Take a look at what Alexa reports. Hey, I know it skews toward Internet Explorer, but that’s okay. I am looking at what the system says, not calculating the speed of light.
  3. Take a look at Compete.com. I like the nifty little charts it spits out. The Urchin graphics are a bit too HGTV for me and don’t show up well when I do a screen capture.

I then separate the bluebirds from the canaries. I toss out the high and the low and go with the stuff in the middle. Close enough for a free blog post. In fact, I have used this method when paying customers don’t want to pick up the bill for the fancy for-fee services.
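
For the curious, the toss-the-high-and-the-low step is nothing fancier than taking a median. A minimal sketch, again in Python, with invented uniques counts standing in for the three sources mentioned above:

```python
import statistics

# Hypothetical monthly uniques for one site from three counting
# services. The counts disagree, as they always do.
readings = {
    "AWStats": 18_400,
    "Urchin": 9_700,
    "Compete.com": 12_300,
}

# Toss the high and the low. With three readings, what is left
# is the median.
estimate = statistics.median(readings.values())
print(f"Middle-of-the-road estimate: {estimate:,} uniques")
```

With more than three sources, the same idea generalizes to a trimmed mean: drop the extremes, average the rest.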

The bottom line is that I can trot out data that support these assertions:

  • Google and Microsoft are so far ahead in traffic that comparisons with the traffic of other search and content processing vendors are meaningless. Are the data correct? Well, the revenues of these two outfits suggest that some correlation between traffic and money must exist. Microsoft is nosing toward $100 billion and Google toward $30 billion. Search vendors are not in this ball game, with the sole exception of Autonomy, which rings in with $1.0 billion in revenue.
  • Most search vendors generate traffic in the 3,000 to 15,000 uniques-per-month range. Even the bigger search vendors have a tough time breaking through the 15,000 ceiling. The reason? Search is something of a footnote in the broader world of enterprise and Web functions. Lots of talk does not translate into traffic on search vendors’ Web sites.
  • Some search vendors get so few clicks that the services report “insufficient data”. I am sorely tempted to present a list of search vendors whose Web sites get effectively only random clicks and robot traffic. But I don’t need any more defensive snarkiness from search executives. Hey, summer is almost over. Let me enjoy the last few hazy, lazy days.

To wrap up, are Microsoft and Yahoo losing market share? Probably not. The key factor seems to be Facebook’s emergence as an alternative to Google-style searching. The mobile device “search experience” is a different animal entirely, and I don’t think anyone has a firm grip on those data at this time. Google’s obsession with mobile devices is a strong signal that something is indeed happening. The numbers, for now, are less reliable than the ones for traditional Web site traffic.

Maybe the Web is dead? Maybe search is dead? Maybe an asteroid will hit the earth before it melts? Whatever. Traffic reports are indicative, not definitive. Let’s face it. Search is a small niche and a successful vendor will produce modest uniques when compared to outfits like Amazon, Apple, Google, and Microsoft.

Stephen E Arnold, August 24, 2010

Freebie. 0.999999 confidence in this.
