Browser Drag Racing

March 13, 2009

I recall the good old days of running speed tests on the PCs I used to build. Now I just buy Macs. I would load TACH or some other tool and watch the data appear as the system opened a faux Word document, rendered graphics primitives, and wrote files to the disk, then read them back. I never figured out what made a particular computer do well on one test and poorly on another. Even when I tested machines with the same motherboard, CPUs, and memory configurations, I would find wide margins of error. On the serious tests I ran when I was trying to figure out Google’s read/write speeds from one of the company’s early technical papers, I identified weird differences across my identical IBM NetFinity 5500 quad-processor, four-gigabyte machines with EXP 10-drive SCSI III storage devices and the six Seagate Cheetahs I used as a Level 5 RAID boot device. Drove me crazy.

Now I read Emil Protalinski’s “Microsoft’s Own Speed Tests Show IE Beating Chrome, Firefox” and have a flashback. You can find the useful write-up here. He has reported on some interesting tests, including a useful table that shows IE 8 as the speed champ. For me, the most interesting point in his article was:

Microsoft chooses approximately 25 websites for daily testing, and tens of thousands on a monthly basis. If you’re going to do your own tests, Microsoft emphasizes that “any list of websites to be used for benchmarking must contain a variety of websites, including international websites, to help ensure a complete picture of performance as users would experience on the Internet.”

In my opinion, this comment does not go far enough. The tests have to be conducted rigorously to deal with latency. I also identified other variables that can affect speed tests (a rough timing sketch follows the list):

  • Are the test machines running the benchmarks at the same operating temperature?
  • Is each machine running the same set of processes and tasks when the tests are conducted?
  • Are the sites being tested static pages or composite applications?
  • Are the test machines operating with flushed caches, defragged drives, and so on when the tests are run?
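
These are exactly the knobs a careful tester has to hold steady. As a purely illustrative sketch (the script, the URL list, and the run count below are my own assumptions, not Microsoft’s methodology, and it times raw page retrieval rather than full browser rendering), here is one way to repeat a timing test and report the spread instead of a single number:

```python
# Hypothetical harness: time repeated fetches of a few pages and report
# the mean and standard deviation, not just one "winning" figure.
# The URLs and run count are illustrative assumptions only.
import statistics
import time
import urllib.request

URLS = [
    "https://www.example.com/",    # a static page
    "https://www.wikipedia.org/",  # a composite, international site
]
RUNS = 10  # repeat each measurement to expose run-to-run variance


def fetch_seconds(url):
    """Return the seconds taken to download one page."""
    request = urllib.request.Request(url, headers={"Cache-Control": "no-cache"})
    start = time.perf_counter()
    with urllib.request.urlopen(request, timeout=30) as response:
        response.read()
    return time.perf_counter() - start


for url in URLS:
    samples = [fetch_seconds(url) for _ in range(RUNS)]
    print(f"{url}: mean {statistics.mean(samples):.3f}s, "
          f"stdev {statistics.stdev(samples):.3f}s over {RUNS} runs")
```

If the spread across runs is close to the gap between two browsers, the ranking says very little.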

Small frictional points can add up over time. Some of the variances in the Microsoft table included in Mr. Protalinski’s article are, in my opinion, modest. Yet even with baseline systems, run-to-run variance can be significant. In my opinion, the speed tests are helpful but not definitive.
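
To make “small frictional points” concrete, here is a back-of-the-envelope calculation; the 50-millisecond per-page gap is an assumed figure for illustration, not a number from the Microsoft table:

```python
# Illustrative arithmetic only: the per-page gap is an assumption, not a measurement.
per_page_gap_ms = 50       # hypothetical per-page difference between two browsers
pages_per_day = 25         # roughly the size of Microsoft's daily test set
pages_per_month = 10_000   # order of magnitude of the monthly test set

print(per_page_gap_ms * pages_per_day / 1000, "seconds over a daily run")           # 1.25
print(per_page_gap_ms * pages_per_month / 1000 / 60, "minutes over a monthly run")  # ~8.3
```

A second and a quarter a day is invisible; eight-plus minutes across a monthly crawl is not, and both come from the same small per-page gap.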

The same issues apply to testing search systems. It is easy to crank out a remarkable indexing benchmark until real-world content flow brings a system’s weaknesses to center stage. I quit benchmark testing long ago, but I still find the data somewhat interesting.

Stephen Arnold, March 13, 2009
