More on Search Performance

May 12, 2009

Search performance remains a bit of a mystery in many organizations. Once a search system has been deployed, the team is thrilled that users can run queries. Performance often becomes an issue only when the system crashes and the licensee discovers what “just rebuild the index” really means. In other situations, a merger creates a surge in demand, and the system times out or falls over under heavy query spikes.

Some useful performance information appeared in Information Management’s article by Chris Kentouris here. “BNP Speeds Risk Calculations With Hardware Acceleration” made clear that improving search performance often requires more than routine tweaking. The article touches on the performance characteristics of graphics processing units (GPUs) and field-programmable gate arrays (FPGAs). For me, the most interesting part of the write-up was this segment:

Exegy ran a test in November using CPUs, FPGAs and GPUs to perform Monte Carlo calculations on a portfolio of 1,024 equities. According to Exegy, “a calculation that would normally take 15 minutes on a multicore CPU now only takes 12 seconds with all three technologies.”
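
For readers who have not run this type of workload, here is a minimal sketch, in Python with NumPy, of the kind of Monte Carlo portfolio calculation the quote describes. The function name, starting prices, volatilities, and path count are my own illustrative assumptions, not Exegy’s method; the point is simply what the arithmetic looks like and why it is so amenable to hardware acceleration.

    # A minimal sketch (not Exegy's code) of a Monte Carlo portfolio calculation:
    # simulate terminal prices for a basket of equities under geometric Brownian
    # motion and average the resulting portfolio values.
    import numpy as np

    def mc_portfolio_value(spot, vol, mu=0.0, horizon=1.0, n_paths=10_000, seed=0):
        """Estimate the expected terminal portfolio value by simulation."""
        rng = np.random.default_rng(seed)
        # One standard-normal draw per path per equity; the work grows with
        # paths x equities, which is what GPU and FPGA offload targets.
        z = rng.standard_normal((n_paths, spot.size))
        terminal = spot * np.exp((mu - 0.5 * vol**2) * horizon
                                 + vol * np.sqrt(horizon) * z)
        return terminal.sum(axis=1).mean()

    # 1,024 equities, as in the test Exegy describes; prices and volatilities
    # here are placeholder values.
    spot = np.full(1024, 100.0)
    vol = np.full(1024, 0.20)
    print(mc_portfolio_value(spot, vol))

Run with many more paths, more time steps, and correlated shocks, this same structure is the sort of job that takes minutes on a CPU. Because each path is independent, the calculation parallelizes naturally, which is why pushing it onto GPUs and FPGAs pays off.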

I profiled Exegy in my 2008 study Beyond Search for the Gilbane Group here. Exegy specializes in high-volume content processing for government and financial institutions.

The short take: if you want to improve search performance, you may need sophisticated hardware and specialized engineering. Cosmetics and easy fixes may not do the job.

Stephen Arnold, May 12, 2009
