Moore’s Law: Not Enough for Google

December 29, 2008

I made good progress on my Google and Publishing report for Infonortics over the last three days. I sat down this morning and riffled through my Google technical document collection to find a number. The number is interesting because it appears in a Google patent document and provides a rough estimate of the links Google would have to process when it runs its loopy text generation system. Here's the number as it is expressed in the Google patent document:

50 million million billion links

Google's engineers included an exclamation point in US7231393. The number is pretty big even by Googley standards. And who cares? Few pay much attention to Google's PhD-like technical documents. Google is a search company that sells advertising, and until the forthcoming book about Google's other business interests comes out, I don't think many people realize that Moore's Law is not going to help Google when it processes lots of links: 50 million million billion, give or take a few million million.
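For readers who want the magnitude in plain figures, here is a minimal sketch of the arithmetic, assuming the patent's phrase parses the usual American short-scale way (million = 10^6, billion = 10^9). The Python below is mine, not Google's:

# "50 million million billion" = 50 x 10^6 x 10^6 x 10^9 links,
# assuming US short-scale readings of million and billion.
links = 50 * 10**6 * 10**6 * 10**9
print(f"{links:e}")  # 5.000000e+22, roughly 5 x 10^22 links

Even at a billion links per second, a single machine would need more than a million years to touch each of those links once.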

When I scanned “Sustaining Moore’s Law – 10 Years of the CPU” by Vincent Chang here, I realized that Google has little choice but to use fast CPUs and math together. In fact, the faster and more capable the CPU, the more math Google can use. Can you name another company worrying about Kolmogorov's procedures?

Take a look at Mr. Chang's article. The graph shows that the number of transistors appears to keep doubling. The problem is that the amount of information keeps growing, and the cost of the analysis Google wants to run with various probabilistic methods is rising even faster.
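To make that mismatch concrete, here is a back-of-the-envelope sketch in Python. The doubling periods and the quadratic cost model are my illustrative assumptions, not figures from Google or from Mr. Chang's article:

# Hypothetical comparison: compute that doubles every two years (Moore's Law)
# versus the cost of link analysis when the web doubles every year and the
# math scales as O(n^2). All parameters here are illustrative assumptions.
def moores_law_compute(years, doubling_period=2.0):
    # Relative compute available after `years`, doubling every two years.
    return 2 ** (years / doubling_period)

def analysis_cost(years, data_doubling_period=1.0, exponent=2):
    # Relative cost if data doubles yearly and analysis is O(n^exponent).
    pages = 2 ** (years / data_doubling_period)
    return pages ** exponent

for years in (2, 4, 8):
    gap = analysis_cost(years) / moores_law_compute(years)
    print(f"After {years} years the math outruns the hardware by {gap:,.0f}x")

Under these toy assumptions the gap is 8x after two years and 4,096x after eight. Faster chips alone do not close it.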

The idea that building more data centers allows Google to do more is only half the story. The other half is math. Competitors who focus on building data centers, therefore, may be addressing only part of the job when trying to catch up with Google. Leapfrogging Google seems difficult, if my understanding of the issue is correct.
