Google: Suddenly Too Big

February 22, 2009

Today Google is too big. Yesterday and the day before Google was not too big. Is this a sudden change at Google, or a growing sense that Google is not the quirky Web search and advertising company everyone assumed Googzilla was?

The New York Times’s article by Professor Randall Stross, available temporarily here, points out that some perceive Google as “too big.” Mr. Stross quotes various pundits and wizards and adds a tasty factoid: Google allowed him to talk to a legal eagle. Read the story now so you can keep your finger on the pulse of the past. Note the words “the past.” (You can get Business Week’s take on this same “Google too powerful” theme here.)

The fact is that Google has been big for years. Indeed, Google was big before its initial public offering. Mr. Stross’s essay makes it clear that some people are starting to piece together what dear Googzilla has been doing for the past decade. Keep in mind the time span: a decade, 10 years, 120 months. Also note that in that time interval Google has faced zero significant competition in Web search, automated ad mechanisms, and smart software. Google is essentially unregulated.

Let me give you an example from 2006 so you can get a sense of the disconnect between what people perceive about Google and what Google has achieved amidst the cloud of unknowing that pervades analysis of the firm.

Location: Copenhagen. Situation: Log files of referred traffic. Organization: Financial services firm. I asked the two Web pros responsible for the financial services firm’s Web site one question, “How much traffic comes to you from Google?” The answer was, “About 30 percent?” I said, “May we look at the logs for the past month?” One Webmaster called up the logs and in 2006 in Denmark, Google delivered 80 percent of the traffic to the Web site.

The perception was that Google was a 30 percent factor. The reality in 2006 was that Google delivered 80 percent of the traffic. That’s big. Forget the baloney delivered from samples of referred traffic: even if the Danish data were off by plus or minus five percent, Google has a larger global footprint than most Webmasters and trophy generation pundits grasp. Why? Sampling services get their market share data in ways that understate Google’s paw prints. Methodology, sampling, and reverse engineering of traffic lead to the weird data that research firms generate. The truth is in log files, and most outfits cannot process large log files, so “estimates,” not hard counts, become the “way” to truth. (Google has the computational and system moxie to count and perform longitudinal analyses of its log file data. Whizzy research firms don’t. Hence the market share data that show Google in the 65 to 75 percent share range with Yahoo 40 to 50 points behind. Microsoft is even further behind, and Microsoft has been trying to close that gap with Google for years.)
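To make the log file point concrete, here is a minimal sketch (mine, not the Danish firm’s actual tooling) of how one counts referrer share straight from a combined-format Web server log. The file name access.log and the host-matching rule are assumptions for illustration; the point is that a hard count from the logs beats a sampled estimate every time.

```python
import re
from collections import Counter

# Combined Log Format puts the referrer and user agent in the last two quoted fields.
LOG_LINE = re.compile(r'"(?P<referrer>[^"]*)"\s+"(?P<agent>[^"]*)"\s*$')

def referrer_shares(log_path):
    """Count hits per referring host and return each host's share of referred traffic."""
    hosts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            match = LOG_LINE.search(line)
            if not match:
                continue
            referrer = match.group("referrer")
            if referrer in ("", "-"):          # direct traffic, not referred
                continue
            host = referrer.split("/")[2] if "//" in referrer else referrer
            hosts[host.lower()] += 1
    total = sum(hosts.values())
    if total == 0:
        return {}
    return {host: count / total for host, count in hosts.most_common()}

if __name__ == "__main__":
    shares = referrer_shares("access.log")     # hypothetical log file for one month
    google_share = sum(share for host, share in shares.items() if "google." in host)
    print(f"Google share of referred traffic: {google_share:.0%}")
```

Run against a month of logs, the number that comes out is a count, not a survey estimate, which is exactly why the Danish Webmasters were surprised.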

So now it’s official because the New York Times runs an essay that says, “Google is big.”

To me, old news.

In my addled goose monographs, I touched on data my research unearthed about some of Google’s “bigness”. Three items will suffice:

  • Google’s programming tools allow a Google programmer to be up to twice as productive as a programmer using commercial programming tools. How is this possible? The answer is the engineering of tools and methods that relieve programmers of much of the drudgery associated with developing code for parallelized systems (see the sketch after this list). Since my last study, Google Version 2.0, Google has made advances in automatically generating user-facing code. If the Google has 10,000 code writers and you double their productivity, that’s the equivalent of 20,000 programmers’ output. That’s big to me. Who knows? Not too many pundits in my experience.
  • Google’s index contains pointers to structured and unstructured data. The company has been beavering away to the point that it no longer counts Web pages in billions. The GOOG is in trillions territory. That’s big. Who knows? In my experience, not too many of Google’s Web indexing competitors have these metrics in mind. Why? Google’s plumbing operates at petascale. Competitors struggle to deal with the Google as it was in the 2004 period.
  • The computations processed by Google’s fancy maths are orders of magnitude greater than the number of queries Google processes per second. For each query there are computations for ads, personalization, log updates, and other bits of data effluvia. How big is this? Google does not appear on the list of supercomputers, but it should. And Google’s construct may well crack the top five on that list. Here’s a link to the Google Map of the top 100 systems. (I like the fact that the list folks use the Google for its map of supercomputers.)
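A footnote on the first bullet: Google’s internal tools are not public, but the published MapReduce model conveys the flavor of how such tooling doubles output. The programmer writes two small, single-threaded functions; the framework handles partitioning, scheduling, and machine failures. What follows is a toy, single-process simulation of the pattern in Python, an illustration of the idea rather than Google’s code.

```python
from collections import defaultdict
from itertools import chain

# The programmer writes only these two tiny, single-threaded functions.
def map_fn(doc):
    """Emit (word, 1) for every word in one document."""
    return [(word.lower(), 1) for word in doc.split()]

def reduce_fn(word, counts):
    """Combine all counts emitted for one word."""
    return word, sum(counts)

def mapreduce(documents, map_fn, reduce_fn):
    """Toy, single-process stand-in for the framework: map, shuffle, then reduce."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map(map_fn, documents)):
        groups[key].append(value)              # the "shuffle" step: group values by key
    return dict(reduce_fn(key, values) for key, values in groups.items())

if __name__ == "__main__":
    docs = ["google is big", "google was big before the IPO"]
    print(mapreduce(docs, map_fn, reduce_fn))
    # {'google': 2, 'is': 1, 'big': 2, 'was': 1, 'before': 1, 'the': 1, 'ipo': 1}
```

In the production version described in Google’s MapReduce paper, the map and reduce calls fan out across thousands of machines while the programmer’s share of the work remains those two functions. That is where the productivity claim comes from.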

The real question is, “What makes it difficult for people to perceive the size, mass, and momentum of Googzilla?” I recall from a philosophy class in 1963 something about Plato and looking at life as a reflection in a mirror or dream (εἰκασία). Most of the analysis of Google with which I am familiar treats fragments, not Die Gestalt.

Google is a hyper construct and, as such, it is a different type of organization from those much loved by MBAs who work in competitive and strategic analysis.

The company feeds on raw talent and evolves its systems with Darwinian inefficiency (yes, inefficiency). Some things work; some things fail. But over chunks of time, Google evolves in a weird, non-directive manner. Also, Google’s dominance in Web search and advertising presages what may take place in other market sectors as well. What’s interesting to me is that Google lets users pull the company forward.

The process is a weird cyber-organic blend quite different from the strategies in use at Microsoft and Yahoo. Of its competitors, Amazon seems somewhat similar, but Amazon is deeply imitative. Google is deeply unpredictable because the GOOG reacts and follows users’ clicks, data about information objects, and inputs about the infrastructure’s machine processes. Three data feeds “inform” the Google.

Many of the quants, pundits, consultants, and MBAs tracking the GOOG are essentially data archeologists. The analyses report what Google was or what Google wanted people to perceive at a point in time.

I assert that it is more interesting to look at the GOOG as it is now.

Because I am semi-retired and an addled goose to boot, I spend my time looking at what Google’s open source technology announcements seem to suggest the company will be doing tomorrow or next week. I collect factoids such as the “I’m feeling doubly lucky” invention, the “programmable search engines” invention, the “dataspaces” research effort, and new patent documents for a Google “content delivery demonstration”, among others (many others, I might add).

My forthcoming Google: The Digital Gutenberg explains what Google has created. I hypothesize about what the “digital Gutenberg” could enable. Knowing where Google came from and what it did is indeed helpful. But that information will not be enough to assist the businesses increasingly disrupted by Google. By the time business sectors figure out what’s going on, I fear it may be too late for these folks. Their Baedekers don’t provide much actionable information about Googleland. A failure to understand Googleland will accelerate the competitive dislocation. Analysts who fall into the trap brilliantly articulated in John Ralston Saul’s Voltaire’s Bastards will continue to confuse the real Google with the imaginary Google. The right information is nine tenths of any battle. Applying this maxim to the GOOG is my thought.

Stephen Arnold, February 22, 2009

Comments

One Response to “Google: Suddenly Too Big”

  1. Relevance is Not a Game Changer in Search | The Noisy Channel on February 22nd, 2009 12:40 pm

    […] ’s New York Times article, “Everyone Loves Google, Until It’s Too Big“. As Stephen Arnold notes, don’t read the article expecting to learn something new. Still, not everyone who reads […]
