Amazon and Its Cloudy Metrics
February 4, 2011
As computing based on shared resources (with the goal of channeling high performance calculation capabilities into consumer-based applications) continues to gain popularity, curiosity over long-range profitability and short-term pest control grows increasingly aggressive. Amazon, which began developing cloud-based services such as storage in 2002, has remained an important player.
Amazon Web Services has released figures to Data Center Knowledge showing that the number of “objects” its S3 service holds more than doubled over the last year, to 262 billion. The same entry goes on to state that the request rate has exceeded 200,000 per second. Comparable growth has been observed in the launching of virtual servers through the Elastic Compute Cloud (EC2).
As recently as 2009 it seemed Amazon had little interest in cultivating a partner program, content to provide the infrastructure and allow others to develop applications. However, as the cloud universe expands and Amazon remains at its center, the relationships which were inevitable given the physics of the new cosmos seem to be forged with a whimper rather than a bang. While details are far from transparent, at times it seems one has a better chance of catching sight of a passing comet.
Our view is that it would be more meaningful to report revenues and profit/loss. I can take a single email and decompose it into lots of objects. Without a definition of substance, what’s an object? What does 262 billion mean?
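The object-count complaint can be made concrete with a toy sketch. The `decompose` function and email structure below are hypothetical illustrations, not any S3 or AWS API; the point is simply that the same email can count as one object or as several, depending on how it is stored:

```python
# Sketch: why "object count" is an elastic metric. One email can be stored
# as a single blob or split into several smaller objects; decompose() is a
# hypothetical illustration, not an AWS API call.

def decompose(email):
    """Split one email dict into separate storable 'objects'."""
    objects = [("headers", str(email["headers"]))]
    objects.append(("body", email["body"]))
    for name, data in email.get("attachments", {}).items():
        objects.append(("attachment/" + name, data))
    return objects

email = {
    "headers": {"From": "a@example.com", "Subject": "Q4 report"},
    "body": "See attached.",
    "attachments": {"report.pdf": "...bytes...", "chart.png": "...bytes..."},
}

# Stored whole: 1 object. Decomposed: headers + body + 2 attachments = 4.
print(len(decompose(email)))  # prints 4
```

Same email, one object or four: the headline number depends entirely on a storage decision the metric never discloses.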
We would like to see more emphasis placed on search; for example, easy filtering of results for certain tags such as “best selling” or “available.” Just our narrow Harrod’s Creek view, sparked by the Amazon Oracle offer. How will one count Oracle metrics: data size, queries per second, index size, fairy dust, or money? We vote for money, not obfuscation.
Sarah Rogers, February 4, 2011