Microsoft Emulates Henry Ford’s Approach to Assembly
December 5, 2008
InfoWorld’s “Microsoft Applies Model T Factory Methods to Data Centers” caught my attention. The article by James Niccolai is here. One of the useful items in the article was the link to a Web log post about Microsoft’s data centers here. Let me capture the points that I noted:
- One of the Microsoft executives responsible for the Redmond giant’s data centers is Michael Manos; others include Daniel Costello (Data Center Research and Engineering) and Christian Belady (principal power and cooling architect)
- Microsoft is asking equipment makers to construct systems to Microsoft’s specifications
- The data centers use the type of container approach available from Sun Microsystems and allegedly used by Google
- The approach and systems appear to have a five-year life
- Microsoft has more than 240 products and services it will deliver from these data centers
- The containerized approach is to “reduce capital costs by 20%-40% or greater depending upon class.”
I found this statement from the Web log post written by Mr. Manos and his colleagues quite thought-provoking:
Gen 4 will move data centers from a custom design and build model to a commoditized manufacturing approach. We intend to have our components built in factories and then assemble them in one location (the data center site) very quickly. Think about how a computer, car or plane is built today. Components are manufactured by different companies all over the world to a predefined spec and then integrated in one location based on demands and feature requirements. And just like Henry Ford’s assembly line drove the cost of building and the time-to-market down dramatically for the automobile industry, we expect Gen 4 to do the same for data centers. Everything will be pre-manufactured and assembled on the pad.
In short, Microsoft sees its approach as “a game changer.”
I am convinced that Microsoft has thrown the full weight of its data center engineering team behind these new data centers. The diagrams in the Web log post make it clear that Microsoft wants to raise the bar in data center design.
As I reread the InfoWorld story and Mr. Manos’ Web log article, several thoughts danced through my mind.

First, Microsoft is working hard to catch up with Google, a company that has been investing in its infrastructure for a decade. One presumes that Google’s engineers continue to upgrade their hardware and network infrastructure without the urgency and additional cost of fast-cycle engineering. Google has been a slow and steady data center investor, and I think that it may have–if my research data are accurate–a significant lead in data center build outs.

Second, the key to a data center is the software and the internal systems engineering. A data center is, in a manner of speaking, an empty office building. The software and the methods used to handle routine data center operations and the inevitable glitches are more important than the physical plant. When extra capacity is required, companies can buy it from third parties; Google relied on Akamai for a recent live YouTube.com concert. The systems engineering that makes for seamless operation across a company’s own data centers and a third party’s data centers is where the mouseball meets the mouse pad.

Third, the known bottlenecks within a data center’s systems have the greatest impact on the data center’s budget. It’s important to manage costs, but when routers running on commodity servers have to be flanked by two or more support servers, I think the inefficiency of this type of approach will erode certain cost estimates. Throwing hardware at a known problem instead of engineering a solution will affect performance and budgets. Google has been chipping away at certain problems such as the inappropriateness of traditional relational database tables for certain operations.
Microsoft is making progress in the physical building and layout of the data center itself, but I think considerable work must be done to match Google’s payoff from significant investments in addressing read-write bottlenecks, file locking and unlocking, failure indifference, and getting data from communication pipes into the servers in the data center.
To sum up, Google may start feeling the heat from Microsoft’s effort, but with the recent meltdown of Microsoft’s cash-for-searching scheme on Black Friday, the company has some engineering issues to resolve. At least Microsoft has a horse in the race after giving Google a 10-year head start out of the cloud services gate.
One final thought: the financial meltdown of the Ford Motor Company struck me as a reminder that using old technical models for tomorrow’s challenges may be a very risky way to catch up with Google. What do you think? Ford Pantera or Ford Pinto?
Stephen Arnold, December 5, 2008