If it sounds counterintuitive that software giant Microsoft is contributing its server specifications to the Open Compute Project, it shouldn’t.
By doing so, the company hopes that big hardware makers will build servers just like the ones it runs in its huge data centers, and perhaps give it a more efficient supply chain.
A quick recap: Facebook(s fb) launched the Open Compute Project in 2011, publishing the specs for the servers that power its webscale data centers in hopes that the open-source fervor that has driven software innovation would feed similar breakthroughs in hardware. It subsequently spun that work out into a multivendor Open Compute Foundation. Directors include executives from Facebook, Intel(s intc), Rackspace(s rax), Arista Networks and Goldman Sachs(s gs) (weighing in for the user community).
“What Facebook’s done is great, but Microsoft — with all its productivity applications, databases, its search engine, (and) its games — fields a broader set of workloads,” said Bill Laing, Microsoft’s corporate VP of server and cloud. “We now run more than a million servers across our data center footprint and have learned how to optimize for those workloads with a focus on saving costs.”
Microsoft’s view also differs from Facebook’s in that it builds facilities ranging from massive data centers to much smaller edge locations, so the specifications take those differences into account, said Kushagra Vaid, GM of cloud infrastructure server engineering. “The challenge is how to design a common spec that can scale from very big data centers with tens of thousands of servers to small locations with tens to hundreds of servers,” he said.
It behooves Microsoft to do this, even at the risk of irking server hardware vendors; I had visions of HP and Dell execs bristling at Microsoft telling them how to build their gear. But, as Laing reminded me: “we are the customer.” And a customer that buys tens of thousands of servers at a time can write its own ticket in this market.
Laing will talk about Microsoft’s OCP work in a keynote at the Open Compute Project summit on Tuesday.
“Part of our motivation is our vision to have our cloud structure spanning private and public clouds. The industry can pick up our hardware specification and deliver not only to big IT providers but to large enterprises,” he said.
According to a blog post Laing wrote to coincide with his keynote, the new spec — no shocker — was designed with Windows/Windows Azure in mind.
“These servers are optimized for Windows Server software and built to handle the enormous availability, scalability and efficiency requirements of Windows Azure, our cloud platform. They offer dramatic improvements over traditional enterprise server designs: up to 40% server cost savings, 15% power efficiency gains, and 50% reduction in deployment and service times. We also expect this server design to contribute to our environmental sustainability efforts by reducing network cabling by 1,100 miles and metal by 10,000 tons across our base of 1 million servers.”
Microsoft is being smart to publicize its requirements. It opens up its business to other prospective server makers, said Al Gillen, program VP of servers and system software at IDC. “I also think they’re a little concerned that the industry may move away from their paradigm if Facebook defines the architecture.”
The great unmentioned player here is Amazon Web Services, which has dominated the public cloud infrastructure market to date. While Amazon Distinguished Engineer James Hamilton talks broadly about data center energy efficiency, Amazon does not publicize its server designs.