The biggest deal about Facebook’s Open Compute Project isn’t the project itself; it’s the wave of innovation it can bring forward at the systems level, which will affect everyone from the chipmakers to the giant systems vendors and data center operators. At its event Thursday, Facebook unveiled the Open Compute Project, which essentially open sources the systems layer that sits between the standard components inside a server and the hypervisor orchestration layer (which itself has been open sourced by the OpenStack project).
There are two things I took away from this: (1) by tying the servers and the data center together into a holistic unit, the data center has now officially become the computer, and (2) the big iron providers have just had the rug pulled out from under them. They will need to shift their business to this data-center-centric viewpoint or they will lose out in the very area where their business is growing fastest.
These server systems, historically built by IBM (s ibm), HP (s hpq), Dell (s dell) and now even Cisco (s csco), have already begun down a path to consolidation, and players such as HP have been preparing for this future of the data center as a computer by purchasing EYP, a data center design firm. In this vision of the data center as the computer, the server becomes a component, and now, after Facebook’s announcement, it becomes a commodity component of sorts.
Yes, people will still buy servers from Dell, HP and the like, but as more and more people move to on-demand computing, either at the infrastructure level as provided by Amazon’s EC2 (s amzn) or at the platform level as provided by an internal cloud or a public PaaS, the older hardware designed for legacy enterprise applications stops being a growth business. Those machines become mainframes. They’ll still be in the bowels of the building, but they’re not where the newest applications will be built. So the big iron vendors must learn to play a new game for these customers, and it’s a game that Dell is likely the best equipped to play.
In talking to the salespeople manning the server prototypes, it occurred to me that for webscale customers such as Facebook, it makes sense to put in the time and effort to build your own house, while at the other end there are those who buy EC2 instances or dedicated hosting, which is like renting an apartment. There’s no customization there, a point Jonathan Heiliger, the VP of Technical Operations at Facebook, made at the event. However, gear on offer from Cisco, IBM, Dell or HP is like getting a McMansion: there’s some customization, but there are only a few basic models to choose from.
I think in the short term the market for McMansions is giant, as enterprises test out the cloud but still want trusted performance and vendors; the aspiration for many, though, will be to build their own custom homes. What Facebook has done is make the custom architecture cheaper to build and run, which in turn makes it easier for others to come behind and build better apartments inspired more by the custom homes than by the McMansions.
The news is horribly disruptive at the systems level, but it also puts the pressure on the chip companies. Unlike code, which can be tweaked in a matter of hours or days, hardware has to be built. And if we’re talking about constructing a data center, we’re talking about a construction process that lasts months if not years. So how fast can an open source hardware design iterate? Pat Patla, GM and VP of the server division at AMD, explained that development cycles for hardware and systems have been gradually compressing, from 24 months seven or eight years ago to 18 months in the last few years. “Now, lots of folks are comfortable in 12-month development cycles and [Facebook’s news] should help more folks get there, but silicon doesn’t move that fast and now there is a very high pressure on the silicon.”
And for those who think this doesn’t change much, I’ll close with Graham Weston, the Chairman of Rackspace (s rax). Weston said Rackspace was planning its next decade of data center projects and had spent the last few years working out how to build the right, efficient infrastructure. But after the company talked to Facebook and saw the 38 percent savings Facebook achieved in running its servers, he said of the Rackspace plan, “We threw it out.” Now it plans to adapt Facebook’s ideas for its own use, and perhaps build on them.
And that’s the biggest news of all from today’s announcement. Once something is shared with the community, everyone can take part in adding innovation on top of innovation. Today, Facebook has achieved a power usage effectiveness rating of 1.07, down from an industry average of 1.5, but now that Facebook has shared its designs, how long until someone pushes even closer to the theoretical floor of 1.0? Sure, this is disruptive for the big iron vendors, but it’s also disruptive to the old, slow way of improving our computing infrastructure. So let’s see what’s next.
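For readers unfamiliar with the metric, the arithmetic behind those PUE figures is simple: divide total facility power draw by the power that actually reaches the IT equipment, so a perfect facility scores 1.0. A minimal sketch (the kilowatt figures below are hypothetical, chosen only to reproduce the ratios cited above):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    The theoretical floor is 1.0, which would mean every watt entering
    the building goes to the servers, with nothing lost to cooling,
    power conversion or lighting.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical loads: a facility drawing 1,070 kW overall to feed
# 1,000 kW of servers matches Facebook's reported 1.07 ...
print(round(pue(1070, 1000), 2))  # 1.07
# ... while 1,500 kW overall for the same servers matches the 1.5 average.
print(round(pue(1500, 1000), 2))  # 1.5
```

At a 1.5 PUE, a third of the facility's power is overhead; at 1.07, overhead is down to about 7 percent of the IT load, which is what makes Facebook's number so striking.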