Bringing Facebook’s Open Compute Project Down to Earth

Facebook’s Open Compute Project has been characterized as revolutionary, a giant push that will propel server design into the future now, but what if it isn’t actually all that meaningful? What if it’s just a cool, but niche project that shows how smart Facebook is, and might inspire a few other huge companies with similar needs to follow suit, but will have limited impact in the greater IT world? Such is the case put forward by Jeramiah Dooley this morning in his Virtualization for Service Providers blog (note: Dooley is an employee of VCE, the joint venture between VMware (s vmw), Cisco (s csco) and EMC (s emc) to peddle Vblock converged infrastructure systems).
Essentially, argues Dooley, “Facebook isn’t doing anything that Google (s goog) and others haven’t already done,” and the model, which works great if you have just one application type that can benefit from Facebook’s brand of application virtualization, won’t cut it in most IT departments. “[T]he users, and their [multiple types of] applications, always come first,” he writes. Elaborating on the latter point, he adds:

In the case of Facebook and Google, the data centers, every server it holds and even the geographic location of the facility have been focused to support the one application stack they provide to their end-users, which isn’t so different from how an enterprise uses specific types of servers and environmental design to cover the requirements from each of the application types they support. The difference is in the number of applications that are required, and the scale that they are used.

For the most part, I think Dooley is right about everything he says; I had some of the same thoughts yesterday while digesting this news. Further, I’d add that most traditional enterprises — while certainly concerned about energy costs and standardization — are fairly risk-averse and won’t likely be investing in caseless servers anytime soon after years of doing everything possible to protect their boxes and the valuable information and applications that reside in them. Unless they’re actually operating at Facebook-scale, most IT data centers aren’t built to fail, so they tackle issues like cost, consolidation and efficiency through denser servers, virtualization and private cloud software.
When it comes to the revolutionary aspect of the Open Compute Project, Dooley notes that not only has Google (s goog) been doing the do-it-yourself server thing for a while, but also that:

Outside air? Higher ambient temps? Physically isolated cold/hot aisles? Forced plenum? Efficient server power supplies? Modular design? Come on, all of these have been in general use for years. I worked for a regional co-lo company for six years, and almost every one of these principals [sic] were in use there since 2004 or so.

I’m no data center design expert, so I’ll assume for the sake of argument that this is true, although I suspect that what Facebook is doing might be a bit more advanced than what many other data center operators have been doing. I would add, though, that Facebook’s bare-bones, caseless servers are more evolutionary than they are revolutionary: We’re already working toward such a world with micro servers, x86 alternatives and altogether denser, more energy-efficient server architectures, including those being sold by Facebook partner Dell (s dell). Unless other companies of Facebook’s ilk are willing to start building their own servers — something history suggests will be limited at best — it seems the natural result of the Open Compute Project will just be new lines of products for legacy server makers to sell in addition to their existing products.
What Dooley ignores, however, is that large web applications actually are proliferating and cloud computing is changing the way that applications run. Webscale used to mean Google, Yahoo (s yhoo) and eBay (s ebay), but it now also means Facebook, Twitter, Netflix (s nflx), Zynga, Myspace (s nws) (arguably) and a number of other popular web sites that skyrocket in popularity and need to scale as efficiently as possible to handle their ever-increasing traffic. With the advent of tablets, smart phones, streaming media and every other device whose applications rely heavily on web- or cloud-based infrastructure for the bulk of the computing, this level of scale around specific application types is only going to keep increasing. What Facebook is doing now will look a lot more normal a few years down the road, and, as my colleague Stacey wrote yesterday, Facebook’s Open Compute Project will push legacy server makers to adapt accordingly. Dell and HP (s hpq) are already demoing servers based on the Open Compute design.
Cloud computing will force a change in server design, too, as more businesses host their applications with cloud providers that buy their servers racks at a time. Dell is already selling stripped-down servers like hotcakes through its Data Center Solutions group, and Amazon (s amzn) has been known to buoy entire fiscal years for SGI’s (s sgi) (aka Rackable) webscale server business. Should Dell’s customers, Amazon and others decide they want to drive down both server and energy costs even further, they certainly could push for Facebook-style servers from their vendor partners. Especially in standardized multitenant clouds, the underlying hardware takes a passive role to the virtualization and orchestration software that sits above it. As we move into a PaaS world, where applications are written with little regard to operating system, much less hardware, the type of server underneath becomes even less important. If those servers are reliable enough, efficient enough and can scale, there’s little reason that cloud providers wouldn’t make a move to Facebook-style servers, at least for their standard service offerings.
I think the most revolutionary part of Facebook’s Open Compute Project is the openness of it, but that’s the subject of another post. In terms of technology, it seems mostly to be pushing the envelope in areas where we’re already seeing waves of innovation (don’t forget about ARM-based (s armh) servers from companies like Calxeda and Nvidia (s nvda), which haven’t even hit the market yet, but will make a splash of their own when they do). But don’t think Facebook’s designs won’t find a home; the IT-delivery paths we’re already heading down all but ensure they will.
Balloon image courtesy of Flickr user snapxture.