Why shutting down 800 federal data centers won’t be easy

There has been a lot of talk about consolidation lately because federal agencies have until Oct. 7 to present their plans for slashing data center footprints. The Office of Management and Budget has mandated that by 2015, the government must reduce its current stable of 2,094 data centers by 800, or roughly 38 percent of the total. But how exactly the government will pull this off, and how successfully it can do so, are still up for debate.

A survey released this morning by Juniper Networks (s jnpr) (conducted by federal IT think tank MeriTalk) suggests the OMB has set a rather lofty goal. Of the 200 federal IT executives surveyed, only 10 percent think the government will meet its goal, while 23 percent think the government will actually have more data centers by then. The discrepancy between the OMB mandate and stakeholder predictions appears to stem largely from two factors: complexity and demand.

Legacy apps need legacy homes

Complexity is an issue because the government runs so many legacy applications. Consolidating operations, virtualizing infrastructure or moving workloads to the cloud would be far easier if existing applications didn't require custom-built stacks, some of which have been in place for decades and aren't particularly well suited to new environments.

Sixty percent of survey respondents said more than 20 operating systems are running in their data centers, while 16 percent said they’re managing more than 100. It’s the same story for management software, with 48 percent claiming more than 20 applications in use, and with 6 percent saying more than 100.

Unless agencies are willing to rewrite their applications to take advantage of new application environments and tools, the result might be fewer, but highly complex, data centers that are a nightmare to manage. This would go against the goal of standardization that drives so many consolidation, cloud computing and virtualization efforts. Standard application stacks and hardware resources make it much less expensive to buy, operate and provision IT resources.

Already, at least two major federal IT hotbeds — NASA and the Department of Defense — have deployed their own cloud infrastructures to standardize the development and management of new applications among their users.

However, as a Juniper representative explained to me via email, “The difficulty is that the applications drive the infrastructure, and in Federal agencies many of the applications are custom-coded legacy applications … The expense to rewrite these applications inhibits the ability of many agencies to consolidate down to a fully standardized, commodity Intel-based server infrastructure.”

More demand, fewer data centers?

There’s also the problem of demand — that is, meeting ever-growing demand for compute and storage capacity with less space. Respondents to the Juniper survey estimated that they’re currently operating at 61 percent utilization, and that they will need to increase data center infrastructure by 34 percent over the next five years to meet increased demand.
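To see why those two survey figures put pressure on the consolidation mandate, here is a back-of-envelope sketch (my own illustration, not a calculation from the survey itself): if agencies are at 61 percent utilization today and expect to need 34 percent more capacity, the utilization their existing footprint would have to absorb without any build-out is easy to compute.

```python
# Back-of-envelope sketch using the two survey figures.
# Not from the survey itself: it assumes demand scales in proportion
# to the projected 34 percent infrastructure increase.
current_utilization = 0.61      # respondents' current utilization estimate
capacity_growth_needed = 0.34   # projected infrastructure increase, next 5 years

# Utilization the existing footprint would hit if no new capacity is added
implied_utilization = current_utilization * (1 + capacity_growth_needed)

print(f"Implied utilization with no new capacity: {implied_utilization:.0%}")
# Roughly 82 percent -- and that is before cutting any of the 800 data centers.
```

In other words, under these assumptions the existing footprint would run at about 82 percent utilization even with zero consolidation, which helps explain why nearly a quarter of respondents expect the data center count to grow rather than shrink.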

Some of this increased demand will no doubt be offloaded to the cloud, as the government already has a “cloud-first” policy in place for deploying new infrastructure, and Apps.gov is up and running as a hub for procuring cloud-based applications and infrastructure.

But for applications that can’t run in the public cloud for any number of security or technological reasons, the answer might be an even greater reliance on co-location providers. During a call last week, Greg Adgate, Equinix's (s eqix) GM of the Global Enterprise Segment, told me that although it makes a lot of sense for certain agencies to deploy applications in cutting-edge data centers that can meet performance and scalability needs, getting funding to build such a facility won’t be easy. That presents a ripe opportunity for companies like Equinix that can offer co-location space directly and/or that host service providers certified to meet federal compliance standards.

Even if agencies don’t meet their mandates, though — and even if there actually are more data centers, as some predict — all isn’t necessarily lost. To the extent that reducing energy costs is among the reasons for data center consolidation in the first place, research released this morning from Stanford professor Jonathan Koomey gives reason for optimism. According to Koomey’s findings, there’s plenty of room for improvement in data center efficiency that could result in drastically reduced energy usage.

Because of its complex applications and strict compliance requirements, the government won’t likely follow Google’s (s goog) lead of building custom servers and implementing innovative data-center-cooling methods, but it certainly should take some lessons. Koomey’s research estimates that although Google accounts for 0.8 percent of the world’s data center infrastructure, it accounts for only 0.011 percent of overall data center energy usage.
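Taking the article's two Koomey figures at face value (my own illustration of the gap they imply, not a calculation from the report), the ratio between Google's infrastructure share and its energy share shows just how wide the stated efficiency headroom is:

```python
# Illustration only: the implied efficiency gap if the article's two
# figures for Google are taken literally.
infrastructure_share = 0.8 / 100    # 0.8 percent of data center infrastructure
energy_share = 0.011 / 100          # 0.011 percent of data center energy usage

efficiency_ratio = infrastructure_share / energy_share
print(f"Implied efficiency advantage: ~{efficiency_ratio:.0f}x")
```

On those numbers, Google's footprint would be roughly 73 times more energy-efficient per unit of infrastructure than the average, a gap striking enough that the exact figures are worth checking against Koomey's report; the directional point, that the typical data center has vast room for efficiency gains, stands either way.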

However, it might be too early to make accurate predictions as to whether agencies will meet the OMB mandate, or what steps they’ll take to save money on operations if they can’t hit the consolidation mark. Equinix’s Adgate said the first pass will be relatively pain-free because the government operates many data centers that “aren’t data centers as we know them” — telco closets, retrofitted offices, and other infrastructure caches — that will be easy to lop off. It’s the infrastructure and applications running in actual data centers that will be more difficult to move.

Then there’s the resignation of Federal CIO Vivek Kundra, who has been a champion of cloud computing and has been pushing consolidation since he took office in 2009. It’s conceivable the appetite for using cloud resources — as well as for Kundra’s other progressive IT strategies — could diminish with his departure, as well as with the possibility of a new administration taking over in 2013.

Feature image courtesy of Flickr user vaxomatic.