For all the upsides that using vast amounts of data has, there’s also a cost – and often, it can be measured in cold, hard dollars. That was one of the takeaways from a panel of data center experts at GigaOM’s Structure:Data conference Wednesday. Processing power and storage have scaled up exponentially, IO CEO George Slessman explained, but the cost of building, deploying and maintaining data centers isn’t nearly as elastic: “The bottleneck is money.”
His co-panelists agreed, and Digital Realty CTO Jim Smith said that the growing cost of data centers can lead companies to take shortcuts. Smith shared an anecdote to underscore the point: A client had asked his company to strip out some components to build a data center on the cheap. It’s possible, Smith explained, but the downside is that you have to shut the whole thing down every few years to service the hardware. The client agreed, only to insist a few years later that taking the facility offline would be impossible. “Now it’s gonna catch on fire and explode if we don’t fix it,” quipped Smith.
The other big cost factor is energy, and energy consumption is growing rapidly as the amount of data being processed increases. “You gotta find ways to economize your work per watt,” said Equinix CTO Lane Patterson. Part of this is building smarter equipment that doesn’t burn through your budget when it’s idle, and improving the technologies used to cool servers – technology that hasn’t seen much change in 50 years.
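To make the “work per watt” idea concrete, here is a minimal sketch in Python. The function and all the server figures below are hypothetical illustrations, not numbers from the panel; the point is simply that throughput divided by power draw is the metric, and that idle machines drag it to zero.

```python
def work_per_watt(requests_per_sec: float, watts: float) -> float:
    """Throughput delivered per watt of power drawn (hypothetical metric)."""
    return requests_per_sec / watts

# Two made-up server configurations for comparison.
old_server = work_per_watt(requests_per_sec=10_000, watts=500)  # 20.0 req/s per watt
new_server = work_per_watt(requests_per_sec=15_000, watts=400)  # 37.5 req/s per watt

# Idle draw is the worst case: a box burning 300 W while serving
# nothing does zero work per watt, which is why idle efficiency
# matters as much as peak performance.
idle_server = work_per_watt(requests_per_sec=0, watts=300)  # 0.0
```

Under these assumed numbers, the newer configuration delivers nearly twice the work per watt – the kind of comparison Patterson suggests operators need to make across their fleets.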
But data itself also has to be prioritized, argued Slessman. Different problems have different infrastructure requirements, and it doesn’t make sense to use your most expensive infrastructure to deal with all your data. How do we get to a point where data centers are more flexible, and potentially less energy-hungry? One approach is to make the data center itself more aware of the data and have the infrastructure become part of the IT stack. “You can’t separate the two,” argued Slessman.