GE’s industrial internet is really (mostly) about energy efficiency

GE is spearheading a rebranding of green. In a new report out this week, the conglomerate pushes the “Industrial Internet,” a vision that mostly amounts to using information technology to wring energy-efficiency gains out of industry, across transportation, power generation and distribution.

AlertMe’s ‘huge’ UK gas deal: big data for real people

Smart energy startup AlertMe — which provides a cloud-based way to monitor your energy consumption — has struck a deal with British Gas, the U.K.’s largest domestic energy supplier. It’s the latest big break for the business, CEO Mary Turner explains.

The era of the 100 MW data center

The era of the 100 MW data center is coming, as Internet companies build more and more server-packed data centers to support the growing number of web users and the increasing amount of time spent online.

The top 10 trends from the year’s big smart grid show

One of the year’s largest smart grid conferences — DistribuTECH — closes today in San Antonio, Texas. It’s like the CES for utilities, power companies and the vendors that are trying to sell them stuff. Here are the top 10 trends I took away.

Hacking solutions to the world’s resource problem

This weekend in New York City, dozens of developers gathered for the second Cleanweb Hackathon, where programmers spent the weekend building mobile and web apps around new ways to manage energy. The event is the latest sign the ecosystem around clean technology is changing.

Apple reportedly using new display tech for iPad 3

Supply chain reports released on Friday indicate that Apple will opt for indium gallium zinc oxide (IGZO) panels instead of in-plane switching (IPS) display panels for its upcoming iPad 3. The change would cut energy consumption and cost while improving resolution.

It’s time to go beyond PUE in the data center

Last week Google disclosed the details of its energy consumption, and its data center engineers argued that power usage effectiveness (PUE), the leading figure used to assess how energy-efficient a data center is, must be continuously measured and averaged over a twelve-month period. This was a veiled shot at companies that measure their data centers on a cold day in January, when cooling load is near zero, and then publish a great PUE number. Google is right. We need more transparency surrounding PUE.
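
Google’s argument can be made with simple arithmetic: because cooling load varies over the year, a single cold-month snapshot flatters the number. A minimal sketch with made-up monthly readings (none of these figures come from any real facility):

```python
# Hypothetical monthly PUE readings for one facility. Cooling load,
# and thus PUE, rises in summer, so a January snapshot looks better
# than the honest twelve-month average Google calls for.
monthly_pue = [1.12, 1.13, 1.15, 1.18, 1.22, 1.28,
               1.31, 1.30, 1.24, 1.19, 1.15, 1.13]

january_snapshot = monthly_pue[0]
annual_average = sum(monthly_pue) / len(monthly_pue)

print(f"January snapshot:     {january_snapshot:.2f}")
print(f"Twelve-month average: {annual_average:.2f}")
```

With these illustrative numbers the snapshot reads 1.12 while the annual average is 1.20, which is exactly the gap continuous measurement is meant to close.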

But it’s time to go beyond PUE and rethink what efficient data center computing means as a whole. Leading companies like Facebook, Amazon and Google are all approaching a PUE of 1.1, so the metric offers diminishing returns. With that in mind, here are three shifts in focus for data center efficiency that will matter in the future.

1. Admit the limits of the Power Usage Effectiveness metric. While PUE has been helpful in making it clear that a data center will be judged on its energy efficiency, it tells us nothing about the efficiency of the hardware and software. Here’s a hypothetical that Power Assure’s CTO, Clemens Pfeiffer, and I recently discussed:

You’ve got a hundred old servers in your data center that you decide you can do without, so you turn them off. The problem is, your PUE just went up. The same amount of power is being used by the facility to cool and light the building even though there’s less power being used by IT equipment. This illustrates a fundamental point: It’s time to address how efficient hardware and software are themselves as they relate to performing actual compute tasks. If a server’s on but it’s not doing anything, that’s wasteful.
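
The hypothetical follows directly from the definition: PUE is total facility power divided by IT equipment power, so shedding idle IT load while the facility overhead stays fixed makes the ratio worse. A sketch with illustrative numbers:

```python
def pue(facility_overhead_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return (facility_overhead_kw + it_load_kw) / it_load_kw

# Illustrative numbers: 400 kW of cooling/lighting overhead and
# 1,000 kW of IT load, of which 100 kW is idle servers.
before = pue(400.0, 1000.0)  # 1.4
after = pue(400.0, 900.0)    # idle servers off, overhead unchanged

assert after > before  # less total energy consumed, yet a worse PUE
```

The facility now draws less power overall, which is the outcome that actually matters, yet its reported PUE deteriorates from 1.4 to roughly 1.44.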

2. Think about software. The entire conversation about data center efficiency over the past few years has revolved around facilities management and hardware. But for the first time, we’re seeing the beginnings of a basic question: What software platform is optimal for reducing energy use?

Stanford professor and current Google fellow Christos Kozyrakis has looked at how energy-efficient the widely used software platform Hadoop is. But one of the problems with Hadoop is that it requires nodes to remain powered on even if they’re not being used. “Hadoop is doing a lot of things that are wasteful, and those things have to be optimized,” says Kozyrakis.
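
The cost of Hadoop’s always-on nodes is easy to estimate, because servers are far from energy-proportional: an idle machine still draws a large share of its peak power. A back-of-the-envelope sketch (the 300 W peak and 60 percent idle fraction are assumptions for illustration):

```python
def cluster_power_kw(nodes: int, busy: int,
                     peak_watts: float = 300.0,
                     idle_fraction: float = 0.6) -> float:
    """Total draw when `busy` nodes run at peak and the rest sit idle.

    idle_fraction is an assumed figure: typical servers draw much of
    their peak power even when doing no useful work.
    """
    idle = nodes - busy
    return (busy * peak_watts + idle * peak_watts * idle_fraction) / 1000.0

# A 100-node cluster with only 20 nodes actually working:
always_on = cluster_power_kw(100, 20)      # Hadoop-style: everything stays up
idle_powered_off = cluster_power_kw(20, 20)  # idle nodes switched off

print(always_on, idle_powered_off)  # 20.4 kW vs. 6.0 kW
```

Under these assumptions the always-on cluster burns more than three times the power of one that parks its idle nodes, which is the waste Kozyrakis is pointing at.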

When a processor, like an Intel Atom or Xeon chip, is designed, engineers constantly weigh the energy characteristics of the final product. The same discipline now needs to be applied to software platforms.

3. Integrate hardware and software efficiency metrics. The buzzword in data centers is “heterogeneous computing environment.” Engineers no longer deal only with uniform servers built around Intel Xeon chips. They work with all sorts of configurations, from high-performance setups to low-power servers: Intel Atom–based SeaMicro machines, many-core Tilera chips and maybe even, one day, ARM-based Calxeda chips.

Here is an opportunity to figure out which programs suit which server configurations and to optimize for efficiency. Kozyrakis cited an example in which an MIT professor asked students to write the same application first in a low-level language like C and then in a higher-level language like Java; execution times differed by factors in the thousands. That translates into very different energy characteristics for the program.
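
The link between run time and energy is direct: energy is average power integrated over time, so at a similar power draw, a thousandfold gap in run time is roughly a thousandfold gap in energy. A trivial sketch with illustrative numbers:

```python
def joules(avg_power_watts: float, seconds: float) -> float:
    """Energy consumed = average power x run time."""
    return avg_power_watts * seconds

# Same task on the same (assumed) 200 W server, run times differing 1000x:
fast = joules(200.0, 1.0)     # tuned low-level implementation
slow = joules(200.0, 1000.0)  # naive high-level implementation

assert slow / fast == 1000.0  # the energy gap tracks the run-time gap
```

In practice the slower program may also draw somewhat different average power, but the first-order effect is run time, which is why language and platform choice shows up directly on the power bill.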

In the end, PUE is a metric about reducing waste and making sure the energy going into a data center is actually used by the servers. But the next frontier of data center efficiency is optimizing software for the multitude of emerging hardware platforms. This is more difficult, because it requires a shift in focus among major cloud players, like Google and Rackspace, as well as a new period of cooperation between programmers and hardware designers. It will take time, but there are clear benefits in power consumption and total cost of ownership for the companies operating the data centers that drive cloud computing.

Question of the week

What metric best describes the efficiency of a data center?