Could Climate Change Lead to Computing Change?

I wrote about an effort to use millions of specialized embedded processors to build a (relatively) energy-efficient supercomputer that could run at speeds of up to 200 petaflops over at Earth2Tech. The Department of Energy’s Lawrence Berkeley National Laboratory has signed a partnership with chip maker Tensilica to research building such a computer, but after chatting with Chris Rowen, Tensilica’s CEO, I wonder if more specialized computing tasks in the data center might be farmed out to highly customizable — but lower-powered — chips.

Rowen doesn’t think the data center is at the point yet where power consumption costs outweigh the benefits of using a cheaper x86 processor, but said that day might come, especially for very specific uses such as accessing web databases. In the meantime, he’s focusing on getting customized embedded cores in applications that rely on speed, such as routing. Cisco uses Tensilica cores in its recently launched QuantumFlow Processor, primarily as a way to boost speeds. As the web gets faster, general-purpose x86 chips have to work harder and hotter, so a return to specialized, low-power processors may be in the cards.

Computing hardware and services tend to run in cycles, and right now, I think the hardware and networks put in place in the late ’90s, which allowed Web 2.0 and rich Internet applications to flourish, are hitting their limits. The IP and IT networks are in the early stages of stepping up to the challenge of delivering the next generation of services, but unlike in the last cycle, power consumption will join speed as an essential feature for the underlying silicon.