A green data center pairing on the Northern California coast: a water treatment plant and a new data center.
Facebook’s wind power utility deal in Iowa is gaining praise from the data center industry as a model worth replicating.
What to expect: Keynotes from Amazon Web Services bigwigs (minus Jeff Bezos), Netflix and NASA; a scrum of partner announcements; and lots of counterprogramming from cloud rivals.
Amazon’s data center efficiency expert James Hamilton weighs in on AMD’s announcement that it will build ARM processors. He is decidedly positive, and is likely already testing ARM processors in Amazon’s cloud services.
Invariably what we see happening about once a decade is a high-volume, lower-priced technology takes over the low end of the market. When this happens many engineers correctly point out that these systems can’t hold a candle to the previous generation server technology and then incorrectly believe they won’t get replaced. The new generation is almost never better in absolute terms but they are better price/performers so they first are adopted for the less performance critical applications. Once this happens, the die is cast and the outcome is just about assured. The high-volume parts move up market and eventually take over even the most performance critical workloads of the previous generation. We see this same scenario play out roughly once a decade.
I’ve never seen Hamilton so explicitly pro-ARM in the data center, particularly his certainty that ARM will slowly gobble up more server workloads. His fundamental analysis is sound, including the observation that the bottleneck in many data center workloads isn’t the CPU but other constraints like networking, storage or memory. That’s one more reason cloud data centers would be willing to sacrifice some CPU performance: they don’t necessarily need it. (Incidentally, many of the companies working so hard on ARM servers are also trying to solve the associated memory-control and networking problems with parallel processing.)
Hamilton looks back at how UNIX replaced IBM mainframes, and x86 replaced UNIX, noting that high volume economics always trump performance. Which is why he loves ARM. We’ve never seen processor volumes at this level before. There were 7.6 million server units sold in 2010, but 6.1 billion ARM processors shipped last year. At that sort of volume, innovation moves quickly and Hamilton believes that the innovation will infect the data center very quickly.
Amazon’s data center efficiency guru James Hamilton had a post Sunday night breaking down the recent report from Facebook, which detailed the company’s energy footprint and how much power is used by its data centers. Facebook’s total carbon footprint in 2011 was 285,000 metric tons of CO2, corresponding to 532 million kilowatt hours of electricity. At a very low electricity rate of 4 cents per kilowatt hour, Facebook would have dropped about $21 million on power last year.
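The $21 million figure follows directly from the consumption numbers in the post; a quick sketch of the arithmetic (the 4 cents/kWh rate is Hamilton's deliberately low assumption):

```python
# Rough check of Facebook's estimated 2011 electricity bill,
# using the figures from Hamilton's post.
annual_kwh = 532_000_000   # 532 million kilowatt hours consumed in 2011
rate_per_kwh = 0.04        # $0.04/kWh, a very low wholesale-ish rate

annual_cost = annual_kwh * rate_per_kwh
print(f"Estimated annual power cost: ${annual_cost / 1e6:.1f}M")  # ≈ $21.3M
```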
What’s interesting about Hamilton’s post is that he uses the total power consumed by Facebook to figure out how many servers the company has. You can do this by starting with the average power delivered to the IT infrastructure and dividing it by an estimate of the per-server wattage (in this case Hamilton estimates 300 watts per server). Hamilton estimates that Facebook is now running at least 150,000, and probably more like 180,000, servers, while he estimates that Google is running about a million, below even what forecasters had thought Google would have by now. The Google number is relatively low, pointing to the fact that Google’s strong efforts in efficiency and server utilization have allowed it to purchase relatively few servers. And Facebook’s server count is growing fast, which will ultimately drive the company to focus further on the efficiency of each workload.
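Hamilton's back-of-the-envelope server count can be reproduced from the same energy figure. Note the PUE value below is my assumption for illustration (Facebook's published facility numbers are in that neighborhood), not a number from the post:

```python
# Estimate server count from annual energy use, per Hamilton's method:
# average facility power -> IT power (via PUE) -> divide by per-server watts.
annual_kwh = 532_000_000            # Facebook's reported 2011 consumption
hours_per_year = 8760

avg_facility_kw = annual_kwh / hours_per_year   # ≈ 60,700 kW average draw
pue = 1.10                          # assumed overhead ratio, not from the post
watts_per_server = 300              # Hamilton's per-server estimate

avg_it_kw = avg_facility_kw / pue
servers = avg_it_kw * 1000 / watts_per_server
print(f"Estimated servers: {servers:,.0f}")     # on the order of 180,000
```

Tweaking the assumed PUE and per-server wattage moves the answer between Hamilton's 150,000 floor and his 180,000 best guess, which is the point: the method brackets the fleet size rather than pinning it down.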
There have long been arguments about how hot you can run a data center before you start to see servers die. Operators and engineers are ultra-cautious, because who wants to tell the boss that you burned out a few servers by letting the data warehouse hit 80 degrees? (One old study even says that the failure rate for electronics doubles every 10 degrees Celsius.) Well, a new study is out from the Computer Science Department at the University of Toronto that looked at multiple environmental conditions at high-performance data centers, concluding that “the effect of high data center temperatures on system reliability are smaller than often assumed.” Amazon’s data center efficiency guru James Hamilton notes that cooling is the single largest non-IT cost in the data center, and the paper goes on to say that when Microsoft raised the temperature two to four degrees in one of its Silicon Valley data centers, it saved $250,000. That’s a lot of cash, and reason enough to rethink how hot we can run servers.
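That old doubling-every-10-degrees rule is an Arrhenius-style exponential; a minimal sketch of what it implies for a modest set-point bump (the 4-degree delta is illustrative, echoing Microsoft's change, and not a figure from the Toronto study):

```python
# Failure-rate multiplier under the old "doubles every 10 degrees C" rule of thumb.
def failure_multiplier(delta_c: float, doubling_interval_c: float = 10.0) -> float:
    """Relative failure rate after raising temperature by delta_c degrees C."""
    return 2 ** (delta_c / doubling_interval_c)

# A 4-degree bump implies ~1.32x the failures under the old rule;
# the Toronto data suggests the real-world penalty is smaller than this.
print(f"{failure_multiplier(4):.2f}x")
```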
Buried in last week’s catfight between Apple and Greenpeace over the energy sourcing for Apple’s new North Carolina data center and how clean it would be, was a surprising fact — that the conversation between Apple and Greenpeace was happening at all.
Google is a champ when it comes to its infrastructure, and a blog post shows the search giant is running its data centers at a PUE of 1.14. Compared to Facebook, it has room for improvement, but what about when ranked against Apple, Amazon and Microsoft?
With its newest facility, Vantage Data Centers wants to prove that a low-power data center in Silicon Valley is not an oxymoron. Vantage’s new V2 data center in Santa Clara, Calif., claims an impressive PUE (power usage effectiveness) rating of 1.12.
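For readers keeping score across these ratings: PUE is just total facility power divided by the power that reaches the IT gear, so 1.12 means only 12 percent overhead for cooling and power distribution. A sketch with hypothetical load numbers chosen to match Vantage's rating:

```python
# PUE = total facility power / IT equipment power.
# 1.0 is the theoretical ideal (every watt goes to servers).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Hypothetical loads: 5,000 kW of servers plus 600 kW of cooling
# and distribution overhead yields Vantage's claimed rating.
rating = pue(total_facility_kw=5600, it_equipment_kw=5000)
print(f"PUE: {rating:.2f}")  # 1.12
```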
On Perspectives, the blog written by Amazon’s data center efficiency guru James Hamilton, there’s a post highlighting the Ishikari data center in Japan. The data center does a few innovative things, namely using high-voltage direct current (DC) power distribution and ductless cooling. The DC distribution question has been around for a while (claims are that DC power distribution can yield 30 percent energy savings, though Hamilton points out it’s actually closer to 3-5 percent), but now that solar power is gaining ground, it has arisen again. Solar panels put out DC power, so coupling them with a data center that distributes DC rather than AC power cuts out a conversion step and can increase efficiency. IBM is trialing this type of setup in Bangalore, India, and we’ll see if it catches on.
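The savings argument comes down to dropping conversion stages. A toy comparison with assumed per-stage efficiencies (the percentages are illustrative, chosen to land in Hamilton's 3-5 percent range rather than the 30 percent claim):

```python
# Compare end-to-end delivery efficiency for a conventional AC chain vs. a
# high-voltage DC chain that skips one conversion stage. All stage
# efficiencies are assumed for illustration, not measured figures.
def chain_efficiency(stage_efficiencies):
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

ac_chain = [0.96, 0.96, 0.98]  # UPS rectify, UPS invert, server PSU (AC input)
dc_chain = [0.96, 0.99]        # rectify once, server PSU (DC input)

ac = chain_efficiency(ac_chain)
dc = chain_efficiency(dc_chain)
print(f"AC: {ac:.1%}, DC: {dc:.1%}, saving: {dc - ac:.1%}")
```

With these assumptions the DC chain delivers roughly 95 percent of input power versus about 90 percent for AC, a difference of a few points, which is why Hamilton's 3-5 percent figure is more plausible than the 30 percent marketing number.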