There’s been some coverage today of Cisco’s latest Global Cloud Index, which predicts that traffic to and from the cloud will account for one third of all traffic passing through data centers by 2015. Derrick Harris highlights the report’s interest in 2014, “the year cloud-based workloads will surpass traditional-data-center-based workloads in total volume.” Sean Ludwig pulls out some of the headline figures early in his piece, stressing that “global traffic generated by cloud computing services will increase a staggering 12 times by 2015 compared to cloud traffic in 2010, while data center traffic will increase at a less-showy-but-still-impressive four times by 2015.” With typical irreverence, The Register is quick to point out that “Cisco is engaging in a bit of marketeering by calling it a ‘cloud’ index.” They have a point; this is about far more than the cloud. Still, the numbers make for some interesting reading. Is your network ready to handle the load?
The Itanium spat took a further nasty turn this week, as HP hit Oracle with a lawsuit intended to force the database giant to continue Itanium support. Oracle originally announced that it would stop supporting Intel’s Itanium chip back in March, provoking howls of protest from Intel and HP (the biggest supplier of Itanium-powered hardware). Both companies stressed an ongoing roadmap for Itanium, but Oracle didn’t back down. IBM joined in last month, describing the chip line as a “dead end.” And yesterday, HP filed a lengthy complaint in a Santa Clara court. As Art Wittmann notes for InformationWeek, “Customers can only watch in disgust.” Timothy Prickett Morgan at The Register reports Oracle’s rapid response; the company would appear to be standing by its earlier claims. Arik Hesseldahl at All Things D has the lawsuit itself, and some further analysis. Art Wittmann compares the whole thing to an episode of Desperate Housewives. It seems rather more akin to a bad day in the kindergarten sandpit, but customers are potentially getting hurt by all the thrown sand, and someone needs to grow up and step back soon so everyone can get back to business.
Today’s big news has got to be Facebook’s unveiling of its Open Compute Project: open reference designs for highly efficient servers, and the data centers they sit in. There’s been a flood of coverage overnight, most of which has tended toward effusive praise for the move. Tom Raftery draws upon his own experience to take a good look at the green side of this story, and ZDNet’s Dan Kusnetzky and The Register’s Timothy Prickett Morgan are both quick to spot that a few server reference architectures will not change the world of server design overnight. Jesse Robbins, on the other hand, writes for O’Reilly Radar that “This is a revolutionary project, and I believe it’s one of the most important in infrastructure history,” broadly echoing Stacey Higginbotham’s take here on GigaOM. It may even be possible for them all to be right. One aspect of the news that struck me as particularly interesting was the apparent integration with OpenStack, with Rackspace CEO Lanham Napier commenting that “the Rackspace team has visited and studied Facebook’s next-generation data center, our engineers continue to collaborate, and we look forward to optimizing OpenStack for Open Compute.” It may not immediately change every server in every data center, but open computing on top of open hardware in open data centers cannot help but have far-reaching implications for this industry. How will the incumbents respond?