Tilera and the race for the low-power server

Tilera introduced its 9-core processor optimized for 64-bit processing this week. The company sits at the heart of the market for power-efficient multi-core servers, and Rich Miller’s interview with CEO Devesh Garg indicates that it is laser-focused on marketing itself around the performance-per-watt metric.

One interesting aspect of Tilera’s future involves Taiwan-based Quanta Computer, which is an investor in Tilera. Quanta has long been rumored to be providing custom server builds for the likes of Google and Amazon, effectively allowing webscale data center operators to bypass OEMs like Dell and HP. The reason companies like Google might do this is that they want to optimize server designs for power efficiency and believe they can build a better server than any OEM can provide. (I’ve written about the market impacts of chronic R&D underinvestment by the OEMs.)

So it makes sense that Quanta would put money into Tilera. Tilera is a direct channel to the customers, like Facebook, that are most open to experimenting with low-power servers running on non-x86 architectures. Furthermore, if Tilera can really find its niche, then who knows: Quanta might have an awesome new partner to build low-power servers with.

All in all, it’s beginning to feel like fewer and fewer people care about chip clock speed. The game now is compute tasks per watt of power.
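To make that metric concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers) of the comparison buyers increasingly care about: useful work divided by power draw, rather than raw clock speed.

```python
# Minimal sketch: comparing servers on work-per-watt rather than clock speed.
# Both configurations and all figures are hypothetical, for illustration only.

servers = {
    # name: (requests served per second, power draw in watts)
    "many-core, low-power": (200_000, 90),
    "fewer cores, higher clock": (260_000, 250),
}

for name, (throughput, watts) in servers.items():
    print(f"{name}: {throughput / watts:,.0f} requests/sec per watt")

# The higher-clocked box wins on raw throughput, but the low-power design
# does roughly twice the work per watt, which is the comparison that matters
# once power, not peak speed, is the constraint.
```

On those made-up figures the many-core machine delivers roughly 2,222 requests per second per watt against about 1,040 for the conventional one; the exact numbers matter far less than the habit of dividing by watts before comparing.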

Today in Cleantech

Rich Miller over at Data Center Knowledge has published his top 10 data center trends of 2011, and it’s striking how many of them are about energy efficiency. On the sustainability tip, he points to the growth of air economization as a more efficient cooling method (#6) and the opening up of data center designs to help everyone build more efficient facilities (#7). Most fascinating was trend #10, “renewables and site location.” A recent online poll at the site showed that readers favored the statement “renewable sources must be included” over “the price of power is paramount” by three to one. If data centers start demanding clean power sources, as Facebook appears set to do, that will further change how data centers are sited and built.

Today in Cloud

It’s been fascinating to see the flurry of attention over the past few days for a piece of Microsoft work that was first reported months ago. The New York Times appears to have started the ball rolling, with a piece by Randall Stross. Brad Feld, Rich Miller, Heather Clancy and David Linthicum were amongst those to pick the piece up again. And the basic premise? That computers running in or near domestic properties could be used to heat those homes. So far, so good… but how do we get power and cooling to those machines, and how do we ensure that data moves quickly enough between them? There’s a reason we cram racks close together, and it’s not just to save money on the surface area of a data center building.

Today in Cloud

Enterprise storage provider Hitachi Data Systems (HDS) yesterday acquired long-standing partner BlueArc Corporation, bringing BlueArc’s strength in Network Attached Storage (NAS) in-house. Rich Miller describes the deal as giving HDS “an ongoing play in the market for Big Data,” placing it in context alongside earlier acquisitions such as HP’s $2.4 billion grab for 3PAR. David Vellante takes a look at the numbers, suggesting that HDS was a significant customer for BlueArc and that an acquisition may have been the only alternative to an IPO at a bad time. Chris Mellor speculates on how much HDS spent (possibly $400 million or more) before marking the end of an era: “With the acquisition the era of the stand-alone filer supplier comes to a close. Out of all the NAS start-ups, only NetApp has become an independent mature company. All the others have been acquired or crashed.” Consolidation in the enterprise IT market just keeps on coming. Who will be next?

Today in Cloud

My wife, once again, has been proved right (don’t tell her). I survived more than a week offline, on the side of a mountain on Crete. My business did not implode, the Internet is still there, and I’m refreshed. As I wade through the backlog, the RSS feeds are dominated by Steve Jobs’ resignation and the build-up to, experience during, and aftermath of Hurricane Irene. In Cloud-land, the big gatherings of Dreamforce and VMworld are generating an awful lot of noise that will doubtless deserve closer examination. But, from my refreshed and retrospective viewpoint, the stories that speak loudest appear to have generated hardly a ripple. Dun & Bradstreet is partnering with Salesforce on data.com, with “the vision to unify the best sources of business data.” Further back, Eucalyptus unveiled version 3.0 of their private cloud solution. Cade Metz writes in The Register that the company has “pumped new life” into a sound technical solution whose future I recently questioned. CEO Marten Mickos responded to me robustly at the time, and maybe 3.0 will prove him correct. And finally, Verizon bought CloudSwitch to beef up their capabilities in delivering hybrid clouds. Three disconnected snippets from a veritable flood of news. Three separate proof-points adding to an already complex picture. Three pieces that speak to the quietly growing relevance of all this stuff to the traditional enterprise, and to the need to package the cloud’s capabilities in forms more digestible to that type of buyer. None of these stories are as whizzy as the headline pronouncements from Las Vegas and San Francisco, and none dominates the news cycle as Jobs apparently did in my absence. But they’re important all the same, and they are the sorts of stories upon which lasting businesses depend.

Today in Cloud

With Amazon’s European outage now behind us, the company has published its analysis of what happened. Rich Miller dissects the technical aspects of the report for Data Center Knowledge, but doesn’t touch on Amazon’s own acknowledgement that (just like last time) communication to customers “can improve.” Indeed. Caught in the same outage, Microsoft appears to have done a far better job of ducking criticism. Or do fewer people care? Spooked by the extent to which both this and the April outage spilled over to affect more than one of the (supposedly isolated) Availability Zones within an Amazon region, Todd Hoff comes to the conclusion that we might be better off not bothering with the Availability Zone concept in its current form at all. Instead, it might make more sense to see each data center as a single logical unit that is either available or not. The Availability Zone model certainly broke down last week, and in Virginia in April. But surely it still has some merit?
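Taken literally, Hoff’s suggestion reduces to something like the sketch below: treat each facility as one unit that is either up or down, health-check it as a whole, and fail over at the application level. This is a minimal Python illustration with hypothetical endpoints, not a description of how Amazon or anyone else actually implements availability.

```python
# Sketch of the "single logical unit" idea: each facility is simply up or down,
# with no partial, per-zone states. Endpoints below are hypothetical.
import urllib.request

FACILITIES = [
    ("eu-dublin", "https://eu-dublin.example.com/health"),
    ("us-east", "https://us-east.example.com/health"),
]

def facility_is_up(url: str, timeout: float = 2.0) -> bool:
    """A facility counts as available only if its health check answers cleanly."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_facility() -> str:
    """Route all traffic to the first facility that is up, in preference order."""
    for name, url in FACILITIES:
        if facility_is_up(url):
            return name
    raise RuntimeError("no facility available")

if __name__ == "__main__":
    print("routing traffic to:", pick_facility())
```

The appeal is simplicity; the cost is that a whole facility is written off the moment its health check fails, which is exactly the trade-off the Availability Zone model was designed to avoid.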

Today in Cloud

The European cloud operations of both Amazon and Microsoft were affected yesterday by a lightning strike on equipment belonging to the Irish utility company that supplies electrical power to both Dublin data centers. Rich Miller at Data Center Knowledge quotes Amazon’s status report from the time, which explains why the on-site backup generators failed to do their job. Storms happen, and electrical equipment will get damaged by lightning, but it must be embarrassing for both Amazon and Microsoft that neither backup generators nor alternative power suppliers were able to prevent a prolonged outage. And who thought it was a good idea for two of Europe’s largest data centers to rely upon electricity flowing through a single location? Services at both data centers are gradually being restored, although it may be a few days before everything is back online. Even my music is affected.

Today in Cloud

A paper co-authored by researchers from Facebook and multicore chip specialist Tilera reports that servers built from large numbers of low-power processor cores deliver clear performance and power advantages in certain circumstances. Dean Takahashi at VentureBeat focuses on the processors being “four times more energy efficient,” whilst Rich Miller at Data Center Knowledge reports that they “boost memcached efficiency” over alternatives with fewer cores. Stacey Higginbotham sees the work as validation of the claims being made by companies such as Tilera, with their alternative chip designs. She also points to the need for more generic benchmarks with which to compare the performance of different architectures, especially as the hardware configuration for a single massive database may need to be radically different from that used to support other kinds of workload.

Today in Cloud

We’ve seen video tours of Facebook data centers, and we’ve seen how Google destroys old storage on an industrial scale. Amazon tends to be far more reticent about the setup that runs its public cloud operation, which is why Rich Miller’s unpicking of the recent Amazon Technology Day is interesting. Miller quotes Amazon’s James Hamilton, who claims at one point that “Every day Amazon Web Services adds enough new capacity to support all of Amazon.com’s global infrastructure through the company’s first 5 years, when it was a $2.76 billion annual revenue enterprise.” Amazon may not break out numbers for their cloud business, but Hamilton’s quote would certainly appear to suggest that the business is still growing nicely.

Today in Cloud

As expected, Steve Jobs announced Apple’s iCloud service today and reiterated the company’s commitment to making this service succeed where earlier attempts such as MobileMe had not. Rich Miller at Data Center Knowledge quotes Jobs as saying “If you don’t think we’re serious about this, you’re wrong.” Apple’s rhetoric strongly pushes its cloud-based service as the hub for customer computing, with VentureBeat’s Sean Ludwig also quoting Jobs: “‘We’re going to demote the PC and the Mac to just be a device,’ Jobs said. ‘We’re going to move your hub, the center of your digital life, into the cloud.’” Parts of the service, including automated downloading of purchases from one iOS device to any others that you own, are live now, with the rest rolling out later this year. The whole proposition is clearly aimed at consumers, but enterprises and their employees use Apple kit too, and it will be intriguing to see how Apple’s latest offering fits into existing enterprise policies and workflows.