Today in Cloud

Preparing for last week’s Equinix U.S. roadshows on the Asia-Pacific cloud was a useful reminder of just how vast the Asia-Pacific region really is. I’ve visited the area in person a few times, travelling from London to Auckland via Los Angeles, and London to Sydney or Canberra via Bangkok or Hong Kong. It doesn’t seem to matter which way you go; the second leg (across the Pacific from Los Angeles, or across assorted bits of water and the bulk of Australia from south-east Asia) takes as long as the (already ridiculously long) first part of the journey. Like other cloud providers, Amazon already has data centers in Singapore and Japan, but either of those is a huge distance from Australia or New Zealand. It’s been rumored for a while that Amazon might open a third data center in the southern Pacific, and ITNews today reports that the company is advertising for a Sydney-based data center manager. A physical data center will presumably follow.

Today in Cloud

Heather Clancy had a piece on ZDNet late last week, reporting IBM’s efforts to develop a solar array specifically designed to deliver data center power. This resonates nicely with a conversation I just had with Lex Coors, VP of Group Data Center Technology and Engineering at European data center provider Interxion. As part of a broader discussion on the improving green credentials of most data centers, Lex had a lot to say about cooling, about passing waste heat to the wider community, and about the need for hardware vendors to design computers that can run hotter. But on powering data centers, he pointed out a clear split between practice in Europe and in the U.S. In Europe, he said, data centers tend to simply draw power from the continent-spanning power grid. In North America, on the other hand, there is far more interest in data centers generating at least some of their power for themselves… and IBM’s new solar array presumably has a role to play there. Luckily, those fog-bound Dublin data centers don’t need to worry about finding enough sunshine just yet.

Today in Cloud

Katie Fehrenbacher has a piece on GigaOM today, discussing the news that Facebook’s Oregon data center has been awarded LEED Gold certification for its energy efficiency. There is also a higher certification, LEED Platinum, which GE and Vantage have attained. As more and more data centers embrace techniques to lower their power requirements and emissions, we should see a corresponding growth in the number attaining these awards. Responsible use of resources is clearly something we should all be concerned about… but for the managers of big data centers there may be a more important question: how many of these innovations are a cost-effective way to lower resource use, and how many are interesting but uneconomic experiments that win awards?

Today in Cloud

Hot on the heels of recent data center announcements in Finland and Asia, Google today announced plans to build a new data center in Dublin. This joins existing data centers owned by Amazon and Microsoft, and supplements a facility that Google already rents in the city. Google cites Ireland’s climate (it rains a lot, and is relatively cool) as a key factor in the move. Network connectivity, an educated workforce, and some of the lowest business taxes in Europe no doubt helped. Let’s just hope Google gets power from a different electricity substation to the one Microsoft and Amazon connect to.

Today in Cloud

Patrick Thibodeau at Computerworld is amongst those reporting David Filas’ experiment to demonstrate the resilience of data center equipment. Data center managers go to great lengths to control temperature, humidity, and other conditions inside their data centers, often devoting a significant proportion of their operational costs to keeping air conditioning blasting out cold air. There are plenty of recent cases in which data center designers have turned the heat up a bit (without significant detrimental effects) or used ambient cooling by piping in air from outside. But Filas has gone further, running his experimental kit in a shed and exposing it to dramatic variations in temperature and humidity. The equipment kept running. The industry is already challenging long-held assumptions around power usage, but Filas’ work perhaps suggests that there is still further to go.

Today in Cloud

Bloomberg BusinessWeek reports Gartner figures suggesting that homegrown servers such as those built to Facebook’s Open Compute designs “now account for 20 percent of the U.S. market for servers.” Whilst the article points to the detrimental effect this trend is apparently having on traditional server vendors such as Dell and HP, it paints a rosier picture for chip manufacturers with “Intel Corp. [reporting] its revenue from chips used to craft servers for data centers surged 50 percent in the second quarter.” Intel (presumably?) sees higher margins selling chips this way, with bespoke server build-outs commanding far smaller bulk discounts than behemoths like HP can command. ZDNet’s Larry Dignan is more cautious than Gartner or BusinessWeek, noting “it’s unclear how many orders Dell, HP and IBM were really losing. There aren’t any concrete examples or figures to back up the premise.” Dignan is right, and continues: “For Facebook and Google, the data center is the largest capital expense. It’s only natural that they’d go the DIY route.” The same is far less true for Wal-Mart or Ford or Boeing, where IT infrastructure is a necessary cost (and hassle) of doing business; it’s far easier for them to haggle a good deal for boxes from Dell or HP than to concern themselves with sourcing power supplies, and cables, and chassis, and solder, and fans, and processors, and RAM, and all the rest. So DIY servers are growing nicely, but only really in niche areas such as the server farms of the web’s giants. The nightmare scenario of the Boeing-designed-and-built server in a Boeing data center, or the Wal-Mart-designed-and-built server in every Wal-Mart, is unlikely to keep Michael Dell and Léo Apotheker awake at night. Far more worrying might be the prospect of new low-margin upstarts entering the business and selling Open Compute-inspired servers to the customers upon which Dell, HP, IBM and all the rest depend.

Verizon’s acquisitions provide an enterprise path to the cloud

At the end of last month Verizon acquired CloudSwitch, adding value to Verizon’s January acquisition of cloud data center provider Terremark. Around the world, big telecommunications providers such as AT&T, BT, Telstra and Verizon have been hard at work, diversifying and seeking new business opportunities as revenue from domestic and international voice traffic continues to decline. While existing expertise and infrastructure made networking and data hosting a logical new endeavor, recent moves such as the Terremark and CloudSwitch acquisitions tap into a growing enterprise requirement for easy and controlled paths out of the legacy data center and into the cloud.

The world’s biggest telephone companies are increasingly well established providers of co-location and hosting services, typically serving large international corporations with deep pockets and widely distributed workforces. Although smaller data center companies such as Savvis and Rackspace have successfully diversified from simple hosting to the provision of cloud computing solutions, the telcos have typically proved less able to manage the transition on their own.

InfoWorld’s David Linthicum commented at the time of the Terremark acquisition that “Verizon has the same problem as many other telecommunications giants: It has fat pipes and knows how to move data, but it doesn’t know how to turn its big honking networks into big honking cloud computing offerings.” Verizon is not alone. Elsewhere, Orange (a subsidiary of France Telecom) is simply reselling a GoGrid product to deliver a private cloud solution to customers, removing the need to develop and deploy a solution of its own.

NPRG Senior Analyst Ed Gubbins notes that

locating and building data centers, outfitting them with the necessary equipment, efficient energy supplies and software and building a capable staff is no small task for a company like Verizon with lots of other business segments it must attend to. “It takes time,” Lowell McAdam, Verizon’s chief operating officer, said . . . “That’s not our core competency.”

Terremark and competitors are proving more nimble and more able to adapt to changing data center usage patterns. It seems likely that Terremark executives will gain increasing control over Verizon’s existing data center facilities, accelerating the speed with which these can be transformed for the cloud. It remains to be seen, though, whether strategies that worked for Terremark will prove as successful when transplanted into Verizon’s very different organization.

Verizon’s $1.4 billion acquisition of Terremark in January gave the company a cloud computing capability and, as Bloomberg BusinessWeek reported, access to new markets. Although almost certainly requiring much less cash (terms were not disclosed), last month’s acquisition of Massachusetts startup CloudSwitch may ultimately prove more significant to Verizon’s ambitions. CloudSwitch, the winner of the LaunchPad showcase at GigaOM’s 2010 Structure conference, sells software to simplify the process of moving applications from an enterprise data center to the cloud.

Combining CloudSwitch software with existing Verizon data centers and bandwidth creates an increasingly compelling proposition. Customers no longer simply buy the pipe down which their data moves or access to the server on which their data or application is hosted. Instead, they are buying into a complete package, including networking, hosting and the software that links all of it to their existing on-premise data center. Each of these pieces may exist separately elsewhere, and each of those individual components may be cheaper or better than Verizon’s. But Verizon’s ability to package and brand a rounded set of services is likely to prove compelling, especially in industries where IT is simply a necessary cost of doing business. Verizon isn’t just selling bandwidth or storage or data processing; Verizon is selling peace of mind, and at the moment no other data center provider offers quite the same combination of capabilities.

With CloudSwitch, Verizon is no longer simply one choice among many for networking or hosting. Verizon has become a compelling choice for any enterprise that wishes to explore a hybrid environment in which existing on-premise applications are gradually transitioned out to hosting partners and, ultimately, the cloud.

Question of the week

Is acquisition the only way for telcos to compete in the cloud?

Today in Cloud

Gavin Clarke at The Register has reawoken earlier suggestions that Amazon’s AWS and Microsoft’s Azure play some role in delivering data to customers of Apple’s forthcoming iCloud. Comments from Clarke’s sources now suggest that, rather than merely serving as a content delivery network (CDN) for iCloud, both Amazon and Microsoft are storing and serving customer data; their cloud services underpin iCloud. It can obviously make perfect sense for a deliverer of end-user experience like Apple to rely upon more expert third parties to run the plumbing… but if the roles of Microsoft and Amazon are bigger than originally thought, what’s that great big data center of Apple’s for? Why not simply let third-party infrastructure do most of the work, and just keep a tight rein over service levels and user experience with some penalty-laden contractual relationships?

Competition for the private cloud heats up

Despite OpenStack’s continued growth, a combination of product updates and acquisitions from Citrix, Eucalyptus, Red Hat and VMware over the past week demonstrates that the race to become the dominant private cloud provider, as well as win over the enterprise, is far from over. Is one of these solutions “better” than the others? Not unequivocally, since each has characteristics that appeal to specific customers. OpenStack, for example, continues to attract attention with a good story about providing cloud infrastructure for all, including NASA and other strong partners; but VMware can leverage its significant installed base in the virtualization space to sell hard. Meanwhile, none of the others are standing still.

First, a quick recap on this week’s news:

  • Citrix. Citrix used VMworld this week to announce that the next version of the CloudStack product will be completely open source, rather than continuing to follow the less permissive open-core model of earlier releases (and competitor Eucalyptus). Citrix has an existing route into data centers with its networking and virtualization business, a strong product with real-world deployment and now an open-source story. Some doubts remain around the future relationship between CloudStack and OpenStack, despite explicit pledges of support. It’s also unclear how CloudStack fits with an earlier Citrix project: Project Olympus. At the end of the day, making source code freely available for reuse is a worthy step, but it’s not one that will be decisive in driving the selection of a private cloud solution.
  • Red Hat. Red Hat is also pursuing an open-source line, leading what The Register describes as an “effort to succeed where OpenStack has struggled in building an open-source cloud founded on broad community input.” It’s difficult to interpret OpenStack’s growing mind share as evidence of “struggle,” but Red Hat’s multihypervisor, multicloud approach to Aeolus does represent a different take: Aeolus intends to bridge different environments in a permissive fashion. Red Hat has an interesting story to tell about freedom, flexibility and choice. But enterprises are far more likely to be looking for support, evidence of adoption elsewhere and a clear road map into the future. Interesting as it is technically and philosophically, Aeolus may not be the solution they need.
  • Eucalyptus. Back in May, as Ubuntu promoted OpenStack’s private cloud over previous favorite Eucalyptus, I wrote, “it is becoming increasingly unclear whether [Eucalyptus] has a compelling future.” But last week, Eucalyptus Systems announced version 3.0 of its product, setting its sights on providing “highly available” enterprise clouds capable of responding to hardware failure. The company already has some big names (former MySQL CEO Mårten Mickos) and real-world deployments at scale; it should shout far more loudly about them. Those, together with the new 3.0 features, may be sufficient to tip the balance of market interest back in Eucalyptus’ direction.
  • VMware. As Ben Kepes notes, VMware’s sweeping announcements at VMworld see the company attempt to extend its closed-source reach, encompassing both private clouds inside the data center and hybrid solutions that reach beyond the enterprise. Derrick Harris suggested last month that “VMware wants to be the OS for the cloud,” and that shows no sign of changing soon. VMware is well-known and familiar, with a lock on the enterprise virtualization market that will be hard to shift, especially as it continues to innovate.

Current favorite OpenStack, meanwhile, continues to generate headlines of its own, and it has assembled an impressive set of partners (including Citrix and Verizon-acquired CloudSwitch). But it remains relatively untested in terms of deployment. To move from commentator’s favorite to enterprise solution of choice, OpenStack needs to round out its feature set and provide more examples of successful adoption. It may never choose to compete across VMware’s entire product portfolio, but OpenStack today remains narrowly focused. VMware is ambitious and efficient, useful characteristics in an expanding company but also illustrative of an attitude and mind-set that will appeal to many customers.

At the end of the day, it may not be the “best” cloud that wins but the cloud provider with the best story and the best fit with existing enterprise systems, vision and road map. Could that end up being VMware?

Question of the week

Are this week’s announcements enough to shift the apparent dominance of VMware and OpenStack?

Today in Cloud

One of the drivers for virtualization has always been the desire to use resources more efficiently by increasing the utilization of all those servers sitting in the data center. And yet, utilization rates remain lower than might be expected. In a conversation this week, Abiquo CEO Pete Malcolm cited figures suggesting that “ideal” utilization of storage is about 70%. In non-virtualized environments, actual utilization is around 25%, but even in virtualized environments that typically only rises to about 35%. The figures for servers are worse. The ideal is again about 70%. Non-virtualized usage is typically only 10%, rising to around 27.5% with virtualization. Are IT managers too cautious to push those utilization rates higher, or is something else going on?
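For a rough sense of what those figures imply, here is a minimal back-of-the-envelope sketch in Python. Only the server utilization rates come from Malcolm’s cited figures; the aggregate demand and cores-per-server numbers are purely illustrative assumptions.

```python
import math

# Server utilization rates cited above (storage figures omitted for brevity).
# The demand and per-server capacity below are hypothetical, for illustration only.
UTILIZATION = {
    "non-virtualized": 0.10,   # typical bare-metal server utilization
    "virtualized": 0.275,      # typical utilization after virtualization
    "ideal": 0.70,             # Malcolm's suggested 'ideal' rate
}

def servers_needed(demand_cores: float, cores_per_server: int, utilization: float) -> int:
    """Servers required to satisfy an aggregate demand at a given average utilization."""
    usable_per_server = cores_per_server * utilization
    return math.ceil(demand_cores / usable_per_server)

if __name__ == "__main__":
    demand = 1_000   # hypothetical aggregate demand, in fully busy core-equivalents
    cores = 16       # hypothetical cores per server
    for label, rate in UTILIZATION.items():
        print(f"{label:>15}: {servers_needed(demand, cores, rate):>4} servers at {rate:.1%}")
```

Under those assumptions the same workload needs roughly 625 servers at 10% utilization, around 228 at 27.5%, and about 90 at the 70% “ideal”, which is why the question of what keeps utilization so low is worth asking.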