Most of the attention around open-source cloud deployments may be focused on OpenStack, but it seems its older rival OpenNebula is growing at a very healthy pace indeed. This morning, the first OpenNebula Global Conference kicked off, and the community-centric event seems full of diverse use cases.
So, with visible installations — those not behind a firewall — doubling each year, what is OpenNebula being used for?
“Typical enterprise shop”
A typical case study came from Produban – at the risk of sounding hipsterish, you probably won’t have heard of them; they’re the in-house IT shop for Spain’s Santander Group, the banking giant, and they have roughly 5,500 employees worldwide. They’re trying to make their systems more efficient, and they also have an eye on future analytics applications, Hadoop workloads and so on.
“We are a typical enterprise shop – we have thousands of obligations and have a lot of things that are unique,” Produban IT MD Daniel Concepcion told the crowd. Explaining that regulatory issues meant Santander had to steer clear of the public cloud for now, he added that Produban was eager to hop on the IaaS, PaaS and software-defined datacenter trains.
Proprietary stacks weren’t sufficiently adaptable, so Concepcion said it came down to OpenNebula and OpenStack. “OpenStack is great from a technology point of view but not so great from an end-user point of view,” Concepcion said. “If you want to [implement it yourself] you need to go through a vendor – it’s open source but vendor-based.”
Why do you have to “go through a vendor” with OpenStack? Concepcion told me later that “when you go to the website and download it and bring it up and running, it usually doesn’t work out of the box.”
Concepcion maintained that OpenNebula does work straight away, adding that the relatively small size of the OpenNebula community made it easier to get involved. He also said Produban was working with C12G, the company that manages the OpenNebula project, to implement new features such as support for the Open Virtualization Format (OVF) and multiple virtual datacenters, as well as integration between OpenNebula and Viapps. He said this work would be fed back into the community.
Produban is also testing the stack with Open Compute hardware (with both KVM and VMware hypervisors), so stay tuned.
Akamai is a much bigger name than Produban, and the content delivery network (CDN) outfit is one of the most prominent members of the OpenNebula community. According to Akamai software engineer Thomas Higdon, the company uses OpenNebula for development and quality assurance purposes.
That effectively means virtualizing Akamai’s whole, highly complex setup. “We’re trying to leverage expertise from around the company in order to create an internal master instance for QA and for developers to use… and not worry about ruining the instance because we’re sharing it. Each additional network we add to this master instance makes it a more powerful concept.”
So Higdon and his colleagues created a tailored distribution of OpenNebula that fits in with Akamai’s homegrown spin on Ubuntu, and there are now seven or eight groups within Akamai running their own OpenNebula masters and deployments (Higdon’s involves 20 to 25 physical machines running 300 to 400 VMs – “these are early stages here,” he said).
So why OpenNebula? “We don’t have to pay VMware if this thing blows up,” Higdon told me. “Some of this is just a cost-saving measure. We tried out OpenStack but… OpenNebula was more tenable, self-contained and customizable. OpenStack is all over the place.”
Earlier this year we reported how CERN, once a real showcase deployment for OpenNebula, had ultimately decided to go with OpenStack and VMware. But CERN’s U.S. counterpart, Fermilab, has been using OpenNebula for its private IaaS cloud since 2010.
“It was clear they’d done the system administration interface and got the part of making VMs right,” FermiCloud lead Steven Timm told the OpenNebula conference attendees. Echoing Concepcion, he also pointed out that OpenNebula was “rock-stable from the beginning – it just didn’t crash.”
Timm noted the absence of certain features in OpenNebula, such as an Amazon S3-like distribution method for VM images, and he also told me afterwards that a recent re-evaluation of the various options out there showed OpenStack (which didn’t exist during the first evaluation) scoring about the same as OpenNebula. But overall he sounded positive, suggesting that a shift to private cloud saw barely any performance hit – a big concern for Fermilab’s scientists.
“I don’t think any one cloud is going to win out in the short term, so it’s important to make sure you’re going to [be able] to interact with heterogeneous cloud infrastructure,” he told attendees. “We do want to federate with other institutions. We got FermiCloud and GCloud and EC2 all going at once.”
It’s hard for those behind the project, such as OpenNebula director and C12G co-founder Ignacio Llorente (pictured at the top), to accurately quantify how much adoption the stack is seeing. It’s open source, so they can track downloads, but beyond that visibility is limited.
What they can tell is how many deployments are accessing the internet, as Fermilab’s does, for example. According to Llorente, the current number there is around 5,000, and that count is doubling each year (incidentally, that’s measuring installations that connect consistently over a three-month period, not those connecting once then disappearing).
As for deployments behind the firewall, such as Akamai’s, who knows? What’s equally unclear is what level of deployment we’re talking about here – is it for support or developer testing, or a full production deployment?
But even without that specificity, it’s clear that OpenNebula is seeing real use, and at a growing pace. Not bad for an operation that very much lacks the vendor clout that helps to drive OpenStack.