Here’s why CERN ditched OpenNebula for OpenStack

CERN, Europe’s big particle physics lab, had what might be termed showcase deployments of OpenNebula, the OpenStack rival turned vCloud rival. But no longer – all those resources have been shifted from OpenNebula to, uh, OpenStack and VMware’s vCloud.

The research facility is trying to develop a private cloud to serve 11,000 physicists around the world and, with the Large Hadron Collider currently being upgraded, the amount of data that will need to be managed is set to see a major increase.

CERN has been testing various virtualization and cloud options over the last few years and, according to infrastructure manager Tim Bell, it’s currently got two environments: around 3,000 virtual machines running in a high-availability configuration on Microsoft’s System Center Virtual Machine Manager (SCVMM); and three OpenStack deployments, the main one running around 500 or 600 hypervisors. The plan is to scale up to 15,000 hypervisors, with 150,000 VMs, by 2015.

Bell, who has spoken at several of the OpenStack Summits over the past few years and is a member of the OpenStack board, explained that the OpenNebula trial was stopped at the start of this year. He said “all those resources are now part of the OpenStack environments”:

“We’ve been investigating how to use cloud technology for physics. We did a test with OpenNebula and ran various scenarios. It was useful to explore the concepts of cloud computing but, when we were looking at deploying at scale, we wanted an environment where we wouldn’t be the largest. We didn’t want to be pushing the limits of scalability ourselves; we wanted to be building on the work of others.

“We also wanted to take advantage of the ecosystem – load balancing, orchestration and so on. These things float around OpenStack or become part of it, and that’s part of the momentum of that as a solution.”

One of the key OpenNebula deployments had been within CERN’s engineering department – an OpenNebula cloud running on top of VMware. However, as OpenNebula project director Ignacio Llorente confirmed to me today, that department is now using vCloud instead.

But, as regards Bell’s explanation for why CERN has dropped OpenNebula, Llorente hit back very hard indeed:

“They have never been our largest deployment. We have much larger deployments in several telcos. They only ran 16,000 VMs on 400 physical boxes. Regarding the technical aspect, I do not agree OpenStack has a wide ecosystem. The ecosystem around the Amazon API is much wider, and as far as I remember CERN used our AWS API implementation.

“Moreover OpenNebula better meets the requirements of cloud deployments for HPC and Science regarding job orchestration, load balancing … We have a very wide user base in research and supercomputing: FermiLab, European Space Agency, SARA Dutch Supercomputing Center, NASA Langley, CESGA, DESY, NCHC, CSIRO, KIT, PIC, CESCA, CHPC, most of the cloud sites at the European Grid Initiative…

“As far as I know this movement was not a technical issue, it was a hype-driven decision… The management of the cloud project at CERN changed and the new management decided to leverage the hype around OpenStack. Although we spent a lot of time helping CERN build its cloud, they never contacted us to comment about this movement.”

Harsh words. So, did CERN buy into hype or did it make a smart move? Such is the nature of experimentation that we can’t draw a firm conclusion at this point. All we do know for sure is that OpenStack has a significant amount of momentum in Europe right now and OpenNebula – a more mature stack with a stronger local heritage, remember – isn’t going down without a fight.