Red Hat’s new operating system will power up your containers

Open-source software giant Red Hat said on Thursday that its new operating system, custom-made to power Linux containers, is now available to download. Red Hat has been a big proponent of Docker and its container-packaging technology since last summer, touting its support of the startup and making sure its Enterprise Linux 7 product was compatible with Docker’s technology.

Container technology has generated a lot of buzz over the past year by promising a type of virtualization that’s lighter weight than your typical virtual machine. In order for a container to actually run, it needs to be connected to a host Linux OS that can distribute the necessary system resources to that container.

While you could use a typical Linux-based OS to power up your containers, these kinds of operating systems merely get the job done and don’t take full advantage of what containers have to offer, as CoreOS CEO Alex Polvi (whose own startup offers a competing container-focused OS) told me last summer.

Red Hat’s new OS supposedly comes packed with features designed to make running containerized applications less of a chore. These features include an easier way to update the operating system (OS updates can often be a pain for IT admins) and integration with Google’s Kubernetes container-orchestration service for spinning up and managing multiple containers.

The new OS also promises better security for Docker containers — an issue that Docker’s team has been addressing in various updates — with a supposedly stronger way of isolating containers from each other when they are dispersed in a distributed environment.

Of course, [company]Red Hat[/company] has some competition when it comes to becoming the preferred OS for container-based applications. CoreOS has its own container-centric OS, and Ubuntu has its Snappy Ubuntu Core system for powering Docker containers. Additionally, a pair of veterans who departed Citrix in September have founded a startup called Rancher Labs, which just released RancherOS, described as a “minimalist Linux distribution that’s perfect for running Docker containers.”

It will be worth keeping an eye on which OS gains traction in the container marketplace and whether some of these new operating systems will start to offer support for CoreOS’s new Rocket container technology as opposed to just the Docker platform.

A Red Hat spokesperson wrote to me in an email that “Red Hat Enterprise Linux-based containers are not supported on CoreOS and rocket is not supported with Atomic Host. We are, as always, continuing to evaluate new additions in the world of containers, including Rocket, with respect to our customer needs.”

Docker buys SocketPlane as it builds out its container-networking strategy

You can add another acquisition to Docker’s plate: the startup is set to announce on Wednesday that it has bought a small networking startup called SocketPlane. The acquisition, whose financial terms were not disclosed, is one more step in Docker’s plan to become the de facto container-management company whose technology can play well on multiple infrastructures.

SocketPlane’s entire six-person staff is joining Docker and will be helping the container-centric startup develop a networking API that makes it possible to string together hundreds to thousands of containers, even if the containers “reside in different data centers,” explained Scott Johnston, Docker’s SVP of product.

The upcoming networking API can be thought of as an extension of Docker’s recently announced orchestration services. Although the new orchestration services make it possible to spin up and manage multiple clusters of containers, the networking technology currently built into the platform only works well for “a handful of hosts and containers,” as opposed to the thousands needed in complex environments like Spotify’s infrastructure, explained Johnston.

In order for Docker’s orchestration services to really come to fruition and deliver on the promise of spinning up and managing tons of containers, Docker requires an underlying networking API that enables all of those containers to speak to each other at a massive scale. Docker decided that it needed this technology expertise in-house, so it turned to SocketPlane, whose staff includes networking veterans from [company]Cisco[/company], OpenStack and OpenDaylight, said Johnston.

The goal is to create a networking API that can work with the gear and applications from providers like VMware/Nicira as well as Cisco and [company]Juniper[/company], explained Johnston. Theoretically, the API will make it possible for a user to build a distributed, containerized application in one data center that uses Cisco networking gear and have that application move over to an OpenStack-based cloud environment that uses Juniper gear “without breaking everything,” said Johnston.

If VMware has its own networking technology that users like, users should be able to “swap out” the Docker networking technology and use the other technology, said Johnston. Google’s own Kubernetes container orchestration system currently swaps out the existing Docker networking technology for its own networking technology, said Johnston, but once the SocketPlane team builds a workable open networking API, you can imagine a world in which users can alternate between Kubernetes or Docker’s Swarm orchestration service if they choose to.
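Johnston’s “swap out” idea amounts to programming against a driver interface rather than a concrete implementation. Here is a minimal Python sketch of that pattern; all class and driver names are invented for illustration and are not Docker’s actual API:

```python
# Illustrative sketch (not Docker's real API): a pluggable networking
# interface that lets an orchestrator swap one driver for another
# without changing the code that launches containers.
class NetworkDriver:
    def connect(self, container: str) -> str:
        raise NotImplementedError

class DefaultDriver(NetworkDriver):
    def connect(self, container):
        return f"{container}: bridged via default driver"

class VendorDriver(NetworkDriver):
    """Stands in for a third-party implementation (e.g. an SDN vendor's)."""
    def connect(self, container):
        return f"{container}: overlay via vendor driver"

class Orchestrator:
    def __init__(self, driver: NetworkDriver):
        self.driver = driver  # can be swapped without touching callers

    def launch(self, container):
        return self.driver.connect(container)

orc = Orchestrator(DefaultDriver())
print(orc.launch("web-1"))    # default networking
orc.driver = VendorDriver()   # "swap out" the implementation
print(orc.launch("web-2"))    # same API, different backend
```

The point of the sketch is that the orchestrator only ever talks to the interface, which is what would let a Kubernetes or Swarm user plug in whichever networking backend suits their gear.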

“Say this works and we have ten or twelve implementations of the API,” said Johnston. “A network operator might want to take advantage of that.”

This is Docker’s third publicly known acquisition. The startup bought devops startup Koality last October, which was preceded by the July acquisition of London-based Orchard Laboratories.

Story clarified to emphasize that the APIs will not be swapped out.

Microsoft joins Docker in announcing new container services

Docker’s suite of orchestration services that the container-management startup first detailed in December are now available in beta for the public to download, Docker said on Thursday.

These new orchestration services are just another step for Docker to tout its cloud-agnostic platform, geared for enterprise customers worried about vendor lock-in. Judging by Microsoft’s participation in the announcement, it looks like Microsoft is trying to make itself appealing to those same customers as well.

While this announcement is not too surprising given that the company has made it clear that it’s eyeing orchestration services as a way to further develop the Docker platform, what’s interesting is how excited Microsoft seems to be. This follows through on Microsoft’s pledge to make sure Docker is fully integrated with its Azure cloud and Windows Server.

Container orchestration refers to the ability to spin up, coordinate, schedule and distribute multiple containers for the purpose of running an organization’s infrastructure; this is an operations task, rather than a development task. One can bundle the services that help make an application run inside these containers, and with an overarching system that can create and distribute containers when needed, IT staff members don’t have to slave away over the minutiae of keeping that application running.

Simply put, containers are great at isolating applications and services from each other while they all share resources from the same Linux OS kernel. When combined with an orchestration service that can oversee the creation of containers and can spin them up as needed, however, their potential at cutting down overhead becomes that much greater. Just look at Spotify, which developed its own Helios container orchestration framework that’s contributed to a much more efficient infrastructure for the streaming-music provider.
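The spin-up-as-needed behavior described above can be sketched in a few lines of Python. This toy orchestrator (all names hypothetical, not any real product’s API) keeps a service at a desired replica count, creating or removing container stand-ins as demand changes:

```python
# A toy sketch of what an orchestration layer does: hold a desired
# number of container "replicas" for a service, spinning up missing
# ones and tearing down surplus ones. Purely illustrative.
class ToyOrchestrator:
    def __init__(self):
        self.running = []  # names of "containers" currently up

    def scale(self, service, replicas):
        current = [c for c in self.running if c.startswith(service)]
        for i in range(len(current), replicas):   # spin up missing ones
            self.running.append(f"{service}-{i}")
        for extra in current[replicas:]:          # tear down surplus
            self.running.remove(extra)

orc = ToyOrchestrator()
orc.scale("web", 3)
print(orc.running)   # ['web-0', 'web-1', 'web-2']
orc.scale("web", 1)
print(orc.running)   # ['web-0']
```

Real orchestrators layer scheduling, health checks and networking on top of this core loop, but the declare-a-count-and-converge idea is the heart of it.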

So far, the big public cloud providers — Amazon, Google and [company]Microsoft[/company] — have all indicated in one way or another that they view containers as the way of the future for IT operations. Google’s been busy promoting its open-source Kubernetes orchestration framework, and in November it announced a managed version of Kubernetes called Google Container Engine, which as of now only functions on the Google cloud. Amazon, on the other hand, announced its own container orchestration service, called EC2 Container Service, which, unsurprisingly, works only on the Amazon cloud.

Microsoft hasn’t yet announced a similar container orchestration service, and today’s news seems to highlight the fact that it’s content with just letting Docker handle all that orchestration. Docker’s new orchestration services — called Machine, Swarm and Compose — will supposedly make it possible for organizations to run and coordinate their containers across multiple clouds, whether they be [company]Amazon[/company], [company]Google[/company], [company]VMware[/company], Digital Ocean and so on.

With Microsoft jumping on board with Docker in announcing the release of the new features, this seems like one more way the Redmond, Washington giant is trying to gain trust from developers who want a service that plays nice with multiple clouds.

Microsoft’s big open-source push the past year was designed to court developers who are hesitant to trust Microsoft due to its once closed nature under its previous regime. By joining forces with developer-favorite Docker, Microsoft is once again trying to make itself more attractive to the development community.

In a second blog post detailing the announcement, Microsoft made sure to list a number of ways it’s integrating the new Docker orchestration services into Azure. Ross Gardler, Microsoft’s senior technology evangelist of its open technologies, wrote “Today we announced a number of improvements to our Docker support on Azure, most notably Docker Machine support for Azure and Hyper-V and support for Docker Swarm.“

With orchestration services seeming to be the next step for containers to enter the world of production, it’s interesting that Microsoft hasn’t yet come up with its own version of the technology. But, maybe it doesn’t have to as long as it’s working with Docker.

Of course, by not making its own orchestration service and instead relying on the cloud-agnostic Docker, Microsoft is putting its own Azure cloud at risk, since organizations will be able to use other clouds as well.

Still, the risk might be worth it to Microsoft in its attempt to further lure developers (it already has a strong foothold with legacy companies). It’s got years to make up for as it tries to distance itself from the Microsoft of the past.

How Spotify is ahead of the pack in using containers

In late December, CoreOS CEO and container guru Alex Polvi proclaimed in a tweet that he believes 2015 will be the year of the production-ready container, which would be a testament to how fast companies are adopting the technology that promises more portability and less overhead than virtual machines.

For music streaming service Spotify, however, containers are already a way of life. The streaming-music provider has been using containers in production on a large scale, according to Mats Linander, Spotify’s infrastructure team lead.

This is a big deal given that it seems only a few companies beyond cloud providers like Google or Joyent have gone public with how they are using container technology in production. Indeed, when Ben Golub, CEO of the container-management startup Docker, came on the Structure Show podcast in December and described how financial institutions are experimenting with containers, he said that they are generally doing pilots and are using Docker containers “for the less sensitive areas of their operations.”

Ever since Docker rose to prominence, developers have been singing the praises of containers, which have made it easier to craft multicomponent applications that can spread out across clouds. Container technology is basically a form of virtualization that isolates applications and services from each other within virtual shells all while letting them tap into the same Linux OS kernel for their resources.

For many companies as well as government agencies, it’s not just the benefits to the software development process that have them interested in containers — it’s how containers can assist their operations. If containers truly are less bulky than virtual machines (Golub told me over the summer that using containers in production can lead to 20-to-80 percent lighter workloads than using VMs alone), then it’s clear organizations stand to benefit from using the tech.

But you can’t simply embed containers into the architecture of your application and expect a smooth ride, especially if that application is a hit with the public and can’t afford to go down. It takes a bit of engineering work to see the benefits of containers in operations, and some people have said that Docker has caused them more headaches than happiness.

Spotify, which has 60 million users, runs containers across its four data centers and over 5,000 production servers. While it runs containers in its live environment, Spotify had to do a little legwork to actually see some gains.

These containers will help beam Beyonce to your playlist

One of the ways the streaming-music company uses containers is to more efficiently deploy the back-end services that power the music-streaming application. With the addition of a home-grown Docker container orchestration service called Helios, the team has come up with a way to control and spin up multiple clusters of containers throughout its data centers.

Out of 57 “distinct backend services in production” that are containerized, Linander considers 20 of them as being significant. All of these containerized services share space with “more than 100 other services” churning each day, he explained.

These containers house stateless services, which basically means that these services don’t require constant updating from databases and they can be safely restarted without causing problems.

Linander said he didn’t “want to go into deep detail” on what all those services are doing, but he did explain that “view-aggregation services” are a good fit for containerization. These services pull together the data from Spotify’s data centers that pertains to an individual’s playlist — including the name of an artist, album images and track listings.


Bundling these services inside containers helps Spotify because instead of relying on a client that needs to send a separate request for each service to obtain the necessary information from the databases, Spotify can essentially deploy a cluster of containers that contain an aggregate of the services and thus not have to send so many requests. As a result, the application is less “heavy and bulky,” he said.
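A view-aggregation service of the kind Linander describes can be sketched roughly as follows; the lookup tables and function names here are invented stand-ins for Spotify’s real backends:

```python
# Illustrative sketch of a view-aggregation service: instead of the
# client issuing one request per backend (artist, artwork, tracks),
# a single aggregator fans out server-side and returns one combined
# playlist view. The dicts stand in for separate backend services.
ARTISTS = {"t1": "Beyonce"}
ARTWORK = {"t1": "beyonce_album.jpg"}
TRACKS = {"t1": ["Crazy in Love", "Halo"]}

def playlist_view(track_id):
    # one call from the client; the fan-out happens inside the service
    return {
        "artist": ARTISTS[track_id],
        "image": ARTWORK[track_id],
        "tracks": TRACKS[track_id],
    }

view = playlist_view("t1")
print(view["artist"])   # Beyonce
```

Shipping the aggregator as a container cluster means the many small internal lookups stay inside the data center, and the client makes one round trip instead of three.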

It also helps that if Spotify restarts a container it will start fresh from the last time it was spun up. That means that if something crashes, users won’t have to wait too long to see Beyonce’s mug appear on their playlists along with all of her hits.

As Spotify infrastructure engineer Rohan Singh explained during a session at last year’s Dockercon, before the company was using Docker containers, Spotify’s hardware utilization was actually low because “every physical machine used one service” even though the company has a lot of machines.

Spotify slide from Dockercon explaining its older architecture before Docker containers


By running a fleet of containers on bare metal, Spotify was able to squeeze more juice out of the system because that cluster contains more than one service.
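The utilization math is easy to sketch: if each physical machine hosts exactly one service, the machine count tracks the service count, while packing several containerized services per host needs far fewer boxes. The figures below are illustrative, not Spotify’s actual numbers:

```python
# Rough sketch of the utilization gain Singh described: one service
# per physical machine means machines == services; containers let
# several services share each host. Numbers are made up.
import math

services = 57
one_service_per_machine = services      # old model: 57 machines
services_per_machine = 8                # containers share each host
packed = math.ceil(services / services_per_machine)
print(one_service_per_machine, "machines before,", packed, "after")
```
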

Say hello to Helios

Spotify’s Helios container orchestration framework (which the company open sourced last summer) is crucial to making sure that the deployed containers are running exactly the way Spotify wants them to run.

Right around the time Spotify first started experimenting with lightweight containerization, Docker was starting to raise eyebrows, Linander said. The Spotify team then met with Docker (Spotify is also a member of the Docker Governance Advisory Board) to discuss the technology, which looked promising but at the time lacked orchestration capabilities with which containers could be linked together and deployed in groups. It should be noted that as of early December, Docker has made orchestration services available in its product.

Because container orchestration services weren’t really out there during the time Spotify was investigating the use of Docker, Linander said he decided “we could build something in house that could target our use case.”

For Linander, a lot of the benefits of containers come to fruition when you add an orchestration layer because that means teams can now “automate stuff at scale.”

“When you have several thousands of servers and hundreds of microservices, things become tricky,” Linander said, and so the Helios framework was created to help coordinate all those containers that carry with them the many microservices that make Spotify come alive to the user.

The framework consists of the Helios master — basically the front-end interface that resides on the server — and the Helios agents, companion pieces of software that run alongside Docker on each host and carry out the master’s instructions.

Slide of Helios from a Spotify talk during Dockercon


Working in conjunction with Apache ZooKeeper, the open-source distributed configuration service, Spotify engineers can set a policy in the Helios master for how they want containers to be created, and “Zookeeper distributes the state to the helios agent” to make sure the containers are spun up correctly, said Linander.

During Dockercon, Singh explained that Helios is great at recognizing when a “container is dead” and if a person accidentally shuts down an important container, Helios can be configured to recognize these mission-critical containers and instantly load one back up.
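That “always running” guarantee boils down to reconciliation: compare the desired state against what is actually running and relaunch the difference. Here is a minimal sketch of the idea, with invented names rather than Helios’s real API:

```python
# Sketch of the reconciliation loop behind a Helios-style guarantee:
# whatever is in the desired set but not actually running should be
# relaunched by the supervisor. Illustrative only.
def reconcile(desired, running):
    """Return the set of containers a supervisor should relaunch."""
    return set(desired) - set(running)

desired = {"playlist-view", "search", "auth"}
running = {"search", "auth"}          # someone killed playlist-view
to_restart = reconcile(desired, running)
print(to_restart)                     # {'playlist-view'}
```

In the real system the desired state lives in ZooKeeper and the agents on each host do the relaunching, but the diff-and-converge step is the core of it.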

“We just always have this guarantee that this service will be running,” Singh said last summer.

New orchestration options and new container technology

Of course, Helios is no longer the only orchestration system available, as there are now several such frameworks on the block, including Google’s Kubernetes, Amazon’s EC2 Container Service, startup Giant Swarm’s microservice framework and Docker’s own similar services.

Now that there’s a host of other options, Spotify will be evaluating possible alternatives, but don’t be surprised if the company sticks with Helios. Linander said the main reason Spotify currently uses Helios is that “it is battle proven,” and while other companies may be running containers in production through other orchestration services, no one really knows at what scale they may be operating.

But what about other new container technology that may give Docker a run for its money, like CoreOS and its Rocket container technology? Linander said he doesn’t have a “strong opinion” on the subject and even if Spotify sees “a bunch of potential” with new container tech, the company isn’t going to drop everything it’s doing and implement the latest container toy.

As for ClusterHQ and its Flocker container-database technology, which the startup claims will let users containerize datasets inside the Docker Hub, Linander said, “It looks cool to me, personally,” but it’s still too early to tell if the startup’s technology lives up to what it says it can deliver. Besides, he’s finding that Cassandra clusters are getting the job done just fine when it comes to storing Spotify’s data.

“We are always considering options,” said Linander. “[We are] building the best music service that ever was and will ever be.”

Mats Linander, infrastructure team lead at Spotify


It’s clear from speaking with Linander that having a well-oiled orchestration service helps take a load off of engineers’ plates when it comes to tending to those container clusters. It seems like a lot of the ease, automation and stability of spinning up clusters of containers comes from the orchestration service that coordinates the endeavor.

However, not every company possesses the engineering skills needed to create something akin to Helios, and while the service is open source, it’s still a custom system designed for Spotify so users will have to do some tweaking to get it functional for themselves.

For 2015 to truly be the year of the production-ready container, organizations are going to have to get up to speed with some sort of orchestration service, and that service will have to scale well and run reliably over the long haul.

At this point, it’s just a question of whose orchestration technology will gain the most traction in the marketplace since most organizations will more than likely be trying out new tech rather than creating new tech, unless they are as ambitious as Spotify and other webscale companies. With the plethora of new options now available — from Kubernetes to Docker to CoreOS’s Fleet — the public’s now got a lot of choices.

Was 2014 the end of enterprise computing?

It’s been just over a year since I left Netflix and joined Battery Ventures. So it seemed appropriate (if a couple of weeks late) to take a look back at some technology and cloud themes that bubbled up in 2014 and offer a few predictions for the coming year.

In 2015 I expect more hubbub over everything from the Docker/containerization craze to Netflix’s open-source cloud platform to — dare I say it? — the end of enterprise computing. Here are some thoughts about the recently ended year in tech, in no particular order:

The Netflix open source cloud platform got traction

The Netflix team continues to release projects (about ten new repos on GitHub during 2014) and get more traction.

Notable external use cases for the [company]Netflix[/company] platform include growth in interest in the Reactive programming model using Hystrix; the Spring microservices architecture including [company]Netflix[/company] components; IBM’s Watson services, built using NetflixOSS; and Nike’s online services using NetflixOSS, described at the AWS re:Invent conference. Some aspects of the NetflixOSS architecture have been more widely influential, as seen in the growth of interest in microservices and the immutable service model.


Docker wasn’t on anyone’s 2014 roadmap, but it is on everyone’s 2015 roadmap. (There was even a New York Times story about it earlier this year.) The Docker open-source project — which automates the deployment of applications inside software “containers” — is an excellent example of how to drive viral adoption of a developer product, and it combines four useful things in one: it’s portable, it speeds up development, it defines the configuration, and images are shared via Docker Hub. It’s become a key ecosystem and will undoubtedly continue to grow in 2015.

The concept of anti-fragility took off

The idea behind the Netflix Chaos Monkey, a Netflix service that tests the automation that helps systems recover from problems, is that you have to prove you are resilient by creating your own failures and exercising your failure-recovery mechanisms. This is now so prevalent that it’s being mentioned in unexpected places, such as a business discussion with Workday and a talk by the CIO of the Department of Homeland Security Citizenship and Immigration Services at the DevOps Enterprise Summit. As enterprises re-architect their systems using principles from DevOps, microservices and cloud-native architectures, the trend is to bake in and automate recovery and resilience.

Cloud roundup: AWS moves on to a new phase

[company]Amazon[/company] Web Services continues to dominate cloud computing, and the service doubled its IP address range again this year, to about 10 million. The IP address range sets an approximate upper limit on the number of instances that AWS could run at the same time, since by default most instances get assigned one address. It is one of the few available metrics that shows the growth rate.
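To see why the published IP ranges work as a growth proxy, you can total the addresses in a set of CIDR blocks; the prefixes below are made up for illustration and are not Amazon’s actual ranges:

```python
# Totaling the addresses in a list of CIDR blocks gives the rough
# instance ceiling the article describes (one address per instance
# by default). Example prefixes are invented, not AWS's real ones.
import ipaddress

prefixes = ["54.64.0.0/11", "52.0.0.0/11", "176.32.64.0/19"]
total = sum(ipaddress.ip_network(p).num_addresses for p in prefixes)
print(f"{total:,} addresses")   # a /11 alone holds 2,097,152
```
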

An interesting reversal occurred during 2014: Previously, clouds were seen as missing features compared to data centers, but now many startups are building products to give data centers features that already exist on AWS. It appears that the most sophisticated operations architectures are now on AWS, not on-premises.

[company]Microsoft[/company] Azure is getting a lot of enterprise-cloud signups but still doesn’t represent a significant proportion of the overall cloud market. There were several large and embarrassing Azure outages in 2014, and relatively few non-Microsoft services were impacted enough for the public to notice.

AWS had a few zone-level, partial outages or network partitions, but nothing significant enough to cause widespread impacts. Notably, it’s now two years since the last big AWS outage. (Remember all the press those used to get?). While AWS has matured and made its services and operating practices more resilient, Azure has some work to do.

[company]Google[/company] spent 2014 getting enterprise features in place and hiring lots of ex-AWS people. But it still has a lot to prove as a cloud vendor. Google is an interesting alternative to AWS for startups, but the company is going to have trouble getting the enterprise market adoption that Microsoft and AWS have already figured out. Startup cloud vendor Digital Ocean, meanwhile, is growing fast and has carved out a space for itself as the simple, developer-oriented cloud solution. AWS has added so many features that it’s a full-time job just trying to keep up with them, so I think there is a place in the market for something easy to understand and use.

Enterprise computing vendors

Bottom line: The big, traditional enterprise-computing vendors are failing to grow their customer bases. You can watch their revenue from new-product sales fade.

Services and support revenue will increase to compensate in the short term, but even that will eventually collapse as customers move on to low-cost, open-source solutions or outsource to cloud-based services. This is one of those times in which replacement technology revenue is an order of magnitude cheaper than the incumbent revenue.

For example, we could see market segments that currently generate $10 billion of revenue for traditional enterprise-computing vendors be entirely replaced by $1 billion of revenue for cloud vendors and open-source based startups. My friends Peter Magnusson and Marten Mickos joined Oracle Cloud and HP Cloud, respectively, in 2014. I wish them well, but I’m not optimistic that they will be able to generate enough revenue to offset the losses elsewhere.

Adrian Cockcroft is a technology fellow at Battery Ventures. Prior to that he was cloud architect at Netflix, and earlier he was a distinguished engineer at eBay and Sun Microsystems.

ClusterHQ rakes in $12M to make containers play nice with data

Big data startup ClusterHQ sees a lot of opportunity in capitalizing on container and database technology and, with a $12 million series A funding round that the company plans to announce on Thursday, it’s got a nice chunk of cash to help it do so.

ClusterHQ’s flagship technology is its open-source Flocker project, which the company released back in August. Flocker aims to make it possible for users to load datasets into containers, all inside the Docker Hub, so that the housed datasets stay synced up with the application or the application’s components, which are stored inside containers as well.

A developer would use Flocker to store, inside Docker containers, the types of datasets that power stateful services — essentially the databases, message queues and key-value stores that need constant updating to keep an application supplied with reliable, current data.

Currently, applications built with Docker containers can be connected to the type of datasets used for stateful services, but those datasets have to be hosted outside the Docker environment, explained ClusterHQ CEO Mark Davis. Because of this, the Docker hub “doesn’t know anything about [those datasets]” and there is “no notion of picking up the external service and moving it around” like one can do with the application components that are stored in containers.

ClusterHQ team


The plus side of having these kinds of constantly updated datasets stored outside of Docker is that if something were to cause the application to falter, the dataset wouldn’t go down with the whole system, and any transmitted data can still be retained.

What ClusterHQ wants to do is make these datasets as portable as the application components housed in containers, so that when the two are deployed in tandem, an application would be faster and more responsive to the user. Flocker’s technology, powered by the Sun Microsystems-developed Zettabyte File System (ZFS), can supposedly replicate changes across containerized databases and create backups in case something breaks.
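The snapshot-and-restore idea behind that kind of replication can be illustrated with a point-in-time copy of an in-memory dataset; real ZFS snapshots of course operate at the filesystem level, not on Python objects, so this is purely a sketch of the concept:

```python
# Illustrative-only sketch of the snapshot idea: take a cheap
# point-in-time copy of a dataset, keep mutating the live copy,
# and roll back from the snapshot when something breaks.
import copy

live = {"users": ["alice"], "orders": [101]}
snapshot = copy.deepcopy(live)        # point-in-time backup

live["users"].append("bob")           # live dataset keeps changing
live["orders"].append(102)

live = copy.deepcopy(snapshot)        # "something broke": restore
print(live)                           # back to the snapshotted state
```
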

“We are all trying to get to a highly scalable world,” said ClusterHQ CEO Mark Davis. “We want to get to the point where we don’t care about individual services and application services.”

The Bristol, England-based company currently counts 17 employees, which Davis said he wants to double “as fast as we can” with the investment round. Davis, a Silicon Valley veteran, plans to eventually set up a ClusterHQ office in the Bay Area in order for the company to be closer to the enterprise infrastructure landscape where it can be in contact with companies like Docker and CoreOS as well as the proponents of technology like [company]Google[/company]’s Kubernetes and Apache Mesos.

Accel Partners London drove the funding round along with Canaan Partners and previous investors. Kevin Comolli of Accel Partners will take a seat on ClusterHQ’s board.

Why CoreOS just fired a Rocket at Docker

CoreOS’s announcement that it has built a container engine that can potentially compete with Docker’s container technology caused quite a commotion within the tech community on Monday. Docker has made a name for itself over the past year, with its container technology catching on with some of the biggest names in the industry — Google, Amazon and Microsoft, to name a few — so seeing CoreOS detail its own container plans in light of Docker’s momentum is interesting, to say the least.

CoreOS unveils Rocket, a possible competitor to Docker

CoreOS, the Linux operating system specialist that’s been busy this past year making sure its technology powers Docker containers, detailed on Monday a new container technology called Rocket that’s essentially a competitor to Docker.