Docker buys SocketPlane as it builds out its container-networking strategy

Docker has added another acquisition to its plate: the startup is set to announce on Wednesday that it has bought a small networking startup called SocketPlane. The acquisition, whose financial terms were not disclosed, is just one more step in Docker’s plan to become the de facto container-management company whose technology can play well on multiple infrastructures.

SocketPlane’s entire six-person staff is joining Docker and will help the container-centric startup develop a networking API that makes it possible to string together hundreds to thousands of containers, even if those containers “reside in different data centers,” explained Scott Johnston, Docker’s SVP of product.

The upcoming networking API can be thought of as an extension of Docker’s recently announced orchestration services. Although the new orchestration services make it possible to spin up and manage multiple clusters of containers, the networking technology currently built into the Docker platform only works well for “a handful of hosts and containers,” as opposed to the thousands needed in complex environments like Spotify’s infrastructure, explained Johnston.

For Docker’s orchestration services to really come to fruition and deliver on the promise of spinning up and managing tons of containers, Docker needs an underlying networking API that enables all of those containers to speak to each other at massive scale. Docker decided it needed this technology expertise in-house, so it turned to SocketPlane, whose staff includes networking veterans from Cisco, OpenStack and OpenDaylight, said Johnston.

The goal is to create a networking API that can work with the gear and applications from providers like VMware/Nicira as well as Cisco and Juniper, explained Johnston. Theoretically, the API will make it possible for a user to build a distributed, containerized application in one data center that uses Cisco networking gear and have that application move over to an OpenStack-based cloud environment that uses Juniper gear “without breaking everything,” said Johnston.

If VMware has its own networking technology that users like, they should be able to “swap out” Docker’s networking technology and use VMware’s instead, said Johnston. Google’s Kubernetes container orchestration system already swaps out the existing Docker networking technology for its own, said Johnston, but once the SocketPlane team builds a workable open networking API, you can imagine a world in which users alternate between Kubernetes and Docker’s Swarm orchestration service if they choose to.
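
To make the “swap out” idea concrete, here is a minimal, purely hypothetical Python sketch of what a pluggable networking interface could look like. The class and method names are invented for illustration; they are not the API that Docker and SocketPlane actually shipped.

```python
from abc import ABC, abstractmethod


class NetworkDriver(ABC):
    """Hypothetical pluggable backend: Docker's built-in networking, an SDN
    product, or an orchestrator's own implementation could each provide one."""

    @abstractmethod
    def create_network(self, name: str) -> str:
        """Create an overlay network and return its ID."""

    @abstractmethod
    def connect(self, network_id: str, container_id: str) -> None:
        """Attach a container to the network."""


class BuiltInDriver(NetworkDriver):
    def create_network(self, name: str) -> str:
        print(f"[built-in] creating network {name}")
        return f"builtin-{name}"

    def connect(self, network_id: str, container_id: str) -> None:
        print(f"[built-in] attaching {container_id} to {network_id}")


def deploy(driver: NetworkDriver, containers: list[str]) -> None:
    # Application code only talks to the abstract API, so the backend
    # could be swapped without "breaking everything."
    net = driver.create_network("app-overlay")
    for c in containers:
        driver.connect(net, c)


deploy(BuiltInDriver(), ["web-1", "web-2", "worker-1"])
```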

“Say this works and we have ten or twelve implementations of the API,” said Johnston. “A network operator might want to take advantage of that.”

This is Docker’s third publicly known acquisition. The startup bought the devops startup Koality last October, which was preceded by the July acquisition of London-based Orchard Laboratories.

Story clarified to emphasize that the APIs will not be swapped out.

Microsoft joins Docker in announcing new container services

Docker’s suite of orchestration services, which the container-management startup first detailed in December, is now available in beta for the public to download, Docker said on Thursday.

These new orchestration services are another step in Docker’s push to tout its cloud-agnostic platform, geared toward enterprise customers worried about vendor lock-in. Judging by Microsoft’s participation in the announcement, it looks like Microsoft is trying to make itself appealing to those same customers as well.

While this announcement is not too surprising given that the company has made it clear it’s eyeing orchestration services as a way to further develop the Docker platform, what’s interesting is how excited Microsoft seems to be. This follows through on Microsoft’s pledge to make sure Docker is fully integrated with its Azure cloud and Windows Server.

Container orchestration refers to the ability to spin up, coordinate, schedule and distribute multiple containers for the purpose of running an organization’s infrastructure; this is an operations task, rather than a development task. One can bundle the services that help make an application run inside these containers, and with an overarching system that can create and distribute containers when needed, IT staff members don’t have to slave away over the minutiae of keeping that application running.

Simply put, containers are great at isolating applications and services from each other while they all share resources from the same Linux OS kernel. When combined with an orchestration service that can oversee the creation of containers and spin them up as needed, however, their potential for cutting down overhead becomes that much greater. Just look at Spotify, which developed its own Helios container orchestration framework that’s contributed to a much more efficient infrastructure for the streaming-music provider.
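
As a rough illustration of what spinning up and coordinating containers looks like in code, here is a minimal sketch that assumes the Docker SDK for Python (the docker package) and a locally running Docker daemon. The image name and replica count are arbitrary examples; a real orchestrator adds scheduling, health checks and placement across many hosts.

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# An orchestrator's core job, reduced to its simplest form: start N copies
# of a service and keep track of them.
replicas = [
    client.containers.run(
        "nginx:alpine",        # arbitrary example image
        name=f"web-{i}",
        detach=True,           # return immediately with a Container handle
    )
    for i in range(3)
]

# Inspect what is running; a real orchestration layer would also reschedule
# containers that die and spread them across a fleet of hosts.
for c in client.containers.list():
    print(c.name, c.status)

# Tear the replicas down again.
for c in replicas:
    c.stop()
    c.remove()
```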

So far, the big public cloud providers, namely Amazon, Google and Microsoft, have all indicated in one way or another that they view containers as the way of the future for IT operations. Google’s been busy promoting its open-source Kubernetes orchestration framework and in November announced its managed version of Kubernetes, called Google Container Engine, which as of now only functions on the Google Cloud. Amazon, on the other hand, announced its own container orchestration service, called EC2 Container Service, which, unsurprisingly, works only on the Amazon cloud.

Microsoft hasn’t yet announced a similar container orchestration service, and today’s news seems to highlight the fact that it’s content with letting Docker handle all that orchestration. Docker’s new orchestration services, called Machine, Swarm and Compose, will supposedly make it possible for organizations to run and coordinate their containers across multiple clouds, whether they be Amazon, Google, VMware, Digital Ocean and so on.

With Microsoft jumping on board with Docker in announcing the release of the new features, this seems like one more way the Redmond, Washington giant is trying to gain trust from developers who want a service that plays nice with multiple clouds.

Microsoft’s big open-source push the past year was designed to court developers who are hesitant to trust Microsoft due to its once closed nature under its previous regime. By joining forces with developer-favorite Docker, Microsoft is once again trying to make itself more attractive to the development community.

In a second blog post detailing the announcement, Microsoft made sure to list a number of ways it’s integrating the new Docker orchestration services into Azure. Ross Gardler, Microsoft’s senior technology evangelist for its open technologies, wrote, “Today we announced a number of improvements to our Docker support on Azure, most notably Docker Machine support for Azure and Hyper-V and support for Docker Swarm.”

With orchestration services seeming to be the next step for containers to enter the world of production, it’s interesting that Microsoft hasn’t yet come up with its own version of the technology. But, maybe it doesn’t have to as long as it’s working with Docker.

Of course, by not making its own orchestration service and instead relying on the cloud-agnostic Docker, Microsoft is putting its own Azure cloud at risk, since organizations will be able to use other clouds as well.

Still, the risk might be worth it to Microsoft in its attempt to further lure developers (it already has a strong foothold with legacy companies). It’s got years to make up for as it tries to distance itself from the Microsoft of the past.

How Spotify is ahead of the pack in using containers

In late December, CoreOS CEO and container guru Alex Polvi proclaimed in a tweet that he believes 2015 will be the year of the production-ready container, which would be a testament to how fast companies are adopting the technology that promises more portability and less overhead than virtual machines.

For music streaming service Spotify, however, containers are already a way of life. The streaming-music provider has been using containers in production on a large scale, according to Mats Linander, Spotify’s infrastructure team lead.

This is a big deal given that it seems only a few companies beyond cloud providers like Google or Joyent have gone public with how they are using container technology in production. Indeed, when Ben Golub, CEO of the container-management startup Docker, came on the Structure Show podcast in December and described how financial institutions are experimenting with containers, he said that they are generally doing pilots and are using Docker containers “for the less sensitive areas of their operations.”

Ever since Docker rose to prominence, developers have been singing the praises of containers, which have made it easier to craft multicomponent applications that can spread out across clouds. Container technology is basically a form of virtualization that isolates applications and services from each other within virtual shells all while letting them tap into the same Linux OS kernel for their resources.

For many companies as well as government agencies, it’s not just the benefits to the software development process that have them interested in containers — it’s how containers can assist their operations. If containers truly are less bulky than virtual machines (Golub told me over the summer that using containers in production can lead to 20-to-80 percent lighter workloads than only using VMs), then it’s clear organizations stand to benefit from using the tech.

But you can’t simply embed containers into the architecture of your application and expect a smooth ride, especially if that application is a hit with the public and can’t afford to go down. It takes a bit of engineering work to see the benefits of containers in operations and there have been people saying that Docker has caused them more headaches than happiness.

Spotify, which has 60 million users, runs containers across its four data centers and over 5,000 production servers. While it runs containers in its live environment, Spotify had to do a little legwork to actually see some gains.

These containers will help beam Beyoncé to your playlist

One of the ways the streaming-music company uses containers is to more efficiently deploy the back-end services that power the music-streaming application. With the addition of a home-grown Docker container orchestration service called Helios, the team has come up with a way to control and spin up multiple clusters of containers throughout its data centers.

Out of 57 “distinct backend services in production” that are containerized, Linander considers 20 of them significant. All of these containerized services share space with “more than 100 other services” churning each day, he explained.

These containers house stateless services, which basically means that these services don’t require constant updating from databases and they can be safely restarted without causing problems.

Linander said he didn’t “want to go into deep detail” on what all those services are doing, but he did explain that “view-aggregation services” are a good fit for containerization. These services are responsible for pulling together the data in Spotify’s data centers that pertains to an individual’s playlist, including the name of an artist, album images and track listings.

A Spotify playlist featuring Beyoncé

Bundling these services inside containers helps Spotify because instead of relying on a client that needs to send a separate request for each service to obtain the necessary information from the databases, Spotify can deploy a cluster of containers that hold an aggregate of the services and thus avoid sending so many requests. As a result, the application is less “heavy and bulky,” he said.
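
Here is a purely illustrative sketch of that aggregation pattern in Python. Every function and field name is invented; it is not Spotify’s code, just a way to show how one aggregated call can replace several client round trips.

```python
# Sketch of a "view-aggregation" service: instead of the client making one
# request per backend, a single aggregated endpoint fans out to the backends
# and returns one combined playlist view. All names here are hypothetical.

def fetch_artist_names(track_ids):
    return {t: f"artist-of-{t}" for t in track_ids}      # stand-in backend call

def fetch_album_art(track_ids):
    return {t: f"https://img.example/{t}.jpg" for t in track_ids}

def fetch_track_titles(track_ids):
    return {t: f"title-of-{t}" for t in track_ids}

def playlist_view(track_ids):
    # One aggregated call on the server side replaces several round trips
    # from the client, which is what makes the app feel less heavy and bulky.
    artists = fetch_artist_names(track_ids)
    art = fetch_album_art(track_ids)
    titles = fetch_track_titles(track_ids)
    return [
        {"track": t, "title": titles[t], "artist": artists[t], "image": art[t]}
        for t in track_ids
    ]

print(playlist_view(["t1", "t2"]))
```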

It also helps that when Spotify restarts a container, it starts fresh from the last time it was spun up. That means that if something crashes, users won’t have to wait too long to see Beyoncé’s mug appear on their playlists along with all of her hits.

As Spotify infrastructure engineer Rohan Singh explained during a session at last year’s Dockercon, before the company was using Docker containers, Spotify’s hardware utilization was actually low because “every physical machine used one service” even though the company has a lot of machines.

Spotify slide from Dockercon explaining its older architecture before Docker containers

By running a fleet of containers on bare metal, Spotify was able to squeeze more juice out of the system because that cluster contains more than one service.

Say hello to Helios

Spotify’s Helios container orchestration framework (which the company open sourced last summer) is crucial to making sure that the deployed containers are running exactly the way Spotify wants them to run.

Right around the time Spotify first started experimenting with lightweight containerization, Docker was starting to raise eyebrows, Linander said. The Spotify team then met with Docker (Spotify is also a member of the Docker Governance Advisory Board) to discuss the technology, which looked promising but at the time lacked orchestration capabilities that would let containers be linked together and deployed in groups. It should be noted that as of early December, Docker has made orchestration services available in its product.

Because container orchestration services weren’t really available at the time Spotify was investigating the use of Docker, Linander said he decided “we could build something in house that could target our use case.”

For Linander, a lot of the benefits of containers come to fruition when you add an orchestration layer because that means teams can now “automate stuff at scale.”

“When you have several thousands of servers and hundreds of microservices, things become tricky,” Linander said, and so the Helios framework was created to help coordinate all those containers that carry with them the many microservices that make Spotify come alive to the user.

The framework consists of the Helios master — basically the front-end interface that resides on the server — and the Helios agents, which are pieces of software related to the Helios master that are attached to the Docker images.

Slide of Helios from a Spotify talk during Dockercon

Working in conjunction with the open-source Apache ZooKeeper distributed configuration service, Spotify engineers can set a policy in the Helios master around how they want the containers to be created, and “Zookeeper distributes the state to the helios agent” to make sure the containers are spun up correctly, said Linander.
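
Helios’ actual data model is more involved, but the master/agent pattern Linander describes can be sketched with the kazoo ZooKeeper client for Python: the master writes the desired deployment state to a ZooKeeper path, and an agent watching that path converges the host toward it. The paths, payloads and image names below are invented for illustration.

```python
import json
from kazoo.client import KazooClient

# Assumes a ZooKeeper instance is reachable at this address.
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

JOB_PATH = "/example/agents/host-1/jobs"   # hypothetical path layout
zk.ensure_path(JOB_PATH)

# "Master" side: declare what should be running on host-1.
desired = {"image": "registry.example/playlist-view:1.2", "replicas": 2}
zk.set(JOB_PATH, json.dumps(desired).encode())

# "Agent" side: react whenever the desired state changes.
@zk.DataWatch(JOB_PATH)
def on_state_change(data, stat):
    if data:
        job = json.loads(data.decode())
        # In a real agent this is where containers would be started or
        # stopped until the host matches the declared state.
        print("converging to:", job)
```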

During Dockercon, Singh explained that Helios is great at recognizing when a “container is dead.” If a person accidentally shuts down an important container, Helios can be configured to recognize these mission-critical containers and instantly load one back up.

“We just always have this guarantee that this service will be running,” Singh said last summer.
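
That guarantee boils down to a supervision loop: notice that a mission-critical container is no longer running and bring it back. The following is a rough sketch of the idea using the Docker SDK for Python; the container name is an arbitrary example, and Helios’ own implementation is of course more sophisticated.

```python
import time
import docker
from docker.errors import NotFound

client = docker.from_env()
CRITICAL = {"playlist-view"}   # names of containers that must stay up (example)

def supervise(poll_seconds: int = 5) -> None:
    """Restart any mission-critical container that is no longer running."""
    while True:
        running = {c.name for c in client.containers.list()}
        for name in CRITICAL - running:
            try:
                client.containers.get(name).restart()
                print(f"restarted {name}")
            except NotFound:
                print(f"{name} is missing entirely; a real system would redeploy it")
        time.sleep(poll_seconds)
```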

New orchestration options and new container technology

Of course, Helios is no longer the only orchestration system available, as there are now several of these frameworks on the block, including Google’s Kubernetes, Amazon’s EC2 Container Service, the startup Giant Swarm’s microservice framework and Docker’s own similar services.

Now that there’s a host of other options, Spotify will be evaluating possible alternatives, but don’t be surprised if the company sticks with Helios. Linander said the main reason Spotify is currently using Helios is because “it is battle proven,” and while other companies may be running containers in production through other orchestration services, no one really knows at what scale they may be operating.

But what about other new container technology that may give Docker a run for its money, like CoreOS and its Rocket container technology? Linander said he doesn’t have a “strong opinion” on the subject and even if Spotify sees “a bunch of potential” with new container tech, the company isn’t going to drop everything it’s doing and implement the latest container toy.

As for ClusterHQ and its Flocker container-database technology, which the startup claims will let users containerize datasets all inside the Docker Hub, Linander said, “It looks cool to me, personally,” but it’s still too early to tell if the startup’s technology lives up to what it says it can deliver. Besides, he’s finding that Cassandra clusters are getting the job done just fine when it comes to storing Spotify’s data.

“We are always considering options,” said Linander. “[We are] building the best music service that ever was and will ever be.”

Mats Linander, infrastructure team lead at Spotify

It’s clear from speaking with Linander that having a well-oiled orchestration service helps take a load off of engineers’ plates when it comes to tending to those container clusters. It seems like a lot of the ease, automation and stability of spinning up clusters of containers comes from the orchestration service that coordinates the endeavor.

However, not every company possesses the engineering skills needed to create something akin to Helios, and while the service is open source, it’s still a custom system designed for Spotify, so users will have to do some tweaking to get it working for themselves.

For 2015 to truly be the year of the production-ready container, organizations are going to have to be up to speed with some sort of orchestration service, and that service will need to scale well and run reliably over the long haul.

At this point, it’s just a question of whose orchestration technology will gain the most traction in the marketplace since most organizations will more than likely be trying out new tech rather than creating new tech, unless they are as ambitious as Spotify and other webscale companies. With the plethora of new options now available — from Kubernetes to Docker to CoreOS’s Fleet — the public’s now got a lot of choices.

On Docker, CoreOS, open source and virtualization

In early December, container-specialist Docker was gearing up for its Amsterdam conference and the debut of its new orchestration services and Docker Enterprise product line.

But before Docker co-founder and CTO Solomon Hykes got a chance to board the plane, he got word that operating-system provider CoreOS had announced its own Rocket container technology, which caught the Docker team off guard, according to Docker CEO Ben Golub on this week’s Structure Show.

“I’ll be the first to say, I think we probably struggled to understand it,” said Golub.

Let’s clear some misunderstandings

Golub addressed what CoreOS co-founder and CEO Alex Polvi told Gigaom on another recent Structure Show, saying there have been “concerns raised about Docker, some of which we think are legitimate, some of which we think are misunderstandings,” especially when it comes to the notion that Docker is “bloated” (Polvi’s word choice) and is offering a container technology that comes packaged with features users may not want.

“I don’t believe that if people take a look at what we have that we are forcing people to use our orchestration or that we are being monolithic in terms of the lower-level container format that we support.”

Regarding the new orchestration APIs the Docker team rolled out, Golub said that users don’t have to use those features and that “you can swap out the batteries” if you don’t want them or want to use another similar service. The standard Docker container still exists, he said.

“We were a little confused by that messaging because if you just want to use Docker, the container format, you can.”

As for what Docker thinks of CoreOS’s new Rocket container technology, Golub said it’s too soon to tell. “It remains to be seen what the guys at CoreOS and the people using Rocket want it to be,” but if the world wants different container formats, so be it.

What exactly is Docker?

Docker, in Golub’s words, “is a platform for building, shipping and running distributed applications, which basically means that we give people the ability to create applications where either the entire application or portions of the application are packaged up in a lightweight format we call a container.”

While Docker is a platform, Golub was quick to point out that Docker is not a platform-as-a-service, as it was back when the company was known as dotCloud; it’s not providing servers, for example.

As for what type of business Docker, Inc. (not the Docker open-source project) envisions itself to be, the best bet would be something similar to VMware.

“The closest analogy I guess I can give you is, for people who think of Docker and containers as a new form of virtualization, so [with] open source we gave away ESX and what we are selling is something akin to vCenter or vSphere.”

Ben Golub, CEO of Docker

Docker has grown fast in the past year, and Golub said that major organizations, like financial institutions, pharmaceutical companies and governments, are considering eventually using Docker in production.

“In the banks, generally speaking, they are doing pilots or they’re using us for the less sensitive areas of their operations,” Golub said. “But the plans are to move them over to operations. It took several years to move virtualization into their more core operations.”

And while building a viable business around open source is currently a somewhat disputed notion, Docker maintains it’s on the right trajectory, and Golub points to MongoDB, Hortonworks and Cloudera as examples of companies “building viable businesses around open source.”

“In the case of Docker, we’ve been very clear to say that our monetization model is selling commercial software around management and monitoring,” he said.