Report: Docker and the Linux container ecosystem

Our library of 1,700 research reports is normally available only to our subscribers, but we occasionally release reports for our larger audience to benefit from. This is one such report.
Docker and the Linux container ecosystem by Janakiram MSV:
Linux container technology is experiencing tremendous momentum in 2014. The ability to create multiple lightweight, self-contained execution environments on the same Linux host simplifies application deployment and management. By improving collaboration between developers and system administrators, container technology encourages a DevOps culture of continuous deployment and hyperscale, which is essential to meet current user demands for mobility, application availability, and performance.
Many developers use the terms “container” and “Docker” interchangeably, sometimes making it difficult to distinguish between the two, but there is an important distinction. Docker, Inc. is a key contributor to the container ecosystem through its development of orchestration tools and APIs. While container technology has existed for decades, the company’s open-source platform, Docker, makes that technology more accessible by creating simpler and more powerful tools. Using Docker, developers and system administrators can efficiently manage the lifecycle of tens of thousands of containers.
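As a rough sketch of what that lifecycle management looks like in practice with the Docker CLI (this assumes a host with the Docker daemon installed; the image and container names are just examples):

```shell
docker pull nginx:latest           # fetch an image from a registry
docker run -d --name web nginx     # create and start a container from that image
docker ps                          # list running containers
docker logs web                    # inspect the container's output
docker stop web && docker rm web   # stop and remove the container
```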
This report provides a detailed overview of the Linux container ecosystem. It explains the various components of container technology and analyzes the ecosystem contributions from companies to accelerate the adoption of Linux-based containers.

Red Hat’s new operating system will power up your containers

Open-source software giant Red Hat said on Thursday that its new operating system, custom-made to power Linux containers, is now available to download. Red Hat has been a big proponent of Docker and its container-packing technology since last summer, touting its support of the startup and making sure its Enterprise Linux 7 product was compatible with Docker’s technology.

Container technology has generated a lot of buzz over the past year by promising a type of virtualization that’s lighter weight than your typical virtual machine. In order for a container to actually run, it needs to be connected to a host Linux OS that can distribute the necessary system resources to that container.

While you could use a typical Linux-based OS to power up your containers, as CoreOS CEO Alex Polvi (whose own startup offers a competing container-focused OS) told me last summer, these kinds of operating systems merely get the job done and don’t take full advantage of what containers have to offer.

Red Hat’s new OS supposedly comes packed with features designed to make running containerized applications less of a chore to manage. These features include an easier way to update the operating system (OS updates can often be a pain for IT admins) and an integration with Google’s Kubernetes container-orchestration service for spinning up and managing multiple containers.

The new OS is also promising better security for those Docker containers — which has been an issue that Docker’s team has been addressing in various updates — with a supposed stronger way of isolating containers from each other when they are dispersed in a distributed environment.

Of course, Red Hat has some competition when it comes to becoming the preferred OS for container-based applications. CoreOS has its own container-centric OS, and Ubuntu has its Snappy Ubuntu Core system for powering Docker containers. Additionally, a couple of veterans who departed Citrix in September have started their own startup, Rancher Labs, which just released RancherOS, described as a “minimalist Linux distribution that was perfect for running Docker containers.”

It will be worth keeping an eye on which OS gains traction in the container marketplace and whether some of these new operating systems will start to offer support for CoreOS’s new Rocket container technology as opposed to just the Docker platform.

A Red Hat spokesperson wrote to me in an email that “Red Hat Enterprise Linux-based containers are not supported on CoreOS and rocket is not supported with Atomic Host. We are, as always, continuing to evaluate new additions in the world of containers, including Rocket, with respect to our customer needs.”

Docker buys SocketPlane as it builds out its container-networking strategy

You can add another acquisition to Docker’s plate with the startup set to announce on Wednesday that it has bought a small networking startup called SocketPlane. The acquisition, whose financial terms were not disclosed, is just one more step in Docker’s plans to become the de-facto container-management company whose technology can play well on multiple infrastructures.

SocketPlane’s entire six-person staff is joining Docker and will be helping the container-centric startup develop a networking API that makes it possible to string together hundreds to thousands of containers, even if the containers “reside in different data centers,” explained Scott Johnston, Docker’s SVP of product.

The upcoming networking API can be thought of as an extension to Docker’s recently announced orchestration services. Although the new orchestration services make it possible to spin up and manage multiple clusters of containers, the current Docker networking technology built into the platform only works well for “a handful of hosts and containers,” as opposed to the thousands needed in complex environments like Spotify’s infrastructure, explained Johnston.

In order for Docker’s orchestration services to really come to fruition and deliver on the promise of spinning up and managing tons of containers, Docker requires an underlying networking API that enables all of those containers to speak to each other at a massive scale. Docker decided that it needed this technology expertise in-house, so it turned to SocketPlane, whose staff includes networking veterans from Cisco, OpenStack and OpenDaylight, said Johnston.

The goal is to create a networking API that can work with the gear and applications from providers like VMware/Nicira as well as Cisco and Juniper, explained Johnston. Theoretically, the API will make it possible for a user to build a distributed, containerized application in one data center that uses Cisco networking gear and have that application move over to an OpenStack-based cloud environment that uses Juniper gear “without breaking everything,” said Johnston.

If VMware has its own networking technology that users like, users should be able to “swap out” the Docker networking technology and use the other technology, said Johnston. Google’s own Kubernetes container orchestration system currently swaps out the existing Docker networking technology for its own networking technology, said Johnston, but once the SocketPlane team builds a workable open networking API, you can imagine a world in which users can alternate between Kubernetes or Docker’s Swarm orchestration service if they choose to.

“Say this works and we have ten or twelve implementations of the API,” said Johnston. “A network operator might want to take advantage of that.”

This is Docker’s third publicly known acquisition. The startup bought the devops startup Koality last October, which was preceded by the July acquisition of London-based Orchard Laboratories.

Story clarified to emphasize that the APIs will not be swapped out.

How Spotify is ahead of the pack in using containers

In late December, CoreOS CEO and container guru Alex Polvi proclaimed in a tweet that he believes 2015 will be the year of the production-ready container, which would be a testament to how fast companies are adopting the technology that promises more portability and less overhead than virtual machines.

For music streaming service Spotify, however, containers are already a way of life. The streaming-music provider has been using containers in production on a large scale, according to Mats Linander, Spotify’s infrastructure team lead.

This is a big deal given that it seems only a few companies beyond cloud providers like Google or Joyent have gone public with how they are using container technology in production. Indeed, when Ben Golub, CEO of the container-management startup Docker, came on the Structure Show podcast in December and described how financial institutions are experimenting with containers, he said that they are generally doing pilots and are using Docker containers “for the less sensitive areas of their operations.”

Ever since Docker rose to prominence, developers have been singing the praises of containers, which have made it easier to craft multicomponent applications that can spread out across clouds. Container technology is basically a form of virtualization that isolates applications and services from each other within virtual shells all while letting them tap into the same Linux OS kernel for their resources.

For many companies as well as government agencies, it’s not just the benefits to the software development process that have them interested in containers — it’s how containers can assist their operations. If containers truly are less bulky than virtual machines (Golub told me over the summer that using containers in production can lead to 20-to-80 percent lighter workloads than only using VMs), then organizations clearly stand to benefit from using the tech.

But you can’t simply embed containers into the architecture of your application and expect a smooth ride, especially if that application is a hit with the public and can’t afford to go down. It takes a bit of engineering work to see the benefits of containers in operations, and some people have said that Docker has caused them more headaches than happiness.

Spotify, which has 60 million users, runs containers across its four data centers and over 5,000 production servers. While it runs containers in its live environment, Spotify had to do a little legwork to actually see some gains.

These containers will help beam Beyonce to your playlist

One of the ways the streaming-music company uses containers is to more efficiently deploy the back-end services that power the music-streaming application. With the addition of a home-grown Docker container orchestration service called Helios, the team has come up with a way to control and spin up multiple clusters of containers throughout its data centers.

Out of 57 “distinct backend services in production” that are containerized, Linander considers 20 of them significant. All of these containerized services share space with “more than 100 other services” churning each day, he explained.

These containers house stateless services, which basically means that these services don’t require constant updating from databases and they can be safely restarted without causing problems.

Linander said he didn’t “want to go into deep detail” on what all those services are doing, but he did explain that “view-aggregation services” are a good fit for containerization. These kinds of services are responsible for spooling the data from Spotify’s data centers that contain information pertaining to an individual’s playlist — including the name of an artist, album images, and track listings.


Bundling these services inside containers helps Spotify because, instead of relying on a client that needs to send a separate request for each service to obtain the necessary information from the databases, Spotify can deploy a cluster of containers that contains an aggregate of the services and thus avoid sending so many requests. As a result, the application is less “heavy and bulky,” he said.
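As a purely illustrative sketch of that aggregation pattern (this is not Spotify’s code; the service names and data are invented), one endpoint fans out to the per-service lookups on the server side, so the client makes a single request instead of three:

```python
# Illustrative view-aggregation sketch: several backend lookups are
# bundled behind one endpoint, so the client sends one request
# instead of one request per service.

def artist_name(artist_id):
    # stand-in for a call to an artist-metadata service
    return {"a1": "Beyonce"}.get(artist_id)

def album_image(artist_id):
    # stand-in for a call to an album-art service
    return f"https://img.example/{artist_id}.jpg"

def track_listing(artist_id):
    # stand-in for a call to a track-listing service
    return ["Halo", "XO"]

def playlist_view(artist_id):
    # One aggregated response replaces three client round trips.
    return {
        "artist": artist_name(artist_id),
        "image": album_image(artist_id),
        "tracks": track_listing(artist_id),
    }

print(playlist_view("a1"))
```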

It also helps that if Spotify restarts a container it will start fresh from the last time it was spun up. That means that if something crashes, users won’t have to wait too long to see Beyonce’s mug appear on their playlists along with all of her hits.

As Spotify infrastructure engineer Rohan Singh explained during a session at last year’s Dockercon, before the company was using Docker containers, Spotify’s hardware utilization was actually low because “every physical machine used one service” even though the company has a lot of machines.

Spotify slide from Dockercon explaining its older architecture before Docker containers

By running a fleet of containers on bare metal, Spotify was able to squeeze more juice out of the system because that cluster contains more than one service.

Say hello to Helios

Spotify’s Helios container orchestration framework (which the company open sourced last summer) is crucial to making sure that the deployed containers are running exactly the way Spotify wants them to run.

Right around the time Spotify first started experimenting with lightweight containerization, Docker was starting to raise eyebrows, Linander said. The Spotify team then met with Docker (Spotify is also a member of the Docker Governance Advisory Board) to discuss the technology, which looked promising but at the time lacked orchestration capabilities through which containers could be linked together and deployed in groups. It should be noted that as of early December, Docker has made orchestration services available in its product.

Because container orchestration services weren’t really out there during the time Spotify was investigating the use of Docker, Linander said he decided “we could build something in house that could target our use case.”

For Linander, a lot of the benefits of containers come to fruition when you add an orchestration layer because that means teams can now “automate stuff at scale.”

“When you have several thousands of servers and hundreds of microservices, things become tricky,” Linander said, and so the Helios framework was created to help coordinate all those containers that carry with them the many microservices that make Spotify come alive to the user.

The framework consists of the Helios master — basically the front-end interface that resides on the server — and the Helios agents, which are pieces of software related to the Helios master that are attached to the Docker images.

Slide of Helios from a Spotify talk during Dockercon

Working in conjunction with the open-source Apache Zookeeper distributed configuration service, Spotify engineers can set a policy around how they want the containers to be created in the Helios master and “Zookeeper distributes the state to the helios agent” to make sure the containers are spun up correctly, said Linander.
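The pattern Linander describes (the master records desired state, ZooKeeper distributes it, and agents converge on it) can be sketched in a few lines of Python. This is purely illustrative, not the real Helios API; the host and job names are made up:

```python
# Illustrative desired-state reconciliation, in the spirit of Helios
# (not its actual code): the master publishes which jobs each host
# should run, and an agent on each host converges toward that state.

desired = {"host-1": {"search:v7", "playlist:v3"}}  # written by the master (via ZooKeeper)
running = {"host-1": {"search:v6"}}                 # what the agent observes locally

def reconcile(host):
    want = desired.get(host, set())
    have = running.get(host, set())
    to_start, to_stop = want - have, have - want
    # A real agent would start/stop Docker containers here.
    running[host] = want.copy()
    return to_start, to_stop

started, stopped = reconcile("host-1")
print(sorted(started), sorted(stopped))
```

The same loop also covers the crash-recovery behavior Singh describes: if a mission-critical container dies, the next reconciliation sees it missing from the running set and starts it again.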

During Dockercon, Singh explained that Helios is great at recognizing when a “container is dead” and if a person accidentally shuts down an important container, Helios can be configured to recognize these mission-critical containers and instantly load one back up.

“We just always have this guarantee that this service will be running,” Singh said last summer.

New orchestration options and new container technology

Of course, Helios is no longer the only orchestration system available as there are now several of these frameworks on the block, including Google’s Kubernetes, Amazon’s EC2 container service, the startup Giant Swarm’s microservice framework and Docker’s own similar services.

Now that there’s a host of other options, Spotify will be evaluating possible alternatives, but don’t be surprised if the company sticks with Helios. Linander said the main reason Spotify currently uses Helios is that “it is battle proven,” and while other companies may be running containers in production through other orchestration services, no one really knows at what scale they may be operating.

But what about other new container technology that may give Docker a run for its money, like CoreOS and its Rocket container technology? Linander said he doesn’t have a “strong opinion” on the subject and even if Spotify sees “a bunch of potential” with new container tech, the company isn’t going to drop everything it’s doing and implement the latest container toy.

As for ClusterHQ and its Flocker container-database technology, which the startup claims will let users containerize datasets inside the Docker Hub, Linander said “It looks cool to me, personally,” but it’s still too early to tell if the startup’s technology lives up to what it says it can deliver. Besides, he’s finding that Cassandra clusters are getting the job done just fine when it comes to storing Spotify’s data.

“We are always considering options,” said Linander. “[We are] building the best music service that ever was and will ever be.”

Mats Linander, infrastructure team lead at Spotify

It’s clear from speaking with Linander that having a well-oiled orchestration service helps take a load off of engineers’ plates when it comes to tending to those container clusters. It seems like a lot of the ease, automation and stability of spinning up clusters of containers comes from the orchestration service that coordinates the endeavor.

However, not every company possesses the engineering skills needed to create something akin to Helios, and while the service is open source, it’s still a custom system designed for Spotify so users will have to do some tweaking to get it functional for themselves.

For 2015 to truly be the year of the production-ready container, organizations are going to have to be up-to-speed with using some sort of orchestration service and that service is going to have to scale well and last a long time without something causing it to go awry.

At this point, it’s just a question of whose orchestration technology will gain the most traction in the marketplace since most organizations will more than likely be trying out new tech rather than creating new tech, unless they are as ambitious as Spotify and other webscale companies. With the plethora of new options now available — from Kubernetes to Docker to CoreOS’s Fleet — the public’s now got a lot of choices.

VMware continues pushing the hybrid cloud

Virtualization giant VMware is continuing on its hybrid cloud strategy, making several announcements on Monday that are geared toward customers who want access to both public and private cloud infrastructures. That doesn’t mean just any cloud, of course, but VMware-based public and private clouds.

The company is making use of its NSX networking technology, which came together out of VMware’s $1.26 billion purchase of startup Nicira in 2012, to act as a networking bridge between private clouds built with the VMware vSphere hypervisor and public clouds built on vCloud Air.

With their public and private clouds linked up by VMware’s vCloud Air Advanced Networking Services, VMware customers can supposedly create and manage hundreds of virtual networks that carry over their on-premise security policies and networking isolation for applications to the VMware public cloud across a single WAN connection.

“[One can] view the public cloud as an extension of the on-premise data center,” said VMware CEO Pat Gelsinger during a press event in San Francisco on Monday. “We think this disrupts the entire cloud market.”

VMware hybrid cloud

If you were hoping that VMware would shed some details on how vCloud Air is doing in the marketplace, you’ll be disappointed that VMware remained silent on those figures. VMware’s public cloud is quite a few years behind public-cloud leader Amazon Web Services, but it appears that VMware is trying to grow its cloud by courting its clients who already have a VMware-tailored private cloud and don’t want to bother with migrating existing infrastructure to another company’s platform.

VMware’s flagship product, vSphere, also got an update on Monday in the form of new services. vSphere users now have the option of using Long-Distance vMotion, a live-migration service that lets users run their workloads and migrate them to different hosts across long distances.

Also included is what’s known as Instant Clone Technology (formerly known as Project Fargo), which will apparently let users rapidly spin up both containers and virtual machines in “sub-second timeframes,” according to a company announcement. No doubt that VMware is hoping this service negates the notion that other companies’ container technologies (like Docker’s, for example) are faster and more efficient than what VMware has to offer.

VMware also reiterated its support for OpenStack by further explaining VMware Integrated OpenStack, which lets organizations use VMware’s tools to manage existing OpenStack clouds. vSphere enterprise-plus customers will now get free access to the service, but if they want customer support to help them manage that infrastructure, they will have to pay $200 per CPU each year, according to Dan Wendlandt, VMware’s director of product management for OpenStack.

VMware OpenStack

Wendlandt, who says he has been an OpenStack developer since its inception, said that OpenStack is a “project by geeks for geeks, for lack of a better term,” and that VMware’s integrated service is for users who don’t want to spend countless hours trying to get OpenStack infrastructure to work; they just want to build applications.

Wendlandt wouldn’t comment on specifics when asked whether VMware’s hybrid-cloud push means we might see a version of OpenStack that can connect to VMware’s public cloud. VMware is currently releasing OpenStack APIs to connect with the company’s private-cloud lineup, he said, although he indicated that we could see something that links OpenStack to vCloud in the future.

All of the announcements made on Monday are expected to be available in the first quarter of 2015.

You can now store Docker container images in Google Cloud

Google Cloud users can now load their private Docker container images into the search giant’s new Google Container Registry, which Google said Friday is available in beta and, the company noted, “is not covered by any SLA or deprecation policy and may be subject to backward-incompatible changes.”

If you are a Google Cloud customer, your Docker container images — which contain all the necessary components for spinning up containers, like the source code and binary files — will be “automatically encrypted before they are written to disk,” according to the Google blog post detailing the registry.

From the blog post:
“Access control: The registry service hosts your private images in Google Cloud Storage under your Google Cloud Platform project. This ensures by default that your private images can only be accessed by members of your project, enabling them to securely push and pull images through the Google Cloud SDK command line. Container host VMs can then access secured images without additional effort.”
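That push-and-pull workflow looked roughly like this at the time (a sketch only; the project and image names are examples, and the `gcloud docker` wrapper commands of that era have since been superseded by newer tooling):

```shell
docker build -t my-service .                        # build the image locally
docker tag my-service gcr.io/my-project/my-service  # tag it for the registry
gcloud docker push gcr.io/my-project/my-service     # authenticated push via the Cloud SDK
gcloud docker pull gcr.io/my-project/my-service     # pull it from a container host VM
```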

Google said that with the container images loaded up in the Google cloud and cached in its data centers, users should be able to deploy them to Google Container Engine clusters as well as “Google Compute Engine container-optimized VM’s.”

As for pricing, Google said that while the service is in beta, users “will be charged only for the Google Cloud Storage storage and network egress consumed by your Docker images.”

This seems like part of Google’s strategy to hype up its Google Container Engine, which is the managed-service version of the open-source Kubernetes container-management system. Instead of storing your private containers in the Docker Hub or CoreOS’s Enterprise Registry, Google wants users to trust it with holding on to the valuables.

For now, the Google Container Engine only allows users to craft managed clusters within its system and “It doesn’t have the ability to span across multiple cloud providers,” said Greg DeMichillie, Google’s director of product management for its cloud platform, during the announcement of the container engine last November.

On Docker, CoreOS, open source and virtualization

In early December, container-specialist Docker was gearing up for its Amsterdam conference and the debut of its new orchestration services and Docker Enterprise product line.

But before Docker co-founder and CTO Solomon Hykes got a chance to board the plane, he got word that operating-system provider CoreOS had announced its own Rocket container technology, which caught the Docker team off guard, according to Docker CEO Ben Golub on this week’s Structure Show.

“I’ll be the first to say, I think we probably struggled to understand it,” said Golub.


Download This Episode

Subscribe in iTunes

The Structure Show RSS Feed

Let’s clear some misunderstandings

Golub addressed what CoreOS co-founder and CEO Alex Polvi told Gigaom on a recent Structure Show, saying there have been “concerns raised about Docker, some of which we think are legitimate, some of which we think are misunderstandings,” especially the notion that Docker is “bloated” (Polvi’s word choice) and is offering a container technology that comes packaged with features users may not want.

“I don’t believe that if people take a look at what we have that we are forcing people to use our orchestration or that we are being monolithic in terms of the lower-level container format that we support.”

Regarding the new orchestration APIs the Docker team rolled out, Golub said that users don’t have to use those features and that “you can swap out the batteries” if you don’t want them or want to use another similar service. The standard Docker container still exists, he said.

“We were a little confused by that messaging because if you just want to use Docker, the container format, you can.”

As for what Docker thinks of CoreOS’s new Rocket container technology, Golub said it’s too soon to tell. “It remains to be seen what the guys at CoreOS and the people using Rocket want it to be,” but if the world wants different container formats, so be it.

What exactly is Docker?

Docker, in Golub’s words, “is a platform for building, shipping and running distributed applications, which basically means that we give people the ability to create applications where either the entire application or portions of the application are packaged up in a lightweight format we call a container.”
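In practice, the “build” step of that build-ship-run workflow starts from a Dockerfile. A minimal, hypothetical example (the application file and base image here are placeholders, not anything from the article):

```dockerfile
# Base image providing the language runtime
FROM python:2.7
# Add the application code into the image
COPY app.py /app/app.py
# Command run when a container starts from this image
CMD ["python", "/app/app.py"]
```

Building (`docker build -t myapp .`), pushing to a registry (`docker push`), and starting containers (`docker run myapp`) map onto the building, shipping and running Golub describes.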

While Docker is a platform, Golub was quick to point out that Docker is not a platform-as-a-service, like when it was once known as dotCloud; for example, it’s not providing servers.

As for what type of business Docker, Inc. (not the Docker open-source project) envisions itself to be, the best bet would be something similar to VMware.

“The closest analogy I guess I can give you is, for people who think of Docker and containers as a new form of virtualization, so [with] open source we gave away ESX and what we are selling is something akin to vCenter or vSphere.”

Ben Golub, CEO of Docker

Docker has grown fast in the past year, and Golub said that major organizations, including financial institutions, pharmaceutical companies and governments, are considering eventually using Docker in production.

“In the banks, generally speaking, they are doing pilots or they’re using us for the less sensitive areas of their operations,” Golub said. “But the plans are to move them over to operations. It took several years to move virtualization into their more core operations.”

And while making a viable business in open source is currently a somewhat disputed notion, Docker maintains it’s on the right trajectory, and Golub points to MongoDB, Hortonworks and Cloudera as examples of entities “building viable businesses around open source.”

“In the case of Docker, we’ve been very clear to say that our monetization model is selling commercial software around management and monitoring,” he said.

It’s all Docker, containers and the cloud on the Structure Show


It’s safe to say that Docker has had a momentous year, with the container-management startup gaining a lot of developer interest and scoring a lot of support from big tech companies like Amazon, Google, VMware and Microsoft.

Docker CEO Ben Golub came on to the Structure Show this week to talk about Docker’s year and what he envisions the company to be as it continues to grow (hint: it’s aiming for something similar to VMware). Golub also talks about Docker’s raft of new orchestration features and shares his thoughts on the new CoreOS container technology and how that fits in with Docker.

If you listened to our recent Structure Show featuring CoreOS CEO Alex Polvi and are curious to hear Docker’s reaction and perspective on Rocket, you’ll definitely want to hear this week’s episode.

In other news, Derrick Harris and Barb Darrow kick things off by looking at how Hortonworks and New Relic shares were holding up and the good news is — they’re doing pretty well at the ripe old age of 1 week.

Also on the docket, IBM continues its cloud push by bringing a pantload of new data centers online — in Frankfurt (for the all-important German market) as well as Mexico City and Tokyo. In October, IBM said it was working with local partner Tencent to add cloud services for the Chinese market, which reminds us that Amazon Web Services’ Beijing region remains in preview mode.


Ben Golub, CEO of Docker


Hosts: Barbara Darrow, Derrick Harris and Jonathan Vanian





Mesosphere’s new data center mother brain will blow your mind

Mesosphere has been making a name for itself in the world of data centers and cloud computing since 2013 with its distributed-system smarts and various introductions of open-source technologies, each designed to tackle the challenges of running tons of workloads across multiple machines. On Monday, the startup plans to announce that its much-anticipated data center operating system — the culmination of its many technologies — has been released as a private beta and will be available to the public in early 2015.

As part of the new operating system’s launch, Mesosphere also plans to announce that it has raised a $36 million Series B investment round, which brings its total funding to $50 million. Khosla Ventures, a new investor, drove the financing along with Andreessen Horowitz, Fuel Capital, SV Angel and other unnamed entities.

Mesosphere’s new data center operating system, dubbed DCOS, tackles the complexity behind trying to read all of the machines inside a data center as one giant computer. Similar to how an operating system on a personal computer can distribute the necessary resources to all the installed applications, DCOS can supposedly do the same thing across the data center.

The idea comes from the fact that today’s powerful data-crunching applications and services — like Kafka, Spark and Cassandra — span multiple servers, unlike more old-school applications like [company]Microsoft[/company] Excel. Asking developers and operations staff to configure and maintain each individual machine to accommodate the new distributed applications is quite a lot, as Apache Mesos co-creator and new Mesosphere hire Benjamin Hindman explained in an essay earlier this week.

Mesosphere CEO Florian Leibert – Source: Mesosphere

Because of this complexity, the machines are nowhere near running at full steam, said Mesosphere’s senior vice president of marketing and business development, Matt Trifiro.

“85 percent of a data center’s capacity is typically wasted,” said Trifiro. Although developers and operations staff have come a long way to tether pieces of the underlying system together, there hasn’t yet been a nucleus of sorts that successfully links and controls everything.

“We’ve always been talking about it — this vision,” said Mesosphere CEO Florian Leibert. “Slowly but surely the pieces came together; now is the first time we are showing the total picture.”

Building an OS

The new DCOS is essentially a bundle of all of the components Mesosphere has been rolling out — including the Mesos resource management system, the Marathon framework and Chronos job scheduler — as well as third-party applications like the Hadoop file system and YARN.

The DCOS also includes common OS features one would find in Linux or Windows, like a graphical user interface, a command-line interface and a software-development kit.

These types of interfaces and extras are important for DCOS to be a true operating system, explained Leibert. While Mesos can automate the allocation of all the data center resources to many applications, the additional features give coders and operations staff a centralized hub from which they can monitor their data center as a whole and even program against it.

“We took the core [Mesos] kernel and built the consumable systems around it,” said Trifiro. “[We] added Marathon, added Chronos and added the easy install of the entire package.”

To get DCOS up and running in a data center, Mesosphere installs a small agent on each Linux-based machine, which in turn allows the machines to be read as an “uber operating system,” explained Leibert. With all of the machines’ operating systems linked up, it’s supposedly easier for distributed applications, like Google’s Kubernetes, to function and receive what they need.

The new graphical interface and command-line interface allow an organization to see a visual representation of all of its data center machines, all the installed distributed applications and how system resources like CPU and memory are being shared.

If a developer wants to install an application in the data center, he or she simply has to enter install commands in the command-line interface and DCOS should automatically load it up. A visual representation of the app should then appear, along with an indication of which machine nodes are allocating the right resources.

DCOS interface


The same process goes for installing a distributed database like Cassandra; you can now “have it running in a minute or so,” said Leibert.

Installing Cassandra on DCOS


A scheduler built into DCOS takes into account variables a developer might want to specify in order to decide which machine should deliver resources to which application. This is helpful because the developer can set up the configuration once and DCOS will automatically follow through with the orders.
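The kind of placement decision described above can be illustrated with a toy sketch (hypothetical Python, not Mesosphere's actual scheduler code): given each machine's free CPU and memory, pick one that satisfies an application's requirements and reserve those resources so later placements see the updated pool.

```python
# Toy illustration of constraint-based placement, the general idea behind
# a data-center scheduler. This is a hypothetical sketch, not DCOS code.

def place(app, machines):
    """Return the name of the first machine that can host `app`, or None."""
    for name, free in machines.items():
        if free["cpu"] >= app["cpu"] and free["mem"] >= app["mem"]:
            # Reserve the resources so the next placement sees less capacity.
            free["cpu"] -= app["cpu"]
            free["mem"] -= app["mem"]
            return name
    return None

machines = {
    "node-1": {"cpu": 4, "mem": 8},    # free cores / free GB of memory
    "node-2": {"cpu": 16, "mem": 64},
}

print(place({"cpu": 8, "mem": 32}, machines))  # node-2 (node-1 is too small)
print(place({"cpu": 2, "mem": 4}, machines))   # node-1
```

A real scheduler weighs many more variables (data locality, rack placement, failure domains), but the principle is the same: the developer declares requirements once, and the system decides where workloads land.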

“We basically turn the software developer into a data center programmer,” said Leibert.

And because DCOS is easier for a coder to program against, it’s possible that new distributed applications could be built faster than before, since the developer can now write software for a fleet of machines rather than for only one.

As of today, DCOS can run in on-premises environments like bare metal and OpenStack, on major cloud providers — like [company]Amazon[/company], [company]Google[/company] and [company]Microsoft[/company] — and it supports Linux variants like CoreOS and Red Hat.

Changing the notion of a data center

Leibert wouldn’t name which organizations are currently trying out DCOS in beta, but it’s hard to imagine that companies like Twitter, Netflix or Airbnb — all users of Mesos — haven’t considered giving it a test drive. Leibert is a former engineer at Twitter and Airbnb, after all.

Beyond the top webscale companies, Mesosphere wants to court legacy enterprises, like those in the financial-services industry, that have existing data centers nowhere near as efficient as those seen at Google.

Banks, for example, typically use “tens of thousands of machines” in their data centers to perform risk analysis, Leibert said. With DCOS, Leibert claims, banks could run the type of complex workloads they require in a more streamlined manner by linking up all of those machines.

And for these companies that are under tight regulation, Leibert said that Mesosphere has taken security into account.

“We built a security product into this operating system that is above and beyond any open-source system, even as a commercial plugin,” said Leibert.

As for what lies ahead for DCOS, Leibert said that his team is working on new features like distributed checkpointing, which is basically the ability to take a snapshot of a running application so that you can pause your work; the next time you start it up, the data center remembers where it left off and can deliver the right resources as if there wasn’t a break. This method is apparently good for developers working on activities like genome sequencing, he said.

Support for containers is also something Mesosphere will continue to tout, as the startup has been a believer in the technology “even before the hype of [company]Docker[/company],” said Leibert. Containers, with their ability to isolate workloads even on the same machine, are fundamental to DCOS, he said.

Mesosphere believes new container technology will keep emerging — not just the recently announced CoreOS Rocket container technology, explained Trifiro — but as of now, Docker and native Linux cgroup containers are what customers are calling for. If Rocket gains momentum in the marketplace, Trifiro said, Mesosphere will “absolutely implement it.”

If DCOS ultimately lives up to what it promises it can deliver, managing data centers could become a far less difficult task. With a giant pool of resources at your disposal and an easier way to write new applications to a tethered-together cluster of computers, it’s possible that next-generation applications could be developed and managed far more easily than they used to be.

Correction: This post was updated at 8:30 a.m. to correctly state Leibert’s previous employers. He worked at Airbnb, not Netflix.

After a dramatic week, Docker pushes on with its product roadmap

Docker is having one of its most interesting weeks of the year, starting Monday, when partner (and now potential rival) CoreOS revealed its new container technology, Rocket — a possible alternative to Docker. The timing of Rocket’s launch was suspect, considering Docker is holding a conference in Amsterdam this week, but the container specialist isn’t burying its head in the sand. Instead, Docker is announcing on Thursday several new features to woo developers who want to more easily craft container-based applications on the Docker platform.

Docker will detail its long-awaited open-source container-orchestration services, as well as Docker Hub Enterprise, a version of the [company]Docker[/company] Hub for paid clients. The three orchestration tools are now available in an alpha release and should enter general availability in the second quarter of 2015. Docker Hub Enterprise will be available in early access in February 2015.

The startup noted before that these new services have been in the pipeline for some time as it attempts to make its platform a sort of container-based-application-development-hub for coders to craft multicomponent applications across different cloud providers. To do this, Docker built orchestration tools, which coordinate, schedule and distribute the appropriate system resources necessary for an application to be built and run in an automated fashion.

How to orchestrate your containers

The three new orchestration services are Docker Machine, Docker Swarm and Docker Compose.

Docker Machine is essentially a simpler way for developers to get the Docker engine up and running on multiple clouds from the comfort of their own laptops without having to do any manual configuration, explained David Messina, Docker’s vice president of enterprise marketing. The service uses an API that connects to any cloud so “the infrastructure itself is instantly Docker ready,” he said.

Similar to Docker Machine, Docker Compose basically makes it easier for developers to build an application using multiple Docker containers, regardless of the infrastructure used; a configuration file lets coders craft an application using multiple containers in minutes.
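As a sketch of what such a configuration file might look like (illustrative only — the format shown follows the Fig project that Compose grew out of, not anything published with this announcement), a two-container application could be declared like this:

```yaml
# Illustrative Compose-style file (format assumed from Fig, Compose's
# predecessor): a web container built from the local directory, linked
# to a stock Redis container.
web:
  build: .
  ports:
    - "5000:5000"   # map container port 5000 to the host
  links:
    - redis         # make the redis container reachable as "redis"
redis:
  image: redis
```

A single command (`fig up`, in Fig’s case) then brings up both containers together — the build-a-multicontainer-app-in-minutes workflow Messina describes.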

Docker Swarm is a clustering service that ensures an application’s distributed containers are automatically “getting fed the right resources,” said Messina. Docker is also partnering with resource-management startup [company]Mesosphere[/company] so that Mesosphere’s technology can be baked into Swarm, he said.

Swarm will eventually have a set of clustering APIs that allow it to connect with other clustering services, so a developer could use Swarm to manage a set of containers in a test environment and then eventually transfer those containers to another clustering system like Mesos or Amazon’s EC2 Container Service.

And on to the enterprise

As for Docker Hub Enterprise, the new service is pretty much the same Docker Hub everyone knows, except tailored for enterprises that want to use it behind a company firewall for added security. Companies should also have access to both private and public Docker repositories.

It was possible to run Docker behind a firewall before, but companies needed to piece together open-source software and tools to do so; like Docker Machine and Compose, this service makes a complex task a bit simpler.

Although pricing has not been determined, the new Docker Hub Enterprise will be available through Docker partners [company]Microsoft[/company], [company]Amazon[/company] Web Services and [company]IBM[/company] on their own clouds. As part of the launch, Docker is also announcing a new partnership with IBM, adding yet another big tech partner.

IBM will let customers use Docker Enterprise on-premise or in the cloud and Microsoft will let organizations sign up on the Azure marketplace. Amazon is making Docker Enterprise available on its AWS Test Drives and AWS Quick Start Reference platforms, which are essentially the Amazon-sanctioned services for people to test out non-Amazon-related IT products on Amazon infrastructure.

It’s not clear yet if Google will eventually offer Docker Enterprise on its own cloud. Google detailed in November its own paid-container-management platform called Google Container Engine, based on its open-sourced Kubernetes system. It will be worth watching how Amazon plans to tout Docker Enterprise as well, since it recently showed off its own EC2 Container Service.

Lots of new features, but are they warranted?

From these announcements, it’s clear Docker is trying to expand from simply being a container-centric startup to being an application-development service that rolls with all the cloud providers.

Of course, given CoreOS’s claims this week that by working on all the extra bells and whistles, Docker has lost sight of creating a “standard container,” it’s hard not to think that perhaps Docker is getting a bit caught up in its own momentum and its urge to become a modern-day application-development hub.

Messina disagreed with CoreOS CEO Alex Polvi’s statements on Docker and said “the drive for orchestration is driven by the need of the users in our community.” Supposedly, Docker’s large community has called on Docker to upgrade those containers and make sure they can be spun up and controlled across multiple clouds with ease.

Messina didn’t want to go in detail as to what he felt Polvi got wrong about Docker when CoreOS unveiled its own stripped-down App Containers, but he did say that Polvi was “painfully inaccurate” when he referred to Docker being “fundamentally flawed” as it pertains to security.

“There’s an incredible number of inaccuracies in that blog post,” Messina said. “I don’t want to comment one by one.”

Docker is only roughly 20 months old, said Messina, and like other technologies, the 1.0 version of a product evolves over time, based on community feedback, into something a bit different from what it started out as.

“What is there today will not necessarily be there tomorrow or next week,” he said.

Messina stressed that “Each one of these services is available on the platform but optional.” However, as container-clustering startup Giant Swarm’s founder Oliver Thylmann told me earlier this week, he and his team have noticed the Docker daemon growing each day as Docker adds more features.

Still, it’s understandable why Docker is launching these services. The promise of containers was that they could make developing applications a whole lot easier and prevent infrastructure lock-in. The gist of the new orchestration services is that Docker’s containers are more portable than ever and can run better on different clouds; whether that adds to a larger Docker daemon or ironically ends up making Docker more complex than it needs to be remains to be seen.

As for Docker Hub Enterprise, the startup has been saying it wants to take a Red Hat approach to its open-source technology, and today’s announcements lay the groundwork for more Docker enterprise services to sprout. The important thing was for Docker to convince enterprises that it’s safe to use, and by making a version of Docker that can run behind a company firewall and contain private repositories, companies could feel better about giving the new service a whirl.

The outliers in this case are the multiple cloud providers that are Docker partners. Just how long will they tolerate a startup that plays nice with their competitors and lets customers use other infrastructure as well? There is a cloud war going on, after all.