Docker Containers Everywhere!
Docker has undoubtedly been the most disruptive technology the industry has witnessed in recent years. Every vendor in the cloud ecosystem has announced some level of support for or integration with Docker. DockerCon, the first conference hosted by Docker Inc. in June 2014, had the who’s who of the cloud computing industry tell their stories of container integration. While each company had a different depth of container integration within its platform, all of them acknowledged the benefits of Docker.
It is not often that we see Microsoft, Amazon, IBM, Google, Facebook, Twitter, Red Hat, Rackspace and Salesforce under one roof pledging their support for one technology. But what’s in it for Microsoft or Amazon to support Docker? Why are traditional PaaS players like Heroku and Google rallying behind Docker? Is Docker really creating a level playing field for cloud providers? Does Docker converge IaaS and PaaS? Can we trust the vendors offering their unconditional support for Docker? It may be too early to fully answer these questions.
Will the hype cause Docker to crash under the weight of too much attention, too soon?
History and Parallels
If there is one earlier technology that garnered comparably wide industry support, it is Java. When Java was announced in the mid-90s, everyone, including Microsoft, showed interest – until they realized how big a threat it was to their own platforms. Java’s main value proposition was Write Once, Run Anywhere; Docker containers promise Build Once, Run Anywhere. Docker can be compared to Java not just on the technology, but also in the potential threat it poses to certain companies. Though we have yet to see specific vendors countering the container threat by spreading fear, uncertainty and doubt, it may not be long before they do.
Whether Docker will dominate remains to be seen. Will history repeat itself with Docker the way it did with Java, or even VMware? Key players from across the cloud ecosystem – from low-level hypervisors (VMware) to SaaS (Salesforce) – are watching Docker to assess its impact on their businesses.
What is a Docker Container?
Docker is designed to manage Linux containers (LXC). What is so different about Docker, when container technologies have been around since 2000 (FreeBSD jails)? Docker is the first technology that makes it easy to create and manage containers, and to package applications in a way that makes them usable without a lot of tweaking. Developers do not need to be experts in containerization to use Docker.
Docker containers can be provisioned on any VM running Linux kernel 3.8 or above; which Linux distribution is running does not matter. Thanks to the Dockerfile – a declarative mechanism for describing how a container image is built – it is simple to pull an image from the registry and run it on a local VM in just a few minutes.
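For illustration, a minimal Dockerfile might look like the following sketch (the base image, file names and commands are hypothetical examples, not taken from any particular project):

```dockerfile
# Build on a stock Ubuntu base image pulled from the public registry
FROM ubuntu:14.04

# Install the application's runtime dependency
RUN apt-get update && apt-get install -y python

# Copy the application code into the image
COPY app.py /opt/app/app.py

# Command the container runs on start
CMD ["python", "/opt/app/app.py"]
```

Running `docker build -t myapp .` turns this description into an image, and `docker run myapp` launches a container from it on any Docker-capable host.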
The following diagrams depict what a container is – think Russian nesting dolls.
Containers as a Service?
There are already startups, like Tutum, that offer Docker as a service modeled on existing IaaS providers. Going forward, Tutum could leverage multiple IaaS offerings to dynamically provision and move containers across them. Just as IaaS customers don’t care about the brand of the servers that host their VMs, Tutum’s customers won’t care whether their container runs in Amazon or Azure. Customers will choose the geography where they want their container to run, and the provider will orchestrate the provisioning by choosing the cheapest or most suitable public cloud platform.
The viability of businesses that offer Docker as a service to customers is still an open question. Docker has great industry presence and a great deal of buzz, but will that translate into production use across enterprises?
How does Docker impact the Cloud Ecosystem?
From startups to enterprise IT, everyone has realized the power of self-service provisioning of virtual hardware. Public clouds like AWS, Azure and Google turned servers from commodities into utilities. Docker has the potential to reduce the cost of public cloud services by exposing more fine-grained compute resources and further reducing provisioning times. Additional services like load balancers, caching and firewalls will move into cloud-agnostic containers to offer portability.
Since containers are lighter-weight execution environments than VMs, Docker is well suited to hybrid cloud deployments. VMware vCHS and Microsoft Azure differentiate themselves through VM mobility features. Cloud bursting, a much-discussed capability of hybrid cloud, can be delivered through Docker: containers can be dynamically provisioned and relocated across environments based on resource utilization and availability.
If providers such as AWS adopt Docker as a new unit of resource, they may get cost efficiency benefits, but will management complexity and immaturity be too high of a burden right now?
Platform as a Service was one of the first service delivery models of cloud computing. It was originally created to enable developers to achieve scale without dealing with infrastructure. PaaS was expected to be the fastest growing market surpassing IaaS. But a few years later, early movers like Microsoft and Google realized that Amazon was growing faster because of its investments in IaaS. Infrastructure services had lower barriers to adoption than PaaS. Today, both Microsoft and Google have strong IaaS offerings that compete with Amazon EC2 in addition to maintaining their PaaS offerings.
The conflict in PaaS – and what has slowed its adoption – is the tension between enterprises’ need for a prescriptive way of writing, managing and operating applications and developers’ desire to resist such constraints. Another concern is portability: each “brand” of PaaS has unique services and API interfaces, and the metadata describing them is proprietary to each vendor, preventing code from moving between platforms. Initiatives like buildpacks attempted to make PaaS applications portable: moving from one PaaS instance to another of the same type, even across cloud providers, is simple. But buildpacks are still not an industry standard, because public PaaS providers like Google App Engine and Microsoft Azure don’t support them.
Docker delivers a simplified version of the PaaS promise to developers. It is important to note that some PaaS solutions, like Cloud Foundry and Stackato, now support Docker containers. With Docker, developers never have to deal with disparate environments for development, testing, staging and production. They can sanitize their development environment and move it to production without losing its configuration and dependencies, alleviating the classic ‘it worked on my machine’ syndrome that developers often deal with. Since each Docker container is self-sufficient – it contains the code and its configuration – it can be easily provisioned and run anywhere. The Dockerfile, which contains the build instructions for a container image, is far more portable than a buildpack. Developers can manage a Dockerfile in version control software like Git or SVN, taking infrastructure as code to the next level.
Docker disrupts the PaaS world by offering a productive and efficient environment for developers. Developers do not need to learn new ways of coding just because their application runs in the cloud. Of course, they still need to follow best practices of designing and developing scalable applications but their code can run as-is in a Docker container with no changes. Containers encourage developers to write autonomous code that can run as microservices. Going forward, PaaS will embrace Docker by providing better governance, manageability and time to provision.
PaaS is an evolving market, and Docker is being brought into the mix. Does this accelerate that evolution or disrupt it? Perhaps a bit of both: a standard way of dealing with environments through containers may simplify portability for customers, but it may also take those same early adopters down the path of a pure but less mature Docker-only solution.
Hypervisor and Virtualization Platforms
When VMware started offering virtualization in the form of VMware Workstation, no one thought it would become a dominant force in enterprise IT. Within a few years, VMware extended virtualization to servers and now to the cloud. The ecosystem around Docker is eager to apply lessons learned from hypervisors to Docker containers to fast-track adoption. Eventually, Docker will become secure and robust enough to run a variety of workloads that would otherwise run on VMs or even bare metal. There is already buzz around bare metal as a better alternative to multi-tenant VMs: CoreOS, a contemporary OS, claims it delivers better performance on bare metal with applications running inside Docker containers.
The immaturity of the tooling – and of an ecosystem that is large but not yet fully developed – raises the question of whether there will be a few early failures, even if Docker itself is ultimately successful.
Multi-Cloud Management Tools
Multi-cloud management software is typically called a Cloud Management Platform (CMP). Some of the CMP companies including RightScale, Scalr, Enstratius (now Dell Cloud Manager), and ScaleXtreme were all started on the premise of abstracting underlying cloud platforms. Customers use CMP tools to define the deployment topology independent of the specific cloud provider. The CMP then provisions the workload in one of the cloud platforms chosen by the customer. With this, customers never have to deal with cloud specific UIs or APIs. To bring all the cloud platforms to the same level playing field, CMPs leverage similar building block services for each cloud platform.
To avoid lock-in, CMPs use the basic compute, block storage, object storage and network services exposed by the cloud providers. Some CMPs deploy their own load balancers, database services and application services within each cloud platform. This brings portability to workloads without tying them to cloud-specific services and APIs. Since they are not tied to a specific platform, customers can decide to run their production environment on vSphere-based private clouds while running disaster recovery (DR) on AWS.
In many ways, Docker offers portability similar to CMPs. Docker enables customers to declare an image and its associated topology in a Dockerfile and then build it on a specific cloud platform. Just as CMPs build and maintain additional services like networking, databases and application services as managed VMs on each cloud, container providers can deploy and maintain managed containers that complement vendor-specific services. Tools like Orchard, Fig, Shipyard and Kubernetes enable next-generation providers to manage complex container deployments running on multiple cloud platforms. This overlaps with the business model of cloud management platforms, which is why companies like RightScale and Scalr are assessing the impact of Docker on their business.
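As a sketch of what such multi-container tooling looks like, Fig describes an entire deployment topology in a single YAML file. The two services below are illustrative assumptions, not drawn from any particular deployment:

```yaml
# fig.yml – a hypothetical two-container topology
web:
  build: .           # image built from the Dockerfile in this directory
  ports:
    - "8000:8000"    # publish the web service's port
  links:
    - db             # wire the web container to the database container
db:
  image: postgres    # official image pulled from the registry
```

A single `fig up` then builds, pulls and starts both containers together, which is the kind of declarative orchestration CMPs have traditionally provided with managed VMs.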
Does Docker eliminate the need for CMPs, or create more of it? Docker may introduce even more complex dependency chains that are harder to troubleshoot. Will CMPs adapt to manage Docker across heterogeneous environments?
Docker and DevOps
Though there are many tools in the DevOps equation that aim to bring developers and operations closer, Docker is a framework that closely aligns with DevOps principles. With Docker, developers stay focused on their code without worrying about the side effects of running it in production. Ops teams can treat the entire container as just another artifact while managing deployments. The layered approach to the file system and dependency management makes environment configuration easier to maintain. Versioning Dockerfiles in the same source control system (such as a Git workflow) makes managing multiple dev/test environments very efficient. Multiple containers representing different environments can be isolated while running on the same VM. It should be noted that Docker also plays well with existing tools like Jenkins, Chef, Puppet, Ansible, SaltStack, Nagios and OpsWorks.
Docker has the potential to make a significant impact on the DevOps ecosystem; it could fundamentally change the way developers and operations professionals collaborate. Emerging DevOps-as-a-service companies like CloudMunch, Factor.io and Drone.io will likely have to adopt Docker and bring it into their CI and CD solutions.
Does Docker ultimately only become a fit for Dev/Test and QA?
Docker is facing the same challenges that Java went through in the late ’90s. Given its potential to disrupt the market, many players are closely assessing its impact on their businesses. There will be attempts to hijack Docker into territories it was not intended for. Docker Inc. must be cautious in its approach to avoid the same fate as Java: remember that Sun Microsystems, the original creator of Java, never managed to exploit it the way IBM and BEA did. If not handled well, Docker Inc. faces a similar risk of its ecosystem profiting more than it does.