Networked Business Defined

In my recently published research agenda, one of the topics that I included was networked business. I gave a brief definition of the term in that post, but realized that it would be useful to go a bit deeper. My goal is to make sure that Gigaom Research clients and followers have a solid, baseline understanding of the term whenever I use it in the future.
What is ‘networked business’? Much academic work has been done to define this term in the last two decades. However, rather than forge a consensus definition from multiple opinions, I decided to build my own in 2012, based on the dictionary definition of each component of the term.
However, I quickly found that there is not a single definition for each word. Every dictionary that I referenced offered a slightly different, nuanced version of what the words ‘network’ and ‘business’ mean. In the end, I decided to work with definitions from Snappy Words, a free online visual dictionary. Snappy Words consistently offers thoughtful, refreshing definitions that go beyond the ordinary ones proposed in more traditional dictionaries. Here are the Snappy Words definitions I chose to start from:
network (n.) – an interconnected system of things or people
business (n.) – a commercial or industrial enterprise and the people who constitute it
The Snappy Words meaning of ‘business’ cited above is a good example of how their definitions are different. Most traditional dictionaries do not reference ‘people’ in their definitions of ‘business’. The central point of the Social Business movement was (and is) that people matter quite a lot in business. As is often the case, Snappy Words has done the best job of incorporating recent thought into their definitions.
Back to the task of creating a definition for ‘networked business’ from those for ‘network’ and ‘business’. Combining the two definitions was not as straightforward as you might think. The complicating factor is which word should be emphasized: the modifier (‘networked’) or the noun (‘business’). If the noun is highlighted, the resulting definition of ‘networked business’ best applies to a single organization. If the modifier (or state) is deemed most important, then the definition most accurately describes an ecosystem.
So we really need two definitions for ‘networked business’. Here are the ones that I have proposed:
networked business (n.) – a company whose value-producing assets are connected to each other and to those of other organizations
networked business (a.) – a state in which an interconnected system of organizations and their value-producing assets is working toward one or more common objectives
The first definition is about an individual business and the connected state it is in, internally and externally. A networked business views its organizational units as both independent silos and connected network nodes. It treats its people as both individuals and interdependent employees. The networked business sees itself as a separate entity, as well as a partner with other organizations.
The second definition speaks to the larger concept of networked business. It describes the collaborative ecosystem in which individual networked businesses work together to create and capture value. It is a philosophical objective and, if successfully achieved, an operational reality of how business is done in the early 21st century.
These definitions have held up well in the three years since they were written and first published. That said, any definition should be subject to change, as the thing that it is attempting to define morphs over time.
What do you think about these two definitions of ‘networked business’? What do you specifically like or dislike? Are there things that you would add? Please leave comments and suggestions below. Thanks!

IBM InterConnect Day 1 Impressions

InterConnect 2015 Las Vegas is the combination of a few IBM conferences. Past conferences carried quite a bit of overlap in content, and as the conversations blurred, it made sense to combine them. The challenge: the conference is spread across two Las Vegas hotels that are not connected. A whopping 21,000 people are in attendance, with another 15,000 joining via the online portal, InterConnectGO.

Logistics aside, the first day kicked off with a bang. The opening session included all the glitz and glamor one might expect from a Vegas show. The content covered a wide spectrum of IBM’s portfolio, from cloud to data analytics.

In my opinion, IBM’s SoftLayer and Watson stories are gems among a varied portfolio. In addition, the social engine is in full swing here at InterConnect. Analytics play a great role in defining different social metrics and IBM is not missing the opportunity. But more about that in a minute.

All about cloud

The cloud story is starting to gel for IBM, but it still needs a bit of sharpening. They covered all the buzzwords in cloud, but it left me wanting to hear more than buzzword bingo. Much of the story hinges on the success of SoftLayer. A deeper look at SoftLayer shows that it addresses a number of the core enterprise requirements for the broader market. It is not everything for everyone, but it doesn’t need to be. This is where the ecosystem comes in. Ecosystems are everything today.

During the opening session, IBM announced ‘OpenStack as a Service’. It is not clear how this fits into the overall strategy, as it was glossed over. This is an area to watch closely from two perspectives: 1) What exactly is the offering, and what market is it intended to address? 2) How will this affect and/or divert SoftLayer’s existing VMware offerings? Will it cause SoftLayer to abandon VMware in favor of OpenStack, as others have done? The answers to these questions could govern the future success of SoftLayer, both short- and long-term.

Coursing through the data analytics

Many references are made to the growing accumulation of data. Terms like ‘data lake’ and ‘data ocean’ are used to describe the growing mass of untapped data. During the opening session, IBM outlined several use cases where companies have leveraged its technology to gain insight into the data problem.

Many of the examples continue with the financial services and healthcare use cases. Healthcare is one of the industries most ripe for disruption by data analytics, if not the most. Citi joined IBM on stage to talk about their approach to innovation. Their mantra: unleash, develop, disrupt. In the words of Citi, “Nobody needs banks, but everybody needs banking.” Great line. For healthcare, Mayo Clinic mentioned that only 5% of cancer patients are engaged in a trial, meaning there is a huge disconnect (read: opportunity) in connecting patients to potential treatment courses.

Getting social

Cloud and data analytics aren’t the only topics here at InterConnect. IBM is heavily leveraging their analytics platform to demonstrate the value of social here, and the social media elite are in full force. There were a couple of missteps in the choice of longer hashtags (#IBMInterConnect and #NewWayToWork), but otherwise the Twitter stream is flowing pretty heavily. The longer hashtags are definitely leading to a myriad of typos, which defeats the purpose of a hashtag. One improvement would be greater engagement in the conversations happening on Twitter. As at some conferences, the Twitter feed is mostly one-way, with little two-way engagement.

Downsides aside, the flow of tweets coming from an IBM conference is impressive. Considering the perception of IBM, it appears they’re moving in the right direction socially.

On tap for Day 2 and beyond…

It’s all about the cloud. Looking forward to the cloud discussions today along with the Executive Session and Shark Tank presentations.

Overall, it’s apparent that IBM is turning the corner on the conversations. IBM does have its flaws, as does any company that is 400,000 employees strong. That aside, IBM needs to continue on their quest to drive toward cloud and data analytics dominance. SoftLayer and Watson are two shining gems in the IBM portfolio that will need to blossom as they mature.

Time’s up! Changing core IT principles

There is a theme gaining ground within IT organizations, and a number of examples support it. This theme will change the way solutions are built, configured, sold and used. Even the ecosystems and ancillary services will change. It also changes how we think, organize, lead and manage IT organizations. The theme is:

Just because you (IT) can do something does not mean you should.

Ironically, there are plenty of examples in the history of IT where the converse of this principle served IT well. Well, times have changed and so must the principles that govern the IT organization.

Apply it to the customization of applications and you get this:

Just because IT can customize applications to the nth degree does not mean they necessarily should.

This plays out clearly in the configuration and customization of applications. Just because IT could customize the heck out of an application, should they have? The argument often made here is that customization provides some value, somewhere, either real or (more often) perceived. The reality, however, is that it comes at a cost, sometimes a very significant and real cost.

Making it real

Here is a real example that has played out time and time again. Take application XYZ. It is customized to the nth degree for ACME Company. Preferences are set, not necessarily because they should be, but rather because they could be. Fast-forward a year or two. Now it is time to upgrade XYZ. The costs are significantly higher due to the customizations, requiring more planning, more testing, more work all around. Were those costs justified by the benefit of the customizations? Typically not.

Now it is time to evaluate alternatives for XYZ. ACME builds a requirements document based on XYZ (including the myriad of customizations). Once the alternatives are matched against the requirements, the only solution that really fits the need is the incumbent. This approach gives significant weight to the incumbent solution, thereby limiting alternatives.

These examples are not fictitious scenarios. They are very real and have played out in just about every organization I have come across. The lesson here is not that customizations should be avoided. The lesson is to limit customizations to only those that are necessary and provide significant value.

And the lesson goes beyond just configurations to understanding IT’s true value, based on what IT should and should not do.

Leveraging alternative approaches

Much is written about the value of new methodologies and technologies. Understanding IT’s true core value opportunity is paramount. The value proposition starts with understanding how the business operates. How does it make money? How does it spend money? Where are the opportunities for IT to contribute to these activities?

Every good strategy starts with a firm understanding of the ecosystem of the business; that is, how the company operates and interacts. A good target, and one many are finding success with, sits furthest away from the core company operations and is therefore the hardest to tie to true business value…in business terms. For many, it starts with the data center and moves up the infrastructure stack. For a bit more detail: CIOs are getting out of the data center business.

Preparing for the future today

Is your IT organization ready for today? How prepared are your organization, processes and systems to handle real-time analytics? As companies consider how to engage customers from a mobile platform in real time, the shift from batch-mode to real-time data analytics quickly takes shape. Yet many of the core systems and infrastructure are nowhere near ready to take on the changing requirements.

Beyond data, are the systems ready to respond to the changing business climate? What is IT’s holistic cloud strategy? Is a DevOps methodology engaged? What about container-based architectures?

These are only a few of the core changes in play today…not in the future. If organizations are to keep up, they need to start making the evolutionary turn now.

5 things a CIO wishes for this holiday season

It is that time of year when we start thinking about our predictions for the next year. Before we get to 2015 predictions next week, let us take an introspective look at 2014 and what we could hope for from the IT perspective in 2015.

There are a number of key gaps between where we, as IT organizations and the CIOs that lead them, are today versus where we need to be. One dynamic that is currently evolving, however slowly, is the shift from traditional CIOs to transformational CIOs. This applies equally to the IT organizations they lead. IT is in a transitional state at the moment, which leaves quite a bit in flux. In many ways, there is much more changing within IT organizations today than ever before.

As we progress through the 2014 holiday season heading quickly toward 2015, there are a number of things that, as CIO, I would wish for in 2015.

  1. Reduce the risk from security breaches: With recent events, it is probably not surprising that security is front-and-center. Security breaches are not new to IT organizations. Neither are high-profile breaches. The change over the past year is that the frequency of high-profile breaches has increased significantly. In addition, considering the breaches in just the past year, the vast majority of people in the US have been affected by at least one of them. As a CIO, I do not want to be on the front page of the Wall Street Journal, let alone a household name that violated the trust of my customers’ data.
  2. The end of vaporware: Vaporware, like security breaches, is not new. But the hype around emerging technologies has really gotten out of control. It is time to dial it back to a more reasonable level. This is especially true of services that are ‘stickier’ for customers. Be reasonable with setting expectations. It is OK to be ambitious, but it also builds credibility when you express what is and isn’t in your wheelhouse.
  3. A business-centric IT organization: Consider an IT organization that brings a business-centric focus to delivering solutions in a proactive manner. No longer are there ‘translators’ between business and IT; rather, the IT organization understands how the company makes and spends money…intimately. This means it understands the ecosystem of the company, its customers and the marketplace.
  4. Symbiotic business relationships: This one is intertwined with #3 where the IT organization and other lines of business work fluidly and collaboratively toward common objectives. Lines of business outside of IT view IT as a strategic asset, not a tool. And, there is no more talk of IT and ‘the business’ as if they’re separate groups. IT is part of ‘the business’.
  5. A clear future, not cloudy: It would be great if the future state were clear as a bell to the entire IT ecosystem. Right now, it’s pretty cloudy (pun intended). That’s not to say that clouds don’t have a place. Cloud computing represents the single biggest opportunity for IT organizations today.

I’ve said it before and I’ll say it again: this is absolutely the best time to work in IT. I know there are IT professionals that have a hard time with that statement. However, much of that consternation comes from the ambiguity currently within the IT industry. Let’s face it; there is a ton of change happening in IT right now. Things we took as gospel for decades are being questioned. Best practices are no longer so.

But with change and disruption come confusion and opportunity. Once we get beyond this temporary state, things will settle and the future state will become clearer. Here’s to an exhilarating 2015!

Happy Holidays and here’s to an amazing 2015!

IBM connects the dots between data, cloud and engagement

At this week’s IBM Insight conference in Las Vegas, IBM brought out the big guns to demonstrate their chops in the data analytics space. Insight is IBM’s conference dedicated to their solutions around data management and analytics. While there are some highlights, other areas are still evolving.

Setting the stage and connecting the dots

Things kicked off with IBM SVP of the Information and Analytics group, Bob Picciano, talking about the important interconnection between data, cloud and engagement.

  • Data is the ‘What’
  • Cloud is the ‘How’
  • Engagement is the ‘Why’

Bob’s messaging paints a good picture of how technology and data play a central role in the ever-changing IT organization. Engagement is the key to business relationships with customers. The CIO and IT organization need to fully understand how they engage with customers today and how that will evolve over time. Where are the opportunities? How can IT help create deeper relationships with customers? Data and cloud will play a leading role.

Relationships come in all sizes

The way companies connect with their customers will vary greatly. To that point, there are some core themes here at Insight that mirror those varied ways. Two of the key areas are social engagement and mobile. Ironically, traffic at the mobile booth seems anemic compared with the social engagement area, which saw constant traffic. In order for IBM to truly capitalize on the changing marketplace, mobile will need to take a stronger position.

Getting social, but still a ways to go

Social media plays a central role in customer engagement for many organizations. The impressive thing is that the #IBMInsight hashtag was trending high on Twitter’s list for much of the day. As a data geek, one is always thinking about the value of those metrics. Trending at the top of Twitter is pretty impressive until you start to look at the finer details.

Running the data through Tweet Binder provides a bit of clarity (report). Almost 50% of tweeters used Twitter clients for iPhone, iPad or Android, speaking to the importance of mobile in social media. Looking a bit further, 61% of tweeters sent only a single tweet, while 77.51% tweeted only once or twice. That is not a good showing for attendees who should be well versed in the impact of social media, and it demonstrates there is still a ways to go.

Building an ecosystem

Walking the expansive show floor, it is apparent that IBM has worked to build its ecosystem. There are plenty of vendors that provide complementary products based on IBM technology, along with plenty of consulting shops too. The interesting point here is that there are not many large technology companies other than IBM exhibiting. This could be a side effect of IBM’s wide portfolio of services and solutions and a feeling of competitiveness among vendors. Unfortunately, it does not represent the varied needs of the average enterprise customer.

Summary in a nutshell

Putting it all together, IBM is making good waves to support the enterprise around data and analytics. They have made a good start, but still have a ways to go. The solutions still have a traditional IBM ‘feel’ and, with rare exceptions, do not yet span into newer territories. There was a showing of IBM’s BlueMix platform, but not much beyond the large enterprise perspective. Even the cloud area had to compete with the sheer size of the infrastructure areas.

The reality is that turning a company the size of IBM is hard. In addition to size, there are cultures that need adjustment too. But it seems IBM has started to make good strides in some specific areas with ostensibly more to come. It will be interesting to see how IBM addresses solutions going forward and starts to truly pull the different components (data, cloud, engagement) together.

There comes a point when it is not just about storage space

Is the difference between cloud storage providers about free space? In a word, no. I wrote about the cloud storage wars and potential bubble here:

The cloud storage wars heat up

http://avoa.com/2014/04/29/the-cloud-storage-wars-heat-up/

4 reasons cloud storage is not a bubble about to pop

http://avoa.com/2014/03/24/4-reasons-cloud-storage-is-not-a-bubble-about-to-pop/

Each of the providers is doing their part to drive value into their respective solutions. To some, value includes the amount of ‘free’ disk space included. Just today, Microsoft upped the ante by offering unlimited free space for their OneDrive and OneDrive for Business solutions.

Is there value in the amount of free space? Maybe, but only to a point. Once providers offer an amount above normal needs (or unlimited), the value becomes null. I do not have statistics, but I would hazard a guess that ‘unlimited’ is more marketing leverage, where most users consume less than 50GB each.

Looking beyond free space

Once a provider offers unlimited storage, one needs to look at the features and functionality of the solution. Not all solutions are built the same, nor offer similar levels of capability. Enterprise features, integration, ease of use and mobile access are just a few of the differentiators. Even with unlimited storage, if the solution does not offer the features you need, the value of that storage is greatly diminished.

The big picture

For most, cloud storage is about replacing a current solution. On the surface, the amount of free storage is a quick pickup. However, the real issue is in the compatibility and value beyond just the amount of free storage. Does the solution integrate with existing solutions? How broad is its ecosystem? What about Single Sign-On (SSO) support? How much work will it take to implement and train users? These are just a few of the factors that must be considered.

Ballmer departure changes the Microsoft game for enterprise CIOs

In a letter to Microsoft CEO Satya Nadella, former CEO Steve Ballmer announced his departure from Microsoft’s board of directors. While Ballmer still retains 4% of Microsoft (NASDAQ: MSFT), he relinquishes his last official leadership title. But the departure of employee number 30 signals a deeper change at Microsoft that enterprise CIOs will want to watch.

Ballmer, and founder Bill Gates before him, brought significant value to enterprises. The relationships they built touch just about every corporate entity on the planet. Even now, Microsoft maintains some of the strongest relationships with enterprises. Many of those relationships are based on client operating systems and productivity tools, namely Windows and Office.

However, as cloud became more prevalent in the enterprise world, Microsoft seemed steeped in its traditional form and made only modest course corrections. Arguably, changing a $300B+ publicly traded corporation is not for the faint of heart. But the appointment of Satya Nadella to the CEO role was no mistake; it was part of a carefully orchestrated set of maneuvers intended to change Microsoft’s path. Nadella was formerly Microsoft’s Executive Vice President of Cloud and Enterprise. Ballmer’s seat on the board of directors left him in a very influential position. His departure signals an opportunity for Nadella to shine.

And that shine is just what Microsoft needs to turn the corner. Now is the opportunity for Microsoft to take actions that fully embrace cloud in a holistic manner. For the enterprise CIO, this means looking at Microsoft beyond just Office 365 and Azure. Microsoft historically created a broad ecosystem that was Microsoft-centric. In order for Microsoft, or any cloud provider, to succeed, the ecosystem must be open and extend beyond the boundaries of its existing portfolio of products and services. For Microsoft, this shift is more personal: moving away from on-premises enterprise products and toward open, cloud-based services like Office 365 and Azure.

Under Ballmer’s reign, this shift would have been challenging at best. Ballmer was a great leader who drove Microsoft hard in a direction that founder Bill Gates started. Now it is Nadella’s turn at the helm to take Microsoft in a completely different direction. And that very direction, toward cloud-based services, is just what the enterprise needs. Microsoft, with its existing deep enterprise relationships, has the opportunity to capitalize on this shift. For the enterprise CIO, sticking with a Microsoft ecosystem brings a level of comfort and attractiveness. Before the leadership change, one might have questioned whether Microsoft was capable of making the turn. With Ballmer’s departure, the enterprise CIO has renewed interest in seeing what transpires next. The question is: can Microsoft really make the shift happen quickly enough? The next few months will be interesting to watch.

Is Docker a threat to the Cloud ecosystem?

Docker Containers Everywhere!

Docker has undoubtedly been the most disruptive technology that the industry has witnessed in the recent past. Every vendor in the cloud ecosystem has announced some level of support or integration with Docker. DockerCon, the first ever conference hosted by Docker Inc. in June 2014, had the who’s who of the cloud computing industry tell their stories of container integration. While each company had varying levels of container integration within their platforms, they all unanimously acknowledged the benefits of Docker.

It is not often that we see Microsoft, Amazon, IBM, Google, Facebook, Twitter, Red Hat, Rackspace and Salesforce under one roof pledging their support for one technology. But what’s in it for Microsoft or Amazon to support Docker? Why are traditional PaaS players like Heroku and Google rallying behind Docker? Is Docker really creating a level playing field for cloud providers? Does Docker converge IaaS and PaaS? Can we trust the vendors offering their unconditional support for Docker? It may be too early to fully answer these questions.

Will the hype cause Docker to crash from too much attention too soon?

History and Parallels
If there is one technology that garnered similarly wide industry support, it was Java. When Java was announced in the mid-90s, everyone, including Microsoft, showed interest until they realized how big a threat it was to their own platforms. Java’s main value proposition was Write Once, Run Anywhere – Docker containers are Build Once, Run Anywhere. Docker can be compared to Java not just from a technology aspect, but also in the potential threat it poses to certain companies. Though we have yet to see specific vendors countering the container threat by creating fear, uncertainty and doubt, it may not be long before they do.

Whether Docker will dominate remains to be seen. Does history repeat itself with Docker the way that it did with Java, or even VMware? Key players from across the cloud ecosystem, offering everything from low-level hypervisors (VMware) to SaaS (Salesforce), are watching Docker to assess its impact on their businesses.

What is a Docker Container?

Docker is designed to manage things like Linux Containers (LXC). What is so different about Docker, when container technologies have been around since 2000 (FreeBSD jails)? Docker is the first technology that makes it easy to create and manage containers, and to package things in a way that makes them usable without a lot of tweaking. Developers do not need to be experts in containerization to use Docker.

Docker containers can be provisioned on any VM that runs Linux kernel 3.8 or above; it doesn’t matter which Linux distribution the VM runs. Thanks to the powerful Dockerfile – a declarative mechanism to describe the container – it is pretty simple to pull a container from the registry and run it on a local VM in just a few minutes.
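
To make that concrete, here is a minimal sketch of the workflow. The base image, package and image name are illustrative assumptions, not anything from a specific vendor announcement:

    # Dockerfile – a declarative description of the container (hypothetical example)
    FROM ubuntu:14.04                                 # base image pulled from the public registry
    RUN apt-get update && apt-get install -y nginx    # bake the dependency into the image
    EXPOSE 80                                         # port the containerized service listens on
    CMD ["nginx", "-g", "daemon off;"]                # process to run when the container starts

    # On any VM with a Docker-capable kernel:
    docker build -t myorg/web .                       # 'myorg/web' is a hypothetical image name
    docker run -d -p 8080:80 myorg/web                # run detached, mapping host port 8080 to container port 80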

The following diagrams depict what a Container is – think Russian Nesting Dolls.

[Figure: Stack-inception – containers and how they relate to systems software, with VMs. Source: Gigaom Research]

[Figure: Stack-inception – containers and how they relate to systems software, without VMs. Source: Gigaom Research]

Containers as a Service?
There are already startups, like Tutum, that offer Docker as a Service by imitating existing IaaS providers. Going forward, there is a possibility that Tutum will leverage multiple IaaS offerings to dynamically provision and move containers across them. Just as IaaS customers don’t care about the brand of the servers that host their VMs, Tutum’s customers won’t care whether their container runs in Amazon or Azure. Customers will choose the geography or location where they want their container to run, and the provider will orchestrate the provisioning by choosing the cheapest available or most suitable public cloud platform.

The viability of Docker, and of businesses that offer Docker to customers as IaaS, is still an open question. While Docker has great industry presence and a great deal of buzz, will this translate to production use across enterprises?

How does Docker impact the Cloud Ecosystem?

Public Cloud
From startups to enterprise IT, everyone has realized the power of self-service provisioning of virtual hardware. Public clouds like AWS, Azure and Google turned servers from commodities into utilities. Docker has the potential to reduce the cost of public cloud services by allowing more fine-grained compute resources to be utilized and by further reducing provisioning times. Additional services like load balancers, caching and firewalls will move to cloud-agnostic containers to offer portability.
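
As a rough sketch of that finer granularity (service and image names are hypothetical; -m and -c are the Docker CLI’s memory-limit and CPU-shares flags):

    # Carve one VM into small, metered slices instead of provisioning whole VMs
    docker run -d -m 256m -c 256 --name api-1 myorg/api   # 256MB of RAM, 256 CPU shares
    docker run -d -m 256m -c 256 --name api-2 myorg/api   # a second, identical slice
    docker run -d -m 512m -c 512 --name db-1 myorg/db     # a larger slice for the data tier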

Since containers are lighter weight execution environments than VMs, Docker is well suited for hybrid cloud deployments. VMware vCHS and Microsoft Azure differentiate themselves through a VM mobility feature. Cloud bursting, a much talked about capability of hybrid cloud, can be delivered through Docker. Containers can be dynamically provisioned and relocated across environments based on resource utilization and availability.
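
A sketch of what that relocation could look like, assuming a registry reachable from both environments (all names hypothetical):

    # Private cloud: snapshot the running container and publish the image
    docker commit web-prod myorg/web:burst    # capture container 'web-prod' as an image
    docker push myorg/web:burst               # push it to the shared registry

    # Public cloud VM: pull the identical image and resume the workload
    docker pull myorg/web:burst
    docker run -d -p 80:80 myorg/web:burst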

If providers such as AWS adopt Docker as a new unit of resource, they may get cost efficiency benefits, but will management complexity and immaturity be too high of a burden right now?

PaaS
Platform as a Service was one of the first service delivery models of cloud computing. It was originally created to enable developers to achieve scale without dealing with infrastructure. PaaS was expected to be the fastest-growing market, surpassing IaaS. But a few years later, early movers like Microsoft and Google realized that Amazon was growing faster because of its investments in IaaS. Infrastructure services had lower barriers to adoption than PaaS. Today, both Microsoft and Google have strong IaaS offerings that compete with Amazon EC2, in addition to maintaining their PaaS offerings.

The conflict in PaaS, and what has caused its slower adoption, is the tension between enterprises’ need for a prescriptive way of writing, managing, and operating applications and developers’ desire to resist such constraints. Another concern is portability when writing applications on PaaS; each “brand” of PaaS has unique services and API interfaces which are not portable between one another. This metadata is proprietary to each PaaS vendor, preventing the portability of code. Initiatives like buildpacks attempted to make PaaS applications portable. Moving from one PaaS instance to another of the same type, even across cloud providers, is simple. But buildpacks are still not an industry standard because public PaaS providers like Google App Engine and Microsoft Azure don’t support the concept.

Docker delivers a simplified promise of PaaS to developers. It is important to note that there are some PaaS solutions, like Cloud Foundry and Stackato, that now support Docker containers. With Docker, developers never have to deal with disparate environments for development, testing, staging and production. They can sanitize their development environment and move it to production without losing configuration and its dependencies. This alleviates the classic ‘it-worked-on-my-machine’ syndrome that developers often deal with. Since each Docker container is self-sufficient, in that it contains the code and configuration, it can be easily provisioned and run anywhere. The Dockerfile (which contains the configuration information for a Docker container) is far more portable than the concept of a buildpack. Developers can manage a Dockerfile by integrating it with version control software like Git or SVN, which takes infrastructure as code to the next level.
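
A minimal sketch of that flow, using hypothetical repository and image names:

    # The Dockerfile is versioned in the same repo as the code
    git clone https://example.com/myorg/app.git && cd app
    docker build -t myorg/app:1.0 .     # dev, test, staging and prod all build the same image

    # Every environment runs the identical artifact – no 'it worked on my machine'
    docker run -d --name app myorg/app:1.0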

Docker disrupts the PaaS world by offering a productive and efficient environment for developers. Developers do not need to learn new ways of coding just because their application runs in the cloud. Of course, they still need to follow best practices for designing and developing scalable applications, but their code can run as-is in a Docker container with no changes. Containers encourage developers to write autonomous code that can run as microservices. Going forward, PaaS will embrace Docker by providing better governance, manageability and faster provisioning.

PaaS is an evolving market and Docker is being brought into the mix. Does this accelerate evolution or disrupt it? Perhaps it is a bit of both: a standard way of dealing with environments through containers may simplify portability for customers, but it may also take those same early adopters down the path of a pure but less mature Docker-only solution.

Hypervisor and Virtualization Platforms
When VMware started offering virtualization in the form of VMware Workstation, no one thought it would become a dominant force in enterprise IT. Within a few years, VMware extended virtualization technology to servers and now to the cloud. The ecosystem around Docker is eager to apply lessons learned from hypervisors to Docker containers to fast-track its adoption. Eventually, Docker will become secure and robust enough to run a variety of workloads that would otherwise run on VMs or even bare metal. There is already buzz around bare metal being a better alternative to multi-tenant VMs. CoreOS, a contemporary OS, claims that it delivers better performance on bare metal with applications running inside Docker containers.

The immaturity of the tooling, and an ecosystem that is large but not yet well developed, raise the question of whether there will be a few early failures even if Docker itself succeeds.

Multi-Cloud Management Tools
Multi-cloud management software is typically called a Cloud Management Platform (CMP). CMP companies including RightScale, Scalr, Enstratius (now Dell Cloud Manager) and ScaleXtreme were all started on the premise of abstracting the underlying cloud platforms. Customers use CMP tools to define a deployment topology independent of the specific cloud provider. The CMP then provisions the workload on one of the cloud platforms chosen by the customer. With this, customers never have to deal with cloud-specific UIs or APIs. To bring all the cloud platforms to a level playing field, CMPs leverage similar building-block services on each cloud platform.

To avoid lock-in, CMPs use the basic compute, block storage, object storage and network services exposed by the cloud providers. Some CMPs deploy their own load balancers, database services and application services within each cloud platform. This brings portability to workloads without tying them to cloud-specific services and APIs. Since they are not tied to a specific platform, customers can decide to run the production environment on vSphere-based private clouds while running disaster recovery (DR) on AWS.

In many ways, Docker offers portability similar to CMPs. Docker enables customers to declare an image and associated topology in the Dockerfile and then build it on a specific cloud platform. Similar to the way CMPs build and maintain additional services like networking, databases and application services as managed VMs on each cloud, container providers can deploy and maintain managed containers that complement vendor-specific services. Tools like Orchard, Fig, Shipyard and Kubernetes enable next-generation providers to manage complex container deployments running on multiple cloud platforms. This overlaps with the business model of cloud management platforms, which is why companies like RightScale and Scalr are assessing the impact of Docker on their business.

Does Docker eliminate the need for CMPs, or create more of it? Docker may introduce even more complex and difficult dependency chains that are harder to troubleshoot. Will CMPs adapt to incorporate Docker management across heterogeneous platforms?

DevOps
Though there are many tools in the DevOps equation that aim to bring developers and operations closer, Docker is a framework that closely aligns with DevOps principles. With Docker, developers stay focused on their code without worrying about the side effects of running it in production. Ops teams can treat the entire container as just another artifact while managing deployments. The layered approach to file system and dependency management makes the configuration of environments easier to maintain. Versioning and maintaining Dockerfiles in the same source code control system (in a Git-style workflow) makes managing multiple dev/test environments very efficient. Multiple containers representing different environments can be isolated while running on the same VM. It should be noted that Docker also plays well with existing tools like Jenkins, Chef, Puppet, Ansible, Salt Stack, Nagios and OpsWorks.
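
For instance, a sketch of that side-by-side isolation (image name and ports are hypothetical):

    # Three environments from the same image, isolated side by side on one VM
    docker run -d --name app-dev -p 8081:80 myorg/app:1.0
    docker run -d --name app-staging -p 8082:80 myorg/app:1.0
    docker run -d --name app-prod -p 8083:80 myorg/app:1.0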

Docker has the potential to make a significant impact on the DevOps ecosystem. It could fundamentally change the way developers and operations professionals collaborate. Emerging DevOps-as-a-service companies like CloudMunch, Factor.io and Drone.io will likely have to adopt Docker and bring it into their CI and CD solutions.

Does Docker ultimately only become a fit for Dev/Test and QA?

Summary

Docker is facing the same challenges that Java went through in the late 90s. Given its potential to disrupt the market, many players are closely assessing its impact on their businesses. There will be attempts to hijack Docker into territories it is not intended for. Docker Inc. must be cautious in its approach to avoid the same fate as Java. Remember that Sun Microsystems, the original creator of Java, never managed to exploit it the way IBM and BEA did. If not handled well, Docker Inc. faces a similar risk of having its ecosystem profit more than it does.