Are microservices just SOA redux?

Sinclair is CEO and cofounder of Apprenda, a leader in enterprise Platform as a Service.

It seems like every conversation related to cloud-native software projects these days involves microservices. During those conversations, someone inevitably draws a comparison with service-oriented architecture (SOA) or hesitantly asks the question, “Aren’t microservices just SOA?” While it might not seem important at first glance, this is actually a pressing question that gets little attention.
Usually this question is either dismissed outright in the negative or accepted unquestioningly in the affirmative. As an exercise in answering it more deeply, let’s spend a little time understanding SOA and microservices independently and then compare the two.
In the early 2000s, service-orientation became a popular design principle. Driven by backlash against highly coupled, binary-oriented systems, service-orientation promised significant increases in flexibility and compatibility. Microsoft’s Don Box was one of the first to truly spell out the guiding principles of SOA, captured in four simple tenets:

1. Boundaries are explicit

2. Services are autonomous

3. Services share schema and contract, not class

4. Service compatibility is based on policy

By adopting a service-oriented architecture that adhered to these tenets, one could unlock the value in SOA. Very quickly the world’s top software vendors capitalized on the opportunity and began building platforms and technologies to support the concept.
In fact, the SOA movement became almost entirely a vendor-driven paradigm. Vendors scrambled to build middleware to allow developers to build SOA components that could be delivered and managed in the context of those four tenets. That middleware, in many instances, became bloated. Moreover, industry specifications that defined things like SOA schemas and policy management also became bloated. This bloat resulted in heavyweight components and a backlash by developers who viewed SOA as a cumbersome, unproductive model.
In the mid-2000s, cloud infrastructure started gaining steam. Developers were able to quickly stand up the compute and storage they needed and install and configure new applications to use that infrastructure. Additionally, applications continued tackling new levels of scale, requiring distributed architectures to handle that scale properly.
Distribution of components forced segregation of application logic based on functionality. That is, applications were broken up into smaller components, each responsible for specific functions in the app.
This ability to call up infrastructure instantaneously, coupled with developers’ growing preference for distributed architectures, prompted practitioners to formalize these ideas into a framework. Microservices became the concept that embodied much of this and more.
It would seem that the backstory for microservices satisfies tenets 1 through 3 (although tenet 3 is a bit more relaxed in microservices, since a REST API wouldn’t typically be considered a strict contract), making microservices look very much like SOA. So how are they different?
Microservices, as originally conceptualized by Martin Fowler and James Lewis, extend expectations beyond how an application is partitioned. Microservices as a pattern establish two other important tenets:

5. Communication across components is lightweight

6. Components are independently deployable

These seemingly small additions to the criteria defining microservices have a drastic impact, creating a stark difference between microservices and SOA.
Tenet 5 implies that complex communications buses should not be used in a microservices architecture. Something like an enterprise service bus (ESB) under the hood would create a large, implicit system dependency that would, by proxy, create a monolith of sorts since all the microservices would have one common, massive dependency influencing the functional end state.
Tenet 6 means that deployment monoliths are not allowed (something that was common in SOA). Each service should carry its isolation all the way up the SDLC, at least through deployment. Together, these two tenets ensure that services remain independent enough that agile, parallel development is not only possible but almost required. While SOA meant that logic was divided into explicitly bounded components of the same application, microservices’ independent deployability means the components need not belong to the same application at all; each may be its own independent application.
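As a rough illustration of tenets 5 and 6, here is a minimal sketch of a microservice that communicates over plain HTTP/JSON rather than a heavyweight bus. The service name, route, and data are hypothetical, and Python’s standard library stands in for a real web framework:

```python
# A hypothetical "inventory" microservice: lightweight HTTP/JSON communication
# (tenet 5, no ESB) and nothing shared with its consumers but the contract.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

STOCK = {"sku-123": 7}  # the service owns its own data

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.rstrip("/").split("/")[-1]
        body = json.dumps({"sku": sku, "on_hand": STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the service on an ephemeral port, as if it were deployed on its own.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or team) consumes only the JSON contract, never the classes.
port = server.server_address[1]
reply = json.loads(urlopen(f"http://127.0.0.1:{port}/stock/sku-123").read())
print(reply)  # {'sku': 'sku-123', 'on_hand': 7}
server.shutdown()
```

Because the consumer depends only on the JSON contract, either side can be rebuilt and redeployed on its own schedule, which is the essence of tenet 6.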
SOA set the tone for the fundamental architectural concepts embedded in modern microservices, but it didn’t go far enough to create a model powerful enough to solve the problems of bloat and speed of development. Microservices principles have a huge impact on how we think about the software development process; they are not just a prescription for the architectural outcome. Thus, microservices can create a better outcome than their SOA predecessor.

Why monolithic apps are often better than microservices

With all of the talk these days about microservices and distributed applications, monolithic applications have become the scourge of cloud systems design. Normally, when a new technical trend emerges to replace a previous one, it is due (at least in part) to evolved thinking. The odd thing with monolithic application architecture, however, is that nobody ever proposed it as a good idea in the first place.
The idea of loosely coupled services with clear boundaries has been around for decades in software engineering. So, how did we end up with so many apps “designed” as monoliths? In a word – convenience.
The fact is, in many use cases, monolithic architectures come with some non-trivial and durable benefits that we can’t simply discount because they don’t adhere to a modern pattern. Conversely, microservices can introduce significant complexity to application delivery that isn’t always necessary.
As a fan of microservices, I fear enterprises are blindly charging forward and could be left disappointed with a microservices-based strategy if the technology is not appropriately applied. The point of this post isn’t to pour FUD onto microservices. It’s about understanding the tradeoffs and deliberately selecting microservices based on their benefits rather than technical hype.

Debugging and testing

Generally speaking, monolithic applications are easier to debug and test when compared to their microservices counterparts. Once you start hopping across process, machine, and networking boundaries, you introduce many hundreds of new variables and opportunities for things to go wrong – many of which are out of the developer’s control.
Also, the looser the dependency between components, the harder it is to determine when compatibility or interface contracts are broken. You won’t know something has gone wrong until well into runtime.
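To make this concrete, here is a hedged sketch (the service and field names are invented) of how a renamed JSON field slips past any compile-time check and surfaces only when the call actually happens, plus the kind of lightweight consumer-driven contract assertion that can move the failure into CI:

```python
# Hedged sketch: why loose coupling defers failures to runtime. "v2" of a
# hypothetical billing service renames "total" to "amount"; nothing in the
# consumer fails until the call happens.
import json

def invoice_v1():
    return json.dumps({"id": 42, "total": 99.5})   # old response shape

def invoice_v2():
    return json.dumps({"id": 42, "amount": 99.5})  # renamed field

def consumer(raw):
    data = json.loads(raw)
    return round(data["total"] * 1.2, 2)  # apply 20% tax

print(consumer(invoice_v1()))   # 119.4, works
try:
    consumer(invoice_v2())      # deploys fine, breaks at runtime
except KeyError as err:
    print("contract broken at runtime: missing", err)

# A lightweight consumer-driven contract check can catch this before deploy:
REQUIRED = {"id", "total"}
assert REQUIRED <= json.loads(invoice_v1()).keys()      # passes
assert not REQUIRED <= json.loads(invoice_v2()).keys()  # v2 violates the contract
```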

Performance

If your shiny new mobile app is taking several seconds to load each screen because it’s making 30 API calls to 30 different microservices, your users aren’t going to congratulate you on this technical achievement. Sure, you can add some clever caching and request collapsing, but that’s a lot of additional complexity you just bought yourself as a developer.
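For illustration, the “clever caching and request collapsing” mentioned above might look something like the following sketch. Everything here is hypothetical: the service names are invented, a function stands in for real network calls, and a real system would also need invalidation, error handling, and staleness policies:

```python
# A sketch of TTL-bucketed caching that collapses repeated calls to the
# 30 microservices behind a mobile screen. call_service() is a stand-in
# for a real network call.
import functools
import time

CALLS = {"count": 0}  # instrument how many real calls happen

def call_service(name):
    CALLS["count"] += 1
    return {"service": name, "data": "..."}  # pretend network response

@functools.lru_cache(maxsize=128)
def cached_call(name, ttl_bucket):
    # ttl_bucket rotates over time, so cache entries expire naturally:
    # the same (name, bucket) pair always returns the memoized result.
    return call_service(name)["service"]

def render_screen(services, ttl_bucket):
    return [cached_call(name, ttl_bucket) for name in services]

screen = ["svc-%d" % i for i in range(30)]
bucket = int(time.time() // 30)  # 30-second cache window
render_screen(screen, bucket)    # first paint: 30 real calls
render_screen(screen, bucket)    # second paint: all served from cache
print(CALLS["count"])            # 30, not 60
```

That is real complexity the monolith never needed: an in-process function call has no cache window, no staleness, and no collapsing logic to get wrong.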
If you’re talking about a complicated application being used by hundreds of thousands or millions of users, this additional complexity may well be worth the benefits of a microservices architecture. But, most enterprise line-of-business applications don’t approach anything near that scale.

Security and operations

The Fortune 500 enterprises I work with struggle to manage even the relatively coarse-grained application security models IT departments use today. If you break your application into lots of tiny services, you will also have to manage the service-to-service entitlements that come with them. While managing “many as one” has time-tested benefits, it is also contrary to the motivation behind microservices.

Planning and design

Microservices have a higher up-front design cost and can involve complicated political conversations across team boundaries. It can be tricky to explain why your new “pro-agile” architecture is going to take weeks of planning for every project to get off the ground. There’s also a very real risk of “over-architecting” these types of distributed solutions.

Final thoughts

Having said all of this, microservices can absolutely deliver significant benefits. If you’re building a complicated application and/or work across multiple development teams operating in parallel and iterating often, microservices make a ton of sense.
In fact, in these situations, monolithic applications simply become repositories of technical debt that ultimately proves crippling. There is a clear tipping point where each of the advantages of monolithic applications I described earlier becomes a liability. They become too large to debug without understanding how everything fits together, they don’t scale, and their security model isn’t granular enough to expose segments of functionality.
One way to reduce, and in some cases even eliminate, the technical “tax” associated with microservices is to pair them with an enterprise Platform as a Service (PaaS). A proper enterprise PaaS is designed to stitch together distributed services and takes deployment, performance, security, integration, and operational concerns off developers’ and operators’ plates.

Why unikernels might kill containers in five years

Sinclair Schuller is the CEO and cofounder of Apprenda, a leader in enterprise Platform as a Service.
Container technologies have received explosive attention in the past year – and rightfully so. Projects like Docker and CoreOS have done a fantastic job at popularizing operating system features that have existed for years by making those features more accessible.
Containers make it easy to package and distribute applications, which has become especially important in cloud-based infrastructure models. Being slimmer than their virtual machine predecessors, containers also offer faster start times and maintain reasonable isolation, ensuring that one application shares infrastructure with another application safely. Containers are also optimized for running many applications on single operating system instances in a safe and compatible way.
So what’s the problem?
Traditional operating systems are monolithic and bulky, even when slimmed down. If you look at the size of a container instance – hundreds of megabytes, if not gigabytes – it becomes obvious there is much more in the instance than just the application being hosted. Having a copy of the OS means that all of that OS’s services and subsystems, whether they are necessary or not, come along for the ride. This massive bulk conflicts with trends in the broader cloud market, namely the trend toward microservices, the need for improved security, and the requirement that everything operate as fast as possible.
Containers’ dependence on traditional OSes could be their demise, leading to the rise of unikernels. Rather than needing an OS to host an application, the unikernel approach allows developers to select just the OS services from a set of libraries that their application needs in order to function. Those libraries are then compiled directly into the application, and the result is the unikernel itself.
The unikernel model removes the need for an OS altogether, allowing the application to run directly on a hypervisor or server hardware. It’s a model where there is no software stack at all. Just the app.
There are a number of extremely important advantages for unikernels:

  1. Size – Unlike virtual machines or containers, a unikernel carries only what it needs to run that single application. While containers are smaller than VMs, they’re still sizeable, especially if one isn’t careful with the underlying OS image. An application that may have had an 800MB image could easily come in under 50MB. This makes moving application payloads across networks very practical. In an era where clouds charge for data ingress and egress, this could save not only time but real money.
  2. Speed – Unikernels boot fast. Recent implementations have unikernel instances booting in under 20 milliseconds, meaning a unikernel instance can be started inline to a network request and serve the request immediately. MirageOS, a project led by Anil Madhavapeddy, is working on a new tool named Jitsu that allows clouds to quickly spin unikernels up and down.
  3. Security – A big factor in system security is reducing surface area and complexity, ensuring there aren’t too many ways to attack and compromise the system. Given that unikernels compile only what is necessary into the application, the attack surface is very small. Additionally, unikernels tend to be “immutable,” meaning that once one is built, the only way to change it is to rebuild it. No patches or untrackable changes.
  4. Compatibility – Although most unikernel designs have focused on new applications or code written for specific stacks capable of compiling to this model, technology such as Rump Kernels offers the ability to run existing applications as a unikernel. Rump kernels work by componentizing various subsystems and drivers of an OS and allowing them to be compiled into the app itself.

These four qualities align nicely with the development trend toward microservices, making discrete, portable application instances with breakneck performance a reality. Technologies like Docker and CoreOS have done fantastic work to modernize how we consume infrastructure so microservices can become a reality. However, these technologies will need to change and evolve to survive the rise of unikernels.
The power and simplicity of unikernels will have a profound impact over the next five years; at a minimum they will complement what we currently call a container, and at a maximum they will replace containers altogether. I hope the container industry is ready.

Research Agenda of Larry Hawes, Lead Analyst

Greetings! As my colleague Stowe Boyd announced yesterday, I am part of a fabulous group of smart, well-respected people that have joined the rebooted Gigaom Research as analysts. I was affiliated with the original version of Gigaom Research as an Analyst, and am very pleased to be taking the more involved role of Lead Analyst in the firm’s new incarnation, as detailed in Stowe’s post.
For those of you who don’t know me, I’ve spent the last 16 years working as a management and technology consultant, enterprise software industry analyst, writer, speaker and educator. My work during that time has been focused on the nexus of communication, collaboration, content management and process/activity management within and between organizations ─ what I currently call ‘networked business’.
I intend to continue that broad line of inquiry as a Lead Analyst at Gigaom Research. The opportunity to work across technologies and management concepts ─ and the ability to simultaneously address and interrelate both ─ is precisely what makes working with Gigaom Research so attractive to me. The firm is unusual in that respect, compared to traditional analyst organizations that pigeonhole employees into discrete technology or business strategy buckets. I hope that our customers will recognize that and benefit from the holistic viewpoint our analysts provide.
With the above in mind, I present my research agenda for the coming months (and, probably, years). I’m starting at the highest conceptual level and working toward more specific elements in this list.

Evolution of Work

Some analysts at Gigaom Research are calling this ‘work futures’. I like that term, but prefer the ‘evolution of work’, as that allows me to bring the past and, most importantly, the current state of work into the discussion. There is much to be learned from history and we need to address what is happening now, not just what may be coming down the road. Anyway, this research stream encompasses much of what I and Gigaom Research are focused on in our examination of how emerging technologies may change how we define, plan and do business.

Networked Business

This is a topic on which I’ve been writing and speaking since 2012. I’ve defined ‘networked business’ as a state in which an interconnected system of organizations and their value-producing assets are working toward one or more common objectives. Networked business is inherently driven by connection, communication and collaboration, hence my interest in the topic.
While the concept of networked business is not new, it has been gaining currency in the past few years as a different way of looking at how we structure organizations and conduct their activities. As I noted in the first paragraph of this post, there are many technologies and business philosophies and practices that support networked business, and I will do my best to include as many as possible in my research and discussions.

Networks of Everything

This research stream combines two memes that are currently emerging and garnering attention: the Internet of Things and the rise of robots and other intelligent technologies in the workplace. In my vision, networks of everything are where humans, bots, virtual assistants, sensors and other ‘things’ connect, communicate and collaborate to get work done. The Internet, Web, cellular and other types of networks may be used in isolation or, more likely, in combination to create networks of everything.
I’ve had a book chapter published on this topic earlier this year, and I’m looking forward to thinking and writing more about it in the near future.

Microservices

How do we build applications that can support business in a heavily networked environment? While the idea of assembling multiple technology components into a composite application is not new (object-oriented programming and service-oriented architecture have been with us for decades), the idea continues to gain acceptance and become more granular in practice.
I intend to chronicle this movement toward microservices and discuss how the atomization of component technology is likely to play out next. As always, my focus will be on collaboration, content management and business process management.

Adaptive Case Management and Digital Experience Management

These two specific, complementary technologies have also been gathering more attention and support over the last two years and are just beginning to hit their stride now. I see the combination of these technologies as an ideal enabler of networked business and early exemplars of component architecture at the application level, not the microservice one (yet).
I’ve written about ACM more, but am eager to expand on the early ideas I’ve had about it working together with DEM to support networked business.

Work Chat

Simply put, I would be remiss to not investigate and write about the role of real-time messaging technology in business. I’ve already called work chat a fad that will go away in time, but it needs to be addressed in depth for Gigaom Research customers, because there are valid use cases and it will enjoy limited success. I will look at the viability of work chat as an extensible computing platform, not just as a stand-alone technology. Fitting with my interest in microservices, I will also consider the role that work chat can play as a service embedded in other applications.
Phew! I’m tired just thinking about this, much less actually executing against it. It’s a full plate, a loaded platter really. The scariest thing is that this list is likely incomplete and that there are other things I will want to investigate and discuss. However, I think it represents my research and publishing interests pretty well.
My question is, how does this align with your interests? Are there topics or technologies that you would like to see me include in this framework? If so, please let me know in a comment below. Like all research agendas, mine is subject to change over time, so your input is welcomed and valued.

Is Docker a threat to the Cloud ecosystem?

Docker Containers Everywhere!

Docker has undoubtedly been the most disruptive technology that the industry has witnessed in the recent past. Every vendor in the cloud ecosystem has announced some level of support or integration with Docker. DockerCon, the first ever conference hosted by Docker Inc. in June 2014, had the who’s who of the cloud computing industry tell their stories of container integration. While each company had varying levels of container integration within their platforms, they all unanimously acknowledged the benefits of Docker.

It is not often that we see Microsoft, Amazon, IBM, Google, Facebook, Twitter, Red Hat, Rackspace and Salesforce under one roof pledging their support for one technology. But what’s in it for Microsoft or Amazon to support Docker? Why are traditional PaaS players like Heroku and Google rallying behind Docker? Is Docker really creating a level playing field for cloud providers? Does Docker converge IaaS and PaaS? Can we trust the vendors offering their unconditional support for Docker? It may be too early to fully answer these questions.

Will the hype cause Docker to crash from too much attention too soon?

History and Parallels
If there is one technology that garnered wide industry support, it was Java. When Java was announced in the mid 90s, everyone, including Microsoft, showed interest until they realized how big a threat it was to their own platforms. Java’s main value proposition is Write Once, Run Anywhere – Docker containers are Build Once, Run Anywhere. Docker can be compared to Java not just from a technology aspect, but also from the potential threat it poses to certain companies. Though we have yet to see specific vendors countering the container threat by creating fear, uncertainty and doubt, it may not be too long before they initiate it.

Whether Docker will dominate remains to be seen. Does history repeat itself with Docker the way it did with Java, or even VMware? Key players from the cloud ecosystem offering everything from low-level hypervisors (VMware) to SaaS (Salesforce) are watching Docker to assess its impact on their businesses.

What is a Docker Container?

Docker is designed to manage things like Linux Containers (LXC). What is so different about Docker, when container technologies have been around since 2000 (FreeBSD jails)? Docker is the first technology that makes it easy to create and manage containers and to package things in a way that makes them usable without a lot of tweaking. Developers do not need to be experts in containerization to use Docker.

Docker containers can be provisioned on any VM running Linux kernel 3.8 or above. It doesn’t matter which Linux distribution is running for a Docker container to launch. Thanks to the powerful Dockerfile – a declarative mechanism for describing the container – it is pretty simple to pull a container from the registry and run it on a local VM in just a few minutes.
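For reference, a Dockerfile is only a handful of declarative instructions. This one is purely illustrative (the base image, package, and file names are examples, not recommendations):

```dockerfile
# Start from a known base image pulled from the registry
FROM ubuntu:14.04
# Install the runtime the app needs
RUN apt-get update && apt-get install -y python
# Copy the application into the image
COPY app.py /opt/app/app.py
# Declare what runs when the container starts
CMD ["python", "/opt/app/app.py"]
```

Building and running it is equally terse: `docker build -t myapp .` followed by `docker run myapp`, assuming the Docker daemon is installed on the host.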

The following diagrams depict what a container is – think Russian nesting dolls.

[Diagram: Stack-inception – containers and how they relate to systems software with VMs. Source: Gigaom Research]

[Diagram: Stack-inception – containers and how they relate to systems software without VMs. Source: Gigaom Research]

Containers as a Service?
There are already startups like Tutum that offer Docker as a Service by imitating existing IaaS providers. Going forward, there is a possibility that Tutum will leverage multiple IaaS offerings to dynamically provision and move containers across them. Just as IaaS customers don’t care about the brand of the servers that host their VMs, Tutum’s customers won’t care whether their container runs in Amazon or Azure. Customers will choose the geography or location where they want their container to run, and the provider will orchestrate the provisioning by choosing the cheapest available or most suitable public cloud platform.

The viability of Docker, and businesses that use Docker as IaaS offered to customers, is still an open-ended question. While Docker has great industry presence and a great deal of buzz, will this translate to production use across enterprises?

How does Docker impact the Cloud Ecosystem?

Public Cloud
From startups to enterprise IT, everyone has realized the power of self-service provisioning of virtual hardware. Public clouds like AWS, Azure and Google turned servers from commodities into utilities. Docker has the potential to reduce the cost of public cloud services by enabling more fine-grained compute resources to be utilized and by further reducing provisioning times. Additional services like load balancers, caching and firewalls will move to cloud-agnostic containers to offer portability.

Since containers are lighter-weight execution environments than VMs, Docker is well suited for hybrid cloud deployments. VMware vCHS and Microsoft Azure differentiate themselves through VM mobility features. Cloud bursting, a much-talked-about capability of hybrid cloud, can be delivered through Docker. Containers can be dynamically provisioned and relocated across environments based on resource utilization and availability.

If providers such as AWS adopt Docker as a new unit of resource, they may get cost efficiency benefits, but will management complexity and immaturity be too high of a burden right now?

Platform as a Service
Platform as a Service was one of the first service delivery models of cloud computing. It was originally created to enable developers to achieve scale without dealing with infrastructure. PaaS was expected to be the fastest growing market surpassing IaaS. But a few years later, early movers like Microsoft and Google realized that Amazon was growing faster because of its investments in IaaS. Infrastructure services had lower barriers to adoption than PaaS. Today, both Microsoft and Google have strong IaaS offerings that compete with Amazon EC2 in addition to maintaining their PaaS offerings.

The conflict in PaaS, and what has caused its slower adoption, is the tension between enterprises’ need for a prescriptive way of writing, managing, and operating applications and developers’ desire to resist such constraints. Another concern is portability when writing applications on PaaS: each “brand” of PaaS has unique services and API interfaces that are not portable between one another, and this proprietary metadata prevents the portability of code. Initiatives like buildpacks have attempted to make PaaS applications portable; moving from one PaaS instance to another of the same type, even across cloud providers, is simple. But buildpacks are still not an industry standard, because public PaaS providers like Google App Engine and Microsoft Azure don’t support the concept.

Docker delivers a simplified promise of PaaS to developers. It is important to note that some PaaS solutions, like Cloud Foundry and Stackato, now support Docker containers. With Docker, developers never have to deal with disparate environments for development, testing, staging and production. They can sanitize their development environment and move it to production without losing configuration and dependencies. This alleviates the classic “it worked on my machine” syndrome that developers often deal with. Since each Docker container is self-sufficient, containing its own code and configuration, it can be easily provisioned and run anywhere. The Dockerfile (which contains the configuration information for a Docker container) is far more portable than the concept of a buildpack. Developers can manage a Dockerfile with version control software like Git or SVN, which takes infrastructure as code to the next level.

Docker disrupts the PaaS world by offering a productive and efficient environment for developers. Developers do not need to learn new ways of coding just because their application runs in the cloud. Of course, they still need to follow best practices for designing and developing scalable applications, but their code can run as-is in a Docker container with no changes. Containers encourage developers to write autonomous code that can run as microservices. Going forward, PaaS will embrace Docker by providing better governance, manageability and faster provisioning times.

PaaS is an evolving market, and Docker is being brought into the mix. Does this accelerate evolution or disrupt it? Perhaps it is a bit of both: a standard way of dealing with environments through containers may simplify portability for customers, but it may also take those same early adopters down the path of a purer but less mature Docker-only solution.

Hypervisor and Virtualization Platforms
When VMware started offering virtualization in the form of VMware Workstation, no one thought it would become a dominant force in enterprise IT. Within a few years, VMware extended virtualization to servers and now to the cloud. The ecosystem around Docker is eager to apply lessons learned from hypervisors to Docker containers to fast-track its adoption. Eventually, Docker will become secure and robust enough to run a variety of workloads that would otherwise run on VMs or even bare metal. There is already buzz around bare metal being a better alternative to multi-tenant VMs. CoreOS, a contemporary OS, claims that it delivers better performance on bare metal with applications running inside Docker containers.

The immaturity of the tooling, and an ecosystem that is large but not yet well developed, raise the question of whether there will be a few early failures even if Docker itself succeeds.

Multi-Cloud Management Tools
Multi-cloud management software is typically called a Cloud Management Platform (CMP). CMP companies including RightScale, Scalr, Enstratius (now Dell Cloud Manager), and ScaleXtreme were all started on the premise of abstracting the underlying cloud platforms. Customers use CMP tools to define deployment topology independent of any specific cloud provider. The CMP then provisions the workload on one of the cloud platforms chosen by the customer. With this, customers never have to deal with cloud-specific UIs or APIs. To bring all the cloud platforms onto a level playing field, CMPs leverage similar building-block services on each cloud platform.

To avoid lock-in, CMPs use the basic compute, block storage, object storage and network services exposed by the cloud providers. Some CMPs deploy their own load balancers, database services and application services within each cloud platform. This brings portability to workloads without tying them to cloud-specific services and APIs. Since they are not tied to a specific platform, customers can decide to run their production environment on vSphere-based private clouds while running disaster recovery (DR) on AWS.
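The abstraction CMPs rely on can be sketched in a few lines. The provider classes and VM naming below are hypothetical stand-ins; real adapters would call the EC2 or vSphere APIs instead of returning strings:

```python
# A minimal sketch of the CMP abstraction: workloads are defined against a
# neutral interface, and a provider adapter does the cloud-specific work.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    @abstractmethod
    def create_vm(self, cpu, ram_gb):
        ...

class AWSProvider(CloudProvider):
    def create_vm(self, cpu, ram_gb):
        return f"aws-vm-{cpu}cpu-{ram_gb}gb"       # would call EC2 here

class VSphereProvider(CloudProvider):
    def create_vm(self, cpu, ram_gb):
        return f"vsphere-vm-{cpu}cpu-{ram_gb}gb"   # would call vSphere here

def deploy_workload(provider):
    # The topology is defined once, against the neutral interface, so the
    # same workload can run production on vSphere and DR on AWS.
    return [provider.create_vm(cpu=2, ram_gb=4) for _ in range(2)]

print(deploy_workload(VSphereProvider()))  # production
print(deploy_workload(AWSProvider()))      # disaster recovery
```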

In many ways, Docker offers portability similar to CMPs. Docker enables customers to declare an image and associated topology in a Dockerfile and then build it on a specific cloud platform. Just as CMPs build and maintain additional services like networking, databases and application services as managed VMs on each cloud, container providers can deploy and maintain managed containers that complement vendor-specific services. Tools like Orchard, Fig, Shipyard and Kubernetes enable next-generation providers to manage complex container deployments running on multiple cloud platforms. This overlaps with the business model of cloud management platforms, which is why companies like RightScale and Scalr are assessing the impact of Docker on their business.

Does Docker eliminate the need for a CMP, or create more of it? Docker may introduce even more complex dependency chains that are harder to troubleshoot. Will CMPs adapt to incorporate Docker management across heterogeneous clouds?

DevOps
Though there are many tools in the DevOps equation that aim to bring developers and operations closer, Docker is a framework that closely aligns with DevOps principles. With Docker, developers stay focused on their code without worrying about the side effects of running it in production. Ops teams can treat the entire container as just another artifact when managing deployments. The layered approach to the file system and dependency management makes environment configuration easier to maintain. Versioning and maintaining Dockerfiles in the same source control system (such as a Git workflow) makes managing multiple dev/test environments very efficient. Multiple containers representing different environments can be isolated while running on the same VM. It should be noted that Docker also plays well with existing tools like Jenkins, Chef, Puppet, Ansible, Salt Stack, Nagios and OpsWorks.

Docker has the potential to have a significant impact on the DevOps ecosystem. It could fundamentally change the way developers and operations professionals collaborate. Emerging DevOps-as-a-service companies like CloudMunch will likely have to adopt Docker and bring it into their CI and CD solutions.

Does Docker ultimately only become a fit for Dev/Test and QA?

Conclusion
Docker is facing the same challenges that Java went through in the late ’90s. Given its potential to disrupt the market, many players are closely assessing its impact on their businesses. There will be attempts to hijack Docker into territories it was not intended for. Docker Inc. must be cautious in its approach to avoid the same fate as Java. Remember that Sun Microsystems, the original creator of Java, never managed to exploit it the way IBM and BEA did. If not handled well, Docker Inc. faces a similar risk of having its ecosystem profit more than it does.