The 3 modes of enterprise cloud applications

One of the key attributes of a successful cloud deployment is thinking about the strategy holistically. In my post ‘CIOs are getting out of the data center business’, I introduced the idea of a Data Center/Cloud Spectrum. The spectrum provides one dimension along which to consider your cloud journey.

The second dimension is the IT portfolio itself. What are the different classes of applications and their potential dispositions? Over the course of working with companies on their cloud journeys, I have found that applications generally break out into the following classification structure.

IT Portfolio Categories

The three categories are Enterprise Applications, Application Rewrites and Greenfield Development. There are even sub-categories within each of these, but to provide a baseline, we will stick to the top-line categorization.

Enterprise Applications

Enterprise applications are by far the largest contingent of applications within the enterprise portfolio. These encompass everything from traditional ERP applications to custom applications and platforms built before the advent of cloud. Enterprise organizations may have the opportunity to virtualize these applications, but little else, because these applications were never designed with cloud in mind. While they are technically legacy applications, they range in age from 20 years old to quite recent. Regardless of age, the effort to retrofit or change them is not trivial.

Application Rewrites

Application rewrites are the subset of enterprise applications that could be rewritten to support cloud computing. Even though just about every enterprise application could technically be rewritten to support cloud, there are a number of hurdles to getting there.

Economic and priority challenges are two of the top inhibitors for application rewrites. Even if the will to change is there, there are a myriad of additional reasons that could prevent a full-blown application rewrite. Some examples include risk profile, skillset requirements, application requirements and cultural challenges.

Eventually, many of the applications in the ‘enterprise applications’ category will move to either Software as a Service (SaaS) or into an application rewrite phase. There is a much smaller contingent that will actually retire.

Greenfield development

Greenfield development is probably the most discussed area of opportunity for cloud computing. However, it also represents one of the smallest areas (relatively speaking) of the overall IT portfolio. Over time, this area will grow, but at the expense of the existing enterprise application base.

For established enterprise organizations, this area represents a very different model from web-scale or new organizations. In the case of new organizations or web-scale companies, they have the ability to start from scratch with little or no legacy to contend with. Unfortunately, the traditional enterprise does not have this luxury.

The forked approach

In order to address the varied demands coming at the CIO and the enterprise IT organization, a forked approach is needed. First, it is important not to ignore existing enterprise applications. The irony is that many providers, solutions and organizations do exactly that, because greenfield development is new, sexy and frankly more interesting in many ways. At the same time, the traditional enterprise applications cannot be ignored. A holistic, forked approach addresses both.

The holistic effort needs to take into account all three categories of demand. That may mean different models and solutions servicing them for some time. That’s ok. Part of the strategy needs to take into account how to integrate the models short-term and long-term. Over time, some workloads may shift to a different delivery method (private cloud -> SaaS).

Planning and execution

Ignoring the shift and full set of requirements is not an option. Disrupt or be disrupted. The key is to develop a clear strategy that is holistic and includes a well thought out execution plan. The change will not happen overnight. Even for organizations that are strongly aligned for change, it still takes time. For those earlier in the process, it will take more time. The sooner you start, the better.

Mesosphere’s new data center mother brain will blow your mind

Mesosphere has been making a name for itself in the world of data centers and cloud computing since 2013 with its distributed-system smarts and a series of open-source technologies, each designed to tackle the challenges of running tons of workloads across multiple machines. On Monday, the startup plans to announce that its much-anticipated data center operating system — the culmination of its many technologies — has been released as a private beta and will be available to the public in early 2015.

As part of the new operating system’s launch, [company]Mesosphere[/company] also plans to announce that it has raised a $36 million Series B investment round, which brings its total funding to $50 million. Khosla Ventures, a new investor, drove the financing along with Andreessen Horowitz, Fuel Capital, SV Angel and other unnamed entities.

Mesosphere’s new data center operating system, dubbed DCOS, tackles the complexity of treating all of the machines inside a data center as one giant computer. Similar to how an operating system on a personal computer distributes the necessary resources to all the installed applications, DCOS can supposedly do the same thing across the data center.

The idea comes from the fact that today’s powerful data-crunching applications and services — like Kafka, Spark and Cassandra — span multiple servers, unlike more old-school applications like [company]Microsoft[/company] Excel. Asking developers and operations staff to configure and maintain each individual machine to accommodate the new distributed applications is quite a lot, as Apache Mesos co-creator and new Mesosphere hire Benjamin Hindman explained in an essay earlier this week.

Mesosphere CEO Florian Leibert – Source: Mesosphere

Because of this complexity, the machines are nowhere near running at full steam, said Mesosphere’s senior vice president of marketing and business development Matt Trifiro.

“85 percent of a data center’s capacity is typically wasted,” said Trifiro. Although developers and operations staff have come a long way in tethering pieces of the underlying system together, there hasn’t yet been a nucleus of sorts that successfully links and controls everything.

“We’ve always been talking about it — this vision,” said Mesosphere CEO Florian Leibert. “Slowly but surely the pieces came together; now is the first time we are showing the total picture.”

Building an OS

The new DCOS is essentially a bundle of all of the components Mesosphere has been rolling out — including the Mesos resource management system, the Marathon framework and the Chronos job scheduler — as well as third-party applications like the Hadoop file system and YARN.

The DCOS also includes common OS features one would find in Linux or Windows, like a graphical user interface, a command-line interface and a software-development kit.

These types of interfaces and extras are important for DCOS to be a true operating system, explained Leibert. While Mesos can automate the allocation of all the data center resources to many applications, the additional features give coders and operations staff a centralized hub from which they can monitor their data center as a whole and even program against it.

“We took the core [Mesos] kernel and built the consumable systems around it,” said Trifiro. “[We] added Marathon, added Chronos and added the easy install of the entire package.”

To get DCOS up and running in a data center, Mesosphere installs a small agent on all Linux OS-based machines, which in turn allows them to be treated as an “uber operating system,” explained Leibert. With all of the machines’ operating systems linked up, it’s supposedly easier for distributed applications, like Google’s Kubernetes, to function and receive what they need.

The new graphical interface and command-line interface allow an organization to see a visual representation of all of its data center machines, all the installed distributed applications and how system resources like CPU and memory are being shared.

If a developer wants to install an application in the data center, he or she simply enters install commands in the command-line interface and DCOS should automatically load it up. A visual representation of the app should then appear, along with an indication of which machine nodes are allocating the right resources to it.

DCOS interface

The same process goes for installing a distributed database like Cassandra; you can now “have it running in a minute or so,” said Leibert.
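For illustration only, here is roughly what such an install session could look like, assuming command names along the lines of the package tooling Mesosphere later shipped publicly; the exact private-beta CLI syntax was not disclosed in the announcement, so treat these commands as a sketch rather than the documented interface:

    # Ask DCOS to deploy a packaged distributed service across the cluster
    dcos package install cassandra

    # Confirm the package is installed; the GUI should show its nodes coming up
    dcos package list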

Installing Cassandra on DCOS

A scheduler is built into DCOS that takes into account certain variables a developer might want to specify in order to decide which machine should deliver resources to which application. This is helpful because the developer can set up the configuration once and DCOS will automatically follow through with the orders.
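As a minimal sketch of what such a declarative configuration could look like, the snippet below submits an application definition to the bundled Marathon framework’s REST API (the /v2/apps endpoint documented for standalone Marathon). The hostname, application name and resource figures are hypothetical, and whether the DCOS beta exposes Marathon at this exact address is an assumption:

    import requests  # third-party HTTP client

    # Hypothetical Marathon endpoint inside the cluster (address is an assumption).
    MARATHON = "http://marathon.example.internal:8080"

    # A Marathon-style application definition: the scheduler decides which
    # machines run the instances, subject to the declared resources and
    # placement constraints.
    app = {
        "id": "/analytics/risk-engine",       # illustrative name
        "cmd": "python run_risk_engine.py",   # illustrative command
        "cpus": 0.5,                          # CPU share per instance
        "mem": 512,                           # MB of memory per instance
        "instances": 3,                       # how many copies to run
        # Spread instances across distinct hosts; constraints follow
        # Marathon's [field, operator, value?] convention.
        "constraints": [["hostname", "UNIQUE"]],
    }

    resp = requests.post(MARATHON + "/v2/apps", json=app)
    resp.raise_for_status()
    print("Submitted:", resp.json()["id"])

The constraints field is the kind of variable Leibert describes: the developer states the placement rules once, and the scheduler keeps enforcing them as machines come and go.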

“We basically turn the software developer into a data center programmer,” said Leibert.

And because DCOS is easier for a coder to program against, it’s possible that new distributed applications could be built faster than before, since the developer can now write software for a fleet of machines rather than just one.

As of today, DCOS can run in on-premises environments like bare metal and OpenStack, on major cloud providers — like [company]Amazon[/company], [company]Google[/company] and [company]Microsoft[/company] — and it supports Linux variants like CoreOS and Red Hat.

Changing the notion of a data center

Leibert wouldn’t name which organizations are currently trying out DCOS in beta, but it’s hard to imagine that companies like Twitter, Netflix or Airbnb — all users of Mesos — haven’t considered giving it a test drive. Leibert is a former engineer at Twitter and Airbnb, after all.

Beyond the top webscale companies, Mesosphere wants to court legacy enterprises, like those in the financial-services industry, that have existing data centers that aren’t nearly as efficient as those seen at Google.

Banks, for example, typically use “tens of thousands of machines” in their data centers to perform risk analysis, Leibert said. With DCOS, Leibert claims that banks can run the type of complex workloads they require in a more streamlined manner if they were to link up all those machines.

And for these companies that are under tight regulation, Leibert said that Mesosphere has taken security into account.

“We built a security product into this operating system that is above and beyond any open-source system, even as a commercial plugin,” said Leibert.

As for what lies ahead for DCOS, Leibert said that his team is working on new features like distributed checkpointing, which is basically the ability to take a snapshot of a running application so that you can pause your work; the next time you start it up, the data center remembers where it left off and can deliver the right resources as if there wasn’t a break. This method is apparently good for developers working on activities like genome sequencing, he said.

Support for containers is also something Mesosphere will continue to tout, as the startup has been a believer in the technology “even before the hype of [company]Docker[/company],” said Leibert. Containers, with their ability to isolate workloads even on the same machine, are fundamental to DCOS, he said.

Mesosphere believes new container technology will keep emerging, not just the recently announced CoreOS Rocket container technology, explained Trifiro, but as of now, Docker and native Linux cgroups containers are what customers are calling for. If Rocket gains momentum in the marketplace, Trifiro said, Mesosphere will “absolutely implement it.”

If DCOS ultimately lives up to what it promises, managing data centers could be a far less difficult task. With a giant pool of resources at your disposal and an easier way to write new applications for a tethered-together cluster of computers, it’s possible that next-generation applications could be developed and managed far more easily than they used to be.

Correction: This post was updated at 8:30 a.m. to correctly state Leibert’s previous employers. He worked at Airbnb, not Netflix.

The enterprise CIO needs a comprehensive strategic plan and quick

There are many who profess to know what goes on within the mind of the CIO and across the IT organization as a whole. The challenge is that if you have not been responsible for the role, it is very difficult to truly understand the complicated world that encompasses enterprise IT organizations. Could they be simplified? In a word, yes. But that is easier said than done. One needs an appreciation for the demands coming not just from technology, but also from other organizations within the company and from the IT organization itself. But even that statement does not convey the full depth of the complexity facing today’s CIO.

The CIO balancing act

Today’s CIO is facing a balancing act between legacy solutions and methodologies and modern-day buzzword bingo. Whether it is cloud computing, big data analytics, data center complications, new architectures, new programming languages or simply changes in business direction, the complications are far and wide. And even if a CIO agrees and wants to move to a new solution like cloud, there may be other limiting factors to consider.

IT as a strategic weapon

Strategy is not a new or foreign concept to the IT organization. The vast majority of CIOs and IT organizations have a well-defined strategy that outlines how the IT organization supports the company as a whole. At times, however, strategy becomes a victim of the interrupt-driven nature of IT requests. Always eager to please, the team makes the latest request its newest focus.

One opportunity missed by many organizations is how to transition from being the “hero” to being the sought-after strategic weapon for a company. There is a big difference between the two, and it bears directly on IT’s intrinsic value to the company. The modern-day CIO is shifting from problem solving to providing business leverage. That is not to say that the IT organization gives up problem solving. It remains, but it is table stakes in today’s IT requirements.

Spanning the industries

The shift in thinking is not confined to a specific region or industry. Silicon Valley, including its wide geography from San Francisco to San Jose, is not alone in the opportunity. Neither are the new upstarts in the web-scale category. Every single industry and region has the same challenge. Recall that companies operate in a global economy and need to respond accordingly. Eat or be eaten. Even the incumbent is not immune to the changes sitting at the front door.

Cloud implementation v2.0

One way IT organizations are changing the conversation between IT and Line of Business (LoB) teams is the introduction of cloud computing. Beyond the common use cases (CRM, HRIS, email, etc.), the implementations vary greatly. One emerging trend is a move to ‘cloud implementation v2.0’. Organizations were quick to try cloud-based services, with very mixed results. In many cases, the attempt was fairly haphazard. IT organizations are now stepping back and rethinking their approach to cloud in a more holistic fashion. Where does it apply, how, why and when? But it goes much broader than that.

Shifting gears to focus on data

In order to know where to apply cloud, understanding the larger objective is critical. This is where data-centric conversations come into play. In the end, it is not just about the application and data, but also about the value to the company. Add in conversations about Big Data, Analytics, the Internet of Things (IoT) and the Industrial Internet, and one can see how the complexity grows exponentially.

The clock is ticking…

The growing complexity for the CIO and IT organization does not translate to more available time. Quite the contrary. The demands that companies are placing on their IT organization are increasing exponentially. This is where a new strategic vision is needed. In order to respond in a timely manner, CIOs will need to rethink their organization, processes, focuses and technology in a holistic manner. It will take time to evolve to the new model. But timing is of the essence. The demand is here today and is only increasing.

With Scalable Data Stores Around, Is NoSQL a Non-Starter?

The discussion around NoSQL seems to have evolved from one about abolishing SQL databases to one about coexisting with SQL databases to one where SQL is actually regaining the momentum. Is this a case of “the next big thing” actually instigating concerned parties to improve the existing solution rather than showing it the door?

Today in Cloud

Professionally, I’m not too interested in Web 2.0 services and the consumer-centric issues that accompany them. Facebook’s data-security practices: I’m listening. Facebook’s privacy settings: Can’t we talk about its memcached architecture instead? Personally, however, I’m very interested, which leaves me with what I like to call the “Google conundrum.” I want to vilify companies like Facebook and Google for growing too big and hiding self-serving intentions behind altruistic guises, but then my professional side kicks in. Their open-source releases contribute so much to the field of massive-scale computing that I can’t help but admire them. Long story short: Today, Om reported on how adoption of Facebook’s HipHop tool is catching on and drastically speeding up PHP apps of all stripes.

Today in Cloud

Twitter’s decision to build its own data center has some asking why web companies abandon their use of cloud computing when they hit a certain size. The question this raises, some posit, is whether cloud providers are doing enough to meet the needs of such companies. I wrote about this issue in May 2009, and the facts haven’t changed since: Public clouds will give most applications and users — even in the enterprise — all the resources and control they’ll need, but they’re not designed for huge operations like Facebook and Twitter that need to manage huge data volumes and maintain high performance levels. This could change over time, but I think cloud providers will settle for average-scale applications in the near term.

Today in Cloud

The news that ARM intends to get into the server market has me wondering how realistic the notion of a viable x86 alternative really is. More accurately, I wonder how soon any alternative-architecture-based processors will be able to steal significant market share. Intel and AMD have invested heavily in x86 and continue to do so, as have server makers, software vendors and end-users, so I suspect there will be major resistance to any such transformation. Plus, we love things to be bigger, faster and stronger, and every new generation of x86 processors fulfills this desire. The transformation will happen, but when, where and to what degree remain to be seen.

Today in Cloud

My Weekly Update about COTS solutions in webscale companies has generated much reaction (mostly negative), with much of it stemming from the condensed version posted on GigaOM. However, I’d like to hear what Pro readers think, based on the full version available here. Far from an endorsement of proprietary or commercial solutions, the post is really meant to question whether it’s truly impossible for them to work within web-based companies that need scalable data solutions. Maybe I’m way off base, but it doesn’t seem inconceivable that commercial vendors could reduce the need to develop backend technologies in-house, thus freeing brain power to focus on the core product. I’d love to hear what you think.