The enterprise CIO is moving to a consumption-first paradigm

Take yourself back a couple of decades and the IT industry looked very different than it does today. Back then, the number of solution choices was relatively limited, and those choices were available only to organizations with the finances to afford them. Many of the core services had to be built from the ground up. Why? The IT marketplace simply lacked the volume and maturity to offer core services off the shelf. Today, that picture is very different!

For example, consider email. Back in 1995, Microsoft Exchange was just a fledgling product that was less than two years old. The dominant email solutions were cc:Mail (acquired by Lotus in 1991) and Lotus Notes (whose parent, Lotus, was acquired by IBM in 1995), along with a myriad of mainframe, mini and UNIX-based mail servers.

Every enterprise had to set up and manage its own email environment. Solutions like Google Apps and Microsoft 365 simply did not exist. There was no real alternative…except for outsourcing.

Outsourcing 1.0

In the mid-to-late ’90s, outsourcing came into vogue as a means for enterprises to divest themselves of IT functions. The theory centered on economies of scale and expertise that most enterprises simply did not possess. Back then, IT was squarely seen as a cost center.

Unfortunately, outsourcing did not deliver on the promise. It was an expensive, opaque option that created significant challenges for many enterprise organizations. Even today, these wounds run deep with IT leaders when they think of leveraging cloud-based solutions.

The intersection of IT maturity and focus

Fast forward to the present day. Organizations are redoubling their efforts to advance their position through leverage. That effort brings a laser focus upon the IT organization to pinpoint the activities that deliver differentiated value.

At the same time, the IT marketplace is far more mature. There are multiple options offered through a number of avenues. A startup company can spin up all of its technology services without purchasing a single server or building a single data center. Cloud computing is a key to this leverage point.

The intersection of these two dynamics is causing CIOs and IT organizations to rethink their priorities to better align with the overall business objectives. IT organizations are looking for leverage where they no longer have to do everything themselves. This demonstrably changes the dynamic of speed, agility and focus.

Moving to a consumption-first paradigm

Enter the consumption-first paradigm. Whereas past IT organizations took a build-first approach out of necessity, today there is a better option: organizations can move to a consume-first paradigm.

Consumption First

Within this paradigm, applications and services are evaluated through a consume-first methodology. If the application or service is not a good fit, it moves to a configure-first methodology. If all else fails, it falls to build-first. The goal is to consume as much as possible without having to build or configure.
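
To illustrate the order of evaluation, here is a minimal sketch of the cascade. The function names, fields and threshold are hypothetical illustrations, not part of any specific framework:

```python
# A minimal sketch of the consume -> configure -> build evaluation cascade.
# The fit checks and threshold below are hypothetical illustrations.

def evaluate_service(candidate):
    """Return the sourcing approach for a candidate application or service."""
    if fits_as_a_service(candidate):        # consume: an off-the-shelf service meets the need
        return "consume"
    if fits_with_configuration(candidate):  # configure: a packaged product, configured not customized
        return "configure"
    return "build"                          # build: reserved for strategic, differentiating capabilities

def fits_as_a_service(candidate):
    # Hypothetical check: a non-differentiating need with a suitable service available
    return not candidate["differentiating"] and candidate["service_available"]

def fits_with_configuration(candidate):
    # Hypothetical check: a packaged product covers most requirements via configuration alone
    return candidate["config_coverage"] >= 0.8

email = {"differentiating": False, "service_available": True, "config_coverage": 1.0}
print(evaluate_service(email))  # -> "consume"
```

In practice the checks would be business-driven (strategic differentiation, requirements coverage, total cost), but the sequencing is the point: consume before configure, configure before build.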

The evaluation process is as important as changing the paradigm. It is critical to clearly understand what is strategic and differentiating for the company. That understanding then becomes the yardstick for determining which components present the greatest opportunity for focus and leverage.

Paradigm change is hard

Changing the paradigm does not happen overnight. Many will fight the change and develop reasons why consumption is not a good idea. It is important to understand the motivations. From experience, the fundamental concern often comes back to job loss and confusion. For the CIO, it is important to tackle these concerns head-on.

Equally important is to maintain a balance in evaluating the holistic situation. Understanding the impact on people and processes is often harder than the technology shift. I wrote about this two weeks ago in Time’s Up! Changing core IT Principles.

Coming full circle

Moving to a consumption-first paradigm is not limited to email. It is starting with data centers and core applications (like email) and moving up the stack. The question is: How prepared are you for the coming change? Newer generations of staff, employees and customers are already demanding a different class of services.

The evolution has just started. Moving to a consumption-first paradigm is the core component in making the transformation. Ironically, a great many organizations are still working with paradigms from the ’90s, trying to do it all themselves. They believe (mistakenly) that they ‘have’ to. The reality is often very different when viewed from an objective perspective.

Do not get caught flat-footed. Change is already happening and picking up momentum. Unlike past evolutions, this is not one you want to be on the trailing edge of.

Time’s up! Changing core IT principles

There is a theme gaining ground within IT organizations, and a number of examples support it. This theme will change the way solutions are built, configured, sold and used. Even the ecosystems and ancillary services will change. It also changes how we think, organize, lead and manage IT organizations. The theme is:

Just because you (IT) can do something does not mean you should.

Ironically, there are plenty of examples in the history of IT where the converse of this principle served IT well. But times have changed, and so must the principles that govern the IT organization.

Apply it to the customization of applications and you get this:

Just because IT can customize applications to the nth degree does not mean they necessarily should.

A great example of this is the configuration and customization of applications. Just because IT could customize the heck out of an application, should they have? The argument often made here is that customization provides some value, somewhere, either real or (more often) perceived. The reality is that it comes at a cost, sometimes a very significant and real one.

Making it real

Here is a real example that has played out time and time again. Take application XYZ. It is customized to the nth degree for ACME Company. Preferences are set, not necessarily because they should be, but because they could be. Fast-forward a year or two and it is time to upgrade XYZ. The costs are significantly higher due to the customizations; the upgrade requires more planning, more testing, more work all around. Were those costs justified by the benefit of the customizations? Typically not.

Now it is time to evaluate alternatives to XYZ. ACME builds a requirements document based on XYZ (including its myriad customizations). Once the alternatives are matched against the requirements, the only solution that really fits the need is the incumbent. This approach gives significant weight to the incumbent solution, thereby limiting the alternatives.

These examples are not fictitious scenarios. They are very real and have played out in just about every organization I have come across. The lesson here is not that customizations should be avoided. The lesson is to limit customizations to those that are truly necessary and provide significant value.

And the lesson goes beyond configuration: it is about understanding IT’s true value based on what it should and should not do.

Leveraging alternative approaches

Much is written about the value of new methodologies and technologies. Understanding IT’s true core value opportunity is paramount. The value proposition starts with understanding how the business operates. How does it make money? How does it spend money? Where are the opportunities for IT to contribute to these activities?

Every good strategy starts with a firm understanding of the ecosystem of the business; that is, how the company operates and how it interacts with others. A good target, and one many are finding success with, sits furthest from core company operations and is therefore the hardest to justify in true business terms. For many, it starts with the data center and moves up the infrastructure stack. For a bit more detail: CIOs are getting out of the data center business.

Preparing for the future today

Is your IT organization ready for today? How prepared are your organization, processes and systems to handle real-time analytics? As companies consider how to engage customers on mobile platforms in real time, the shift from batch-mode to real-time data analytics quickly takes shape. Yet many of the core systems and infrastructure are nowhere near ready to take on the changing requirements.

Beyond data, are the systems ready to respond to the changing business climate? What is IT’s holistic cloud strategy? Has a DevOps methodology been adopted? What about container-based architectures?

These are only a few of the core changes in play today…not in the future. If organizations are to keep up, they need to start making the evolutionary turn now.

Changing the CIO conversation from technology to business

For many years, traditional IT thinking has served the IT function well. Companies have prospered from both the technological advances and consequent business improvements. Historically, the conversation typically centered on some form of technology. It could have been about infrastructure (data centers, servers, storage, network) or applications (language, platform, architectures) or both.

Today, we are seeing a marked shift in the conversations happening with the CIO. Instead of talking about the latest bells and whistles, the conversation is increasingly apt to involve business enablement and growth. The changes did not happen overnight. For any IT leader, it takes time to evolve the conversation. Not only does the IT leader need to evolve, but so do their team and fellow business leaders. Almost two years ago, I wrote about the evolution of these relationships in Transforming IT Requires a Three-Legged Race.

Starting the journey

For the vast majority of IT leaders, the process is not an end-state, but rather a journey of evolution that has yet to start in earnest. For many I have spoken with, there is interest, but not a clear path to take.

This is where an outside perspective is helpful. It may come from mentors, advisors or peers. It needs to come from someone who is trusted and objective. This is key, as the change itself will touch the ethos of the IT leader.

The assessment

Taking a holistic assessment of the situation is critical here. It requires a solid review of IT leadership, organizational capability, process maturity and the current state of the technology. The context for the assessment comes back to the core business strategy and objectives.

Specific areas of change are those that are clearly not strategic or differentiating in support of the company’s strategy and objectives. A significant challenge for IT organizations will be: just because you can manage it does not mean you should manage it.

Quite often, IT organizations get too far into the weeds and lose sight of the bigger picture. To fellow business leaders, this is often perceived as a disconnect between IT and Line of Business (LoB) leaders. It alienates IT leaders and makes it harder to foster stronger bonds between them.

Never lose sight of the business

It is no longer adequate for the CIO to be the only IT leader familiar with the company’s strategy and objectives. Any IT leader today needs to fully understand the ecosystem of how the company makes and spends money. Without this clarity, the leader lacks the context in which to make healthy, business-centric decisions.

The converse is an IT leader who is well versed in the business perspective outlined above. That leader will gain greater respect among business colleagues and will have the context to understand which decisions matter most.

Kicking technology to the curb

So, is IT really getting out of the technology business? No! Rather, think of it as an opportunity to focus. Focus on what is important and what is not. What is strategic for the company and what is not? Is moving to a cloud-centric model the most important thing right now? What about shifting to a container-based application architecture model? Maybe. Maybe not. There are many areas of ripe, low-hanging fruit to be picked. And just as with fruit, the degree of ripeness will change over time. You do not want to pick spoiled fruit. Nor do you want to pick it too soon.

One area of great interest these days is the data center. I wrote about this in detail in CIOs are getting out of the Data Center business. It is not the only area, but it is one of many places to start evaluating.

The connection between technology divestiture and business

By assessing which areas are not strategic and divesting them, IT gains greater focus and the ability to apply resources to more strategic functions. Imagine if those resources were redeployed to provide greater value to the company strategy and business objectives. Divesting non-strategic areas frees IT to move into other areas and conversations.

By changing the model and using business as the context, it changes the tone, tenor and impact IT can have on a company. The changes will not happen overnight. The evolution from technology to business discussions takes vision, perseverance, and a strong internal drive toward change.

The upside is a change in culture that is both invigorating and liberating. It is also a model that supports the dynamic changes required for today’s leading organizations.

There comes a point when it is not just about storage space

Is the difference between cloud storage providers just about free space? In a word, no. I wrote about the cloud storage wars and the potential bubble here:

The cloud storage wars heat up

http://avoa.com/2014/04/29/the-cloud-storage-wars-heat-up/

4 reasons cloud storage is not a bubble about to pop

http://avoa.com/2014/03/24/4-reasons-cloud-storage-is-not-a-bubble-about-to-pop/

Each of the providers is doing its part to drive value into its respective solution. To some, value includes the amount of ‘free’ disk space included. Just today, Microsoft upped the ante by offering unlimited free space for its OneDrive and OneDrive for Business solutions.

Is there value in the amount of free space? Maybe, but only to a point. Once a provider offers an amount above normal needs (or unlimited), the additional space loses its value. I do not have statistics, but I would hazard a guess that ‘unlimited’ is more marketing leverage than practical benefit, since most users consume less than 50GB each.

Looking beyond free space

Once a provider offers unlimited storage, one needs to look at the features and functionality of the solution. Not all solutions are built the same, nor do they offer similar levels of capability. Enterprise features, integration, ease of use and mobile access are just a few of the differentiators. Even with unlimited storage, if the solution does not offer the features you need, the value of that storage is greatly diminished.

The big picture

For most, cloud storage is about replacing a current solution. On the surface, the amount of free storage is a quick win. However, the real issue is compatibility and value beyond the amount of free storage. Does the solution integrate with existing systems? How broad is the ecosystem? What about Single Sign-On (SSO) support? How much work will it take to implement the solution and train users? These are just a few of the factors that must be considered.
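
One hedged way to frame that evaluation is a simple weighted scorecard in which free space is only one of several criteria. The criteria, weights and scores below are hypothetical illustrations, not a recommended rubric:

```python
# A minimal sketch of weighing cloud storage providers on factors beyond free
# space. Criteria, weights and scores are hypothetical illustrations.

CRITERIA_WEIGHTS = {
    "integration": 0.30,    # fit with existing solutions and breadth of ecosystem
    "sso_support": 0.25,    # Single Sign-On (SSO) support
    "ease_of_use": 0.20,    # implementation effort and user training
    "mobile_access": 0.15,
    "free_space": 0.10,     # matters least once normal needs are covered
}

def score(provider_scores):
    """Weighted score for one provider (each criterion rated 0-5)."""
    return sum(CRITERIA_WEIGHTS[c] * provider_scores[c] for c in CRITERIA_WEIGHTS)

provider_a = {"integration": 4, "sso_support": 5, "ease_of_use": 3,
              "mobile_access": 4, "free_space": 5}
provider_b = {"integration": 2, "sso_support": 1, "ease_of_use": 4,
              "mobile_access": 5, "free_space": 5}
print(round(score(provider_a), 2), round(score(provider_b), 2))  # -> 4.15 2.9
```

The point of the exercise is not the exact numbers but forcing the comparison onto the factors that actually determine long-term value.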

Is the cloud unstable and what can we do about it?

The recent major reboots of cloud-based infrastructure by Amazon and Rackspace have resurfaced the question of cloud instability. Days before the reboots, both Amazon and Rackspace noted that they were due to a vulnerability in the Xen hypervisor. Barb Darrow of Gigaom covered this in detail here. Ironically, the notice came less than a week before the action took place, leaving many flat-footed.

Outages are not new

First, let us admit that outages (and reboots) are not unique to cloud-based infrastructure. Traditional corporate data centers face unplanned outages and regular system reboots. For Microsoft-based infrastructure, reboots may happen monthly due to security patch updates. Back in April 2011, I wrote a piece, Amazon Outage Concerns are Overblown, after Amazon had endured another outage of its Virginia data center that very day. In response, customers and observers took shots at Amazon. But is Amazon’s outage really the problem? In that piece, I suggested that customers misunderstand the problem when they think about cloud-based infrastructure services.

Cloud expectations are misguided

As with the piece back in 2011, enterprise customers’ expectations of cloud-based infrastructure have not changed much. The expectation has been (and still is) that cloud-based infrastructure is resilient, just like that within the corporate data center. The truth is very different. There are exceptions, but the majority of cloud-based infrastructure is not built for hardware resiliency. That is by design. Service providers expect application and service resiliency to rest further up the stack when you move to cloud. That is very different from traditional application architectures in the corporate data center, where the infrastructure provides the resiliency.

Time to expect failure in the cloud

Like the web-scale applications running on cloud-based infrastructure today, enterprise applications need to be rearchitected. If the assumption is that infrastructure will fail, how does that change architectural decisions? When leveraging cloud-based infrastructure services from Amazon or Rackspace, this paradigm plays out well. If you lose the infrastructure, the application keeps humming away. Take out a data center, and users are still not impacted. Are we there yet? Nowhere close. But that is the direction we must take.
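
To make the idea concrete, here is a minimal sketch of what ‘expecting failure’ can look like at the application layer. The service, endpoints and retry policy are hypothetical illustrations; real implementations add health checks, backoff and data replication on top of this:

```python
# A minimal sketch of "design for failure": the application assumes any single
# region can disappear and fails over, rather than relying on the infrastructure
# to stay up. The endpoints below are hypothetical.
import urllib.request
import urllib.error

REGION_ENDPOINTS = [
    "https://us-east.example.com/api/orders",  # primary (hypothetical)
    "https://us-west.example.com/api/orders",  # warm standby (hypothetical)
    "https://eu-west.example.com/api/orders",  # cross-continent fallback (hypothetical)
]

def fetch_orders(timeout=2):
    """Try each region in turn; treat an unreachable region as a normal event."""
    for endpoint in REGION_ENDPOINTS:
        try:
            with urllib.request.urlopen(endpoint, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            # Region unreachable or slow: move on instead of surfacing an outage.
            continue
    raise RuntimeError("all regions unavailable")  # only when every region fails
```

The design choice is simply that the application treats infrastructure failure as routine, which is the inverse of the traditional model where the infrastructure is expected never to fail.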

Getting from here to there

Hypothetically, if an application were built with the expectation of infrastructure failure, the recent failures would not have impacted the delivery to the user. Going further, imagine if the application could withstand a full data center outage and/or a core intercontinental undersea fiber cut. If the expectation were for complete infrastructure failure, then the results would be quite different. Unfortunately, the reality is just not there…yet.

The vast majority of enterprise applications were never designed for cloud. Therefore, they need to be tweaked, re-architected or, worse, completely rewritten. There’s a real cost to do so! Just because the application could be moved to cloud does not mean the economics are there to support it. Each application needs to be evaluated individually.

Building the counterargument

Some may say that this whole argument is hogwash. So, let us look at the alternative. Building cloud-based infrastructure to be as resilient as its corporate brethren would, at a minimum, be a very expensive venture. Infrastructure is expensive. Back in the 1970s, a company called Tandem Computers addressed this with its NonStop systems. In the 1990s, the Tandem NonStop Himalaya class systems were all the rage…if you could afford them. NonStop was particularly interesting for financial services organizations that 1) could not afford the downtime and 2) had the money to afford the system. Tandem was eventually acquired by Compaq, which in turn was acquired by HP; NonStop lives on today as part of HP’s Integrity NonStop products. Yet even with all of that infrastructure redundancy, many organizations are still just a data center outage away from impacting an application. The bottom line is: it is impossible to build a 100% resilient infrastructure, both because 1) it is cost prohibitive and 2) it becomes a statistical probability problem. For many, the decision comes down to the statistical probability of an outage weighed against the protections taken.

Making the move

Over the past five years or so, companies have looked at the economics to build redundancy (and resiliency) at the infrastructure layer. The net result is a renewed focus on moving away from infrastructure resiliency and toward low-cost hardware. The thinking is: infrastructure is expensive and resiliency needs to move up the stack. The challenge is changing the paradigm of how application redundancy is handled by developers of corporate applications.

Leveraging existing assets for disruption

Let’s compare two ways big tech companies are differentiating themselves this week: using contrarian marketing angles and using existing assets to enter new markets. We’ll look at four stories of disruptive moves. A common strand we find in each of them, whether a move of market enhancement or market extension, is the value of low-cost infrastructure and a large-scale customer base for continuing disruption.

Two of the stories involve going at competitors from a different angle: T-Mobile’s bet on a new network technology and Amazon’s Prime Pantry. And two of the stories involve using built-out infrastructure to enter a new competitive niche: Amazon’s use of its own delivery trucks and Facebook’s move into payment services.

T-Mobile using a new technology

T-Mobile is known for disrupting the market for mobile service on a price basis. This makes sense given that the company hasn’t historically been known for being first to market with the highest-speed networks. Now, however, T-Mobile is an early adopter, rolling out a new technology that should significantly increase the consistency of call quality within its LTE network.

Gigaom’s Kevin Fitchard had a scoop on the story last June, but he was able to get confirmation on the deployment this week from Mark McDiarmid, the company’s VP of technology. The 4-by-2 multiple input-multiple output (4×2 MIMO) technology uses multiple antennas to send twice as many transmissions to a phone as is usual in LTE (2×2 MIMO). While this doesn’t crank up network speed, it does improve transmission around obstacles or at the fringes of a network.

Amazon challenging online grocers

On Wednesday, Amazon launched Prime Pantry, which unlike its Prime Fresh service doesn’t compete with FreshDirect, Peapod, or Instacart on speed of service. However, Gigaom’s Laura Hazard Owen, who had the story and has already done price comparisons with FreshDirect, Instacart, and local New York City grocers, has found that Prime Pantry clearly undercuts its competitors on price. The service allows customers to mix and match small quantities of low-cost groceries for a combined delivery charge of $5.99 per 45 pounds of product.

Amazon using its own trucks

No, there’s no need to look overhead for the drones quite yet, but Amazon is testing the use of its own trucks for ‘final mile’ package delivery in San Francisco, New York, and Los Angeles. The company is moving to vertically integrate its supply chain by supervising contractor-supplied trucks and drivers. This trial does compete on speed of service, as it enables same-day delivery of packages that FedEx, UPS or the U.S. Postal Service can’t match. These trials were started late last year, following an earlier rollout in the UK. But the Wall Street Journal has the story, reporting that last year’s Christmas delivery snafus with the usual carriers created a greater sense of urgency for Amazon to control its last-minute delivery.

The move can also be seen as an effort to counteract the advantage that brick-and-mortar retailers like Wal-Mart have in providing same-day, local delivery in combination with the proficiency they have now also achieved in Amazon’s online turf. However, Amazon’s scale of delivery and prowess in logistics present a direct threat to the traditional package delivery services.

Facebook filing to provide online banking

Fortune this week picked up on reports earlier this month in the Irish press of Facebook filing with Ireland’s central bank for an e-money license to provide its own Bitcoin-like currency. With approval possible within the month, Facebook would be able to provide the service across Europe.

In leveraging its customer and technology network to provide a form of payment services, Facebook would be joining non-banking companies like Google, T-Mobile and Sprint that are similarly looking to leverage their technology platforms in financial services. Facebook’s approach is slightly different, however, and with its services and reach, the company may be especially suited to serving the many under-banked customers in developing markets.

The value of low-cost infrastructure

A common strand in all of these moves—whether market enhancements or market extensions—is the value of low-cost infrastructure and a large-scale customer base for continuing market disruption. In that sense, this dynamic is just a modern update to the manufacturing advantage that the Japanese automakers used successfully against Detroit in the 1970s and 1980s.

While Toyota and Honda initially competed against the American car manufacturers at the low end of the market, they used factory automation, in part, to gain a cost advantage in producing small and simple automobiles. Having solidified that niche in the market, however, they were able to leverage that same technology to provide better and better-made cars. First they were able to establish a quality advantage, then they moved up-market to producing larger autos. Finally, they were able to disrupt the luxury segment with their Lexus and Acura brands. (Even earlier, Honda had been able to leverage its manufacture of motorcycles and motorcycle engines to get started as a low-cost manufacturer of autos.)

The implications for enterprise IT

The implications of this dynamic for enterprise IT are pretty clear. A low-cost technology delivery platform becomes a vessel for delivering more, new, and better products both within traditional markets and beyond. Companies that have achieved a common, low-cost and flexible technology platform are in a position to roll out new competitive offerings, enter new markets, and counter competitive challenges. Those that have not are sitting ducks for sharp-shooting competitors within their own markets, as well as for hunters from neighboring or foreign markets—or putative partners within their supply chains.

The direction of public cloud databases

In his Weekly Update, David Linthicum, the Gigaom Research curator for cloud, has a great look at public cloud databases. Asking, “Will cloud databases and cloud infrastructure combine, or have they already?”, David notes the inevitable pull toward optimization, and then integration, of one vendor’s products with its other services. Along with pondering this longer-term product direction, he notes the compelling case for more and more public cloud-based database solutions.

Two Gigaom Research takes on Microsoft’s quarterly results

Microsoft announced its quarterly results this week, and two Gigaom Research curators weighed in with their perspectives.

As social media curator Stowe Boyd observed, Microsoft posts record Q2 results and beats expectations, but no new CEO. Cloud curator David Linthicum, meanwhile, asked, “Will the public cloud ever be more ‘Microsofty’?”

Both analysts see Microsoft’s future in its enterprise business, and the firm’s eventual naming of new leadership as an opportunity to move aggressively in that direction. Its Azure cloud platform is the firm’s new flagship, though as David notes, the Azure environment is sufficiently populated with legacy developers to potentially slow the vessel’s speed against stiff competition.