One of the biggest challenges is performance, a specter that adds latency to transactions and only grows stronger with physical distance.
Sinclair Schuller is the CEO and cofounder of Apprenda, a leader in enterprise Platform as a Service.
When the phrase “hybrid cloud” is mentioned, some technologists will tell you it is the eventual end state of cloud computing, while others chuckle. Those who chuckle typically view hybrid as a phrase used by vendors and customers who have no cloud strategy at all. But hybrid is real and here to stay. Not only is it here to stay, but the hybrid cloud will also reshape cloud computing forever.
People today imagine public cloud to be an “amorphous, infinitely scalable computing ether.” They think moving to the cloud frees them from the need to deal with computing specifics, and that cloud makes them independent of location, risk, and deployment model. They think enterprises that move to the cloud no longer need to depend on pesky IT departments or deal with the risks associated with centralized computing. This perception of computing independence and scale couldn’t be further from the truth.
The promise of cloud is one where anyone who needs compute and storage can get it in an available, as-needed, and robust manner. Cloud computing providers have perfected availability to the point where, even with occasional mass outages, they outperform the service-level agreements (SLAs) of internal IT departments. This does come at a cost, however.
Cloud computing is arguably the largest centralization of technology the world has ever seen or will see. For whatever reason, many people don’t immediately realize that the cloud is centralized, something that deserves heavy scrutiny, possibly because the marketing behind cloud is vague and rarely describes a tangible “place.” Don’t be fooled.
When an enterprise selects a cloud vendor, it is committing to that provider in a meaningful way. As applications are built for or migrated to a cloud, switching costs get very high. The nature of this market is driven by a network effect where, all else being equal, each prospective customer of a cloud provider (AWS, Microsoft, etc.) benefits by consuming a cloud that has many customers over one that has fewer, since it indicates lower risk and helps drive the economies that make a given cloud attractive.
If we play this future out, we’ll likely see the cloud infrastructure market collapse to just a few massive, global providers. This will partly be driven by success of the individual providers and the consolidation of smaller players who have great technology but simply can’t compete at that scale. Just take a look at the acquisition of Virtustream by EMC just prior to Dell’s acquisition of EMC for a recent example.
A look at recent market share estimates shows exactly that, with Amazon, Microsoft, IBM, and Google accounting for 50 percent of the global cloud infrastructure market. One day, these four vendors will likely account for 80 percent of the market. Compare that to the often-criticized banking world, where despite the massive size of today’s banks, the list of banks that hold 50 percent of global deposits is much longer than just four banks. If we applied the same standard to cloud computing, we’d certainly be infuriated and demanding that these “too big to fail” computing providers be broken up.
To be clear, I’m not suggesting that what’s happening is bad or that public cloud is bad, but rather to point out the realistic state of cloud computing and the risk created by centralizing control to just a few providers. Cloud would likely never have succeeded without a few key companies making massive bets. The idea of a truly decentralized, global cloud would likely have been the wrong starting point.
Let’s explore the idea that a global decentralized cloud, or something more decentralized than what we have now, is the likely end state. Breaking up cloud providers isn’t necessary or optimal. Unlike banking, technology is capable of layers of abstraction to mitigate these sorts of centralized risks.
Most large enterprises looking to adopt cloud are making two large considerations in their decision process:
- They can’t shut down their entire IT department and replace it with cloud. There are many practical reasons why this is unlikely.
- Many are keenly aware of the risks associated with depending on a single vendor for all their cloud computing needs.
The first consideration makes it difficult to adopt a public cloud without at least considering how to reconcile its differences with on-premises infrastructure, and the second makes it difficult to commit to one provider at a level that is incompatible with another. The combination of centralization among public cloud providers and the search for symmetry between off-premises and on-premises computing is driving enterprises to explore (and in some cases demand) hybrid capabilities in layers that abstract away infrastructure. In fact, hybrid has become synonymous with multi-cloud.
Technical layers like enterprise PaaS software platforms and cloud management platforms have evolved to offer multi-cloud capabilities, treating underlying resources as abstract. Over the coming years, multi-cloud features in these layers will likely lead to a much more decentralized computing model, in which something like a PaaS layer fuses resources from public clouds, on-premises infrastructure, and regional infrastructure providers into logical clouds.
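The “logical cloud” idea above can be sketched as a thin placement layer over heterogeneous providers. Everything below (the `Provider` and `LogicalCloud` names, the capacity-first placement rule) is a hypothetical illustration, not any vendor’s actual API:

```python
from dataclasses import dataclass


@dataclass
class Provider:
    """One pool of compute: a public cloud, an on-prem cluster, etc."""
    name: str
    free_cores: int


class LogicalCloud:
    """Toy multi-cloud layer: fuses heterogeneous providers into a single
    pool and places workloads wherever capacity exists (illustrative only)."""

    def __init__(self, providers):
        self.providers = providers

    def place(self, cores_needed):
        # Prefer the provider with the most spare capacity.
        for p in sorted(self.providers, key=lambda p: -p.free_cores):
            if p.free_cores >= cores_needed:
                p.free_cores -= cores_needed
                return p.name
        raise RuntimeError("no single provider has capacity")
```

The point of such a layer is that the customer, not any one vendor, owns the control point: workloads land on whichever underlying cloud fits, and the providers become interchangeable.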
At least in the enterprise space, “private clouds” will really be an amalgam of resources and will behave as the single “amorphous ether” we tend to ascribe to cloud in the first place. The cloud market will not be one where a handful of vendors control all the compute and customers are at their mercy. Instead, cloud will be consumed through multi-cloud layers that protect customers from the inherent centralization risk. The end state is a decentralized model with control points owned by the customer through software, a drastic reshaping to say the least.
IT-monitoring specialist ScienceLogic just landed $43 million in a Series D funding round, which will give it more resources as it continues to advocate for the hybrid cloud. The startup now has $84 million in total investment.
ScienceLogic is banking on the growth of companies adopting a hybrid-cloud strategy, which means that they have both on-premise infrastructure that works in conjunction with outside cloud providers, explained ScienceLogic CEO Dave Link.
While companies may have monitoring tools available for their existing infrastructure that can scan log files and spot problems in the data center, when it comes to seeing the big picture of how their infrastructure connects with what they may have running in the cloud, there’s not a lot out there to give them that holistic view, Link explained.
ScienceLogic has a console that can supposedly connect via APIs to the different monitoring tools provided by public-cloud providers, like Amazon’s CloudWatch. It can siphon data from the CloudWatch monitoring tool and then correlate it with the data its own CloudMapper tool collects from a company’s on-premise infrastructure, then “intelligently map out these relationships” between the two environments, said Link.
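The correlation step can be sketched in miniature. In practice the cloud-side samples would come from CloudWatch’s metrics API and the on-prem side from an agent; here, `align_by_minute` is a hypothetical helper (not ScienceLogic’s actual API) that simply pairs two already-fetched `(timestamp, value)` streams by minute so they can be compared:

```python
from datetime import datetime


def align_by_minute(cloud_points, onprem_points):
    """Bucket two (timestamp, value) metric streams into minute slots
    and pair the slots both streams cover -- a toy version of
    cross-environment correlation."""
    to_slot = lambda ts: ts.replace(second=0, microsecond=0)
    cloud = {to_slot(ts): v for ts, v in cloud_points}
    onprem = {to_slot(ts): v for ts, v in onprem_points}
    # Only minutes observed in both environments can be correlated.
    return [(t, cloud[t], onprem[t]) for t in sorted(set(cloud) & set(onprem))]
```

Once the streams are aligned, a spike in cloud latency occurring in the same minute as an on-prem database slowdown becomes visible as a single paired event rather than two unrelated alerts.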
“You understand the relationships,” Link said. “You can correlate the events and get a clear indicator on what is the root cause of the application being slow or the system [not] being responsive.”
Link said his company currently competes “day in and day out” with big legacy companies like HP and CA, as well as with the open-source Nagios IT-monitoring tool. When it comes to Nagios, Link claimed that the open-source tool doesn’t scale well unless an organization has the engineering resources available to tend to it, which can end up costing a lot of cash.
“There is just so much variability that makes it hard for IT to get a single product that gives them a line of sight across the hardware, hypervisor, and the application layer,” Link said.
Goldman Sachs led the investment round, which also included existing investors NEA and Intel Capital.
VMware has developed a reputation in some circles as being proprietary and less innovative than it was when the company made server virtualization a household word in the IT space, and it’s trying to change that. Yeah, its bread and butter is still in supporting existing applications on existing virtual infrastructure, but there’s a lot of opportunity to make that a much better experience.
Bill Fathers, VMware’s executive vice president and general manager of cloud services, came on the Structure Show podcast this week to explain what [company]VMware[/company] is up to in the cloud computing space and how it’s trying to keep pushing the envelope. Here are some of the better quotes from the interview, but you’ll probably want to listen to the whole thing, including some rather candid assessments and defenses of the company’s business, and the increasing importance of the network.
OpenStack out of necessity
“What we’re seeing is a lot of our clients are starting to embrace OpenStack, they almost reach a glass ceiling in terms of how far they can deploy, and that they’re looking for somebody who can (a) take care of the integration with vSphere and (b) provide support,” Fathers said. “And so basically, what we have done, I guess, is become a distributor of OpenStack, created VMware-integrated OpenStack.”
Starting small with vCloud Air
“To some extent, attracting thousands of clients wasn’t really just the objective,” Fathers explained. “The real objective is to secure hundreds of what we call ‘beachhead clients,’ which are clients that are using vCloud Air and seeing genuine value from the compatibility, on-premises and in the vCloud Air . . . and the integration we’ve done, specifically in the networking layer. Pleased to say we have not only now thousands of clients — we aren’t being more precise than that — but I can be precise in saying we have hundreds of beachhead clients.”
When asked whether the cloud business is just complementary to the legacy business, he predicted strong growth over time. “Will [the hybrid cloud] become a multi-billion-dollar business?” he said. “Yeah, probably. I suspect it will.”
VMware’s hybrid cloud is about VMware’s hybrid cloud
“I am not spending a second working out how you solve what I think is an unsolvable problem of a client who’s marooned an application in AWS and is desperately trying to get it connected securely back to an on-premises app,” Fathers said.
Partnering with Google is about giving clients the best technology
“We just felt like the Google BigQuery service, coupled with their NoSQL database and the object storage, you’re not going to beat it,” Fathers said. “I mean, it’s space-age. There’s no way you’re going to compete with that.”
And what of all the database and analytics technology VMware and [company]EMC[/company] offloaded as part of their Pivotal spinoff a couple years ago? “I personally haven’t yet parsed how you’d segment the analytic capabilities that Pivotal will offer versus using something like BigQuery,” he said. “My sense is that BigQuery is sort of a space-age, enormously capable service, but you need to conform to its APIs, whereas the Pivotal world I think is far more scoped into customization and you can create your own analytics.” (On a related note, some of those Pivotal services might soon be getting a forced open source facelift.)
“Either way,” he added, “both are probably cheaper, candidly, than buying Exadata or HANA.” Exadata is Oracle’s converged server-and-database appliance, and HANA is SAP’s in-memory database, now the focal point of its next-gen business applications.
Asked whether there might be a way to expand the new relationship with [company]Google[/company] beyond BigQuery and some select services, Fathers said they’re taking it slowly. But … “This could go a long way,” he noted. “They have very complementary offerings, as opposed to competitive, and they actually target an entirely different client base, as well.”
Network integration: A big challenge that “sends clients to sleep”
“If there’s one thing we’ve found [that’s critical to delivering hybrid clouds for clients] . . . it’s the network integration,” Fathers explained. “It’s the biggest problem clients got and they don’t yet know it, and it’s kind of tough to pitch it because they’re not yet aware that the integration challenges of trying to connect your LAN to a public cloud are way harder than people realize. We’re going to have to find a better way of marketing it, basically.”
VMware has pinned high hopes on its vCloud Air hybrid cloud, the company’s response to public cloud competitors like Amazon Web Services. But there’s not a ton of information on just how well that cloud is doing in the market.
Those hoping for more details on the company’s fourth quarter conference call Tuesday night had to make do with this:
The category containing vCloud Air — VMware’s hybrid cloud and SaaS products — made up “just under five percent” of total revenue but showed a year-over-year growth rate of 100 percent, according to CFO Jonathan Chadwick. That would put revenue for that segment at about $85 million out of total revenue of $1.7 billion for the quarter ending December 31, 2014.
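A quick sanity check of that arithmetic, using only the figures cited above (the function is illustrative, not from any source):

```python
def segment_revenue(total, share):
    """Revenue implied by a segment's share of total revenue."""
    return total * share


# "Just under five percent" of $1.7B quarterly revenue is roughly $85M,
# matching the figure in the article.
hybrid_revenue = segment_revenue(1.7e9, 0.05)
```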
Chadwick also cited a new vCloud Air deal with “one of the largest pharmaceutical companies looking to shift their current on premise infrastructure to a hybrid model.”
[company]VMware[/company] thus becomes the fourth legacy IT company in two weeks — after IBM, Microsoft and [company]SAP[/company] — to prompt worries that sales of shiny new stuff like SaaS and cloud are not close to replacing the dough generated from legacy cash cows that still sell a ton but whose growth is slowing. In VMware’s case the cash cow would be vSphere and related virtualization gear that companies use to run their own data centers and server rooms.
Perhaps worse is that sales of the new stuff will cannibalize sales of the old stuff, which makes Wall Street nervous.
On the call, VMware CEO Pat Gelsinger stressed vCloud expansion over the past year — adding a new region in Australia in November, for example.
The worry among VMware partisans is that the company, as profitable as it is (it logged a profit of $326 million in Q4, down from $335 million the same time a year ago), cannot build out cloud at the scale that [company]Amazon[/company] or [company]Microsoft[/company] can. To achieve that sort of scale, VMware hosts vCloud Air itself for large customers but also fields a network of third-party providers that offer vCloud Air. That gives it more scale but also sets up a scenario in which it is competing with its own service provider partners.
vCloud Air debuted in the summer of 2013 so it’s playing catchup with 9-year-old AWS. But, like Microsoft, VMware has tons of enterprise accounts and it’s banking that those companies will feel more comfortable using VMware’s enterprise-oriented cloud over a pure public option.
For more on VMware’s cloud strategy, check out VMware Cloud EVP Bill Fathers’ talk at Structure 2014.
Remember that data center land grab we keep talking about? It’s not letting up. This week it’s IBM’s turn (again) to claim data center expansion to fuel its effort to offer cloud services worldwide.
[company]IBM[/company] is adding eight new data center locations via a partnership with [company]Equinix[/company]. Those locations come in addition to three new data centers in Germany (a particular focus for all the cloud powers), Japan and Mexico City. The latter three data centers, now online, are part of a $1.2 billion investment announced early last year.
The Equinix deal gives IBM’s SoftLayer cloud services more coverage (via Equinix Cloud Exchange) from Amsterdam, Dallas, Paris, Northern California, Singapore, Sydney, Tokyo and Washington D.C. In October, IBM announced a cloud expansion into China in partnership with Tencent.
IBM sees more enterprise accounts — many of which already deploy private clouds “behind their four walls” — looking at off-premises clouds, said Angel Diaz, VP of open standards.
“That might be a dedicated zone of a public cloud or a public cloud, but the magic, sweet spot is hybrid, which connects those two worlds [private and public clouds] together,” he added.
IBM will not have that sweet spot to itself. A dozen or more competitors including traditional rival [company]Hewlett-Packard[/company] and sometimes-ally [company]Red Hat[/company] are also gunning for that market. Then there’s [company]VMware[/company] and [company]Microsoft[/company]. And Amazon Web Services, which used to sort of pooh-pooh the need for private cloud, has changed its messaging and introduced products to facilitate hybrid cloud set-up. And all of these vendors are adding data centers and cloud capabilities around the world.
Amazon recently opened a new region in Germany and Microsoft is working in that direction. Germany is a critical battle ground due to the size of that market and its stricter-than-usual rules around keeping citizen data in-country.
For more on the cloud computing competitive landscape, check out this talk from Battery Ventures’ Technology Fellow Adrian Cockcroft from Structure 2014.
This story was updated at 11:30 a.m. PST to reflect that AWS opened a new “region” not a new data center in Germany.
Some people still see Amazon Web Services as “out there” and think of their own on-premises servers as islands unto themselves. But, as we keep being reminded, Amazon has ambitions to reach right into your server room. The latest example: AWS OpsWorks can now manage VMs running in your own data centers — provided those VMs are web-connected.
As AWS explained this week in a blog post:
Previously, you could only deploy and operate applications on [company]Amazon[/company] EC2 instances created by OpsWorks. Now, OpsWorks can also manage existing EC2 instances created outside of OpsWorks.
This means that an IT administrator can apply security patches, operating system updates and application software upgrades to all of her managed resources with one command. Management of EC2 instances is free, and the charge for non-AWS resources is $0.02 per hour for each server running the OpsWorks agent.
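At that rate, the cost of bringing non-AWS servers under OpsWorks management is easy to estimate. A simple sketch using the $0.02/hour figure above (the function name and 730-hour month are my own assumptions):

```python
def opsworks_onprem_monthly_cost(servers, hourly_rate=0.02, hours_per_month=730):
    """Monthly cost of running the OpsWorks agent on non-AWS servers,
    at the cited $0.02/hour per server (illustrative arithmetic)."""
    return servers * hourly_rate * hours_per_month
```

So a rack of ten on-prem VMs would run roughly $146 a month, a figure small enough that the real switching cost is operational, not financial.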
This is just the latest step in a long march toward erasing boundaries between in-house server rooms and the AWS cloud. Last spring, AWS launched a portal that mimics the VMware vCenter experience so VM admins can manage AWS resources; in October, it debuted AWS Directory Service, which ties cloud resources into existing on-premises applications managed by Active Directory or Samba directories.
Tying on-prem apps to the cloud
At AWS Re:Invent last month, Amazon announced a service catalog that lets a company’s IT staff offer AWS-based services to internal users while providing an IT-level of control over those services. I’m guessing it won’t take long for that catalog to add services running outside AWS to the mix.
Amazon has made a concerted effort to appeal to big companies that are likely to prefer hybrid cloud for the foreseeable future — putting some work and data into a public cloud but retaining lots of other stuff under their own control. In that realm, AWS has to compete with old-school IT vendors such as [company]VMware[/company], [company]Microsoft[/company], [company]IBM[/company] and Oracle.
What’s sort of amazing here is that cloud competitors — I’m looking at you, Oracle — continue to pigeonhole AWS as “just” an Infrastructure-as-a-Service provider, even as it rolls out enterprise-focused management capabilities like these.
The Aorato deal promises to bring machine-learning smarts to bear to protect Active Directory assets in hybrid cloud deployments, Microsoft said in a blog post.
The first reveal of AWS Re:invent 2014: Aurora, a MySQL-compatible relational database engine to take on Oracle et al. Also lots of goodies for developers, including continuous integration and a managed code repository.
Dozens (hundreds?) of smaller software companies are at AWS Re:invent to preview products that can grease the skids to successful hybrid cloud deployments.