Everything You Know About the Stack is About to Change

I am at the OpenStack Summit here in Austin, and the announcements and releases keep rolling out, illustrating that the growing OpenStack market has some real teeth and is taking a bite out of the market standbys. Even so, there is still a great deal of fear, uncertainty and doubt around the viability of clouds built upon OpenStack. The real question is whether that FUD is unfounded for today’s emerging markets.
That means taking a closer look at OpenStack is a must for businesses delving further into public, private and hybrid clouds.
The OpenStack Project, which is now managed by the OpenStack Foundation, came into being back in 2010 as a joint venture between NASA and Rackspace Hosting, with the goal of bringing collaborative, open-source software to the then-emerging cloud market. Today, the OpenStack Foundation boasts that some 500 companies have joined the project, and the community now collaborates around a six-month, time-based release cycle.
OpenStack, an open-source software platform for cloud computing, has become a viable alternative to the likes of Amazon (S3, EC2), Microsoft Azure and DigitalOcean. Recent research by the 451 Group has predicted a 40 percent CAGR, with the OpenStack market reaching some $3.5 billion by 2018, enough of a market to make all players involved take notice.
However, the big news out of OpenStack Summit Austin 2016 comes in the form of product announcements, with more and more vendors aligning themselves with the platform.
For example, HPE has announced its HPE Helion OpenStack 3.0 platform release, which is designed to improve efficiency and ease private cloud development, all without vendor lock-in problems.
Cisco is also embracing the OpenStack movement with its Cisco MetaPod, an on-premise, preconfigured solution based on OpenStack.
Another solution out of the summit is the Avi Vantage Platform from AVI Networks, which promises to bring software-defined application services to OpenStack clouds, along with load balancing, analytics, and autoscaling. In other words, Avi is aiming to bring agility to OpenStack clouds.
Perhaps the most impressive news out of the summit comes from Dell and Red Hat, with the Dell Red Hat OpenStack Cloud Solution Version 5.0, which incorporates an integrated, modular, co-engineered and validated core architecture that leverages optional validated extensions to create a robust OpenStack cloud integrated with the rest of the OpenStack community offerings.
Other vendors making major announcements at the event include F5 Networks, Datera, DreamHost, FalconStor, Mirantis, Nexenta Systems, Midokura, SwiftStack, Pure Storage and many others. All of those announcements have one core element in common: the OpenStack community. In other words, OpenStack is here to stay, and competitors must now take the threat of the open-source cloud movement a little more seriously.

Why boring workloads trump intergalactic scale in HP’s cloud biz

Although having a laugh at so-called “enterprise clouds” is a respected pastime in some circles, there’s an argument to be made that they do serve a legitimate purpose. Large-scale public clouds such as Amazon Web Services, Microsoft Azure, and Google Compute Engine are cheap, easy and flexible, but a lot of companies looking to deploy applications on cloud architectures simply don’t need all of that all of the time.

So says Bill Hilf, senior vice president of product management for Helion (the company’s label for its cloud computing lineup) at [company]HP[/company]. He came on the Structure Show podcast this week to discuss some recent changes in HP’s cloud product line and personnel, as well as where the company fits in the cloud computing ecosystem. Here are some highlights of the interview, but anyone interested in the details of HP’s cloud business and how its customers are thinking about the cloud really should listen to the whole thing.

[soundcloud url=”https://api.soundcloud.com/tracks/194323297″ params=”color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false” width=”100%” height=”166″ iframe=”true” /]

Download This Episode

Subscribe in iTunes

The Structure Show RSS Feed

Amazon matters . . . and so does everything else

“First and foremost, our commitment and focus and investment in OpenStack hasn’t changed or wavered at all,” Hilf said. “It’s only increased, frankly. We are fully committed to OpenStack as our core infrastructure-as-a-service platform.” HP has been a large backer of the open source project for years now, and was building out an OpenStack-based cloud platform exclusively before acquiring Eucalyptus and its Amazon-Web-Services-compatible cloud technology in September.

However, he added, “As we started working with customers around what they were looking for in their overall cloud environment, we did hear the signal loud and clear that the AWS design pattern is incredibly relevant to them.” Often, he explained, that means either hoping to bring an application into a private cloud from Amazon or perhaps moving an application from a private cloud into Amazon.

[pullquote person=”” attribution=”” id=”919622″]”People often use the term ‘lock-in’ or ‘proprietary.’ I think the vendors get too wrapped up in this.”[/pullquote]

Hilf thinks vendors targeting enterprise customers need to make sure they’re selling enterprises what they actually want and need, rather than what’s technologically awesome. “Our approach, from their feedback, is to take an application-down approach, rather than an infrastructure-up approach,” he said. “How do we think about a cloud environment that helps an application at all parts of its lifecycle, not just giving them the ability to spin up compute instances or virtual machines as fast as possible?”

Below is our post-Eucalyptus-acquisition podcast interview with Hilf, former Eucalyptus CEO Marten Mickos and HP CTO Martin Fink.

[soundcloud url=”https://api.soundcloud.com/tracks/167435404″ params=”color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false” width=”100%” height=”166″ iframe=”true” /]

Enterprise applications might be boring, and that’s OK

Whatever HP’s initial promises were about challenging [company]Amazon[/company] or [company]Microsoft[/company] in the public cloud space, that vision is all but dead. HP still maintains a public cloud, Hilf explained, but does so as much to learn from the experience of managing OpenStack at scale as to make any real money from it. “It not only teaches us, but allows us to build things for people who are going to run our own [private-cloud] products at scale,” he said.

But most of the time, he said, the companies that are looking to deploy OpenStack or a private cloud aren’t super-concerned with concepts such as “webscale,” so it’s not really in HP’s financial interests to go down that path:

“[W]e don’t have an intention to go spend billions and billions of dollars to build the infrastructure required for, let’s say, an AWS or an Azure. . . . . It’s not because ‘Oh, we don’t want to write a billion-dollar check,’ it’s because [with] the types of customers we’re going after, that’s not at the top of their priority list. They’re not looking for a hundred thousand servers spread across the globe. . . . Things like security are much higher on their list than the intergalactic scale of a public cloud.”

Hilf added:

“What we typically hear day-to-day, honestly, is actually pretty unexciting and mundane from customers. They’re not all trying to stream the Olympics or to build Netflix. Like 99 percent of the enterprise in the world are doing boring things like server refreshes or their lease in a data center is expiring. It’s really boring stuff, but it matters to them.”

“If a customer came to me and said, ‘Hey I need to spin up a billion instances to do whatever,'” he said, “. . . I’d say, ‘Go talk to AWS or Azure.’”

Get over the talk about lock-in

Despite the fact that it’s pushing a lineup of Helion cloud products that’s based on the open source OpenStack technology, Hilf is remarkably realistic about the dreaded concept of vendor lock-in. Essentially, he acknowledged, HP, Amazon and everyone else building any sort of technology is going to make a management interface and experience that’s designed to work great with their particular technology, and customers are probably going to be running multiple platforms in different places.

Hilf thinks that’s a good thing and the nature of business, and it provides an opportunity for vendors (like HP, coincidentally) with tools to help companies get at least some view into what’s happening across all these different platforms.

“People often use the term ‘lock-in’ or ‘proprietary.’ I think the vendors get too wrapped up in this,” he said. “The enterprise is already through the looking glass. They all know they’re going to have some degree of lock-in, it’s just where.”

HP fine-tunes its multi-cloud pitch

It must be really interesting to work at Hewlett-Packard these days. Not only is the company breaking itself in half, it’s making multi-billion-dollar acquisitions and balancing an array of cloud offerings. Oh, and it just shook up cloud management, with Marten Mickos turning key responsibilities over to three other execs, including Bill Hilf, SVP of HP Helion product management.

As of now, [company]HP[/company] is fielding two private cloud frameworks. Eucalyptus (or, as reported last week, Helion Eucalyptus) is for people who want compatibility with Amazon Web Services APIs. Helion OpenStack is apparently for everyone else.

These two offerings got point upgrades this week. Helion OpenStack 1.1, for example, features improved high availability and better support for running Windows workloads (with Microsoft backstopping HP’s own support). Helion Eucalyptus 4.1 gets an “AWS CloudFormation compatible service” to make it easier for customers to move orchestration templates from AWS to HP Helion clouds without rewriting or a ton of tweaking. And Helion Development Environment (aka HP’s version of the Cloud Foundry platform as a service) gets better logging and more dashboards to track usage quotas and system patches.

Bill Hilf, SVP of Helion Cloud product management for HP.

No AWS APIs for OpenStack

HP will not add AWS API compatibility to Helion OpenStack, Hilf said in an interview Tuesday. Instead, he said, the company will offer Cloud Service Automation atop the various clouds — Helion Eucalyptus, Helion OpenStack, [company]Amazon[/company] Web Services, [company]Microsoft[/company] Azure, [company]VMware[/company] — that will give users the proverbial “one pane of glass” to manage them all.

As is usually the case, the rationale cited was customer feedback. “We sat in focus groups and customers said they didn’t want [AWS] S3 APIs embedded in OpenStack. They wanted an OpenStack cloud and an AWS-compatible cloud and a VMware-based cloud and to be able to move stuff between them,” Hilf said.

“So instead of burning huge time and resources in community debates, we decided, why not just let them build those different clouds and manage them all across the top?”

A public cloud, but not an AWS rival

[company]HP[/company] continues to offer public cloud, but the positioning of that has definitely changed. Its product was once positioned (a year or so ago) as an enterprise-worthy public cloud to compete directly with Amazon Web Services, but that’s no longer the sales pitch.

“We are not building a general-purpose cloud at that scale for any type of workload,” Hilf said. “We are focused on building private, managed clouds that can interoperate. We do have public cloud but we’re not aiming to compete with the big three. We want to interoperate with them.”

Sooooo, what’s new with HP’s cloud strategy?

Look! Hewlett-Packard is doing something with Eucalyptus after all, at least according to a new web page touting HP Helion Eucalyptus, the “Open. Agile. Secure AWS-compatible private cloud.”

HP bought Eucalyptus in September, put that company’s CEO, Marten Mickos, in charge of the overall HP cloud business and things went pretty quiet. Until this week, when I reported that Mickos was ceding his leadership role and the aforementioned page appeared.

An [company]HP[/company] spokesman confirmed that it is a new page, and is “fully in line” with HP’s hybrid cloud push and previous pledge to support AWS customers. Most of the page’s links route back to the original Eucalyptus web site.

I still have so many questions. Will HP’s OpenStack-based Helion private cloud also offer AWS API compatibility? HP pulled planned support for those APIs from its public cloud two years ago. Will it reverse that course?

And most intriguingly, will HP — which is a long-time and sometimes irritated Microsoft partner — decide to de-emphasize its own public cloud aspirations and instead throw in more fully with [company]Microsoft[/company] Azure? Hey, anything is possible.

Kubernetes comes to OpenStack this time thanks to Mirantis

For businesses wanting to run the Kubernetes cluster management framework for containers on OpenStack clouds, Google and Mirantis have teamed up to make that happen more easily.

The OpenStack Murano application catalog technology promises to ease deployment of Kubernetes clusters on OpenStack and then deploy Docker containers on those clusters.

Murano provides what Mirantis CEO Adrian Ionel (pictured above) described as a “seamless point-and-click experience” not only for deploying workloads to OpenStack, but also making sure they get there with associated automation, provisioning and security intact. “In this case we use it to automate the provisioning and life cycle management of containers,” he said.

Murano, he added, makes it easier for people to build application environments that can be container-only, or mix containers with bare metal and virtual machines in one big happy package. (I’m paraphrasing here.)

This is not the industry’s first attempt to bring Kubernetes technology, open sourced by Google last year, over to OpenStack. In August, [company]Hewlett-Packard[/company] announced its own Kubernetes setup utility for HP’s OpenStack-based Helion cloud, but I haven’t heard much about it since.

There is no exclusivity in this latest news. The work Mirantis and [company]Google[/company] have done here will, in theory, help customers deploy Kubernetes on any OpenStack distribution. Mirantis and Google will demonstrate the technology Thursday in San Francisco.

And in the grand scheme of things, nearly every cloud or wannabe cloud vendor worth its salt (including SaltStack), along with Microsoft, IBM, Red Hat and others, has pledged or contributed actual support for Kubernetes.

This latest news is another indication that Google is indeed serious about providing cloud capabilities to business customers, many of whom still view public clouds like Google Cloud Platform with suspicion. OpenStack is the cloud framework usually mentioned when a company decides to deploy a private cloud that they deem more suited for mission-critical workloads.

“From a Google perspective, containerization is important and running container clusters is a great way to enable developers to be productive,” said Kit Merker, the Google product manager focusing on Google Container Engine and Kubernetes.

“We know that enterprises will take time to transition to cloud. Kubernetes is a way to optimize infrastructure so it can run workloads in private or public cloud or bare metal.”

So this is about workload portability, but not really hybrid cloud per se. “This means you can build an application that uses containers and then move it to a different environment. That is what Kubernetes is all about,” he said. That is not the same thing as seamlessly integrating public and private clouds into a hybrid scenario.
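The portability Merker describes comes from the fact that a Kubernetes manifest targets the cluster API, not the infrastructure underneath it. Here is a rough sketch of that idea; the image, names and replica count are hypothetical, and the `apps/v1` API group is the modern one rather than what shipped in early Kubernetes:

```python
import json

# Hypothetical Deployment manifest, built as plain data. kubectl accepts
# JSON as well as YAML, so this could be saved to deployment.json and
# applied to any conformant cluster with `kubectl apply -f deployment.json`.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    "image": "nginx:1.25",  # any container image works here
                    "ports": [{"containerPort": 80}],
                }]
            },
        },
    },
}

manifest_json = json.dumps(deployment, indent=2)
print(manifest_json)
```

Nothing in the manifest names the cloud it runs on, whether OpenStack, bare metal or a public provider; that is the portability point.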

[company]Amazon[/company] Web Services still leads the world in public cloud, but Google and [company]Microsoft[/company] are giving it a run for its money. Microsoft Azure, because of its business roots, is seen as an attractive public cloud for that company’s myriad business customers, so both Google and AWS have to show that they “get” CIO concerns about cloud deployment and provide enterprise-class features and functions.

This step by Google, along with other moves announced in the fall and more recent news that it’s bringing four Google services to VMware’s vCloud Air, is meant to reassure the C-suite set that Google means business.

Note: This story was updated at 11:11 a.m. PST with a more complete list of Kubernetes contributors.

Private cloud? Public cloud? Rackspace erases the difference

Rackspace is going to stop distinguishing between the money it makes from public cloud and what it derives from “dedicated” cloud, a category that encompasses a bunch of options.

Well, that’s one way to sidestep the whole “is private cloud dead?” debate.

The move may show a fanatical focus on managed cloud, or indicate that Rackspace is giving up on public cloud, where the leader, [company]Amazon[/company] Web Services, is contending with growing threats from [company]Microsoft[/company] Azure and [company]Google[/company] Cloud Platform. Or both. Tomato, tomahto.

On the fourth-quarter earnings call Tuesday, CEO Taylor Rhodes reiterated that “managed cloud,” versus the wild west of unmanaged public cloud, is where [company]Rackspace[/company] is focused, and its financial reporting will reflect that going forward. No longer will Rackspace put its public cloud revenue in one bucket and combine private cloud, managed hosting and managed services in a dedicated cloud revenue bucket.

In an interview after the call, Rhodes acknowledged that most new “greenfield” applications will be built for public cloud deployment over a ten-year time frame. But there are also many legacy applications that will either stay where they are or move to a single-tenant private cloud. And there is demand for well-managed, specialized clouds for different workloads, Rhodes said.

The accounting changes were made in part to keep Rackspace salespeople from selling the wrong cloud to the wrong customer, he said. “We have a dilemma in that we switched from a horizontal position in cloud … to [cloud for] particular workloads. We want to be the best at supporting Oracle Commerce and we will be the best at managing that with a highly opinionated point of view on whether Oracle Commerce should be a single-tenant or multi-tenant implementation.” I’m guessing that single-tenant will be the answer here.

Rackspace salespeople shouldn’t be rewarded “perversely” for selling multi-tenant when single-tenant is best, he said.

Overall, the company posted net income of 26 cents per share, surpassing consensus estimates of 19 cents per share, but it missed on revenue, logging $472.2 million where analysts expected $474 million.

OpenStack comes up huge for Walmart

For those skeptics who still think OpenStack isn’t ready for prime time, here’s a tidbit: @WalmartLabs is now running in excess of 100,000 cores of OpenStack on its compute layer. And that’s growing by the day.

It’s also the technology that ran parent company Walmart’s prodigious Cyber Monday and holiday season sales operations. If that’s not production, I’m not sure what is.

San Bruno, California–based @WalmartLabs, which is the e-commerce innovation and development arm for the [company]Walmart[/company] retail colossus, started working with OpenStack about a year and a half ago, at first relying heavily on the usual vendors but increasingly building up its in-house talent pool, Amandeep Singh Juneja, senior director of cloud operations and engineering, said in an interview.

Building a private cloud at public cloud scale

@WalmartLabs has about 3,600 employees worldwide, 1,500 of whom are in the Bay Area. Juneja estimated the organization has hired about 1,000 engineers in the last year or so — no mean feat given that there are lots of companies, including the OpenStack vendors, in the market for this expertise.

“Traditionally, Walmart is vendor-heavy in its big technology investments — name a vendor and we’ve worked with it and that was also true with OpenStack,” Juneja noted. “We started about one and a half years ago with all the leading distribution vendors involved … we did our first release with Havana and [company]Rackspace[/company]. But then we invested internally in building our own engineering muscle. We attended all the meet-ups and summits.” Havana is the code name for the eighth OpenStack code release.

Amandeep Singh Juneja, @WalmartLabs

Nothing says big like Walmart. It has around $480 billion in annual revenue, more than 2 million employees, and more than 11,000 retail locations worldwide (including Sam’s Club and Walmart International venues). Walmart.com claims more than 140 million weekly visitors. So scale was clearly an issue from the get-go.

What @WalmartLabs loved about OpenStack was that it could be molded and modified to fit its specifications, without vendor lock-in.

AWS need not apply

This is a massive private cloud built on a public cloud scale. There are also some macro issues at play here. Since parent company Walmart competes tooth and nail with [company]Amazon.com[/company], the chances of Walmart using Amazon Web Services public cloud are nil. (I asked Juneja whether Walmart would ever use any public cloud capabilities and he politely responded that this question was above his pay grade.)

The beauty of open-source projects like OpenStack is that new capabilities continually come online and there is a community of deeply technical people working on the code. Going forward, Juneja is particularly interested in Ironic, an OpenStack project to enable provisioning of bare-metal (as opposed to virtual) machines, and in the Trove database-as-a-service project. Trove, he noted, has matured a bit, and Walmart will be using more DBaaS going forward.

Another work in progress is the construction of a multi-petabyte object store using the OpenStack Swift technology, but there are also plans to bring more block storage in-house, possibly using OpenStack Cinder. And the team is looking at Neutron for software-defined network projects.

One thing Walmart must deal with is its brick-and-mortar roots. The ability to order online and pick up in the store means that what @WalmartLabs builds must interact with inventory and other systems already running the Walmart/Sam’s Club storefronts. Non-e-commerce-related IT projects are run by Walmart’s Information Services Division at the company’s Bentonville, Arkansas headquarters.

So the ability of the shiny new OpenStack systems to interface with infrastructure that’s been in place for decades, some of it for as much as 50 years, is critical. It also amounts to a full-employment act for all those @WalmartLabs engineers.

Note: This story was updated at 11:30 a.m. PST to reflect that Walmart is running 100K+ cores, not nodes, of OpenStack.

AWS suits up more enterprise perks

More AWS perks for business users

Amazon Web Services has beefed up its identity management and access control capabilities so that businesses can more easily apply permissions to users, groups and roles in a consistent way. As explained in a blog post, these identity and access management (IAM) policies are now treated as “first-class AWS objects,” so they can be created, named, and attached to one or more IAM users, groups, or roles.

Since I was unclear about what a first-class AWS object really is, I reached out to someone who knows, who said that these policies get their own unique Amazon Resource Name (ARN). And that, in turn, means users can more easily reuse common managed policies without having to write, update and maintain permissions.

These managed policies can also be managed centrally and applied across IAM entities — the aforementioned users, groups, or roles. And customers can subscribe to shared AWS managed policies, so that it’s easier for them to apply security and other best practices.
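To make the idea concrete, here is a rough sketch of what a managed policy is: a standalone JSON document that gets its own ARN and can be attached to many users, groups or roles at once. The policy name, statement and entity names below are hypothetical, and the boto3 calls shown in comments are the usual SDK route rather than anything from the AWS announcement itself:

```python
import json

# A hypothetical managed-policy document: one read-only S3 statement.
read_only_s3 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowS3ReadOnly",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": "*",
    }],
}
policy_json = json.dumps(read_only_s3)

# With boto3 (not run here), creating the policy once yields its own ARN,
# which can then be attached to any number of IAM entities:
#   iam = boto3.client("iam")
#   arn = iam.create_policy(PolicyName="ReadOnlyS3",
#                           PolicyDocument=policy_json)["Policy"]["Arn"]
#   iam.attach_group_policy(GroupName="analysts", PolicyArn=arn)
#   iam.attach_role_policy(RoleName="etl-worker", PolicyArn=arn)

print(policy_json)
```

The reuse benefit described above is exactly this: maintain one document, and every attached user, group and role shares the same permissions.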

That news came a few days after [company]Amazon[/company] announced general availability of AWS Config, a configuration management database (CMDB) tool, announced in November, that keeps track of the cloud resources used and the connections between them. The goal is to track changes made to those resources and make sure those changes are logged in AWS CloudTrail. The data collected there can then be polled via Amazon’s own APIs.

AWS Config and AWS Service Catalog were both announced in preview form at AWS re:Invent in November. Service Catalog is a tool used in enterprise accounts to shop for and manage authorized tools and applications, and it will be tied into IAM. General availability for Service Catalog was promised for early 2015, so stay tuned.

All of these services — promised and delivered — are geared to making AWS more IT-friendly in bigger enterprises: to help make sure that users can access only the resources they are authorized for and that those resources are the most up-to-date versions.

It’s also interesting that AWS, which used to announce new services only when they were ready, is now fully in enterprise software mode, pre-announcing new products weeks and months before they are broadly available.

EMC Cloudscaling aims to bridge OpenStack-AWS divide

If you’re running an OpenStack private cloud and want it to talk to Amazon’s EC2 compute service, you may want to check out a new “drop-in” EC2 API created by EMC/Cloudscaling and available from Stackforge.

https://gigaom.com/2015/02/13/heres-a-new-drop-in-ec2-api-for-openstackers-who-want-it/

Randy Bias, co-founder of Cloudscaling and now VP of technology for [company]EMC[/company], has long maintained that OpenStack needs to work with Amazon. He also pledged similar support for [company]Google[/company] Compute Engine APIs. Asked via email if that’s still the plan, Bias said, “Yes, but it’s a lower priority until we see traction.”

Structure Podcast: The biologic roots of deep learning

Deep learning, which enables a computer to learn — or program itself — to solve problems, is a hot topic that Enlitic CEO Jeremy Howard and senior data scientist Ahna Girshick helped explain to mere mortals on this week’s podcast. If you want to know why you don’t necessarily need a ton of data to do good work in deep learning, and how the field is inspired by biology, if not the human brain, check out this show. And to hear more from Girshick on this hot topic, you can also sign up for next month’s Structure Data event.

[soundcloud url=”https://api.soundcloud.com/tracks/190680894″ params=”secret_token=s-lutIw&color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false” width=”100%” height=”166″ iframe=”true” /]

SHOW NOTES

Hosts: Barb Darrow and Derrick Harris.

Download This Episode

Subscribe in iTunes

The Structure Show RSS Feed

This story was updated at 11:37 a.m. PST February 18 with more detail on what an AWS first-class object is.

The poor private cloud gets no respect

Pity your private cloud, if you have one. If cloud analysts are to be believed, private cloud is losing ground as public cloud providers — chiefly Amazon Web Services, Google, and Microsoft — keep adding features and functions, many of which target enterprise IT buyers.

Last week, for example, Gartner analyst Thomas Bittman blogged that 95 percent of enterprise IT types he surveyed found something lacking in their own private clouds. Of course, Bittman loaded the gun for them, distilling the reasons “your enterprise private cloud is failing” into six key categories and then polling an audience about them at an event.

Part of the problem may be in definitions. Private cloud is not merely a highly virtualized data center. It needs to deliver on-demand services easily and offer the sort of scale-up-and-down-as-needed elasticity that is the hallmark of public clouds. In a response to one comment on his post, Bittman defined private cloud as the

cloud computing style delivered with isolation. Fully private would be fully isolated. It doesn’t need to be owned and managed on-premises, but today it often is (I’d say, 90-95% of the time).

Of the 140 companies Bittman surveyed, the most common complaint (noted by 31 percent of respondents) is that too much emphasis was placed on cost-cutting rather than on agility: the ability to create capabilities and spin them up and down as needed. The second most-cited complaint, from 19 percent of respondents, was that their private cloud doesn’t do enough. But check out the whole post, along with the comments.

In August Gigaom Research published its own analysis showing public cloud options outstripping private clouds (subscription required) for several reasons. Notably, even if you are running a real private cloud — not just a heavily virtualized server room — you are probably still buying, deploying and maintaining your own hardware and software.

Gigaom research analyst David Linthicum — who is also SVP at Cloud Technology Partners, which works with the big public cloud providers — noted in that report that security, or lack thereof, has been touted as a key private cloud selling point but is not necessarily a differentiator in the way most people expect. He wrote:

Private clouds, while they feel more secure since you can see the blinking servers in your data center, are as secure or less secure than public clouds, generally speaking. Enterprises are just discovering this fact, and are opting for public clouds as cloud projects come on-line.

Ouch. Private cloud purveyors, please feel free to comment below.

Philip Bertolini, CIO of Oakland County, Michigan, said it is unfair to term private clouds failures just because there is not 100 percent satisfaction. In the Gartner blog post, he noted, Bittman discusses how 95 percent of the users have had problems, but that doesn’t mean their efforts failed.

“Moving to the cloud is difficult and has to be planned out carefully. Any IT project requires good planning or the results can be less than desirable. I do believe that the cloud is not the magic wand for everything that troubles us. Using the cloud wisely with good planning can be very successful,” Bertolini noted by email.

There is some merit to the private-cloud-doesn’t-meet-expectations argument. Vendors have fed into that by overselling the technology, for one thing. But, the notion that a small number of public cloud vendors (even vendors as huge as [company]Amazon[/company], [company]Google[/company] and [company]Microsoft[/company]) can fill every need is a stretch.

As more than a dozen vendors, many of them pitching OpenStack-based private clouds, duke it out, they need to counter this perception that public cloud is becoming the inevitable destination for many, many workloads going forward.

This story was updated on February 12 with quotes from Oakland County CIO Philip Bertolini and on February 13 with a note of David Linthicum’s affiliation with CTP.

Why VMware is going ‘space-age’ with Google and embracing OpenStack

VMware has developed a reputation in some circles as being proprietary and less innovative than it was when the company made server virtualization a household word in the IT space, and it’s trying to change that. Yeah, its bread and butter is still in supporting existing applications on existing virtual infrastructure, but there’s a lot of opportunity to make that a much better experience.

Bill Fathers, VMware’s executive vice president and general manager of cloud services, came on the Structure Show podcast this week to explain what [company]VMware[/company] is up to in the cloud computing space and how it’s trying to keep pushing the envelope. Here are some of the better quotes from the interview, but you’ll probably want to listen to the whole thing, including for some rather candid assessments and defenses of the company’s business, and the increasing importance of the network.


OpenStack out of necessity

“What we’re seeing is a lot of our clients are starting to embrace OpenStack, they almost reach a glass ceiling in terms of how far they can deploy, and that they’re looking for somebody who can (a) take care of the integration with vSphere and (b) provide support,” Fathers said. “And so basically, what we have done, I guess, is become a distributor of OpenStack, created VMware-integrated OpenStack.”

Starting small with vCloud Air

“To some extent, attracting thousands of clients wasn’t really just the objective,” Fathers explained. “The real objective is to secure hundreds of what we call ‘beachhead clients,’ which are clients that are using vCloud Air and seeing genuine value from the compatibility, on-premises and in the vCloud Air . . . and the integration we’ve done, specifically in the networking layer. Pleased to say we have not only now thousands of clients — we aren’t being more precise than that — but I can be precise in saying we have hundreds of beachhead clients.”

When asked whether the cloud business is just complementary to the legacy business, he predicted strong growth over time. “Will [the hybrid cloud] become a multi-billion-dollar business?” he said. “Yeah, probably. I suspect it will.”

The VMware hybrid cloud, in a diagram.

VMware’s hybrid cloud is about VMware’s hybrid cloud

“I am not spending a second working out how you solve what I think is an unsolvable problem of a client who’s marooned an application in AWS and is desperately trying to get it connected securely back to an on-premises app,” Fathers said.

Partnering with Google is about giving clients the best technology

“We just felt like the Google BigQuery service, coupled with their NoSQL database and the object storage, you’re not going to beat it,” Fathers said. “I mean, it’s space-age. There’s no way you’re going to compete with that.”

And what of all the database and analytics technology VMware and [company]EMC[/company] offloaded as part of their Pivotal spinoff a couple years ago? “I personally haven’t yet parsed how you’d segment the analytic capabilities that Pivotal will offer versus using something like BigQuery,” he said. “My sense is that BigQuery is sort of a space-age, enormously capable service, but you need to conform to its APIs, whereas the Pivotal world I think is far more scoped into customization and you can create your own analytics.” (On a related note, some of those Pivotal services might soon be getting a forced open source facelift.)

Bill Fathers at Structure 2014

“Either way,” he added, “both are probably cheaper, candidly, than buying Exadata or HANA.” Exadata is Oracle’s converged database appliance, and HANA is SAP’s in-memory database, now the focal point of its next-generation business applications.

Asked whether there might be a way to expand the new relationship with [company]Google[/company] beyond BigQuery and some select services, Fathers said they’re taking it slowly. But … “This could go a long way,” he noted. “They have very complementary offerings, as opposed to competitive, and they actually target an entirely different client base, as well.”

Network integration: A big challenge that “sends clients to sleep”

“If there’s one thing we’ve found [that’s critical to delivering hybrid clouds for clients] . . . it’s the network integration,” Fathers explained. “It’s the biggest problem clients got and they don’t yet know it, and it’s kind of tough to pitch it because they’re not yet aware that the integration challenges of trying to connect your LAN to a public cloud are way harder than people realize. We’re going to have to find a better way of marketing it, basically.”