Amazon hones its cloud update process

Remember that planned Xen-related reboot Amazon Web Services warned about last week? Well, things went better than planned, according to an updated blog post Monday.

The company said it was able to perform live updates on 99.9 percent of the affected instances, avoiding the need for a reboot altogether. Last Thursday, [company]Amazon[/company] had said that it would need to reboot about 10 percent of total AWS instances to address a Xen security issue.

The ability of AWS to perform updates without taking compute instances down and bringing them back up is very good news for cloud users. And that’s true whether the technology used was live migration, hot patching or something else. The net result was the same: workloads were not interrupted.

The Xen-related security issue also affected [company]Rackspace[/company], Linode and [company]IBM[/company] SoftLayer, all of which said they’re doing their own fixes before March 10 when more information is released about the vulnerability.

Add IBM cloud to the list of reboots to come

The latest Xen hypervisor vulnerabilities are forcing IBM to reboot some customers’ cloud instances between now and March 10. The vendor sent out an alert to affected IBM SoftLayer customers on Friday, the same day Linode alerted its customers.

As reported, [company]Amazon[/company] Web Services and [company]Rackspace[/company] already posted news about the updates on Thursday night.

Per an [company]IBM[/company] notice sent to customers, the company said it was “in the process of scheduling maintenance for patching and rebooting a portion of services that host portal-provisioned virtual server instances, virtual servers hosted on these servers will be offline during the patching and rebooting process.”

As with the other alerts, the maintenance will happen before March 10, when more details of the underlying Xen vulnerability will be disclosed. IBM promised more information when it becomes available and said it was working to minimize service disruptions.

Xen security issue prompts Amazon, Rackspace cloud reboots

Amazon Web Services and Rackspace are warning their customers of upcoming reboots they’re taking to address a new Xen hypervisor security issue.

In a premium support bulletin issued Thursday night, Amazon said fewer than 10 percent of all EC2 instances will require work, and that the affected instances must be updated by March 10. [company]Rackspace[/company] also notified customers Thursday night of the issue, which will affect a portion of its First and Next Generation Cloud Servers. Later on Friday, Linode also warned users of an upcoming Xen-related reboot.

If you’re sensing a little bit of deja vu, it’s because the major cloud players were forced to reboot many of their customers’ instances in September due to a Xen hypervisor issue, although the reason for those updates was not disclosed at first. Last time out, AWS also said 10 percent of its EC2 instances were affected.

Cloud vendors impacted by these security issues tread a tricky path. They have to address the vulnerability as fast as possible before the details of the flaw are made public, which can lead to a bit of a fire drill. In this case, more information about the flaw will be disclosed March 10.

In September, [company]Amazon[/company] was first out of the chute with notifications, followed by Rackspace; IBM SoftLayer made its disclosures the following week.

Note: This story was updated at 3:49 p.m. PST to note that Linode is also performing system updates.

AWS maintains lead in public cloud, but Azure inches forward

Amazon Web Services continues to dominate public cloud usage across the board, but Microsoft Azure is making strides at least in business accounts, according to a new RightScale survey.

[company]Amazon[/company] cloud adoption leads the pack with 57 percent of respondents reporting use of AWS (up from 54 percent last year), while 12 percent said they run [company]Microsoft[/company] Azure Infrastructure as a Service, up from 6 percent in last year’s survey.

Among business or enterprise users, though, while AWS still leads with 50 percent, up slightly from 49 percent, Azure IaaS scored 19 percent, up from 11 percent. [company]Rackspace[/company] and [company]Google[/company] App Engine are the next most popular clouds in this category, while vCloud Air logged 7 percent adoption, down from 18 percent. (Could the rebranding of vCloud Hybrid Services to vCloud Air have been a factor here?)

The Rackspace callout is interesting since the company said Tuesday it will stop breaking out public cloud and private cloud revenue and report them together. Rackspace is now focusing on private, managed cloud, in what some say shows it is ceding public cloud to the big guys.

RightScale Enterprise Cloud 2014-2015

All of these numbers are based on RightScale’s survey (downloadable here) of 930 cloud users, 24 percent of whom are RightScale customers.

Private cloud boosters won’t like this part: The new numbers show overall adoption of private cloud pretty much holding steady compared to last year. [company]VMware[/company] vSphere virtualized environments led with 53 percent of enterprise customers who reported that they use it as a private cloud. (Another 13 percent said they use vCloud Director as cloud.) This echoes last year’s survey in which many customers equated their virtualized server rooms with private cloud.

While private cloud appears to be in a bit of a swoon, it’s no surprise that Docker usage is hot. Per the survey, that containerization technology, while relatively new, is already used by 13 percent of respondents, while more than a third of the rest (35 percent) said they are planning to implement it.

RightScale Public Clouds 2014

OpenStack showed the greatest traction this year, with 13 percent adoption, up three percentage points year over year, and it is still garnering big interest from companies whether they use it or not. A full 30 percent of respondents said they were evaluating or interested in using OpenStack over time. Microsoft’s relatively new Azure Pack showed a respectable seven percent usage. Azure Pack, which mirrors Microsoft’s internal Azure usage, can run in a company’s own data centers or server rooms to provide an Azure-on-Azure hybrid.

Overall, Santa Barbara, California–based RightScale concluded from its research that cloud adoption is “a given” and hybrid cloud is the preferred mode of adoption. Of course, RightScale sells multi-cloud management tools, so that conclusion works out nicely for the company.

RightScale VP of Marketing Kim Weins was our Structure Show guest after last year’s survey and had some interesting insights that might be helpful to compare and contrast. Check out the podcast below.

[soundcloud url=”″ params=”color=ff5500&auto_play=false&hide_related=false&show_artwork=true” width=”100%” height=”166″ iframe=”true” /]

Private cloud? Public cloud? Rackspace erases the difference

Rackspace is going to stop distinguishing between the money it makes from public cloud and what it derives from “dedicated” cloud, a category that encompasses a bunch of options.

Well that’s one way to sidestep the whole “is private cloud dead?” debate.

The move may show a fanatical obsession with managed cloud, or indicate that Rackspace is giving up on public cloud, where the leader, [company]Amazon[/company] Web Services, is contending with growing threats from [company]Microsoft[/company] Azure and [company]Google[/company] Cloud Platform. Or both. Tomato, tomahto.

On the fourth quarter earnings call Tuesday, CEO Taylor Rhodes reiterated that “managed cloud,” as opposed to the wild west of unmanaged public cloud, is where [company]Rackspace[/company] is focused. Its financial reporting will reflect that going forward: no longer will Rackspace put its public cloud revenue in one bucket and combine private cloud, managed hosting and managed services into a dedicated cloud revenue bucket.

In an interview after the call, Rhodes acknowledged that most new “greenfield” applications will be built for public cloud deployment over a ten-year time frame. But there are also many legacy applications that will either stay where they are or move to a single-tenant private cloud. And there is demand for well-managed specialized clouds for different workloads, Rhodes said.

The accounting changes were made in part to keep Rackspace sales people from selling the wrong cloud to the wrong customer, he said. “We have a dilemma in that we switched from a horizontal position in cloud … to [cloud for] particular workloads. We want to be the best at supporting Oracle commerce and we will be the best at managing that with a highly opinionated point of view on whether Oracle commerce should be a single-tenant or multi-tenant implementation.” I’m guessing that single-tenant will be the answer here.

Rackspace sales people shouldn’t be rewarded “perversely” for selling multi-tenant when single tenant is best, he said.

Overall, the company posted net income of 26 cents per share, surpassing consensus estimates of 19 cents per share, but it missed on revenue, logging $472.2 million where analysts expected $474 million.


Here’s a new “drop-in” EC2 API for OpenStackers who want it

Many news cycles have been burned on the debate over whether OpenStack-based cloud providers should or need to support the major Amazon Web Services APIs.

Cloudscaling and its co-founder Randy Bias have long advocated that such support is critical to the success of OpenStack, and promised Cloudscaling support for [company]Amazon[/company] Elastic Compute Cloud (EC2) APIs. AWS, after all, is by far the market leader in the public cloud arena.

As of this week, Cloudscaling, now part of [company]EMC[/company], has made available a “drop-in” replacement for the existing OpenStack Nova EC2 API. Nova, OpenStack’s compute module, already offered a degree of EC2 API compatibility that a vendor could expose, or not, in its own cloud offering.
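
Conceptually, a drop-in EC2 API layer translates incoming EC2 actions into the equivalent OpenStack Nova operations. A toy sketch of the idea (the EC2 action names are real API actions, but the Nova “client” here is a stand-in stub, not python-novaclient):

```python
# Illustrative sketch of an EC2-to-Nova translation layer.
# The EC2 action names are real; the Nova client below is a stand-in stub.

class FakeNovaClient:
    """Stand-in for a Nova client; real code would use python-novaclient."""
    def __init__(self):
        self.servers = {}

    def create(self, name):
        server_id = f"server-{len(self.servers) + 1}"
        self.servers[server_id] = name
        return server_id

    def list(self):
        return list(self.servers)

    def delete(self, server_id):
        return self.servers.pop(server_id)


def handle_ec2_action(nova, action, **params):
    """Map an EC2 API action onto the equivalent Nova operation."""
    if action == "RunInstances":
        return nova.create(params["name"])
    if action == "DescribeInstances":
        return nova.list()
    if action == "TerminateInstances":
        return nova.delete(params["instance_id"])
    raise NotImplementedError(action)


nova = FakeNovaClient()
instance_id = handle_ec2_action(nova, "RunInstances", name="web-1")
print(handle_ec2_action(nova, "DescribeInstances"))  # ['server-1']
```

The real project exposes these actions over the EC2 wire protocol, so existing AWS tooling (boto, the AWS CLI) can point at an OpenStack cloud unchanged; the dispatch table above is only the shape of the idea.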

Rackspace notably chose not to expose it. [company]Hewlett-Packard[/company] at first opted to support the EC2 API, then reversed course in late 2013 — but within a year bought Eucalyptus, a provider of private cloud technology noted for its AWS API support. And VMware’s cloud chief Bill Fathers made it pretty clear on the recent Structure podcast that he doesn’t give a fig about supporting AWS APIs.

Bias, now VP of technology at EMC, is unwavering in his belief that AWS API support will strengthen OpenStack’s chances of success in the market. Cloudscaling has also promised support for key Google Compute Engine APIs.

Per Bias’ blog post:

I’ll reiterate again, since folks still sometimes get confused, I’m not advocating dropping the OpenStack APIs in favor of AWS.  I’m advocating embracing the AWS APIs, making them a first class citizen, and viewing AWS as a partner, not an enemy.  A partner in making cloud big for everyone.

His plan is to improve upon the existing Nova EC2 API — actually build it from scratch — and ask the community to test it out and support it. His rationale? People are using Amazon’s cloud and OpenStack needs to attract those people.

Bias used a chart from the November OpenStack user survey (which had 669 respondents) to illustrate his point. Nearly half of users surveyed use the EC2 compatibility API in production, 38 percent use it in development/quality assurance and 38 percent use it in proof-of-concept projects. By contrast, just four percent said they used the Open Cloud Computing Interface in production, one percent in dev/QA and seven percent in proof of concept trials.

Compatibility APIs

If you want the back story of the great API kerfuffle, check out this YouTube video of a debate between Bias, Mirantis co-founder Boris Renski and others.


Amazon continues to reach into your server room

Microsoft already has a huge presence in most companies’ server rooms. And it hopes to keep it that way, or at least to persuade those companies running Windows Server, SQL Server et al. to opt for Microsoft Azure when the time comes to move to the cloud. Amazon, of course, has another plan for those workloads.

Last week, Amazon Web Services launched an update to a previously announced System Center Virtual Machine Manager add-in that will enable your admin, in layman’s terms, to vacuum those Windows workloads into the AWS public cloud. System Center is Microsoft’s management console for Windows environments, analogous to VMware’s vCenter. The original AWS add-in, announced last fall, let admins manage both their on-prem Windows instances and their AWS EC2 instances “out there” from the same console.

I love how The Register characterized this sleight of hand:

“As of Wednesday, that plugin can import Windows virtual machines from on-premises bit barns into EC2. [company]Amazon[/company] reckons it takes just a few clicks and – POOF! – VMs disappear into the cloud.”

Remember, Amazon is also wooing [company]VMware[/company] admins with a portal that lets VMware vCenter users manage both in-house VMware workloads and AWS instances from one place that looks and feels familiar to them.

I think Amazon’s enterprise workload ambitions are even grander than it has already signaled. Here’s betting that the Service Catalog announced in November, which ensures that only authorized users can access a given AWS service, will eventually extend to managing third-party application access on premises as well.

Conspiracy theory? Maybe. But that doesn’t make it wrong.

New managed VMware cloud from Rackspace

In other cloud news last week, [company]Rackspace[/company], one of the original OpenStack powers, announced a “Dedicated VMware vCloud” as one of its managed private cloud options.

The new [company]VMware[/company] menu option was described as:

“a single-tenant, hosted environment that enables enterprises to take the next step in their virtualization journey by offering advanced automation, self-service, hosted catalogs and access to the vCloud API and vCloud web portal.”

This was reported as a sort of shocker since OpenStack originally launched — 5 years ago? — as a counterweight to VMware and AWS, but in reality, Rackspace has a long history of offering VMware-based infrastructure. And, on the flip side, VMware joined the OpenStack Foundation in 2012 by virtue of its purchase of Nicira.

Oh, and don’t forget Rackspace also operates and manages [company]Microsoft[/company]-based private clouds. So this continues Rackspace’s effort to position its service and support across several core infrastructures as a differentiator.

Structure Show: Defending the data scientist

Hilary Mason was chief data scientist at, data scientist in residence at Accel Partners and is now CEO of research company Fast Forward Labs, so it’s probably not a shocker that she thinks the title of “data scientist” remains valid. Here’s her take on why that is and what it takes to move big data concepts from theory to real-world application.

And, to hear more on these topics and others from Mason and a bunch of other data brainiacs, come to Gigaom’s Structure Data conference that takes place March 18-19 in New York.

[soundcloud url=”” params=”color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false” width=”100%” height=”166″ iframe=”true” /]



Hosts: Barb Darrow and Derrick Harris.

Download This Episode

Subscribe in iTunes

The Structure Show RSS Feed


Here we go again: 5 key questions for patent reform in 2015

Will the third time be the charm? In the last five years, Congress has twice tried to fix the country’s dysfunctional patent laws only to see those efforts founder at the hands of shrewd lobbying by reform opponents.

Now, lawmakers are at it again, vowing to cut down the patent trolls who have made a mockery of a system that is supposed to promote innovation, instead turning it into a tool for economic extortion. Here’s a short look at the story so far, plus five factors that will determine whether this year’s patent reform effort will fare any better than the 2011 and 2014 attempts — and a prediction of how it will all turn out.

A short, unhappy history of patent reform

Patents became a major problem in the early 2000s with the rise of so-called patent trolls, which are companies that don’t make tech products or provide services, but instead acquire old intellectual property and threaten expensive lawsuits against those that do.

The trolls (who prefer to call themselves “non-practicing entities,” or NPEs) soon grew rich by exploiting an economic asymmetry in federal law that makes it relatively cheap and risk-free to file a patent lawsuit but ruinously expensive to defend one. As a result, many companies have chosen to simply hold their noses and pay the trolls — those who didn’t often landed in the patent swamps of East Texas, where lawyers and plaintiff-friendly juries have built a cottage industry based around multimillion-dollar verdicts. The trolls’ recent scalps in Texas include Google, which faces an $85 million jury award over push notifications on smartphones, and comedian Adam Carolla, who was browbeaten into a settlement by a troll that claims to own podcasting.

The growing economic toll of the trolls, which came to target everyone from big tech companies to small coffee shops, eventually led to calls for Congress to pass laws to stop them. Proposed remedies included fee-shifting, which would undercut the economic imbalance that makes trolling so lucrative, and the creation of expedited review procedures to challenge the validity of so-called “business method” patents, which the Patent Office began issuing by the thousands after 1998, and which can grant 20-year monopolies on basic business practices.

While Congress did pass a reform law in 2011 known as the America Invents Act, it had virtually no effect, since lobbyists for patent owners had gutted almost every key provision by the time President Obama signed it into law. Indeed, after 2011, the scale of patent trolling actually increased to the point where it became a source of national notoriety through mainstream media exposés, including a landmark radio documentary titled “When Patents Attack.”

As a result, Congress tried again in 2014 and came close to achieving meaningful reform with a proposed law called the Innovation Act, which passed the House of Representatives by a large margin, and which enjoyed bipartisan support from influential Senators like John Cornyn (R-Tx) and Chuck Schumer (D-NY), as well as President Obama. The law foundered last spring, however, when Sen. Patrick Leahy (D-Vt) abruptly cancelled a key vote. Leahy never offered an explanation for his decision to pull the plug, though it’s rumored he did so in order to win favor from trial lawyers and other key Democratic constituencies ahead of last year’s mid-term elections.

Now, patent reform is brewing in Congress for a third time. Last week, an unusually broad coalition of tech companies and main street retailers announced a campaign to “take back our system from trolls,” and the wind appears to be in their sails thanks to support from the Republican-controlled Congress and the White House.

While a proposed bill is expected to arrive next month, skeptics who have seen this movie before may wonder if patent reform will go 0-for-3 — either by failing to pass, or suffering an Innovation Act-style gutting. It’s too soon to know, but here is what will determine the answer:

5 questions that will make or break patent reform in 2015

1. Will there be one reform bill — or more?

Despite bipartisan support for “patent reform,” lawmakers in 2014 offered up a potpourri of different bills that drew supporters in different directions. This played into the hands of patent trolls, who were able to claim the mantle of “reform” for themselves by supporting the weaker legislation, which offered only cosmetic changes and none of the measures (like fee-shifting or discovery reform) that would threaten their operations.

2. Will tech and retail stick together?

On previous occasions, opponents have been able to portray patent reform as a pet project of Silicon Valley, and suggest reformers were no more than slick tech villains looking to ride roughshod over inventors.

Now, as patent trolls present a growing burden to the likes of restaurants and retailers, companies like Macy’s and JC Penney are standing side by side with big tech names like Google, Adobe and Oracle. According to a person close to the campaign, the tech and retail companies have agreed to an all-or-nothing approach, and committed to seven core reform principles as a condition of membership. But it remains to be seen if this will hold up once the lobbying dollars start flying around.

3. Will anyone fall for the “good trolls” versus “bad trolls” distinction?

In recent months, the strategy of big players in the patent troll space has become clear: head off reform by drawing a distinction between themselves and the small-time shakedown players who have been targeting mom-and-pop coffee shops. In the case of Intellectual Ventures, which is the largest and most famous NPE/patent troll, the company has been scrambling to create associations with startups and charities in an effort to downplay its core business.

Likewise, in an interview late last year, the CEO of Finjan Holdings — which looks, walks and talks like a patent troll — assured me that his company was not a patent troll, but that its reputation has been harmed as a result of people associating it with “bad actors.” Whether lawmakers will appreciate this distinction, or if they will continue to swallow the trolls’ “be careful not to harm innovation” shtick, is an open question.

4. Will Apple step up?

While tech companies like Google and Rackspace have been at the forefront of patent reform, Apple has been less vocal — even as it has groused about being trolls’ very favorite target. So far its name is absent from the list of tech giants, including Facebook and Amazon, that are anchoring the new “United for Patent Reform” coalition.

If Apple goes all-in pushing for reform, the iPhone maker’s powerful reputation among inventors and consumers could persuade any wavering lawmakers to drive a fatal stake into the patent trolls.

5. Will pharma stay on the sidelines?

In the past, the pharmaceutical industry has been one of the most powerful opponents of patent reform, on the grounds that it could weaken incentives to develop new drugs. This has been a sticking point for reform because the justification for patents in pharma, where innovation is slow and incredibly expensive, is much different than in tech, where innovations are often obsolete in a year or two.

This time, however, the source familiar with the patent coalition said that the pharma industry may stay out of the legislative debate — so long as the drug companies feel comfortable the measures are aimed at patent trolls and not pills.

Is reform for real? Handicapping the 2015 outcome

Patent reform proponents are optimistic 2015 is their year. Of course, this was also the case last year when the Innovation Act was one of the few pieces of bipartisan legislation that people predicted could pass in a dysfunctional Congress.

The difference this time, however, is that it will be harder for other Senate Democrats to throw wrenches in the process.

But the best indication that this really could be the year for genuine patent reform may come from Erich Spangenberg, a notorious patent troll who boasted to the New York Times in 2013 about how he likes to “go thug” on those who resist his licensing demands.

Early this year, Spangenberg blogged that 2015 would be the worst year yet for his much-maligned industry. Many companies and consumers, who pay higher prices due to the trolls, no doubt hope he’s right. My own prediction is that Congress will pass some sort of reform, but that real reform — which must include fee-shifting and an end to discovery abuse — is still a crapshoot.

For cloud players, hot patching may be hotter than live migration

Late last year, the world got a good look at the challenges and pain associated with the kernel maintenance required of cloud providers. A security vulnerability in the Xen hypervisor required immediate and unprecedented infrastructure updates, and the ensuing “great cloud reboot” impacted a huge swath of cloud providers — including portions of both Amazon Web Services and Rackspace — and the hundreds of thousands of customers running on that infrastructure.

Reaction was swift on [company]Twitter[/company] and in blogs, suggesting these providers were poorly prepared for such an update. Critics suggested that they instead should have utilized a technology known as Live Migration to avoid having to reboot individual workloads on hypervisors.

Unfortunately, the operational reality is that live migration wouldn’t have saved the day. Live migration is an attractive feature — it appears to solve all kinds of administrative woes. But in a scenario where there’s a major security vulnerability like a hypervisor breakout, live migration can’t physically overcome the challenge of data gravity to avoid the system-wide reboots we experienced.

As we continue to move more workloads to cloud infrastructure, cloud operators need to find a solution. Fortunately, one already exists: hot patching. Let’s start with some definitions.

Live migration moves a virtual machine from one physical host to another using virtual memory streaming, thus avoiding a reboot. Assuming there are no hiccups in that process, the end user should experience no downtime and, at worst, a slight pause in the workload.
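
The “memory streaming” step is typically an iterative pre-copy: copy all of the guest’s memory pages, then re-copy the pages the guest dirtied in the meantime, repeating until the remaining dirty set is small enough to move during a brief pause. A simplified simulation of that loop (the page counts, dirty rate and threshold are made-up illustrative values, not real hypervisor figures):

```python
# Simplified simulation of pre-copy live migration.
# Page counts, dirty rate and pause threshold are illustrative only.
import random

random.seed(42)

def precopy_migrate(total_pages, dirty_rate, pause_threshold):
    """Return (copy rounds needed, pages left for the final pause-copy)."""
    dirty = set(range(total_pages))   # round 1: every page is "dirty"
    rounds = 0
    while len(dirty) > pause_threshold:
        rounds += 1
        copied = len(dirty)
        # While we were copying, the guest dirtied a fraction of pages again.
        dirty = set(random.sample(range(total_pages),
                                  int(copied * dirty_rate)))
    # Final round: pause the VM briefly and copy the small remaining set.
    return rounds, len(dirty)

rounds, final_copy = precopy_migrate(total_pages=100_000,
                                     dirty_rate=0.2,
                                     pause_threshold=500)
print(rounds, final_copy)  # 4 160
```

The loop converges only when the guest dirties memory more slowly than the network can copy it, which is one reason a busy VM can be hard to live-migrate at all.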

Kernel hot patching is the practice of applying dynamic kernel updates without rebooting the underlying system. Like live migration, this process shouldn’t impact the end user as it happens; however, because the patch is changing running code in the kernel, there’s a potential risk of system instability.
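
Conceptually, a kernel hot patch redirects calls from the vulnerable function to a fixed replacement while the system keeps running. A rough userspace analogy in Python (real kernel hot patching, as in Ksplice-style tools, rewrites running kernel code and is far more delicate; this only illustrates the call-redirection idea):

```python
# Userspace analogy for hot patching: swap a buggy function for a fixed
# one at runtime, without restarting the "system" that calls it.

def validate_input_buggy(data):
    # Bug: accepts oversized input (stand-in for the security flaw).
    return len(data) <= 10_000

def validate_input_fixed(data):
    # Patched version with the correct bound.
    return len(data) <= 1_000

# The running system calls through an indirection point, so the
# implementation can be replaced while calls keep flowing.
handlers = {"validate": validate_input_buggy}

def handle_request(data):
    return handlers["validate"](data)

oversized = "x" * 5_000
print(handle_request(oversized))    # True: the bug is live

handlers["validate"] = validate_input_fixed   # "hot patch" applied
print(handle_request(oversized))    # False: fixed, no restart needed
```

The instability risk mentioned above comes from doing this swap against code that is mid-execution, with callers that may hold state created by the old version.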

The success of VMware’s vMotion has popularized the notion of live migration, but vMotion performs best when the virtualization environment runs from a storage area network (SAN). Because the data resides on the SAN itself and never needs to be copied over the wire, live migration with vMotion and a SAN is a data-light process. This is why SAN-backed vMotion can happen so quickly.


… but it’s also a boondoggle for cloud operators at scale

The problem is that few cloud providers (there are exceptions) run local virtual machines from a SAN. Doing so has a number of drawbacks, including centralizing performance bottlenecks and increasing the blast radius in the event of an outage. Instead, the majority of cloud providers deploy virtual machines with storage sitting on-chassis, alongside the compute host running the actual virtual machine.

The use of live migration falls short for cloud providers for several reasons:

  • The weight of data (data is heavy)
  • The speed at which data can be moved
  • A limited capacity within the cloud fleet (i.e. servers)
  • The necessity of “leap-frogging” (moving data from host to host in succession to avoid exceeding capacity in any one host)
  • The time that successful “leap-frogging” would demand
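
The “weight of data” point is easy to quantify with a back-of-envelope calculation (the host size, VM count and link speed below are illustrative assumptions, not any provider’s real numbers):

```python
# Back-of-envelope: time to evacuate local VM storage from one host.
# All figures are illustrative assumptions.

vms_per_host = 20
gb_per_vm = 100                       # local (on-chassis) disk per VM
link_gbps = 10                        # network link, gigabits per second

total_gb = vms_per_host * gb_per_vm   # 2,000 GB to move off the host
total_gigabits = total_gb * 8
seconds = total_gigabits / link_gbps  # best case: link fully saturated

print(f"{seconds / 60:.0f} minutes per host")  # 27 minutes per host
```

Roughly half an hour per host in the best case, multiplied across thousands of hosts that also need spare capacity to receive the evacuated VMs, is why leap-frogging a whole fleet before a disclosure deadline doesn’t pencil out.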

Those factors aside, live migration can be a complementary tool for cloud providers. The practice can be effective when deployed to fix a single machine, re-balance capacity or perform general maintenance.

So if live migration can’t deliver what cloud operations need, what is the alternative?

Solving cloud operators’ problems with hot patching

Kernel hot patching lets a provider patch security vulnerabilities in real time on running hosts without the need to move data or virtual machines off the system. Of course, hot patching isn’t a perfect solution, either. There are no open-source options available today — and that alone limits the accessibility of the technology.

Oracle acquired Ksplice in 2011 and initially shut down the service, but subsequently reintroduced it. KernelCare is another commercial option, but the reality is that companies rely either on in-house engineering to craft and implement these patches, or on external providers and the whims of their business models. Additionally, those commercial offerings must support the specific kernels a provider uses (for example, KernelCare doesn’t support Ubuntu Linux).

At the end of the day, when it comes to the cloud, there’s no one right answer to the live migration vs. hot patching debate. The acquisitions of Ksplice and of GridCentric (a live migration company focused on KVM that was bought by [company]Google[/company]) affirm this belief. As both methodologies have merit, I believe both will get continued investment and interest. And don’t be surprised if you hear the big cloud operators talking more about hot-patching technology in the coming year.

Jesse Proudman is founder and CTO of Blue Box Group.

Rackspace to users: It’s time to move to next-gen cloud

You know how hard it is to let go of things. That ancient high school sweatshirt, the NetWare server in the closet … But at some point you have to say goodbye. And that’s what Rackspace is telling customers who still run workloads on its old (um, venerable) Slicehost-based First Generation Cloud servers. This year, those people will have to migrate to the company’s shinier, newer OpenStack-based Next Generation Cloud.

This month, [company]Rackspace[/company] will begin notifying those customers 30 days in advance of when their servers become eligible for migration. When the time comes, customers can migrate the servers themselves or let Rackspace do it for them. This is all according to an email already sent to some customers and confirmed by Rackspace as legitimate.

“If you choose not to self-migrate, Rackspace will migrate your First Generation servers on your behalf at the end of the self-migration window,” according to the letter.

Once everything has been moved, interactions with those servers will happen via a new API.

Rackspace bought Slicehost in 2008 to better compete with [company]Amazon[/company] Web Services. In 2011, a Rackspace exec said Slicehost customers would be moved to Rackspace servers over the course of that year. That, apparently, was to convert the branding, but some underlying technology lived on in those first-gen servers. And now it’s time to say goodbye.

Cloud in the city