Education lessons from the AWS Pop-up Loft

As the AWS Pop-up Loft closes after its most recent two-week stint, I thought I would catch up with Ian Massingham, AWS Technical Evangelist, to see how it had gone. To explain, the ‘loft’ is the ground floor of Eagle House, a converted office block on City Road, which runs up towards King’s Cross from the heart of London’s tech start-up scene, Old Street and the Silicon Roundabout.
The aim of the Loft — the clue’s in the term ‘pop-up’ — is to offer a temporary space to run an educational programme, aimed at organisations looking to use AWS technologies in anger. “It was never meant to be a long-term thing,” explains Ian. “We thought that by coming back periodically, we’d be able to connect with different cohorts of customers, at different points in their development.”
There’s an “Ask the Architect” (think: Genius) bar, a co-working space and a room for sessions, plus booths for support teams and training partners, who are on call to answer questions. The single-track timetable has been filled with back-to-back sessions on a wide range of topics, from IoT to machine learning, from introductory to deep-dive technical, and from shorter to longer formats, aimed at a variety of audiences.
So, what were my take-away thoughts? Interestingly, these were less about the topics themselves, and more about how they were delivered. The model is simple: you register, you come, you learn, you have the opportunity to ask questions and participate in workshops, chalk and talk sessions and hackathons. It’s been intense, but that was the plan, says Ian. “We’ve learned a lot from previous pop-ups, on how to make the best use of people’s time.” Not least that the content — educational content, that is — is king.
While this may appear self-evident, less clear is the importance that should be attached to providing a diverse range of materials. “You need to create the right interaction channels for different types of customers. While a large base of our customers expect to self-serve, others will want full support. And similarly some like to read documents, others like videos, others like classroom training, it’s up to us to be ubiquitous, so people won’t get unhappy even if the majority of content is not directly appropriate to their needs.”
Secondary plus points concerned the location (“Yes, sure, the location is important, we’re right in centre of startup community”), the food (“Developers run on beer and pizza”) and so on but these were seen as hygiene factors for the pop-up.
Formal feedback has not yet been collated, but the signs are good that the key goal of the event, to “get people productive on the platform,” was achieved. Just as important, if not more so, people got what they wanted and more. “I was just told, ‘This is great, I love it, it’s so convenient to engage with your architects.’ ”
The message, as I read it, was one that events of any size and scale could take away: whatever the format, make delivery of a range of excellent content, to fit a diverse audience, the primary goal. So, yes, context is important: nobody wants to travel to the back of beyond to attend an event of any form. But head and shoulders above this is the range and applicability of the content.
If this appears obvious, it raises a question — why do so many events, held in far more glitzy and, dare I say, exotic locations (sorry, Shoreditch), tend to forget this simple yet important truth? Just as software should not be developed without regard for its users, events should focus first and foremost on meeting the needs of their attendees. If Amazon Web Services, purveyor of online platforms that depend heavily on the self-service model, recognises this, then so should everybody else.

Seattle vs. San Francisco: Who is tops in the cloud?

In football, city livability rankings — and now in the cloud — San Francisco and Seattle are shaping up as fierce rivals.

Who’s winning? Seattle, for now. It’s due mostly to the great work, vision and huge head start of Amazon and Microsoft, the two top dogs in the fast-growing and increasingly vital cloud infrastructure services market. Cloud infrastructure services — also called IaaS, for Infrastructure as a Service — are the segment of the cloud market that enables dreamers, start-ups and established companies to roll out innovative new applications and reach customers anytime, anywhere, from nearly any device.

Amazon Web Services (AWS) holds a commanding 29 percent share of the market. Microsoft (Azure), is second, with 10 percent. Silicon Valley’s Google remains well behind, as does San Francisco-based Salesforce (not shown in the graph below).

[Chart: cloud leaders — IaaS market share]

The Emerald City shines

I spoke with Tim Porter, a managing director at Seattle-based Madrona Venture Group. Porter told me that “Seattle has clearly emerged as the cloud computing capital. Beyond the obvious influence of AWS and strong No. 2 (Microsoft) Azure, Seattle has also been the destination of choice for other large players to set up their cloud engineering offices. We’ve seen this from companies like Oracle, Hewlett-Packard, Apple and others.”

Seattle is also home to industry leaders Concur, Chef and Socrata, all of which can exist only thanks to the cloud, and to 2nd Watch, which exists to help businesses successfully transition to the cloud. Google and Dropbox have also set up operations in the Emerald City to take advantage of the region’s cloud expertise. Not surprisingly, the New York Times said “Seattle has quickly become the center of the most intensive engineering in cloud computing.”

Seattle has another weapon at its disposal, one too quickly dismissed in the Bay Area: stability. Washington enforces non-compete clauses more strictly than California, preventing some budding entrepreneurs from leaving the mother ship to start their own companies. Such laws can lead to larger, more stable businesses, with the same employees interfacing with customers over many years. In the cloud, dependability is key for customers, many of whom are still hesitant to move all their operations off-premises.

Job hopping is also less of an issue. Jeff Ferry, who monitors enterprise cloud companies for the Daily Cloud, told me that while “Silicon Valley is great at taking a single idea and turning it into a really successful company, Seattle is better for building really big companies.”

The reason for this, he said, is that there are simply more jobs for skilled programmers and computing professionals in the Bay Area, making it easier to hop from job to job, place to place. This go-go environment may help grow Silicon Valley’s tech ecosystem, but it’s not necessarily the best environment for those hoping to create a scalable, sustainable cloud business. As Ferry says, “running a cloud involves a lot of painstaking detail.” This requires expertise, experience, and stability.

San Francisco (and Silicon Valley)

The battle is far from over. The San Francisco Bay Area has a sizable cloud presence, and it’s growing. Cisco and HP are tops in public and private cloud infrastructure. Rising star Box, which provides cloud-based storage and collaboration tools, started in the Seattle area but now has its corporate office in Silicon Valley. E-commerce giant Alibaba, which just so happens to operate the largest public cloud services company in China, recently announced that its first cloud computing center would be set up in Silicon Valley.

That’s just for starters.

I spoke with Byron Deeter, partner at Bessemer Venture Partners (BVP), which tracks the cloud industry. He told me that the five largest “pure play” cloud companies by market cap are all in the Bay Area: Salesforce, LinkedIn, Workday, ServiceNow and NetSuite.

The Bay Area also has money. Lots of money. According to the National Venture Capital Association, nearly $50 billion in venture capital was invested last year. A whopping 57 percent went to California firms, with San Francisco, San Jose and Oakland garnering a rather astounding $24 billion. The Seattle area received only $1.2 billion.

[Chart: venture capital by region]

The Bay Area’s confluence of talent, rules and money will no doubt continue to foster a virtuous and self-sustaining ecosystem, one that encourages well-compensated employees to leave the nest, start their own business, and launch the next evolution in cloud innovation. If Seattle has big and focused, San Francisco has many and iterative.

The cloudy forecast

Admittedly, this isn’t sports. There’s no clock to run out and not everyone keeps score exactly the same. Just try to pin down Microsoft’s Azure revenues, for example. It’s also worth noting that the two regions do not compete on an even playing field. Washington has no personal or corporate income tax, and that is no doubt appealing to many — along with the mercifully lower price of real estate, both home and office.

The cloud powers healthcare, finance, retail, entertainment, our digital lives. It is increasingly vital to our always-on, from-anywhere economy, and a key driver of technical and business-model innovation. If software is eating the world, the cloud is where it all goes to get digested. Here’s hoping both cities keep winning.

Why boring workloads trump intergalactic scale in HP’s cloud biz

Although having a laugh at so-called “enterprise clouds” is a respected pastime in some circles, there’s an argument to be made that they do serve a legitimate purpose. Large-scale public clouds such as Amazon Web Services, Microsoft Azure, and Google Compute Engine are cheap, easy and flexible, but a lot of companies looking to deploy applications on cloud architectures simply don’t need all of that all of the time.

So says Bill Hilf, senior vice president of product management for Helion, HP’s label for its cloud computing lineup. He came on the Structure Show podcast this week to discuss recent changes in HP’s cloud product line and personnel, as well as where the company fits in the cloud computing ecosystem. Here are some highlights of the interview, but anyone interested in the details of HP’s cloud business and how its customers are thinking about the cloud really should listen to the whole thing.

[Podcast audio: https://api.soundcloud.com/tracks/194323297]

Amazon matters . . . and so does everything else

“First and foremost, our commitment and focus and investment in OpenStack hasn’t changed or wavered at all,” Hilf said. “It’s only increased, frankly. We are fully committed to OpenStack as our core infrastructure-as-a-service platform.” HP has been a large backer of the open source project for years now, and was building out an OpenStack-based cloud platform exclusively before acquiring Eucalyptus and its Amazon-Web-Services-compatible cloud technology in September.

However, he added, “As we started working with customers around what they were looking for in their overall cloud environment, we did hear the signal loud and clear that the AWS design pattern is incredibly relevant to them.” Oftentimes, he explained, that means either bringing an application into a private cloud from Amazon or moving an application from a private cloud into Amazon.


Hilf thinks vendors targeting enterprise customers need to make sure they’re selling enterprises what they actually want and need, rather than what’s technologically awesome. “Our approach, from their feedback, is to take an application-down approach, rather than an infrastructure-up approach,” he said. “How do we think about a cloud environment that helps an application at all parts of its lifecycle, not just giving them the ability to spin up compute instances or virtual machines as fast as possible.”

Below is our post-Eucalyptus-acquisition podcast interview with Hilf, former Eucalyptus CEO Marten Mickos and HP CTO Martin Fink.

[Podcast audio: https://api.soundcloud.com/tracks/167435404]

Enterprise applications might be boring, and that’s OK

Whatever HP’s initial promises were about challenging Amazon or Microsoft in the public cloud space, that vision is all but dead. HP still maintains a public cloud, Hilf explained, but does so as much to learn from the experience of managing OpenStack at scale as it does to make any real money from it. “It not only teaches us, but allows us to build things for people who are going to run our own [private-cloud] products at scale,” he said.

But most of the time, he said, the companies that are looking to deploy OpenStack or a private cloud aren’t super-concerned with concepts such as “webscale,” so it’s not really in HP’s financial interests to go down that path:

“[W]e don’t have an intention to go spend billions and billions of dollars to build the infrastructure required for, let’s say, an AWS or an Azure. . . . . It’s not because ‘Oh, we don’t want to write a billion-dollar check,’ it’s because [with] the types of customers we’re going after, that’s not at the top of their priority list. They’re not looking for a hundred thousand servers spread across the globe. . . . Things like security are much higher on their list than the intergalactic scale of a public cloud.”

Hilf added:

“What we typically hear day-to-day, honestly, is actually pretty unexciting and mundane from customers. They’re not all trying to stream the Olympics or to build Netflix. Like 99 percent of the enterprise in the world are doing boring things like server refreshes or their lease in a data center is expiring. It’s really boring stuff, but it matters to them.”

“If a customer came to me and said, ‘Hey I need to spin up a billion instances to do whatever,'” he said, “. . . I’d say, ‘Go talk to AWS or Azure.’”

Get over the talk about lock-in

Despite the fact that it’s pushing a lineup of Helion cloud products that’s based on the open source OpenStack technology, Hilf is remarkably realistic about the dreaded concept of vendor lock-in. Essentially, he acknowledged, HP, Amazon and everyone else building any sort of technology is going to make a management interface and experience that’s designed to work great with their particular technology, and customers are probably going to be running multiple platforms in different places.

Hilf thinks that’s a good thing and the nature of business, and it provides an opportunity for vendors (like HP, coincidentally) with tools to help companies get at least some view into what’s happening across all these different platforms.

“People often use the term ‘lock-in’ or ‘proprietary.’ I think the vendors get too wrapped up in this,” he said. “The enterprise is already through the looking glass. They all know they’re going to have some degree of lock-in, it’s just where.”

Microsoft faces specter of shelfware in the cloud era

The notion that pay-as-you-go cloud computing will eliminate shelfware — paid-for but unused computing resources — has always been suspect. Last year I wrote that the proliferation of unused compute instances — zombie resources that are nominally active but not doing productive work — could be a big problem for cloud vendors as customers smarten up.

Another type of shelfware is a cloud service that is purchased but never actually deployed, and that’s something Microsoft is facing with Azure.

A Business Insider report this week noted that Microsoft sales teams are under pressure not just to sell Azure — usually in conjunction with a broader enterprise license — but also to make sure customers actually use it. To be fair, Microsoft has been aware of this issue for some time and last summer ended an Azure discount program that exacerbated the shelfware problem.

A long-time Microsoft partner told me at the time that the company was pushing its sales force hard “to drive utilization, not just revenue.”

The problem was that once Microsoft field sales sold a pre-paid Azure contract, there was zero incentive for them to make sure the customer put those resources to work. And that’s a problem as companies start scrutinizing what they have rights to and what they’ve actually deployed. Eventually the bean counters will start wondering about the value of those license agreements.

Another long-time Microsoft partner told me this week that he knows of lots of customers with tens of thousands of dollars’ worth of Azure licenses who are not running Azure at all. And that brings us back to the BI report, which shows that little progress has been made in the past six months. According to BI:

Microsoft has been structuring deals that give away access to Azure, its cloud competitor to Amazon Web Services, for little to no extra cost to some customers who have no plans to use it. It has been counting some revenue from those deals for its cloud, but if they don’t actually use the cloud, that revenue won’t continue.

A Microsoft spokesman said the company sees “strong usage of Microsoft Cloud services by businesses of all sizes” and that more than 60 percent of Azure customers use at least one premium service, such as media streaming. He also noted that more than 80 percent of Office 365 enterprise customers run two or more workloads.

I’m not sure that really resolves the question but in any case, shelfware is an issue for all cloud providers as customers get more savvy about what they’re actually paying for and using. Or not using.

Last week, a Wall Street Journal report on the “hidden waste and expense of cloud computing” (paywall) pointed out that C-level execs are increasingly worried about idle cloud resources and are looking to what cloud pioneers like Netflix have done to optimize their cloud computing resources. Netflix, for example, has technology that shuts off resources automatically when they’re not needed.

Others turn to third-party tools from Cloudyn, Cloudability and Krystallize Technologies to minimize waste.

As one commenter to the Journal story pointed out, the secret to minimizing waste is to keep tabs on what you spin up.  “The minute you turn on a process it’s going to cost money,” he noted. Other AWS shops have said that Amazon’s own Trusted Advisor and Cost Explorer dashboards have gotten much better over time, eliminating much of the need to keep spreadsheets to track usage.
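
To make that concrete, here is a minimal sketch of the kind of housekeeping such tools automate: find instances whose recent CPU utilization has stayed negligible and stop them. This is an illustration rather than any vendor’s actual implementation — the boto3 calls are standard, but the 2 percent threshold, the 24-hour lookback and the stop-rather-than-terminate policy are assumptions you would tune for your own environment.

```python
import datetime

import boto3

# Illustrative thresholds -- tune for your own environment.
CPU_THRESHOLD = 2.0   # percent average CPU below which an instance counts as idle
LOOKBACK_HOURS = 24

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")


def idle_instances():
    """Yield IDs of running instances whose hourly average CPU never exceeded the threshold."""
    now = datetime.datetime.utcnow()
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            datapoints = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=now - datetime.timedelta(hours=LOOKBACK_HOURS),
                EndTime=now,
                Period=3600,
                Statistics=["Average"],
            )["Datapoints"]
            if datapoints and max(p["Average"] for p in datapoints) < CPU_THRESHOLD:
                yield instance_id


if __name__ == "__main__":
    idle = list(idle_instances())
    if idle:
        print("Stopping idle instances:", idle)
        ec2.stop_instances(InstanceIds=idle)
```

Run on a schedule, even something this simple catches the zombie instances the Journal piece worries about; the commercial tools layer reporting, recommendations and guardrails on top.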

This story was updated at 10:30 a.m. PST with additional Microsoft partner comment and again at 12:30 p.m. PST with Microsoft comment.

Google gets chatty about live migration while AWS stays mum

On Monday, Amazon wanted us to know that its staff worked day and night to avert planned reboots of cloud instances and updated a blog post to flag that information. But it didn’t provide any specifics on how these live updates were implemented.

Did Amazon use live migration — a process in which the guest OS is moved to a new, safe host? Or did it use hot patching, in which dynamic kernel updates are applied without screwing around with the underlying system?

Who knows? Because Amazon Web Services ain’t saying. Speculation is that it used live migration — even though AWS proponents last fall insisted that live migration per se would not have prevented the Xen-related reboots it launched at that time.

But where AWS remains quiet, Google, which wants to challenge AWS for public cloud workloads, was only too glad to blog about its live migration capabilities launched last year. Live migration, it claimed on Tuesday, prevented a meltdown during the Heartbleed vulnerability hullabaloo in April.

Google’s post is replete with charts and graphs and eight-by-ten glossies. Kidding about the last part but there are lots of diagrams.

A betting person might wager that Google is trying to tweak Amazon on this front by oversharing. You have to credit Google’s moxie here and its aspirations for live migration remain large. Per the Google Cloud Platform blog:

The goal of live migration is to keep hardware and software updated across all our data centers without restarting customers’ VMs. Many of these maintenance events are disruptive. They require us to reboot the host machine, which, in the absence of transparent maintenance, would mean impacting customers’ VMs.

But Google still has a long row to hoe. Last fall, when Google started deprecating an older cloud data center zone in Europe and launched a new one, there was no evidence of live migration. Customers were told to make disk snapshots and use them to relaunch new VMs in the new zone.

As reported then, Google live migration moves working VMs between physical hosts within zones but not between them. Google promised changes there too, starting in late January 2015 but there appears to be nothing new on that front as yet.

So let the cloud games continue.

 

If you thought cloud competition couldn’t get hotter, think again

Chinese e-commerce giant Alibaba has opened a data center hub in Silicon Valley, adding yet another gigantic player to a growing, but already hotly-contested cloud computing market.

Aliyun, Alibaba’s cloud computing arm, has been likened to Amazon.com’s Amazon Web Services unit, and you can bet that Amazon, as well as Google and Microsoft, are watching this development closely. Those American cloud giants are focused on boosting business and operations outside the U.S. — Microsoft and Amazon have a presence in China, for example — and now Aliyun will return the favor with its first U.S.-based data center.

The initial plan is for the Aliyun data center, the exact location of which was not disclosed, to target Chinese companies based in the U.S. and to expand from that base. In a statement Aliyun VP Ethan Sicheng Yu said:

… the ultimate objective of Aliyun is to bring cost-efficient and cutting-edge cloud computing services to benefit more clients outside China to boost their business development.

The U.S. expansion comes at an interesting time politically as well — relations are tense between the Chinese and U.S. governments, and each side has accused the other of spying and of using native tech companies to help in that effort.

Aliyun’s current data centers are in Hangzhou, Qingdao, Beijing, Shenzhen and Hong Kong.

Amazon hones its cloud update process

Remember that planned Xen-related reboot Amazon Web Services warned about last week? Well, things went better than planned, according to an updated blog post Monday.

The company said it was able to perform live updates on 99.9 percent of the affected instances, avoiding the need for a reboot altogether. Last Thursday, Amazon had said that it would need to reboot about 10 percent of total AWS instances to address a Xen security issue.

The ability of AWS to perform updates without shutting down and bringing back up compute instances comes as very good news to cloud users. And that’s true whether the technology used was a live migration, hot patching or maybe something else. The net result was the same: workloads were not interrupted.

The Xen-related security issue also affected Rackspace, Linode and IBM SoftLayer, all of which said they’re applying their own fixes before March 10, when more information about the vulnerability will be released.

Getting the real dope on your cloud deployment

Lots of companies can perform cost analysis of cloud-based instances; Krystallize Technologies promises to do more. The Austin, Texas–based startup said its technology delves deeper into what’s going on with cloud workloads and compares how a given job will do across Amazon, IBM SoftLayer, vCloud Air, Microsoft Azure, CenturyLink, etc. to help you decide where to run it.

If it works as advertised, it could be a big leg up for cloud deployers (or would-be cloud deployers) who are discovering that there is no one-size-fits-all cloud. There will be times when a private cloud running large instances is more cost-effective than a public cloud churning a ton of itty-bitty instances. The problem is that most of that is discovered now by trial and error — if it is discovered at all.

Krystallize CloudQOS, on the other hand, enables real capacity planning, according to founder and CEO Clinton France. To get to its CloudQOS index, Krystallize buys the instances in question and then applies its own technology.

“We’re the first to put a workload simulation engine and a performance statistic to measure what’s going on in the cloud,” France said in an interview.

I’d been hearing about Krystallize a bit already from people in the industry who were impressed with the pilot, which has been running for about six months. It also got some early press. One data center specialist was particularly impressed because Krystallize takes a lot of factors into account — the cloud resources used, how the components are integrated and how oversubscribed (or not) the hypervisor is in any given case.

Krystallize measures cloud environments down to a level that reveals the true performance of a cloud instance, this expert said. One example: “Most people don’t know that the internal clocks in cloud instances are not necessarily reality,” said the specialist, who did not want to be named because he works with a lot of the cloud providers.

“You can think of it as the clock on a Star Trek holodeck.  The clock in the holodeck can be slower or faster than real time.  To solve this, Krystallize adds its own clocking mechanism that runs in the cloud instance to get apples-to-apples comparison of what gets done over a period of time.”

The company can measure a given workload, with calculations per second on one axis and variability on the other, to give a better representation of what a customer can expect to get for its money. Great if a given cloud can claim a zillion calculations per second; not so great if it hits that mark only once in a while.
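
To illustrate what “calculations per second on one axis and variability on the other” looks like in practice, here is a minimal, hypothetical sketch — not Krystallize’s CloudQOS, just a generic timing loop around a stand-in compute kernel. It leans on Python’s perf_counter for timekeeping; as the specialist quoted above points out, a rigorous comparison would also check the instance’s clock against an external reference.

```python
import statistics
import time


def unit_of_work():
    """A stand-in compute kernel; any fixed, CPU-bound task will do."""
    return sum(i * i for i in range(10_000))


def measure(rounds=30, batch=200):
    """Return (mean calculations/sec, coefficient of variation) for this instance."""
    rates = []
    for _ in range(rounds):
        start = time.perf_counter()
        for _ in range(batch):
            unit_of_work()
        elapsed = time.perf_counter() - start
        rates.append(batch / elapsed)
    mean_rate = statistics.mean(rates)
    variability = statistics.stdev(rates) / mean_rate
    return mean_rate, variability


if __name__ == "__main__":
    rate, cv = measure()
    print(f"{rate:,.0f} calculations/sec, variability {cv:.1%}")
```

Running the same script on two differently priced instances gives exactly the kind of rough price/performance comparison described above: the mean tells you what the instance delivers, and the variability tells you how often it actually delivers it.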

In one analysis, Krystallize ran the same workload on a VMware vCloud Air private cloud and on the Amazon Web Services and Google public clouds. Given the specifics of this application, vCloud Air private showed the best performance for a given number of transactions (about 78,000 calculations per second) with fairly good variability. Google delivered just over 40K calculations with more variability and AWS performed just under 30K calculations but with less variability.

[Chart: Krystallize private vs. public cloud comparison]

In this case VMware looks best if the application requires all those transactions, but if the application only had to deliver 27,000 transactions or fewer, VMware as configured would use just a third of the resources allocated. This is why, to properly gauge performance, you must understand both the application’s workload requirements and the cloud’s capabilities, private or public, France said.

The company has been self-funded to date but just landed a $1.2 million seed round from several unidentified angel investors.

France said the beauty is that Krystallize can look at service performance over time and monitor it to make sure the customer gets what she’s paying for. One value is to provide a price/performance index; another is that Krystallize can help customers do “cloud pruning”: jettisoning resources that aren’t up to the task and redeploying workloads to resources that will handle them better.

That could help companies rid themselves of “shelfware,” or cloud instances that are still up there but not really doing anything.

“This is like going to the vegetable stand and instead of taking the stuff off the top, going in to find the better, fresher fruit that may be at the bottom,” France said.

Add IBM cloud to the list of reboots to come

The latest Xen hypervisor vulnerabilities are forcing IBM to reboot some customers’ cloud instances between now and March 10. The vendor sent out an alert to affected IBM SoftLayer customers on Friday, the same day Linode alerted its customers.

As reported, Amazon Web Services and Rackspace already posted news about the updates on Thursday night.

Per an IBM notice sent to customers, the company said it was “in the process of scheduling maintenance for patching and rebooting a portion of services that host portal-provisioned virtual server instances, virtual servers hosted on these servers will be offline during the patching and rebooting process.”

As with the other alerts, the maintenance will happen before March 10, when more details of the underlying Xen vulnerability will be disclosed. IBM promised more information when it becomes available and said it was working to minimize service disruptions.

For retailers the buy-or-build cloud decision looms large

If you need proof that cloud deployment stories can touch off religious disputes, my recent report about @Walmartlabs deploying 100K cores of OpenStack to run the retail giant’s e-commerce operations is Exhibit A.

This is, by any measure, a massive private cloud, and some readers were incredulous that Walmart would go this route instead of plying public cloud services. It’s the old build versus buy discussion all over again, with many of the participants weighing in on the “buy” side.

One reader termed this decision “ridiculous,” pointing out that @walmartlabs has hired 1,000 or so engineers over the past year — although no one said all those people were dedicated to building or maintaining the aforementioned OpenStack private cloud. Still, the argument is that if you go with public cloud, you won’t need to bring that much expensive talent in house. Engineering talent is pricey, especially in Silicon Valley, and @walmartlabs is headquartered in San Bruno, Calif.

His opinion is that a big retail outfit is far better off using “out of the box” public cloud capabilities for much of its work rather than reinventing the wheel (or building its own cloud). For this camp, Walmart’s decision to build a customizable and flexible cloud with OpenStack makes no sense.

On the other hand, private cloud (and OpenStack) proponents noted joyously that Walmart’s work proves “private cloud deniers” wrong. (Does anyone else find that phrase disturbing? It brings to mind thoughts of climate change and Holocaust deniers and seems to lack a sense of proportion — but back to the topic.)

Server Density CEO David Mytton, a buy sider, wrote about the Walmart private cloud here. Bottom line, he said Walmart is:

dedicating significant resources to building their own “private cloud” and although it’s true there is no specific vendor lock-in, they are locked into their own development. They’re competing in resources, talent and innovation against the public cloud providers (who have more resources to dedicate to engineering both product features and efficiency at scale).

Anybody but AWS?

Remember, given the competitive retail landscape, Walmart was hardly likely to run Amazon Web Services’ public cloud, seeing as Amazon.com is viewed as Darth Vader by much of the rest of the retail universe. Target used Amazon.com (not AWS) for infrastructure but left the fold in 2011.

AWS would likely point out, if it were prone to comment on such things, that its cloud business is run as a separate entity from Amazon.com — Netflix is a huge customer, after all, even though Amazon also runs Amazon Instant Video. But I’ve talked to other retailers who, off the record, will point to the political incorrectness of turning over key retail functions to Darth, er, AWS.

Jeff Aden, co-founder of 2nd Watch, a systems integrator that works with customers to deploy AWS, said his company has several retail customers running on AWS, including Diane Von Furstenberg. Other AWS retail users include Gilt.com and Nordstrom Rack.

Mytton conceded that AWS might be a tough sell for a big retailer to use, but why not throw in with Google Cloud Platform or Microsoft Azure? He points out that Ocado, the big British retailer, is a Google cloud customer.

Last week I spoke with Sudhir Hasbe, director of software engineering, BI and data services for Zulily, a members-only online fashion retailer that has fully embraced Google cloud services — BigQuery, Google Storage and Google Compute Engine. In this, Zulily is sort of a counter-narrative to the @Walmartlabs story.

Zulily puts 9,000 new items on its site daily but wants to make sure it displays only the items that are relevant and potentially of interest to a given shopper. A woman who shops for herself and maybe a 6-year-old boy, for example, will see options for those demographics and not have to wade through the rest. “Search doesn’t work well in retail,” Hasbe said.

“For this we need the full big data platform so we can perform maximum data processing — what preferences do they have, what do they like. It also means when you have that much data, the whole supply chain side needs to consume it to make decisions,” he noted.

What’s nice about deploying Hadoop clusters on GCE is that once the processing has run, the data is pushed into BigQuery, where it’s available to all the business units and analysts, and the bill for Hadoop processing stops. The data is all stored in inexpensive Google Storage.
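
As a rough sketch of that handoff — with made-up project, bucket and table names, and using the Python BigQuery client rather than whatever Zulily actually runs — the load step can be as simple as pointing BigQuery at the files the Hadoop job left behind in Google Storage:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical locations -- the bucket, dataset and table names are illustrative only.
source_uri = "gs://example-retail-analytics/daily_output/part-*.csv"
table_id = "example-project.merchandising.item_preferences"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # let BigQuery infer the schema from the files
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load the files produced by the Hadoop job straight from Google Storage.
load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()
print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```

Once the load job finishes, the Compute Engine cluster that produced the files can be torn down; analysts query the table in BigQuery, and ongoing costs drop back to storage rates.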

Anyway, feel free to comment on when and in what circumstances it makes sense to deploy public cloud or BYO private cloud. But please keep it polite.

Agree or not, Mark Cuban’s take on net neutrality is worth a listen

For those who missed it, Mark Cuban visited the Structure Show last week to reiterate and explain his thinking on net neutrality and why he thinks turning over internet governance to the FCC is a big mistake. Check it out below.

[Podcast audio: https://api.soundcloud.com/tracks/193100656]