Report: Hybrid application design: balancing cloud-based and edge-based mobile data

Our library of 1,700 research reports is available only to our subscribers, but we occasionally release one for our broader audience to benefit from. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports, and video coverage from our 2016 and 2017 events.
Hybrid application design: balancing cloud-based and edge-based mobile data by Rich Morrow:
We’re now seeing an explosion in the number and types of devices, the number of mobile users, and the number of mobile applications, but the most impactful long-term changes in the mobile space will occur in mobile data, as users increasingly interact with larger volumes and varieties of data on their devices. More powerful devices, better data-sync capabilities, and peer-to-peer device communications are dramatically changing what users expect from their apps and which technologies developers will need to use to meet those expectations.
As this report will demonstrate, the rules are changing quickly, but the good news is that, thanks to cross-platform tools like Xamarin and improved database-sync capabilities, the game is getting easier to play.
To read the full report, click here.

Report: How to resolve cloud migration challenges in physical and virtual applications

How to resolve cloud migration challenges in physical and virtual applications by Paul Miller:
Enterprise IT infrastructure largely predates the emergence of cloud computing as a viable choice for hosting mission-critical applications. Although large organizations are now showing real signs of adopting cloud computing as part of their IT estate, most cloud-based deployments still tend to be either for new and self-contained projects or to meet the needs of traditional development and testing functions.
Compatibility, interoperability, and performance concerns have kept IT administrators from being completely comfortable with the idea of moving their complex core applications to the cloud. And without a seamless application migration blueprint, the project can seem more of a headache – and risk – than it’s worth. This report highlights, for systems administrators, IT directors, cloud architects, and decision-makers at Software as a Service (SaaS) companies and cloud service providers, the different approaches they can take in moving existing applications to the cloud.
To read the full report, click here.

VMware on AWS is really cool!!! (or not?) #VMWonAWS

Just a few days before VMworld, VMware announced its VMware on AWS partnership. I struggled a bit to understand what it was and how it worked, but if I’m right… this is another attempt by VMware to be more relevant in the cloud, and, at the same time, it looks like another major validation for AWS.

VMware wants to be cloudier

Long story short, VMware is not perceived as a cloud player, so by associating its name with AWS it hopes to change that. Technically speaking, they did a great job by bringing all the ESXi-based products and management suites onto the AWS infrastructure. All the VMware experience, as is, in the cloud… WOW!!! There is no cross-platform compatibility though; you’ll just have your VMware environment running next to an AWS region. And being close means less latency and easier data mobility too, which could be another benefit from the TCO point of view.
From the end-user standpoint this is great, and should be a significant benefit. You can have your services on-premises and in the cloud without touching a line of code, an application, or even a process, and it can all be managed by the same sysadmins… while your cloud-based applications get access to the next-generation services and APIs provided by Amazon AWS.
The price component could be a deterrent, but it’s also true that VMware will take care of everything under the hood (including ESXi/vSphere patching, etc.); you just have to manage your VMs, networks (NSX is an option) and applications. Slick and neat!

Strategy and tactic

The beauty of this service is also the real risk for VMware – in the long term at least. By adopting this service, the end user gets the easiest possible path from a VMware environment into AWS. There is no real compatibility between the two, but moving services and applications will become easier over time… and it’s supported by VMware.
The end user is now free to choose the best strategy for the smoothest path without limitations or constraints. After all, nothing was added to VMware in terms of services, APIs or anything else, it’s just the classic VMware infrastructure moved to the cloud as is. Easy and great from the tactical point of view, but in the long run they’ll just be helping AWS do more business…

This is yet another gateway to AWS

Don’t get me wrong, I like this VMW-on-AWS thing… but it really looks like another gateway to greater AWS consumption. If your core applications reside on a VMware-based infrastructure, you are just moving them closer to AWS while removing any latency barrier. Today, a developer will be able to design a new application that accesses your old database stored in a VMware VM (which also serves other legacy applications)… tomorrow that database will be migrated to AWS, and once it is, I’m not so sure you’ll still need the old VMware environment, will you? This is the worst-case scenario (for VMware), of course, but I can’t see many positive outcomes for VMware in the long term. Please share your comments if you have a different point of view.

Closing the circle

I can see the advantages for both AWS and VMW with this partnership. But in the long term, AWS will most likely benefit the most and be in the better position.
VMW is further validating AWS in the enterprise (was it necessary?), while this could be the first time VMware gets it right with the cloud.
It remains to be seen whether this move will pay off for VMware in the long term… whether this is the first step towards something bigger that we don’t know about yet or just a tactical shift to buy some time and try to stay relevant while deciding what to do in a cloudy future…
Other questions arise – vCloud Air is already dead, but what about vCloud Director? Will VMW-on-AWS functionality be ported to vCloud Director, for example? I’m curious to see the reaction of VMware-based service providers to this news, and how their strategy will change now that their competitors are VMW and AWS together!

Originally posted on Juku.it

Jay Greene on Cloud Computing May Be Hampering Tech Spending

The transition to cloud computing – its current snail’s pace doesn’t warrant the ‘transformation’ rhetoric – may be getting a goose from tightening finances in the corporate world, at a time when the purported risks of cloud computing are outweighed by companies’ hunger for cost-cutting. And that may be part of the slowdown in tech spending right now.

As Jay Greene writes in the WSJ,

Hesitance among chief information officers to commit to long-term hardware and software purchases may reflect the gradual shift from corporate data centers to so-called public cloud offerings from companies such as Amazon.com Inc. and Microsoft Corp., Deutsche Bank analyst Karl Keirstead wrote in a research report.

“It is entirely plausible that this is having at least a marginal impact on the desire of large enterprises to sign material and multi-year commitments to on-premise technology suppliers,” Mr. Keirstead wrote.

Gartner research chief Peter Sondergaard made a related observation at the recent Wall Street Journal CIO Conference, noting that budget pressures are pushing corporate technology managers to take a close look at their options.

“I think many [CIOs] have benefited from pressures in central IT budgets, in that it has created opportunity for looking at different alternatives,” Mr. Sondergaard said.

Take Ted Ross, CIO of the city of Los Angeles. He needed to upgrade the technology that powers the city’s Business Assistance Virtual Network, the site where vendors bid for projects from various city agencies. Ross considered buying new blade servers to host the site. Instead, he decided to run the site on Microsoft’s Azure technology. He’ll halve his costs, and the migration should take four to six weeks, he said.

“It really seems it’s more judicious to make the investment in the cloud,” Mr. Ross said.

The winners in this foot race? Amazon AWS is the market monster, with Microsoft a strong #2 with Azure and the company’s productivity products. Google is perceived as a trailing #3.

But the larger market of SaaS players is going to benefit from this windfall, and the more traditional enterprise hardware and software players – HP, SAP, and the like – will face increasingly strong downdrafts in this turbulent and accelerating market.


Originally posted at stoweboyd.com on 17 February 2016.

The Dell-EMC deal is huge, but where’s it headed?

I forget about Dell. It happens all the time — I see that cheerful, round, delightfully dated logo and have something of a “remember when?” moment. Dell hasn’t been a serious part of the big consumer device discussion in years, and some of that’s by design, really. Dell isn’t stupid — it knows that its strength in the PC market is waning and that soon there won’t be enough meat on the bone to sustain a company that just made one of the biggest pure tech deals ever.

Today, the company made it official and announced the (inconceivably huge) $67 billion deal that’ll bring it together with EMC — making it, in Dell’s own words, “the world’s largest privately-controlled, integrated technology company.” The acquisition of EMC signals Dell’s recognition that devices are not its way forward. Instead, Dell is targeting big IT and the enterprise market.

Information Technology and enterprise are aggressively unexciting arenas for consumers, but at a time when Apple, Microsoft, Amazon, Cisco, HP, Dell and everyone else are vying for a piece of the big enterprise pie (albeit in very different ways), it’s no small part of the vast technology landscape. The Dell-EMC deal is almost inconceivably massive, with that $67 billion price tag, but also serves as a larger indicator of what’s taking place in enterprise computing: consolidation.

“The market cannot continue to sustain all of these players,” said Glenn O’Donnell, Forrester’s Research Director for Infrastructure & Operations Professionals. “It’s going to continue to shrink into a number of mega-vendors.”

Dell’s trying to claw its way into an infrastructure and enterprise market that’s being rapidly devoured by cloud services–most notably, Amazon’s. Enterprise is where the money’s at, but it’s not a market that’s especially friendly toward fragmentation. So, is the Dell-EMC deal a game-changing power play or a $67 billion death rattle? That remains to be seen.

“They’ve got to consider how they’re going to play as a new and different vendor. Perpetuating the old-school IT model is not going to work,” said O’Donnell with regard to Dell going forward. “In the general landscape of technology, one big question has been looming…and that is: ‘What is the future for traditional tech? Are the HPs and the IBMs and the Dells and such really in a position to succeed in this new world order where the Amazons and the Microsoft Azures and the other cloud players are taking over?… More and more of the IT investment is going into the cloud services, so what does that mean for these more traditional models?’”

There’s a clear divide between hardware-heavy, old-school enterprise models and the light, agile enterprise solutions that are quickly eclipsing the clunky business tools of yore. Dell’s marketplace perception has long been one intrinsically tied to the devices it makes–the physical deliverables that are becoming a shrinking line item in its revenue stream.

“That is something that they need to move their messaging away from,” said Mukul Krishna, the Global Head of Frost & Sullivan’s Digital Media Group, “from a device company…to a much more agile, reconfigurable enterprise solution, scalable partner for the technology enterprise.”

To put it simply, big business is trying to lighten up and those not willing to join the cloud game and rethink flexible, scalable enterprise systems will be left behind. “Many of the technology companies who have taken a beating because they’ve focused on a very hardware-centric approach for a long time, have been trying to figure out what they need to do,” said Krishna.

So why these two companies? And why now? Well, rumors have been swirling around EMC for some time in light of stalling growth. And Dell? It’s looking to reinvent itself.

“One of the main reasons that Dell went private is because it wanted to restructure itself without all of the scrutiny,” said Krishna. “Buying someone like EMC that has been [around] for a long period and has a very strong pedigree of selling [to] that enterprise market was a very, very good thing, because they immediately solved the perception problem.”

Effectively pulling EMC off the market will allow Dell-EMC to make decisions without the scrutiny that comes from answering to the slew of investors that public trading entails.

“The major attractiveness is that by merging with Dell and taking the company private it puts EMC’s assets in the hands of an owner that understands the value of EMC’s technology and also provides clear leadership in Michael Dell (as Joe Tucci retires),” said Matt Eastwood, an analyst with IDC. “The go-private nature allows the company to make long term strategic bets around cloud, security, and analytics which will be critically important for the company in the future. The investments are difficult to defend as a public company, where investors have a shorter term horizon for their returns.”

And as a company that needs to rethink its strategy for staying relevant in an enterprise conversation that’s quickly taking off towards the cloud, that freedom to maneuver may prove to be vital. Or, shall we say, Pivotal.

Pivotal is a joint venture between EMC and its most notable offspring (VMware) and was designed to compete with the enterprise cloud giant, Amazon Web Services. Dealing in big data and cloud computing, Pivotal encapsulates much of what Dell-EMC needs to become to keep up with the burgeoning enterprise market.

“The assets and strategic direction of the Pivotal umbrella cannot be overlooked,” said Laura DuBois, an IDC analyst. “There is a change underway in enterprises – custom applications are being written in new ways, mimicking the direction Pivotal has taken. These new applications lend themselves to server-based storage approaches. So Pivotal gives Dell expertise in the app dev side and Dell provides the infrastructure software and systems.”

What about everyone else? We’ve established that enterprise is a big market with big margins. Where does a $67 billion deal leave the rest of the enterprise players? In short, it may lead to significant changes in enterprise technology cooperation.

“For the broader technology market, there will be shifts in strategy and partnerships that emerge,” said Eastwood. “For example, Cisco (a long term strategic EMC and VMware partner) may need to strike deeper alliances with others including NetApp, Microsoft, Citrix, etc. At the same time, Lenovo (another EMC storage partner) may be drawn closer to IBM for their storage needs. Longer term, I believe the merger of EMC and Dell will create the biggest headaches for HP Enterprise, as they have many of the same hardware assets but Dell will now have deeper software assets in security, data management, virtualization and software defined infrastructures.”

Why boring workloads trump intergalactic scale in HP’s cloud biz

Although having a laugh at so-called “enterprise clouds” is a respected pastime in some circles, there’s an argument to be made that they do serve a legitimate purpose. Large-scale public clouds such as Amazon Web Services, Microsoft Azure, and Google Compute Engine are cheap, easy and flexible, but a lot of companies looking to deploy applications on cloud architectures simply don’t need all of that all of the time.

So says Bill Hilf, senior vice president of product management for Helion (HP’s label for its cloud computing lineup). He came on the Structure Show podcast this week to discuss some recent changes in HP’s cloud product line and personnel, as well as where the company fits in the cloud computing ecosystem. Here are some highlights of the interview, but anyone interested in the details of HP’s cloud business and how its customers are thinking about the cloud really should listen to the whole thing.

[Podcast: https://api.soundcloud.com/tracks/194323297]


Amazon matters . . . and so does everything else

“First and foremost, our commitment and focus and investment in OpenStack hasn’t changed or wavered at all,” Hilf said. “It’s only increased, frankly. We are fully committed to OpenStack as our core infrastructure-as-a-service platform.” HP has been a large backer of the open source project for years now, and was building out an OpenStack-based cloud platform exclusively before acquiring Eucalyptus and its Amazon-Web-Services-compatible cloud technology in September.

However, he added, “As we started working with customers around what they were looking for in their overall cloud environment, we did hear the signal loud and clear that the AWS design pattern is incredibly relevant to them.” Oftentimes, he explained, that means either bringing an application into a private cloud from Amazon or moving an application from a private cloud into Amazon.
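That compatibility story is easiest to picture in code. Below is a minimal sketch, using Python's boto3 SDK, of how the same EC2 client code could target either public AWS or an AWS-API-compatible private cloud (a Eucalyptus-style deployment, say) just by swapping the endpoint; the private-cloud URL and credentials are hypothetical:

```python
import boto3

def make_ec2_client(private_cloud=False):
    """Return an EC2 client for AWS or an AWS-compatible private cloud."""
    if private_cloud:
        # Hypothetical endpoint and credentials issued by the private cloud.
        return boto3.client(
            "ec2",
            endpoint_url="https://euca.example.internal:8773/services/compute",
            region_name="us-east-1",
            aws_access_key_id="PRIVATE_CLOUD_KEY",
            aws_secret_access_key="PRIVATE_CLOUD_SECRET",
        )
    return boto3.client("ec2", region_name="us-east-1")  # real AWS

# The calling code is identical either way -- that is the "AWS design pattern."
ec2 = make_ec2_client(private_cloud=True)
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```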


Hilf thinks vendors targeting enterprise customers need to make sure they’re selling enterprises what they actually want and need, rather than what’s technologically awesome. “Our approach, from their feedback, is to take an application-down approach, rather than an infrastructure-up approach,” he said. “How do we think about a cloud environment that helps an application at all parts of its lifecycle, not just giving them the ability to spin up compute instances or virtual machines as fast as possible.”

Below is our post-Eucalyptus-acquisition podcast interview with Hilf, former Eucalyptus CEO Marten Mickos and HP CTO Martin Fink.

[Podcast: https://api.soundcloud.com/tracks/167435404]

Enterprise applications might be boring, and that’s OK

Whatever HP’s initial promises were about challenging Amazon or Microsoft in the public cloud space, that vision is all but dead. HP still maintains a public cloud, Hilf explained, but does so as much to learn from the experience of managing OpenStack at scale as to make any real money from it. “It not only teaches us, but allows us to build things for people who are going to run our own [private-cloud] products at scale,” he said.

But most of the time, he said, the companies that are looking to deploy OpenStack or a private cloud aren’t super-concerned with concepts such as “webscale,” so it’s not really in HP’s financial interests to go down that path:

“[W]e don’t have an intention to go spend billions and billions of dollars to build the infrastructure required for, let’s say, an AWS or an Azure. . . . It’s not because ‘Oh, we don’t want to write a billion-dollar check,’ it’s because [with] the types of customers we’re going after, that’s not at the top of their priority list. They’re not looking for a hundred thousand servers spread across the globe. . . . Things like security are much higher on their list than the intergalactic scale of a public cloud.”

Hilf added:

“What we typically hear day-to-day, honestly, is actually pretty unexciting and mundane from customers. They’re not all trying to stream the Olympics or to build Netflix. Like 99 percent of the enterprise in the world are doing boring things like server refreshes or their lease in a data center is expiring. It’s really boring stuff, but it matters to them.”

“If a customer came to me and said, ‘Hey I need to spin up a billion instances to do whatever,'” he said, “. . . I’d say, ‘Go talk to AWS or Azure.’”

Get over the talk about lock-in

Despite the fact that it’s pushing a lineup of Helion cloud products that’s based on the open source OpenStack technology, Hilf is remarkably realistic about the dreaded concept of vendor lock-in. Essentially, he acknowledged, HP, Amazon and everyone else building any sort of technology is going to make a management interface and experience that’s designed to work great with their particular technology, and customers are probably going to be running multiple platforms in different places.

Hilf thinks that’s a good thing and the nature of business, and it provides an opportunity for vendors (like HP, coincidentally) with tools to help companies get at least some view into what’s happening across all these different platforms.

“People often use the term ‘lock-in’ or ‘proprietary.’ I think the vendors get too wrapped up in this,” he said. “The enterprise is already through the looking glass. They all know they’re going to have some degree of lock-in, it’s just where.”

Microsoft faces specter of shelfware in the cloud era

The notion that pay-as-you-go cloud computing will eliminate shelfware — paid-for but unused computing resources — has always been suspect. Last year I wrote that the proliferation of unused compute instances, zombie resources that are nominally active but doing no productive work, could be a big problem for cloud vendors as customers smarten up.

Another type of shelfware is a cloud service that is purchased but never actually deployed, and that’s something Microsoft is facing with Azure.

A Business Insider report this week noted that Microsoft sales teams are under pressure not just to sell Azure — usually in conjunction with a broader enterprise license — but to make sure customers actually use it. To be fair, Microsoft has been aware of this issue for some time and last summer ended an Azure discount program that exacerbated the shelfware problem.

A long-time Microsoft partner told me at the time that the company was pushing its sales force hard “to drive utilization, not just revenue.”

The problem was that once Microsoft field sales sold a pre-paid Azure contract, there was zero incentive for them to make sure the customer put those resources to work. And that’s a problem as companies start scrutinizing what they have rights to and what they’ve actually deployed. Eventually the bean counters will start wondering about the value of those license agreements.

Another long-time Microsoft partner told me this week that he knows of lots of customers with tens of thousands of dollars’ worth of Azure licenses who are not running Azure at all. And that brings us back to the BI report, which shows that little progress has been made in the past six months. According to BI:

Microsoft has been structuring deals that give away access to Azure, its cloud competitor to Amazon Web Services, for little to no extra cost to some customers who have no plans to use it. It has been counting some revenue from those deals for its cloud, but if they don’t actually use the cloud, that revenue won’t continue.

A Microsoft spokesman said the company sees “strong usage of Microsoft Cloud services by businesses of all sizes” and that more than 60 percent of all Azure companies use at least one premium service, such as media streaming. And he noted that more than 80 percent of Office 365 enterprise customers run two workloads or more.

I’m not sure that really resolves the question, but in any case, shelfware is an issue for all cloud providers as customers get more savvy about what they’re actually paying for and using. Or not using.

Last week, a Wall Street Journal report on the “hidden waste and expense of cloud computing” (paywall) pointed out that C-level execs are increasingly worried about idle cloud resources and are looking to what cloud pioneers like Netflix have done to optimize their cloud computing resources. Netflix, for example, has technology that shuts off resources automatically when they’re not needed.

Others turn to third-party tools from Cloudyn, Cloudability and Krystallize Technologies to minimize waste.

As one commenter on the Journal story pointed out, the secret to minimizing waste is to keep tabs on what you spin up. “The minute you turn on a process it’s going to cost money,” he noted. Other AWS shops have said that Amazon’s own Trusted Advisor and Cost Explorer dashboards have gotten much better over time, eliminating much of the need to keep spreadsheets to track usage.
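To make that concrete, here is a rough sketch of the kind of housekeeping script such shops run, using AWS’s boto3 SDK to stop instances whose average CPU has been low for a week. The 2 percent threshold and one-week lookback are illustrative assumptions, not anyone’s recommended policy:

```python
import datetime
import boto3

CPU_THRESHOLD = 2.0                       # percent; hypothetical cutoff
LOOKBACK = datetime.timedelta(days=7)     # hypothetical observation window

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
now = datetime.datetime.utcnow()

# Walk all running instances and check their average CPU over the window.
paginator = ec2.get_paginator("describe_instances")
filters = [{"Name": "instance-state-name", "Values": ["running"]}]
for page in paginator.paginate(Filters=filters):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=now - LOOKBACK,
                EndTime=now,
                Period=3600,              # hourly datapoints
                Statistics=["Average"],
            )
            datapoints = stats["Datapoints"]
            if not datapoints:
                continue
            avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
            if avg_cpu < CPU_THRESHOLD:
                # Stop (not terminate) so the instance can be revived if needed.
                print(f"{instance_id}: {avg_cpu:.1f}% avg CPU -- stopping")
                ec2.stop_instances(InstanceIds=[instance_id])
```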

This story was updated at 10:30 a.m. PST with additional Microsoft partner comment and again at 12:30 p.m. PST with Microsoft comment.

Google’s new service will ease real-time communications for applications

Google has a new real-time messaging system, Google Cloud Pub/Sub, available in beta on its cloud platform, the company said on Wednesday in a blog post. The system in theory will enable applications and services to communicate with each other in real time, regardless of whether they are built atop the Google Cloud or run on-premises.

In today’s world of distributed systems, it’s important for messages to flow between applications and services as fast as possible so that applications can present the freshest information to users, as well as to the IT admins responsible for managing the infrastructure. This is why Apache Kafka is so popular with companies like Hortonworks, which added support for the real-time messaging framework last summer.

The new messaging system targets developers looking to build complex, distributed applications on the Google Cloud, and it follows in the footsteps of Google Container Engine, announced back in November. Google Container Engine is essentially the managed-service version of the Kubernetes container-management system, used for spinning up and managing large numbers of containers for complex, multi-component applications.
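Since Container Engine is managed Kubernetes, driving it looks like driving any Kubernetes cluster. Here is a minimal sketch using the official Kubernetes Python client; the deployment name, labels, and image are hypothetical, and cluster credentials are assumed to already be in your kubeconfig:

```python
from kubernetes import client, config

# Assumes credentials are already in ~/.kube/config, e.g. after
# running `gcloud container clusters get-credentials my-cluster`.
config.load_kube_config()

# A hypothetical three-replica nginx deployment.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# The cluster then keeps three replicas running, rescheduling pods
# onto healthy nodes as needed.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```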

At this time, both Google Cloud Pub/Sub and Google Container Engine are available only on the Google Cloud Platform, so the search giant is clearly hoping to lure to its cloud enterprise clients who don’t want to deal with the heavy lifting often associated with running open-source technology themselves.

Google said the new messaging system powers its recently launched Google Cloud Monitoring service as well as Snapchat’s new Discover feature, which as my colleague Carmel DeAmicis reported is basically Snapchat’s portal to media companies like Vice and CNN.

Google Cloud Pub/Sub is free to use while in beta, but once it hits general availability, you’ll have to pay based on usage, which starts “at 40¢ per million for the first 100 million API operations each month,” according to the blog post.
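For a flavor of the programming model, here is a minimal publish/subscribe sketch using the Python client library. The project, topic, and subscription names are hypothetical, and the topic and subscription are assumed to already exist:

```python
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

PROJECT = "my-project"          # hypothetical project and resource names;
TOPIC = "orders"                # the topic and subscription are assumed
SUBSCRIPTION = "orders-worker"  # to have been created already

# Publisher side: e.g. an on-premises service pushing events to the cloud.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, TOPIC)
future = publisher.publish(topic_path, b"order created", origin="on-prem-erp")
print("published message", future.result())  # blocks until the server acks

# Subscriber side: e.g. a cloud-hosted worker consuming the same events.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def callback(message):
    print("received:", message.data, dict(message.attributes))
    message.ack()  # acknowledge so Pub/Sub stops redelivering

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull.result(timeout=30)  # listen for 30 seconds
    except TimeoutError:
        streaming_pull.cancel()
```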

Google gets chatty about live migration while AWS stays mum

On Monday, Amazon wanted us to know that its staff worked day and night to avert planned reboots of cloud instances and updated a blog post to flag that information. But it didn’t provide any specifics on how these live updates were implemented.

Did Amazon use live migration — a process in which the guest OS is moved to a new, safe host? Or did it use hot patching, in which dynamic kernel updates are applied without screwing around with the underlying system?

Who knows? Because Amazon Web Services ain’t saying. Speculation is that it used live migration — even though AWS proponents last fall insisted that live migration per se would not have prevented the Xen-related reboots it launched at that time.

But where AWS remains quiet, Google, which wants to challenge AWS for public cloud workloads, was only too glad to blog about the live migration capabilities it launched last year. Live migration, it claimed on Tuesday, prevented a meltdown during the Heartbleed vulnerability hullabaloo in April.

Google’s post is replete with charts and graphs and eight-by-ten glossies. Kidding about the last part but there are lots of diagrams.

A betting person might wager that Google is trying to tweak Amazon on this front by oversharing. You have to credit Google’s moxie here, and its aspirations for live migration remain large. Per the Google Cloud Platform blog:

The goal of live migration is to keep hardware and software updated across all our data centers without restarting customers’ VMs. Many of these maintenance events are disruptive. They require us to reboot the host machine, which, in the absence of transparent maintenance, would mean impacting customers’ VMs.

But Google still has a long row to hoe. Last fall, when Google started deprecating an older cloud data center zone in Europe and launched a new one, there was no evidence of live migration. Customers were told to make disk snapshots and use them to relaunch new VMs in the new zone.
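That workaround is straightforward to script. A rough sketch, shelling out to the gcloud CLI from Python, with hypothetical disk, snapshot, VM, and zone names:

```python
import subprocess

# Hypothetical names for a disk being moved out of a deprecated zone.
SRC_ZONE, DST_ZONE = "europe-west1-a", "europe-west1-d"
DISK, SNAPSHOT, NEW_VM = "legacy-disk", "legacy-disk-snap", "relaunched-vm"

def run(*args):
    """Echo and run a CLI command, failing loudly on errors."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 1. Snapshot the disk in the old zone.
run("gcloud", "compute", "disks", "snapshot", DISK,
    "--snapshot-names", SNAPSHOT, "--zone", SRC_ZONE)

# 2. Recreate the disk from that snapshot in the new zone.
run("gcloud", "compute", "disks", "create", DISK,
    "--source-snapshot", SNAPSHOT, "--zone", DST_ZONE)

# 3. Boot a replacement VM in the new zone from the recreated disk.
run("gcloud", "compute", "instances", "create", NEW_VM,
    "--disk", f"name={DISK},boot=yes", "--zone", DST_ZONE)
```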

As reported then, Google live migration moves working VMs between physical hosts within zones but not between them. Google promised changes there too, starting in late January 2015, but there appears to be nothing new on that front as yet.

So let the cloud games continue.

Box buys small security startup to court more risk-averse clients

Fresh off its IPO in January, Box has made its first acquisition of the year, buying a small security startup called Subspace, the company said on Wednesday. Financial terms of the deal were not disclosed, but all seven Subspace employees will be joining Box and the startup will be closing up shop by April 3.

Subspace touts a supposedly secure browser that connects to a corporate network, whether on-premises or cloud-based. The browser is hooked up to Subspace’s cloud-based backend, where an organization’s IT staff can control access and craft data-protection policies for the websites and applications that a user might visit within the Subspace browser.

In a blog post on the acquisition, Box CEO Aaron Levie wrote that the Subspace staff will be working on Box’s data security efforts and “will let us go even deeper with our security and data policies, enabling reliable corporate security policies, even when content leaves the Box platform to be accessed on a customer or partner’s device.”

As Box continues to push its new Box for Industries product lineup, it’s going to need more security features to court customers who may be wary of cloud offerings. The types of customers Box wants to sign up for Box for Industries are found in heavily regulated industries like healthcare, finance and legal. So far, Box has made public that Stanford Health Care, Eli Lilly, T. Rowe Price and Nationwide Insurance all feel comfortable using Box as their work/cloud storage hub.

In February, Box rolled out its Box Enterprise Key Management (EKM) service, which lets customers hold on to their own encryption keys while using the Box platform. Box partnered with SafeNet as well as Amazon Web Services to help customers set up the service.
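Box hasn’t published EKM’s internals, so the following is emphatically not its implementation; it is just a generic sketch of the envelope-encryption pattern that customer-held-key services are typically built on, using AWS KMS via boto3 with a hypothetical key ARN:

```python
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical ID of a KMS key the customer, not the storage vendor, controls.
CUSTOMER_KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/example"

kms = boto3.client("kms")

# Ask KMS for a fresh data key: plaintext for local use,
# ciphertext ("wrapped" key) to store alongside the file.
resp = kms.generate_data_key(KeyId=CUSTOMER_KEY_ID, KeySpec="AES_256")
data_key, wrapped_key = resp["Plaintext"], resp["CiphertextBlob"]

# Encrypt the content locally; only the wrapped key and ciphertext are
# uploaded, so the storage provider never holds the customer's master key.
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"sensitive document bytes", None)

# Decryption later requires the customer's KMS key to unwrap the data key,
# which is what keeps the customer in control of access.
plaintext_key = kms.decrypt(CiphertextBlob=wrapped_key)["Plaintext"]
document = AESGCM(plaintext_key).decrypt(nonce, ciphertext, None)
```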