Report: Cloud and data centers join forces for a new IT platform for internet applications and businesses

Our library of 1700 research reports is available only to our subscribers, but we occasionally release one for our wider audience to benefit from. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports and video coverage from our 2016 and 2017 events.
Cloud and data centers join forces for a new IT platform for internet applications and businesses by Rich Morrow:
An increasingly large swath of businesses is realizing that the cloud-plus-data-center model provides the best of both worlds, and that integrating the public virtual cloud with the physical data center is the best way to cost-effectively scale, secure, and serve modern production workloads.
To read the full report, click here.


It was the Three-Course Dinner Gum that served as Violet Beauregarde’s downfall at Willy Wonka’s Chocolate Factory and also introduced multiple generations to the curious possibilities of food’s future. Now, more than fifty years since the publication of the Roald Dahl classic, we’re on the brink of innovations that might make twentieth-century fiction look more like a forecasting engine. As the way we cook, eat and interact with our food evolves, what does the future of eating look like?
Let’s start in the kitchen
Many an embroidered wall-hanging will tell you that the kitchen is the heart of the home. Today, that heart holds many possibilities for innovation, some of which are already in play. There are a growing number of smart refrigerators on the market, offering touch-screen, wi-fi enabled doors—yes, you can watch cat videos but you can also view how many eggs you have stocked while you’re at the market.
Similarly, wi-fi oven ranges are making it possible to adjust oven temperatures from afar and check whether you left the burners on after leaving the house. The connectivity plays out in a few different ways: some appliances connect to your smartphone, many hook up with third-party smart home systems or digital assistants (see Whirlpool and Nest, or GE and Alexa), and yet others plug into their makers’ own smart home systems (see Samsung’s SmartThings Hub).
But if you’re not ready to invest in new built-in appliances, there are other entry points to smarter cooking. Cuciniale, for example, promises to deliver perfectly cooked meats by connecting your steak to your smartphone through its multisensor probe. June Intelligent Oven also works with sensors to improve timing and preparation, but can also recognize what food it’s cooking.
These (as well as the bigger appliances) have the appeal of ease and convenience and may also elevate our cooking skills much in the same way digital has improved our photography. (Think of “seared” as a filter you can simply tap to apply to your tuna.)
Those holding out for a fully hands-off solution might find projects like UK startup Moley Robotics’ robotic kitchen of interest. Moley offers a pair of wall-embedded arms that can prepare and cook your meals. (No indication if it also does dishes.) Meanwhile, thanks to artificial intelligence, robots are learning how to cook the same way many humans are picking up tips: through YouTube. It’s all quite compelling, though, for now at least, it’s still more convenient to just order a pizza.
What about the actual food?
A more savory aspect of the future of food is, naturally, the food itself. One fairly easy trend to identify is the move toward more health-conscious eating—there are plenty of studies to support this, but you really only need to see that McDonald’s sells apple slices for confirmation. Technology is ready to enable this trend, with apps that offer calorie counts from pictures of food and devices like Nima that scan food for gluten and other allergens.
In a way that mirrors the fragmenting of media experiences, we’re also moving toward an era of more customized meals. That’s not simply Ethan-won’t-eat-zucchini-so-make-him-a-hot-dog-customization, but rather food that is developed to mirror our specific preferences, adjust to allergies and even address specific nutritional deficiencies. Success here relies on access to dietary insights, be it through logged historical eating patterns, blood levels and/or gut microbiome data. (New York Magazine has an interesting piece on the use of microbiome data to create your own personal food algorithm.)
And while it’s easy to imagine more personalized diets at home, we can count on technology to support that same customized approach while we’re eating out. Increasingly, restaurants like Chili’s, Applebee’s, Olive Garden and Buffalo Wild Wings are introducing tableside tablets to increase efficiency and accuracy in orders and payments. As restaurant-goers take more control of how food is ordered, it’s reasonable to expect more customization in what is ordered.
Are we redefining food?
Given the rise of allergies and food intolerance, it’s not difficult to imagine a world of highly-customized eating. More unexpected in the evolution of eating is the work being done in neurogastronomy. This is a field that is approaching flavor from a neural level—actually rewiring the brain to change our perception of taste. In other words, neurogastronomy could make a rice cake register as delicious as ice cream cake. By fundamentally changing the types of food from which we derive pleasure, neurogastronomy could essentially trick us into healthier eating.
Then there is the emerging camp that eschews eating in favor of more efficient food alternatives. Products like the provocatively named Soylent and the much-humbler-sounding Schmilk offer a minimalist approach to nutrition (underscored by minimalist packaging), sort of like Marie Kondo for your diet. While this level of efficiency may have appeal in today’s cult of busy-ness, there’s something bittersweet about stripping food to the bare nutritional essentials, like eliminating the art of conversation in favor of plain, cold communication.
Another entry from the takes-some-time-to-get-used-to department comes from a team of Danish researchers. With the goal of addressing the costly challenge of food storage in space, CosmoCrops is working on a way to 3D-print food. There are already a number of products available that offer 3D-printed food (check out this Business Insider article for some cool food sculptures), but CosmoCrops is unique in its aim to reduce storage needs by printing food from bacteria. To that end, they are developing a ‘super-bacterium’ that can survive in space. (What could possibly go wrong?)
Where is the opportunity?
It’s probably too soon to tell if we’ll be more likely to nosh on bacteria burgers or pop nutritional powder pills come 2050. What is easier to digest today is the fact that connectivity is coming to eating. For the home kitchen, it won’t happen immediately—the turnover for built-in appliances isn’t as quick as, say, televisions and costs are still high. This means there’s still time for the contenders, both the appliance builders and the smart technology providers, to figure out which features will tip the kitchen in their favor.
From a dietary perspective, there is an opportunity in bridging the gap between our diet and technology. Restaurants will want to explore how to use technology to support more customized food preferences, but the broader question may be what will make it possible—and acceptable, in terms of privacy—to analyze personal data in order to develop meals that align with our unique food preferences as well as our specific nutritional needs? Maybe it’s a wearable that links your gut bacteria to ingredients stocked in the fridge, a toothbrush that reads your saliva, or (to really close the loop) the diagnostic toilet.
With innovation happening on many tracks, the possibilities for our future cooking and eating are both broad and captivating. What will lunch look like in the next fifty, twenty, or even ten years? To borrow from Willy Wonka (who actually borrowed from Oscar Wilde): “The suspense is terrible. I hope it’ll last.”

Welcome to the Post-Email Enterprise: What Skype Teams Means in a Slack-Leaning World

For decades, work technology vendors have promised that their shiny new tools will deliver us from the tyranny of email. Today, we hear it from all sorts of tool vendors:

  • work management tools, like Asana, Wrike, and Trello, built on the bones of task managers with a layer of social communications grafted on top
  • work media tools, like Yammer, Jive, and the as-yet-unreleased Facebook at Work, built on a social networking model to move communications out of email, they say
  • and, most prominently, the newest wave of upstarts: the work chat cadre, led by Atlassian’s HipChat and by the mega-unicorn Slack, a company with such a strong gravitational field that it seems to have pulled the entire work technology ecosystem into orbit around its disarmingly simple model of chat rooms and flexible integrations.

Has the millennium finally come? Will this newest paradigm for workgroup communications unseat email, the apparently undisruptable but deeply unlovable technology at the foundation of much enterprise and consumer communication?
Well, a new announcement hit my radar screen today, and I think that we may be at a turning point. In the words of Winston Churchill, in November 1942 after the Second Battle of El Alamein, when it seemed clear that the WWII allies would push Germany from North Africa,

Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.

And what is this news that suggests to me we may be on the downslope in the century-long reign of email?
Microsoft is apparently working on a response to Slack, six months after the widely reported termination of acquisition discussions. There has been a great deal of speculation about Microsoft’s efforts in this area, especially considering the now-almost-forgotten acquisition of Yammer (see Why Yammer Deal Makes Sense, and it did make sense in 2012). After that acquisition, however, Microsoft — and especially Bill Gates, apparently — believed the company would be better off building Slackish capabilities into an existing Microsoft brand. And since Yammer is now an unloved product inside the company, the plan was to build these capabilities into something the company has doubled down on: Skype. So now we see Skype Teams, coming soon.
Microsoft may be criticized for attempting to squish too much into the Skype wrapper with Skype Teams, but we’ll have to see how it all works together. It is clear that integrated video conferencing is a key element of where work chat is headed, so Microsoft would have had to build that anyway. And Skype certainly has the rest of what is needed for an enterprise work chat platform, plus hundreds of millions of email users currently on Exchange and Office 365.
The rest of the details will have to wait for actual hands-on inspection (so far, I have had only a few confidential discussions with Microsofties), but an orderly plan from Microsoft for migrating away from email-centric work technologies to a work chat-centric model means work chat is now mainstream, no longer the province of a bunch of bi-coastal technoids. This will be rolled out everywhere.
So we are moving into new territory, a time when work chat tools will become the dominant workgroup communications platform of the next few decades. This means that the barriers to widespread adoption will have to be resolved, most notably work chat interoperability.
Most folks don’t know the history of email well enough to recall that at one time email products did not interconnect: my company’s email system could not send a message to yours. However, the rise of the internet and the creation of international email protocols led to a rapid transition, so that we could stop using CompuServe and AOL to communicate outside the company.
It was that interoperability that led to email’s dominance in work communications, and similarly, it will take interoperability of work chat to displace it.
In this way, in the not-too-distant future, my company could be using Slack while yours might be using Skype Teams. I could invite you and your team to coordinate work in a chat channel I’ve set up, and you would be able to interact with me and mine.
If the world of work technology is to avoid collapsing into an all-encompassing monopoly with Slack at its center, we have to imagine interoperability will emerge relatively quickly. Today’s crude integrations — where Zapier or IFTTT copy new posts in HipChat to a corresponding channel in Slack — will quickly be replaced by protocols that all competitive solutions will offer. And Skype Teams is the irritant that will motivate all these giants to make a small peace around interoperability, in order to be able to play nice with Slack.
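Under the hood, today’s crude integrations amount to little more than payload translation: take a message event from one service and reshape it into the JSON another service’s incoming webhook accepts. A minimal sketch (the HipChat-side field names here are illustrative, not taken from either vendor’s actual API):

```python
import json

def hipchat_to_slack(event: dict) -> str:
    """Reshape a chat message event from one service's (hypothetical)
    schema into the JSON payload a Slack-style incoming webhook accepts."""
    return json.dumps({
        "channel": "#" + event["room"],   # mirror the source room name
        "username": event["from"],        # post under the original sender
        "text": event["message"],
    })

# A real bridge would then POST this payload to the destination webhook
# URL over HTTPS; here we just build and inspect it.
payload = hipchat_to_slack(
    {"room": "ops", "from": "alice", "message": "deploy done"}
)
print(payload)
```

The fragility is obvious: every pair of services needs its own bespoke mapping, which is exactly why a shared interoperability protocol would be such a step change.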
We’ll have to see the specifics of Skype Teams, and where Facebook at Work is headed. Likewise, all internet giants — including Apple, Google, and Amazon — seem to be quietly consolidating their market advantages in file sync-and-share, cloud computing, social networks, and mobile devices. Will we see a Twitter for Work, for example, after a Google acquisition? Surely Google Inbox and Google+ aren’t the last work technologies that Alphabet intends for us? How might Slack fit into Amazon’s designs? That might surprise a lot of people.
But no matter the specifics, we are certainly on the downslopes of the supremacy of email. We may have to wait an additional 50 years for its last gasping breath, but we’re now clearly in the chat (and work chat) era of human communications, and there’s no turning back.

AI for an Eye

While they predate the warm and fuzzy moniker that is “wearables,” contact lenses are one of the more common pieces of technology applied to the body today. But, unlike most other commercial wearable devices, corrective contact lenses have not been particularly sexy. They help us see as well as we should see, and then their job ends. Or at least, that’s where things have ended historically—yes, like everything from cars to shoes to refrigerators, the contact lens is about to get “smart.”
The innovation of smart contact lenses is moving in a few different directions. One notable and noble pursuit is health monitoring. Alphabet’s Verily (formerly Google Life Sciences) is doing work here, with a lens that monitors (via tears) blood glucose levels for diabetics, and startup Medella Health recently secured $1.4 million for its competing product. Meanwhile, Swiss-based Sensimed AG has received FDA approval for a lens that tests eye pressure for glaucoma patients. Unlike traditional glaucoma tests, Sensimed AG’s Triggerfish makes it possible to monitor eye pressure over a 24-hour period, including sleep, for a more accurate assessment of a glaucoma patient’s risk of vision loss.
The contact lens has an advantage in the health monitoring space—at least compared to other wearables that didn’t originate as medical devices and don’t connect so intimately with the body—but this isn’t the only future for the smart lens. There are new opportunities for vision correction coming, as evidenced by Google and Novartis, and by EPGL; both teams are developing autofocus lenses in an effort to correct farsightedness. And, of course, there are a number of other innovations in the works that will appeal to our sci-fi’d imaginations, like Ocumetrics, a company that reportedly has created a lens that improves vision to three times better than 20/20. While its “bionic” lens, technically a surgical implant, has met with some skepticism from the medical community, it generated a fair amount of buzz on social media. (And understandably so; for those of us who spent childhoods watching The Six Million Dollar Man, the wait for bionic vision has been long and grueling.)
Meanwhile, Samsung recently filed a patent for a smart lens, which, according to Sammobile, “shows a contact lens equipped with a tiny display, a camera, an antenna, and several sensors that detect movement and the most basic form of input using your eyes: blinking.” This is foundational user interface stuff that sets us up for interactions akin to Google Glass, but right there on the eyeball. It points to a future where the recording of experiences becomes incidental, and where video games, augmented reality and virtual reality can be experienced without the need for bulky equipment. In this way, tech becomes more discreet—a point of interest to marketers who have relied on the showmanship of early adopters, because how do you fuel word of mouth for “invisible” technology?
More provocative, however, is the potential for change in human behavior as the boundaries between our bodies and information continue to dissolve. If you think there’s no need to lock facts, figures, and trivia into memory because your smartphone is in your pocket today, wait until you can blink your way through IMDB. And how does human interaction shift when FaceTime happens in your face, when we have the power to conduct background checks on the fly? (“Hello, it’s great to meet you and…um…are you browsing my Facebook page right now?”) Given the pace of innovation today, it’s not difficult to imagine a world where the smart lens gives a lawyer or student a steroid-like advantage on the intellectual playing field, or where a quick lens check becomes the norm before, say, the National Spelling Bee—all before the next time you need to renew your driver’s license.
As all the world’s information migrates from our fingertips to our eyes, the next logical step is to introduce some level of processing of the information—artificial intelligence—in a lens. Progress here depends a lot on computer vision, the same technology that helps a self-driving car distinguish between a traffic light and a man wearing a green hat. Computer vision is one of the more intensive areas of innovation today—Slate published an interesting piece on the challenges—and naturally there are a number of innovators tackling it. This includes a Russian developer that has created an open source computer vision platform in collaboration with both Google and Facebook. It’s also likely that the large volume of data acquired by the first wave of camera-like smart lenses will play a meaningful role in advancing computer vision. In other words, smart lens wearers will effectively be teaching computers how to see.
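To make “teaching computers how to see” a little more concrete, here is a toy illustration of the lowest layer of computer vision: sliding a small kernel over an image to detect edges. This is a from-scratch sketch of the general technique, not any particular platform’s pipeline; real systems perform this same operation at vastly larger scale, with learned kernels.

```python
def convolve(image, kernel):
    """Valid-mode 2D correlation of a grayscale image (list of lists of
    brightness values) with a small kernel; the core primitive that edge
    detectors and convolutional networks are built from."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A dark-to-light vertical boundary in a tiny image...
image = [
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
]
# ...produces strong responses under a Sobel-style horizontal-gradient kernel.
sobel_x = [
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
]
edges = convolve(image, sobel_x)
print(edges)  # large values mark the edge between the dark and light halves
```

Everything a smart lens might one day label, from faces to traffic lights, ultimately rests on stacks of operations like this one, with the kernels learned from exactly the kind of first-person imagery those lenses would collect.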
But we’re not cyborgs (eyeborgs?) yet. Even as humans get more comfortable with the idea of body hacking via objects like radio frequency ID chips, many of us are still squeamish about putting technology right there in the eye. (Cue A Christmas Story: “You’ll shoot your eye out.”) Innovations in miniaturization, like ETH Zurich’s ultra-thin circuit—50 times thinner than a human hair—help address these concerns.
There is also the powering of smart lens technology to consider—how do these things get their juice? Google’s glucose-monitoring lens would be powered by a “reader” device, such as a piece of clothing or a headband that sits near the lens. Google also has a patent for a solar-powered contact lens, while Sony’s recent patent includes “…sensors [that] would convert the movements of the eye into energy to power the lens.”
With these and other patents and products in the works today, it’s clear that both the reach and the role of the contact lens are on the brink of transformation. From vision correction and enhancement to health monitoring, from entertainment to data capture and processing, the range of applications for smart lenses is vast and sets the stage for a behavioral shift on par with—if not more substantial than—what we’ve seen with the mobile device. While we’re not quite there yet, it’s a good time to start thinking about the implications—if recent advances in technology have taught us anything, it’s that big changes can happen in the blink of an eye.

Isn’t signing a document just a feature, not a company?

I have to confess that I’ve been surprised by the valuation of DocuSign, which was pegged at $3 billion in a funding round last year. I’ve had niggling doubts about the growth possibilities for a company that is basically built around the document signature use case, especially since the idea of ‘electronic signatures’ feels like a skeuomorph crying out for disruption by other approaches to identity verification, most notably fingerprint recognition on smartphones.
Perhaps those questions are being raised by others as well. According to Bloomberg, Rick Osterloh, a former Motorola Mobility exec, had been picked to lead the company forward to an IPO, but just before the announcement he balked and took a job as head of hardware at Alphabet. DocuSign is left with Keith Krach as CEO, who said last fall he wanted to step down.
Basically, verification of identity is now in the hands of the Internet giants, like Google and Apple, and DocuSign is a dinosaur just waiting for the shower of meteorites to come raining down.
So here’s a small prediction, based on the senior executives who have been bailing out of the company — four of the nine top execs left this year: one of the majors — Google or Microsoft? — will buy DocuSign, and for less than the $3 billion valuation. The company is unwilling to share its financials, and has invested heavily to meet the requirements of the EU’s eIDAS regulations, which go into effect on July 1.

WeWork making cuts; Fadell leaves Nest; BitTorrent spins out Sync

All sorts of changes going on:

  • WeWork plans to cut around 7% of its staff, according to Bloomberg’s Ellen Huet, and has paused hiring. The $16B coworking/coliving startup raised $430 million a few months ago in a round led by Chinese investors, and plans to expand in Asia. WeWork appears to be heeding the advice of many investors to cut back on burn as the economy cools.
  • Tony Fadell has left Nest following months of bad press and growing friction within Alphabet/Google, which acquired the company in 2014 for $3.2 billion. Positioned as a ‘transition’, the move is more likely a case of Fadell being pushed out. The new Nest CEO is Marwan Fawaz, a former Motorola Mobility executive vice president who oversaw the sell-off of that company after Google’s acquisition of Motorola. It looks like Alphabet/Google is positioning Nest for a sale, since Google has developed its own line of smart home products that don’t play nice with Nest’s technologies.
  • BitTorrent has spun out Sync, its file sync-and-share technology, into a new firm, Resilio, and Sync will be renamed Connect. Former BitTorrent CEO Eric Klinker will head up the new company, after trying to reposition BitTorrent as an enterprise software company. It now seems the existing BitTorrent company will focus on a new live streaming app.

There are only 10 types of people in the world: those who ‘get’ digital, and…

Over the past few days and weeks, a recurring mantra has appeared in conversations I have had, or overheard, with clients. “Well, nobody really knows what digital is, do they?” goes the question.
At first this seems like a fair point — after all, isn’t it yet another term trotted out by a marketing-led technology industry, another bandwagon defined by some analyst firm, to be jumped on and differentiated against?
Indeed, the last time such a point was raised about a ‘megatrend’, the topic was cloud. In that case, the question was on the money: as intervening history has shown, the term was poorly defined, easily abused by gung-ho marketers and, in general, made explanations more complicated rather than simpler.
So such scepticism is as understandable as it is, in the case of digital, misplaced. Over the past 3-4 years since the term started being thrown about once again, what has become clear is that ‘digital’ is a business-led, not an IT-led initiative.
At its heart is a simple principle: that technologically enabled data flows are now everywhere, connecting everything that can be connected. There are no boundaries to reach, nor to innovation, nor to privacy.
This is fundamentally different to the old-world use of technology, where data was over there, to be managed by an IT department and protected with firewalls. It is notable that Google has removed the latter with its BeyondCorp initiative.
Which brings me to the point. It’s an old joke in the title, I know, but it seemed appropriate to roll it out. ‘Digital’ is a state of mind that needs to exist at board level. Either an organisation acts like data is everywhere, or like it is over there. It’s the difference between being a digital native, or, well, not being one.
Businesses know this, and in many cases are restructuring to take the shift into account. The term being bandied around is ‘digital transformation’, or that horrible word, digitalisation.
But let’s be clear. This is not some journey to be started upon never to be completed, nor an initiative to be piloted to see what works.
If you want an example of a major company that ‘gets it’, look no further than GE. Since I started speaking to the company a few years ago, it has gone from an organisation that recognised the need to change, through an organisation embracing change, to an organisation being that change.
The journey started, and ended, at the very top. I met with Harel Kodesh, CTO of GE Digital, at the end of last year, and we had a good conversation, as one would hope. More interesting than what was discussed, however, was the way in which it was discussed.
Right now, however, the organisation is already making its bets as a digital organisation, for example with its digital twins or its Predix platform initiatives. It is an organisation without any doubt in its mind about strategy or direction. It is all-in on digital.
Of course such efforts might fail, business was ever thus. Someone else might steal GE’s lunch, or the company might make bad decisions. But nobody can doubt the organisation’s clarity over that single question: what is digital?
I doubt anyone inside the organisation still cares, as they are too busy.
The fact is, if you have to ask the question, you probably have a way to go. In the digital world, we need to think like children, break away from our legacy understanding of where technology sits, and accept the fact that it has changed from without to within.
Understand this simple yet profound truth, and you can return to the field and set a strategy accordingly. Fail to do so and be doomed to pondering the question, long after those who have answered it are already gone.

Why adopt a mobile-first development strategy?

“We think mobile first,” stated Macy’s chief financial officer Karen Hoguet, in a recent earnings call with financial analysts.
A quick glance at the US department store chain’s 2015 financial results explains why mobile technologies might be occupying minds and getting top priority there. Sales made by shoppers over mobile devices were a definite bright spot in an otherwise disappointing year for the company. Mobile revenues more than doubled, in fact, thanks to big increases in the number of shoppers using smartphones and tablets not only to browse, but also to buy.
So it’s no surprise that Macy’s hopes to maintain this trend, by continuing to improve the mobile experience it offers. In the year ahead, Hoguet explained, this ‘mobile first’ mindset will see Macy’s add new filters to search capabilities, clean up interfaces and fast-track the purchase process for mobile audiences.
Other consumer-focused organisations are thinking the same way, and the phrase ‘mobile first’ has become something of a mantra for many. One of its earliest high-profile mentions came way back in 2010, in a keynote given by Eric Schmidt, the then Google CEO (and now Alphabet executive chairman), at Mobile World Congress in Barcelona.
“We understand that the new rule is ‘mobile first’,” he told attendees. “Mobile first in everything. Mobile first in terms of applications. Mobile first in terms of the way people use things.”
The trouble is that, for in-house development teams, a mobile-first strategy still represents something of a diversion from standard practice. They’re more accustomed to developing ‘full size’ websites for PCs and laptops first, and then shrinking these down to fit the size, navigation and processing-power limitations posed by mobile devices.
The risk here is that what they end up with looks like exactly what it is: a watered-down afterthought, packing a much weaker punch than its designed-for-desktop parent.
A development team that has adopted a mobile-first strategy, by contrast, will start by developing a site for mobile that looks good and works well on small form factors, and then ‘work their way up’ to larger devices, adding extra content and functions as they go.
That approach will make more and more sense as more ‘smart’ devices come online and the desktop PC becomes an increasingly minor character in our day-to-day lives. Take wearables, for example: many CIOs believe that headsets, wrist-mounted devices and the like hold the key to providing workers with relevant, contextual information as and when they need it, whether they’re up a ladder in a warehouse or driving a delivery van.
Developing apps for these types of devices presents many of the same challenges associated with smartphones and tablets: minimal screen real estate, limited processing power and the need to integrate with third-party plug-ins and back-end corporate systems. Then there’s the lack of a standardised platform for wearables to consider, meaning that developers may be required to adapt their mobile app to run on numerous different devices. For many, it may be better to get that hard work out of the way at the very start of a project.
In a recent survey of over 1,000 mobile developers conducted by InMobi, only 6% of respondents said they had created apps for wearables, but 32% believe they’re likely to do so in future.
The same rules apply to a broader category of meters and gadgets that make up the Internet of Things, from meters for measuring gas flow in a utilities network, to products for ‘smart homes’, such as the Canary home-monitoring device, to virtual reality headsets, such as Samsung’s Gear VR, as worn by attendees at Facebook CEO Mark Zuckerberg’s keynote at this year’s MWC.
As the population of ‘alternative’ computing devices grows, developers will begin with a lean, mean mobile app, which functions well despite the constraints of the platform on which it runs, having made all the tough decisions about content and function upfront. Then, having exercised some discipline and restraint, they’ll get all the fun of building on top of it, to create a richer experience for desktop devices.
More importantly, they’ll be building for the devices that consumers more regularly turn to when they want to be informed, entertained or make a purchase. In the US, digital media time (in other words, Internet usage) on mobile, at 51%, is now significantly higher than on desktop (42%), according to last year’s Internet Trends report by Mary Meeker of Silicon Valley-based venture capital firm Kleiner Perkins Caufield & Byers (KPCB).
In other words, developers should go mobile first, because that’s what we consumers increasingly do.
Picture Credit: Farzad Nazifi