The future of AI and the enterprise

Whether or not a company is actively involved in developing AI, it’s clear that AI is a powerful force affecting every industry. IDC estimates that AI was an $8 billion industry in 2016 and will grow to $47 billion by 2020.
However, five years ago there were no cognitive engines as we define them today. Today there are over 5,000 cognitive engines, and within the next five years that number is expected to grow to well over a million. The industry is growing rapidly and is poised to expand even further beyond 2018, as institutions and organizations recognize the necessity of analyzing unstructured data at scale in near real time.
Unfortunately, the current landscape of artificial intelligence solutions can be expensive, skill-intensive and difficult to implement. Such solutions also tend to be siloed, extremely narrow in their application, and challenged in their ability to deliver real value. In PwC’s Digital IQ survey, only 20% of executives said their organizations had the skills necessary to succeed with AI. As a result, the power of AI has been largely inaccessible to most organizations.
This is all set to change as forward-thinking businesses begin to set aside budget for AI in the coming years. If AI has been on a company’s radar, the good news is that there is still time to learn and strategize; the bad news is that a huge chasm will open up between early adopters and those who fall behind in applying AI to the business. But what can the AI industry do to assist enterprises that are seeking to use its services? AI can, and should, work with the data wherever it lives.
We predict that AI will prove itself through business application and will meet industries in the cloud, on multiple clouds, or on-premise.
Take AWS, for example. It currently enables scalable, flexible and cost-effective solutions for everyone from startups to global enterprises. To support the seamless integration and deployment of these solutions, AWS established the AWS Partner Competency Program to help customers identify Consulting and Technology APN Partners with deep industry experience and expertise. As AWS puts it, “Out of all of the innovations that are being driven by cloud, the areas of artificial intelligence (AI) and machine learning (ML) are perhaps the most exciting.” For AI companies to tap into some of that excitement, they need to prove themselves with something notable, such as the designation that AWS provides.
Many major companies today are in various stages of cloud migration: some are still transferring data to the cloud, others are operating across multiple clouds at once, and some are moving back to on-premises solutions. The challenge for the AI industry is that it needs to be versatile enough to analyze the data wherever it is located. If it is, then customers who have made significant investments in on-premises storage, or who have cost or security concerns about storing their content in the cloud, will still be able to take advantage of the business applications that artificial intelligence offers.
Regardless of where the data lives, progressive companies are unlocking the power of AI
AI deployments can help augment the tedious daily work of the modern workforce, democratize services that were once costly or unavailable, and tap into the wealth of unstructured data for actionable use: every frame of video or second of audio can now be searched for objects, faces, voices, brands, sentiment, text and more. The speed at which AI can handle what previously required manual discovery opens a world of opportunity via human-machine thinking partnerships.
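As a purely hypothetical sketch of what that searchability looks like in practice, the snippet below indexes per-frame detections so that a query like “every second where a given logo appears” becomes a simple lookup. The record format, file names and field names are invented for illustration, not any vendor’s actual schema.

```python
# Hypothetical sketch: indexing engine output so unstructured media becomes searchable.
# The records, file names and field names below are invented for illustration only.
from collections import defaultdict

detections = [
    {"media": "ad_spot.mp4", "second": 12, "kind": "logo", "value": "Acme"},
    {"media": "ad_spot.mp4", "second": 47, "kind": "face", "value": "spokesperson"},
    {"media": "earnings_call.wav", "second": 301, "kind": "keyword", "value": "guidance"},
]

# Build an inverted index: (kind, value) -> list of (media file, timestamp) hits.
index = defaultdict(list)
for d in detections:
    index[(d["kind"], d["value"])].append((d["media"], d["second"]))

def search(kind: str, value: str):
    """Return every media file and timestamp where the engines saw this item."""
    return index.get((kind, value), [])

print(search("logo", "Acme"))  # -> [('ad_spot.mp4', 12)]
```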
Guest post by Tyler Schulze, vice president & general manager, Veritone

AWS meets the enterprise head on

Amazon Web Services must have been a very interesting company to work for over recent years. My conversations with AWS senior executives have sometimes been fraught — not because of any conflict or contention, but rather due to a pervading feeling that discussion gets in the way of activity. The organisation has been so busy doing what it is doing (and making a pretty reasonable fist of it) that it barely has time to stop to talk.

Any thoughts or feedback about how AWS might do things differently, about how the needs of the enterprise could be better served, have been met with flummoxed consternation. It’s completely understandable that a company which measures success by the number of new features delivered or services shipped would push back on any question of whether it is doing enough. But still, the question needs to be asked.

Against this background, watching the feet is a far better option than watching the mouth. AWS has come a long way since its early stance of offering an out-and-out alternative to in-house enterprise IT processing and storage, and it continues to work on delivering ‘the’ technology platform for digital-first organisations that need, and indeed desire, little in the way of infrastructure.

From an enterprise perspective, however, and despite some big wins, many decision makers still treat the organisation as the exception rather than the norm. In part this is through no fault of AWS; it is more that you can’t just rip and replace decades’ worth of IT investments, even if you wanted to. In many cases, the cheaper (in both money and effort) option is to make the most of what you have — the age-old blessing and curse of legacy systems.

In addition, as IT staffers from CIOs to tape operatives are only too aware, technology is only one part of the challenge. Over the years, enterprise IT best practice has evolved to encompass a wide variety of areas, not least how to develop applications and services in a sustainable manner, how to maintain service delivery levels, how to pre-empt security risks and assure compliance, and how to co-ordinate a thousand pools of data.

And, above all, how to do so in what sometimes feels like a horseless cart careering down a hill, even as the hill itself is going through convulsions of change, just one slope in a wide technology landscape that shimmers and twists to adapt to what is being called the ‘digital wave’ of user-led technology adoption. Within which AWS itself is both driving the cause of constant change, and feeling its effect.

So what? Well, my perception is that as AWS matures, its philosophy is becoming more aligned with these very real enterprise needs. This can only be a perception: if you asked AWS execs whether they cared about security, they would look askance, because of course the organisation would not exist without pretty strong security built in. Similarly, the AWS platform is built with the needs of developers front and centre. And so on.

What’s changing is how these areas are being positioned, to incorporate a more integrationist, change-aware, even enterprise-y foundation. For example, development tools are evolving to support the broader needs of integrated configuration and delivery management, DevOps automation and so on. Security teams are not only delivering on security features, but are broadening into areas such as policy-based management and, for example, how to reduce the time to resolution should a breach occur.

The seal on this deal is AWS’ recently announced Managed Services (previously codenamed Sentinel) offering, which brings ITIL-type features — change management, performance management, incident management and so on — into the AWS portfolio. The toolset originally appeared on the radar back in June last year but wasn’t launched until December, perhaps in recognition of the fact that it had to be right. It’s also available both to end-user organisations and service providers or outsourcing organisations.

By incorporating ITIL best practice, AWS has kicked into touch any idea that it doesn’t ‘get’ the IT challenges faced by larger organisations. Meanwhile many other areas of AWS’ evolving catalogue of capabilities, and indeed its rhetoric, reinforce a direction that acknowledges that enterprise IT is really, really hard and requires a change-first mindset. AWS’ confirmation that the world will be hybrid for some time yet, the expansion of its Snowmobile data transfer offering to a 100-petabyte shipping container, and simple remarks like “many customers don’t know what they have” all illustrate this point.

Such efforts are a work in progress: plenty remains for AWS to deliver internally, in terms of how products integrate, how features are provided and to whom: this will always be the case in a rapidly changing world. Nonetheless the organisation is a quick learner which is moving beyond seeing cloud-based services as something ‘out there’ that need to be ‘moved to’, and towards an understanding that it can provide a foundation the enterprise can build upon, offering not only the right capabilities but also the right approach.

With this understanding, AWS can engage with enterprise organisations in a way the latter understand, even as enterprises look to make the kinds of transformations AWS and other technology providers enable. Finally the cloud vendor can earn the right to partner with traditional enterprises, alongside the cloud-first organisations it has preferred to highlight thus far.

Report: Understanding the Power of Hadoop as a Service

Our library of 1700 research reports is available only to our subscribers. We occasionally release ones for our larger audience to benefit from. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports and video coverage from our 2016 and 2017 events.
Understanding the Power of Hadoop as a Service by Paul Miller:
Across a wide range of industries from health care and financial services to manufacturing and retail, companies are realizing the value of analyzing data with Hadoop. With access to a Hadoop cluster, organizations are able to collect, analyze, and act on data at a scale and price point that earlier data-analysis solutions typically cannot match.
While some have the skill, the will, and the need to build, operate, and maintain large Hadoop clusters of their own, a growing number of Hadoop’s prospective users are choosing not to make sustained investments in developing an in-house capability. An almost bewildering range of hosted solutions is now available to them, all described in some quarters as Hadoop as a Service (HaaS). These range from relatively simple cloud-based Hadoop offerings by Infrastructure-as-a-Service (IaaS) cloud providers including Amazon, Microsoft, and Rackspace through to highly customized solutions managed on an ongoing basis by service providers like CSC and CenturyLink. Startups such as Altiscale are completely focused on running Hadoop for their customers. As they do not need to worry about the impact on other applications, they are able to optimize hardware, software, and processes in order to get the best performance from Hadoop.
In this report we explore a number of the ways in which Hadoop can be deployed, and we discuss the choices to be made in selecting the best approach for meeting different sets of requirements.
To read the full report, click here.

Report: The importance of benchmarking clouds

Our library of 1700 research reports is available only to our subscribers. We occasionally release ones for our larger audience to benefit from. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports and video coverage from our 2016 and 2017 events.
The importance of benchmarking clouds by Paul Miller:
For most businesses, the debate about whether to embrace the cloud is over. It is now a question of tactics — how, when, and what kind? Cloud computing increasingly forms an integral part of enterprise IT strategy, but the wide variation in enterprise requirements ensures plenty of scope for very different cloud services to coexist.
Today’s enterprise cloud deployments will typically be hybridized, with applications and workloads running in a mix of different cloud environments. The rationale for those deployment decisions is based on a number of different considerations, including geography, certification, service level agreements, price, and performance.
To read the full report, click here.

AWS Re:Invent parting thoughts: The post-hybrid technology landscape will be multiplatform

As I flew away from Amazon Web Services’ Re:Invent developer conference, my first thought was how there is far, far more going on than anyone can keep up to date with. This issue has dogged me for some time, indeed every time I tried to make headway writing the now-complete book Smart Shift, I was repeatedly beset by the world changing for a thousand reasons and in a thousand ways. (As a result, incidentally, the book has morphed into a history of technology that starts 200,000 years in the past — at least that isn’t going to change! But I digress.)
Even though making sense of the rapidly changing digital landscape feels like drinking from several fire hydrants at once, such immersion does reveal some pointers about where technology is going. Across the conversations I had at Re:Invent — not just with the host but also with Intel and Splunk, Treasure Data and several partners and customers — a number of repeated themes started to make themselves known.
Let’s start with hybrid and get it out of the way. Actually, let’s start with the fact that AWS are pretty impressive in what they are achieving and how they are achieving it, with a strong focus on the customer and a business model that really does save organisations a small fortune in running costs. This being said, one aspect of the organisation’s overall pitch stuck out as incongruous. “We were misunderstood. Of course we always believed hybrid models were valid,” said Andy Jassy at the keynote. I paraphrase but that’s roughly it; it is also out of kilter with what has been said in previous years. I (and the people I spoke to) have too good memories to take this revisiting of history with anything other than a pinch of salt.
A second topic of conversation, notably with Kiyoto Tamura of data management platform Treasure Data but reinforced by several customers, was how multi-cloud models would pervade — again, despite AWS’ opinion to the contrary. While it may be attractive to have a “single throat to choke” and reduce the number of vendors accordingly, a clutch of reasons make two or more cloud providers better than one: many government organisations have a requirement to work with more than one supplier, for example; meanwhile past decisions, cost models, use of specific SaaS that drives deeper PaaS, all make for a multi-cloud situation alongside the hybrid consequence of using existing IT.
Even as AWS toes the hybrid line (to its credit as this is a significant pillar of its alignment to the enterprise, a point I will expand upon in a future blog) and pushes back against the notion of multiple clouds, I think the world is already moving on from the history-driven realities of hybrid and the current inevitability of multi-cloud. The history of this technological age has been marked by some underlying tendencies, one of which is commoditisation through supply and demand (which directly leads to @jonno’s first law of data growth, also the subject of a future blog) and the second, a corollary, is the nature of providers to expand into less commoditised areas.
Case in point: AWS, which started in storage and virtual servers, but which is placing increasing attention on increasingly complex services — cf. the machine learning-driven Alexa, Lex and Polly. To stay in the game and not be commoditised out of existence, all cloud providers inevitably need to become platform providers, purveyors of PaaS. As another corollary, this sends a warning shot across the bows of platform-enabled facade companies, such as those over-valued digital darlings Airbnb, Uber and the like, who are quite rightly diversifying before they are, also inevitably, subsumed back into the platform.
The future is platform-based rather than cloud-based, for sure. As Andy Jassy also said, and again I paraphrase, “We don’t have to waste time having those conversations about whether cloud is a good idea any more. We can just get on with delivering it.” The conversations are already moving on from cloud and towards what it enables, and it will be enabling far more in the future than in the past. As yet another aside, it may be that AWS should be thinking about changing its mantra from “journey to the cloud” to “journey to what is enabled”… but I digress again.
The platform perspective also deals with how we should think about all that pesky in-house, legacy stuff. There’s barrow-loads of it, and it is fantastically complex — we’ve all seen those technology architecture overviews that look something like a Peter Jackson-directed fly-through of an Orc-riddled mine. For many enterprise organisations, such complex, arcane and inefficient environments represent their business. But, fantastically complex as it is, existing IT systems can also be thought about as delivering a platform of services.
Many years ago I helped a public organisation re-consider its existing IT as a valid set of services in what was then called a service-oriented architecture, and the platform principle is not so different. Start with working out the new services you need, build interfaces to legacy systems based on the facade pattern, and the rest is gravy. So, if an organisation is going to be using platforms of services from multiple providers, and if existing systems can be ring-fenced and considered as service platforms in their own right, what do we have but a multiplatform technology landscape?
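A minimal sketch of that facade idea, with entirely hypothetical legacy systems standing in for the real thing: new services call one stable interface, while the arcane systems behind it stay untouched.

```python
# Minimal sketch of the facade pattern described above; the legacy classes are
# hypothetical stand-ins for a mainframe billing system and an old CRM database.

class LegacyBillingMainframe:
    def fetch_invoice_record(self, account_code: str) -> dict:
        # Imagine a file- or screen-scrape-based bridge to a mainframe here.
        return {"acct": account_code, "balance_cents": 125000}

class LegacyCrmDatabase:
    def lookup_customer_row(self, customer_id: str) -> dict:
        # Imagine a SQL query against a decades-old schema here.
        return {"id": customer_id, "name": "Example Ltd"}

class CustomerAccountService:
    """One modern, platform-style service composed from two legacy systems."""

    def __init__(self) -> None:
        self._billing = LegacyBillingMainframe()
        self._crm = LegacyCrmDatabase()

    def account_summary(self, customer_id: str) -> dict:
        customer = self._crm.lookup_customer_row(customer_id)
        invoice = self._billing.fetch_invoice_record(customer_id)
        return {"name": customer["name"], "balance": invoice["balance_cents"] / 100}

print(CustomerAccountService().account_summary("C-1001"))
```

The point is that callers of account_summary never need to know how many creaking systems sit behind it, which is what lets ring-fenced legacy IT be treated as just another service platform.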
And why does it matter? Because this perspective takes us beyond notions of hybrid, which essentially refer to how cloud stuff needs to integrate with legacy stuff, and towards a principle that organisations should have a congruent set of services to build upon and innovate against. This is not simply thinking out loud, but has tangible consequences as organisations can think about how skills in platform engineering, architecture, delivery and orchestration will become future differentiators, and can start to plan for them now. For sure we will see service catalogues, marketplaces and the like, as there is nothing new under the sun. Most important however is for organisations to deliver the processes and mindsets that will enable them to make the most of such enablers in the future.

VMWare on AWS is really cool!!! (or not?) #VMWonAWS

Just a few days before VMworld, VMware announced the VMware on AWS partnership. I struggled a bit to understand what it is and how it works, but if I’m right… this is another attempt by VMware to be more relevant in the cloud and, at the same time, it looks like another major validation for AWS.

VMware wants to be cloudier

Long story short, VMware is not perceived as a cloud player, so by associating their name with AWS they want to change that. Technically speaking, they did a great job by bringing all the ESXi-based products and management suites onto the AWS infrastructure. All the VMware experience, as is, in the cloud… WOW!!! There is no cross-platform compatibility though; you’ll just have your VMware environment running next to an AWS region. And being close means less latency and easier data mobility too, which could be another benefit from the TCO point of view.
From the end user standpoint this is great, and should be a significant benefit. You can have your services on-premises and on the cloud without touching a line of code or an application or even a process and it can be managed by the same sysadmins… while having access to the next generation services and APIs provided by Amazon AWS for your cloud-based applications.
The price component could be a deterrent, but it’s also true that VMware will take care of everything under the hood (including ESX/vSphere patching, etc.), you just have to manage your VMs, networks (NSX is an option) and applications. Slick and neat!

Strategy and tactics

The beauty of this service is also the real risk for VMware – in the long term at least. By adopting this service the end user has the easiest path to access AWS from its VMware environment. There is no real compatibility between the two, but moving services and applications will become easier over time… and it’s supported by VMware.
The end user is now free to choose the best strategy for the smoothest path without limitations or constraints. After all, nothing was added to VMware in terms of services, APIs or anything else, it’s just the classic VMware infrastructure moved to the cloud as is. Easy and great from the tactical point of view, but in the long run they’ll just be helping AWS do more business…

This is yet another gateway to AWS

Don’t get me wrong, I like this VMware on AWS thing… but it really looks like another gateway to greater AWS consumption. If your core applications reside on a VMware-based infrastructure, you are just moving them closer to AWS while removing any latency barrier. Today, a developer will be able to design a new application that accesses your old database stored in a VMware VM (which also serves other legacy applications)… tomorrow that database will be migrated to AWS, and I’m not so sure you’ll be needing the old database any longer (hence VMware), will you? This is the worst scenario (for VMware) of course, but I can’t see many positive outcomes for VMware in the long term. Please share your comments if you have a different point of view.

Closing the circle

I can see the advantages for both AWS and VMW with this partnership. But in the long term, AWS will most likely benefit the most and be in the better position.
VMW is further validating AWS in the enterprise (was it necessary?), while this could be the first time VMware gets it right with the cloud.
It remains to be seen whether this move will pay off for VMware in the long term… whether this is the first step towards something bigger that we don’t know about yet or just a tactical shift to buy some time and try to stay relevant while deciding what to do in a cloudy future…
Other questions arise: vCloud Air is already dead, but what about vCloud Director? Will VMW on AWS functionality be ported to vCloud Director, for example? I’m curious to see the reaction of VMware-based service providers to this news and how their strategies will change now that their competitors are VMware and AWS together!

Originally posted on Juku.it

Seattle vs. San Francisco: Who is tops in the cloud?

In football, in city livability rankings — and now in the cloud — San Francisco and Seattle are shaping up as fierce rivals.

Who’s winning? Seattle, for now. That’s due mostly to the great work, vision and huge head start of Amazon and Microsoft, the two top dogs in the fast-growing and increasingly vital cloud infrastructure services market. Cloud infrastructure services, also called IaaS (for Infrastructure as a Service), is the segment of the cloud market that enables dreamers, start-ups and established companies to roll out innovative new applications and reach customers anytime, anywhere, from nearly any device.

Amazon Web Services (AWS) holds a commanding 29 percent share of the market. Microsoft (Azure), is second, with 10 percent. Silicon Valley’s Google remains well behind, as does San Francisco-based Salesforce (not shown in the graph below).

cloud leaders

The Emerald City shines

I spoke with Tim Porter, a managing director for Seattle-based Madrona Venture Group. Porter told me that “Seattle has clearly emerged as the cloud computing capital.  Beyond the obvious influence of AWS and strong No. 2, (Microsoft) Azure, Seattle has also been the destination of choice for other large players to set up their cloud engineering offices.  We’ve seen this from companies like Oracle, Hewlett-Packard, Apple and others.”

Seattle is also home to industry leaders Concur, Chef, and Socrata, all of which can only exist thanks to the cloud, and to 2nd Watch, which exists to help businesses successfully transition to the cloud. Google and Dropbox have also set up operations in the Emerald City to take advantage of the region’s cloud expertise. Not surprisingly, the New York Times said “Seattle has quickly become the center of the most intensive engineering in cloud computing.”

Seattle has another weapon at its disposal, one too quickly dismissed in the Bay Area: stability. Washington enforces non-compete clauses more strictly than California does, preventing some budding entrepreneurs from leaving the mother ship to start their own company. The consequence of such laws can be larger, more stable businesses, with the same employees interfacing with customers over many years. In the cloud, dependability is key to customers, many of whom are still hesitant to move all their operations off-premises.

Job hopping is also less of an issue. Jeff Ferry, who monitors enterprise cloud companies for the Daily Cloud, told me that while “Silicon Valley is great at taking a single idea and turning it into a really successful company, Seattle is better for building really big companies.”

The reason for this, he said, is that there are simply more jobs for skilled programmers and computing professionals in the Bay Area, making it easier to hop from job to job, place to place. This go-go environment may help grow Silicon Valley’s tech ecosystem, but it’s not necessarily the best environment for those hoping to create a scalable, sustainable cloud business. As Ferry says, “running a cloud involves a lot of painstaking detail.” This requires expertise, experience, and stability.

San Francisco (and Silicon Valley)

The battle is far from over. The San Francisco Bay Area has a sizable cloud presence, and it’s growing. Cisco and HP are tops in public and private cloud infrastructure. Rising star Box, which provides cloud-based storage and collaboration tools, started in the Seattle area but now has its corporate office in Silicon Valley. E-commerce giant Alibaba, which just so happens to operate the largest public cloud services company in China, recently announced that its first cloud computing center would be set up in Silicon Valley.

That’s just for starters.

I spoke with Byron Deeter, partner at Bessemer Venture Partners (BVP), which tracks the cloud industry. He told me that the five largest “pure play” cloud companies by market cap are all in the Bay Area: Salesforce, LinkedIn, Workday, ServiceNow and NetSuite.

The Bay Area also has money. Lots of money. According to the National Venture Capital Association, nearly $50 billion in venture capital was invested last year. A whopping 57 percent went to California firms, with San Francisco, San Jose and Oakland garnering a rather astounding $24 billion. The Seattle area received only $1.2 billion.

venture capital by region

The Bay Area’s confluence of talent, rules and money will no doubt continue to foster a virtuous and self-sustaining ecosystem, one that encourages well-compensated employees to leave the nest, start their own business, and launch the next evolution in cloud innovation. If Seattle has big and focused, San Francisco has many and iterative.

The cloudy forecast

Admittedly, this isn’t sports. There’s no clock to run out and not everyone keeps score exactly the same. Just try to pin down Microsoft’s Azure revenues, for example. It’s also worth noting that the two regions do not compete on an even playing field. Washington has no personal or corporate income tax, and that is no doubt appealing to many — along with the mercifully lower price of real estate, both home and office.

The cloud powers healthcare, finance, retail, entertainment and our digital lives. It is increasingly vital to our always-on, from-anywhere economy, and a key driver of technical and business-model innovation. If software is eating the world, the cloud is where it all goes to get digested. Here’s hoping both cities keep winning.

Smart home management firm AlertMe bought out by British Gas

The British smart home outfit AlertMe has a long history with British Gas – back in 2009 it scored its first trial with the country’s biggest energy supplier, and in 2012 it was chosen to provide the software for British Gas’s smart meters across the country. Its technology also underpins British Gas’s Hive Active Heating system.

Now the two have formally tied the knot. On Friday British Gas announced it was buying out AlertMe, in which it already held a 21 percent stake, for a net cost of £44 million ($68 million). AlertMe said the deal was worth an overall £65 million ($100 million). British Gas parent company Centrica said it expected the transaction to be completed by the end of the quarter.

Centrica said the purchase would give it a connected home boost in its other territories, such as the U.S. (where it owns Direct Energy) and Ireland (where it owns Bord Gáis).

AlertMe provides a modular connected home system called Omnia, comprising an energy analytics software service, energy monitoring and control, and home automation – covering everything from surveillance and various alarms to remotely controlled door locks.

The company makes the smart home management system sold by Lowe’s in the U.S. under the “Iris” brand. Then there’s its AlertMe Cloud, running on Amazon Web Services, which ties the whole caboodle together and comes with APIs for device and application partners.

AlertMe also has customers in other energy providers such as Essent in the Netherlands, which is part of energy giant RWE. I asked the company where such non-Centrica deals will stand, and a spokeswoman told me that “as far as existing customers are concerned, there’s no change.”

“The intention in the global market is selling the product as a platform-as-a-service — there are opportunities beyond the Centrica group companies,” she added. AlertMe currently has 70 full-time employees and made £17.8 million in revenues in 2013.

Amazon to power cloud with wind farm in Indiana

Following Amazon’s quiet commitment to use 100 percent clean energy for its AWS cloud, on Tuesday the company announced that it will support the construction and operation of a wind farm in Benton County, Indiana, which will provide power for its data centers. While Google, Facebook, and Apple have been investing in clean power for data centers for a while, Amazon has moved more slowly and been quieter about how it planned to incorporate clean power into its energy infrastructure mix.

These are the first actual energy infrastructure details I’ve heard so far. Amazon says Pattern Energy Group will develop a 150 MW wind farm, which will provide enough power for about 46,000 average American homes. The wind farm — dubbed the Amazon Web Services Wind Farm — will be operational as early as January 2016.

Wind turbines in Hawaii

To put this in context, 150 MW is a small contribution to Amazon’s overall energy needs for its AWS cloud. But that amount of power could support a data center or two (or even three), depending on the size of the data centers. Apple’s 50 MW of onsite clean energy in North Carolina fully supports its large data center in the region.

Large wind turbine projects are one of the lowest cost sources of clean energy in the U.S., and can also be competitive with cheap fossil fuel plants, like new natural gas plants. The other increasingly common large scale clean power option is utility-scale solar panel farms.

Wind farms can cost as little as 3 to 8 cents per kilowatt-hour in windy regions like the interior of the U.S., according to the American Wind Energy Association. Amazon didn’t disclose the financial details of its power agreement.

The Topaz solar farm outside of San Luis Obispo, Calif.

Generally, companies that want to buy large amounts of clean power from a new power plant will make a “power purchase agreement” with the developer to buy the power from the project at a low cost over the course of 25 or so years. The developer can then use the contract with the power purchaser to get the project built.

Google has been announcing these types of clean power purchase agreement deals for years. Earlier this month Google announced that it was making a $76 million investment in a 300 MW wind project in Beaver County, Oklahoma, that is expected to be finished in late summer 2015. A week before that Google announced an $80 million investment in a solar project in Utah. Google has spent over a billion dollars on clean energy projects over the years.

This news from Amazon indicates that the cloud leader will indeed attempt to meet its commitment for 100 percent clean power for its cloud infrastructure. In recent years Greenpeace has targeted Amazon as being a slow mover when it comes to clean power for data centers.

Apple’s solar farm next to its data center in Maiden, North Carolina, image courtesy of Katie Fehrenbacher Gigaom


Robots embrace Ubuntu as it invades the internet of things

Canonical has revealed what I reckon is its biggest announcement in years: Ubuntu is about to invade the internet of things with a minimal version of the Linux distribution that it hopes will provide a standardized platform for connected devices from drones to home hubs.

“Snappy” Ubuntu Core came out of Canonical’s mobile efforts (which are yet to go anywhere) and was made available on Amazon Web Services, Microsoft Azure and the Google Cloud Platform at the end of 2014. Now it’s available for smart devices, and Canonical has already got players such as the Open Source Robotics Foundation (OSRF), drone outfit Erle Robotics and connected hub maker NinjaBlocks on board.

From mobile to IoT, via the cloud

Unlike traditional, package-based Ubuntu for servers and desktops, the extensible Core keeps apps and each part of the OS securely isolated from one another, and it allows for “transactional updates” — updates only need to include the difference between the old and new version, allowing for easy upgrading and rolling back if needed. In the cloud, Canonical is pushing Ubuntu Core as ideal for Docker and other containerized apps.
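As a purely conceptual sketch of what “transactional” means here — a simplified A/B slot scheme, not Ubuntu Core’s actual implementation — the new version is staged alongside the running one and activated with a single pointer flip, so rolling back is just flipping the pointer again:

```python
# Conceptual sketch only: an A/B-style transactional update, not Snappy's real code.
# Two complete versions sit side by side; applying an update is one atomic pointer
# swap, so a bad upgrade can be undone without touching the running files.

system = {
    "slots": {"a": {"version": "1.0"}, "b": None},  # installed images
    "active": "a",                                  # pointer to the running slot
}

def stage_update(new_version: str) -> str:
    """Write the new image into the inactive slot; the active one keeps running."""
    inactive = "b" if system["active"] == "a" else "a"
    system["slots"][inactive] = {"version": new_version}
    return inactive

def commit(slot: str) -> None:
    """Switch over to the newly staged slot in a single step."""
    system["active"] = slot

def rollback() -> None:
    """Point back at the previous slot, which was never modified."""
    system["active"] = "b" if system["active"] == "a" else "a"

slot = stage_update("1.1")
commit(slot)   # now running 1.1
rollback()     # something broke? one flip returns to the known-good 1.0
```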

Mark Shuttleworth

However, Core’s suitability for the container trend was more or less an accidental bonus while the technology was quietly making its way from Ubuntu Touch to the internet of things, Canonical founder Mark Shuttleworth told me in an interview. According to Shuttleworth, Core’s development began as Canonical grappled with carriers’ annoyance at existing mobile firmware update mechanisms, and as cheap development systems such as Raspberry Pi and Arduino started to take off.

“Two years ago we started seeing a lot of what I’d call alpha developers starting to tinker with what at the time people called embedded development,” Shuttleworth said. “We realized there was a very interesting commonality between the work we were doing for mobile — specifically this update mechanism work — and the things you’d want if you were to build a product around one of these boards.”

Canonical had “invested in the container capabilities of the Linux kernel as it happened for the mobile story,” Shuttleworth said, as it was needed to fix security issues on the phone, such as isolating untrusted apps from the address book. “Docker is based on those primitives that we built,” he noted.

Developer push

For makers of connected devices, the same technology means being able to concentrate on the connected app and keeping the device more secure. “[Currently] if you’re going to get an update for that firmware, what you’re getting is a whole blob of kernel and OS and app, and the net effect is you rarely get them, so a lot of devices are vulnerable,” Shuttleworth said. “With Core, you can let us worry about Heartbleed and so on, and let us deliver those updates to your device with the same efficiency as with a phone.”

What’s more, Core for smart devices comes with an app store (that can be white-labeled for brands) that provides developers with a distribution mechanism, and also opens up the possibility of running different apps from different vendors on connected devices.

Shuttleworth gave the example of a smart lawnmower that could take an add-on spectral camera from a different manufacturer and run that manufacturer’s app:

It’s going from a single-device stodgy world to more cross-pollination between devices from different vendors. Because you have a store, you can see more innovation where people concentrate on the software – they don’t have to build a whole device. Because it’s a common platform, they can deliver that app to many devices.

One of the key benefits of Core is its flexibility. The base Ubuntu Core code is identical across the cloud, connected devices and even the desktop – it supports both ARM and x86. This means device makers can prototype their “Snappy” apps on a PC before running thousands of simulations in the cloud, and it also means old PCs can be easily repurposed as a home storage server or photo booth or what have you.

Early adopters

The OSRF is going to use Ubuntu Core for its new app store, so developers can push updates to their open robots. Erle Robotics is using Core to power its new Erle-Copter open educational drone (pictured above), which will ship in February.

NinjaBlocks' Ninja Sphere smart home controller

NinjaBlocks is using Core and its app store as the basis for its new Ninja Sphere smart home controller (pictured right).

Shuttleworth said he was intrigued by the possibilities of hubs: “They may be routers or set-top boxes [but] you really want to think of them as extensible. Why can’t a NAS also have facial recognition capabilities; why can’t your Wi-Fi base station also run a more sophisticated firewall?”

The current Raspberry Pi won’t run Ubuntu Core as it uses the older ARMv6 architecture – Core requires ARMv7, though the ODroid-C1 provides a cheap ($35) option in that department. “We decided we wouldn’t go to lower specifications because our Core story is the next generation of devices,” Shuttleworth said.

Speaking of hardware, the Ubuntu founder also hinted that there might be further announcements in connection with the big silicon vendors, with which Canonical already has extensive relationships – “At the silicon level we’re a unifying factor” — though he didn’t want to go into detail just yet. The likes of Intel and Samsung and Qualcomm are all trying to develop their own (infuriatingly disparate) standards for the internet of things, and it would be interesting to see how Canonical can insert itself into this chaotic land-grab, if indeed it can.

Ubuntu’s future

For those wishing to repurpose old PCs, the private cloud storage outfit OwnCloud (already available in the Core app store) provides an interesting test case for the difference between Ubuntu Core and the full-fat Ubuntu. As Shuttleworth tells it, OwnCloud “got bitten” by the traditional package management system on Ubuntu, because that involves different packages for different versions of the OS.

“It came to the question of who’s responsible for an out-of-date, insecure version of OwnCloud,” he said. “We can’t usually give [developers] access rights to the archive to push updates – if something malicious is in there… it can go anywhere. [Now] we can say: ‘OK, there’s just one place you push the latest version of OwnCloud and it goes directly to every device with Snappy.’ If they were to do something malicious, we’d confine that to just the data you’ve already given to OwnCloud.”

So, is Core Ubuntu’s WinCE or the future of the venerable Linux distro? Shuttleworth was adamant that the Debian-package version of Ubuntu “will never go away because it’s the mechanism with which we collaborate amongst ourselves and with Debian” and would be of continued relevance for developers:

The question comes when you look to shipping the software to a device or user – folks are increasingly comfortable with the idea that a more bundled, precise and predictable delivery mechanism is attractive for that. I think there will be millions of people using Snappy, but I don’t think the package-based version will go away. It’s so useful for developers and in many cases for production, but in cases where you have a particular property of very high cost to go fix something if it breaks, the Snappy system is very attractive.

For any given application, it should be fairly clear which of the two approaches is the better fit.