Estimates vary about the number of connected devices there will be as the internet of things shapes up — a recent Gartner estimate was 26 billion by 2020 — but everyone agrees there will be a ton.
The numbers are staggering even when it comes to personal electronic devices like smartphones, tablets and game consoles. Just five years ago, U.S. households averaged one web-connected device; now the average is five or six, according to Deepfield CEO Craig Labovitz, whose company keeps track of the traffic flying around the web. But the really mind-blowing numbers are for the other devices — those that talk to each other, not to us. Gartner’s 26 billion number, for example, doesn’t even count people-oriented smartphones and tablets.
So the endpoints may be cool and numerous, but the more interesting story will be how all the data they spew will be handled and how the internet infrastructure itself — with a big assist from cloud computing — will adapt to deal with it.
That is a central question to be taken up at Gigaom Structure 2014 in June. There, an array of industry luminaries including Werner Vogels, CTO of Amazon; Urs Hölzle, Google’s SVP of technical infrastructure; Lance Crosby, CEO and founder of SoftLayer, now part of IBM(s ibm); and Scott Guthrie, Microsoft’s(s msft) newly named executive VP of cloud and enterprise, will be asked about all this. And, IT pros from companies like sugar giant Florida Crystals and The Gap will talk about how their companies are looking at cloud in the impending IoT era.
All of that aforementioned data, after all, needs to be aggregated from far-flung points, stored and crunched. Tortured metaphors like Cisco’s(s csco) notion of “fog” are an attempt to depict an architecture that will require more dispersed aggregation and number crunching points. In Cisco’s view, not surprisingly, more compute will happen at routers dispersed out near the devices generating the data, not at some fortress cloud data center hundreds or thousands of miles away.
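To make the fog idea concrete, here is a minimal sketch (not Cisco's actual design — the class name, window size and summary fields are illustrative assumptions) of an edge node that buffers raw sensor readings locally and forwards only compact summaries upstream, so the distant cloud data center sees a fraction of the raw traffic:

```python
import statistics

class EdgeAggregator:
    """Hypothetical fog-style edge node: buffers raw readings from
    nearby devices and forwards only compact summaries to the cloud."""

    def __init__(self, window_size=100):
        self.window_size = window_size
        self.buffer = []
        self.summaries_sent = []  # stand-in for an upstream network call

    def ingest(self, reading):
        """Accept one raw reading; flush a summary when the window fills."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.window_size:
            self._flush()

    def _flush(self):
        summary = {
            "count": len(self.buffer),
            "mean": statistics.fmean(self.buffer),
            "min": min(self.buffer),
            "max": max(self.buffer),
        }
        self.summaries_sent.append(summary)  # a real node would POST this upstream
        self.buffer.clear()

# 1,000 raw readings collapse into 10 summaries — a 100x reduction
# in what has to cross the wide-area link to the central cloud.
node = EdgeAggregator(window_size=100)
for i in range(1000):
    node.ingest(float(i % 50))
print(len(node.summaries_sent))
```

The trade-off this sketch illustrates is exactly the one in the fog debate: detail is lost at the edge in exchange for far less data traveling hundreds or thousands of miles to a fortress data center.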
In IoT, big data keeps getting bigger
“To date, we’ve largely deployed Big Data technologies to find value in data exhaust: the service data and logs generated from operating everyday services and platforms,” said Cory von Wallenstein, chief technologist for Dyn. “But what happens when we increase the number of devices on the Internet by nearly an order of magnitude, each actively generating new telemetry data we did not have access to before? We’ve now gone beyond ‘find value in the data we already have’ to ‘actively produce more data in pursuit of value.’ Our thoughts of Big Data are actually quite small in the context of the amount of data we’ll be processing in the cloud originating from the Internet of Things.”
The diversity of applications will require several implementation options, said Gary Ballabio, product line director for Akamai’s(s akam) Web Experience Division. The infrastructure model will depend a lot on whether the application requires real-time response or whether some latency is acceptable. And the amounts of data generated will also vary wildly. A runner’s wrist band that transmits her heart rate is one thing; a jet engine that phones home reams of performance data is quite another, he said. For most of those applications, he still sees a need for centralized data aggregation and compute.
As Stacey Higginbotham wrote in describing Cisco’s foggy view, everyone is trying to come up with a way to depict this massive transformation and how, as more devices connect at the edge, “we need a highly distributed architecture that encompasses not just data centers but the edge nodes, with applications that can handle the distributed compute and the databases that such an architecture requires.”