Technology may be fast-moving, but some concepts have remained stable for decades, not least the principle of resource constraints. Simply put, we have four finite variables to play with in our technological sandpit:
- electrical power
- processor speed
- network bandwidth
- storage volume
This principle is key to understanding the current Internet of Things phenomenon. Processing and storage capacities have increased exponentially — today’s processors support billions of instructions per second and the latest solid state storage can fit 32 gigabytes on a single chip.
As we expand our abilities to work with technology, we become less constrained, creating possibilities that were previously unthinkable on grounds of cost or timeliness, such as building vast networks of sensors across our manufacturing systems and supply chains, termed the Industrial Internet.
This also means, at the other end of the scale, that we can create tiny yet still powerful computers. So today, consumers can afford sports watches that give immediate feedback on heart rate and walking pace; even five years ago, this would not have been practical. While enterprise businesses may operate on a different scale, the trade-offs are the same.
Power and end-to-end network bandwidth have not followed such a steep curve, however. When these resources are lacking, processing and storage tend to be used in support. So, for example, when network bandwidth is an issue (as it so often is, still), ‘cache’ storage or local processing can be added to the architecture.
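To make the caching idea concrete, here is a minimal sketch (class and method names are illustrative, not from any real product) of a device-side buffer that stores readings locally and forwards them in batches when the uplink allows, trading a little storage for a lot of bandwidth:

```python
from collections import deque

class SensorBuffer:
    """Local 'cache' for a bandwidth-constrained device: hold readings
    on-device, then forward them in batches when capacity allows."""

    def __init__(self, batch_size=10):
        self.batch_size = batch_size
        self.pending = deque()

    def record(self, reading):
        # Store locally instead of transmitting each reading as it arrives.
        self.pending.append(reading)

    def flush(self, send):
        # Forward up to batch_size buffered readings in a single upstream
        # call, amortising per-message network overhead.
        batch = [self.pending.popleft()
                 for _ in range(min(self.batch_size, len(self.pending)))]
        if batch:
            send(batch)
        return len(batch)
```

For example, five readings recorded one by one still result in only a single `send` call when the buffer is flushed.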
In Internet of Things scenarios, sensors (attached to ‘things’) are used to generate information, sometimes in considerable volumes, which can then be processed and acted upon. A ‘thing’ could be anything from a package in transit to a motor vehicle, an air conditioning unit or a classic painting.
If all resources were infinite, such data could be transmitted straight to the cloud, or to other ‘things’. In reality, however, the principle of resource constraints comes into play. In the home environment, this results in one or more ‘smart hubs’ which collate, pre-process and distil the data coming from the sensors.
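As an illustration of the distilling step (the function name and summary fields here are invented for the sketch, not any hub vendor's API), a hub might reduce a burst of raw sensor readings to a compact summary before anything crosses the network:

```python
from statistics import mean

def distil(readings):
    """Reduce a burst of raw sensor readings to a compact summary,
    so the hub sends one small record upstream instead of every sample."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# e.g. a minute of temperature samples from one sensor
summary = distil([21, 22, 22, 23])
```

Four raw samples become a single four-field record; at scale, that is the difference between a trickle and a flood on the home broadband link.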
As well as a number of startups such as iControl and (the Samsung-led) SmartThings, the big players recognise the market opportunity this presents. Writes Alex Davies at Rethink IoT, “Microsoft is… certainly laying the groundwork for all Windows 10 devices, which now includes the Xbox, to act as coordinating hubs within the smart home.”
Smart hubs also have a place in business, collating, storing and forwarding information from sensors. Thinking more broadly, however, there are no constraints on what the architecture needs to look like, beyond the need to collate data and get the message through as efficiently as possible – in my GigaOm report I identify the three most likely architectural approaches.
Given the principle of resource constraints, the question of form factor becomes one of identifying the right combination of elements for the job. For example, individual ‘things’ may incorporate some basic processing and solid-state storage. Such capabilities can even be incorporated in disposable hubs, such as the SmartTraxx device, which can be placed in a shipping container to monitor location and temperature.
We may eventually move towards seemingly infinite resources; one day, for example, quantum sensors might negate the need to transport information at all. For now, however, we must deal in the finite, which creates more than enough opportunity for enterprises and consumers alike.