5 questions for… ADLINK. Edge-Fog-Cloud?

One of the most interesting and dynamic areas of technology today is also one of the hardest to define: the space between the massively scalable infrastructure of the cloud and the sensor-based devices proliferating all around us. To understand it better, I spoke to Steve Jennis, corporate VP and Head of Global Marketing at hardware platform and connectivity provider ADLINK.
 
1. What do you mean by Edge Computing, with respect to Cloud and Fog Computing?
“Edge Computing” is still in the eye of the beholder. Cloud Service Providers use “the Edge” to refer to any computing resources on their users’ premises, the telecoms industry sees it as the end-node of its proprietary network (as in Multi-access Edge Computing), and corporate users often think of the term as the boundary between operational technology (OT) and information technology (IT). All are valid: Edge Computing is computing at the edge of a network, but the user’s context is obviously important.
By comparison, the term “Fog Computing” represents the continuum between data sources, Edge Computing and Cloud Services. As such, Edge Computing is a subset of Fog Computing, ‘the meat in the sandwich’ between the IoT’s “Things” and Cloud-based Services. Fog Computing is about exploiting compute resources anywhere in an end-to-end system, to add the most (business) value overall.
Meanwhile we have “Cloud Computing”: despite its enormous growth it has some well-understood limitations. For example, the cost of exclusively using Cloud Services can be unacceptable if you are generating terabytes of data every day. That drives IT people – who tend to think top-down – to consider Edge Computing as a complement to Cloud Computing.
Simultaneously, we have the “hard-hat” guys in their OT world, working at the sharp end of data collection, analysis and control, with a need for real-time systems that require 24/7 fault-tolerant operation. These guys think bottom-up; they often look suspiciously at the corporate IT world and feel that IT people don’t understand their production computing needs. They see Edge Computing as the boundary between themselves and their counterparts in IT, a boundary where their ‘northbound’ data is processed (ingested, normalized, aggregated, analyzed, abstracted, etc.) before it enters the traditional corporate IT domain.
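To make that ‘northbound’ processing a little more concrete, here is a minimal, hypothetical Python sketch of the kind of pipeline an edge gateway might run before data crosses into the IT domain. The field names, units and alert threshold are illustrative assumptions, not ADLINK’s implementation.

```python
# Hypothetical edge-gateway pipeline: ingest -> normalize -> aggregate -> abstract.
# Field names, units and thresholds are illustrative assumptions only.
from statistics import mean

def normalize(reading):
    """Convert a raw OT reading into a canonical record (SI units, common keys)."""
    return {
        "sensor_id": reading["id"],
        "temperature_c": (reading["temp_f"] - 32) * 5 / 9,  # Fahrenheit -> Celsius
        "timestamp": reading["ts"],
    }

def aggregate(records):
    """Collapse a window of normalized records into one abstracted, northbound summary."""
    temps = [r["temperature_c"] for r in records]
    return {
        "sensor_id": records[0]["sensor_id"],
        "window_start": records[0]["timestamp"],
        "window_end": records[-1]["timestamp"],
        "avg_temperature_c": round(mean(temps), 2),
        "max_temperature_c": round(max(temps), 2),
        "alert": max(temps) > 85.0,  # flag anomalies locally, in real time
    }

# Raw, high-frequency data stays at the edge; only the compact summary goes north.
raw = [{"id": "press-07", "temp_f": 180 + i, "ts": 1000 + i} for i in range(60)]
summary = aggregate([normalize(r) for r in raw])
print(summary)
```

The point of the sketch is the shape of the flow, not the specifics: terabytes of raw readings are reduced at the edge to compact, business-relevant summaries before they ever touch cloud or corporate IT systems.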
2. If it’s all about enabling end-to-end IoT solutions, how do you see the opportunity?
A few years ago we were in the proof-of-concept era: the main question around IoT was, would the technology work end-to-end? That question has been answered (although concerns around security remain), as have questions around business value: the business case for exploiting the IoT ‘tool kit’ is now widely accepted, whether for operational-excellence reasons (e.g. predictive maintenance) or to support new business models (e.g. post-sale connectivity, services and subscription models).
Therefore, the focus has now shifted to: how do I get started, how do I deploy, and how do I manage the new risks? End-to-end IoT solutions are, by their nature, multi-technology, multi-vendor and multi-standard. They also almost always include both greenfield (new) and brownfield (legacy) data sources and data sinks. All of this is shifting attention to IoT systems integration. The bigger vendors (the traditional leaders in IT or OT) have a single-vendor, one-stop-shop culture, but IoT solutions aren’t like that. So who do you turn to to take end-to-end responsibility for these heterogeneous IoT solutions – an in-house multi-disciplinary team, your trusted local services supplier or a major SI? A lack of good options is the biggest bottleneck in IoT solutions deployment today, across vertical markets.
We’re also seeing two models of IoT deployment. Bottom-up, equipment providers are adding IoT technologies into their product lines, enabling evolutionary adoption by end users within normal technology cycles. And top-down, strategic digitization opportunities and threats are getting a lot of attention, so turning IoT threats into opportunities is a growing concern for company boards, CEOs and CIOs. Both models lead to greater enterprise digitization in pursuit of operational excellence (the cost line) and support for new revenue opportunities (the top line).
Putting the two together, we see machine providers increasingly supporting as-a-service models, marrying the IT and OT worlds to optimize post-sales performance and provide customers with new services. Right now, the bottom-up (evolutionary adoption) model is the most prevalent, but five years from now it will become more balanced as supporting new business models increasingly drives IoT investments: as one vendor introduces a new service, its competitors will have to follow suit quickly to remain competitive.
3. How is ADLINK responding to this market evolution?
Twenty years ago, ADLINK was a traditional electronics manufacturer, building boards and modules, often to customer designs. Soon thereafter, ADLINK started designing its own innovative (analog and digital, hence AD-LINK) embedded computing products and building its own brand globally. This business has been consistently successful for more than two decades. Then, a few years ago, our CEO Jim Liu broadened our corporate strategic vision and ADLINK entered the emerging market for industrialized IoT products and solutions, essentially enabling “connected embedded computing”.
As we started to think about the elements of end-to-end IoT solutions, we quickly realized how much of an opportunity existed at the Edge. Edge Computing really is virgin territory, where no incumbent vendors dominate, making it a great growth opportunity. But, as mentioned, the channels-to-market for deploying IoT solutions are relatively immature. As a result, we offer what we call “Digital Experiments as-a-Service” (DXS), where we partner with customers who are looking to improve their operations or prove out new business models and revenue streams.
As we help our customers and learn more about the best opportunities of scale (both the size of deployments and the number of potential customers), this solutions view also drives how we embed IoT technology into all our enabling products: platforms, data connectivity and advanced application enablement (e.g. AI-at-the-Edge). Through this top-down (DXS IoT solutions) and bottom-up (enabling products) approach, we support our customers in their embrace of IoT technologies and also help our systems-integration channel partners respond to the huge IoT solutions opportunity in front of them.
4. Where are you seeing the most maturity and growth in end user adoption of IoT?
Overall, we are looking to address two specific questions: Firstly, how to identify customers and solutions with the biggest potential upside, i.e. which users and Use Cases will deliver the best ROIs? And secondly, how to identify applications that can really scale, in terms of system size per customer and/or the number of potential customers? The answers define the sweet-spots in the overall market for us.
In addressing these questions, we prioritize engagements with forward-looking, entrepreneurial and innovative customers rather than any specific vertical markets. It is the customer’s culture and attitude that is more important than their application domain.
That said, we are spending a lot of time with manufacturers (particularly around enabling smart factories) and with a wide range of machinery makers, who now see their products as valuable data sources in an IoT context, in addition to their traditional functionality. But we range across many verticals, and engagement depends mostly on the customer’s forward-thinking, innovative culture. So, in summary, a customer’s willingness to innovate, experiment and explore matters more than the vertical market in which they operate.
5. Where do integrators fit in the end-to-end IoT solutions ecosystem?
For IoT solutions to work end-to-end you need the right team of players, including both users (domain experts) and specialist partners (complementary services providers). So, we’re working with a broad set of partners, both major – such as Intel’s market-ready solutions programme – and smaller – like specialist systems integrators and local services providers – to reduce the user’s barriers to deployment of IoT technologies.
We still see a channel bottleneck in terms of skills and experience in many vertical domains, so by working with innovative partners we can learn together how best to create new business value from IoT solutions. ADLINK will continue to act as a multi-vendor, end-to-end solutions advisor and provider, working with preferred systems integrators to develop the IoT solutions ecosystem, and thus overcome the “getting started” and large-scale deployment issues that end users and machinery makers face today.
 
My take: There’s substance behind the fog: watch out, cloud providers
I confess, I’m not a great fan of the term “fog computing”, as it focuses more on the problem than the solution. However, it offers a relatively accurate description of IoT’s current state of affairs: a lack of clarity pervades, alongside an absence of general agreement on standards and norms. These are symptoms of where we are rather than inherent problems, and they will be treated over time.
The foggy nature of things is also a smoke screen for what could be one of the most exciting areas of technology in years to come. I don’t want to overstate this, as it starts to sound like hype, but let’s think about the pervading architectural model: cloud.
Right now, we have a layer of rhetoric which assumes that all processing and storage will centralise to a small number of providers: this is variously termed “the journey to the cloud.” It may take decades, goes the thinking, but it will happen. Hybrid architectures are a stop-gap, a Canute-like attempt to stave off the inevitable.
Fog computing, a.k.a. highly distributed and self-orchestrated processing systems, flies directly in the face of the hyper-centralised cloud model. In the foggy world, technology is moving rapidly from a set of standardised boxes and stacks to a situation where anything goes. The mobile phone or the home wireless hub is just as able to integrate sensors and processing as any custom-built device. And they will.
When we do achieve a level of standardisation (and move away from this wild west), we can expect to see an explosion in both innovation and uptake. Organizations that have built their businesses on the centralised models will no doubt adjust the rhetoric to suggest that the cloud has extended right out to the sensors. But they will have their work cut out keeping up with the new competitors that will emerge, out of the fog, to take market leading positions seemingly from nowhere.
 

Enchanting Products and Spaces by Rethinking the Human-Machine Interface

At the Gigaom Change conference in Austin, Texas, on September 21-23, 2016, David Rose (CEO of Ditto Labs, MIT Media Lab Researcher and author of Enchanted Objects), Mark Rolston (Founder and Chief Creative Officer at argodesign) and Rohit Prasad (Vice President and Head Scientist, Alexa Machine Learning) spoke with moderator Leanne Seeto about “enchanted” products, the power of voice-enabled interactions and the evolution of our digital selves.
There’s so much real estate around us for creating engaging interfaces; we don’t need to be confined to devices. Or at least that is the belief of Gigaom Change panelists David Rose, Rohit Prasad and Mark Rolston, who talked about the ideas and work being explored today that will change the future of human-machine interfaces, bringing more enchanted objects into our lives.
With the emergence of the Internet of Things (IoT) and advances in voice recognition, touch and gesture-based computing, we are going to see new types of interfaces that look less like futuristic robots and more like the things we interact with daily.
Today we’re seeing this happen most in our homes, now dubbed the “smart home.” Window drapes that automatically close to give us privacy when we need it are just one example of how our homes and even our workspaces will soon come alive with what Rose and Rolston think of as Smart-Dumb Things (SDT). Another might be an umbrella that can accurately tell you if or when it’s going to rain. In the near future, devices will emerge out of our phones and onto our walls, furniture and products. We may even see them added to our bodies. This supports the new thinking that devices and our interactions with them can be simpler, more seamless and more natural.
Rose gave an example from a collaboration he did with the architecture firm Gensler for the offices of Salesforce. He calls it a “conversational balance table.” It’s a device that helps subtly notify people who are speaking too much during meetings. “Both introverts and extraverts have good ideas. What typically happens, though, is that during the course of a meeting, extraverts take over the conversation, often not knowingly,” Rose explains, “so we designed a table with a microphone array around the edge that identifies who is speaking. There’s a constellation of LEDs embedded underneath the veneer so as people speak, LEDs illuminate in front of where you are. Over the course of 10 or 15 minutes you can see graphically who is dominating the conversation.”
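As a rough illustration of the logic Rose describes, the sketch below (Python, hypothetical; the speaker-identification step is assumed to have been done by the microphone array) tallies speaking time per seat and maps it to how many LEDs to light in front of each participant.

```python
# Illustrative only: tally per-seat talk time and map it to an LED count.
# Assumes the microphone array has already attributed each audio frame to a seat.
from collections import defaultdict

LEDS_PER_SEAT = 12
FRAME_SECONDS = 0.5

def led_counts(frames):
    """frames: iterable of seat labels, one per attributed audio frame."""
    talk_time = defaultdict(float)
    for seat in frames:
        talk_time[seat] += FRAME_SECONDS
    total = sum(talk_time.values()) or 1.0
    # Light LEDs in proportion to each seat's share of the conversation.
    return {seat: round(LEDS_PER_SEAT * t / total) for seat, t in talk_time.items()}

frames = ["seat_1"] * 40 + ["seat_2"] * 12 + ["seat_3"] * 8
print(led_counts(frames))  # {'seat_1': 8, 'seat_2': 2, 'seat_3': 2}
```

Over a 10- or 15-minute meeting, a proportional display like this is exactly what makes a dominant speaker visible at a glance, without anyone having to say a word.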
So what about voice? Will we be able to talk to these devices too? VP and Head Scientist behind Amazon Alexa, Rohit Prasad, is working on vastly improving voice interactions with devices. Prasad believes voice will be the key feature in the IoT revolution that is happening today. Voice will allow us to access these new devices within our homes and offices more efficiently. As advances in speech recognition continue, voice technology will become more accurate and able to quickly understand our meaning and context.
Amazon is hoping to spur even faster advances in voice from the developer community through Alexa Skills Kit (ASK) and Alexa Voice Service (AVS), which allow developers to build voice-enabled products and devices using the same voice service that powers Alexa. All of this raises important questions. How far does this go? When does voice endow an object with the attributes of personhood? That is, when does an object become an “enchanted” object?
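For a sense of what building on ASK looks like, here is a minimal sketch using the ASK SDK for Python. The skill itself and the “LunchIntent” intent name are hypothetical, and a real skill would also need an interaction model configured in the Alexa developer console.

```python
# Minimal, hypothetical custom-skill handler using the ASK SDK for Python.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name

class LaunchHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        return handler_input.response_builder.speak(
            "Hi, ask me what's for lunch.").response

class LunchIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("LunchIntent")(handler_input)  # hypothetical intent

    def handle(self, handler_input):
        # In practice the skill might push the full menu to a companion app
        # and keep the spoken reply short.
        return handler_input.response_builder.speak(
            "I've sent today's menu to your phone.").response

sb = SkillBuilder()
sb.add_request_handler(LaunchHandler())
sb.add_request_handler(LunchIntentHandler())
lambda_handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda
```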
At some point, as Mark Rolston of argodesign has observed, users are changed in the process of interacting with these objects and spaces. Rolston believes that our digital selves will evolve into entities of their own — what he calls our “meta me,” a combination of both the real and the digital you. In the future Rolston sees our individual meta me’s as being more than just data, but actually negotiating, transacting, organizing, and speaking on our behalf.
And while this is an interesting new concept for our personal identity, what is most compelling is using all of this information and knowledge to get decision support on who we are and what we want. The ability of these cognitive, connected applications to help us make decisions in our lives is huge. What we’re moving toward is creating always-there digital companions to help with our everyday needs. Imagine a future in which AI starts to act as you, making the same decisions you would make.
As this future unfolds, we’re going to begin to act more like nodes in a network than simply users. We’ll have our own role in asking questions of the devices and objects around us, telling them to shut off, turn on, or help us with tasks; gesturing or touching them to initiate some new action. We’ll still call upon our smartphones and personal computers, but we won’t be as tethered to them as our primary interfaces.
We’ll begin to call on these enchanted devices, using them for specific tasks or even in concert together. When you ask Amazon’s Echo a simple question like “what’s for lunch?” you won’t be read a lengthy menu from your favorite restaurant. Instead, your phone will vibrate letting you know it has the menu pulled up for you to scroll through and decide what to eat. Like the talking candlestick and teapot in Beauty and The Beast, IoT is going to awaken a new group of smart, interconnected devices that will forever change how we interact with our world.
By Royal Frasier, Gryphon Agency for Gigaom Change 2016

Review: SmartDraw Helps to Tame Wild IoT Networks

Comprehending the intricacies of the emerging IoT world takes more than looking at a static Visio diagram; it takes a tool designed to deal with both virtual and physical devices, with the ability to visualize those complex interconnections dynamically.

Hands on with Zuli’s smartplug

I wanted to review the Zuli smartplug, a device that makes a compelling argument for the Internet of Things without requiring consumers to set up a bunch of sensors, connect a central hub to their router, and tinker with an app’s complex settings. However, my house was determined to make it nigh impossible for me to really use the thing (more on that in a moment).
Still, there’s plenty of reason to call some attention to Zuli, which today announced that the device will debut in Lowe’s retail stores across the U.S. on September 28, and that it has forged a partnership with the Google-owned Nest Labs. Zuli’s smartplug is pretty simple: just stick the device into an electric socket, connect to it via Bluetooth, then use the mobile application to gain limited control over dumb (sorry, non-connected) household utilities plugged into its side. It took me about 10 minutes to get a few of the smartplugs up and running in my home.
Taylor Umphreys, the company’s chief executive, tells me that Zuli was designed to bring the Internet of Things to people who can’t hook up other connected devices. I’ve written in the past about the struggle to use “smart” devices while renting — how many renters can rewire the thermostats in their apartments? — and these plugs are supposed to give people living in those situations a taste of the connected life.
“Lots of renters move around a lot, and they don’t have the ability to install appliances,” Umphreys says. FiveThirtyEight supports that claim with a report indicating that the average American will move about 11 times over a lifetime. It’s hard to imagine them bringing connected devices with them during each move.
Zuli is supposed to tackle another issue: plug-in appliances that can’t be controlled unless their owner is right next to them. People who own their homes can install wall-mounted switches for their lights or put in ceiling fans; renters are stuck with lamps controlled by an itty-bitty switch and standalone fans.
The smartplug is supposed to make those products easier to use from afar. Appliances plugged into it can be scheduled to turn on or off at specific times, and the plug can control the amount of electricity delivered to a device, basically acting as a dimmer switch for lamps that previously would have been “on” or “off” with nothing in between.
But the main draw is a feature called “Presence,” which tailors a plug’s settings based on its owner’s proximity. If you’re in your living room, the lamp there might turn on; leave for the kitchen, and the kitchen light will come on while the living-room lamp switches off. It could very well represent the epitome of smartphone-enabled laziness.
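The sketch below is a purely hypothetical illustration of the kind of proximity-driven switching Presence performs. Zuli hasn’t published its algorithm, so this simply treats the plug that sees the strongest Bluetooth signal from the owner’s phone as the occupied room.

```python
# Hypothetical proximity logic: treat the plug with the strongest BLE signal
# from the owner's phone as the occupied room, and switch lamps accordingly.
# This is an illustration, not Zuli's actual Presence implementation.

def occupied_room(rssi_by_plug):
    """rssi_by_plug: dBm readings per plug, e.g. {'living_room': -52, ...}.
    Higher (less negative) RSSI means the phone is closer to that plug."""
    return max(rssi_by_plug, key=rssi_by_plug.get)

def desired_states(rssi_by_plug):
    here = occupied_room(rssi_by_plug)
    return {plug: (plug == here) for plug in rssi_by_plug}

readings = {"living_room": -48, "kitchen": -71, "bedroom": -80}
print(desired_states(readings))
# {'living_room': True, 'kitchen': False, 'bedroom': False}
```

A single signal reading is a noisy proxy for location, which is presumably part of why Presence needs three plugs: multiple reference points make it far easier to judge which room the phone is actually in.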
That potential is better realized for owners of the Nest smart thermostat. Zuli has partnered with the device’s maker to offer control directly from its own app, giving smartplug owners a way to control both devices without having to switch apps. Nest can also be controlled with Presence, making it easy to fiddle with a room’s heating.
Presence does have some limitations. It requires three smartplugs to function, which is why Lowe’s will sell the device in single packs and multipacks, priced at $60 and $160, respectively. The mobile app also supports just one person at a time, which means multiple people can’t tailor Presence to their own preferences until Zuli ships an update that Umphreys tells me is coming in the near future.

Easy to use, but not without issues

Now, about my problems with Zuli. I’ll preface this by saying that it probably won’t affect everyone, but I suspect more than a few people will encounter the same issue: I didn’t have enough outlets, or enough appliances I wanted to control with Zuli’s mobile application, to use the three smartplugs I was supposed to review.
Plugging In
I rent an older home. Instead of the grounded three-prong outlets that help prevent people from being electrocuted, I have old two-prong outlets. This means I can’t use Zuli for many things. And where I was able to find three-prong outlets, I didn’t see any appliance nearby that I thought would really benefit from the device.
So I set up one device to automatically turn on the fan in my dog’s room at a certain time and turn it off early in the morning. This wasn’t a critical function (I’ve never forgotten to turn on the fan without a smartplug’s assistance), but I figured it would be a small thing I could use to test the plug’s capabilities. And it worked! For a few days I heard the fan turn on from the other room with nary a finger lifted.
But that was about the only use I got out of the product. All of the other fans are controlled with a wall switch or plugged into two-prong outlets. My lights are the same way. I couldn’t think of anything else to use the smartplugs on, so I pulled the units from my walls and stuffed them back into their package to be mailed back.
Umphreys was sympathetic to my problems. When I spoke to him about it, however, he pointed out that many people are unlikely to have the same issue. I think it will be more of a problem than he might expect, but I’m willing to concede that most consumers interested in the Internet of Things will have better luck.
He also pointed out another use for the smartplugs: Giving control over appliances to people who can’t, or simply don’t want to, get up and turn them on manually. See, when we spoke last week, Umphreys had just broken his arm over the weekend. He told me that he used the smartplugs set up in his home to control his lights, fans, and other appliances when the painkillers made it hard for him to do so himself.
That’s an interesting fringe case. Might people who struggle with daily activities benefit from something like this? Then again, couldn’t they get similar use out of a clapper that illuminates their homes whenever they clap their hands? I’m not sure, but I could see how both options might come in handy for some people.
All of which leaves me with this pseudo-review of the Zuli smartplug. Should people buy it? If they have an idea of what they would use some of them for, and are willing to spend $160 for its most interesting feature, I think it could be worth a shot. It will be interesting to see if the company can partner up with other connected device makers, too, and give consumers more control from a single mobile application.
Zuli’s smartplug is well-designed and easy to use, and I understand the thinking behind it. In the end, though, the device tries so hard to appeal to people renting modern apartments that it proved all-but-useless to me in my older home. For me it’s not any more useful than other “smart” products. I won’t miss it when it’s gone.

Aether adds multi-room functionality to its Cone speaker

San Francisco-based IoT startup Aether Things has beefed up its Aether Cone speaker with some additional features: Cone users can now play music on multiple speakers at the same time, Sonos-style, and a firmware update is also bringing Bluetooth connectivity to the Cone.

The $400 speaker is supposed to learn from your listening habits and compile music on the fly. So far, though, it doesn’t look as if many people have actually bought a Cone. The company’s Android app, which isn’t required to use the Cone but helps with set-up and control, has been downloaded between 50 and 100 times, according to Google Play.

Aether is competing in an increasingly crowded market. On the loudspeaker front, there’s the industry leader Sonos, but also connected speakers from LG, Sony, Denon and others. But as a smart connected device that can be controlled with voice commands, the Cone also competes with Amazon’s Echo, which offers a lot of additional functionality.

Evrythng partners with Gooee in LED internet of things push

The internet-of-things identity management outfit Evrythng, which partnered up with Samsung not long ago, has struck another strategic partnership deal – this time with a new LED lighting player called Gooee.

Gooee, which emerged last year out of lighting technology firm Aurora, provides sensors that detect motion, CO2 and other phenomena and can be integrated into new LED lighting products. It also sells assembled light engines (LEDs integrated with electronic control gear) and the mechanisms for controlling them.

Evrythng handles identity and authentication for smart devices, to make it easier for people and systems to interact with them and analyze their output – it’s keen on calling itself the “Facebook for things”. Together, the British firms intend to create the “operating system for smart connected lighting.”

As Gooee technology chief Simon Coombes explained to me, the idea is to “be the ‘Intel inside’ of the smart lighting market”, with Gooee handling the hardware aspect at industrial scale and Evrythng’s cloud gluing, ahem, everything together. Gooee drew in Evrythng as a partner after doing some data modelling and realizing how much data it would ultimately be trying to wrangle.

The lights will be able to individually report back on their own operating condition (temperature, power consumption, voltage and current), and to assess, communicate and react to changes in their environment, either via sensors embedded in the luminaire itself or installed nearby.
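As a purely illustrative sketch (the field names and topic are my assumptions; neither Gooee nor Evrythng has published a schema here), a luminaire’s status report might look something like this:

```python
# Hypothetical luminaire telemetry payload; field names and topic are illustrative.
import json
import time

def status_report(fixture_id, temperature_c, power_w, voltage_v, current_a):
    return {
        "fixture_id": fixture_id,
        "timestamp": int(time.time()),
        "operating_condition": {
            "temperature_c": temperature_c,
            "power_w": power_w,
            "voltage_v": voltage_v,
            "current_a": current_a,
        },
    }

payload = json.dumps(status_report("lum-0042", 41.5, 9.8, 230.1, 0.043))
# e.g. published to a (hypothetical) topic such as "building/floor3/lum-0042/status"
print(payload)
```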

According to Coombes and Evrythng co-founder Andy Hobsbawm, lighting provides an ideal opportunity for the internet of things due to its ubiquity. Buildings of all kinds are full of lights, and if those lights can be smartened up with sensors for surveillance and safety and what-have-you at (Gooee promises) “marginally increased cost”, then that’s far simpler than installing standalone sensor systems.