AI for an Eye

While they predate the warm and fuzzy moniker that is “wearables,” contact lenses are one of the more common pieces of technology applied to the body today. But, unlike most other commercial wearable devices, corrective contact lenses have not been particularly sexy. They help us see as well as we should see, and then their job ends. Or, at least, that’s where their job has ended historically—yes, like everything from cars to shoes to refrigerators, the contact lens is about to get “smart.”
The innovation of smart contact lenses is moving in a few different directions. One notable and noble pursuit is health monitoring. Alphabet’s Verily (formerly Google Life Sciences) is doing work here, with a lens that monitors blood glucose levels for diabetics via tears, and startup Medella Health recently secured $1.4 million for its competing product. Meanwhile, Swiss-based Sensimed AG has received FDA approval for a lens that tests eye pressure for glaucoma patients. Unlike traditional glaucoma tests, Sensimed AG’s Triggerfish makes it possible to monitor eye pressure over a 24-hour period, including sleep, for a more accurate assessment of a glaucoma patient’s risk of vision loss.
The contact lens has an advantage in the health monitoring space—at least compared to other wearables that didn’t originate as medical devices and don’t connect so intimately with the body—but this isn’t the only future for the smart lens. New opportunities for vision correction are coming, as evidenced by Google and Novartis on one team and EPGL on another; both are developing autofocus lenses in an effort to correct farsightedness. And, of course, there are a number of other innovations in the works that will appeal to our sci-fi’d imaginations, like Ocumetrics, a company that has reportedly created a lens that improves vision to 3x better than 20/20. While its “bionic” lens, technically a surgical implant, has drawn some skepticism from the medical community, it generated a fair amount of buzz on social media. (And understandably so; for those of us who spent childhoods watching The Six Million Dollar Man, the wait for bionic vision has been long and grueling.)
Meanwhile, Samsung recently filed a patent for a smart lens which, according to SamMobile, “shows a contact lens equipped with a tiny display, a camera, an antenna, and several sensors that detect movement and the most basic form of input using your eyes: blinking.” This is foundational user interface stuff that sets us up for interactions akin to Google Glass, but right there on the eyeball. It points to a future where recording our experiences becomes incidental, and where video games, augmented reality and virtual reality can be experienced without the need for bulky equipment. In this way, tech becomes more discreet—a point of interest to marketers who have relied on the showmanship of early adopters, because how do you fuel word of mouth for “invisible” technology?
More provocative, however, is the potential for change in human behavior as the boundaries between our bodies and information continue to dissolve. If you think there’s no need to lock facts, figures, and trivia into memory because your smartphone is in your pocket today, wait until you can blink your way through IMDb. And how does human interaction shift when FaceTime happens in your face, when we have the power to conduct background checks on the fly? (“Hello, it’s great to meet you and…um…are you browsing my Facebook page right now?”) Given the pace of innovation today, it’s not difficult to imagine a world where the smart lens gives a lawyer or student a steroid-like advantage on the intellectual playing field, or where a quick lens check becomes the norm before, say, the National Spelling Bee—all before the next time you need to renew your driver’s license.
As all the world’s information migrates from our fingertips to our eyes, the next logical step is to introduce some level of processing of that information—artificial intelligence—into the lens. Progress here depends a lot on computer vision, the same technology that helps a self-driving car distinguish between a traffic light and a man wearing a green hat. Computer vision is one of the more intensive areas of innovation today—Slate published an interesting piece on the challenges—and naturally there are a number of innovators tackling it, including a Russian developer that has created an open source computer vision platform in collaboration with both Google and Facebook. It’s also likely that the large volume of data acquired by the first wave of camera-like smart lenses will play a meaningful role in advancing computer vision. In other words, smart lens wearers will effectively be teaching computers how to see.
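For readers curious what rudimentary computer vision looks like in practice, here is a minimal sketch using the open source OpenCV library (a well-known platform, though not necessarily the one referenced above). It assumes OpenCV’s Python bindings are installed, and the image filename is a placeholder; the example simply runs one of the library’s bundled, pretrained face detectors over a photo.

```python
# A minimal computer-vision sketch using OpenCV's Python bindings (pip install opencv-python).
# It loads a pretrained face detector that ships with the library and outlines any faces found.
# "street_scene.jpg" is a placeholder filename; substitute any local photo.
import cv2

# One of OpenCV's bundled Haar cascade models for frontal faces
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("street_scene.jpg")           # read the image from disk
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # detection runs on grayscale

# Find candidate faces; the parameters trade detection sensitivity against false positives
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw a box around each face

cv2.imwrite("street_scene_annotated.jpg", image)
print(f"Detected {len(faces)} face(s)")
```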
But we’re not cyborgs (eyeborgs?) yet. Even as humans get more comfortable with the idea of body hacking via objects like radio frequency ID chips, there’s still something that makes us squeamish about putting technology right there in the eye. (Cue A Christmas Story: “You’ll shoot your eye out.”) Innovations in miniaturization, like ETH Zurich’s ultra-thin circuit—50 times thinner than a human hair—help address these concerns.
There is also the powering of smart lens technology to consider—how do these things get their juice? Google’s glucose-monitoring lens would be powered by a “reader” device, such as a piece of clothing or a headband that sits near the lens. Google also has a patent for a solar-powered contact lens, while Sony’s recent patent includes “…sensors [that] would convert the movements of the eye into energy to power the lens.”
With these and other patents and products in the works today, it’s clear that both the reach and the role of the contact lens are on the brink of transformation. From vision correction and enhancement to health monitoring, from entertainment to data capture and processing, the range of applications for smart lenses is vast, and it sets the stage for a behavioral shift on par with—if not more substantial than—what we’ve seen with the mobile device. While we’re not quite there yet, it’s a good time to start thinking about the implications—if recent advances in technology have taught us anything, it’s that big changes can happen in the blink of an eye.

Does Blockchain hold the key to the distributed patient data dilemma?

By now most readers have probably heard of blockchain through tech blogs and major cover stories from the likes of The Economist over the past year. The financial sector has rapidly accelerated its engagement with blockchain through a growing number of consortia and fintech startup initiatives. As the foundation for bitcoin, blockchain’s distributed, cryptographic ledger provides a novel data structure and capabilities that could offer a wide range of benefits beyond existing technologies over the coming decade.
The discourse on blockchain is exploding, as are the critiques. But many of us can’t help but feel that blockchain, in some ever-evolving form, is here to stay and is likely to become the next layer of the internet, one that dramatically improves the security of the data flowing through our transactional economy. Quite simply, we need blockchain’s cryptographic security and distributed data structure to deal with the wealth of data coming from the citizen end of the spectrum.
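For readers new to the mechanics, the toy Python sketch below illustrates the core of that data structure: an append-only chain of blocks, each committing to its predecessor through a cryptographic hash, so that any edit to history becomes detectable. It is a single-node illustration with made-up records; real blockchains add distribution, consensus and much more.

```python
# Toy sketch of blockchain's core data structure: an append-only, hash-linked chain of blocks.
# Single-node and simplified; real blockchains add distribution, consensus and incentives.
import hashlib
import json
import time


def make_block(records, previous_hash):
    """Bundle records with a timestamp and the previous block's hash, then hash the result."""
    block = {"timestamp": time.time(), "records": records, "previous_hash": previous_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block


def chain_is_valid(chain):
    """Check every hash link; a rewritten block breaks the links that follow it."""
    return all(curr["previous_hash"] == prev["hash"] for prev, curr in zip(chain, chain[1:]))


genesis = make_block(["genesis"], previous_hash="0" * 64)
block_1 = make_block(["patient consented to study X"], previous_hash=genesis["hash"])
block_2 = make_block(["glucose reading shared with clinic Y"], previous_hash=block_1["hash"])
chain = [genesis, block_1, block_2]

print(chain_is_valid(chain))                # True

block_1["records"] = ["consent revoked"]    # tamper with history...
block_1["hash"] = "attacker-recomputed"     # ...even rewriting the block's own hash
print(chain_is_valid(chain))                # False: block_2 no longer points at the edited block
```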
Not least in the healthcare sector, where patient data is spread across an increasingly fragmented set of repositories. Healthcare’s interoperability challenge may only grow worse over the medium term as data from beyond electronic health records (EHRs), generated by wearables, smartphone apps and sensors in the home, becomes more mainstream.
We see a number of bottlenecks arising out of this inability to integrate non-EHR data into records and turn it into actionable intelligence for clinicians. This partially accounts for the lack of stickiness of most wearables: the data collected is locked in apps and fails to provide actionable feedback to those who need it most.
A great deal of health data is locked in silos and under-utilized, both in the diagnostic process and more broadly in medical research. Blockchain, thanks to its distributed and traceable nature, is one of several solutions that will only grow in importance.
Meanwhile, healthcare has seen an epic number of data security breaches over the past year, including entire hospitals taken hostage by ransomware. With blockchain we may get a twofer: giving patients more control over whom they share data with, in clinical research for example, while also maintaining higher levels of security.
Blockchain’s smart contract capabilities might also enable sharing economies for medical technology such as MRI machines: expensive machinery that sometimes sits idle and that, combined with the IoT, could support new business models around scheduling and local options for consumers.
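As a purely conceptual illustration (written in plain Python rather than an on-chain contract language, with hypothetical names, slots and prices), the sketch below shows the kind of scheduling-and-escrow logic such a smart contract might encode for shared scanner time.

```python
# Conceptual sketch of scheduling-plus-escrow logic for shared MRI time.
# Plain Python, not an on-chain contract language; names, slots and prices are hypothetical.
class MRISlotContract:
    def __init__(self, owner, slot, price):
        self.owner = owner        # the clinic that owns the otherwise-idle scanner
        self.slot = slot          # e.g. "2016-07-01T14:00"
        self.price = price
        self.booked_by = None
        self.escrow = 0

    def book(self, requester, payment):
        """Lock the slot and hold payment in escrow until the scan is confirmed."""
        if self.booked_by is not None:
            raise ValueError("slot already booked")
        if payment < self.price:
            raise ValueError("insufficient payment")
        self.booked_by = requester
        self.escrow = payment

    def confirm_scan_completed(self):
        """On confirmation (in practice a signed attestation), release escrow to the owner."""
        released, self.escrow = self.escrow, 0
        return {"pay_to": self.owner, "amount": released}


contract = MRISlotContract(owner="RegionalHospital", slot="2016-07-01T14:00", price=400)
contract.book(requester="LocalImagingClinic", payment=400)
print(contract.confirm_scan_completed())   # {'pay_to': 'RegionalHospital', 'amount': 400}
```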
Blockchain has also recently been used to help fund novel HIV research. UBS, the bank, donated code to Finclusion Systems for a platform that will launch HealBond, a “smart bond” intended to deploy $10B more efficiently to fund research into HIV cures.
As healthcare slowly enters the API economy beyond siloed EHRs, we may eventually see a post-EHR world based on distributed databases and more patient-centric controls. Blockchain will likely play a major supporting role in this gradual transition, one that values data liquidity over data capture and patient-centric over vendor-centric solutions, in contrast to our current health IT ecosystem.
This will be good news for consumers and those interested in wellness, but it won’t happen overnight. We may also need to approach blockchain with an openness that hasn’t typically greeted “the new” in technology in the past. Play and experimentation will be needed to change entrenched ways.

Moving beyond digital signage in the workplace

At first glance, digital signage in the workplace makes perfect sense. We’re all familiar with big screens in airports, hotel foyers and sports stadiums, so, yes, why not see similar kit in business environments, for example in headquarters lobbies, or indeed in coffee rooms?
Yes of course, the concept can find plenty of use. Corporate visitors can be offered videos, presentations and data. Employees can more easily be briefed and can share their own content, such as the latest inter-departmental soccer results. Touch screens and kiosks also have a place, for campus navigation and training information dissemination.
The benefits are pretty straightforward — not only can content be updated faster and printing costs saved, but increased staff motivation and wellbeing, improved health and safety knowledge, and higher productivity have also been cited. As hardware costs come down, business cases become more evident (though of course, remember to account for the overheads of managing real-time content).
So, what’s the problem? Let’s take a look. Ultimately, such a signage-centric, “Let’s take what is working over there and deploy it over here” mindset is missing a trick. It’s worth reviewing a number of other areas where screen use is prevalent, and seeing these as input to the decision process.
First, the smart screen has come a long way. Back in the early nineties, I can remember de-boxing a whiteboard with a built-in printer; since then such devices have become giant input and output panels. The pioneer in the field is education: from sharing an office with a manufacturer of interactive whiteboards for this sector, I’ve seen just how much of a difference such technologies can make in classrooms.
Most importantly, it’s not about the screen but the environment. Consider, for example, the ability to create information on a tablet computer, then share it onto a screen for somebody else to edit. Imagine being able to do this on a wall screen in the meeting room, with input from people in another office.
It should be straightforward, and both schools and home tutors are using such capabilities all the time, but they have yet to make an impact in the workplace.
Second, there’s ‘telepresence’, the term coined by Cisco (for technology also offered by HP) to describe the immersive impact of seeing full-sized people on screens in a videoconference. Apart from issues with eye movement (addressable through software), it’s as if they are in the room.
When they were launched a few years ago, such technologies were too costly for all but head office installations. With today’s network bandwidth and with screen costs having plummeted, the ‘telepresence’ notion has a much broader appeal.
And third, we can look to transactional areas of the business for best practice in terms of screen use. ‘Starship Enterprise’-style centralised network and equipment management hubs have plenty to offer in terms of what should be visible on the big screen, and how it should be presented relative to individualised views on smaller desktops.
Similarly, call centres and sales environments make extensive use of screens. From these parts of the organisation we can learn not only the options available, but also how to strike a balance between operational efficiency and keeping staff motivated.
Learning from these areas, the bottom line is that digital signage is only part of the opportunity offered by either passive or interactive screens. Direct information sharing, collaboration, workflow management, employee feedback, resource scheduling and booking, training, brainstorming and team building are just a few of the areas a deployment can address.
Perhaps, yes, a quick win is to deploy some screens for the purpose of disseminating information. But, as some sectors are already discovering — such as non-obtrusive up-selling in the hospitality sector — active interaction yields new opportunities for enablement and empowerment, beyond passive information sharing.
So, it’s worth thinking outside the box, and treating screens as a viewport onto a shared data set which can also be accessed via other devices. What starts as digital signage becomes a series of windows onto a brave new world, raising a set of questions, not least in terms of type, size and location, that should be answered before any deployment.

Why is securing the Internet of Things so difficult?

It’s inevitable, isn’t it, that the security industry should be all over the Internet of Things. If you’re feeling like you’ve heard it all before, you probably have. Top of the list of topics is that the ‘things’ themselves are going to be insecure. They’re running operating systems and software, neither of which may have been designed with security in mind.
The consequence is a massive increase in what security pros know as the ‘attack surface’, that is, the scope of stuff that can be targeted by malicious hackers, fraudsters or other nondescripts. The resulting challenge is very real, particularly given the personal nature of information being captured — from heart rates to locations — and its potential for misuse.
In the spirit of a brainstorm, let’s make an assumption however: that there is nothing we can do about it. The genie is well and truly out of the bottle, let us say, and our every movement and behaviour can and will be logged for personal, commercial and governmental purposes. While we may benefit, we also may need to live with the security risks.
This ultra-transparent scenario may not become the case, but even if it doesn’t, there will be situations that make it seem that way. What is more, the devices that we rely upon will inevitably become both smarter, and more susceptible to attack. We need to face up to our complicity in this: who thinks about data security before buying a fitness device, for example?
By taking such risks as read, we can bank them and move on to other areas of concern. The above covers data in its most granular sense: facts about individuals, or login details, are a risk in themselves. But there’s a deeper level of risk — that the data itself is open to manipulation.
For sure, insurers may refuse to cover an individual whose fitness device shows the occasional heart flutter. But what if the data stream itself is modified, through malice or through incompetence, such that numerous heart rates incorrectly indicate a flutter?
Some have speculated about the potential to modify agricultural data as a way of manipulating futures markets. Equally, a home automation company could rig your systems so it made more money — for example, turning on the heating for 29 seconds extra every day. Not a figure to register on one thermostat, but one that would ring up a large amount of small change.
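To see how quickly those 29 seconds add up, here is a back-of-the-envelope calculation; the boiler output, energy price and number of homes are purely illustrative assumptions, not figures from any vendor.

```python
# Back-of-the-envelope: 29 extra seconds of heating per day, aggregated across many homes.
# All figures are illustrative assumptions.
extra_seconds_per_day = 29
boiler_power_kw = 24        # assumed boiler output while running
price_per_kwh = 0.05        # assumed energy price, in dollars
homes = 1_000_000           # assumed installed base

extra_hours_per_year = extra_seconds_per_day * 365 / 3600      # about 2.9 hours per home
extra_kwh_per_home = extra_hours_per_year * boiler_power_kw    # about 71 kWh per home
extra_cost_per_home = extra_kwh_per_home * price_per_kwh       # about $3.50 per home

print(f"Per home: roughly ${extra_cost_per_home:.2f} per year")
print(f"Across {homes:,} homes: roughly ${extra_cost_per_home * homes:,.0f} per year")
```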
So, not only do we need mechanisms to protect the confidentiality of our data, based on the same assumption that the bad thing is reasonably likely to happen, we also need to consider how to prove that the data is valid.
One possibility is to link every single sensor reading to a security key, but the phrase ‘sledgehammer and nut’ springs to mind. Equally, a solution at that scale would be too costly to be achievable.
Is there an answer? Yes indeed, and it lies in taking a leaf from the work of the Jericho Forum, that body of Chief Information Security Officers founded in 2002 and disbanded a decade later, when the group deemed its work on ‘de-perimeterisation’ to be complete. Complete? Really? How could information security ever be complete?
The CISOs realised that they needed to manage data wherever it was, rather than trying to keep it in one place — and to do so, they needed a way to identify who, or what, was creating or accessing it. In November 2010 they announced the Identity and Access Management Commandments, a set of design principles that technologies need to adopt.
This finding — that identity needs to be present — is profound. A corollary principle has been adopted by Google in its BeyondCorp initiative for its internal systems, which treats networks as insecure and instead enables data access based on being able to identify the device, and the person, making the access request.
We could take this insight one step further: data which cannot prove its provenance (i.e. that it comes from an identifiable person or device) could, or even should, be treated as invalid. The notion of security by design is a start, but perhaps it will only be through identity by design that we can architect the Internet of Things to be both transparent and trusted.
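As a minimal sketch of what ‘identity by design’ might look like at the data level, the example below uses Python’s standard-library hmac module: an enrolled device signs each reading with its own identity key, and the receiving service treats anything that fails verification as invalid. The device IDs and keys are invented for illustration; a real deployment would use proper key management and likely asymmetric signatures.

```python
# Minimal sketch: tie each sensor reading to a device identity so its provenance can be verified.
# Standard library only; device IDs and keys are invented for illustration.
import hashlib
import hmac
import json

# In practice keys would sit in a secure element or key-management service, not a dict.
DEVICE_KEYS = {"thermostat-042": b"per-device-secret-issued-at-enrolment"}


def sign_reading(device_id, reading):
    """Device side: attach an HMAC over the reading, keyed by the device's identity key."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).hexdigest()
    return {"device_id": device_id, "reading": reading, "tag": tag}


def accept_reading(message):
    """Service side: reject any reading whose provenance cannot be verified."""
    key = DEVICE_KEYS.get(message["device_id"])
    if key is None:
        return False                                    # unknown device: no identity, no trust
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])


msg = sign_reading("thermostat-042", {"temperature_c": 21.4, "time": "2016-05-01T09:00:00Z"})
print(accept_reading(msg))                              # True

msg["reading"]["temperature_c"] = 35.0                  # tampered in transit
print(accept_reading(msg))                              # False: treated as invalid
```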

The next opportunity for wearable technologies: aesthetics

Many of us have seen both the hype around wearables and the growing number of critiques of that hype. But one thing is clear: what we see in the market now is just the beginning, a warm-up band for the main act to follow.
In my previous post I discussed the problem of sustainable use of tracking devices and how consumers typically abandon them within months. But is the battle for the wrist and smartwatches really the future of wearable technology? Why the wrist, and why do products designed for the wrist and marketed on their aesthetics, such as the Fitbit Alta, fail to impress from a design perspective?
Furthermore, could user experience be wrapped up with aesthetics, and could this be an important factor even for medical devices? Of course it could. We need only go back nearly a decade to find examples of how aesthetics were used to rethink wearable technology. It might be time to revisit the past to see the future.
Nearly a decade ago, diabetes blogger Amy Tenderich wrote a post bemoaning the fact that diabetics needed their own Steve Jobs to redesign the insulin pump. The device worn by many diabetics to manage insulin levels was viewed as a clunky medical device devoid of any aesthetic considerations. Functionality trumped aesthetics. But sick people, or those struggling with chronic diseases and/or aging, do care about aesthetics, especially if they wear a device on the body.
A design firm in San Francisco discovered the blog post and within a short time re-designed an insulin pump that could make diabetics feel better about wearing the device. We hear a lot about patient engagement these days, and in this context aesthetics mean a lot.
Devices are not solely about data, and data is not the only dimension of disease or wellness; these can become aspects of our identities. To illustrate the case, a similar design effort was sponsored by the UK Design Council over a decade ago to rethink the hearing aid and create “hear ware”.
At the time, hearing aids were viewed as stigmatizing and associated with the aging body. Introducing an aesthetic component helped designers to re-imagine hearing devices well beyond the hearing aid, to address hearing challenges we all face, such as being in a noisy restaurant or exposed to noise pollution. The competition, exhibited at the Victoria and Albert Museum, featured devices resembling jewelry with a wider range of functions.
Now enter Amanda Parkes, a New York-based technologist and designer with a PhD from MIT’s Media Lab. Notice, as well, the location: New York, the heart of high fashion in the US. Famous for her invocation, “Let Silicon Valley have the wrist, I’ll take the body,” Amanda is deep into re-imagining wearables from the perspectives of both fashion and materials design. From smart fibers to fiber batteries and bio-materials, she is rethinking the whole concept of the wearable in terms of both aesthetics and materials. Wearables, meet Bauhaus design principles.
When we look at what is going on in the labs these days, with sensors in the form of tattoos that can capture ever richer biometric readings, we need to begin thinking about the body as an interface. Many of these sensors will be invisible. They may be connected to your mobile money application as well.
When the novelty of wearing a shrunken iPhone on the wrist wears off, there is much more work to be done from an innovation standpoint. Parkes makes the case for diversity, as many in the tech sector do these days, but also for rethinking form, function and appearance.
Perhaps in no other sector will diversity in design, across age, gender, ethnicity, you name it, matter more; aesthetics follows broader cultural norms and trends. And this matters in healthcare too. I’m betting the next generation of market leaders in this sector will grasp this, and in doing so will find themselves pushing on an open door. Aesthetics matters as much for the afflicted as for the well, if not more so.
 
Interested in learning about the evolution of wearables health technology? Check out this infographic produced by the Washington Post.

Insights from VR World Congress: prioritise storytelling over individual skills

As I left the aspirationally named VR World Congress in Bristol, England (“We just thought, ‘Let’s go crazy,’” event founder Ben Trewhella told me of the 750-delegate event that started as a meetup), I found myself puzzling over a number of questions.
Whether VR is going to explode as a technology platform, extending way beyond its gaming origins, was not among them. The number of potential use cases — enabling surgeons to conduct operations in the ‘presence’ of thousands of students, or architectural walkthroughs of new building designs — left me in no doubt.
Equally, I have a firmer idea of timescales. While displays and platforms may have passed a threshold of acceptability, they are still evolving. The consensus was that we now have at least a year of lead time during which hardware will improve along three dimensions: latency, frame rate and pixel density, said Frank Vitz, Creative Director at CryEngine.
In the meantime, software and content providers are discovering how to make the most of it all. But what new skills and capabilities need to be learned? The answer is not so straightforward, it transpires, as many of them (3D graphics, animation, behavioural design, data integration) are already available.
Less straightforward is understanding how this palette of skills should be integrated. In mobile and web development, for example, User Experience (UX) is a hot topic. Makes sense — the best apps are those which do right by the user, offering potentially complex functionality and services in simple, accessible ways.
Virtual Reality adds extra dimensions (quite literally) to the notion of experience. Not only is the environment immersive but it is also non-linear. Whereas most web sites and indeed, mobile apps tend to operate on a tree-walk basis (where you drop down a menu level then go ‘back’ to the main menu when done), VR removes this constraint.
From a construction perspective, this changes the game. A mobile or web team might have a UX guy, an adjunct who can add a layer of gaily coloured iconography to an app, as UX is just one thing to get right. In VR however, the experience — VX if you will — is everything, and needs to sit at the centre of the project.
As a consequence, many of my discussions at #VRWC were less about individual skills, and more about how to build the right skills mix into tight, multidisciplinary teams that can make the most of what VR has to offer. “You can’t just put out any old content and hope it will do well,” said Ben Trewhella.
“Unless you are delivering an enhanced service, then what is the point?” concurred Rick Chapman, high tech sector specialist at Invest Bristol & Bath, who used the evolution of 3D techniques in film as an illustration. “The first 3D films used 3D as a gimmick. Avatar, whatever you think of its plot, was conceived and filmed for 3D.”
Delivering VR-first experiences is a real, and potentially new, skill. The idea that VR is about storytelling came up repeatedly: it appears that holding someone’s attention in an immersive environment comes down to telling a good story, and anecdotal evidence suggested that those working at the leading edge of VR are also the better storytellers.
This takes the conversation beyond base skills to how they should be harnessed. “Yes, you need the right mix of capabilities, but you also need empathy, you need rapport, you need to understand charisma,” said Rick. “Consider — language is a capability, but with charisma and rapport you don’t need to be so reliant on verbal acuity.”
This is not simply a message for design agencies, gaming companies and animation studios. If VR is to become mainstream, larger companies keen to engage better with their customers, from retailers to manufacturers, need also to welcome VR into the core of their customer engagement strategies.
This means considering the impacts on the relationship between IT, marketing, sales and service and indeed, HR and recruitment. Getting the virtual experience right may become as much a symptom of an organisation’s depth of understanding of its audiences and how they want to engage as a cause of any resulting business value.

Why adopt a mobile-first development strategy?

“We think mobile first,” stated Macy’s chief financial officer Karen Hoguet, in a recent earnings call with financial analysts.
A quick glance at the US department store chain’s 2015 financial results explains why mobile technologies might be occupying minds and getting top priority there. Sales made by shoppers over mobile devices were a definite bright spot in an otherwise disappointing year for the company. Mobile revenues more than doubled, in fact, thanks to big increases in the number of shoppers using smartphones and tablets not only to browse, but also to buy.
So it’s no surprise that Macy’s hopes to maintain this trend, by continuing to improve the mobile experience it offers. In the year ahead, Hoguet explained, this ‘mobile first’ mindset will see Macy’s add new filters to search capabilities, clean up interfaces and fast-track the purchase process for mobile audiences.
Other consumer-focused organisations are thinking the same way, and the phrase ‘mobile first’ has become something of a mantra for many. One of its earliest high-profile mentions came way back in 2010, in a keynote given by Eric Schmidt, the then Google CEO (and now Alphabet executive chairman), at Mobile World Congress in Barcelona.
“We understand that the new rule is ‘mobile first’,” he told attendees. “Mobile first in everything. Mobile first in terms of applications. Mobile first in terms of the way people use things.”
The trouble is that, for in-house development teams, a mobile-first strategy still represents something of a diversion from standard practice. They’re more accustomed to developing ‘full size’ websites for PCs and laptops first, and then shrinking these down to fit the size, navigation and processing-power limitations posed by mobile devices.
The risk here is that what they end up with looks like exactly what it is: a watered-down afterthought, packing a much weaker punch than its designed-for-desktop parent.
A development team that has adopted a mobile-first strategy, by contrast, will start by developing a site for mobile that looks good and works well on small form factors, and then ‘work their way up’ to larger devices, adding extra content and functions as they go.
That approach will make more and more sense as more ‘smart’ devices come online and the desktop PC becomes an increasingly minor character in our day-to-day lives. Take wearables, for example: many CIOs believe that headsets, wrist-mounted devices and the like hold the key to providing workers with relevant, contextual information as and when they need it, whether they’re up a ladder in a warehouse or driving a delivery van.
Developing apps for these types of devices presents many of the same challenges associated with smartphones and tablets: minimal screen real estate, limited processing power and the need to integrate with third-party plug-ins and back-end corporate systems. Then there’s the lack of a standardised platform for wearables to consider, meaning that developers may be required to adapt their mobile app to run on numerous different devices. For many, it may be better to get that hard work out of the way at the very start of a project.
In a recent survey of over 1,000 mobile developers conducted by InMobi, only 6% of respondents said they had created apps for wearables, but 32% believe they’re likely to do so in future.
The same rules apply to a broader category of meters and gadgets that make up the Internet of Things, from meters for measuring gas flow in a utilities network, to products for ‘smart homes’, such as the Canary home-monitoring device, to virtual reality headsets, such as Samsung’s Gear VR, as worn by attendees at Facebook CEO Mark Zuckerberg’s keynote at this year’s MWC.
As the population of ‘alternative’ computing devices grows, developers will begin with a lean, mean mobile app, which functions well despite the constraints of the platform on which it runs, having made all the tough decisions about content and function upfront. Then, having exercised some discipline and restraint, they’ll get all the fun of building on top of it, to create a richer experience for desktop devices.
More importantly, they’ll be building for the devices that consumers more regularly turn to when they want to be informed, entertained or make a purchase. In the US, digital media time (or in other words, Internet usage) on mobile is now significantly higher at 51% than on desktop (42%), according to last year’s Global Internet Trends Report by Mary Meeker of Silicon Valley-based venture capital firm Kleiner Perkins Caufield & Byers (KPCB).
In other words, developers should go mobile first, because that’s what we consumers increasingly do.
 
Picture Credit: Farzad Nazifi

Enterprise Security in a mobile-first world does not stop at the device

Mobile technology has come incredibly far over the past few years, with smartphones and tablet computers going from being a luxury to becoming an essential tool for the enterprise, fundamentally changing how we do business. This transformation provided the backdrop for Gigaom’s recent webinar, Evolving Enterprise Security for the Mobile-First World.
As we discussed, mobile security has lagged behind the way we now use such devices, which have moved significantly beyond email to offering a wide range of complex functions. At least until now, as considerable strides have been made to secure the data and services that can now be delivered.
But security does not stop at the device. Speaking to Sam Phillips, Samsung USA’s Chief Information Security Officer and VP of Samsung Business Services for Enterprise Security, it became clear that while the features now available provide a robust foundation for mobile security, they should not be treated in isolation.
We are working in a mobile-first world. Many organisations see mobile technologies as a source of innovation, of new business models and growth. Quite rightly so, as they hold a great deal of promise for both externally facing and operational tasks, for suppliers and customers, for support engineers and warehouse managers.
The theme ‘security as a business enabler’ has evolved from its past, internally-facing beginnings. Where good security once reduced risk, enabling better business by freeing sales representatives or supporting home workers, today security has become an essential part of the trust all stakeholders place in an organisation.
In the platform economy, where a company is perceived through the services it offers online and via mobile, trust becomes a most important asset.  Lose this trust and you lose business, as many organisations have found out to their cost.
As a result, we discussed, mobile security becomes both a strategic business imperative and tangible opportunity. This is recognised across the business, but adoption can sometimes happen without the basics being in place. “Business adoption of Mobile has gotten ahead of CISOs and the CIOs they work with,” says Sam.
The mobile security mechanisms now available on devices, coupled with the services available from a variety of partners, enable mobile services to be locked down or opened up, depending on data to be transferred and apps to be installed.
While this gives organisations the flexibility to create secure mobile solutions to fit their own needs, it emphasises a key element of any mobile strategy. “Businesses need to decide what they really want to do with mobile,” explains Sam. With no one-size-fits-all approach, organisations first need to understand what the data and apps are meant to achieve.
This means more than aligning mobile strategy with business strategy. In reality, mobile becomes a fundamental element of the business models required in today’s increasingly digital world, and therefore needs to be treated as such.
Understanding this, and getting the basics right in terms of securing the mobile assets of the business, implementing a secure development life cycle for apps and so on, ensures mobile can become a powerful platform for business model innovation and enterprise transformation. The alternative means building a castle on sand.
It remains to be seen how far mobile will go in the next five years, but it will only become a more embedded element of our business and personal lives. As it moves from an extension of the enterprise to an intrinsic part, getting the right pieces in place now will stand a firm in good stead for the future.

Is logistics heading towards its final destination?

In this digitally enabled world, it’s easy to be distracted by the wealth of technology now available. Mobile apps and devices are transforming the nature of logistics, for example, with devices now being used to track consignments, plan and monitor routes. Features from identifying gas station locations to directly updating customers are becoming part and parcel of the logistical journey.
It would be trite to suggest that such capabilities are inadequate, as they quite clearly bring a great deal of value. For example, a local business with which I am familiar has improved visibility of its delivery team, to the extent that it can respond better to customer complaints. On one occasion, mobile technology proved that a driver was driving under the speed limit through a village, when a complainant said he was not.
At the same time however, the features mobile apps currently provide are largely aimed at the logistical process itself – as if it existed in isolation from what was being delivered, and more importantly, why. The transportation of goods has traditionally been considered independently, largely because it is so manually intensive and, in itself, complicated.
As technology advances, however, this more isolated nature of logistics is changing. A simple, yet profound example is the click-and-collect service, in which a retailer offers delivery of an online purchase to a location of choice – a shop or an affiliated delivery point. It’s a great idea, making for a significantly improved customer experience — if it works. If it does not, the dream can quickly turn into a nightmare.
The key to success (or indeed, failure) is the ability to transmit clear information between the two most important components in the chain: the consignment and its planned recipient — this could be a retail customer or, in the B2B case of a spares management chain, a field engineer.
Mobile devices remain a very important element, as they offer a window onto the logistical world. For this window to operate effectively however, back-end systems and tools (such as inventory systems, maintenance schedules and sales databases) need to exchange information in a fashion that appears seamless. No room exists for doubt, when it comes to whether a delivery has taken place.
This may sound obvious, but it is not yet always the case. In one, anecdotal click-and-collect example, a customer went to a store to pick up a package, only to find it had already been returned due to a mismatch of delivery records. On another occasion, an order was cancelled as the product was no longer available, without updating the customer – who only found out when they arrived to collect it.
Logistics simply cannot afford to make errors such as these, as they jeopardise its very rationale. On the upside, get things right and a number of new opportunities emerge — not least to differentiate the business in both B2C and B2B markets, but also to extend product ranges and squeeze out that all-important operational efficiency.
The threat is that incumbent logistics organisations may not have forever to get things right. Consider Uber: while it and its competitors have significantly disrupted taxi and private hire services, the company’s valuation is based on its potential as a platform for all forms of physical delivery.
“Uber isn’t valued at more than $50 billion because it’s a ‘taxi app’,” explains Adrian Gonzalez, president of supply chain consulting firm Adelante SCM, but because “investors see Uber as a logistics company.”
Despite such topics as 3D printing, self-driving vehicles and flying drones threatening to impact the delivery and receipt of products, these remain early days for logistics – and furthermore, the data integration points with other systems will remain the same, however a delivery takes place.
So, this is certainly not the time to paint any disaster scenario. Rather, it is the right moment to get the basics of offering an integrated service right, in the knowledge that whatever comes in the future, it will only grow in importance.
 

New Open Connectivity Foundation combines Open Interconnect Consortium and AllSeen Alliance

Update 21 February 2016 — I received an email from Sophie Sleck of Blanc & Otis:

OCF is not unifying OIC and AllSeen, this is not a merger of two groups. The technology leaders who have been specifying software protocols for the Internet of Things announced they are now working together to form a new entity. OCF is the successor to OIC; its initiatives are about solidifying and trying to reduce fragmentation in the industry.

Also, Meredith Solberg of the Linux Foundation wrote to correct me:

I represent AllSeen Alliance and just wanted to reach out in response to your article with a correction that there has been no merger with AllSeen Alliance. AllSeen is not combining with OIC to form OCF and we remain a separate organization. We do, however, have some overlap with members in common. If you could please issue a correction and include a correction note to your Twitter followers, we’d greatly appreciate it! Want to remain transparent and share accurate info.

So, OCF is the successor to OIC, and we will have to wait for the cooperation between OCF and AllSeen to lead to yet another organization/consortium/foundation/whozis.


A new milestone in the maturation of the Internet of Things has been reached: two contending organizations — the Open Interconnect Consortium (backed by Intel and others) and the AllSeen Alliance (backed by Qualcomm and others) — are merging to form the Open Connectivity Foundation. (See correction above.)
This is a big step, and one that may help break the logjam in the market. After all, consumers are justifiably concerned about making a bet in home automation — for example — if they are unsure about how various devices may or may not interoperate.
Aaron Tilley points out that IoT has seemed to be, so far, all hat and no cattle:

In some ways, the Internet of Things still feels like empty tech jargon. It’s hard to lump all these different, disparate things together and talk about them in a meaningful way. Maybe once all these things really begin talking to each other, the term will be more appropriate. But for now, there is still a mess in the number of standards out there in the Internet of Things. People have frequently compared it to the VHS-Betamax videotape format war of the 1980s.

The VHS-Betamax format war was not solved by standardization; it was solved by the VHS vendors making a devil’s bargain with porn companies. The OCF may be more like the creation of the SQL standard, where a number of slightly different implementations of relational database technology decided to standardize on the intersection of the various products, and that led corporations to invest when before they had been stalling.
The consortium includes — besides Intel and Qualcomm — ARRIS, CableLabs, Cisco, Electrolux, GE Digital, Samsung, and Microsoft.
Terry Myerson, Executive Vice President of the Windows and Devices Group at Microsoft, announced the company’s participation in the creation of the OCF and spelled out Microsoft’s plans:

We have helped lead the formation of the OCF because we believe deeply in its vision and the potential an open standard can deliver. Despite the opportunity and promise of IoT to connect devices in the home or in businesses, competition between various open standards and closed company protocols have slowed adoption and innovation. […]
Windows 10 devices will natively interoperate with the new OCF standard, making it easy for Windows to discover, communicate, and orchestrate multiple IoT devices in the home, in business, and beyond. The OCF standards will also be fully compatible with the 200 million Windows 10 devices that are “designed for AllSeen” today.
We are designing Windows 10 to be the ideal OS platform for Things, and the Azure IoT platform to be the best cloud companion for Things, and for both of them to interoperate with all Things.

Microsoft was late to the party on mobile, but Nadella’s leadership seems to be all about getting in early on other emerging technologies, like IoT, machine learning, and modern productivity.
Noticeably absent are the other Internet giants: Apple, Amazon, and Google. When will they get on board?