Can dialog-based interfaces be the next Braille?

One byproduct of technological progress that’s often overlooked is the way in which new technologies serve to better integrate people with disabilities into society at large. The first electronic hearing aid was developed on the heels of the telephone; the advent of mechanized technology allowed for the invention and mass production of Braille cards. And the wheelchair, of course, followed the invention of the wheel, whenever that happened.

Advances like these have often meant the difference between participation and isolation for the disabled, allowing people to compensate for impairments that otherwise would have severely hindered them.

I believe we’re in the midst of another such advance: the spread of dialog-based interfaces that rely on natural language understanding. For most of us, this technology means greater convenience. But for people who live with any of several disabilities – including impaired use of the hands, limited mobility, or dyslexia – it could have a transformative effect.

Siri, take a memo

It wasn’t too long ago that people who had difficulty using their hands to write were forced to make onerous adaptations in order to do so. Those who could control their head and neck muscles, for instance, would be encouraged to learn to write by holding a pen in their mouth, or by tapping a keyboard with a device attached to the head.

The commercial availability of dictation software, which first hit the market in the late 1990s, held out hope for an easier solution. But it’s only recently that these systems have become reliable enough for those with disabilities to really be able to count on them.

Now, more adaptations of dialog-based interfaces promise to push the technology deeper into the lives of the disabled. Certain computer software programs extend natural language voice recognition beyond word processing, enabling users to control other applications – such as web browsers and email programs – with vocal commands.

Virtual assistants like Siri allow users to take advantage of the full range of functionality in mobile devices without using their hands, and some have even begun to let users power their devices on and off with a voice command, making the entire experience hands-free. For people who are unable to use their hands to turn on a device, this is no small advance.

Bringing speech commands into the physical world

Before long, we can expect natural language understanding to leap the bounds of the device world and seep into the built environment. Several companies are working to embed this capability into appliances and even entire environments, allowing us to interact with our cars and living rooms through natural speech (a personification of objects that is interesting in its own right).

These new dialog-based interfaces will make it easier for all of us to accomplish routine tasks, but they will provide a particular advantage to those with impaired mobility – enabling them to turn on the lights without walking to the light switch, or even place a call to order dinner without picking up a phone.

A UI advance that helps kids learn to read

Natural language understanding also has potentially transformative applications in the classroom. Children with dyslexia often fall behind in school, in part because they tend to take longer than their peers to read exam questions and compose written responses. Giving students access to these kinds of speech recognition and language understanding tools can help level the playing field by enabling them to listen to questions and speak their answers, rather than take a paper test.

As natural language understanding and the dialog-based interfaces it supports have passed through their adolescence, so to speak, it has often been left up to tinkerers to adapt the technology to help people cope with disabilities. Check out this 2010 article abstract, for instance, which details a system that a quadriplegic Mississippi man named Doug Maples created to turn on his lights and computer using voice commands. Mr. Maples’ ingenuity and inventiveness are remarkable, but the fact that he had to go to such great lengths to adapt his environment to respond to voice commands reminds us of the high bar faced by those with disabilities who seek similar accommodations.

As some of the more transformative applications of voice recognition technology make their way to the market, however, it should be much easier for a broad range of people with disabilities to access them. The fact that these applications appeal to such a wide swath of the population at large will only speed their implementation and keep costs down – a big plus for those who really need them.

Ilya Gelfenbeyn is the CEO of Speaktoit, a company that develops natural language virtual assistants for Android, iOS, and Windows Phone.