In the past five years, touchscreens have transformed the way we use mobile devices, and voice input could well be the next transitional technology. Adding natural language understanding (NLU) and voice recognition isn’t something every app developer can easily do, however. That spells opportunity for a company like OneTok, which offers a voice-interpretation platform designed to make it easy for devs.
OneTok — one of our Launchpad finalists at this week’s Mobilize event — launched in 2011, and made its platform available to mobile app developers on Wednesday. The company pairs native support for iOS(s aapl), Android(s goog), and BlackBerry 10(s rimm) with libraries for third-party developer frameworks such as Appcelerator Titanium and PhoneGap. Although developers enable voice input with OneTok at the client level, the service is actually cloud based: audio processing and NLU analysis run on servers, with the results quickly shot back to the mobile device.
OneTok CEO Ben Lilienthal noted that voice recognition products such as Apple’s Siri have raised consumer awareness of NLU, but argued that Siri is a limited solution that leaves the opportunity ripe for others:
“With the new Siri, users can open apps with their voices, but it stops there. Siri gets users to developers’ doorsteps, but without OneTok as the voice enabler, they can’t come in. OneTok enables them to talk and actually interact with and use their apps through simple voice commands.”
This type of solution makes sense, provided app developers can predict and enable the voice phrases users will most want to use. The subscription-based service offers a helpful Top 10 package of the most commonly used phrases, such as “Post to Facebook,” “Share via email or Twitter,” and “Zoom in and out.” But to bring truly useful voice interpretation to apps, solutions like this will have to grow with the complexity of the apps themselves.
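To picture how a phrase package like this connects to an app, a developer can think of it as a table mapping recognized phrases to app callbacks. The sketch below is a hypothetical illustration in Python — the phrase strings and handler names are assumptions for the sake of the example, not OneTok’s actual API, and a real cloud NLU service would return structured intents rather than raw strings:

```python
# Hypothetical sketch: dispatching recognized voice phrases to app actions.
# Phrases and handlers are illustrative, not OneTok's actual API.

def post_to_facebook():
    # Placeholder for the app's real sharing logic.
    return "posted"

def zoom(direction):
    # Placeholder for the app's real zoom logic.
    return f"zoomed {direction}"

# A "Top 10"-style table of common phrases mapped to app callbacks.
COMMANDS = {
    "post to facebook": post_to_facebook,
    "zoom in": lambda: zoom("in"),
    "zoom out": lambda: zoom("out"),
}

def handle_phrase(phrase):
    """Normalize a recognized phrase and dispatch the matching action."""
    action = COMMANDS.get(phrase.strip().lower())
    return action() if action else None

print(handle_phrase("Zoom in"))  # zoomed in
```

The point of the sketch is the division of labor the article describes: the hard part — turning audio into a normalized phrase — happens in the cloud, while the developer only wires phrases to the actions their app already supports.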