Beyond Verbal snags $1M more for emotion-decoding voice recognition software

You know the expression ‘it’s not what you say, but how you say it’? Well, a new Israel-based startup is building an entire company on the promise that its software can determine a person’s emotional state just by analyzing the sound of their voice.
Building on patents from nearly 20 years of research in physics and neuropsychology, Beyond Verbal launched in May of this year with $2.8 million. On Tuesday, the startup said it had raised an additional $1 million from Israel-based startup investment fund Winnovation to fund research and development as well as business development.
“We understand a speaker’s transient mood, attitude towards subjects and emotional decision-making characteristics in real time. Not from the words that people use but their vocal modulations,” said Dan Emodi, the company’s vice president of marketing and strategic accounts.
Thanks to Apple’s Siri, interest in voice recognition technology is on the upswing. But while several companies offer software that understands and responds to voice commands, Beyond Verbal wants to lead the way with technology that understands voice and emotion.
By listening to just 10 seconds of a person speaking, the company says it can analyze the patterns of high and low intonations to determine several emotional dimensions. For example, in an analysis of a clip of President Barack Obama speaking in a debate against Republican presidential nominee Mitt Romney, the software detected primary emotions of “practicality, anger and great strength,” with underlying hints of “provocation, cynicism and ridicule.” The company says it’s 81 percent accurate for phonetic languages and 75 percent accurate for tonal languages like Mandarin Chinese and Vietnamese.
Interestingly, instead of pursuing a specific set of applications, Beyond Verbal offers its software as a cloud-based licensed service. By connecting through its API and SDK, other companies can use the technology for a variety of purposes in a range of fields. For example, it could be used in a phone service for the hearing impaired that doesn’t just turn speech into text but also provides the emotional context of the conversation. Or it could be put to work monitoring the speech of airline pilots and alerting them when they lose focus, or listening to customer service calls to help representatives respond to grumpy customers.
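To make that integration model concrete, here is a minimal, hypothetical sketch of what a client consuming such a cloud emotion-analysis API might do with a response. Beyond Verbal has not published its schema; the endpoint shape, field names (`emotions`, `label`, `score`), and sample values below are illustrative assumptions, not the company’s actual API.

```python
import json

# Hypothetical JSON response from a cloud emotion-analysis service after
# uploading a ~10-second audio clip. The schema and values are invented
# for illustration only.
SAMPLE_RESPONSE = json.dumps({
    "duration_sec": 10,
    "emotions": [
        {"label": "practicality", "score": 0.81},
        {"label": "anger", "score": 0.74},
        {"label": "cynicism", "score": 0.32},
    ],
})

def top_emotions(response_json, threshold=0.5):
    """Return emotion labels whose score meets the threshold, strongest first."""
    data = json.loads(response_json)
    hits = [e for e in data["emotions"] if e["score"] >= threshold]
    hits.sort(key=lambda e: e["score"], reverse=True)
    return [e["label"] for e in hits]

print(top_emotions(SAMPLE_RESPONSE))  # ['practicality', 'anger']
```

A customer-service dashboard, for instance, could call a function like this on each incoming call’s analysis and flag conversations where “anger” crosses the threshold.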
The idea that we could outsource something as fundamentally human as emotion-decoding to machines could rub some the wrong way. But it’s part of an emerging movement focused on creating emotionally aware software. And, as we’ve covered before, it may have some interesting and valuable implications for education, where emotion-sensing artificial intelligence software could help provide learning experiences most appropriate to a student’s needs.
Image by Syda Productions via Shutterstock.