At IDF this week, Intel showed off a demo of Nuance’s Dragon Assist software running on an Ultrabook. Dragon Assist, currently in beta, is a Siri-like approach to PC voice control. Using natural words and phrases you can ask the computer to do a number of tasks. The demo that Intel showed on stage was quite impressive — it was fast, accurate, and potentially quite useful. But would you use it?
I say Dragon Assist is “Siri-like” because it lacks the distinct personality of the famous Apple creation. Rather than presenting itself as an entity the way Siri does, Dragon Assist simply does what you ask of it without talking back.
Along with touch, NFC, and gesture control, Intel is pushing voice control as a more natural way to interact with one’s computer. Here’s Intel showing off Dragon Assist running on an Ultrabook at IDF this week:
Assuming that this was a live demo, I’m quite impressed. Not only was Dragon Assist deadly accurate, it was quick too. This speed helps Intel highlight the abilities of its Core processor line, which will be even snappier when the next iteration, Haswell, launches in 2013.
Siri relies on a data connection to function; when you talk to Siri, your voice is recorded and sent off to a remote server for processing. Once the powerful remote server understands what you’ve asked, the command is sent back to your phone and executed. So in reality, Siri lives in the cloud, not on your iPhone. This presents a number of issues, like when you don’t have a fast data connection or when something goes wrong on the send/receive side (say you are transitioning from Wi-Fi to cellular data). This can make the Siri experience quite frustrating. Many times when I use Siri, I just end up bothered because I could have done the same thing faster had I done it myself from the get-go.
Dragon Assist, on the other hand, appears to use fully local processing, meaning you can use it online or off and it’ll still be fast and won’t succumb to network hiccups. Achieving this requires a synthesis of components. First, you need a powerful processor, which Intel has down pat. Second, you need a high-quality microphone. In the demo, the presenter was wearing a microphone — in reality, most home users would simply want to use the integrated mics in their Ultrabook, so manufacturers will likely need high-quality mic arrays to achieve the same level of performance we saw in the demo. Third, you need intelligent algorithms for voice analysis, a field in which Nuance is one of the foremost experts.
Even if a system were able to achieve 100% accuracy (and that hasn’t happened yet), would you use voice control? I use Siri on my iPhone from time to time, but almost never in public. I use Siri at home, but even there I won’t when people are around. Why? Well, there seems to be something silly about talking to a computer. Apple is onto something when it comes to making Siri act and respond like a person would, but it isn’t there yet — I still feel dumb talking to a machine in front of other people, much the same way that I’d feel dumb waving my arms around in front of Microsoft’s awful Kinect.
What do you think, folks? Is speech control a worthwhile endeavor? Do you hesitate to use it when people are around?