While voice recognition has been available on desktops for over a decade, it’s found a new life on mobile devices, where it powers things like Apple’s Siri virtual assistant and Google Now. But to make that platform leap, tech companies have had to rely on cloud processing, which adds significant delays and makes voice commands unavailable without an Internet connection.
Exactly. And this is the problem with all pure cloud computing efforts. You’re entirely at the mercy of whatever bandwidth may (or often may not) be available. My local Whole Foods sports a coffee shop with “free wifi.” The best download speed I’ve ever been able to manage there is about 1 Mbps. Once upon a time I’d have been thrilled with a speed like that, but no longer. In today’s increasingly cloud-oriented (storage, computing, file transfer) world, that’s not bandwidth; that’s the equivalent of two tin cans connected by a string.
The big problem has always been local processing power and storage. Both are improving by leaps and bounds. I’ve got Nuance’s Dragon NaturallySpeaking, a full-featured desktop voice recognition system, set up on my 10-inch tablet. It works fine. And it’s not dependent on a lifeline to the cloud.
And this is why I don’t think the pure cloud plays will last, long term. They are merely an interim step on the way to a life in which everybody carries around on (or in) their person all the computing power and storage they need to do whatever they want to do.