**UPDATE of sadness:** http://www.idownloadblog.com/2011/11/14/siri-mind-control-hack-fake/
Since the advent of the iPhone’s multitouch interface, I’ve been fascinated by technologies that lower the barrier between an operator’s thoughts and the actual result within the computer (or device). Before the advent of Siri, I hypothesized that speech input would soon come to replace touch, mainly because touch requires us to shift our visual focus from whatever we’re trying to accomplish in the real world down to the screen of our device.
But speech removes that hurdle, enabling a whole new kind of multitasking. Using Siri, I can now place calls, perform web searches (and have the resulting webpages actually read back to me out loud), play any selection of my music, and much more, simply by raising the device to my head and speaking naturally. The best places to do this? While driving, or cooking – activities I don’t really want to be using a touch interface for (one because of safety, the other because my fingers are usually covered with olive oil and/or garlic) but which generally provide many opportunities for effective multitasking. (“Siri, how many tablespoons are there in three cups?”).
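For the record, that last question is simple arithmetic: a US cup holds 16 tablespoons, so three cups is 48. A throwaway sketch of the conversion (my own toy helper, not anything Siri actually runs):

```python
# US customary units: 1 cup = 16 tablespoons.
TBSP_PER_CUP = 16

def cups_to_tablespoons(cups):
    """Convert US cups to tablespoons."""
    return cups * TBSP_PER_CUP

print(cups_to_tablespoons(3))  # 48
```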
But of course, speech input systems have for years had to be “trained” for days, if not weeks, ahead of time, and essentially amount to matching the specific sounds within words to your vocal patterns, a method that is both inaccurate and slow.
But as this video demonstrates, we’re not too far off from a world where both touch and speech input could be replaced by a direct “thought” interface.
The advantages of that should be obvious: imagine a world with no keyboards or mice, where your Xbox has no 18-button controller, where composing a letter is as simple as thinking it.
The seamless integration of human thought and computing power will obviate much, if not all, of our current clunky interface systems, and bring the world one step closer to “Snow Crash”.