This is absolutely blowing my mind. 3 cheers, Disney Research!
Computer interfaces with NO input screen or device? Using my body as a touchscreen? Sounds too amazing to be true. Except, of course, for the inevitable second coming of the ‘cellphones cause cancer’ backlash.
I’m surprised they saved the best for last, though. A TV that turns on as soon as I sit on the couch? This is truly exciting stuff. Everyone in America would want this killer feature. (Everyone that hasn’t accidentally locked themselves out of their house by closing the door too hard, that is.)
Since the advent of the iPhone’s multitouch interface, I’ve been fascinated by technologies that lower the barrier between an operator’s thoughts and the actual result within the computer (or device). Before the advent of Siri, I hypothesized that speech input would soon come to replace touch, mainly because touch requires us to shift our visual focus from whatever we’re trying to accomplish in the real world down to the screen of our device.
But speech removes that hurdle, enabling a whole new kind of multitasking. Using Siri, I can now place calls, perform web searches (and have the resulting webpages actually read back to me out loud), play any selection of my music, and much more, simply by raising the device to my head and speaking naturally. The best places to do this? While driving, or cooking – activities I don’t really want to be using a touch interface for (one for safety, the other because my fingers are usually covered in olive oil and/or garlic) but which generally provide many opportunities for effective multitasking. (“Siri, how many tablespoons are there in three cups?”)
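The kitchen question above is just unit arithmetic: in US customary units, 1 cup is 16 tablespoons, so 3 cups is 48. A trivial sketch of that conversion (function name is mine, not Siri's):

```python
TBSP_PER_CUP = 16  # US customary units: 1 cup = 16 tablespoons

def cups_to_tablespoons(cups):
    """Convert US cups to US tablespoons."""
    return cups * TBSP_PER_CUP

print(cups_to_tablespoons(3))  # 3 cups = 48 tablespoons
```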
But of course, speech input systems have for years had to be “trained” for days, if not weeks, ahead of time, essentially matching the specific sounds within words to your vocal patterns – a method that is both inaccurate and slow.
But as this video demonstrates, we’re not too far off from a world where both touch and speech input could be replaced by a direct “thought” interface.
The advantages of that should be obvious: imagine a world with no keyboards or mice, where your Xbox has no 18-button controller, where composing a letter is as simple as thinking it.
The seamless integration of human thought and computing power will obviate much, if not all, of our current clunky interface systems, and bring the world one step closer to “Snow Crash”.
Google wants to be an integral part of your quest for knowledge. Or put another way, Google wants to be a part of how you think. To that end it rolled out a slew of new tools today designed to more or less encourage you to embed it in your brain.
Planetary was launched by San Francisco startup Bloom Studio earlier this month. The company calls it “the first of a new type of visual discovery app” and promises more such apps in the coming months. Bloom plans to use this type of visualization to “let you explore and participate in social networks, video streaming services, and location-based applications in a whole new way!”
What’s different about Planetary is that it doesn’t depend on traditional software controls and design patterns – such as a play button, scrolling down a list of tracks, even flipping through album covers. Instead, the app is controlled by the data visualizations.
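To make the idea concrete, here is a minimal sketch (all names hypothetical, not Bloom's actual API) of what "the visualization is the control" means: each rendered object carries the data it represents, and interacting with the object is itself the command, with no separate play button or track list.

```python
from dataclasses import dataclass, field

@dataclass
class Planet:
    """A rendered object that doubles as the control for its track."""
    track: str
    orbit_radius: float  # purely visual property

@dataclass
class StarSystem:
    """An artist rendered as a star, orbited by its tracks."""
    artist: str
    planets: list = field(default_factory=list)

    def on_tap(self, planet: Planet) -> str:
        # Tapping a planet IS the play command — the visualization
        # object and the control are one and the same.
        return f"now playing: {planet.track} by {self.artist}"

system = StarSystem("Some Artist", [Planet("Track One", 1.0)])
print(system.on_tap(system.planets[0]))
```

The design point is that no widget exists apart from the data: remove the visualization and you have removed the interface.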
“The thing I consider significant about that remarkable piece of Bloom software is that it uses information visualization as a new breed of control interface. That’s not just fancy re-skinning of the same old music-machine pushbuttons. That whole graphic shebang is generated in real-time on the fly. And you can run code with that, play music, do media with it! An advance like that is important.” (emphasis ours)
A Wired review of the app notes that it turns a data set – in this case music – into “tactile and dynamic visual objects.”
Imagine those same techniques being used for data from social networking, location, media and real-world objects (the Internet of Things). That’s an intriguing development and I’m curious to see what other apps Bloom releases over the course of this year.
It’s not just about Minority Report and Hackers. At their core, all major OSes still function on interfaces developed in the time of DOS and command line prompts. Sure, the Mac’s flashy interface is intuitive, but you can’t say it leverages recent technology and data interface techniques very creatively.
We’ve seen that data portability has reshaped the Internet. Now it’s time to reshape how we interact with our PCs.
Google Profiles has a new user interface that emphasizes the profile photo, includes many new sections and uses encrypted connections. You can now click a section of your profile to edit it, and add “10 words that describe you best”, bragging rights, relationship information, structured details about your education and employment, and a scrapbook of your favorite photos. Another change is that you can now hide the Google Buzz tab from your profile.