A few commenters have taken me to task for drooling over the “multitouch” UI demo. My gut reaction is twofold: I want a huge display (covering 150 degrees or so, with high-density pixels, of course) and I want direct manipulation. Although I’m pretty sure I’m right about the former, I could very well be wrong about the latter. A cautionary tale:
From about a year before it was publicly announced until recently, I spent a good deal of time programming the Tablet PC. I did this primarily because I was a huge enthusiast for pen-based UIs. However, when you use a Tablet PC regularly and, especially, when you try to integrate pen-based components with regular UI elements, what you learn is that UIs have co-evolved with the keyboard and mouse. There’s an old UI canard that “everyone agrees that mice are faster than keyboards: except the stopwatch.” Similarly, I adore writing longhand with a pen, but it is unworkably slow as an input method, and there is no word-processing software that is pen-specific. Not even OneNote works as an actual pen-based word processor, and while I admire the technical achievement of InkGestures, it doesn’t make longhanding into Word appropriate.
More subtly, virtually everything about the modern UI — clicking, the size of icons, rectangular movements “through” menus, etc. — works better with mice and the movements that come naturally from the wrist. Pens get their precision from the fingertips and use more of the arm, and, of course, I don’t think there’s anyone who can actually draw better with a mouse than with a pen. But I know from experience that a pen is not nearly as transformative within Photoshop as you’d think — again, the workflow co-evolved with keyboard and mouse.
All of which is to say that I have a history of being mistaken about the long-term effectiveness of touch-based UIs.