The Future Of Interfaces Is Multi-Touch
I read about Jeff Han’s work on multi-touch screens only a few days ago, but seeing the technology demonstrated live suggests that a transformative shift in UI experiences may be unfolding.
The multi-touch screen is essentially a touch screen that responds to simultaneous touches at multiple points on its surface. This enables users to form more intuitive, expressive and sophisticated gestures, and it also provides an inherently multi-user interface, since different parts of the screen can be used by different users at once. The touch surface itself is based on the total internal reflection properties of the glass above the screen: a finger pressed against the surface frustrates the reflection at that point, making each individual touch detectable.
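As a rough illustration of how such a sensing surface might be read, here is a minimal sketch in Python. It assumes the camera behind the screen delivers a 2D brightness grid in which touches show up as bright blobs; the function name, grid format and threshold are all illustrative assumptions, not details of Han’s actual system.

```python
# Hypothetical sketch: turn an FTIR-style camera frame into touch points.
# Bright pixels are thresholded, grouped into connected blobs via flood
# fill, and each blob is reported by its centroid.

def find_touches(frame, threshold=0.5):
    """Return a list of (row, col) centroids, one per bright blob."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected blob of bright pixels.
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in blob) / len(blob)
                cx = sum(p[1] for p in blob) / len(blob)
                touches.append((cy, cx))
    return touches
```

Because every blob is tracked independently, two fingers (or two users) simply produce two centroids in the returned list.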
Han indicates that the social, intuitive nature of the multi-touch screen takes computing away from the historic WIMP direction of interfaces and opens computing culture to children and older users. Certainly, the playful, unintimidating nature of the technology appears to lower the usual barriers of complexity faced by those unfamiliar with computing environments.
Han’s demonstrations included:
- Navigation of a 3D globe using gestures for panning and zooming.
- TV channels displayed as individual scalable ‘tiles’ and windows.
- A scalable, translucent, virtual keyboard.
- A sock puppet, animated through gesture, created by drawing simple strokes on the screen.
- Puzzle games in which users solved puzzles in different parts of the screen in a time trial.
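The panning and zooming gestures in the globe demo fall out of some simple geometry once two simultaneous touch points are available. The sketch below is my own illustration of that idea, not Han’s code: the zoom factor is the ratio of the distances between the two fingers, and the pan is the movement of their midpoint.

```python
import math

def pinch_update(p0_old, p1_old, p0_new, p1_new):
    """Zoom factor and pan vector implied by two moving touch points.

    Each argument is an (x, y) tuple: the old and new positions of
    finger 0 and finger 1.  (Illustrative sketch only.)
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def mid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    # Fingers spreading apart -> zoom > 1; pinching together -> zoom < 1.
    zoom = dist(p0_new, p1_new) / dist(p0_old, p1_old)
    # The midpoint's displacement gives the pan.
    m_old, m_new = mid(p0_old, p1_old), mid(p0_new, p1_new)
    pan = (m_new[0] - m_old[0], m_new[1] - m_old[1])
    return zoom, pan
```

For example, if one finger stays at the origin while the other moves from (2, 0) to (4, 0), the view zooms by a factor of two and pans by half the spread, with no separate mode switch between panning and zooming.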
Han’s technology seems eminently useful for highly interactive, manipulable visualisations of complex data, but it’s not difficult to imagine versions of Photoshop, fabrication control interfaces, 3D design and sculpting applications being crafted from the same technology.