Frontiers in User-Computer Interaction

In this age of (near-)adequate computing power, the power and usability of the user interface are as key to an application’s success as its functionality. Most of the code in modern desktop productivity applications resides in the user interface. But despite its centrality, the user-interface field is currently in a rut: the WIMP GUI (Windows, Icons, Menus, Point-and-click, based on keyboard and mouse) has evolved little since it was pioneered by Xerox PARC in the early ’70s.

Computer and display form factors will change dramatically in the near future, and new kinds of interaction devices will soon become available. Desktop environments will be enriched not only with PDAs such as the Newton and Palm Pilot, but also with wearable computers and large-screen displays produced by new projection technology, including office-based immersive virtual reality environments. On the input side, we will finally have speech-recognition and force-feedback devices. Thus we can look forward to user interfaces that are dramatically more powerful and better matched to human sensory capabilities than those dependent solely on keyboard and mouse.

3D interaction widgets controlled by mice or other devices with three or more degrees of freedom are a natural evolution from their two-dimensional WIMP counterparts, and they can decrease the cognitive distance between widget and task for the many tasks that are intrinsically 3D, such as scientific visualization and mechanical CAD (MCAD). More radical post-WIMP UIs are needed for immersive virtual reality, where keyboard and mouse are absent. Immersive VR provides good driving applications for developing post-WIMP UIs based on multimodal interaction, which engages more of our senses by combining gesture, speech, and haptics.
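To make the degrees-of-freedom point concrete, here is a minimal, purely illustrative sketch in Python; the names (TranslateWidget3D, drag_3dof, drag_mouse) are hypothetical and do not come from any particular toolkit. It shows how a device with three or more degrees of freedom can drive a 3D translation widget in a single gesture, whereas a 2-DOF mouse must decompose the same edit into axis-constrained drags, which is one source of the extra cognitive distance between widget and task.

    from dataclasses import dataclass

    @dataclass
    class Vec3:
        x: float = 0.0
        y: float = 0.0
        z: float = 0.0

        def add(self, other: "Vec3") -> "Vec3":
            return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    class TranslateWidget3D:
        """Hypothetical 3D translation widget attached to a scene object."""

        def __init__(self, position: Vec3) -> None:
            self.position = position

        def drag_3dof(self, delta: Vec3) -> None:
            # 3-DOF (or higher) device: one sampled motion delta is the edit.
            self.position = self.position.add(delta)

        def drag_mouse(self, axis: str, amount: float) -> None:
            # 2-DOF mouse: the user first picks an axis handle ('x', 'y', or 'z'),
            # then drags along it; a full 3D move takes several such drags.
            self.position = self.position.add(Vec3(**{axis: amount}))

    if __name__ == "__main__":
        w = TranslateWidget3D(Vec3())
        w.drag_3dof(Vec3(0.1, 0.2, -0.05))              # one gesture with a 3-DOF tracker
        for axis, amount in (("x", 0.1), ("y", 0.2), ("z", -0.05)):
            w.drag_mouse(axis, amount)                  # three mode-switched mouse drags
        print(w.position)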