ABSTRACT

The use of 3D visualization is accelerating in many application domains. As the complexity of 3D data increases, stereoscopic display often provides better insight for domain experts as well as ordinary users. Usually, interaction with the 3D data is decoupled from its visualization, because manipulation of stereoscopic content is still a challenging task: the 3D data is visualized stereoscopically, whereas interaction is performed via 2D graphical user interfaces. Although such interscopic interaction between stereoscopic and monoscopic content is of major interest in many application domains, it has not been sufficiently investigated. One example of this decoupling occurs in presentation scenarios, where a desktop or handheld computer is used to input parameters that alter a visualization presented to a larger user group on a stereoscopic projection screen. Recently emerging multi-touch interfaces promise an alternative approach to this challenge. While multi-touch has proven useful for 2D interfaces by providing more natural and intuitive interaction, it has not been examined whether and how these concepts can be extended to 3D multi-touch interfaces, in particular in combination with stereoscopic display. In this paper we discuss the potentials, the limitations, and possible solutions for interacting with interscopic data via multi-touch interfaces.

1. BACKGROUND AND MOTIVATION

In recent years virtual environments (VEs) have become more and more popular and widespread due to the requirements of numerous application areas. Two-dimensional desktop systems are often limited in cases where natural interfaces are desired. In these cases virtual reality (VR) systems using tracking technologies and stereoscopic projections of three-dimensional synthetic worlds support a better exploration of complex data sets. Although the costs as well as the effort to acquire and maintain VR systems have decreased to a moderate level, these setups are used only in highly specific application scenarios within some VR laboratories. In most human-computer interaction processes VR systems are only rarely applied by ordinary users or by experts, even when 3D tasks have to be accomplished [1]. One reason for this is the inconvenient instrumentation required to allow immersive interaction in such VR systems, i.e., the user is forced to wear stereo glasses, tracked devices, gloves, etc. Furthermore, the most effective ways for humans to interact with synthetic 3D environments have not been conclusively determined [1, 3].

Even the WIMP metaphor [15], which is used for 2D desktop interaction, has its limitations when it comes to direct manipulation of 3D data sets [6], e.g., via 3D widgets [7]. Devices with three or more degrees of freedom (DoFs) may provide a more direct interface to 3D manipulation than their 2D counterparts, but using multiple DoFs simultaneously still involves problems [3]. As a matter of fact, 2D interactions are performed best with 2D devices that usually support only two DoFs [13, 17]. Hence 3D user interfaces are often the wrong choice for tasks requiring exclusively or mainly two-dimensional control [1, 13]. Most 3D applications also include 2D user interface elements, such as menus, texts, and images, in combination with 3D content. While 3D content usually benefits from stereoscopic visualization, 2D GUI items often do not have associated depth information.
Therefore, interactions between monoscopic and stereoscopic elements, so-called interscopic interactions, have not been fully examined with special consideration of the interrelations between the elements.

Multi-touch interaction with computationally enhanced surfaces has received considerable attention in recent years. When talking about multi-touch surfaces we mean surfaces that support multi-finger and multi-hand operation (in analogy to the seminal work by Bill Buxton [5]). Multi-touch surfaces can be realised using different technologies, ranging from capacitive sensing to video analysis of infrared or full-color video images. Recently the promising FTIR (frustrated total internal reflection) technology has been rediscovered by Jeff Han [12]; its low cost has accelerated the adoption of multi-touch over the last two years. If multi-touch applications need to distinguish between different users, the DiamondTouch concept from MERL [8] can be used, with the drawback that the users either need to be wired or must stay in specially prepared locations.

[Figure 1: Illustration of two users interacting with interscopic data in a city planning scenario.]

With today's technology it is now possible to apply the basic advantages of bi-manual interaction [5, 9, 23] to any suitable domain. Another benefit of multi-touch technology is that the user does not have to wear inconvenient devices in order to interact in an intuitive way [18]. FTIR, for instance, allows multi-touch input by means of a low-cost system. The DoFs are restricted by the physical constraints of the touch screen. In combination with autostereoscopic displays such a system can avoid any instrumentation of the user while providing an advanced user experience. However, the benefits and limitations of using multi-touch in combination with stereoscopic display have not been examined in depth and are not well understood.

Our experience leads us to believe that mobile devices with multi-touch enabled surfaces, such as the iPhone/iPod touch, have great potential to support and enrich the interaction with large-scale stereoscopic projection screens or even immersive virtual reality. In this position paper we discuss challenges of such user interfaces for stereoscopic display setups and, in particular, the role multi-touch enabled mobile devices could play in those environments.

The paper is structured as follows: In Section 2 we discuss issues related to the parallax-dependent selection and direct manipulation of 3D objects, as well as issues related to navigation in 3D data sets. These issues have to be taken into account when designing a multi-touch user interface for 3D interaction. In addition, we illustrate how the combination of a mobile multi-touch device and a stereoscopic multi-touch wall can enrich the interaction and solve existing interaction problems. Furthermore, we discuss application areas that show the potential for interaction with stereoscopic content via multi-touch interfaces, in particular multi-touch enabled mobile devices. Section 3 concludes the paper.

2. MULTI-TOUCHING 3D DATA

As mentioned above, 3D visualization applications combine two-dimensional with three-dimensional content. While 3D data has the potential to benefit from stereoscopic display, visualization of and interaction with 2D content should be restricted to two dimensions [1].
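To make this interscopic split concrete, the following minimal sketch (our own illustration in Python; all names and constants are assumptions, not code from any system described here) renders 3D content once per eye while drawing 2D GUI elements at identical screen coordinates in both eye images, i.e., at zero parallax:

```python
# Minimal interscopic rendering sketch (illustration only, names are ours):
# the 3D scene is projected twice, once per eye, while 2D GUI elements are
# drawn at identical screen coordinates for both eyes (zero parallax).

EYE_SEP = 0.065    # interocular distance in meters (assumed)
SCREEN_DIST = 2.0  # viewer-to-screen distance in meters (assumed)

def project(point, eye_x):
    """Project a 3D point (x, y, z = distance from viewer) onto the screen
    plane for an eye laterally offset by eye_x."""
    x, y, z = point
    s = SCREEN_DIST / z                      # perspective scale factor
    return (eye_x + (x - eye_x) * s, y * s)  # screen-plane coordinates

def draw(screen_pos):
    print("draw at screen position", screen_pos)

def render_frame(scene_points, gui_items):
    for eye_x in (-EYE_SEP / 2, +EYE_SEP / 2):  # left eye, then right eye
        for p in scene_points:
            draw(project(p, eye_x))             # 3D content: per-eye projection
        for item in gui_items:
            draw(item)                          # 2D GUI: same position in both
                                                # eye images -> zero parallax

# A scene point one meter behind the screen plane, and a GUI button:
render_frame(scene_points=[(0.0, 0.1, 3.0)], gui_items=[(0.4, -0.3)])
```

Running the sketch shows a horizontal offset (parallax) between the two projections of the scene point, while the GUI item coincides in both eye images and is therefore perceived at the screen plane.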
In this section we discuss aspects which have to be taken into account when designing a multi-touch user interface for interscopic interaction. We believe that such interaction with interscopic data can be very useful, for example, in a city planning scenario (see Figure 1).

2.1 Parallax Paradigms

A major benefit of stereoscopy is binocular disparity, which provides better depth awareness. When stereoscopic display is used, each eye of the user perceives a different perspective of the same scene. This can be achieved with different technologies, either by having the user wear special glasses or by using special 3D displays. Although the resulting binocular disparity provides an additional depth cue, in a stereoscopic representation of a 3D scene it may be hard to access distant objects [3]. This applies in particular if the interaction is restricted to a 2D touch surface. Objects may be displayed with different parallax paradigms, i.e., negative, zero, or positive parallax, resulting in different stereoscopic effects. Interaction with objects that are displayed with different parallaxes is still a challenging task in VR-based environments. In particular, the interaction with objects having a large negative parallax is complicated.

2.1.1 Negative Parallax

When stereoscopic content is displayed with negative parallax, the data appears to be in front of the projection screen (see the orange-colored box in Figure 2). Hence, when the user wants to interact with data objects by touching them, s/he is limited to touching the area behind the objects, since multi-touch screens capture only direct contacts. The user therefore virtually has to move fingers, or her/himself, through virtual objects, which disturbs the stereoscopic projection. Consequently, immersion may be lost. This problem is a common issue known from the two-dimensional representation of the mouse cursor within a stereoscopic image. While the mouse cursor can be displayed stereoscopically on top of stereoscopic objects [20], movements of real objects in the physical space, e.g., the user's hands, cannot be constrained such that they appear only on top of virtual objects.
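For concreteness, the three parallax paradigms follow from simple off-axis stereo geometry (a standard relation; the symbols e, D, and Z are our notation and do not appear in the paper). With interocular distance e, viewer-to-screen distance D, and an object at distance Z from the viewer, the on-screen parallax is

```latex
P \;=\; e\,\frac{Z - D}{Z}
\qquad\Longrightarrow\qquad
\begin{cases}
P < 0, & Z < D \quad \text{(negative parallax: object appears in front of the screen)}\\
P = 0, & Z = D \quad \text{(zero parallax: object appears on the screen plane)}\\
P > 0, & Z > D \quad \text{(positive parallax: object appears behind the screen)}
\end{cases}
```

For Z < D the parallax is negative and the object is perceived in front of the screen, which is precisely the situation in which the touch surface lies behind the perceived object and the conflicts described above arise.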
REFERENCES

[1] Johannes Schöning, et al. Improving interaction with virtual globes through spatial thinking: helping users ask "why?". IUI '08, 2008.
[2] Ross T. Smith, et al. Tech Note: Digital Foam. 2008 IEEE Symposium on 3D User Interfaces, 2008.
[3] Jefferson Y. Han. Low-cost multi-touch sensing through frustrated total internal reflection. UIST, 2005.
[4] Timo Ropinski, et al. Interscopic User Interface Concepts for Fish Tank Virtual Reality Systems. 2007 IEEE Virtual Reality Conference, 2007.
[5] Daniel C. Robbins, et al. Three-dimensional widgets. I3D '92, 1992.
[6] Constrained 3D navigation with 2D controllers. Proceedings of Visualization '97, 1997.
[7] Wolfgang Stuerzlinger, et al. Unconstrained vs. Constrained 3D Scene Manipulation. EHCI, 2001.
[8] Timo Ropinski, et al. VR and Laser-Based Interaction in Virtual Environments Using a Dual-Purpose Interaction Metaphor. 2005.
[9] Clifton Forlines, et al. DTLens: multi-user tabletop spatial data exploration. UIST, 2005.
[10] Brad A. Myers, et al. A taxonomy of window manager user interfaces. IEEE Computer Graphics and Applications, 1988.
[11] W. Buxton, et al. A study in two-handed input. CHI '86, 1986.
[12] Mike Wu, et al. A study of hand shape use in tabletop gesture interaction. CHI Extended Abstracts, 2006.
[13] Daniel J. Wigdor, et al. Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces. AVI '08, 2008.
[14] Ivan Poupyrev, et al. 3D User Interfaces: Theory and Practice. 2004.
[15] Joshua Napoli, et al. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays. IS&T/SPIE Electronic Imaging, 2005.
[16] Mike Wu, et al. Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays. UIST '03, 2003.
[17] Douglas A. Bowman, et al. Interaction Techniques for Common Tasks in Immersive Virtual Environments: Design, Evaluation, and Application. 1999.
[18] Andrew S. Forsberg, et al. Image plane interaction techniques in 3D immersive environments. SI3D, 1997.
[19] Martin Hachet, et al. Navidget for Easy 3D Camera Positioning from 2D Inputs. 2008 IEEE Symposium on 3D User Interfaces, 2008.
[20] Darren Leigh, et al. DiamondTouch: a multi-user touch technology. UIST '01, 2001.