LensMouse: augmenting the mouse with an interactive touch display

We introduce LensMouse, a novel device that embeds a touch-screen display -- or tangible 'lens' -- onto a mouse. Users interact with the display of the mouse using direct touch, whilst also performing regular cursor-based mouse interactions. We demonstrate some of the unique capabilities of such a device, in particular for interacting with auxiliary windows, such as toolbars, palettes, pop-ups and dialog boxes. By migrating these windows onto LensMouse, challenges such as screen real-estate use and window management can be alleviated. In a controlled experiment, we evaluate the effectiveness of LensMouse in reducing cursor movements for interacting with auxiliary windows. We also consider concerns about the visual separation that results from introducing such a display-based device. Our results reveal that overall users are more effective with LensMouse than with auxiliary application windows that are managed in either single- or dual-monitor setups. We conclude by presenting other application scenarios that LensMouse could support.
