BeThere: 3D mobile collaboration with spatial input

We present BeThere, a proof-of-concept system for exploring 3D input in mobile collaborative interactions. BeThere supports 3D gestures and spatial input that allow remote users to perform a variety of virtual interactions in a local user's physical environment. The system is completely self-contained, using depth sensors both to track the location of a user's fingers and to capture the 3D shape of objects in front of the sensor. We illustrate its unique capabilities through a series of interactions that let users control and manipulate 3D virtual content, and we report qualitative feedback from a preliminary user study confirming that users can complete a shared collaborative task with the system.
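The abstract does not detail how finger positions are recovered from the depth stream. As a rough illustration only, the sketch below shows one common heuristic for this kind of input: treating the closest valid pixel within a hand-sized depth band as the fingertip of a pointing hand. The millimeter depth range, array shapes, and function name are our assumptions for illustration, not BeThere's actual pipeline, which would additionally need segmentation and temporal smoothing.

```python
import numpy as np

def find_fingertip(depth_mm: np.ndarray, min_mm: int = 200, max_mm: int = 600):
    """Return (row, col, depth) of the closest valid pixel.

    Crude nearest-point heuristic (an assumption, not the paper's
    method): within a hand-sized depth band, the fingertip of a
    pointing hand is usually the pixel nearest to the sensor.
    """
    valid = (depth_mm >= min_mm) & (depth_mm <= max_mm)
    if not valid.any():
        return None  # no hand inside the interaction volume
    # Push out-of-band pixels to the maximum so argmin ignores them.
    masked = np.where(valid, depth_mm, np.iinfo(depth_mm.dtype).max)
    row, col = np.unravel_index(np.argmin(masked), masked.shape)
    return row, col, int(depth_mm[row, col])

# Synthetic 240x320 depth frame: background at 1500 mm, a "finger" at 350 mm.
frame = np.full((240, 320), 1500, dtype=np.uint16)
frame[100:120, 150:155] = 350
print(find_fingertip(frame))  # -> (100, 150, 350)
```

In a live system, a loop over sensor frames would feed each depth image through a function like this and map the recovered 3D point into the shared virtual scene.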
