I-see-3D! An interactive and immersive system that dynamically adapts 2D projections to the location of a user's eyes

This paper presents a non-intrusive system that gives the illusion of an immersive and interactive 3D environment using 2D projectors. The user neither needs to wear glasses nor to watch a (limited) screen: the virtual world is drawn on the floor, all around him. Since the user is himself immersed in the virtual world, there is no need for a proxy such as an avatar; he can move freely inside the virtual environment. Moreover, the I-see-3D system allows a user to manipulate virtual objects with his own body, making interactions with the virtual world very intuitive. Giving the illusion of 3D requires rendering images in a way that accounts for both the deformation of the image projected on the floor and the position of the user's eye in the virtual world. The resulting projection is neither perspective nor orthographic. Nevertheless, we describe how it can be implemented with the standard OpenGL pipeline, without any shader. Our experiments demonstrate that our system is effective in giving the illusion of 3D. Videos showing the results obtained with our I-see-3D system are available on our website: http://www.ulg.ac.be/telecom/projector.
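The core geometric idea — rendering each virtual point as seen from the tracked eye position, then flattening it onto the floor — can be illustrated with a minimal sketch. This is not the paper's implementation (which uses the OpenGL matrix pipeline); it is an assumed, simplified model in which the floor is the plane z = 0, the eye position comes from a tracker, and the function name `floor_projection` is hypothetical:

```python
def floor_projection(eye, point, eps=1e-9):
    """Anamorphic projection sketch: cast a ray from the user's eye
    through a virtual 3D point and return where it hits the floor
    plane z = 0. Drawing that floor point makes the virtual point
    appear at its intended 3D location *from the eye's viewpoint*.

    eye, point: (x, y, z) tuples; the eye is assumed above the floor.
    """
    ex, ey, ez = eye
    px, py, pz = point
    dz = ez - pz
    if abs(dz) < eps:
        # Ray parallel to the floor: the point has no floor image.
        raise ValueError("ray from eye to point never meets the floor")
    t = ez / dz  # parameter where the ray crosses z = 0
    return (ex + t * (px - ex), ey + t * (py - ey), ez + t * (pz - ez))


# A point already lying on the floor projects to itself, regardless of
# where the eye is; a raised point is pushed away from the eye on the
# floor, which is exactly the perspective-dependent distortion the
# system must re-render whenever the user moves.
print(floor_projection((0.0, 0.0, 1.7), (1.0, 0.0, 0.0)))
print(floor_projection((0.0, 0.0, 1.7), (1.0, 0.0, 0.5)))
```

In a full system, the resulting floor coordinates would still have to pass through the projector's calibration (a homography between floor and projector pixels), which this sketch omits; composing that mapping with the projection above is what makes the overall transform neither purely perspective nor orthographic.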
