Smart Ubiquitous Projection: Discovering Surfaces for the Projection of Adaptive Content

Ubiquitous projection, or "display everywhere", is a popular paradigm in which ordinary rooms are augmented with projected digital content to create immersive, interactive environments. In this work, we revisit this concept: instead of treating every physical surface and object as a display, we seek to identify areas that are suitable for projecting and interacting with digital information. After establishing a set of requirements that such surfaces need to fulfil, we describe a novel computer-vision technique that automatically detects rectangular surface regions deemed adequate for projection and marks them as available placeholders that users can employ as "clean" displays. As a proof of concept, we show how content can be adaptively laid out in these placeholders using a simple tablet UI.
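The abstract does not spell out the detection pipeline, but the following is a minimal sketch of the general idea of finding "clean", rectangular projection placeholders in a camera view. It assumes (as a stand-in for the paper's actual criteria) that low edge density and low brightness variance indicate a surface patch suitable for projection; the window size, stride, thresholds, and file names are hypothetical.

```python
# Hypothetical sketch (not the authors' implementation): score sliding
# rectangular windows by edge density and brightness uniformity, then keep
# the best non-overlapping windows as candidate projection placeholders.
import cv2
import numpy as np

def find_projection_placeholders(bgr, win=(160, 120), stride=40,
                                 max_edge_density=0.02, max_std=18.0):
    """Return (x, y, w, h) rectangles that look 'clean' enough to project on."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = gray.shape
    ww, wh = win
    candidates = []
    for y in range(0, h - wh + 1, stride):
        for x in range(0, w - ww + 1, stride):
            patch_edges = edges[y:y + wh, x:x + ww]
            patch_gray = gray[y:y + wh, x:x + ww]
            edge_density = np.count_nonzero(patch_edges) / patch_edges.size
            # Low edge density and low variance suggest a uniform, clutter-free patch.
            if edge_density < max_edge_density and patch_gray.std() < max_std:
                score = edge_density + patch_gray.std() / 255.0
                candidates.append((score, (x, y, ww, wh)))
    # Greedily keep the best-scoring, mutually non-overlapping rectangles.
    candidates.sort(key=lambda c: c[0])
    kept = []
    for _, (x, y, rw, rh) in candidates:
        if all(x + rw <= kx or kx + kw <= x or y + rh <= ky or ky + kh <= y
               for kx, ky, kw, kh in kept):
            kept.append((x, y, rw, rh))
    return kept

if __name__ == "__main__":
    frame = cv2.imread("room.jpg")  # hypothetical input image of the room
    assert frame is not None, "could not read input image"
    for x, y, rw, rh in find_projection_placeholders(frame):
        cv2.rectangle(frame, (x, y), (x + rw, y + rh), (0, 255, 0), 2)
    cv2.imwrite("placeholders.jpg", frame)
```

A real system along the lines described in the abstract would additionally use geometric cues (e.g. planarity from a depth camera and projector-camera calibration) before exposing the detected rectangles as placeholders to the layout UI; the sketch above only illustrates the appearance-based filtering step.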
