A note on hybrid control of robotic spatial augmented reality

A robotic spatial augmented reality (RSAR) system combines robotics with spatial augmented reality (SAR): cameras recognize real objects, and projectors augment information and user interfaces directly onto the surfaces of those objects, rather than relying on mobile or wearable display devices. Controlling an RSAR system therefore requires handling several types of control schemes at once, such as classical inverse kinematics of simply linked bodies, inverse projection to find appropriate intrinsic and extrinsic parameters of the projector, and geometric manipulation of the projection source image to increase flexibility in control. In this paper, we outline a hybrid approach that coordinates these control components, with a particular focus on its application in a prototype RSAR system developed at ETRI.
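As a minimal sketch of the third control component mentioned above, geometric manipulation of the projection source image is commonly realized by pre-warping the image with a planar homography so that it appears undistorted on the target surface. The snippet below estimates such a homography from four corner correspondences via the direct linear transform (DLT); the function names and the sample coordinates are illustrative assumptions, not part of the paper's actual implementation.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H such that dst ~ H @ src (homogeneous),
    from 4 or more point correspondences, using the DLT algorithm."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the DLT system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of A with the smallest
    # singular value; normalize so that H[2, 2] == 1.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply homography H to a 2D point p (projective normalization)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

if __name__ == "__main__":
    # Hypothetical example: corners of a unit source image mapped to the
    # quadrilateral where the projection lands on a tilted surface.
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    dst = [(10, 10), (30, 12), (28, 35), (8, 30)]
    H = homography_dlt(src, dst)
    print(warp_point(H, (1, 1)))
```

In practice the corner correspondences would come from the camera-projector calibration step, and the inverse homography would be used to resample the source image before projection.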
