Design of coded reference labels for indoor optical navigation using monocular camera

We present a machine vision based indoor navigation system. The paper describes pose estimation for a machine vision system that recognizes rotationally invariant, optimized color reference labels, combined with a geometric camera calibration model that determines the set of camera parameters. Each reference label carries one byte of information and can therefore be uniquely designed for up to 256 distinct values. More than four reference labels are used in the image to calculate the localization coordinates of the system. A Matlab algorithm has been developed so that the machine vision system can recognize N labels at any given orientation. In addition, a one-channel color technique is applied in the segmentation process; this technique significantly reduces the number of segmented image components, limiting memory storage requirements and processing time. The pose estimation algorithm is based on the direct linear transformation (DLT) method, using a set of control reference labels together with the camera calibration model. The experiments show that the pose of the machine vision system can be calculated with relatively high precision within the calibrated environment of reference labels.
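To illustrate the kind of computation the DLT method involves, the sketch below shows a minimal MATLAB estimate of the camera position from known label positions and their measured image coordinates. It is a generic textbook DLT formulation, not the paper's actual implementation: the function name, variable names, and the assumption of at least six distortion-corrected point correspondences are ours for illustration.

```matlab
% Minimal DLT sketch (illustrative, not the paper's code).
% Assumes: worldPts is an N-by-3 matrix of known reference-label positions
% (N >= 6) and imagePts is the N-by-2 matrix of corresponding,
% distortion-corrected pixel coordinates.
function C = dltCameraCenter(worldPts, imagePts)
    N = size(worldPts, 1);
    A = zeros(2*N, 11);
    b = zeros(2*N, 1);
    for i = 1:N
        X = worldPts(i,1); Y = worldPts(i,2); Z = worldPts(i,3);
        u = imagePts(i,1); v = imagePts(i,2);
        % Each correspondence yields two linear equations in the
        % eleven DLT parameters L1..L11.
        A(2*i-1,:) = [X Y Z 1 0 0 0 0 -u*X -u*Y -u*Z];
        A(2*i,  :) = [0 0 0 0 X Y Z 1 -v*X -v*Y -v*Z];
        b(2*i-1) = u;
        b(2*i)   = v;
    end
    L = A \ b;                               % least-squares DLT parameters
    P = [L(1:4)'; L(5:8)'; [L(9:11)' 1]];    % 3x4 projection matrix
    C = -P(:,1:3) \ P(:,4);                  % camera center in world coordinates
end
```

Given the projection matrix P = [M | p4] recovered from the DLT parameters, the camera (and hence system) position follows from M*C + p4 = 0, which is the last line of the sketch.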
