Optimizing camera placement for localization accuracy

This paper presents the optimization of camera placement for improved localization accuracy. It forms the basis of a localization system that aims for high accuracy, achieved by using several cameras with highly redundant fields of view. The paper presents the calculation of the localization accuracy, which depends on the camera model and the pixel quantization error, at one specific point. A method is then introduced for handling entire areas in a probabilistic sense instead of examining only a single point. The accuracy can be improved by adding a new camera to the system, and the calculation of the optimal position of the new camera, subject to certain constraints, is demonstrated. The Smart Mobile Eyes for Localization (SMEyeL) project is open-source: the source code, all measurement input data, and the documentation are publicly available.
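To illustrate the kind of dependence described above, the following is a minimal sketch, not the paper's actual method, of how pixel quantization bounds the localization error of a point triangulated by two pinhole cameras. The function names, the half-pixel error model, and the 1/sin(parallax) amplification factor are simplifying assumptions for illustration only.

```python
import math

def pixel_angle(fov_deg: float, resolution_px: int) -> float:
    """Approximate angular extent of one pixel for a pinhole camera (radians).

    Assumes the field of view is spread uniformly across the sensor,
    which is a simplification of a real camera model.
    """
    return math.radians(fov_deg) / resolution_px

def point_uncertainty(depth_m: float, fov_deg: float, resolution_px: int,
                      parallax_deg: float) -> float:
    """Rough upper bound (metres) on the triangulated position error of a
    point at depth_m seen by two identical cameras whose viewing rays
    intersect at parallax_deg.

    A half-pixel quantization error sweeps a lateral band of width
    depth * tan(a/2) perpendicular to each ray; intersecting the two
    rays amplifies this by roughly 1/sin(parallax).
    """
    a = pixel_angle(fov_deg, resolution_px)
    lateral = depth_m * math.tan(a / 2.0)
    return lateral / math.sin(math.radians(parallax_deg))

# Example: a point 5 m away, 60-degree FOV, 1920 px horizontal resolution.
# Orthogonal rays (90 deg parallax) give millimetre-level error; a narrow
# 10-degree parallax inflates the same pixel error several-fold.
err_wide = point_uncertainty(5.0, 60.0, 1920, 90.0)
err_narrow = point_uncertainty(5.0, 60.0, 1920, 10.0)
```

This toy model already shows why camera placement matters: the pixel-level error is fixed by the camera model, but the geometry of the viewing rays (the parallax) determines how strongly it degrades the 3D estimate, which is what an optimal placement exploits.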
