Visual Servoing for UAVs

Vision is the richest source of information for humans and for outdoor robotics alike, and its interpretation remains one of the most complex and challenging problems in signal processing and pattern recognition. The first results using vision in the control loop were obtained indoors in structured environments, in which a line or a known pattern is detected and followed by a robot (Feddema & Mitchell (1989), Masutani et al. (1994)). Subsequent works demonstrated that visual information can be used in tasks such as servoing and guiding of robot manipulators and mobile robots (Conticelli et al. (1999), Mariottini et al. (2007), Kragic & Christensen (2002)). Visual Servoing remains an open research area, with ample room for increasingly better and more relevant results in Robotics. It combines image processing and control techniques in such a way that the visual information is used within the control loop. Its bottleneck is obtaining a robust, on-line visual interpretation of the environment that can be usefully handled by control structures and algorithms. Visual Servoing solutions are typically divided into image-based and pose-based control techniques, depending on the kind of information provided by the vision system, which in turn determines the kind of references that have to be sent to the control structure (Hutchinson et al. (1996), Chaumette & Hutchinson (2006), Siciliano & Khatib (2008)). Another classical division of Visual Servoing algorithms considers the physical disposition of the visual system, yielding eye-in-hand and eye-to-hand systems, which in the case of Unmanned Aerial Vehicles (UAVs) translate into on-board visual systems (Mejias (2006)) and ground visual systems (Martinez et al. (2009)). The challenge of Visual Servoing is to be useful in outdoor, non-structured environments.
For this purpose the image processing algorithms have to provide visual information that is robust and computed in real time. UAVs can therefore be considered a challenging testbed for visual servoing, combining the difficulties of abrupt changes in the image sequence (e.g. vibrations), outdoor operation (non-structured environments) and 3D information changes (Mejias et al. (2006)). In this chapter we give special relevance to obtaining robust visual information for the visual servoing task. In Section 2 we overview the main algorithms used for visual tracking and discuss their robustness when applied to image sequences taken from the UAV. In Sections 3 and 4 we analyze how vision systems can perform 3D pose estimation that can be used for
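To make the image-based family of techniques concrete, the classical IBVS law drives the camera with a velocity v = -λ L⁺ (s - s*), where s stacks the tracked image features, s* their desired values, and L the interaction matrix. The sketch below (a simplified illustration, not the chapter's own implementation) assumes normalized point features with known depths Z:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image
    point (x, y) at depth Z, relating its image velocity to the
    camera's 6-DOF spatial velocity (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z,  0.0,      x / Z,  x * y,        -(1.0 + x*x),  y],
        [ 0.0,     -1.0 / Z,  y / Z,  1.0 + y*y,    -x * y,       -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Classical image-based visual servoing law v = -lam * L^+ (s - s*).

    points, desired: lists of (x, y) normalized image coordinates.
    depths: estimated depth Z of each point (assumed known here).
    Returns the commanded camera velocity screw (6,).
    """
    # Stack the 2x6 interaction matrices of all features into L (2N x 6).
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    # Feature error s - s*, flattened to a 2N vector.
    e = (np.asarray(points, float) - np.asarray(desired, float)).reshape(-1)
    # Exponential decrease of the error via the Moore-Penrose pseudoinverse.
    return -lam * np.linalg.pinv(L) @ e
```

With four or more non-degenerate points the 6-DOF velocity is fully constrained; when the features reach their desired locations the error, and hence the commanded velocity, vanishes.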

[1] J. F. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986.

[2] D. G. Lowe et al., "Shape indexing using approximate nearest-neighbour search in high-dimensional spaces," Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1997.

[3] P. I. Corke et al., "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, 1996.

[4] P. I. Corke et al., "Two Seconds to Touchdown - Vision-Based Controlled Forced Landing," Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006.

[5] F. Chaumette et al., "Visual servo control. I. Basic approaches," IEEE Robotics & Automation Magazine, 2006.

[6] P. F. Sturm et al., "Algorithms for plane-based pose estimation," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2000.

[7] R. Pieters, "Visual Servo Control," 2012.

[8] P. Campoy et al., "Fuzzy control system navigation using priority areas," 2008.

[9] M. A. Olivares-Méndez et al., "Computer Vision Onboard UAVs for Civilian Tasks," Journal of Intelligent & Robotic Systems, 2009.

[10] P. J. Rousseeuw et al., Robust Regression and Outlier Detection, 1987.

[11] M. A. Olivares-Méndez et al., "Trinocular ground system to control UAVs," Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009.

[12] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.

[13] G. Bradski, "Computer Vision Face Tracking For Use in a Perceptual User Interface," 1998.

[14] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, 2004.

[15] F. Miyazaki et al., "Visual servoing for non-holonomic mobile robots," Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1994.

[16] L. Van Gool et al., "SURF: Speeded Up Robust Features," Proc. European Conference on Computer Vision (ECCV), 2006.

[17] R. C. Bolles et al., "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, 1981.

[18] E. Muñoz et al., "Tracking a Planar Patch by Additive Image Registration," Proc. VLBV, 2003.

[19] D. Kragic et al., "Survey on Visual Servoing for Manipulation," 2002.

[20] M. J. Swain et al., "Color indexing," International Journal of Computer Vision, 1991.

[21] M. A. Olivares-Méndez et al., "Fuzzy Logic User Adaptive Navigation Control System For Mobile Robots In Unknown Environments," Proc. IEEE International Symposium on Intelligent Signal Processing, 2007.

[22] P. Khosla et al., "Image-based visual servoing of nonholonomic mobile robots," Proc. 38th IEEE Conference on Decision and Control, 1999.

[23] S. Baker et al., "Lucas-Kanade 20 Years On: A Unifying Framework," International Journal of Computer Vision, 2004.

[24] G. Oriolo et al., "Image-Based Visual Servoing for Nonholonomic Mobile Robots Using Epipolar Geometry," IEEE Transactions on Robotics, 2007.

[25] C. Tomasi et al., "Good features to track," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1994.

[26] J.-Y. Bouguet, "Pyramidal implementation of the Lucas-Kanade feature tracker," 1999.

[27] I. D. Reid et al., "A plane measuring device," Image and Vision Computing, 1999.

[28] M.-O. Berger et al., "Pose Estimation for Planar Structures," IEEE Computer Graphics and Applications, 2002.

[29] R. O. Duda et al., "Use of the Hough transformation to detect lines and curves in pictures," Communications of the ACM, 1972.

[30] A. W. Fitzgibbon et al., "Markerless tracking using planar structures in the scene," Proc. IEEE and ACM International Symposium on Augmented Reality (ISAR), 2000.

[31] C. G. Harris et al., "A Combined Corner and Edge Detector," Proc. Alvey Vision Conference, 1988.

[32] T. Kanade et al., "An Iterative Image Registration Technique with an Application to Stereo Vision," Proc. International Joint Conference on Artificial Intelligence (IJCAI), 1981.