Motion estimation of a miniature helicopter using a single onboard camera

This paper describes a technique for estimating the translational and rotational velocities of a miniature helicopter from the video stream of a single onboard camera. For each pair of consecutive frames, point correspondences are identified and epipolar-geometry-based algorithms are used to estimate the absolute rotation and the relative (up-to-scale) displacement. Images from an onboard camera are often corrupted by various types of noise; SIFT descriptors proved to be the most robust feature descriptors for establishing point correspondences. To speed up processing, we introduce a new representation of these descriptors based on compressive-sensing formalisms. To recover the absolute displacement of the helicopter between frames, we use measurements from a simulated IR sensor to obtain the true change in altitude of the body frame, scale the other translational components accordingly, and then estimate the velocities. Experiments conducted on data from a real helicopter in an indoor environment demonstrate promising results.
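The descriptor compression described above can be sketched as a random projection of the 128-dimensional SIFT descriptors into a lower-dimensional space, in the spirit of compressive sensing: random projections approximately preserve pairwise distances, so nearest-neighbour matching can run on the compressed vectors. This is a minimal illustration with NumPy; the projection dimension (32) and the Gaussian measurement matrix are assumptions for the sketch, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 128, 32  # SIFT descriptor dimension; compressed dimension (illustrative)

# Random Gaussian measurement matrix, fixed once and reused for all descriptors.
phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, d))

def compress(desc):
    """Project descriptors (rows) onto phi. Pairwise distances are roughly
    preserved (Johnson-Lindenstrauss), so matching in m dims approximates
    matching in the original 128 dims at a fraction of the cost."""
    return desc @ phi.T

# Toy matching example: a database of synthetic descriptors and a noisy
# copy of entry 17 standing in for the same keypoint seen in the next frame.
db = rng.normal(size=(200, d))
query = db[17] + 0.05 * rng.normal(size=d)

# Nearest neighbour in the full space and in the compressed space.
nn_full = int(np.argmin(np.linalg.norm(db - query, axis=1)))
nn_comp = int(np.argmin(np.linalg.norm(compress(db) - compress(query[None])[0], axis=1)))
# Both searches recover index 17, but the compressed search touches 4x less data.
```

The resulting correspondences would then feed the epipolar-geometry stage (essential-matrix estimation and decomposition into rotation and up-to-scale translation), which is not reproduced here.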
