Using parallel line information for vision-based landmark location estimation and an application to automatic helicopter landing

Abstract An approach to landmark location estimation using computer vision techniques is proposed. The objective is to derive the position and orientation of the landmark with respect to the vehicle from a single image; such information is necessary for automatic vehicle navigation. The approach requires low hardware cost and simple computation. The vanishing points of parallel lines on the landmark are used to determine the landmark orientation: the detected vanishing points yield the relative orientation between the landmark and the camera, which is then used to compute the landmark orientation with respect to the vehicle. The known size of the landmark is used to determine the landmark position: sets of collinear points are extracted from the landmark, their inter-point distances are computed, and the positions of the collinear point sets are evaluated to determine the landmark position. Landing-site location estimation using the identification marking "H" on a helicopter landing site is presented as an application of the proposed approach to automatic helicopter landing. Simulations and experiments have been conducted to demonstrate the feasibility of the proposed approach.
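The core geometric step described above is the recovery of a 3D direction from the vanishing point of parallel landmark lines. The sketch below is a minimal illustration of that standard construction, not the authors' implementation: two image line segments known to be parallel on the landmark (e.g. the two vertical strokes of the "H" marking) are intersected as homogeneous cross products to obtain the vanishing point, which is then back-projected through an assumed pinhole intrinsic matrix K to give the direction of those lines in the camera frame. All coordinates and the matrix K are made-up example values.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points given in pixel coordinates."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line_a, line_b):
    """Intersection of two image lines (projections of parallel 3D lines)."""
    v = np.cross(line_a, line_b)
    return v / v[2]  # normalize to pixel coordinates (assumes v[2] != 0)

def direction_from_vanishing_point(v, K):
    """3D direction, in the camera frame, of the parallel lines producing v."""
    d = np.linalg.inv(K) @ v
    return d / np.linalg.norm(d)

# Assumed pinhole camera: 800 px focal length, principal point at (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Endpoints (in pixels) of two segments that are parallel on the landmark.
l1 = line_through((300, 400), (310, 200))
l2 = line_through((420, 410), (405, 205))

v = vanishing_point(l1, l2)
d = direction_from_vanishing_point(v, K)
print("vanishing point (px):", v[:2])
print("line direction in camera frame:", d)
```

With two such directions from non-parallel line families on the landmark, the full relative rotation between landmark and camera can be assembled, which is the role the vanishing points play in the orientation-estimation step of the abstract.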
