Self-localization from the panoramic views for autonomous mobile robots
This paper describes a self-localization method for mobile robots using panoramic view images. A panoramic view image captures, at each position, the locations of surrounding objects relative to the viewing robot and the directions between those objects. In a sequence of panoramic images, target objects such as traffic signs, building facades, and road signs have known real-world locations, so their observed positions and directions allow the robot to estimate its own position and orientation. Using previously captured panoramic images, the method computes the distance and direction of each region of interest, matches corresponding regions across the sequence, and identifies their real-world locations. Regions are extracted using vertical edge line segments, which appear on buildings and traffic-sign posts, together with region segments of uniform color. The centroid of each segmented region is used to localize its principal point. In our experiments, a mobile robot traverses the university campus capturing panoramic images so that it can recognize its location and build a map of the campus. Experimental results show that the method is adequate for map generation and for sending the robot to a given destination.
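To illustrate the kind of computation the abstract describes, the sketch below shows how a landmark centroid detected in a 360-degree panorama can be converted to a bearing, and how bearings to two landmarks with known map positions can be triangulated into a robot position. This is a minimal illustration, not the paper's actual algorithm; the panorama width, heading convention, landmark coordinates, and pixel columns are all assumed for the example.

```python
import numpy as np

def column_to_bearing(col, image_width, heading_rad):
    """Convert a panoramic image column to an absolute bearing.

    Assumes the panorama spans 360 degrees horizontally and that
    column 0 corresponds to the robot's current heading.
    """
    return heading_rad + 2.0 * np.pi * col / image_width

def localize_from_two_landmarks(l1, l2, bearing1, bearing2):
    """Estimate the robot position from absolute bearings to two
    landmarks with known world coordinates (bearing triangulation).

    The robot lies on the ray that starts at each landmark and points
    back along the measured bearing; its position is the intersection
    of the two rays.
    """
    l1, l2 = np.asarray(l1, float), np.asarray(l2, float)
    u1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    u2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    # Landmark_i = position + d_i * u_i  =>  solve d1*u1 - d2*u2 = l1 - l2.
    A = np.column_stack((u1, -u2))
    d1, d2 = np.linalg.solve(A, l1 - l2)
    return l1 - d1 * u1  # robot position (equals l2 - d2 * u2)

if __name__ == "__main__":
    W = 2048          # assumed panorama width in pixels
    heading = 0.0     # assumed known robot heading in radians
    # Hypothetical landmark map positions and detected centroid columns,
    # chosen so the true robot position is near the origin.
    sign, facade = (10.0, 5.0), (-3.0, 12.0)
    b_sign = column_to_bearing(151, W, heading)
    b_facade = column_to_bearing(592, W, heading)
    print("estimated position:",
          localize_from_two_landmarks(sign, facade, b_sign, b_facade))
```

The example uses only bearings because a single panorama gives directions directly; range to a landmark would otherwise have to come from the matched regions across consecutive panoramas, as the abstract suggests.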