In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms that extract features from the images of its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms introduce errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm detects the corners of field lines by using the omni-directional vision system. Particularly in the middle-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by lighting than the color models of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process omni-directional images, reducing the computational load and improving system efficiency. The lines were radially arranged around the center of the omni-directional camera image, resulting in a shorter computational time compared with scanning in the traditional Cartesian coordinate system. 
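The radial scan-line idea above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, line count, and radius are hypothetical parameters chosen only to show why sampling along rays from the mirror center touches far fewer pixels than a full Cartesian raster scan.

```python
import numpy as np

def radial_scan_lines(image, center, num_lines=60, max_radius=200):
    """Sample pixel values along radial lines from the omni-image center.

    Only pixels lying on the scan-lines are inspected, so the number of
    processed pixels is roughly num_lines * max_radius instead of the
    full width * height of the image.
    """
    cx, cy = center
    samples = []
    for k in range(num_lines):
        theta = 2.0 * np.pi * k / num_lines  # angle of this scan-line
        line = []
        for r in range(1, max_radius):
            # Nearest pixel on the ray at distance r from the center
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= x < image.shape[1] and 0 <= y < image.shape[0]:
                line.append(image[y, x])
        samples.append(np.array(line))
    return samples
```

White field lines then appear as bright runs along individual scan-lines, which can be thresholded per line rather than over the whole frame.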
However, the omni-directional image is distorted, which makes it difficult to recognize the position of the robot; image transformation was therefore required to implement self-localization. Second, we transformed the omni-directional images into panoramic images, so that the distortion of the white lines could be corrected through the transformation. The interest points that form the corners of the landmark were then located using the features from accelerated segment test (FAST) algorithm, which examines a circle of sixteen pixels surrounding each corner candidate and serves as a high-speed feature detector for real-time frame-rate applications. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were applied to the corners obtained from the FAST algorithm to localize the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a position error of 2 cm on a soccer field measuring 600 cm × 400 cm.
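The trilateration step can be illustrated with a short sketch. This is an assumption-laden example, not the paper's method: it takes three corner landmarks with known field coordinates and measured distances, and solves the linearized circle equations by least squares; the function name and landmark values are illustrative only.

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Estimate (x, y) from three landmarks p_i and measured ranges r_i.

    Subtracting the circle equation at p1 from those at p2 and p3
    removes the quadratic terms, leaving a linear system in (x, y):
      2(x2-x1)x + 2(y2-y1)y = r1^2 - r2^2 + x2^2 - x1^2 + y2^2 - y1^2
    """
    A = np.array([
        [2 * (p2[0] - p1[0]), 2 * (p2[1] - p1[1])],
        [2 * (p3[0] - p1[0]), 2 * (p3[1] - p1[1])],
    ])
    b = np.array([
        r1**2 - r2**2 + p2[0]**2 - p1[0]**2 + p2[1]**2 - p1[1]**2,
        r1**2 - r3**2 + p3[0]**2 - p1[0]**2 + p3[1]**2 - p1[1]**2,
    ])
    # Least squares tolerates small range errors from noisy corner detection
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With noisy range measurements the least-squares solution degrades gracefully, which is why range errors of a few pixels in the unwrapped image can still yield centimeter-level position estimates.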