RoboCup is among the most prestigious robotics competitions in the world. Contestants compete in different categories with robots they develop incrementally toward the organization's goals for 2050. One of these categories is dedicated to search and rescue robots. Robots earn points according to the following criteria: autonomously identifying victims in disaster scenarios, detecting hazmat signs, reading QR codes, recognizing the various objects specified during the competition, and marking their locations on a map. In this study, performance tests of several feature extraction methods were carried out for the detection of hazmat signs in RoboCup Rescue Robot League competition environments. The tests were conducted in a laboratory arranged as a RoboCup Rescue Robot League arena. Based on these comparison tests, the most suitable method for real-time operation was selected while keeping accuracy at an acceptable level.
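As a rough illustration of the kind of comparison described above (not the authors' actual code), the sketch below matches a hazmat-sign template against a camera frame with two common OpenCV feature extractors and reports the match count and runtime. The template path, camera index, detector choices (ORB, SIFT), and ratio-test threshold are all assumptions made for the example.

```python
# Minimal sketch, assuming OpenCV >= 4.4: compare feature extractors for
# hazmat-sign matching by good-match count and per-frame runtime.
import time
import cv2 as cv

TEMPLATE_PATH = "hazmat_flammable.png"   # hypothetical reference sign image

def build_detector(name):
    """Return an OpenCV feature detector/descriptor by name."""
    if name == "ORB":
        return cv.ORB_create(nfeatures=1000)
    if name == "SIFT":
        return cv.SIFT_create()
    raise ValueError(f"unsupported detector: {name}")

def count_good_matches(detector, template_gray, frame_gray, norm):
    """Detect keypoints in both images and count matches passing Lowe's ratio test."""
    kp1, des1 = detector.detectAndCompute(template_gray, None)
    kp2, des2 = detector.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv.BFMatcher(norm)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good)

if __name__ == "__main__":
    template = cv.imread(TEMPLATE_PATH, cv.IMREAD_GRAYSCALE)
    cap = cv.VideoCapture(0)             # robot / laptop camera (placeholder index)
    ok, frame = cap.read()
    cap.release()
    if not ok or template is None:
        raise SystemExit("could not read template or camera frame")
    frame_gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)

    # Time each method on one frame; a real evaluation would average over many
    # frames and pair the timing with detection accuracy, as the study describes.
    for name, norm in [("ORB", cv.NORM_HAMMING), ("SIFT", cv.NORM_L2)]:
        detector = build_detector(name)
        t0 = time.perf_counter()
        n_good = count_good_matches(detector, template, frame_gray, norm)
        dt_ms = (time.perf_counter() - t0) * 1000
        print(f"{name}: {n_good} good matches in {dt_ms:.1f} ms")
```

In practice, the trade-off sketched here (a fast binary descriptor versus a slower but often more robust float descriptor) is the kind of accuracy-versus-runtime comparison that decides which method is viable for real-time use on a rescue robot.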