Learning to traverse doors using visual information

Mobile robots need to navigate their environment in order to perform useful tasks. Doors appear in almost every office-like indoor environment and must frequently be crossed during navigation. In this paper we present a new approach that uses visual information to anticipate that a door has to be crossed. By then combining visual information with ultrasonic sensors, the robot approaches the door until an adequate distance is reached; the door is then traversed using the sonar sensors alone. This paper describes the control architecture and the behaviors that have been implemented to obtain the door-traversing behavior, and discusses results and performance issues. The experiments were carried out with a B21 mobile robot. A minimal sketch of the kind of behavior-based pipeline the abstract describes is given below.
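The paper itself gives no code; the following is a minimal sketch, assuming a behavior-based door-traversal pipeline of the kind the abstract outlines (vision-driven door search, combined vision/sonar approach, sonar-only traversal). All names, sensor interfaces (door_bearing, front_sonar_cm, rear_sonar_cm) and numeric thresholds are hypothetical assumptions for illustration, not the authors' implementation.

```python
"""Hypothetical sketch of a behavior-based door-traversal controller.
Sensor interfaces and thresholds are assumptions, not taken from the paper."""

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple


class Behavior(Enum):
    SEARCH_DOOR = auto()   # vision: look for a door in the camera image
    APPROACH = auto()      # vision + sonar: drive toward the detected door
    TRAVERSE = auto()      # sonar only: cross the doorway
    DONE = auto()


@dataclass
class Velocity:
    linear: float   # m/s
    angular: float  # rad/s


APPROACH_DIST_CM = 80.0   # assumed "adequate distance" to switch to sonar-only traversal
CLEARED_DIST_CM = 120.0   # assumed rear clearance indicating the doorway has been crossed


def door_traversal_step(behavior: Behavior,
                        door_bearing: Optional[float],
                        front_sonar_cm: float,
                        rear_sonar_cm: float) -> Tuple[Behavior, Velocity]:
    """One control cycle: return the next active behavior and a velocity command.

    door_bearing      -- bearing (rad) of the door reported by the vision system,
                         or None if no door is currently visible.
    front/rear_sonar_cm -- distances from the sonar ring (hypothetical interface).
    """
    if behavior is Behavior.SEARCH_DOOR:
        if door_bearing is not None:
            return Behavior.APPROACH, Velocity(0.0, 0.0)
        return Behavior.SEARCH_DOOR, Velocity(0.0, 0.3)   # rotate in place to scan

    if behavior is Behavior.APPROACH:
        if front_sonar_cm <= APPROACH_DIST_CM:
            return Behavior.TRAVERSE, Velocity(0.0, 0.0)
        # steer toward the visually detected door while closing the distance
        steer = 0.0 if door_bearing is None else 0.5 * door_bearing
        return Behavior.APPROACH, Velocity(0.2, steer)

    if behavior is Behavior.TRAVERSE:
        if rear_sonar_cm >= CLEARED_DIST_CM:
            return Behavior.DONE, Velocity(0.0, 0.0)
        # keep moving slowly through the doorway using sonar only
        # (a sonar-based centring term would be added here in practice)
        return Behavior.TRAVERSE, Velocity(0.15, 0.0)

    return Behavior.DONE, Velocity(0.0, 0.0)
```

The sketch encodes the switching logic as a small finite set of behaviors with a single arbitration function, which is one common way to realize the kind of behavior-based control architecture the abstract refers to; the actual door detection and sonar-based centring are left abstract.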
