Localization is a fundamental task in mobile robotics, and in indoor environments various sensors can be used to solve it. In the Intelligent Space environment, laser range finders or ultrasonic positioning systems can localize and track mobile robots. Nevertheless, our final goal is to replace these sensors and accomplish the task using cameras alone. In this paper, we show the feasibility of determining a robot's location from the images of a single camera. In our experimental room we used a surveillance camera whose pan/tilt can be controlled to change the point of view. The camera was mounted on the ceiling, and six preset positions were selected to cover the whole area. Object recognition is based on colour-space filtering and contour detection. Finally, the contours of the detected objects are transformed from image space to the world coordinate system, and the resulting polygons are reduced to simpler ones. The system detects not only the positions of the objects but also their orientations.
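The last two steps of the pipeline described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes the floor is planar, so pixel coordinates of a detected contour can be mapped to world coordinates through a homography estimated from four or more known point correspondences (e.g. floor markers seen by the ceiling camera), and the mapped contour is then simplified with the Ramer-Douglas-Peucker algorithm. All point values below are made-up examples.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (both Nx2, N >= 4)
    with the direct linear transform (DLT)."""
    A = []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, x * X, y * X, X])
        A.append([0, 0, 0, -x, -y, -1, x * Y, y * Y, Y])
    # The homography is the null vector of A (last right-singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def image_to_world(H, pts):
    """Apply homography H to Nx2 pixel coordinates, returning Nx2
    world ground-plane coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return homog[:, :2] / homog[:, 2:3]

def rdp(points, eps):
    """Ramer-Douglas-Peucker polyline simplification with tolerance eps:
    keep a vertex only if it deviates more than eps from the chord."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    d = end - start
    norm = np.hypot(d[0], d[1])
    diff = points - start
    if norm == 0:
        dists = np.hypot(diff[:, 0], diff[:, 1])
    else:
        # Perpendicular distance of each point to the start-end chord.
        dists = np.abs(d[0] * diff[:, 1] - d[1] * diff[:, 0]) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > eps:
        left = rdp(points[: idx + 1], eps)
        right = rdp(points[idx:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# Hypothetical calibration: a 640x480 image of a 3 m x 2 m floor patch.
src = [[0, 0], [640, 0], [640, 480], [0, 480]]   # pixel corners
dst = [[0, 0], [3, 0], [3, 2], [0, 2]]           # world corners (metres)
H = homography_from_points(src, dst)
world_contour = rdp(image_to_world(H, src), eps=0.05)
```

With the contour expressed as a world-frame polygon, the object's position follows from the polygon centroid, and its orientation from, for example, the dominant edge direction.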