Qualitative Relational Mapping for Robotic Navigation

This paper presents a novel method for autonomous robotic navigation and mapping of large-scale spaces. The proposed framework uses a graphical representation of the world to build a map consisting of qualitative constraints on the relationships between objects observed by the robot. These relationships are expressed in terms of the relative geometrical layout of landmark triplets. A novel measurement method based on camera imagery is presented that extends previous work from the field of Qualitative Spatial Reasoning. Measurements are fused into the map using a deterministic approach based on iterative graph updates. Simulation results are presented for two simple scenarios, demonstrating that a reasonable robot trajectory can generate a fully constrained graph, but that the current approach is limited by the need to jointly observe most of the landmarks in every image in order to generate useful maps.
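As an illustrative sketch of the kind of triplet relation used in Qualitative Spatial Reasoning (this is a common QSR primitive, not necessarily the paper's exact formulation): a third landmark can be classified as lying to the left of, to the right of, or on the directed line through two reference landmarks, using the sign of a 2-D cross product.

```python
def triplet_relation(a, b, c, eps=1e-9):
    """Qualitative position of landmark c relative to the directed line a -> b.

    Returns "left", "right", or "collinear" based on the sign of the
    2-D cross product of (b - a) and (c - a). The eps tolerance guards
    against floating-point noise near collinearity.
    """
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    if cross > eps:
        return "left"
    if cross < -eps:
        return "right"
    return "collinear"

# Reference line runs from (0, 0) toward (1, 0), i.e. along the x-axis.
print(triplet_relation((0, 0), (1, 0), (0.5, 1)))   # above the line -> "left"
print(triplet_relation((0, 0), (1, 0), (0.5, -1)))  # below the line -> "right"
print(triplet_relation((0, 0), (1, 0), (2, 0)))     # on the line -> "collinear"
```

Enumerating this relation over all landmark triplets yields a set of qualitative constraints of the sort a relational map could accumulate as the robot observes the scene.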