A novel way to implement self-localization in a multi-robot experimental platform

Self-localization is an important part of multi-robot experiments. The traditional way to obtain the global pose (position and orientation) of a robot is to attach a unique pattern to it; a centralized sensor, such as a camera, identifies the pattern and estimates its pose directly. However, pattern-based localization scales poorly: in the presence of a large number of robots, recognizing all patterns can become a computational bottleneck. Moreover, implementing this technique in a decentralized framework is questionable. In addition, designing a large number of unique patterns and attaching them to often miniature robots poses practical challenges. To eliminate the need for patterns, this paper proposes a novel method that allows robots to obtain their global poses without using any pattern. In this method, each robot runs an Extended Kalman Filter (EKF) that fuses global position data with control input data to estimate its orientation. In a multi-robot setting this becomes challenging because the position data are untagged, so a robot has no a priori knowledge of which measurements pertain to its own position. To overcome this issue, we propose a method in which each robot distinguishes its own track from the others' by running an EKF on every candidate trajectory and comparing the average error over time between prediction and measurement. Thus, instead of identifying robots in a centralized manner for localization, the task is distributed so that each robot estimates its pose on its own. Extensive simulations and experiments have been conducted, and the results demonstrate the effectiveness of the method.
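
The sketch below illustrates the idea in the abstract, not the paper's actual implementation: it assumes a unicycle motion model with state (x, y, theta), control inputs (v, omega), position-only measurements, and hypothetical noise parameters. Each robot runs one EKF per candidate trajectory using its own control inputs and selects the track whose average prediction-versus-measurement error (innovation) is smallest.

```python
import numpy as np

class TrackEKF:
    """EKF over pose (x, y, theta), driven by the robot's own controls
    (v, omega) and fed position-only measurements from one candidate track.
    Motion/noise models here are illustrative assumptions."""

    def __init__(self, x0, P0, Q, R):
        self.x = np.array(x0, dtype=float)       # [x, y, theta]
        self.P = np.array(P0, dtype=float)
        self.Q, self.R = Q, R                    # process / measurement noise
        self.H = np.array([[1.0, 0.0, 0.0],      # position-only measurement
                           [0.0, 1.0, 0.0]])
        self.innovation_norms = []               # |z - H x| history

    def predict(self, v, omega, dt):
        x, y, th = self.x
        self.x = np.array([x + v * np.cos(th) * dt,
                           y + v * np.sin(th) * dt,
                           th + omega * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                      [0.0, 1.0,  v * np.cos(th) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        innov = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.P = (np.eye(3) - K @ self.H) @ self.P
        self.innovation_norms.append(np.linalg.norm(innov))

    def mean_innovation(self):
        return float(np.mean(self.innovation_norms)) if self.innovation_norms else np.inf


def identify_own_track(controls, candidate_tracks, dt=0.1):
    """Run one EKF per candidate trajectory with the robot's own controls and
    return the index of the track with the smallest average innovation,
    together with the fused pose estimate from that filter.

    controls:         sequence of (v, omega) pairs
    candidate_tracks: list of position sequences, each of length len(controls)+1
    """
    filters = [TrackEKF(x0=[*track[0], 0.0],
                        P0=np.diag([0.1, 0.1, np.pi]),
                        Q=np.diag([0.01, 0.01, 0.01]),
                        R=np.diag([0.05, 0.05]))
               for track in candidate_tracks]
    for k, (v, omega) in enumerate(controls):
        for ekf, track in zip(filters, candidate_tracks):
            ekf.predict(v, omega, dt)
            ekf.update(np.asarray(track[k + 1]))
    errors = [ekf.mean_innovation() for ekf in filters]
    best = int(np.argmin(errors))
    return best, filters[best].x
```

Because only the robot's own track is consistent with its control inputs, the mismatched candidate filters accumulate larger innovations over time, which is what makes the minimum-average-error selection possible without any identifying pattern.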