Multi-robot information sharing for complementing limited perception: A case study of moving ball interception

Poor sensor data, caused by uncertainty and hardware limitations, leads a robot to misinterpret the state of its surrounding environment, producing bad decisions and, eventually, failure to perform its intended tasks. These limitations can be overcome if a teammate robot with a better view shares its visual information. Our work investigates why current approaches fail to use teammate sensor data effectively, proposes an alternative in which a teammate helps capture the state of the environment more completely, and demonstrates that a robot makes better decisions when a teammate shares its perceptual data. Raw teammate sensor data is not meaningful without a relative geometric transform that places it within the receiving robot's own egocentric coordinate frame. Few existing approaches can recover this relative localization accurately in sparse environments while remaining computationally light. Our approach addresses these limitations by accumulating correspondence matches of objects over time from the overlapping views of two stationary robots to compute an accurate relative localization. We evaluate the benefits of teammate sensor data, combined with our computed relative localization, on a challenging, time-critical task in which a single robot's cameras alone are insufficient. Our empirical results with two coordinating robots indicate that our approach successfully takes advantage of a teammate with a better view, despite the challenging physical and hardware constraints of our robots.
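To make the correspondence-based alignment concrete, the sketch below shows one standard way to turn accumulated object matches into a relative transform: a least-squares rigid fit (Kabsch) in 2D. The abstract does not specify the solver, so this is an illustrative assumption, not the paper's method; the function name `estimate_relative_pose` and its arguments are hypothetical.

```python
import numpy as np

def estimate_relative_pose(points_a, points_b):
    """Estimate the 2D rigid transform (R, t) mapping robot B's
    egocentric frame into robot A's, from accumulated correspondences.

    points_a, points_b: (N, 2) arrays of matched object positions,
    where row i of each array is the same object (e.g., the ball at
    one time step) as seen by each robot in its own frame.
    """
    points_a = np.asarray(points_a, dtype=float)
    points_b = np.asarray(points_b, dtype=float)

    # Center both point sets on their centroids.
    mu_a = points_a.mean(axis=0)
    mu_b = points_b.mean(axis=0)
    a_c = points_a - mu_a
    b_c = points_b - mu_b

    # Least-squares rotation via SVD of the cross-covariance (Kabsch),
    # with a determinant check to exclude reflections.
    H = b_c.T @ a_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T

    # Translation that aligns the centroids after rotation.
    t = mu_a - R @ mu_b
    return R, t
```

Under this assumption, each robot would append matched observations to a growing buffer and re-solve once enough correspondences accumulate; a teammate's reported object position `p_b` is then mapped into the receiver's frame as `R @ p_b + t` before being fused into its world model.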