Cognitive integration of aerial and ground views in remote vehicle operations

Both unmanned air vehicles (UAVs) and unmanned ground vehicles (UGVs) are being used in increasingly complex roles as this technology matures, and many proposals envision UAVs and UGVs working together to achieve mission goals. Because spatial perception difficulties are commonly reported in UGV operations, it has been postulated that UGV operation could benefit from live aerial views in scenarios where these are available. Integrating air and ground views is a key cognitive task component of UGV operations, which usually require that navigation maps be integrated with imagery from ground-view cameras; whether or not aerial views are used, integration with map views is crucial. In this paper we discuss a series of experimental studies relevant to the cognitive integration of air and ground views in UGV scenarios. The studies indicate that the integration of map and ground camera views may be facilitated by the use of live aerial views, and that in urban search scenarios adding an aerial view to a UGV operator's task improves localization performance without any measurable negative consequences.