A human-centric approach to geospatial data fusion for real-time remotely controlled robotic platforms

Many modern applications deploy semi-autonomous robotic platforms that are remotely controlled by a human operator. Such tasks usually require rapid fusion of multi-sensor imagery and auxiliary geospatial data. Operational-control units, in particular, can be regarded as displays of decision-support systems, and the complexity of automated multi-domain geospatial data fusion motivates human-in-the-loop technologies that rely heavily on visual analytics. While numerous studies have investigated eye movements and attention on everyday scenes, little research has addressed an expert's eye movements and visual attention, specifically when an operator performs real-time visual data fusion to control and maneuver a remote unmanned robotic vehicle that acquires visual data with CCTV cameras in the visible, infrared, or other spectral bands and transmits these data to the operator over telemetry channels. In this paper, we investigate the applicability of eye-tracking technology to the numerical assessment of an operator's efficiency in fusing multi-sensor and multi-geometry visual data during real-time robotic control tasks.