SUAVE: Integrating UAV video using a 3D model

Controlling a team of Unmanned Aerial Vehicles (UAVs) requires the operator to perform continuous surveillance and path planning. The operator's situation awareness is likely to degrade as an increasing number of surveillance videos must be viewed and integrated. The Picture-in-Picture (PiP) display provides one solution for integrating video from multiple UAV cameras by allowing the operator to view each video feed in the context of the surrounding terrain. The experimental SUAVE (Simple Unmanned Aerial Vehicle Environment) display extends PiP methods by sampling imagery from the video stream to texture a 3D map of the terrain. The operator can then inspect this imagery using world-in-miniature (WIM) or fly-through methods. We investigate the properties and advantages of SUAVE in the context of a search mission with 11 UAVs and find a strong advantage for finding targets. Although performance was expected to improve with increasing numbers of UAVs, we did not find differences in performance between models generated by 11 UAVs and those generated by 22 UAVs.
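
The central step described above is draping sampled video frames onto the 3D terrain model as texture. The sketch below illustrates one way such a projection could work; it is a minimal illustration assuming a pinhole camera model with known UAV pose and intrinsics, and the function name and parameters are hypothetical rather than taken from the SUAVE implementation.

```python
# Illustrative sketch (not the authors' code): project a sampled video frame
# onto a terrain mesh by mapping each terrain vertex into the frame through
# an assumed pinhole model of the UAV's camera.
import numpy as np

def project_frame_onto_terrain(terrain_xyz, cam_pos, cam_R, fx, fy, cx, cy,
                               frame_w, frame_h):
    """Return per-vertex (u, v) texture coordinates and a visibility mask.

    terrain_xyz : (N, 3) world-space terrain vertices
    cam_pos     : (3,) UAV camera position in world coordinates
    cam_R       : (3, 3) rotation from world to camera coordinates
    fx, fy, cx, cy : pinhole intrinsics of the sampled video frame
    frame_w, frame_h : frame size in pixels
    """
    # Transform terrain vertices into the camera frame.
    pts_cam = (terrain_xyz - cam_pos) @ cam_R.T
    z = pts_cam[:, 2]
    in_front = z > 1e-6                      # ignore vertices behind the camera

    # Perspective projection onto the image plane (masked points get dummy z).
    safe_z = np.where(in_front, z, 1.0)
    u = fx * pts_cam[:, 0] / safe_z + cx
    v = fy * pts_cam[:, 1] / safe_z + cy

    # Only vertices that land inside the sampled frame receive this texture.
    visible = in_front & (u >= 0) & (u < frame_w) & (v >= 0) & (v < frame_h)

    # Normalize to [0, 1] texture coordinates for the terrain model.
    uv = np.stack([u / frame_w, v / frame_h], axis=1)
    return uv, visible
```

In a full system, frames sampled over time from multiple UAVs would need to be blended or prioritized per terrain region; that bookkeeping, along with occlusion handling, is omitted from this sketch.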

[1]  A. T. Tai,et al.  A human factors testbed for command and control of unmanned air vehicles , 2003, Digital Avionics Systems Conference, 2003. DASC '03. The 22nd.

[2]  Heath A. Ruff,et al.  Evaluation of Synthetic Vision Overlay Concepts for UAV Sensor Operations: Landmark Cues and Picture-in-Picture , 2006 .

[3]  James F. Blinn,et al.  Where am I? What am I looking at? (cinematography) , 1988, IEEE Computer Graphics and Applications.

[4]  Timothy I. Page Incorporating Scene Mosaics as Visual Indexes into UAV Video Imagery Databases , 2012 .

[5]  Bruce P. Hunn The Human Challenges of Command and Control with Multiple Unmanned Aerial Vehicles , 2005 .

[6]  J. Tittle,et al.  The Remote Perception Problem , 2002 .

[7]  Randy F. Pausch,et al.  Navigation and locomotion in virtual worlds via flight into hand-held miniatures , 1995, SIGGRAPH.

[8]  Doug A. Bowman,et al.  Travel in immersive virtual environments: an evaluation of viewpoint motion control techniques , 1997, Proceedings of IEEE 1997 Annual International Symposium on Virtual Reality.

[9]  Michael A. Goodrich,et al.  Comparing Situation Awareness for Two Unmanned Aerial Vehicle Human Interface Approaches , 2006 .

[10]  L Gugerty,et al.  Seeing where you are heading: integrating environmental and egocentric reference frames in cardinal direction judgments. , 2001, Journal of experimental psychology. Applied.

[11]  Mark H. Draper,et al.  Synthetic vision system for improving unmanned aerial vehicle operator situation awareness , 2005, SPIE Defense + Commercial Sensing.

[12]  Supun Samarasekera,et al.  Aerial video surveillance and exploitation , 2001, Proc. IEEE.

[13]  J. Adams,et al.  A Picture-in-Picture Interface for a Multiple Robot System , 2007 .

[14]  Laurel D. Riek,et al.  A decomposition of UAV-related situation awareness , 2006, HRI '06.

[15]  David S Alberts,et al.  Network Centric Warfare: Developing and Leveraging Information Superiority , 1999 .

[16]  Michael A. Goodrich,et al.  Ecological Interfaces for Improving Mobile Robot Teleoperation , 2007, IEEE Transactions on Robotics.

[17]  Mica R. Endsley,et al.  Measurement of Situation Awareness in Dynamic Systems , 1995, Hum. Factors.

[18]  Shafiq Abedin,et al.  Scalable target detection for large robot teams , 2011, 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[19]  Paul Milgram,et al.  Viewpoint Animation With a Dynamic Tether for Supporting Navigation in a Virtual Environment , 2009, Hum. Factors.

[20]  M. Cummings Management of Multiple Dynamic Human Supervisory Control Tasks for UAVs , 2005 .

[21]  Heath A. Ruff,et al.  Advanced Display Concepts for Uav Sensor Operations: Landmark Cues and Picture-in-Picture , 2006 .

[22]  Randal W. Beard,et al.  Semi-autonomous human-UAV interfaces for fixed-wing mini-UAVs , 2004, 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566).

[23]  Timo Partala,et al.  Controlling a Single 3D Object: Viewpoint Metaphors, Speed and Subjective Satisfaction , 1999, INTERACT.

[24]  S. Hart,et al.  Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research , 1988 .

[25]  Mica R. Endsley,et al.  Direct Measurement of Situation Awareness: Validity and Use of SAGAT , 2000 .

[26]  Shumin Zhai,et al.  Applications of augmented reality for human-robot communication , 1993, Proceedings of 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '93).