STAC: a comprehensive sensor fusion model for scene characterization

We are interested in data fusion strategies for Intelligence, Surveillance, and Reconnaissance (ISR) missions. Advances in theory, algorithms, and computational power have made it possible to extract rich semantic information from a wide variety of sensors, but these advances have raised new challenges in fusing the data. For example, in developing fusion algorithms for moving target identification (MTI) applications, what is the best way to combine image data having different temporal frequencies, and how should contextual information acquired from monitoring cell phones or from human intelligence be introduced? In addressing these questions, we found that existing data fusion models do not readily facilitate comparison of fusion algorithms performing such complex information extraction, so we developed a new model that does. Here we present the Spatial, Temporal, Algorithm, and Cognition (STAC) model. STAC describes the progression of multi-sensor raw data through increasing levels of abstraction and makes fusion strategies easy to compare. It supports unambiguous description of how multi-sensor data are combined, which computational algorithms are used, and how scene understanding is ultimately achieved. In this paper, we describe and illustrate the STAC model and compare it to existing models.
