JDL level 0 and 1 algorithms for processing and fusion of hard sensor data

A current trend in information fusion involves distributed methods for combining conventional "hard" sensor data with human-generated "soft" information in a manner that exploits the most useful and accurate capabilities of each modality. In addition, new and evolving technologies such as Flash LIDAR have greatly enhanced the ability of a single device to rapidly sense attributes of a scene in ways that were not previously possible. At the Pennsylvania State University we are participating in a Multidisciplinary University Research Initiative (MURI) program, funded by the U.S. Army Research Office, to investigate issues related to fusing hard and soft data in counterinsurgency (COIN) situations. We are developing level 0 and level 1 methods (in the sense of the Joint Directors of Laboratories (JDL) data fusion process model) for fusion of physical ("hard") sensor data. Techniques include methods for data alignment, tracking, recognition, and identification for a sensor suite comprising LIDAR, multi-camera systems, and acoustic sensors. The goal is to develop methods that dovetail with ongoing research in soft sensor processing. This paper describes various hard sensor processing algorithms and their evolving roles and implementations within a distributed hard and soft information fusion system.
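JDL level 1 processing centers on estimating the state of individual entities (tracks) from a stream of sensor reports. As a minimal illustration of one standard building block of such tracking, not the authors' specific implementation, the sketch below runs a constant-velocity Kalman filter over noisy 1-D position reports; the function name, noise parameters, and measurement values are all illustrative assumptions.

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-2, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter
    for a single target in 1-D; state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
    H = np.array([[1.0, 0.0]])                   # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])          # process-noise covariance
    R = np.array([[r]])                          # measurement-noise covariance

    # Predict step: propagate the state and its uncertainty forward.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update step: correct the prediction with the measurement z.
    y = z - H @ x                                # innovation (residual)
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target moving at roughly unit velocity from noisy position reports.
x = np.array([0.0, 0.0])   # initial state estimate [position, velocity]
P = np.eye(2)              # initial state covariance
for z in [1.0, 2.1, 2.9, 4.0, 5.1]:
    x, P = kalman_step(x, P, np.array([z]))
```

After a few updates the velocity estimate settles near the true value even though velocity is never observed directly; in a multi-sensor suite like the one described above, the same predict/update structure is typically paired with a data-association stage that assigns each report to a track before the update is applied.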
