Context Aware Information Delivery for Mobile Devices

Delivering the right amount of information to the right person is vital on the tactical battlefield. With the increasing use of mobile devices by the military, delivering relevant information to the warfighter instantaneously becomes possible. However, large quantities of data are generated constantly, while human processing capacity and communication channels are limited. Data must therefore be processed so that it can be evaluated against operational needs. This data is collected in multiple modalities, including images, videos, and field reports with multi-sensor data. Automated processing of unstructured information promises to connect information processing with operational decision making, dramatically reducing the time needed to identify relevant information for mission planning and execution. We describe a multi-view learning technique that augments the feature set used by a classifier in one modality with entity relationships discovered in other modalities. To accommodate the limited computation power of field devices, mostly handhelds, the multi-view learning algorithm has low computational complexity. It applies across multiple modalities, leveraging many-to-many correspondences among them. Experiments on image and text data show more than 20% improvement over categorizing text or images independently. The categorized information is then matched against mission and task needs, and the relevant information is transmitted over limited bandwidth negotiated from constrained network resources.
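
To make the cross-modal feature-augmentation idea concrete, the following is a minimal sketch, not the paper's actual algorithm: a text report's bag-of-words vector is extended with indicator features for entities detected in images linked to the same report. The vocabularies, entity names, and example data are all hypothetical.

```python
# Minimal sketch of multi-view feature augmentation (hypothetical names and
# data; the paper's actual algorithm and datasets are not reproduced here).
# A report's bag-of-words vector is extended with indicators for entities
# detected in images linked to the same field report, so a text classifier
# can exploit entity relationships discovered in the image modality.

from collections import Counter

# Hypothetical vocabularies for each modality.
TEXT_VOCAB = ["convoy", "checkpoint", "market", "vehicle"]
IMAGE_ENTITIES = ["truck", "crowd", "roadblock"]

def text_features(tokens):
    """Bag-of-words counts over the text vocabulary."""
    counts = Counter(tokens)
    return [counts[w] for w in TEXT_VOCAB]

def augmented_features(tokens, linked_image_entities):
    """Augment text features with binary indicators for entities found in
    linked images. Correspondences are many-to-many: one report may link
    several images, and one image may be linked to several reports."""
    base = text_features(tokens)
    indicators = [1 if e in linked_image_entities else 0 for e in IMAGE_ENTITIES]
    return base + indicators

# Example: a field report and the entities detected in its linked images.
report_tokens = ["convoy", "stopped", "at", "checkpoint", "convoy"]
detected = {"truck", "roadblock"}
print(augmented_features(report_tokens, detected))
# -> [2, 1, 0, 0, 1, 0, 1]
```

The augmented vector can then be fed to any standard low-complexity classifier, which keeps the approach feasible on handheld field devices.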
