Adaptive Multi-Modal Data Mining and Fusion for Autonomous Intelligence Discovery

Abstract: This effort addressed the autonomous discovery of relevant information in massive, complex, dynamic text and imagery streams. We began development of a prototype system to mine, filter, and fuse multi-modal data streams and to interact dynamically with analysts, improving their efficiency through feedback and autonomous adaptation of the algorithms. The plan was to implement four core capabilities: 1) text and image mining for feature extraction; 2) multi-modal data fusion; 3) agent-based adaptive information filtering; and 4) cognitively friendly information visualization. The first phase of the work focused on multilingual text search systems and on geospatial mapping of documents and images.
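The adaptive filtering loop described above, in which analyst feedback steers the retrieval algorithm, could be realized in many ways; one classical technique is Rocchio relevance feedback, sketched below as a minimal illustration. All names and weights here are hypothetical choices for the sketch, not taken from the proposed system.

```python
from collections import Counter
import math

def tf_vector(text):
    # Hypothetical simple representation: term-frequency vector
    # over whitespace tokens (a real system would use richer features).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rocchio_update(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    # Rocchio relevance feedback: move the query vector toward
    # documents the analyst marked relevant and away from those
    # marked non-relevant. alpha/beta/gamma are conventional
    # example weights, not values from the proposal.
    updated = Counter()
    for t, w in query.items():
        updated[t] += alpha * w
    for doc in relevant:
        for t, w in doc.items():
            updated[t] += beta * w / len(relevant)
    for doc in nonrelevant:
        for t, w in doc.items():
            updated[t] -= gamma * w / len(nonrelevant)
    # Keep only positive weights, as is standard for Rocchio.
    return Counter({t: w for t, w in updated.items() if w > 0})

# Example: after one round of feedback, the query moves closer
# to the document the analyst judged relevant.
docs = [tf_vector("illegal border crossing reported near checkpoint"),
        tf_vector("weather report sunny skies expected")]
query = tf_vector("border crossing")
updated = rocchio_update(query, relevant=[docs[0]], nonrelevant=[docs[1]])
```

The same loop would iterate: the updated query re-ranks the stream, the analyst labels more items, and the filter adapts again, which is the "feedback and autonomous adaptation" behavior the abstract names.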