A smart fusion framework for multimodal object, activity and event detection
With the increasing diffusion of wearable technologies and mobile sensor systems, and the entrenchment of social media networks and crowdsourced information systems in every aspect of modern society, continuous, pervasive and ubiquitous sensing, monitoring, surveillance and detection of every type of object, activity, event and incident at a global scale has become an unavoidable reality. This rapid proliferation offers immense opportunities to exploit comprehensive information from a diverse array of multimodal, multi-view and multisensory data streams for developing efficient and robust automated, computer-based decision support systems. Moreover, with the availability of complementary and supplementary information in the form of auxiliary metadata from social networks, human experts and crowdsourced communities, better actionable intelligence can be obtained from these systems. In this paper, we propose a novel computational framework that addresses the gap between these heterogeneous data sources and actionable intelligence. The proposed smart fusion framework focuses on combining heterogeneous, multimodal real-time big data streams, carrying information from different types of sensors, with auxiliary information drawn from human experts and opinion scores in the loop; this allows synergistic fusion to be achieved, leading to better actionable intelligence from computer-based decision support systems. The details of the framework's implementation on a component-based software platform, msifStudio, and its evaluation on several use-case application scenarios are presented here.
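To make the idea of fusing per-modality detections with human opinion scores concrete, the following is a minimal illustrative sketch of weighted late fusion; it is not the msifStudio implementation, and all names and example values (fuse_scores, the modality labels, the weights) are hypothetical, chosen only to show one way expert opinion scores could weight heterogeneous sensor streams.

```python
# Illustrative sketch only: weighted late fusion of per-modality detection
# scores, with human-expert opinion scores acting as modality weights.
# Function names, modality labels and example values are hypothetical.

from typing import Dict


def fuse_scores(modality_scores: Dict[str, float],
                opinion_weights: Dict[str, float]) -> float:
    """Combine per-modality detection scores into a single fused score.

    modality_scores: confidence in [0, 1] reported by each modality's detector.
    opinion_weights: relative trust in each modality, e.g. derived from
                     human-expert opinion scores; they need not sum to 1.
    """
    total_weight = sum(opinion_weights.get(m, 0.0) for m in modality_scores)
    if total_weight == 0.0:
        return 0.0
    weighted = sum(score * opinion_weights.get(m, 0.0)
                   for m, score in modality_scores.items())
    return weighted / total_weight


if __name__ == "__main__":
    # Hypothetical event detection: three modalities observe the same scene.
    scores = {"video": 0.82, "audio": 0.40, "social_media_text": 0.65}
    # Expert opinion scores favour the video stream for this use case.
    weights = {"video": 0.6, "audio": 0.1, "social_media_text": 0.3}
    print(f"fused event score: {fuse_scores(scores, weights):.3f}")
```

In this sketch the opinion scores simply rescale each modality's contribution before averaging; a richer fusion scheme (e.g., learned weights or decision-level rules) could replace the weighted mean without changing the overall human-in-the-loop pattern described above.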