Multimodal workbench for automatic surveillance applications

Notable progress has recently been made in designing automated multimodal smart processes that increase security in people's everyday lives. As these developments continue, suitable infrastructures and methodologies become more important for handling the demands that inevitably arise, such as the large volumes of data and computation involved. In this research, we introduce a multimodal framework that supports an automatic surveillance application. The novelty of the approach lies in modalities that make data manipulation a natural process while still keeping overall performance high. At the application level, the complexity typical of emerging distributed multimodal systems is reduced in a transparent manner through multimodal frameworks that handle data at different abstraction levels and efficiently accommodate the constituent technologies. The proposed specification includes the use of shared memory spaces (XML data spaces) and smart document-centered, content-based data querying mechanisms (the XQuery formal language (Boag et al., 2007)). We also report on the use of this framework in an application for aggression detection in train compartments.
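The data-space idea from the abstract can be sketched as follows: sensor modules publish XML observation documents into a shared space, and consumers retrieve them by content rather than by address. This is a minimal illustration, not the paper's actual implementation; the `<events>`/`<event>` document structure and the `query_events` helper are hypothetical, and Python's standard-library XPath subset stands in for the full XQuery language the framework specifies.

```python
# Sketch of content-based querying over a shared XML data space.
# The schema below is an invented example; the real framework uses
# XQuery over XML data spaces rather than this XPath subset.
import xml.etree.ElementTree as ET

# A toy "XML data space": sensor modules publish observation documents here.
DATA_SPACE = ET.fromstring("""
<events>
  <event modality="audio" label="shouting" confidence="0.82" compartment="3"/>
  <event modality="video" label="fighting" confidence="0.67" compartment="3"/>
  <event modality="audio" label="talking"  confidence="0.95" compartment="1"/>
</events>
""")

def query_events(root, label):
    """Return (modality, confidence) pairs for events matching a label."""
    return [(e.get("modality"), float(e.get("confidence")))
            for e in root.findall(f".//event[@label='{label}']")]

print(query_events(DATA_SPACE, "shouting"))
```

The point of the content-based query is that an aggression-detection consumer never needs to know which sensor produced an observation; it selects documents purely by their content, which is what decouples the modalities in the framework.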

[1] C. M. Sperberg-McQueen et al., "World Wide Web Consortium," in Encyclopedia of Database Systems, 2009.

[2] S. Boag et al., "XQuery 1.0: An XML Query Language," 2007.

[3] E. van der Vlist, XML Schema, 2002.

[4] D. Datcu et al., "The recognition of emotions from speech using GentleBoost classifier: A comparison approach," 2006.

[5] J. Melton et al., "XML Schema," SIGMOD Record, 2003.

[6] L. J. M. Rothkrantz et al., "Facial Expression Recognition with Relevance Vector Machines," IEEE International Conference on Multimedia and Expo, 2005.