GOOSE: semantic search on internet connected sensors

More and more sensors are becoming Internet connected. Examples include cameras on cell phones, CCTV cameras for traffic control, and dedicated security and defense sensor systems. Due to the steadily increasing data volume, human exploitation of all this sensor data for effective mission execution is impossible. Smart access to all sensor data acts as an enabler for questions such as “Is there a person behind this building?” or “Alert me when a vehicle approaches”. The GOOSE concept has the ambition to provide the capability to search semantically for any relevant information within “all” (including imaging) sensor streams in the entire Internet of sensors. This is similar to the capability provided by present-day Internet search engines, which enable the retrieval of information from “all” web pages on the Internet. In line with current Internet search engines, any indexing services shall be utilized cross-domain. The two main challenges for GOOSE are the semantic gap and scalability. The GOOSE architecture consists of five elements: (1) online extraction of primitives from each sensor stream; (2) an indexing and search mechanism for these primitives; (3) an ontology-based semantic matching module; (4) a top-down hypothesis verification mechanism; and (5) a controlling man-machine interface. This paper reports on the initial GOOSE demonstrator, which consists of the MES multimedia analysis platform and the CORTEX action recognition module. It also provides an outlook on future GOOSE development.
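As a purely illustrative sketch (all names and data structures below are invented for illustration; the paper does not specify an API), the five architecture elements can be read as a pipeline in which bottom-up primitive extraction and indexing are complemented by top-down semantic matching and hypothesis verification:

```python
# Hypothetical sketch of the five-element GOOSE pipeline. Function names,
# the toy ontology, and the primitive format are assumptions, not the
# actual GOOSE implementation.

def extract_primitives(sensor_stream):
    # (1) Online extraction of primitives from each sensor stream
    # (here: a trivial "person" detection per frame).
    return [{"type": "person", "frame": i} for i, _ in enumerate(sensor_stream)]

def index_primitives(primitives):
    # (2) Indexing and search mechanism: group primitives by type
    # so queries can retrieve them without scanning every frame.
    index = {}
    for p in primitives:
        index.setdefault(p["type"], []).append(p)
    return index

def semantic_match(query, index):
    # (3) Ontology-based semantic matching: map a query concept onto
    # the primitive vocabulary used by the index (toy ontology).
    ontology = {"human": "person", "car": "vehicle"}
    concept = ontology.get(query, query)
    return index.get(concept, [])

def verify_hypotheses(candidates):
    # (4) Top-down hypothesis verification: keep only candidates that
    # pass a plausibility check (trivial placeholder check here).
    return [c for c in candidates if c["frame"] >= 0]

def answer_query(query, sensor_stream):
    # (5) The man-machine interface ties the chain together,
    # turning a user query into verified detections.
    primitives = extract_primitives(sensor_stream)
    index = index_primitives(primitives)
    return verify_hypotheses(semantic_match(query, index))
```

In this toy setting, `answer_query("human", ["frame0", "frame1"])` bridges the semantic gap via the ontology lookup ("human" maps to the indexed primitive "person") and returns both frame hits.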
