Multimodal multisensor activity annotation tool

In this paper we describe a multimodal, multisensor annotation tool for physiological computing; for example, mobile gesture-based interaction devices or health monitoring devices can be connected. The tool is intended as an expert authoring environment for annotating multiple video-aligned sensor streams with domain-specific activities. The resulting datasets can serve as labeled training data for new machine learning tasks. Our tool provides connectors to commercially available sensor systems (e.g., the Intel RealSense F200 3D camera, Leap Motion, and Myo) and a graphical user interface for annotation.
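To illustrate the idea of turning expert annotations over synchronized sensor streams into supervised training data, the following is a minimal sketch; the class and field names (SensorStream, Annotation, Recording, to_supervised_examples) are hypothetical and not taken from the tool described in the paper.

```python
# Hypothetical sketch: illustrates how time-interval annotations over
# video-aligned sensor streams could be turned into (features, label) pairs.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class SensorStream:
    """Time-stamped samples from one device (e.g., Leap Motion or Myo)."""
    device: str                        # e.g., "Myo"
    samples: List[Tuple[float, list]]  # (timestamp in seconds, feature vector)


@dataclass
class Annotation:
    """A domain-specific activity label over a time interval on the video timeline."""
    label: str    # e.g., "grasp_object"
    start: float  # seconds
    end: float    # seconds


@dataclass
class Recording:
    """One annotated session combining several video-aligned sensor streams."""
    streams: Dict[str, SensorStream] = field(default_factory=dict)
    annotations: List[Annotation] = field(default_factory=list)

    def to_supervised_examples(self) -> List[Tuple[dict, str]]:
        """Cut each stream at annotation boundaries to build labeled examples."""
        examples = []
        for ann in self.annotations:
            window = {
                name: [x for t, x in stream.samples if ann.start <= t <= ann.end]
                for name, stream in self.streams.items()
            }
            examples.append((window, ann.label))
        return examples
```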
