Pen Gestures in Online Map and Photograph Annotation Tasks

The recognition of pen gestures for map-based navigation and annotation is a difficult problem, especially when users are unconstrained in the gesture repertoires they can use. This paper reports on a study to develop a taxonomy of pen-gesture shape in the context of multi-modal crisis management applications. A human-factors experiment was conducted to acquire domain-specific data. A hierarchical categorisation of the data was produced, which confirmed our expectation that three broad classes can be distinguished: deictic gestures, hand-written text and drawn objects. Since users were requested to annotate maps and photographs, most gestures belonged to the deictic category, indicating locations, routes and events. Based on the acquired data, the most suitable geometric features for recognition of the different classes were explored. Results show that the majority of gestures were recognised correctly. We expect that the results from this study can be generalised to other domains that use pen-based 'interactive maps'.
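To illustrate the kind of geometric-feature approach the abstract refers to, the sketch below computes two simple features of a pen stroke (path length and bounding-box diagonal) and uses their ratio to assign one of the three broad classes. The features, thresholds and class rules here are purely illustrative assumptions, not the features or classifier reported in the paper.

```python
import math

def geometric_features(stroke):
    """Compute simple shape features for a pen stroke.

    stroke: list of (x, y) points sampled along the pen trajectory.
    Returns path length, bounding-box diagonal, and their ratio
    ("straightness": ~1 for straight strokes, large for dense,
    handwriting-like scribbles).
    """
    path_len = sum(math.dist(p, q) for p, q in zip(stroke, stroke[1:]))
    xs, ys = zip(*stroke)
    diag = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    straightness = path_len / diag if diag > 0 else float("inf")
    return path_len, diag, straightness

def classify(stroke):
    """Toy rule-based classifier over the three broad classes.

    Thresholds are invented for illustration only.
    """
    _, _, straightness = geometric_features(stroke)
    if straightness < 1.2:   # nearly straight: line/arrow pointing at a location or route
        return "deictic"
    if straightness > 4.0:   # tightly folded single stroke: handwriting-like
        return "text"
    return "drawn object"    # moderately curved: sketched shape, e.g. a circled area
```

A real recogniser would of course use richer feature sets and trained classifiers over multi-stroke input; this only shows how coarse geometric measures can already separate the three classes on clean single strokes.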