Touch2Annotate: generating better annotations with less human effort on multi-touch interfaces

Annotation is essential for effective visual sensemaking. For multidimensional data, most existing annotation approaches require users to manually type notes recording the semantic meaning of their findings. This demands considerable effort from multi-touch interface users, who often suffer from low typing speeds and high error rates. To reduce typing effort and improve the quality of the generated annotations, we propose a new approach that semi-automatically generates annotations with rich semantic meaning on multidimensional visualizations. We have implemented a working prototype of this approach, named Touch2Annotate, on a tabletop, and we present a usage scenario that demonstrates its effectiveness.
