LiquidText: a flexible, multitouch environment to support active reading

Active reading, which involves activities such as highlighting and note-taking, is an important part of knowledge workers' work. Most computer-based active reading support seeks to replicate the affordances of paper, yet paper has its own limitations and is in many ways inflexible. In this paper we introduce LiquidText, a computer-based active reading system that takes a fundamentally different approach, offering a flexible, fluid document representation built on multitouch input, with a range of interaction techniques designed to facilitate the activities of active reading. We report here on our design for LiquidText, its interactions and gesture vocabulary, and our design process, including formative user evaluations which helped shape the final system.
