Screenfinity: extending the perception area of content on very large public displays

We propose and validate a model of the perception area of content on public displays that predicts from where users can read. From this model we derive Screenfinity, a technique that rotates, translates, and zooms content to enable reading while passing by very large displays. Screenfinity is comfortable to read up close, supports different content for different users, does not waste screen real estate, and allows expert passers-by to read content while walking. A laboratory study shows that expert users can perceive content while it moves. A field study evaluates the effect of Screenfinity on novice users in an ecologically valid setting. We find that 1) first-time users can read content without slowing down or stopping; and 2) passers-by who did stop did so to explore the technology: they explored the interaction and the limits of the system, manipulated the technology, and looked behind the screen.