Predicting mid-air gestural interaction with public displays based on audience behaviour

Abstract

Knowledge about the expected interaction duration and the expected distance from which users will interact with public displays can be useful in many ways. For example, knowing upfront that a certain setup will lead to shorter interactions can nudge space owners to alter the setup. If a system can predict that incoming users will interact from a distance for a short time, it can accordingly show shorter versions of content (e.g., videos/advertisements) and employ at-a-distance interaction modalities (e.g., mid-air gestures). In this work, we propose a method for building models that predict users’ interaction duration and distance in public display environments, focusing on mid-air gestural interactive displays. First, we report findings from a field study showing that multiple variables, such as audience size and behaviour, significantly influence interaction duration and distance. We then train predictor models on contextual data based on the same variables. By applying our method to a mid-air gestural interactive public display deployment, we build a model that predicts interaction duration with an average error of about 8 s and interaction distance with an average error of about 35 cm. We discuss how researchers and practitioners can use our work to build their own predictor models, and how they can use them to optimise their deployments.
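To make the abstract's pipeline concrete, the sketch below trains two regressors, one per target (interaction duration in seconds, interaction distance in centimetres), on contextual audience features. This is a minimal illustration only: the feature set (audience size, number of onlookers, hour of day, passer-by speed), the toy data, and the choice of random forests are assumptions for the example, not the paper's actual model or dataset.

```python
# Minimal sketch of a duration/distance predictor for a public display,
# assuming hypothetical contextual features; data and model choice are
# illustrative, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical contextual features per interaction session:
# [audience_size, n_onlookers, hour_of_day, passer_by_speed_m_per_s]
X = np.array([
    [3, 1, 14, 1.2],
    [0, 0,  9, 1.5],
    [5, 2, 18, 0.9],
    [1, 0, 12, 1.4],
    [4, 3, 17, 1.0],
    [2, 1, 11, 1.3],
])
# Toy targets: interaction duration (s) and distance from the display (cm).
y_duration = np.array([25.0, 60.0, 12.0, 45.0, 18.0, 30.0])
y_distance = np.array([180.0, 120.0, 250.0, 150.0, 220.0, 170.0])

# One regressor per target, mirroring the two predictions the abstract
# reports (duration with ~8 s average error, distance with ~35 cm).
duration_model = RandomForestRegressor(n_estimators=100, random_state=0)
distance_model = RandomForestRegressor(n_estimators=100, random_state=0)

# Estimate mean absolute error via cross-validation; scikit-learn returns
# negated scores for error metrics, so flip the sign.
dur_mae = -cross_val_score(duration_model, X, y_duration,
                           scoring="neg_mean_absolute_error", cv=3).mean()
dist_mae = -cross_val_score(distance_model, X, y_distance,
                            scoring="neg_mean_absolute_error", cv=3).mean()
print(f"duration MAE: {dur_mae:.1f} s, distance MAE: {dist_mae:.1f} cm")
```

In a real deployment, features of this kind could be logged automatically (e.g., from a depth camera tracking the audience in front of the display) and the mean absolute error compared against the ~8 s and ~35 cm figures reported above.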
