Exploring self-interruptions as a strategy for regaining the attention of distracted users

In this paper we present a first exploratory study investigating the effects of a contingently self-interrupting versus a non-self-interrupting virtual agent that conveys information to a human interaction partner in a smart home environment. We tested the hypothesis that self-interruptions are an effective strategy for retaining the user's attention, as measured by post-interaction information recall. Interestingly, our experiment does not confirm this hypothesis; moreover, users found the self-interruption strategy less likeable. From our observations, we derive suggestions for future implementations of attention-retention strategies.
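
To make the contingent strategy concrete, the following is a minimal Python sketch of how a self-interrupting delivery loop might work: the agent speaks in increments, pauses with a hesitation cue when the listener's attention is lost, and resumes once attention returns. This is an illustrative assumption, not the authors' implementation; `user_is_attentive`, the chunked delivery, and the hesitation/resumption markers are all placeholders for exposition.

```python
import time

def user_is_attentive() -> bool:
    """Stub: in a real system this would come from a gaze or
    attention estimator; here it always reports attention."""
    return True

def deliver_contingently(chunks, poll_interval=0.2):
    """Speak an utterance chunk by chunk, self-interrupting whenever
    the listener's attention is lost and resuming once it returns."""
    for chunk in chunks:
        # Before each increment, check whether the user is still attending.
        if not user_is_attentive():
            print("... uh ...")  # self-interruption / hesitation cue
            while not user_is_attentive():
                time.sleep(poll_interval)  # hold the floor until re-engagement
            print("As I was saying,")  # resumption marker
        print(chunk)  # deliver the next information increment

deliver_contingently([
    "The washing machine has finished.",
    "Also, the front door is unlocked.",
])
```

The design choice here is that the interruption check happens at increment boundaries rather than mid-word, which keeps the interrupted utterance recoverable and makes the resumption point well defined.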
