The Willful Marionette: Exploring Responses to Embodied Interaction

This paper explores how participants constructed and re-constructed their relationship with an interactive art installation. The piece, the willful marionette, was developed in a collaboration between artists and researchers. It explores the dynamics of non-verbal communication through a stringed marionette, built from a scanned image of a human figure, that responds to human movement. The intent was to challenge participants' expectations about communication, intelligence, emotion, and the social role of the body. Using semi-structured interviews and a mix of qualitative and quantitative methods, we show that participants' descriptions of their interactions vary along two axes: whether they or the marionette was perceived as leading the interaction, and whether they adopted a social or a technical mindset. We then present implications for the design of both goal-directed and expressive embodied intelligent systems. The willful marionette has since been acquired by the Smithsonian American Art Museum as an example of cutting-edge art engaging with contemporary issues of machine intelligence and the social role of our bodies.
