Viewpoint-based legibility optimization

Much robotics research has focused on intent-expressive (legible) motion. However, algorithms that can autonomously generate legible motion have implicitly made the strong assumption of an omniscient observer, with access to the robot's configuration as it changes across time. In reality, human observers have a particular viewpoint, which biases the way they perceive the motion. In this work, we free robots from this assumption and introduce the notion of an observer with a specific point of view into legibility optimization. In doing so, we account for two factors: (1) depth uncertainty induced by a particular viewpoint, and (2) occlusions along the motion, during which (part of) the robot is hidden behind some object. We propose viewpoint and occlusion models that enable autonomous generation of viewpoint-based legible motions, and show through large-scale user studies that the produced motions are significantly more legible compared to those generated assuming an omniscient observer.
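To make the two factors concrete, the following sketch contrasts legibility computed by an omniscient observer against one computed from a specific viewpoint. It uses the exponential-cost goal-inference model of Dragan et al. (probability of a goal given a trajectory prefix) and a simple orthographic projection to model depth loss; the function names, the straight-line cost proxy, and the camera model are illustrative assumptions, not the paper's exact formulation.

```python
import math

def path_cost(points):
    """Sum of Euclidean segment lengths along a path (a crude efficiency cost)."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def goal_probability(prefix, goal, goals, start):
    """P(goal | trajectory prefix) under the exponential-cost model:
    prefixes that look efficient toward a goal make that goal more probable.
    Illustrative sketch only -- straight-line distance stands in for the
    optimal cost-to-go."""
    def score(g):
        # cost incurred so far plus cost-to-go, relative to the optimal
        # cost of heading straight from the start to g
        c = path_cost(prefix) + math.dist(prefix[-1], g)
        return math.exp(-(c - math.dist(start, g)))
    return score(goal) / sum(score(g) for g in goals)

def project(p):
    """Orthographic camera for an observer looking down the z-axis:
    depth (z) is lost in the image plane."""
    return (p[0], p[1])

# Two candidate goals that differ only in depth relative to the observer.
start = (0.0, 1.0, 0.0)
goals = [(1.0, 0.0, 0.0), (1.0, 0.0, 2.0)]
prefix = [start, (0.4, 0.6, 0.0)]  # moves toward the first goal in 3D

# Omniscient observer: reasons over the full 3D configuration.
p_omniscient = goal_probability(prefix, goals[0], goals, start)

# Viewpoint-based observer: reasons over the projected (2D) motion,
# where both goals land on the same image point.
p_viewpoint = goal_probability(
    [project(q) for q in prefix],
    project(goals[0]),
    [project(g) for g in goals],
    project(start),
)
```

Here the omniscient model assigns the intended goal more than half the probability mass, while the projected model returns exactly 0.5 for either goal: the two goals are indistinguishable in the image plane, which is precisely the depth ambiguity that viewpoint-based legibility must optimize around.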
