The saliency of anomalies in animated human characters

Virtual characters are in high demand for animated movies, games, and other applications. Rapid advances in performance capture and rendering techniques have allowed the movie industry in particular to create characters that appear very human-like. However, with these new capabilities has come the realization that such characters are still not quite “right.” One hypothesis is that these virtual humans fall into an “Uncanny Valley,” where the viewer's emotional response is repulsion or rejection rather than the empathy or emotional engagement their creators had hoped for. To explore these issues, we created three animated vignettes of an arguing couple with detailed motion for the face, eyes, hair, and body. In a set of perceptual experiments, we examined the relative importance of different anomalies using two methods: a questionnaire to determine the emotional response to the full-length vignettes, with and without facial motion and audio; and a two-alternative forced-choice (2AFC) task to compare the performance of a virtual “actor” in short clips (extracts from the vignettes) depicting a range of facial and body anomalies. We found that facial anomalies are particularly salient, even when very significant body animation anomalies are present.
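The 2AFC comparisons lend themselves to a simple summary analysis. The sketch below is not from the paper; the condition names and counts are hypothetical. It shows one common way to aggregate such responses: count how often viewers preferred the anomaly-free clip in each condition and test that proportion against chance (0.5) with an exact two-sided binomial test.

    from math import comb

    def binomial_two_sided_p(k, n, p=0.5):
        # Exact two-sided binomial test: sum the probabilities of all outcomes
        # that are no more likely than the observed count k.
        pmf = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
        return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-7))

    # Hypothetical counts: (times the anomaly-free clip was chosen, total trials).
    responses = {
        "facial_anomaly": (178, 200),
        "body_anomaly": (121, 200),
    }

    for condition, (chosen, total) in responses.items():
        proportion = chosen / total
        p_value = binomial_two_sided_p(chosen, total)
        print(f"{condition}: preference for unmodified clip = {proportion:.2f}, p = {p_value:.3g}")

A preference proportion reliably above 0.5 would indicate that viewers detect (and penalize) the anomaly; comparing proportions across conditions gives a rough measure of each anomaly's relative saliency.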
