Investigating rejection behavior in the Ultimatum Game as a measure of anthropomorphism

In this work I investigate the possibility of using rejection behavior in the Ultimatum Game (UG) as a measure of anthropomorphism. Other measures of anthropomorphism have been proposed, mostly in the form of surveys, but these capture only one aspect of anthropomorphism: users’ rational, conscious ideas about a piece of technology. More implicit measures are needed to assess users’ subconscious and behavioral responses to technology. The literature indicates that in the UG, players are willing to incur a cost to themselves in order to punish another player for unfair behavior by rejecting an offer. To examine whether and how UG behavior could serve as a measure of anthropomorphism, I investigated how human receivers respond to robotic players in the UG and how this compares to the way they respond to computer and human players. The results of this study show that responses to human, robotic, and computer players do not differ significantly. This contrasts with earlier findings that offers from computers are rejected less often than offers from human opponents. The discrepancy may stem from the methodologies used: in the earlier studies, subjects were deliberately put into a less anthropomorphic mindset when playing against computer opponents, whereas in the current study the instructions were the same for all types of opponents. UG rejection behavior thus appears to vary with the level of anthropomorphism induced by the experimental framing, which suggests that it can indeed be used as a measure of anthropomorphism. In addition, the results show that although subjects report seeing computers and robots as less intentional and responsible than humans, these attitudes are not reflected in their behavior toward robots and computers in the UG. This supports the idea that self-reported anthropomorphism is not the same as implicit anthropomorphism. Implications for HRI and HCI practice are discussed.
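To make the UG mechanic concrete, the following is a minimal sketch of the game's payoff structure in Python. The threshold-based rejection rule, the stake size, and all parameter names are illustrative assumptions for exposition; they are not the study's actual experimental design or analysis code.

```python
# Minimal sketch of the Ultimatum Game (UG) payoff structure.
# The rejection threshold and stake size are hypothetical values,
# chosen only to illustrate costly punishment of unfair offers.

from dataclasses import dataclass

STAKE = 10  # total amount to be divided between the two players (assumed)


@dataclass
class Outcome:
    proposer_payoff: int
    responder_payoff: int
    rejected: bool


def play_round(offer: int, rejection_threshold: int) -> Outcome:
    """One UG round: the proposer offers `offer` out of STAKE and keeps
    the rest; the responder accepts or rejects. Rejection leaves both
    players with nothing, so rejecting an unfair offer is punishment
    that is costly to the responder as well."""
    if offer < rejection_threshold:
        return Outcome(0, 0, rejected=True)  # both sides forfeit their share
    return Outcome(STAKE - offer, offer, rejected=False)


if __name__ == "__main__":
    # A fair 5/5 split is accepted; a 9/1 split is rejected at a cost
    # of 1 to the responder, illustrating altruistic punishment.
    for offer in (5, 1):
        print(f"offer={offer}: {play_round(offer, rejection_threshold=3)}")
```

Under this framing, the study's question is whether the `rejection_threshold` a human responder effectively applies shifts with how anthropomorphic the proposer (human, robot, or computer) is perceived to be.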
