Welcome to the Real World: How Agent Strategy Increases Human Willingness to Deceive

Humans who negotiate through representatives often instruct those representatives to act in ways that align with both the client's goals and the client's social norms. However, which tactics and ethical norms humans endorse varies widely from person to person, and these endorsements may be easy to manipulate. This work presents the results of a study demonstrating that humans who interact with an artificial agent may change which tactics and norms they endorse, often dramatically. Previous work has indicated that people who negotiate through artificial agent representatives may be more inclined toward fairness than people who negotiate directly. Our work qualifies that initial picture, demonstrating that subsequent experience can erode this tendency toward fairness. By exposing human negotiators to tough, automated agents, we were able to shift participants' willingness to deceive others and to use "hard-ball" negotiation techniques. In short, the techniques people endorse depend on their context and experience. We examine the effects of interacting with four different types of automated agents, each with a unique strategy, and how this interaction changes which strategies a human negotiator later endorses. In the study, conducted on an online negotiation platform, four types of automated agents negotiated with humans over the course of a 10-minute interaction. The agents differed in a 2x2 design according to agent strategy (tough vs. fair) and agent attitude (nice vs. nasty). The results show that, in this multi-issue bargaining task, humans who interacted with a tough agent were more willing to endorse deceptive techniques when instructing their own representative. Participants endorsed these techniques even when the agent they encountered did not itself use deception as part of its strategy. In contrast to some previous work, there was no significant effect of agent attitude. These results demonstrate the power of allowing people to program agents that follow their instructions, but they also show that social norms and tactic endorsements may be mutable in the presence of real negotiation experience.
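To make the 2x2 experimental design concrete, below is a minimal Python sketch of how the four agent conditions could be parameterized. The class name `NegotiationAgent`, the concession schedule, and all thresholds are illustrative assumptions for exposition, not the authors' implementation.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical sketch of the paper's 2x2 design: strategy (tough vs. fair)
# crossed with attitude (nice vs. nasty). All names and numbers here are
# illustrative assumptions, not the study's actual agent code.

STRATEGIES = ("tough", "fair")
ATTITUDES = ("nice", "nasty")

@dataclass
class NegotiationAgent:
    strategy: str   # "tough" agents concede slowly and demand more value
    attitude: str   # "nasty" agents pair offers with hostile framing

    def minimum_acceptable_share(self, round_number: int, total_rounds: int) -> float:
        """Fraction of total value the agent insists on keeping this round."""
        progress = round_number / total_rounds
        if self.strategy == "tough":
            # Concede from 90% down to 70% of the pie over the interaction.
            return 0.9 - 0.2 * progress
        # Fair agents aim near an even split, conceding from 60% to 50%.
        return 0.6 - 0.1 * progress

    def message_for(self, accepted: bool) -> str:
        """Attitude changes only the framing of the offer, not its content."""
        if self.attitude == "nice":
            return "Thanks!" if accepted else "Hmm, could you do a bit better?"
        return "Fine." if accepted else "That offer is insulting."

# Crossing the two factors yields the four experimental conditions.
conditions = [NegotiationAgent(s, a) for s, a in product(STRATEGIES, ATTITUDES)]
for agent in conditions:
    print(agent.strategy, agent.attitude,
          round(agent.minimum_acceptable_share(1, 10), 2))
```

Note that in this sketch only the strategy factor changes the economics of the offers, while attitude changes only the surrounding messages; this separation mirrors the study's finding that strategy, not attitude, drove the shift in participants' endorsements.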
