Fuzzy signaling game of deception between ant-inspired deceptive robots with interactive learning

Abstract: In this study, the phenomenon of robotic deception is cast as a signaling game that combines fuzzy logic and game theory with inspiration from nature. A major part of our contribution is the construction of the fuzzy signaling strategy set for the deceptive players; to this end, hierarchical fuzzy inference systems drive both the receiver's actions and the sender's ant-inspired deceptive signals (track and pheromone). Purpose-built deceptive robots and a visually supported experimental environment are also provided. The fuzzy behavior of the robots defines each player's strategy type, and because the outcome of the deception process depends on this type, we propose a payoff matrix in which each cell of mutual costs is justified by reasoning specific to our deception game and its pursuit–evasion applications. Furthermore, motivated by animal signaling, we apply mixed strategies over the deceiver's honesty level and the rival's trust level and investigate the corresponding learning dynamics; the conceptual discussion supports the claim that a smart, human-like behavior arises between the robots: interactive learning. Simulation results show that the robots learn interactively during deceptive interaction and eventually change their strategies to adapt to the new situation created by the opponent's strategy change. Because strategies change repeatedly as a result of learning, the conditions for persistent deception without breakdown hold in this game: the deceiver can benefit from deception again and again without causing the rival to lose its trust entirely. A strategy change occurs only after the short time needed to learn the new situation; in the rival's learning process this interval, which we call the ignorance time, is exactly the period in which the deceiver can profit from deception while its deceptive intent remains concealed. Moreover, an algorithm is given for the proposed signaling game of deception, and an illustrative experiment in the introduced experimental environment demonstrates a successful deception. Finally, the paper solves the proposed game by analyzing the mixed Nash equilibrium, which turns out to be the interior center fixed point of the learning dynamics.
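The abstract states that hierarchical fuzzy inference systems drive the receiver's actions; the paper's actual rule bases and input variables are not reproduced here. As a minimal single-stage sketch of that mechanism, the following pure-Python Mamdani-style inference maps two hypothetical inputs (signal strength and the sender's observed honesty, both our own choices, not the paper's variables) to a trust level using triangular memberships, min as the AND operator, and weighted-average defuzzification. A hierarchical system would chain several such stages, feeding one stage's output into the next.

```python
import numpy as np

def trimf(x, a, b, c):
    """Standard triangular membership function on [a, c] with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def receiver_trust(signal_strength, past_honesty):
    """Toy single-stage Mamdani inference for the receiver's trust level.

    Inputs lie in [0, 1]. Both input variables and the three rules below are
    hypothetical stand-ins for the paper's hierarchical rule base.
    """
    # Input memberships: weak/strong signal, low/high observed honesty.
    weak = trimf(signal_strength, -0.5, 0.0, 1.0)
    strong = trimf(signal_strength, 0.0, 1.0, 1.5)
    low = trimf(past_honesty, -0.5, 0.0, 1.0)
    high = trimf(past_honesty, 0.0, 1.0, 1.5)
    # Rule firing strengths (min as AND), each paired with an output centroid.
    rules = [
        (min(strong, high), 0.9),  # strong signal, honest history -> trust
        (min(strong, low),  0.4),  # strong signal, dishonest history -> wary
        (weak,              0.1),  # weak signal -> distrust
    ]
    w = np.array([r[0] for r in rules])
    c = np.array([r[1] for r in rules])
    return float((w @ c) / (w.sum() + 1e-12))  # weighted-average defuzzification

print(receiver_trust(signal_strength=0.8, past_honesty=0.7))  # ≈ 0.64, moderate trust
```

The crisp output (here about 0.64) is the kind of graded trust level that, in the game, determines how the receiver reacts to the sender's track and pheromone signals.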
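The abstract's claim that the mixed Nash equilibrium is the interior center fixed point of the learning dynamics can be illustrated with standard two-population replicator equations over the deceiver's honesty level p and the rival's trust level q. The sketch below is illustrative only: the matching-pennies-style payoff numbers, the zero-sum assumption, and the replicator form are our assumptions, not the paper's payoff matrix or learning rule.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 2x2 deception payoffs (illustrative, not the paper's matrix).
# Sender rows: honest, deceptive; receiver columns: trust, distrust.
# Deception pays only while it is trusted; trusting pays only against honesty.
S = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])  # sender (deceiver) payoffs
R = -S                        # receiver (rival) payoffs; zero-sum for simplicity

def dynamics(_t, z):
    """Two-population replicator dynamics over honesty p and trust q."""
    p, q = z
    x = np.array([p, 1.0 - p])           # sender mix: honest / deceptive
    y = np.array([q, 1.0 - q])           # receiver mix: trust / distrust
    dp = p * ((S @ y)[0] - x @ S @ y)    # honest payoff minus population average
    dq = q * ((x @ R)[0] - x @ R @ y)    # trust payoff minus population average
    return [dp, dq]

sol = solve_ivp(dynamics, (0.0, 40.0), [0.7, 0.6], max_step=0.01)
p, q = sol.y
# The trajectory orbits the interior mixed equilibrium (p*, q*) = (0.5, 0.5)
# instead of converging: honesty and trust rise and fall cyclically.
print(f"mean honesty ≈ {p.mean():.2f}, mean trust ≈ {q.mean():.2f}")
print(f"p range: [{p.min():.2f}, {p.max():.2f}], q range: [{q.min():.2f}, {q.max():.2f}]")
```

Under this zero-sum structure the interior fixed point is a center: honesty and trust oscillate around it rather than converge, which matches the abstract's picture of repeated strategy changes, persistent deception without breakdown, and an ignorance time recurring in each cycle.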
