Robot's self-trust as precondition for being a good collaborator

In human-robot cooperation scenarios, building a robot that qualifies as a good collaborator means endowing it with the capability to evaluate not only the physical environment but, above all, the mental states and characteristics of its human interlocutor, so that it can adapt its behavior whenever she/he requires the robot's help. The quality of this evaluation rests on the robot's capability to perform a meta-evaluation of its own predictive skills, which it uses to build a model of the interlocutor and of her/his goals. The robot's capability to trust its own skills in interpreting the interlocutor and the context is therefore a fundamental requirement for producing smart and effective decisions towards humans. In this work we propose a simulated experiment designed to test a cognitive architecture for trustworthy human-robot collaboration. The experiment demonstrates how the robot's capability to learn its own level of self-trust in its predictive abilities, i.e. in perceiving the user and building a model of her/him, allows it to establish a trustworthy collaboration and to maintain a high level of user satisfaction with the robot's performance even when these abilities progressively degrade.

1 The needs of collaborative robots in Human-Robot Cooperation

The complexity of the AI systems surrounding us (e.g. social robots, autonomous cars, virtual assistants) grows every day and demands a corresponding capability of these systems to be trusted by humans, just as humans trust each other when they collaborate [KS20]. In the context of Human-Robot Cooperation, the sense of human vulnerability induced by the presence of the robot [SSTJS18] can be reduced by changing the role of the robot itself: from a passive executor to a smart and active collaborator [FC01]. Consider the following collaborative scenario: a human X (the trustor) and a robot Y (the trustee) collaborate, so that X has to trust Y, in a specific context, for executing a task τ and realizing results that include or correspond to X's goal, Goal_X(g) = g_X [CF10]. In this context, X relies on Y for realizing some part of the task she/he has in mind (task delegation); on its side, Y decides to help X, replacing her/him in performing that part of the task (task adoption).
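To make the self-trust mechanism described above concrete, the following is a minimal, hypothetical sketch in Python: the robot keeps a running estimate of how reliable its user-model predictions have been and uses it to decide whether to act autonomously or to defer to the user. The class name, the exponential update rule and the 0.6 threshold are illustrative assumptions, not the cognitive architecture actually evaluated in the experiment.

# Hypothetical sketch of the self-trust mechanism described above.
# The robot maintains a running estimate of its predictive accuracy on the
# user model and gates its autonomy on that estimate.

class SelfTrust:
    def __init__(self, initial: float = 0.8, learning_rate: float = 0.1):
        self.value = initial              # current self-trust in [0, 1]
        self.learning_rate = learning_rate

    def update(self, prediction_correct: bool) -> None:
        # Exponentially weighted estimate of recent predictive accuracy.
        outcome = 1.0 if prediction_correct else 0.0
        self.value += self.learning_rate * (outcome - self.value)

    def act_autonomously(self, threshold: float = 0.6) -> bool:
        # Below the threshold the robot defers to the user instead of
        # relying on its (possibly degraded) user model.
        return self.value >= threshold


if __name__ == "__main__":
    trust = SelfTrust()
    # Simulate progressive degradation of the robot's predictive abilities.
    for correct in [True, True, False, False, False, False]:
        trust.update(correct)
        mode = "autonomous" if trust.act_autonomously() else "ask user"
        print(f"self-trust={trust.value:.2f} -> {mode}")

Under this reading, maintaining user satisfaction while the predictive abilities degrade amounts to lowering the robot's self-trust in time, so that it hands control back to the user before its degraded user model damages the collaboration.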

References

[1] Mohamed Helmy Khafagy et al. Recommender Systems Challenges and Solutions Survey. 2019 International Conference on Innovative Trends in Computer Engineering (ITCE), 2019.

[2] Anand S. Rao et al. BDI Agents: From Theory to Practice. ICMAS, 1995.

[3] Michael J. Pazzani. A Framework for Collaborative, Content-Based and Demographic Filtering. 1999.

[4] Eric W. Frew et al. Machine Self-Confidence in Autonomous Systems via Meta-Analysis of Decision Processes. AHFE, 2018.

[5] Olivier Boissier et al. Multi-agent oriented programming with JaCaMo. Science of Computer Programming, 2013.

[6] D. Abrams et al. Categorization by Age. 2018.

[7] Rino Falcone et al. The human in the loop of a delegated agent: the theory of adjustable social autonomy. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 2001.

[8] Michael J. Pazzani. A Framework for Collaborative, Content-Based and Demographic Filtering. Artificial Intelligence Review, 1999.

[9] Francesco Ricci et al. User Personality and the New User Problem in a Context-Aware Point of Interest Recommender System. ENTER, 2015.

[10] Rachid Alami et al. An implemented theory of mind to improve human-robot shared plans execution. 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2016.

[11] Bing Cai Kok et al. Trust in Robots: Challenges and Opportunities. Current Robotics Reports, 2020.

[12] Brian Scassellati et al. The Ripple Effects of Vulnerability: The Effects of a Robot's Vulnerable Behavior on Trust in Human-Robot Teams. 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2018.

[13] Rino Falcone et al. Trust Theory: A Socio-Cognitive and Computational Model. 2010.

[14] Rino Falcone et al. Towards a theory of delegation for agent-based systems. Robotics and Autonomous Systems, 1998.

[15] Rino Falcone et al. Towards trustworthiness and transparency in social human-robot interaction. 2020 IEEE International Conference on Human-Machine Systems (ICHMS), 2020.

[16] Rafael H. Bordini et al. BDI agent programming in AgentSpeak using Jason. 2006.