Trust-Driven Privacy in Human-Robot Interactions

In this paper we present a trust-driven differential privacy implementation for private trajectory sharing in human-robot interactions. Differential privacy implementations depend on a privacy parameter that is typically fixed before runtime, yet in many applications human users have no a priori information about their robot interaction partners, making it difficult to choose a reasonable privacy level for information sharing. To enable collaboration with unfamiliar robots, we dynamically adapt a human user's privacy level when sending information to a robot by using a quantitative measure of trust. We develop a trust model that reflects a robot's level of cooperation over time and captures key features of trust identified by both the psychological and human-robot interaction communities. To characterize our framework and its performance, we quantify the amount of information a robot can gain as a function of its cooperation, and we present bounds on the level of cooperation needed to attain a desired level of trust (and therefore privacy) over time. Simulation results illustrate this trust-driven private information sharing scheme.
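The sketch below illustrates the general idea in Python under illustrative assumptions, not the paper's actual model: trust is a scalar in [0, 1] that rises with observed cooperation and drops sharply on defection, a linear map converts trust to a differential-privacy budget epsilon, and 2-D trajectory waypoints are released through the standard Laplace mechanism. The function names, the update rule, and all parameter values (gain, loss, eps_min, eps_max, sensitivity) are hypothetical stand-ins chosen for the example.

```python
import numpy as np

def update_trust(trust, cooperated, gain=0.05, loss=0.20):
    """Update a scalar trust score in [0, 1] after one interaction.

    Hypothetical rule: trust grows slowly with cooperation and falls
    sharply on defection, echoing the asymmetry of human trust.
    """
    if cooperated:
        trust = trust + gain * (1.0 - trust)
    else:
        trust = trust * (1.0 - loss)
    return float(np.clip(trust, 0.0, 1.0))

def epsilon_from_trust(trust, eps_min=0.1, eps_max=2.0):
    """Map trust to a privacy budget: higher trust -> larger epsilon
    -> less noise (weaker privacy, more useful shared data)."""
    return eps_min + trust * (eps_max - eps_min)

def privatize_waypoint(waypoint, trust, sensitivity=1.0, rng=None):
    """Release a 2-D trajectory waypoint via the Laplace mechanism,
    with the noise scale set by the current trust level."""
    rng = rng or np.random.default_rng()
    eps = epsilon_from_trust(trust)
    noise = rng.laplace(loc=0.0, scale=sensitivity / eps, size=2)
    return np.asarray(waypoint, dtype=float) + noise

# Example: as the robot keeps cooperating, trust rises and the
# waypoints it receives become progressively less noisy.
trust = 0.2
for step in range(5):
    noisy = privatize_waypoint([3.0, 4.0], trust)
    print(f"step {step}: trust={trust:.2f}, shared waypoint={noisy}")
    trust = update_trust(trust, cooperated=True)
```

The asymmetric gain/loss choice is one simple way to capture the slow-build, fast-decay character of trust described in the abstract; the actual model and its cooperation bounds are developed in the paper itself.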
