Value-Based Core Areas of Trustworthiness in Online Services

In the digital domain, users can be expected to place their trust in online services if they have reason to believe that, beyond functional and quality-of-service aspects, their rights will be protected and their shared values respected. However, recent studies and surveys suggest that users do not actually trust online services, one reason being that the technology is unable to meet their values and address their concerns. To bridge this gap, this work-in-progress paper presents a set of core areas of trustworthiness for online services that have emerged from an interdisciplinary discussion spanning social, ethical, legal and technological perspectives, while paying due attention to the protection of European fundamental rights and values. It then analyses how each of these core areas of trustworthiness maps to well-known system properties and (post-compliance) operational requirements.
