Trust Management VIII

Accountability Language
    Walid Benghabrit, Hervé Grall, Jean-Claude Royer, Mohamed Sellami,
    Karin Bernsmed, and Anderson Santana De Oliveira . . . . . . . . . . . . . . . 229

Trust Assessment Using Cloud Broker
    Pramod S. Pawar, Muttukrishnan Rajarajan, Theo Dimitrakos,
    and Andrea Zisman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245


The Importance of Trust in Computer Security

Christian Damsgaard Jensen
Department of Applied Mathematics & Computer Science
Technical University of Denmark
DK-2800 Kgs. Lyngby, Denmark
Christian.Jensen@imm.dtu.dk

Abstract. The computer security community has traditionally regarded security as a “hard” property that can be modelled and formally proven under certain simplifying assumptions. Traditional security technologies assume that computer users are either malicious, e.g. hackers or spies, or benevolent, competent and well informed about the security policies. Over the past two decades, however, computing has proliferated into all aspects of modern society, and the spread of malicious software (malware) such as worms, viruses and botnets has become an increasing threat. This development indicates that some of the fundamental assumptions underpinning existing computer security technologies have failed and that a new view of computer security is long overdue. In this paper, we examine traditional models, policies and mechanisms of computer security in order to identify areas where the fundamental assumptions may fail. In particular, we identify areas where the “hard” security properties are based on trust in the different agents in the system and in certain external agents who enforce the legislative and contractual frameworks. Trust is generally considered a “soft” security property, so building a “hard” security mechanism on trust will at most give a spongy result, unless the underlying trust assumptions are made first-class citizens of the security model. In most work on computer security, trust assumptions are implicit, and they will surely fail when the environment of the systems changes, e.g. when systems are used on a global scale on the Internet. We argue that making such assumptions about trust explicit is an essential requirement for the future of system security, and we explain why the formalisation of computational trust is necessary when we wish to reason about system security.
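To make the abstract's call for a formalisation of computational trust more concrete, the sketch below shows one minimal way such a formalisation can be expressed. It is an illustrative assumption on our part, not the model discussed in the paper: a beta-reputation-style trust metric in Python, where trust in an agent is the expected reliability given observed positive and negative interactions, and a policy makes its trust assumption explicit as a numeric threshold rather than leaving it implicit.

from dataclasses import dataclass

@dataclass
class TrustRecord:
    """Counts of positive and negative interactions observed with one agent."""
    positive: int = 0
    negative: int = 0

    def record(self, outcome_ok: bool) -> None:
        """Update the interaction history with one observed outcome."""
        if outcome_ok:
            self.positive += 1
        else:
            self.negative += 1

    def trust_value(self) -> float:
        """Expected trust value E = (r + 1) / (r + s + 2), i.e. the mean of a
        Beta(r + 1, s + 1) distribution over the agent's reliability."""
        r, s = self.positive, self.negative
        return (r + 1) / (r + s + 2)


def authorise(record: TrustRecord, threshold: float = 0.75) -> bool:
    """An explicit trust assumption: grant access only if the accumulated
    evidence supports trusting the agent at least to the given threshold."""
    return record.trust_value() >= threshold


if __name__ == "__main__":
    # Hypothetical interaction history with a single remote peer.
    peer = TrustRecord()
    for outcome in [True, True, True, False, True]:
        peer.record(outcome)
    print(f"trust = {peer.trust_value():.2f}, authorised = {authorise(peer)}")

The point of the sketch is not the particular metric but the design choice it illustrates: once the trust assumption is expressed as an explicit, computable quantity, it can be checked, revised when the environment changes, and reasoned about alongside the “hard” parts of the security model.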
