STRIDE: A Secure Framework for Modeling Trust-Privacy Tradeoffs in Distributed Computing Environments
This paper presents STRIDE, a Secure framework for modeling Trust-pRIvacy traDEoffs in distributed computing environments. STRIDE aims to achieve the right privacy-trust tradeoff among distributed-system entities. It does so by establishing a set of secure mechanisms for quantifying the privacy loss and the corresponding trust gain required by a given network transaction. The privacy-trust quantification process allows the service requestor and provider to build the trust levels necessary for executing the transaction while minimizing the privacy loss incurred. Moreover, STRIDE supports communication anonymity by associating each communicating entity with an administrative group; the identification information of the communicating entities is thus anonymously masked by the identities of their respective groups. The confidentiality, authenticity, and integrity of data communication are ensured using appropriate cryptographic mechanisms, and data sent between groups is protected from dissemination by a self-destruction process. STRIDE provides a context-aware model supporting agents with various privacy-trust characteristics and behaviors. The system is implemented on the Java-based Aglets platform.

DOI: 10.4018/jdtis.2010010104. International Journal of Dependable and Trustworthy Information Systems, 1(1), 60-81, January-March 2010. Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

…dentials from the other entity before executing any transaction. Knowledge can be based on observations, recommendations, or reputation. However, knowledge is not related only to the concept of trusting an entity: another concept, tightly bound to both knowledge and trust, is privacy. Trust and privacy are two conflicting concepts.
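The privacy-trust quantification described above, in which an entity chooses which credentials to reveal so that a required trust level is reached at minimal privacy cost, could be sketched as a greedy selection. The per-credential scores, the credential names, and the gain-per-loss heuristic below are illustrative assumptions, not the paper's actual metrics:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TradeoffSelector {

    // Hypothetical credential with assumed per-item scores; STRIDE's actual
    // quantification of privacy loss and trust gain is not reproduced here.
    record Credential(String name, double privacyLoss, double trustGain) {}

    // Greedily reveal credentials with the best trust-gain-per-privacy-loss
    // ratio until the provider's required trust level is reached.
    static List<Credential> select(List<Credential> available, double requiredTrust) {
        List<Credential> sorted = new ArrayList<>(available);
        sorted.sort(Comparator.comparingDouble(
                c -> -(c.trustGain() / c.privacyLoss())));  // best ratio first
        List<Credential> chosen = new ArrayList<>();
        double trust = 0.0;
        for (Credential c : sorted) {
            if (trust >= requiredTrust) break;
            chosen.add(c);
            trust += c.trustGain();
        }
        if (trust < requiredTrust) {
            throw new IllegalStateException("required trust level unreachable");
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<Credential> creds = List.of(
                new Credential("email", 1.0, 2.0),
                new Credential("phone", 2.0, 3.0),
                new Credential("ssn", 10.0, 8.0));
        List<Credential> picked = select(creds, 4.0);
        double loss = picked.stream().mapToDouble(Credential::privacyLoss).sum();
        // Reveals email and phone (trust 5.0 >= 4.0) at privacy loss 3.0,
        // leaving the costly "ssn" credential undisclosed.
        System.out.println(picked.size() + " credentials, privacy loss " + loss);
    }
}
```

A real deployment would replace the ratio heuristic with the paper's quantification mechanisms; the greedy structure only illustrates that the requestor discloses the cheapest sufficient subset rather than everything it holds.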
This is because the more knowledge an entity acquires about a second entity, the more accurately it can assess that entity's trustworthiness; but more knowledge about an entity also means less privacy left to that entity. Since both trust and privacy are essential to a well-functioning environment, this conflict must be properly addressed.

In this paper we present STRIDE, a secure framework for modeling trust-privacy tradeoffs in distributed computing environments. STRIDE employs a set of quantification mechanisms to model privacy loss and trust gain in order to determine the right tradeoff between them. A general framework is developed to select the set of information that minimizes the privacy loss for a required trust gain. STRIDE supports communication anonymity, confidentiality, authentication, and integrity, and prevents private-data dissemination through a self-destruction process. It also provides a context-aware model supporting agents with various privacy-trust characteristics and behaviors. The system is implemented on the Java-based Aglets platform. Simulation results show that entities requesting a service tend to incur higher privacy losses when their past experiences are not reputable, when they exhibit an open behavior in revealing their private data, or when the service provider is paranoid in nature. In all cases, the privacy loss is controlled and quantified.

The rest of this paper is organized as follows: Section II presents a literature survey of the main protocols related to the proposed work. Section III describes the trust-privacy tradeoff model's design and architecture. Section IV discusses the simulation results obtained when testing the trust-privacy tradeoff system on a simulated network using the Aglets platform. Conclusions are presented in Section V.
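As a minimal illustration of the self-destruction process mentioned in this section, the sketch below wraps private data in an object that overwrites its plaintext once a time-to-live expires or destruction is requested. The class name, the TTL mechanism, and the API are assumptions for illustration; the paper's actual destruction mechanism is not reproduced here:

```java
import java.util.Arrays;

// Hypothetical self-destructing payload: the plaintext is zeroed once its
// time-to-live expires or destroy() is called, so the receiving group cannot
// disseminate the data after the transaction completes.
public class SelfDestructingData {
    private byte[] payload;
    private final long expiresAtMillis;

    public SelfDestructingData(byte[] payload, long ttlMillis) {
        this.payload = payload.clone();
        this.expiresAtMillis = System.currentTimeMillis() + ttlMillis;
    }

    // Returns a copy of the payload while it is alive; destroys it and
    // refuses access once the TTL has elapsed or destroy() was called.
    public synchronized byte[] read() {
        if (payload == null || System.currentTimeMillis() >= expiresAtMillis) {
            destroy();
            throw new IllegalStateException("payload destroyed");
        }
        return payload.clone();
    }

    public synchronized void destroy() {
        if (payload != null) {
            Arrays.fill(payload, (byte) 0);  // overwrite before dropping the reference
            payload = null;
        }
    }

    public static void main(String[] args) {
        SelfDestructingData d = new SelfDestructingData("secret".getBytes(), 60_000);
        System.out.println(new String(d.read()));
        d.destroy();
        try {
            d.read();
        } catch (IllegalStateException e) {
            System.out.println("destroyed");
        }
    }
}
```

Note that zeroing an in-memory copy only limits casual retention; a robust scheme would combine this with the cryptographic mechanisms the paper describes, since a recipient could copy the bytes before expiry.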