Access Control is an Inadequate Framework for Privacy Protection

As Web architects, we might all agree on the need to protect privacy. But what is it that we want to protect? In Brandeis and Warren’s classic legal study [19], privacy is defined as the “right to be let alone”. In Alan Westin’s seminal work [20], privacy is the ability of people to determine for themselves “when, how, and to what extent, information about them is communicated to others”. The UN Declaration of Human Rights [16] stipulates that “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation”. There are important differences among these definitions. One central difference is that Westin focuses on information access: how information comes to be known. In contrast, the UN Declaration, and Brandeis and Warren, focus on what happens to people as a result of how information is used.

Present discussions of Internet privacy, in both policy and technology, tend to assume Westin’s perspective. This focus on access control sometimes treats privacy as a kind of “currency of the digital age” that people must exchange in return for better search results, more personalization, customized services, more targeted advertising, and better communication with friends, family, and colleagues. “Protecting privacy” is then equated with letting users make these tradeoffs by defining detailed rules to govern access to their personal information. This year’s technology press is filled with announcements by social networking sites of new privacy controls, i.e., new ways for users to define access rules [18, 23], followed by embarrassment when those choices prove inadequate or too complex for people to manage [17, 1, 10, 22, 15, 3, 13]. Even when access control systems succeed in blocking out unwanted viewers, they are ineffective as privacy protection in a large, decentralized system like the World Wide Web, where information is easy to copy and aggregate.
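The failure mode can be made concrete with a toy sketch (the `Resource` class, the ACL model, and all names here are hypothetical illustrations, not any real site's API): an access-control list governs who may *read* a resource, but an authorized reader can republish a copy under their own rules, at which point the owner's rule no longer constrains who sees the information.

```python
# Toy sketch of access control on the Web (all names hypothetical).
# The ACL blocks unwanted readers, but it cannot follow the data
# once a legitimate reader copies it.

class Resource:
    def __init__(self, owner, content, allowed):
        self.owner = owner
        self.content = content
        self.allowed = set(allowed)   # access-control list

    def read(self, requester):
        if requester not in self.allowed:
            raise PermissionError(f"{requester} may not read this")
        return self.content

alice_post = Resource("alice", "alice's vacation photos",
                      allowed={"alice", "bob"})

# Access control works as intended: eve is blocked.
try:
    alice_post.read("eve")
    blocked = False
except PermissionError:
    blocked = True

# But bob, a legitimate reader, can republish a copy under *his* ACL;
# alice's rule no longer governs the copy.
copy = Resource("bob", alice_post.read("bob"), allowed={"bob", "eve"})
leaked = copy.read("eve")   # eve now reads alice's information
```

The sketch shows why rule complexity does not help: however carefully alice tunes her ACL, the rule attaches to her copy of the data, not to the information itself.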
Moreover, it is now possible to infer sensitive information from publicly available information. For example, social security numbers (SSNs) have always been closely guarded because most government and financial institutions use them to identify individuals. Every use
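The inference point can be illustrated with a minimal sketch (the users, attributes, and friendship data below are made up): even when a user withholds an attribute, a simple majority vote over the publicly visible attributes of their friends often recovers it, the homophily effect exploited by social-network inference studies such as those in the references.

```python
from collections import Counter

# Minimal sketch of attribute inference from public data (made-up data).
# alice never publishes her affiliation, but her public friend list and
# her friends' public attributes leak it via majority vote.

public_attrs = {            # attributes users chose to publish
    "bob": "party_A",
    "carol": "party_A",
    "dave": "party_B",
    "erin": "party_A",
}
friends = {"alice": ["bob", "carol", "dave", "erin"]}  # public links

def infer_attribute(user):
    """Predict a hidden attribute by majority vote over the
    public attributes of the user's friends."""
    votes = Counter(public_attrs[f] for f in friends[user]
                    if f in public_attrs)
    return votes.most_common(1)[0][0]

guess = infer_attribute("alice")   # "party_A", which alice never disclosed
```

No access rule of alice's was violated: every datum the attacker used was published voluntarily, by other people.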

[1] J. Rubenfeld. The Right of Privacy, 1989.

[2] A. Whitten and J. D. Tygar. Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0. USENIX Security Symposium, 1999.

[3] O. Seneviratne et al. Policy-Aware Content Reuse on the Web. International Semantic Web Conference, 2009.

[4] B. M. Thuraisingham et al. Inferring Private Information Using Social Network Data. WWW '09, 2009.

[5] B. F. T. Mistree et al. Gaydar: Facebook Friendships Expose Sexual Orientation. First Monday, 2009.

[6] L. Kagal et al. Enabling Privacy-Awareness in Social Networks. AAAI Spring Symposium: Intelligent Information Privacy Management, 2010.