Understanding Perceived Risk: 1978–2015

Tim O’Riordan made it sound easy when he convinced me to write a brief perspective on hazard management almost four decades after the publication of three seminal articles in Environment on that topic. I agreed, not realizing the breadth of those articles and the relatively narrow focus of my own work and knowledge. Moreover, much has happened in the world of risk since the fall of 1978. Fortunately, Baruch Fischhoff and Howard Kunreuther agreed to join me in reflecting on these developments.

I focus this commentary on the second article, “Handling Hazards: Can Hazard Management Be Improved?” by Fischhoff, Hohenemser, Kasperson, and Kates.1 This important article, which can be viewed online at www.environmentmagazine.org, presented a framework of hazard causation pointing to opportunities for management interventions. Concepts such as risk perception, acceptable risk, and value trade-offs were introduced, along with institutional failures in properly attending to the most serious hazards.

The article by Fischhoff et al. was insightful in its treatment of risk perception, noting that the way we think about and respond to hazards shapes the agendas of public interest groups and politicians, as well as the attempts of laypeople to manage the hazards of their daily lives. The importance of perceived benefits and value trade-offs was also stressed, along with the observation that reducing a hazard might conflict directly with other widely held values or political goals. Also noted was the fact that our tolerance of risk varies widely among activities and technologies, and this inconsistency of public values greatly complicates hazard management.
This article was soon followed by another in Environment, “Rating the Risks,” in which my colleagues and I described early attempts to quantify perceptions of risk and document their implications for hazard management.2 Revisiting this article, I am struck by how harshly we came down on the public. Risk was characterized narrowly in terms of annual fatality rates, and serious public misjudgments of these rates were attributed to lack of knowledge compounded by biases linked to the imaginability and memorability of the hazard. Although we raised the possibility that experts, too, often rely on judgments that might sometimes be biased, we concluded that the public needed to be better informed, to rely less on unexamined judgments, to be aware of the qualitative aspects of hazards that could bias its judgments (e.g., involuntary exposure, emotions), and to be open to new evidence that might alter its risk perceptions. Despite the inaccuracy of public perceptions, we noted that removing the public from the hazard-management process was not feasible in a democratic society.

Almost four decades later, many of the same issues still challenge risk management, though our understanding of them has greatly increased, and a more balanced appreciation of the strengths and weaknesses of both expert risk assessments and public perceptions has evolved.3 The method of using numerical rating scales to measure risk perceptions was later named “the psychometric paradigm” and was extended to characterize and assess perceptions in many different ways. Perceived risk and acceptable risk were found to be systematic and predictable. Psychometric techniques seemed well suited to identifying similarities and differences among groups with regard to risk perceptions and attitudes. This research showed that the concept of “risk” meant different things to different people.
The public was found to have a broad conception of risk, qualitative and complex, that incorporates considerations such as uncertainty, dread, catastrophic potential, controllability, equity, and risk to future generations into the risk equation. In contrast, experts’ perceptions of risk are not closely related to these characteristics. Rather, studies found that experts tend to see riskiness as synonymous with probability of harm or expected mortality. Conflicts thus often resulted from experts and laypeople holding different definitions of the concept “risk.” In this light, it is not surprising that expert recitations of “risk statistics” often did little to change people’s attitudes and perceptions.

Over time, it was recognized that there are legitimate, value-laden issues underlying the multiple dimensions of public risk perceptions, and these values need to be considered in risk-management decisions.4 For example, is risk from cancer (a dreaded disease) worse than risk from auto accidents (not dreaded)? Is a risk imposed on a child more serious than a known risk accepted voluntarily by an adult? Are the deaths of 50 passengers in separate automobile accidents equivalent to the deaths of 50 passengers in one airplane crash? Is the risk from a polluted Superfund site worse if the site is located in a neighborhood that has a number of other hazardous facilities nearby? Quantitative risk assessments cannot answer such questions.

At much the same time, the technical foundations of scientific risk assessment also came under scrutiny, perhaps because of sharp discrepancies with public perceptions and the frequent conflicts and controversies centered on these differences. Social research challenged the traditional view that dangers result from physical and natural processes.

[1] P. Slovic et al., Numbers and Nerves: Information, Emotion, and Meaning in a World of Data, 2015.

[2] P. Slovic et al., “Assessment of the Regional Economic Impacts of Catastrophic Events: CGE Analysis of Resource Loss and Behavioral Effects of an RDD Attack Scenario,” Risk Analysis, 2012.

[3] G. L. Cohen et al., “Cultural Cognition of the Risks and Benefits of Nanotechnology,” Nature Nanotechnology, 2009.

[4] P. Slovic et al., “The Affect Heuristic,” European Journal of Operational Research, 2007.

[5] C. Sunstein, “Terrorism and Probability Neglect,” 2003.

[6] P. Slovic et al., “Violence Risk Assessment and Risk Communication: The Effects of Using Actual Cases, Providing Instruction, and Employing Probability Versus Frequency Formats,” Law and Human Behavior, 2000.

[7] S. M. Johnson et al., “The Affect Heuristic in Judgments of Risks and Benefits,” 2000.

[8] P. Slovic, “Trust, Emotion, Sex, Politics, and Science: Surveying the Risk-Assessment Battlefield,” Risk Analysis, 1999.

[9] J. Huisman, “The Netherlands,” The Lancet, 1996.

[10] S. A. Spence, Descartes’ Error: Emotion, Reason and the Human Brain, 1995.

[11] T. Webler et al., Fairness and Competence in Citizen Participation: Evaluating Models for Environmental Discourse, 1995.

[12] P. Slovic et al., “A Psychological Study of the Inverse Relationship Between Perceived Risk and Perceived Benefit,” Risk Analysis, 1994.

[13] P. Slovic et al., “Intuitive Toxicology: Expert and Lay Judgments of Chemical Risks,” Toxicologic Pathology, 1992.

[14] B. Fischhoff et al., “Designing Risk Communications: Completing and Correcting Mental Models of Hazardous Processes, Part I,” Risk Analysis, 1994.

[15] B. Fischhoff et al., “Characterizing Mental Models of Hazardous Processes: A Methodology and an Application to Radon,” 1992.

[16] O. Renn et al., “The Social Amplification of Risk: A Conceptual Framework,” 1988.

[17] B. Fischhoff et al., “Modeling the Societal Impact of Fatal Accidents,” 1984.

[18] B. Fischhoff et al., “Rating the Risks,” 1979.

[19] B. Fischhoff et al., “How Safe Is Safe Enough? A Psychometric Study of Attitudes Towards Technological Risks and Benefits,” 1978.