In Evaluating Technological Risks, When and Why Should We Consult Our Emotions?

In December of 2018, the New York Times ran a story about members of the public in Arizona attacking experimental self-driving cars that were being road-tested on public streets there (Romero 2018). Some people were reportedly throwing rocks at the cars; others were slashing their tires or waving guns at them. Why the anger at these self-driving cars (or at the companies experimenting with them on public roads)? Some Arizonans felt put at risk. In March of that same year, an Arizona native was hit and killed by an experimental self-driving car operated by the company Uber. The incident vividly illustrated the complications of testing these new technologies among ordinary people, who had not consented to participate in the experiment. One Arizona man, Mr. O’Polka, was quoted in the article as saying, “They said they need real-world examples, but I don’t want to be their real-world mistake.” These Arizonans responded to the technological risks with anger and fear. They apparently felt they were being wronged or treated unfairly. What, if anything, could they learn about the ethical dimensions of their situation by consulting their emotions about the risks to which they were being exposed?

Sabine Roeser’s rich and stimulating book Risk, Technology, and Moral Emotions is about exactly this sort of question (Roeser 2018). On a general level, the book addresses three main topics: (1) the ethical assessment of technological risks, (2) the role of emotions in such risk assessments, and (3) the question of which meta-ethical and moral–psychological theories best make sense of the role(s) that emotions do and should play in technological risk assessments. On the level of first-order normative ethics, Roeser opposes narrow, monistic views about which considerations should matter in risk assessments, arguing instead for a broad, pluralistic view. She argues that when we allow our emotions to guide our assessments of technological risks, they steer us away from narrow, technocratic risk assessments. Risk assessment should not