Dealing With Risk: Why the Public and the Experts Disagree on Environmental Issues
Most of us must have felt at some time a sense of bewilderment at how differently the risk to health from exposure to ionising radiation is perceived by the general public and by the experts who work in the field. Why is it that the public (and the media) display so much concern over radiation exposures which expert opinion regards as trivial? In this book, Howard Margolis explores this thorny issue, setting out his thesis as to why there is a conflict between the layman and the expert on certain high-profile environmental issues. His contention is that people rely upon intuition which has evolved to guide them through the challenges of everyday life, and that this intuition has proved generally successful under those circumstances - it is the default option. However, under conditions far removed from the experiences of everyday life, 'habits of mind' can be misleading and provide perceptions of risk that are erroneous. These attitudes can be very difficult to change because they are remote from common experience and, therefore, natural corrective processes do not operate. As a consequence, these false perceptions 'feel right' and are clung to tenaciously.

To illustrate this point Margolis uses an exercise in probability to demonstrate how doggedly people will adhere to the wrong answer. I shall not use his example (you will have to read the book to enjoy the challenge) but I shall attempt to illustrate the point with a similar one. It involves a game show in which the host presents the contestant with three closed, identical boxes. The host tells the contestant that one of the three boxes holds a card awarding the star prize of a car, whereas the other two hold cards for booby prizes of cabbages. The contestant chooses one box, which, for the time being, remains closed. The host then tells the contestant that he is going to open one of the other boxes which he knows contains a cabbage card. This he does, and he then invites the contestant either to stick with his original choice of box or to switch to the other, still-closed box. The question is: should the contestant stick with his original choice or change to the other closed box? This question has led to heated discussion. In fact, the answer is that the contestant should always switch to the other box, because he is then twice as likely to win the car as if he sticks with his original choice. (I can give the reason for this in a subsequent issue of the Journal if I get enough correspondence.) The point, however, is that people will vigorously defend their original answer (usually that there is no advantage in changing and that the contestant should therefore stick with his original choice of box) even when told the right answer, and even when the reasoning behind the correct decision is explained. Considerable persuasion is required, and people are disinclined to change their original (and incorrect) point of view.

This is Margolis's point. The layman is very reluctant to accept the view of the expert that apparently conflicts with common sense - the common sense he relies upon to navigate his way through the everyday world. Appreciable effort is required to persuade the public that expert opinion is right and lay opinion wrong on an issue that seems quite clear. Margolis proposes that this is the underlying reason for lay/expert conflict.
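For readers who do not wish to wait for a later issue, the switching advantage in the game-show example is at least easy to check empirically. The sketch below is my own hypothetical illustration, not taken from Margolis's book: it simply plays the game a large number of times under each strategy, and sticking wins roughly one time in three while switching wins roughly two times in three.

```python
import random

def play(switch: bool, trials: int = 100_000) -> float:
    """Estimate the probability of winning the car when the contestant
    always switches (switch=True) or always sticks (switch=False)."""
    wins = 0
    boxes = [0, 1, 2]
    for _ in range(trials):
        car = random.choice(boxes)          # box holding the car card
        first_pick = random.choice(boxes)   # contestant's initial choice
        # The host opens a box he knows is a loser: neither the
        # contestant's pick nor the box with the car card.
        opened = random.choice([b for b in boxes if b != first_pick and b != car])
        if switch:
            # Switch to the one remaining unopened box.
            final_pick = next(b for b in boxes if b != first_pick and b != opened)
        else:
            final_pick = first_pick
        wins += (final_pick == car)
    return wins / trials

print(f"stick:  {play(switch=False):.3f}")   # roughly 0.333
print(f"switch: {play(switch=True):.3f}")    # roughly 0.667
```

The essential point, reflected in the simulation, is that the host knowingly opens a losing box; his action therefore conveys information that the contestant can exploit by switching.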
Margolis argues that the entrenched positions adopted by the lay public in the face of expert opinion are the product of an inability to see the entire picture, in that there is no incentive for the non-expert to appreciate the adverse aspects of adopting an extreme position. The cautious attitude ('better safe than sorry') is therefore adopted rather than a more pragmatic approach ('waste not, want not'), particularly when the costs of the cautious approach - for example, the banning of all potentially carcinogenic additives in food - are spread over a very large number of individuals and are not readily appreciated. It takes considerable effort to redress the balance, but there are examples of this happening. One relates to the presence of asbestos in public schools in New York City. When, in September 1993, small amounts of asbestos were found in New York schools, the schools were closed with overwhelming public support, despite expert opinion that the risks arising were trivial. When, however, after three weeks, the schools were still closed and it was realised that there were alternative risks associated with not having the children at school, there was a public demand for the schools to re-open, even though in that short period the risk from asbestos had not altered significantly. In this case the lay public had realised from direct experience that there is a balance of risk associated with particular actions, and it came to be self-evident that the risk from the asbestos was very small relative to the risks of children roaming the streets. In Margolis's view this more reasonable approach would occur more frequently if the complete picture (what he refers to as 'achieving fungibility') could be attained in other, similar situations.

What does Margolis suggest should be done to bring lay and expert opinion closer together on contentious environmental issues? Forcing the repercussions of adopting an extreme position into the collective consciousness of the general public (as happened with asbestos in the New York schools) would appear to be the answer, but this is easier said than done. Here, Margolis wanders into rather dubious territory. He rightly points out that many inferences of carcinogenicity are based upon animal experiments using high doses, and that the risks posed by the low doses commonly encountered environmentally are uncertain and may even be zero. Clearly, an appropriate balance must be struck in assessing risks under these circumstances, but I felt uncomfortable with the manner in which Margolis used this uncertainty to argue that a (small) raised risk should not be assumed, or even that low doses might be beneficial. I do not think that such arguments are helpful to his cause of achieving a balanced position on risk. Margolis's proposition that choices about risk should be founded on the principle that, taken overall, regulation should 'do no harm', and that the hidden costs to society (and, therefore, ultimately to individuals) of over-regulation should be brought to the fore, seems eminently sensible. However, the practicalities are a different matter. What is the benefit to politicians in opposing popular crusades against environmental risks?
He suggests that institutional arrangements be made to ensure that somebody (analogous to the Office of the US Surgeon General) is responsible for seeing that the overall risk-benefit balance is fully perceived when regulations concerning environmental risks are made, so that costs can be appreciated in their entirety. This book is an interesting examination of lay/expert conflict over environmental risks, and it puts forward an attractive thesis. Although his approach to the underlying problem is persuasive, I am not convinced that the practical solution Margolis proposes would break the logjam. The reader must also contend with jargon and (what is to me) some rather strange use of English, such as the use of 'psychic' for 'psychological', which can make for disjointed reading. I would, however, recommend this book to anyone who wants to be better informed about the nature of the conflict between the expert and the general public over environmental risks.