Comment on “Data Representativeness for Risk Assessment” by Rosemary Mattuck et al., 2005
Following a discussion of the difference between risk assessment for chronic and acute exposures, Mattuck et al. (2005) focus on important factors to consider in sampling designs for chronic exposure decision units. They conclude that, “Probability-based sampling designs are the best for supporting risk assessment because they are unbiased, provide a reliable estimate of variability, and allow statistical inferences to be made from the sample data set” (p. 70). They illustrate the computation of the number of samples required to obtain a reliable estimate of the arithmetic mean concentration and the 95% upper confidence limit (UCL) on the mean. Comments are also offered on the comparative utility of discrete and composite samples. We believe that these issues require further discussion.

The equation recommended to compute the number of samples required to estimate a mean contaminant concentration with a specified level of confidence is based on an assumed normal distribution of sample concentrations. However, most data sets representing contaminant concentrations in discrete samples are not normally distributed. For typical low concentrations, there is a natural lower boundary (zero or the detection limit) but no practical upper boundary, so distributions are often skewed toward higher concentrations. In the absence of an acceptable normalizing transformation, the coefficient of variation (CV) calculated from such data is invalid and can be very large. Table 1 in Mattuck et al. includes sample number estimates based on CVs of up to 250%. Such very large CVs are clearly invalid and lead to predictions of sample numbers and UCLs that are both invalid and unrealistic.

When discrete sample contaminant concentrations are non-normally distributed, composite sampling becomes especially attractive because the distribution of composite sample concentrations will often approach normality, in accordance with the Central Limit Theorem. Mattuck et al. acknowledge the benefits of composite sampling to estimate mean concentrations with improved precision and at lower cost. However, they also qualify this by stating, “. . . composites can be problematic when used for statistical tests of parameters that rely on estimates of the sample variance, such as the 95% UCL. . . .” (p. 69). They also claim that “the 95% UCL of the composites may underestimate the true UCL and thus may underestimate risk at the site” (p. 69).

First, we must remember that the 95% UCL is a measure of the uncertainty in our estimate of the mean concentration. It is not a prediction of the highest concentration within an exposure area. The objective should be to produce a reliable estimate of the mean and of the uncertainty in the mean. Second, any estimate of the 95% UCL on the mean, computed using a standard deviation derived from a seriously non-normal distribution, is invalid. In contrast, one derived from replicate composite samples is far more likely to be valid, because it better conforms to normality requirements.

Mattuck et al. state that, “Data sets with high variability or a small number of samples will have a UCL that may be several times higher than the mean concentration” (p. 66). How can one place any reliance on such an estimate? We believe that an acceptable sampling plan must generate a 95% UCL that is never more than twice the mean concentration estimate, and preferably one much closer to the mean than that. Finally, it must be noted that there is no true 95% UCL: it is a probability-based estimate derived from a limited number of samples from a much larger population.

These points are illustrated by a typical data set for 2,4-dinitrotoluene (2,4-DNT) concentrations in a 10 m × 10 m area at an artillery firing point (Walsh et al., 2005). After dividing the area into 100 equal-sized grids, a discrete surface sample (0–2.5 cm) was collected from a random location within each grid.
These samples, which ranged from 39 to 82 grams, were analyzed without subsampling. Concentration estimates varied from 0.0007 to 6.4 μg/g. A normal probability plot (Figure 1) was clearly nonlinear, demonstrating that the data were not normally distributed. If we nevertheless compute the mean and standard deviation, despite their lack of validity, we obtain estimates of 1.10 and 1.17 μg/g, respectively (CV = 106%).
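The arithmetic above can be checked directly from the reported summary statistics. The sketch below assumes the conventional one-sided 95% UCL on the mean (here using the normal quantile, an adequate approximation to Student's t at n = 100) and a standard normal-theory sample-size formula; neither is necessarily the exact equation used by Mattuck et al., and the 25% relative margin of error is a hypothetical choice for illustration:

```python
from math import ceil
from statistics import NormalDist

# Reported summary statistics for the 2,4-DNT discrete samples
# (Walsh et al., 2005): 100 discrete surface samples.
mean, sd, n = 1.10, 1.17, 100  # concentrations in ug/g

cv = sd / mean                    # coefficient of variation (~1.06, i.e., 106%)
z95 = NormalDist().inv_cdf(0.95)  # one-sided 95% quantile (~1.645);
                                  # close to Student's t for n = 100

# One-sided 95% UCL on the mean: xbar + z * s / sqrt(n)
ucl = mean + z95 * sd / n ** 0.5

# Illustrative normal-theory sample number for estimating the mean to
# within a relative margin d (d = 0.25 is a hypothetical choice):
d = 0.25
n_required = ceil((z95 * cv / d) ** 2)

print(f"CV = {cv:.0%}")                  # ~106%
print(f"95% UCL = {ucl:.2f} ug/g")       # ~1.29 ug/g
print(f"UCL / mean = {ucl / mean:.2f}")  # ~1.18
print(f"n for 25% relative error: {n_required}")
```

Note that with n = 100 the UCL is only about 1.2 times the mean, within the factor-of-two criterion proposed above; the objection stands, however, that both the CV and the UCL rest on normality assumptions that this strongly skewed data set does not satisfy.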
[1] A. Wait et al. Data Representativeness for Risk Assessment, 2005.
[2] Michael R. Walsh et al. Collection Methods and Laboratory Processing of Samples from Donnelly Training Area Firing Points, Alaska, 2003, 2005.
[3] Thomas A. Ranney et al. Representative Sampling for Energetic Compounds at Military Training Ranges, 2005.