Forming a comparison reference value from different distributions of belief

If measurement uncertainty is to be expressed by forming a distribution of belief for the measurand, then this should be reflected in the theory underlying the analysis of interlaboratory comparison data. This paper presents a corresponding method for the calculation of a reference value in a comparison where each laboratory independently assigns a probability density function, fi(x), to the value X of a stable artefact. Straightforward argument shows that a consensus density function for X can be taken as f(x) = c f1(x) f2(x) ... fn(x), where c is a normalizing constant, assuming that the densities of the n laboratories are reliable and mutually consistent. A method is also presented for examining this consistency. The result f(x) is a special case of a consensus known in the statistical literature as the logarithmic opinion pool. The key comparison reference value might be taken as the mean, median or mode of f(x). The method developed does not explicitly involve laboratory biases (offsets), which are important features of most recently published methods for comparison analysis and which are relevant to the calculation of degrees of equivalence. The belief approach does not seem well suited when such offsets are assumed to exist.
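The sketch below, which is not from the paper, illustrates the product-of-densities consensus numerically under the simplifying assumption that each laboratory's density fi(x) is Gaussian with a stated estimate and standard uncertainty; the numbers and variable names are purely illustrative. It forms the normalized product f(x) on a grid and reports its mean, median and mode as candidate reference values.

```python
import numpy as np

# Hypothetical laboratory estimates x_i and standard uncertainties u_i
lab_means = np.array([10.02, 10.05, 9.98, 10.01])
lab_stds = np.array([0.03, 0.05, 0.04, 0.02])

# Grid covering the plausible range of the artefact value X
x = np.linspace(9.8, 10.2, 20001)

# Log of the unnormalized consensus density: sum of the log Gaussian densities
log_f = np.zeros_like(x)
for m, s in zip(lab_means, lab_stds):
    log_f += -0.5 * ((x - m) / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))

# Exponentiate after rescaling (for numerical stability), then normalize to integrate to 1
f = np.exp(log_f - log_f.max())
f /= np.trapz(f, x)

# Candidate key comparison reference values: mean, median and mode of f(x)
mean_ref = np.trapz(x * f, x)
cdf = np.cumsum(f) * (x[1] - x[0])
median_ref = x[np.searchsorted(cdf, 0.5)]
mode_ref = x[np.argmax(f)]
print(mean_ref, median_ref, mode_ref)

# Consistency check under the Gaussian assumption: the product of Gaussians is again
# Gaussian, with precision-weighted mean, which should agree closely with mean_ref
w = 1.0 / lab_stds**2
print(np.sum(w * lab_means) / np.sum(w))
```

In this Gaussian special case the consensus mean coincides with the familiar weighted mean; with non-Gaussian laboratory densities the mean, median and mode of f(x) generally differ, which is why the choice among them is left open in the text.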
