Evaluation of the use of consensus values in proficiency testing programmes

Proficiency testing (PT) is an essential tool used by laboratory accreditation bodies to assess the competence of laboratories. Because PT providers often have limited resources, or for other practical reasons, the assigned reference value used in the calculation of z-scores is usually derived from a consensus value obtained with a central tendency estimator such as the arithmetic mean or the robust mean. However, if the assigned value deviates significantly from the ‘true value’ of the analyte in the test material, laboratory performance will be evaluated incorrectly. This paper evaluates the use of consensus values in proficiency testing programmes using Monte Carlo simulation. The results indicate that the deviation of the assigned value from the true value can be as large as 40%, depending on the parameters of the proficiency testing programme under investigation, such as sample homogeneity, number of participating laboratories, concentration level, method precision and laboratory bias. To study how these parameters affect the degree of discrepancy between the consensus value and the true value, a fractional factorial design was also applied. The findings indicate that the number of participating laboratories and the distribution of laboratory bias were the two principal factors affecting the deviation of the consensus value from the true value.
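As a rough illustration of the setup described above, the following minimal sketch (Python with NumPy) simulates repeated PT rounds: laboratory results are drawn around an assumed true value with lab-specific bias and method random error, a consensus assigned value is taken as the median (standing in here for a robust mean), and z-scores are computed against that consensus. All parameter values (true_value, n_labs, method_rsd, bias_rsd, sigma_p, n_trials) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Illustrative parameters (assumptions, not values from the paper) ---
true_value = 10.0            # "true" analyte concentration in the test material
n_labs = 20                  # number of participating laboratories
method_rsd = 0.05            # within-lab (method) relative standard deviation
bias_rsd = 0.10              # spread of between-lab bias
sigma_p = 0.05 * true_value  # standard deviation for proficiency assessment
n_trials = 10_000            # Monte Carlo trials

deviations = []
for _ in range(n_trials):
    # Each lab reports the true value perturbed by its own bias and random error
    lab_bias = rng.normal(0.0, bias_rsd * true_value, n_labs)
    random_error = rng.normal(0.0, method_rsd * true_value, n_labs)
    results = true_value + lab_bias + random_error

    # Consensus assigned value: the median, used here as a simple robust estimator
    assigned = np.median(results)
    deviations.append((assigned - true_value) / true_value)

deviations = np.asarray(deviations)
print(f"Mean relative deviation of consensus from true value: {deviations.mean():+.2%}")
print(f"95th percentile of |deviation|: {np.percentile(np.abs(deviations), 95):.2%}")

# z-scores for the last simulated round, using the consensus as the assigned value
z = (results - assigned) / sigma_p
print("Example z-scores:", np.round(z[:5], 2))
```

In this sketch, the spread of the simulated deviations gives a feel for how far a consensus value can drift from the true value as the number of laboratories and the bias distribution are varied.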
