Interlaboratory comparisons are the most powerful tools for determining the competence of laboratories performing calibrations and testing. Performance metrics are based on statistical analysis, which can be very complex in certain cases, especially in testing, where transfer standards (samples) are prepared by the pilot laboratory. Statistical quantities are calculated using different kinds of software, from simple Excel applications to universal or specialized commercial programmes. To ensure the proper quality of such calculations, it is very important that all computational links are recognized explicitly and known to be operating correctly. In order to introduce a traceability chain into metrology computation, the European project EMRP NEW 06 TraCIM was agreed between the EC and the European Association of National Metrology Institutes (EURAMET). One of the tasks of the project was to establish random datasets and validation algorithms for verifying software applications that evaluate interlaboratory comparison results. The statistical background for resolving this task and the basic concept of the data generator are presented in this paper. The underlying normative documents, the calculated statistical parameters, and the boundary conditions for generating reference datasets are described, as well as the customer interface. © 2014 PEI, University of Maribor. All rights reserved.
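The statistical evaluation the abstract refers to typically starts from a weighted-mean comparison reference value with a chi-squared consistency check of the participants' results, as in Cox's well-known procedure for evaluating key comparison data. A minimal sketch of that calculation in Python follows; the function names are illustrative, not taken from the TraCIM software itself:

```python
import math

def weighted_mean_reference(values, uncertainties):
    """Weighted-mean reference value and its standard uncertainty.

    Each result is weighted by the inverse square of its stated
    standard uncertainty (Cox's procedure A for key comparison data).
    """
    weights = [1.0 / u**2 for u in uncertainties]
    y = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    u_y = math.sqrt(1.0 / sum(weights))
    return y, u_y

def chi_squared_observed(values, uncertainties, y):
    """Observed chi-squared statistic for the consistency check.

    Compared against the 95th percentile of the chi-squared
    distribution with n - 1 degrees of freedom; exceeding it
    indicates the set of results is not mutually consistent.
    """
    return sum((x - y)**2 / u**2 for x, u in zip(values, uncertainties))

# Example with three hypothetical laboratory results:
values = [10.1, 9.9, 10.0]
uncertainties = [0.1, 0.1, 0.1]
y, u_y = weighted_mean_reference(values, uncertainties)
chi2 = chi_squared_observed(values, uncertainties, y)
```

A reference-data generator for validating such software can then produce random datasets with known inputs and independently computed values of `y`, `u_y`, and the chi-squared statistic, against which a customer's software output is compared.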