Monitoring the calibration status of a measuring instrument by a stochastic model

The paper discusses a class of stochastic models for evaluating the optimal calibration interval of measuring instruments. The models are based on the assumption that the calibration status of a measuring instrument can be monitored through a single observable parameter, which undergoes a stochastic drift process. The paper introduces and compares stochastic drift models of different natures and estimates the first-passage time of the monitored parameter across a preset limit. The calibration interval is determined as a suitable percentile of the first-passage-time distribution. Finally, a preliminary validation of the model, based on a sample of experimental data collected on a class of instruments, is reported.
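To make the percentile-based approach concrete, the following is a minimal Monte Carlo sketch, assuming the monitored parameter follows a Wiener process with drift (one possible drift model; the paper compares models of different natures, and the drift rate, diffusion coefficient, limit, and percentile used here are illustrative assumptions, not values from the paper).

```python
import numpy as np

# Illustrative sketch: the monitored parameter is assumed to follow a Wiener
# process with drift; the calibration interval is taken as a low percentile of
# the simulated first-passage-time distribution across a preset limit.

rng = np.random.default_rng(0)

mu = 0.02         # assumed drift rate of the monitored parameter (units/day)
sigma = 0.05      # assumed diffusion coefficient (units/sqrt(day))
limit = 1.0       # preset limit on the monitored parameter
dt = 1.0          # time step (days)
horizon = 400     # maximum simulated number of steps
n_paths = 10_000  # number of Monte Carlo paths
percentile = 5    # e.g. 5th percentile: ~95% of instruments still within limit

# Simulate increments of the drift process for all paths at once.
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, horizon))
paths = np.cumsum(increments, axis=1)

# First-passage time: first step at which each path exceeds the preset limit.
crossed = paths >= limit
first_passage = np.where(crossed.any(axis=1), crossed.argmax(axis=1) + 1, np.inf) * dt

# Calibration interval as a percentile of the first-passage-time distribution.
finite = first_passage[np.isfinite(first_passage)]
calibration_interval = np.percentile(finite, percentile)
print(f"Suggested calibration interval: {calibration_interval:.1f} days")
```

Choosing a low percentile keeps the probability that an instrument drifts beyond the limit before recalibration correspondingly small; a different drift model (e.g., a deterministic drift with random rate) would change the simulated increments but not the percentile logic.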