This paper describes three different techniques for fitting straight lines to experimental data and discusses the corresponding evaluation of uncertainty. The techniques are (i) traditional fitting by least squares, (ii) a Bayesian linear-regression analysis and (iii) an analysis based on the propagation of probability density functions attributed to the measured points. The material is presented so as to clarify the assumptions underlying the techniques, to highlight differences between them and to point to difficulties that arise in applying them under current views of 'uncertainty analysis'. Considerable attention is given to the estimation of values of the function, not just to the estimation of its parameters. The paper summarizes many results of least-squares fitting, including some unfamiliar results for the simultaneous estimation of the unknown function at all points. On many occasions the unknown function will be only approximately linear, in which case a unique unknown gradient must be defined for an 'estimate' of slope to have proper meaning. This can be achieved by defining an interval of interest and then applying a least-squares-type result.
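To make the contrast between techniques (i) and (iii) concrete, the sketch below (in Python with NumPy) fits the line y = a + bx by unweighted least squares, evaluates the standard uncertainty of the fitted function at an arbitrary point, and then propagates normal probability density functions attributed to the measured y-values by Monte Carlo resampling. It is a minimal illustration under assumed independent, equal-variance normal errors; the function names, the equal-weights choice and the normal densities are our assumptions for illustration, not the paper's own prescription.

    import numpy as np

    def fit_line_ols(x, y):
        # Unweighted least-squares fit of y = a + b*x.
        # Returns estimates a, b, their standard uncertainties u_a, u_b,
        # and the residual standard deviation s (n - 2 degrees of freedom).
        x, y = np.asarray(x, float), np.asarray(y, float)
        n, xbar = x.size, x.mean()
        Sxx = np.sum((x - xbar) ** 2)
        b = np.sum((x - xbar) * (y - y.mean())) / Sxx
        a = y.mean() - b * xbar
        s2 = np.sum((y - a - b * x) ** 2) / (n - 2)   # residual variance
        u_b = np.sqrt(s2 / Sxx)
        u_a = np.sqrt(s2 * (1 / n + xbar ** 2 / Sxx))
        return a, b, u_a, u_b, np.sqrt(s2)

    def predict(x0, x, y):
        # Estimate of the function value at x0 and its standard uncertainty,
        # u(yhat) = s * sqrt(1/n + (x0 - xbar)^2 / Sxx).
        a, b, _, _, s = fit_line_ols(x, y)
        x = np.asarray(x, float)
        xbar = x.mean()
        Sxx = np.sum((x - xbar) ** 2)
        u = s * np.sqrt(1 / x.size + (x0 - xbar) ** 2 / Sxx)
        return a + b * x0, u

    def fit_line_mc(x, y, u_y, trials=100_000, seed=1):
        # Technique (iii), sketched: attribute a normal density N(y_i, u_y_i)
        # to each measured point, redraw and refit many times, and summarise
        # the resulting distributions of the intercept and slope.
        rng = np.random.default_rng(seed)
        x = np.asarray(x, float)
        xbar = x.mean()
        Sxx = np.sum((x - xbar) ** 2)
        draws = rng.normal(y, u_y, size=(trials, x.size))
        bs = (draws - draws.mean(axis=1, keepdims=True)) @ (x - xbar) / Sxx
        as_ = draws.mean(axis=1) - bs * xbar
        return (as_.mean(), as_.std(ddof=1)), (bs.mean(), bs.std(ddof=1))

When the attributed standard deviations u_y match the actual scatter of the data, the Monte Carlo spread of the slope reproduces the analytical standard uncertainty u(b) = s/sqrt(Sxx); it is in the treatment of function values, and of their simultaneous estimation at all points, that the techniques diverge in the ways the paper examines.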