Statistical Auditing of Non-transparent Expert Assessments

Abstract. A statistical model is developed for auditing the assessments of a panel of experts when little information is made available beyond the final announcement of the individual ratings awarded. The application is to the Research Assessment Exercise for UK universities. Based on the proportions of a department's publications that a panel deems to be of International standard, National standard or Unclassified, the panel rates the department's research output on a seven-point scale. The expert panel's remit is carefully interpreted, and the given ratings are modelled via an underlying trinomial random variable with a bivariate Normal approximation. A likelihood function is developed and maximised to obtain fitted ratings for all units of assessment. The model's fitted values explain the given ratings remarkably well, with few misclassifications, although some surprising outliers still require explanation. The procedure illustrates how statisticians might, perhaps surprisingly, model and audit the work of experts for consistency even when little or no information is provided beyond vague, previously published guidelines for the assessments and the final ratings given.
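
As a brief illustration of the kind of calculation involved (not taken from the paper itself), the sketch below approximates the (International, National) counts of a trinomial by a bivariate Normal and maximises the resulting log-likelihood to recover the underlying category proportions for a single, hypothetical unit of assessment. The counts, starting values and variable names are illustrative assumptions, not the paper's data or notation.

    # Minimal sketch (not the paper's implementation): bivariate Normal
    # approximation to a trinomial, with the log-likelihood maximised to
    # recover the underlying category probabilities from observed counts.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import multivariate_normal

    def normal_approx(n, p_int, p_nat):
        """Mean and covariance of the bivariate Normal approximation to the
        (International, National) counts of a trinomial with n trials."""
        mean = n * np.array([p_int, p_nat])
        cov = n * np.array([[p_int * (1 - p_int), -p_int * p_nat],
                            [-p_int * p_nat, p_nat * (1 - p_nat)]])
        return mean, cov

    def neg_log_lik(params, counts, n):
        """Negative log-likelihood of the observed (International, National)
        counts under the bivariate Normal approximation."""
        p_int, p_nat = params
        if p_int <= 0 or p_nat <= 0 or p_int + p_nat >= 1:
            return np.inf  # outside the probability simplex
        mean, cov = normal_approx(n, p_int, p_nat)
        return -multivariate_normal.logpdf(counts, mean=mean, cov=cov)

    # Hypothetical unit of assessment: 40 publications, of which 22 are judged
    # International standard, 15 National standard and 3 Unclassified.
    n = 40
    counts = np.array([22.0, 15.0])
    fit = minimize(neg_log_lik, x0=[0.4, 0.4], args=(counts, n),
                   method="Nelder-Mead")
    print(fit.x)  # fitted (p_International, p_National)

In the paper the likelihood is built from the published seven-point ratings rather than raw category counts, so this fragment conveys only the flavour of the Normal approximation and the maximisation step, not the actual fitting procedure.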