Technical and Ethical Issues in Indicator Systems

Most indicator systems are top-down, published management systems, addressing primarily the issue of public accountability. In contrast, we describe here a university-based suite of "grass-roots," research-oriented indicator systems that are now subscribed to, voluntarily, by about one in three secondary schools and over 4,000 primary schools in England. The systems are also being used by groups in New Zealand, Australia and Hong Kong, and with international schools in 30 countries.

These systems would not have grown had they not been cost-effective for schools. Cost-effectiveness demanded the technical excellence that makes possible the provision of one hundred percent accurate data in a timely fashion. An infrastructure of powerful hardware and ever-improving software is needed, along with extensive programming to provide carefully chosen graphical and tabular presentations of data, giving at-a-glance comparative information. Highly skilled staff, always learning new techniques, have been essential, especially as we move into computer-based data collection. It has been important to adopt transparent, readily understood methods of data analysis where we are satisfied that these are accurate, and to model the processes that produce the data. This can mean, for example, modelling separate regression lines for 85 different examination syllabuses for one age group, because any aggregation can be shown to produce unfair comparisons (a sketch of this per-syllabus approach appears at the end of this section).

Ethical issues surprisingly often lurk in technical decisions. For example, reporting outcomes from a continuous measure as the percentage of students who surpass a certain level produces unethical behavior: a concentration of teaching on borderline students (illustrated numerically at the end of this section). Distortion of behavior and corruption of data are ever-present concerns in indicator systems.

The systems we describe would probably have failed to thrive had they not addressed schools' ongoing concerns about education. Moreover, data interpretation can only be completed in the schools, by those who know all the factors involved. Thus a commitment to working closely and collaboratively with schools in "distributed research" is important, along with "measuring what matters," not only achievement. In particular, the too-facile interpretation of correlation as causation that characterized much school effectiveness research had to be avoided, and the need for experimentation promoted and demonstrated.

Reasons for the exceptionally warm welcome from the teaching profession may include both threats (such as the unvalidated inspection regime run by the Office for Standards in Education) and opportunities (such as site-based management).
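The per-syllabus modelling mentioned above can be sketched in a few lines of code. This is a minimal illustration, not the centre's actual implementation: the column names (syllabus, baseline, exam_score), the synthetic data, and the use of simple least-squares fits are assumptions made for the example. The point it demonstrates is that when different syllabuses have different gradients, a single pooled regression line would predict the wrong score for students in every syllabus, so residuals (the "value added") must be computed against each syllabus's own line.

```python
import numpy as np
import pandas as pd

def fit_per_syllabus(df: pd.DataFrame) -> pd.DataFrame:
    """Fit a separate regression line (exam score on baseline measure)
    for each examination syllabus, rather than one pooled line."""
    rows = []
    for syllabus, group in df.groupby("syllabus"):
        # Ordinary least squares: exam_score = slope * baseline + intercept
        slope, intercept = np.polyfit(group["baseline"], group["exam_score"], 1)
        rows.append({"syllabus": syllabus, "slope": slope,
                     "intercept": intercept, "n": len(group)})
    return pd.DataFrame(rows)

def value_added(df: pd.DataFrame, fits: pd.DataFrame) -> pd.Series:
    """Residual for each student relative to their own syllabus's line;
    positive values mean the student did better than predicted."""
    merged = df.merge(fits, on="syllabus")
    predicted = merged["slope"] * merged["baseline"] + merged["intercept"]
    return merged["exam_score"] - predicted

# Tiny synthetic demonstration: two syllabuses with different gradients.
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "syllabus": ["A"] * 50 + ["B"] * 50,
    "baseline": rng.uniform(30, 70, 100),
})
demo["exam_score"] = np.where(
    demo["syllabus"] == "A",
    0.8 * demo["baseline"] + 10,   # syllabus A: shallower line
    1.2 * demo["baseline"] - 5,    # syllabus B: steeper line
) + rng.normal(0, 3, 100)

fits = fit_per_syllabus(demo)
print(fits)  # the slopes differ, so pooling the syllabuses would mislead
print(value_added(demo, fits).describe())
```

Scaling this loop to 85 syllabuses is mechanical; the substantive work, as the text stresses, lies in verifying that aggregation really is unfair before committing to the more complex model.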
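The distortion created by threshold reporting can also be shown with a small numeric sketch. The figures below are invented for illustration: nudging only the borderline students just past a cut-off inflates the percentage-above-threshold statistic while leaving the mean of the continuous measure almost unchanged, which is precisely the incentive to concentrate teaching on borderline students.

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(50, 10, 200)   # hypothetical continuous exam scores
threshold = 50.0                   # e.g. a grade borderline

def report(s: np.ndarray) -> str:
    pct = 100 * np.mean(s >= threshold)
    return f"mean = {s.mean():5.2f}, % above threshold = {pct:5.1f}"

print("before:", report(scores))

# "Teach to the borderline": lift only students within 2 points below
# the cut-off over the line, ignoring everyone else.
borderline = (scores >= threshold - 2) & (scores < threshold)
gamed = scores.copy()
gamed[borderline] = threshold

print("after: ", report(gamed))
# The percentage jumps while the mean barely moves: the threshold
# statistic rewards concentrating effort on borderline students.
```

A continuous summary such as the mean, or the per-syllabus residuals sketched above, gives no such incentive, since every student's improvement counts equally.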