The Metaevaluation Imperative

The evaluation field has advanced sufficiently in its methodology and public service that evaluators can and should subject their evaluations to systematic metaevaluation. Metaevaluation is the process of delineating, obtaining, and applying descriptive and judgmental information about an evaluation's utility, feasibility, propriety, and accuracy, as well as its systematic nature, competence, integrity/honesty, respectfulness, and social responsibility, in order to guide the evaluation and publicly report its strengths and weaknesses. Formative metaevaluations, conducted while an evaluation is underway, help evaluators plan, conduct, improve, interpret, and report their studies. Summative metaevaluations, conducted after an evaluation is completed, help audiences see the evaluation's strengths and weaknesses and judge its merit and worth. Metaevaluations serve public, professional, and institutional interests by assuring that evaluations provide sound findings and conclusions, that evaluation practices continue to improve, and that institutions administer efficient, effective evaluation systems. Professional evaluators are increasingly taking their metaevaluation responsibilities seriously, but they need additional tools and procedures to apply their standards and principles of good evaluation practice.
