Why organizations (do not) evaluate: a search for necessary and sufficient conditions

The wide acceptance of evaluation in today's evidence-based society may hide significant variation in the extent of evaluation activeness across public sector organizations. Evidence explaining these differences is only fragmentarily available. Admittedly, multiple explanatory factors have been identified in the evaluation community, mainly in the evaluation capacity building literature. Yet, in keeping with the practical character of the field, these insights are mainly anecdotal and have seldom been systematically tested. Thus far, the only certainty is that ‘contingency’ matters. The inherently contingent nature of evaluation practices should not, however, discourage us from gathering more systematic insight into what explains differences in the extent of evaluation activeness. It is not clear, after all, to what degree contingency reigns; the question is whether more parsimonious patterns can nonetheless be discerned once the complexity is tackled head-on. The present paper takes up this challenge. Through a systematic comparison of 27 public sector organizations of the Flemish administration (Belgium), applying several configurational comparative techniques (MSDO/MDSO and csQCA), the analysis identifies a range of necessary and sufficient (combinations of) conditions for the (non-)conduct of evaluations.
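To make the set-theoretic logic behind csQCA concrete, the sketch below shows, in Python and with invented 0/1 codings rather than the paper's actual data, how consistency and coverage scores in the sense of Ragin (2006) can be computed for a candidate sufficient condition. The variable names and codings are purely illustrative assumptions, not the study's conditions.

```python
# A minimal sketch of the crisp-set test underlying csQCA. Each case is
# coded 0/1 on a condition X and the outcome Y. Consistency and coverage
# follow Ragin's (2006) set-relation measures.

def consistency(x, y):
    """Share of cases exhibiting the condition that also show the outcome.
    A value near 1.0 supports reading X as sufficient for Y."""
    outcomes_given_condition = [yi for xi, yi in zip(x, y) if xi == 1]
    return sum(outcomes_given_condition) / len(outcomes_given_condition)

def coverage(x, y):
    """Share of outcome cases accounted for by the condition.
    Low coverage flags a sufficient condition as empirically trivial."""
    conditions_given_outcome = [xi for xi, yi in zip(x, y) if yi == 1]
    return sum(conditions_given_outcome) / len(conditions_given_outcome)

# Hypothetical codings for six organizations (not the paper's data):
X = [1, 1, 1, 0, 0, 1]  # e.g. "evaluation unit present"
Y = [1, 1, 1, 0, 1, 1]  # e.g. "organization conducts evaluations"

print(consistency(X, Y))  # 1.0 -> consistent with sufficiency of X for Y
print(coverage(X, Y))     # 0.8 -> covers most, but not all, outcome cases
```

Note that the same arithmetic, with the roles of X and Y swapped, yields the consistency of a necessity claim (Y is a subset of X), which is how necessary conditions for the (non-)conduct of evaluations would be screened.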
