How Evaluators Can Use a Complex Systems Lens to Get “Untrapped” From Limiting Beliefs and Assumptions

Evaluators are increasingly aware that, to provide maximum benefit to the programs they evaluate, they must address the systems within which those programs operate. Evaluation practice, however, is often shaped by beliefs and assumptions rooted in an understanding of these systems as stable and predictable. We identify four traps that can limit the usefulness of program evaluations when the instability and uncertainty of systems go unacknowledged. We explore how evaluators can get “untrapped” by using a complex systems lens to interrogate their beliefs and assumptions about (1) the purpose of the evaluation, (2) the definition of the evaluand and the boundaries of the evaluation, (3) the use of controlled evaluation methodologies to learn about complex systems, and (4) the meaning of “going to scale.” Drawing on our work on a national cross-site evaluation, we provide examples of how a complex systems lens can increase the merit, worth, and significance of program evaluations.
