An evaluation capacity building toolkit for principal investigators of undergraduate research experiences: A demonstration of transforming theory into practice.

This paper describes the approach and process undertaken to develop evaluation capacity among the leaders of a federally funded undergraduate research program. An evaluation toolkit was developed for Computer and Information Science and Engineering (CISE) Research Experiences for Undergraduates (REU) programs to address the ongoing need for evaluation capacity among the principal investigators who manage program evaluation. The toolkit resulted from collaboration within the CISE REU community and was intended to provide targeted instructional resources and tools for quality program evaluation. The central challenge was to balance the desire for standardized assessment with the responsibility to account for individual program contexts. Toolkit contents included instructional materials about evaluation practice, a standardized applicant management tool, and a modulated outcomes measure. Deployment of the toolkit yielded cost-effective, sustainable evaluation tools, a community evaluation forum, and aggregate measurement of key program outcomes for the national program. Lessons learned included the imperative of understanding the evaluation context, engaging stakeholders, and building stakeholder trust. Results from project measures are presented, along with a discussion of guidelines for facilitating evaluation capacity building that will serve a variety of contexts.
