Towards an Analytical Framework for Evaluating Student Learning in Computer Science Courses

This poster presents an overview of ongoing work in the Computer Science Department to assess the learning occurring in multiple undergraduate courses in an analytical manner that will facilitate semester-to-semester and institution-to-institution comparisons. It describes the types of assessments created (which are course-specific, based on the ACM model content areas the instructor identifies as covered), their use, the data analysis performed and the conclusions that can be drawn. Limited initial data is also presented.

Analytical Framework

Most educational assessment work is complicated by the need to control for the impact of the experiment in some way. The need for student choice, combined with year-to-year (or even semester-to-semester) differences in student capabilities and other confounding factors, makes conducting a valid educational experiment problematic. Some have responded by creating elaborate multi-school controlled experiments, while others have opted to report on their own work and leave its potential extrapolation as a subject for others to investigate. However, if a well-known set of standards existed to compare performance against (for example, using a pre-test, post-test approach that allows a performance difference to be ascertained), conducting assessment of educational innovations, both large and minute, would be much easier.

The same information that can enable experimental work in educational assessment can facilitate formative assessment as well. In too many cases, the fundamental question of what (if anything) the students learned, and how much, is replaced with surrogate questions about student perceptions of instruction and other (possibly correlating, in some cases) topics. Being able to assess students' starting and ending points would enable a more effective assessment of which techniques, styles and other inputs into the learning process work well and which do not.

We are working to develop these standards for the Computer Science discipline. A pilot project run during the Spring 2014 semester at the University of North Dakota conducted pre- and post-test based assessment of three 100-level computer science courses and one 300-level computer science course; the analysis of the resulting paired scores is sketched below. We plan to expand this during the Fall 2014 semester to incorporate other schools in North Dakota and beyond. This work seeks to generate the basics of a standard that can be used on a national or international scale.
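To make the pre-test, post-test comparison concrete, the following is a minimal sketch (in Python) of how paired scores from one course section might be summarized. The 0-100 score scale, the function names and the sample data are illustrative assumptions, not part of the assessment instrument itself.

```python
# Minimal sketch: summarizing paired pre-/post-test scores for one section.
# Assumes scores on a 0-100 scale; all names and data here are hypothetical.
from statistics import mean, stdev

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: the fraction of available improvement realized."""
    if max_score - pre == 0:
        return 0.0  # student started at the ceiling; no room to improve
    return (post - pre) / (max_score - pre)

def summarize(scores: list[tuple[float, float]]) -> dict:
    """Summarize (pre, post) score pairs for one course section."""
    diffs = [post - pre for pre, post in scores]
    gains = [normalized_gain(pre, post) for pre, post in scores]
    return {
        "n": len(scores),
        "mean_pre": mean(pre for pre, _ in scores),
        "mean_post": mean(post for _, post in scores),
        "mean_diff": mean(diffs),
        "sd_diff": stdev(diffs) if len(diffs) > 1 else 0.0,
        "mean_normalized_gain": mean(gains),
    }

# Hypothetical section data: (pre-test, post-test) scores per student.
section = [(35.0, 70.0), (50.0, 80.0), (60.0, 75.0), (20.0, 55.0)]
print(summarize(section))
```

Because the normalized gain is scale-free, section means computed this way could be compared against the regional or national averages the framework aims to establish, rather than only against raw score differences.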
Background

Evaluating the performance of courses, education approaches and educators is a subject that provokes no shortage of problems. Significant disagreement exists regarding how to achieve the best results for students, or even what results should be generated and assessed [1-3]. Others fear that establishing evaluative criteria may allow an administration to 'clean house' of those not subscribing to a particular approach. O'Mahony and Garavan [4] contend that this "managerialism" perception (the notion that university leadership uses systems to manage the school in a business-like way and to "advance strategic objectives") can be problematic for some. However, a robust approach that considers knowledge, skill and experience attainment may identify numerous benefits, along with the trade-offs that must be made to obtain each. These may include benefits beyond what is typically assessed, such as enhanced creativity [5], motivation and self-image [6] and job placement benefits [7].

Current policy makers perceive an ever-growing cost of higher education [8] with generally positive results, though such assessments suffer from the difficulty of de-confounding the selection effect (who seeks to attend, and is admitted to, college) from the impact of the college's educational services [9]. Baum, Kurose and McPherson [9] proffer that value is being created; however, its specific characterization is elusive, even though student earnings differentials [9, 10] demonstrate the presence of significant value. Many metrics show U.S. education systems trailing those of other countries, across all levels (e.g., [11, 12]). However, these measures may exclude metrics (such as the hands-on experience generated by project-based [13] and other experiential education techniques [14, 15]) on which the U.S. may perform more favorably. Alston et al. [16] indicate that many of these other skills are key indicators of students' ability to succeed in the workplace. Quite pragmatically, if the educational community doesn't take the lead in developing metrics for higher education institutional success, others may do so instead, as was the case with K-12 education [17].

Conclusion

This paper has presented an overview of work to date on the development of a standardized assessment tool for Computer Science education.
It has described current progress, presented limited results and described the planned next steps. This nascent effort would benefit greatly from the participation of instructors of Computer Science courses everywhere. Through these increased numbers we can build a question set and performance dataset that can enable future research, formative and evaluative assessment in computer science. This revised assessment model will be based on quantitative data about students' performance, measured against a standardized set of criteria. The ability to quickly compare local performance, using this standard, to national and regional averages should facilitate expedient analysis and enable more, and more rapid, work in this area.

Next Steps

We are currently working to develop a similar examination for CSCI 242 (Computer Science III), which we plan to give at UND for the first time during the Fall 2014 semester. We are also working to identify and coordinate with other colleges and universities around the State of North Dakota to participate in an expanded trial during the Fall 2014 semester. The existence of commonly defined courses in North Dakota enables this extension. We will still, however, ask each instructor to define which areas of the ACM Model Curriculum he or she is covering, to ensure that only relevant areas are tested. We are seeking participants from other areas around the country as well. For those that opt to participate, we will create a customized examination based on the areas of the ACM Model Curriculum indicated as covered; a sketch of this selection process appears below. We are gradually expanding our question bank to include questions from all areas identified in the Model Curriculum.
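The customization step just described could work along the lines of the following hypothetical sketch. The knowledge-area names, the question-bank structure and the questions-per-area policy are assumptions for illustration; the actual examinations are assembled from whichever ACM Model Curriculum areas an instructor reports covering.

```python
# Hypothetical sketch: assembling a course-specific examination from a
# question bank tagged with ACM Model Curriculum knowledge areas.
import random

# Each question is tagged with the knowledge area it tests (illustrative).
QUESTION_BANK = [
    {"id": "q1", "area": "Software Development Fundamentals", "text": "..."},
    {"id": "q2", "area": "Software Development Fundamentals", "text": "..."},
    {"id": "q3", "area": "Algorithms and Complexity", "text": "..."},
    {"id": "q4", "area": "Programming Languages", "text": "..."},
    # ...a full bank would cover every area in the Model Curriculum
]

def build_exam(covered_areas: set[str], per_area: int = 2,
               seed: int = 0) -> list[dict]:
    """Draw questions only from the areas the instructor reports covering."""
    rng = random.Random(seed)  # seeded so exam assembly is reproducible
    exam = []
    for area in sorted(covered_areas):
        pool = [q for q in QUESTION_BANK if q["area"] == area]
        exam.extend(rng.sample(pool, min(per_area, len(pool))))
    return exam

# An instructor covering two areas receives questions drawn from those alone.
exam = build_exam({"Software Development Fundamentals",
                   "Algorithms and Complexity"})
print([q["id"] for q in exam])
```

Drawing every examination from one shared, tagged bank is what would keep scores comparable across courses that cover different subsets of the curriculum, since each question traces back to the same standard.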

References

[1] A. Korzh. What are we educating our youth for? European Education 45(1), pp. 50-73. 2013.
[2] E. C. Lagemann and H. Lewis. What is College For?: The Public Purpose of Higher Education. 2012.
[3] M. Tomlinson. Graduate employability: A review of conceptual and empirical themes. Higher Education Policy 25(4), pp. 407-431. 2012.
[4] K. O'Mahony and T. N. Garavan. Implementing a quality management framework in a higher education organisation: A case study. Quality Assurance in Education 20(2), pp. 184-200. 2012.
[5] A. Ayob, R. A. Majid, A. Hussain and M. M. Mustaffa. Creativity enhancement through experiential learning. Advances in Natural and Applied Science 6(2), pp. 94-99. 2012.
[6] Y. Doppelt. Implementation and assessment of project-based learning in a flexible environment. International Journal of Technology and Design Education 13(3), pp. 255-272. 2003.
[7] N. Hotaling, B. B. Fasse, L. F. Bost, C. D. Hermann and C. R. Forest. A quantitative analysis of the effects of a multidisciplinary engineering capstone design course. Journal of Engineering Education 101(4), pp. 630-656. 2012.
[8] G. L. Brown. Dissolving the iron triangle: Increasing access and quality at reduced cost in public higher education. Master's thesis, George Mason University. 2012. Available: http://digilib.gmu.edu/dspace/bitstream/1920/791
[9] S. Baum, C. Kurose and M. McPherson. An overview of American higher education. The Future of Children 23(1). 2013.
[10] P. Oreopoulos et al. Making college worth it: A review of the returns to higher education. The Future of Children 23(1). 2013.
[11] P. E. Peterson et al. Is the U.S. catching up? Education Next. 2012.
[12] J. L. Vigdor. Solving America's math problem. Education Next. 2013.
[13] L. Young et al. Strategies for sustaining quality in PBL facilitation for large student cohorts. Advances in Health Sciences Education. 2012.
[14] M. B. Horn. The transformational potential of flipped classrooms: Different strokes for different folks. Education Next. 2013.
[15] N. Savage. Game changer. Communications of the ACM. 2012.
[16] A. J. Alston et al. The importance of employability skills as perceived by the employers of United States' land-grant college and university graduates. 2009.
[17] B. Fuller et al. Gauging growth: How to judge No Child Left Behind? Educational Researcher. 2007.