Replication, Validation, and Use of a Language Independent CS1 Knowledge Assessment

Computing education lags other discipline-based education research in the number and range of validated assessments available to the research community. Validated assessments matter because they reduce experimental error due to flawed instruments and allow comparisons across experiments. Although the need is great, building assessments from scratch is difficult, and once an assessment is built, being able to replicate it is important for addressing problems within it or extending it. We developed the Second CS1 Assessment (SCS1) as an isomorphic version of a previously validated, language-independent assessment of introductory computer science knowledge, the FCS1. Replicating the FCS1 is important to enable free use by a broader research community. This paper documents our process for replicating an existing validated assessment and for validating the success of our replication. We also present initial uses of SCS1 by other research groups as examples of where it might be applied in the future. SCS1 is useful for researchers, but care must be taken to avoid undermining its validity argument.
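To make concrete what validating a replicated test form can involve, the sketch below shows one common kind of check: comparing total-score correlation and per-item difficulty between two isomorphic forms. This is a minimal illustration under stated assumptions, not the paper's actual statistical procedure; the Rasch-like response simulation, sample size, item count, and the 0.15 difficulty-gap threshold are all invented for demonstration.

```python
# Hypothetical sketch (not the authors' analysis): compare two isomorphic test
# forms on total-score correlation and per-item difficulty. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_items = 50, 27          # illustrative sizes only

# Simulate responses from a simple Rasch-like model so both forms share the
# same latent abilities and item difficulties.
ability = rng.normal(0.0, 1.0, size=n_people)
difficulty = rng.normal(0.0, 1.0, size=n_items)

def simulate_form():
    logits = ability[:, None] - difficulty[None, :]
    return (rng.random((n_people, n_items)) < 1 / (1 + np.exp(-logits))).astype(int)

form_a = simulate_form()            # stand-in for FCS1 responses (1 = correct)
form_b = simulate_form()            # stand-in for SCS1 responses

# Total-score correlation between forms: a high r suggests the two forms rank
# test-takers similarly.
r, p = stats.pearsonr(form_a.sum(axis=1), form_b.sum(axis=1))
print(f"total-score correlation r = {r:.2f} (p = {p:.3f})")

# Per-item difficulty (proportion correct) on each form; large gaps flag items
# whose translation between forms may not be equivalent.
for i, (da, db) in enumerate(zip(form_a.mean(axis=0), form_b.mean(axis=0)), 1):
    if abs(da - db) > 0.15:
        print(f"item {i}: difficulty {da:.2f} vs {db:.2f} -- review this item")
```

In practice such checks would be run on responses from a counterbalanced administration of the two forms rather than simulated data; the sketch only shows the shape of the comparison.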
