What is at stake in knowing the content and capabilities of children’s minds?

Many significant changes in perspective have to take place before efforts to learn the content and capabilities of children’s minds can hold much sway in educational testing. The language of testing, especially of high-stakes testing, remains firmly in the realm of ‘behaviors’, ‘performance’, and ‘competency’ defined in terms of behaviors, test items, or observations. What is on children’s minds is not treated as integral to the processes of test design and interpretation. The point of this article is to argue that behaviorist-based validation models are ill-founded, and to recommend basing tests on cognitive models that theorize the content and capabilities of children’s minds in terms of such features as meta-cognition, reasoning strategies, and principles of sound thinking. This approach is the one most likely to yield the construct validity that many testing theorists have long endorsed. The article closes by exploring implications of adopting a cognitive basis for testing, some of which may unsettle current practice.
