Several years ago, one of the authors of this chapter was privy to details of a large-scale writing assessment of junior high students. The students had been given a brief prompt asking them to think through how watching television affects people's thinking styles. One of the students involved in the assessment had approached the task creatively, beginning his essay as one would a television commercial and echoing that tone, complete with channel changes and other fragmenting interruptions. He began his essay this way: "Hi there! Television has not affected my mind ..." and then proceeded to show, in a sophisticated demonstration of self-satire, how television had indeed fragmented his mind. Most of the evaluators participating in the assessment were impressed at the level of thinking, awareness, and creativity that went into the student's writing sample. However, one of the evaluators, a prominent state politician, was not at all impressed. This evaluator read the student's essay, shook his head, and tsk-tsked. "That's too bad," he said, putting the essay down. The same essay that earned accolades from most of the English teachers and faculty evaluators was, in his mind, a disaster. He had found the essay's unconventional approach and sentence fragments distracting and inappropriate.

This vignette illustrates what might be called a "paradigm clash." In her book on the history and theories of writing assessment, Patricia Lynne (referring to the work of Thomas Kuhn) defines a paradigm as a concept "indicating a set of common models, values, commitments, and symbolic exchanges that unite disciplinary communities" (Lynne, 2004, p. 5). Paradigms are important because they allow disciplinary communities to hold a common set of assumptions, a shared "knowledge base." A paradigm clash, therefore, occurs when two communities operating under different paradigms meet on the terrain of ideas, definitions, or approaches.
The vignette exemplifies a paradigm clash in that the politician held certain assumptions about what "good writing" looks like (formal in tone, grammatically clean, organized in a linear fashion), while the educators valued writing in terms of unique expression of thought, risk-taking, the ability to mock conventions appropriately, and an awareness of multiple forms of rhetorical strategies. The paradigms each group operated under reflected different sets of assumptions and values. Such clashes often have very real-world consequences: this mixed group had to reach some sort of consensus about