The problem of defining the nature and variety of academic tasks is growing in importance as more complex assessment tasks are introduced in many educational contexts. In this study, multidimensional scaling and cluster analysis were used to describe and categorize tasks from six graduate disciplines: academic psychology, applied psychology, English literature, journalism, physics, and electrical engineering. A sample of task descriptions was constructed through interviews with graduate students from these disciplines. A rating instrument was designed to describe task goals and to evaluate whether the tasks were well or ill structured with respect to various aspects of problem definition and problem solution. Graduate faculty used the rating instrument to characterize a sample of tasks from their discipline.
The scales were found to be reasonably reliable and useful in identifying and describing task clusters and in showing how such clusters varied both within and across disciplines. A cluster of short-term problems posed by someone other than the student was found in every field, although the other characteristics of this cluster varied with discipline. For example, the short-term tasks in engineering and physics were well structured: they required the application of established principles and had objective standards for judging performance. In contrast, a cluster of short-term tasks in English literature was very ill structured because different conceptual approaches could be relevant, there were alternative methods for accomplishing the tasks, many possible solutions existed, and the student had to define an issue or question to consider. In all disciplines except physics, a cluster of complex tasks emerged that was characterized as having multiple objectives that needed to be satisfied. The cluster of complex tasks found in physics was not described clearly by the scales. Problem-finding was an important task characteristic in the social sciences and humanities but not in the physical sciences. The relevance of multidimensional scaling and clustering to test design is discussed.
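The kind of agglomerative cluster analysis described above can be sketched in a few lines. The task names, rating scales, and scores below are invented for illustration (the study's actual instrument and data are not reproduced here), and average linkage stands in for whichever linkage method the authors used; the point is only to show how tasks with similar rating profiles merge into clusters.

```python
import math

# Hypothetical ratings: each task scored 1-5 on three illustrative scales
# (degree of structure, need for problem-finding, number of possible solutions).
# All task names and values are invented for illustration only.
tasks = {
    "physics problem set":  (5, 1, 1),
    "circuit design lab":   (4, 1, 2),
    "literature essay":     (1, 5, 5),
    "journalism feature":   (2, 4, 4),
}

def dist(a, b):
    """Euclidean distance between two rating profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def average_link(c1, c2):
    """Average-linkage distance between two clusters of task names."""
    pairs = [(a, b) for a in c1 for b in c2]
    return sum(dist(tasks[a], tasks[b]) for a, b in pairs) / len(pairs)

def cluster(names, k):
    """Agglomerative clustering: repeatedly merge the two closest
    clusters until only k clusters remain."""
    clusters = [[n] for n in names]
    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: average_link(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

groups = cluster(list(tasks), k=2)
```

With these toy ratings, the two well-structured science tasks merge into one cluster and the two ill-structured humanities tasks into another, mirroring the within- and across-discipline contrasts the study reports.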