Introduction

Rubrics are a well-recognized instrument for supporting authentic assessment of student achievement (Andrade, 1996; Andrade, 2000). A rubric can be defined as a scoring tool that provides a set of criteria for assessing a piece of work, together with gradations of quality or performance for each criterion. Rubrics can improve student learning outcomes by making teachers' expectations explicit and by showing students how to meet them (i.e., by presenting the level of quality expected of their work). Rubrics also help students develop a critical sense of their own work: the criteria make them more thoughtful judges of the quality of both their own work and that of others. Authentic assessment emphasizes the application and use of knowledge to solve complex tasks involving contextualized problems, and rubrics help students understand the criteria for judgment from the beginning of their instruction (Montgomery, 2002).

As tasks become more complex, the structure and comprehensibility of the rubric tend to degrade. This problem is especially pronounced with analytical rubrics (rubrics that break the evaluation down into simple components that are scored separately and then combined into a global evaluation). Quality criteria become difficult for both teachers and students to use when they are too abstract. A typical remedy is to disaggregate complex criteria into a series of more understandable criteria of lower conceptual difficulty. The problem is that replacing a compact list of abstract or dense criteria with a long list of simpler ones can make the rubric impractical and time-consuming to use. This situation is accentuated when each criterion is weighted to reflect its relative importance. Current Learning Management Systems (LMS) do not address this problem, so analytical rubrics are often avoided in complex situations.

In this context, the concept of an adaptable rubric emerges as a powerful mechanism for supporting different learning styles and rhythms. We define adaptable rubrics as rubrics that provide multiple levels of detail, which can be expanded on demand. The level of detail can be adjusted and adapted to a specific teaching scenario and/or the students' level of understanding of quality concepts. If a student finds a particular criterion or its performance levels too difficult to understand, he/she can expand an additional level of detail (if provided) for that criterion, which divides it into several sub-criteria at a lower abstraction level.

In this paper, we present a new computer-assisted rubric platform specifically designed to support adaptable rubrics. The main features of this platform are that it:

* provides feedback (showing detailed scores and levels of performance, if requested);
* supports different learning rhythms and styles (different levels of detail are expanded on demand, at the student's choice);
* collects metadata that could be used to support adaptive behavior in the future;
* automates the management of different weights among scoring criteria during rubric creation (a sketch of this weighted structure is given after this list).

The platform is generic, as it can manage any type of rubric. The implementation strategy, validation, and lessons learned while developing and testing the platform are presented in the paper.
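To make the idea concrete, the following minimal Python sketch models an adaptable rubric as a tree of weighted criteria whose sub-criteria can be expanded on demand, with scores aggregated bottom-up as weighted averages. All names here (`Criterion`, `expand`, `score`) are hypothetical illustrations of the concept, not the platform's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Criterion:
    """One node of an adaptable rubric: a criterion with a weight,
    performance-level descriptors, and optional on-demand sub-criteria."""
    name: str
    weight: float                       # relative importance within its parent
    levels: List[str] = field(default_factory=list)   # performance-level descriptors
    sub_criteria: List["Criterion"] = field(default_factory=list)
    expanded: bool = False              # whether this level of detail is shown
    raw_score: Optional[float] = None   # score assigned directly at this level

    def expand(self) -> None:
        """Reveal the next level of detail on demand (if sub-criteria exist)."""
        if self.sub_criteria:
            self.expanded = True

    def score(self) -> float:
        """Return the direct score, or the weighted average of sub-criteria scores."""
        if self.sub_criteria:
            total_weight = sum(c.weight for c in self.sub_criteria)
            return sum(c.weight * c.score() for c in self.sub_criteria) / total_weight
        return self.raw_score if self.raw_score is not None else 0.0

# Example: an abstract criterion that expands into two simpler sub-criteria.
modeling = Criterion("Model quality", weight=1.0, sub_criteria=[
    Criterion("Geometric accuracy", weight=0.6, raw_score=3.0),
    Criterion("Feature organization", weight=0.4, raw_score=2.0),
])
modeling.expand()        # student requests the extra level of detail
print(modeling.score())  # weighted aggregate: 0.6*3.0 + 0.4*2.0 = 2.6
```

In this representation, disaggregating an abstract criterion never changes the global score computation: the weighted average over sub-criteria simply replaces the direct score at that node.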
As an example of application to a highly complex assessment problem, the developed system was used in a Mechanical Computer Aided Design (MCAD) training scenario at the undergraduate and graduate college level. The paper is structured as follows: the second section describes the state of the art in platforms for scoring rubrics and confirms the lack of support for adaptable rubrics. The third section describes the architecture of the proposed system, including design specifications and the most relevant implementation details. …
References

[1] Manuel Contero, et al. Implementation of Adaptable Rubrics for CAD Model Quality Formative Assessment. 2016.
[2] Craig A. Mertler. Designing Scoring Rubrics for Your Classroom. 2001.
[3] J. Michael Spector, et al. Designing on-demand education for simultaneous development of domain-specific and self-directed learning skills. J. Comput. Assist. Learn., 2015.
[4] D. Atkinson, et al. Improving assessment processes in Higher Education: Student and teacher perceptions of the effectiveness of a rubric embedded in a LMS. 2013.
[5] Anders Jonsson, et al. The Use of Scoring Rubrics for Formative Assessment Purposes Revisited: A Review. 2013.
[6] W. Popham. What's Wrong--and What's Right--with Rubrics. 1997.
[7] H. Andrade, et al. A review of rubric use in higher education. 2010.
[8] Davinia Hernández Leo, et al. Enhancing Computer Assisted Assessment Using Rubrics in a QTI Editor. 2009 Ninth IEEE International Conference on Advanced Learning Technologies, 2009.
[9] John A. Kaliski, et al. Improving the Efficiency and Effectiveness of Grading Through the Use of Computer-Assisted Grading Rubrics. 2008.
[10] Anders Jonsson, et al. The use of scoring rubrics: Reliability, validity, and educational consequences. 2007.
[11] P. Orsmond, et al. The Importance of Marking Criteria in the Use of Peer Assessment. 1996.
[12] H. Andrade. Using Rubrics To Promote Thinking and Learning. 2000.
[13] Robbert Smit, et al. Assuring the quality of standards-oriented classroom assessment with rubrics for complex competencies. 2014.
[14] Ville Karavirta, et al. Rubyric: an online assessment tool for effortless authoring of personalized feedback. ITiCSE, 2009.
[15] David T. Goomas, et al. Computer-Assisted Rubric Evaluation: Enhancing Outcomes and Assessment Quality. 2014.
[16] Libby Gerard, et al. Automated Scoring of Constructed-Response Science Items: Prospects and Obstacles. Educational Measurement: Issues and Practice, 2014.
[17] J. Russell Manson, et al. Diagnostics and rubrics for assessing learning across the computational science curriculum. J. Comput. Sci., 2010.
[18] Angela R. Bielefeldt, et al. Sustainable Engineering Assessment Using Rubric-Based Analysis of Challenge Question Responses. 2015.
[19] M. Simon, et al. What's still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. 2004.
[20] Kathleen Montgomery, et al. Authentic Tasks and Rubrics: Going Beyond Traditional Assessments in College Teaching. 2002.
[21] Manuel Contero, et al. Approach for developing coordinated rubrics to convey quality criteria in MCAD training. Comput. Aided Des., 2015.
[22] David Engel, et al. Educational Assessment Of Students. 2016.
[23] Anastasios A. Economides, et al. Evaluation of Computer Adaptive Testing Systems. Int. J. Web Based Learn. Teach. Technol., 2007.
[24] H. Andrade, et al. Student perspectives on rubric-referenced assessment. 2005.
[25] Anastasios A. Economides, et al. Evaluation parameters for computer-adaptive testing. Br. J. Educ. Technol., 2006.
[26] Manuel Cebrián De La Serna, et al. Federated eRubric Service to Facilitate Self-Regulated Learning in the European University Model. 2014.
[27] Barbara M. Moskal, et al. Scoring Rubrics: What, When and How? 2000.