Rubric Development and Inter-Rater Reliability Issues in Assessing Learning Outcomes

This paper describes the development of rubrics that help evaluate student performance and relate that performance directly to the program's educational objectives. Issues in accounting for different constituencies, selecting items for evaluation, and minimizing the time required for data analysis are discussed. Approaches to testing the rubrics for consistency across different faculty raters are presented, along with a specific example of how inconsistencies were addressed. Finally, the difference between course-level and programmatic assessment is considered, together with the applicability of rubric development to each.
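
The abstract does not specify which agreement statistic underlies the consistency testing it mentions. As a minimal sketch, Cohen's kappa is one standard measure of agreement between two raters scoring the same items on a nominal scale, and it corrects raw percent agreement for agreement expected by chance. The function name and rubric scores below are hypothetical and purely illustrative.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters scoring the same items on a nominal scale."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        # Observed agreement: fraction of items receiving the same score from both raters.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: expected overlap given each rater's marginal score frequencies.
        freq_a = Counter(rater_a)
        freq_b = Counter(rater_b)
        p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
                  for c in freq_a.keys() | freq_b.keys())
        # Kappa of 1 means perfect agreement; 0 means agreement no better than chance.
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical scores (1-4 rubric scale) from two faculty raters on ten artifacts.
    scores_a = [3, 4, 2, 3, 1, 4, 3, 2, 4, 3]
    scores_b = [3, 4, 2, 2, 1, 4, 3, 3, 4, 3]
    print(f"kappa = {cohens_kappa(scores_a, scores_b):.2f}")  # kappa = 0.71

In a rubric-testing context of the kind the paper describes, a low kappa on a particular rubric item would flag that item for clarification or rater recalibration before it is used for programmatic assessment.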