Case Based Modeling of Answer Points to Expedite Semi-Automated Evaluation of Subjective Papers

Research has been carried out in past and recent years on the automation of examination systems, but most of it targets on-line examinations with either choice-based questions or, at best, very short descriptive answers. The primary goal of this paper is to propose a framework in which textual answer scripts for subjective questions are supplemented with model answer points to facilitate a semi-automated evaluation procedure. The proposed framework also accommodates reward and penalty schemes. Under the reward scheme, additional valid points provided by examinees earn them bonus marks; by incrementally upgrading the question case-base with these extra answer points, the examiner builds fairness into the checking procedure automatically. Under the penalty scheme, unfair means adopted among neighboring examinees can be detected by maintaining seat plans in the form of a neighborhood graph; the degree of penalization is then ascertained impartially by computing the degree of similarity between adjoining answer scripts. Both the main question bank and the model answer points are maintained using Case-Based Reasoning strategies.
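To make the two schemes concrete, the sketch below models them in Python under loose assumptions: answer points are represented as sets of normalized phrases, matching is exact set intersection, and script similarity is measured with the Jaccard coefficient. All names (QuestionCase, flag_unfair_means), the similarity measure, and the 0.8 threshold are illustrative choices, not the paper's actual algorithms.

```python
# Hypothetical sketch only: the set representation, Jaccard measure, and
# threshold are illustrative assumptions, not the paper's actual design.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets of normalized answer points."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

class QuestionCase:
    """One case in the question case-base: a question plus its model answer points."""

    def __init__(self, question_id: str, model_points: set, marks_per_point: float):
        self.question_id = question_id
        self.model_points = set(model_points)
        self.marks_per_point = marks_per_point

    def evaluate(self, answer_points: set) -> tuple[float, set]:
        """Score points matching the model; return (marks, unmatched extra points)."""
        matched = answer_points & self.model_points
        extras = answer_points - self.model_points
        return len(matched) * self.marks_per_point, extras

    def reward_and_retain(self, validated_extras: set) -> float:
        """Reward scheme: examiner-validated extra points earn bonus marks and
        are retained, incrementally upgrading the case-base."""
        self.model_points |= validated_extras
        return len(validated_extras) * self.marks_per_point

def flag_unfair_means(scripts: dict, neighborhood_edges: list, threshold: float = 0.8) -> list:
    """Penalty scheme: compare answer scripts only along the seat-plan
    neighborhood graph; flag pairs whose similarity crosses the threshold."""
    flagged = []
    for u, v in neighborhood_edges:
        sim = jaccard(scripts[u], scripts[v])
        if sim >= threshold:
            flagged.append((u, v, sim))
    return flagged

if __name__ == "__main__":
    case = QuestionCase("Q1", {"cbr retrieves similar past cases"}, 2.5)
    marks, extras = case.evaluate({"cbr retrieves similar past cases",
                                   "cbr avoids repeated knowledge engineering"})
    marks += case.reward_and_retain(extras)  # examiner validates the extra point
    scripts = {"s1": {"p1", "p2", "p3"}, "s2": {"p1", "p2", "p3"}, "s3": {"p4"}}
    print(marks, flag_unfair_means(scripts, [("s1", "s2"), ("s2", "s3")]))
```

Restricting comparisons to neighborhood-graph edges, rather than all script pairs, keeps the penalty check proportional to the number of adjacent seats instead of quadratic in the number of examinees.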