A Maximum-Likelihood-Based Offline Estimation of Student Capabilities and Question Difficulties

In recent years, the Computerized Adaptive Test (CAT) has gained popularity over conventional exams for evaluating student capabilities with the desired accuracy. However, the key limitation of CAT is that it requires a large pool of pre-calibrated questions. In the absence of such a calibrated question bank, an offline exam with uncalibrated questions has to be conducted. Even today, many important exams are offline, e.g., the Graduate Aptitude Test in Engineering (GATE) and the Joint Entrance Examination (JEE) conducted in India. In offline exams, normalized marks are typically used as an estimate of the students' capabilities. In this work, our key contribution is to verify whether the marks obtained are indeed a good measure of students' capabilities. To this end, we propose an evaluation methodology that mimics the evaluation process of CAT. In our approach, based on the marks scored by students on the various questions, we iteratively estimate question parameters, such as difficulty and discrimination, and student parameters, such as capability. Our algorithm uses alternating maximization to maximize the log-likelihood of the observed marks with respect to the questions' and students' parameters. We prove that this alternating maximization process converges. We compare our approach with marks-based evaluation using simulations, and the results show that our approach outperforms marks-based evaluation.
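To make the alternating-maximization idea concrete, the sketch below implements it for a two-parameter logistic (2PL) item-response model, which is consistent with the difficulty and discrimination parameters named in the abstract but is not necessarily the paper's exact model. The gradient-ascent inner loops, learning rate, and synthetic-data usage are illustrative assumptions of this sketch, not the authors' algorithm.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_likelihood(Y, theta, a, b):
    """Log-likelihood of binary marks Y under an assumed 2PL model."""
    P = np.clip(sigmoid(a * (theta[:, None] - b)), 1e-9, 1 - 1e-9)
    return np.sum(Y * np.log(P) + (1 - Y) * np.log(1 - P))

def alternating_mle(Y, n_outer=50, n_inner=100, lr=0.01):
    """Alternately ascend the log-likelihood in student and question parameters.

    Y is a (num_students, num_questions) 0/1 marks matrix.
    """
    n_students, n_questions = Y.shape
    theta = np.zeros(n_students)   # student capabilities
    a = np.ones(n_questions)       # question discriminations
    b = np.zeros(n_questions)      # question difficulties
    for _ in range(n_outer):
        # Step 1: hold question parameters fixed, ascend in the capabilities.
        for _ in range(n_inner):
            P = sigmoid(a * (theta[:, None] - b))
            theta += lr * ((Y - P) * a).sum(axis=1)
        # Step 2: hold capabilities fixed, ascend in (discrimination, difficulty).
        for _ in range(n_inner):
            P = sigmoid(a * (theta[:, None] - b))
            R = Y - P
            grad_a = (R * (theta[:, None] - b)).sum(axis=0)
            grad_b = -(R * a).sum(axis=0)
            a += lr * grad_a
            b += lr * grad_b
        # A common shift of theta and b leaves theta - b, hence the likelihood,
        # unchanged; center the capabilities to pin down this indeterminacy.
        m = theta.mean()
        theta -= m
        b -= m
    return theta, a, b

# Usage on synthetic data: sample true parameters, simulate marks, re-estimate.
rng = np.random.default_rng(0)
true_theta = rng.normal(size=200)
true_a = rng.uniform(0.5, 2.0, size=30)
true_b = rng.normal(size=30)
P_true = sigmoid(true_a * (true_theta[:, None] - true_b))
Y = (rng.random((200, 30)) < P_true).astype(float)
theta_hat, a_hat, b_hat = alternating_mle(Y)
print("capability correlation:", np.corrcoef(true_theta, theta_hat)[0, 1])
```

Each step of the outer loop maximizes the same objective, so the log-likelihood is non-decreasing across iterations, which is the monotonicity property underlying the convergence claim in the abstract.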