Multi-task BERT for problem difficulty prediction

Existing problem difficulty prediction methods rely either on professionals' manual estimates of problem difficulty or on mining relevant features from large volumes of user interaction records. The recently proposed BERT model, pre-trained on a large unlabeled corpus, has achieved impressive results on a variety of natural language processing tasks. To reduce the amount of feature information required and improve prediction accuracy, a problem difficulty prediction method based on multi-task BERT (MTBERT) is proposed. Experiments were carried out on real-world datasets from LeetCode and ZOJ, and the method was compared against several neural network baselines to verify its effectiveness.
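The abstract does not specify MTBERT's architecture, but a multi-task setup of this kind typically shares one encoder across task-specific heads trained with a joint loss. The sketch below is a hypothetical illustration only: a toy Transformer encoder stands in for pre-trained BERT, the two heads (difficulty-level classification and a continuous difficulty score) and the loss weight are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class MultiTaskSketch(nn.Module):
    """Illustrative multi-task model: a shared encoder feeds two
    task-specific heads. A small Transformer encoder stands in for
    pre-trained BERT; all sizes are hypothetical."""

    def __init__(self, vocab_size=30522, hidden=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Head 1: difficulty level classification (e.g. easy/medium/hard)
        self.cls_head = nn.Linear(hidden, num_classes)
        # Head 2: auxiliary regression on a continuous difficulty score
        self.reg_head = nn.Linear(hidden, 1)

    def forward(self, ids):
        h = self.encoder(self.embed(ids))   # (batch, seq_len, hidden)
        pooled = h[:, 0]                    # first-token pooling, BERT-style
        return self.cls_head(pooled), self.reg_head(pooled).squeeze(-1)

model = MultiTaskSketch()
ids = torch.randint(0, 30522, (2, 16))      # two toy problem statements
logits, score = model(ids)
labels = torch.tensor([0, 2])
target = torch.tensor([0.2, 0.9])
# Joint objective: weighted sum of per-task losses (weight 0.5 is assumed)
loss = nn.CrossEntropyLoss()(logits, labels) + 0.5 * nn.MSELoss()(score, target)
```

In practice the toy encoder would be replaced by a pre-trained BERT encoder fine-tuned jointly with both heads, so the auxiliary task regularizes the shared representation.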