Learning Text Representations for Finding Similar Exercises
Mathematical intelligent tutoring systems bring great convenience to both teachers and students. A basic task in such systems is finding similar exercises, i.e., exercises that assess the same skills or knowledge. Inspired by previous work, we propose a new model, Siamese-based Bidirectional Encoder Representations from Transformers (SBERT). After training on our dataset of Chinese math exercises, the AUC (Area Under the Curve) of the SBERT model reaches 0.90, higher than that of existing models. Visualization analysis also shows that our model learns better text representations of exercises than previous work.
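To make the architecture concrete, below is a minimal sketch of a Siamese BERT similarity scorer in PyTorch. The abstract does not specify the pooling, scoring, or pretrained checkpoint; the choices here (a shared `bert-base-chinese` encoder, masked mean pooling, and cosine similarity) are illustrative assumptions, not the paper's exact design.

```python
# Sketch only: pooling, scoring, and checkpoint are assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class SiameseBERT(nn.Module):
    def __init__(self, model_name="bert-base-chinese"):
        super().__init__()
        # One encoder shared by both branches (Siamese weight tying).
        self.encoder = BertModel.from_pretrained(model_name)

    def encode(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool token embeddings, ignoring padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        summed = (out.last_hidden_state * mask).sum(dim=1)
        return summed / mask.sum(dim=1).clamp(min=1e-9)

    def forward(self, ids_a, mask_a, ids_b, mask_b):
        emb_a = self.encode(ids_a, mask_a)
        emb_b = self.encode(ids_b, mask_b)
        # Cosine similarity in [-1, 1] as the exercise-similarity score.
        return nn.functional.cosine_similarity(emb_a, emb_b)


tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = SiameseBERT()
# Two hypothetical Chinese math exercises on minimizing quadratics.
a = tokenizer("求二次函数 y = x^2 + 2x 的最小值。", return_tensors="pt")
b = tokenizer("求函数 y = x^2 - 4x + 3 的最小值。", return_tensors="pt")
with torch.no_grad():
    score = model(a["input_ids"], a["attention_mask"],
                  b["input_ids"], b["attention_mask"])
print(score.item())
```

In practice such a model would be trained on labeled similar/dissimilar exercise pairs (e.g., with a contrastive or binary cross-entropy loss) and evaluated by AUC over the pair labels, consistent with the 0.90 AUC reported above.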
[1] J. Mueller et al. Siamese Recurrent Architectures for Learning Sentence Similarity. AAAI, 2016.
[2] E. Chen et al. Finding Similar Exercises in Online Education Systems. KDD, 2018.
[3] Y. LeCun et al. Learning a Similarity Metric Discriminatively, with Application to Face Verification. CVPR, 2005.
[4] M.-W. Chang et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL, 2019.