Priberam Labs at the NTCIR-15 SHINRA2020-ML: Classification Task

Wikipedia is an online encyclopedia available in 285 languages. It constitutes an extremely relevant Knowledge Base (KB) that could be leveraged by automatic systems for several purposes. However, the structure and organisation of this information are not amenable to automatic parsing and understanding, and it is therefore necessary to structure this knowledge. The goal of the SHINRA2020-ML task is to leverage Wikipedia pages in order to categorise their corresponding entities across 268 hierarchical categories belonging to the Extended Named Entity (ENE) ontology. In this work, we propose three distinct models based on the contextualised embeddings produced by Multilingual BERT. We explore the performance of a linear layer, with and without explicit use of the ontology's hierarchy, and of a Gated Recurrent Units (GRU) layer. We also test several pooling strategies for leveraging BERT's embeddings, as well as label-selection criteria based on the labels' scores. We achieve good performance across a large variety of languages, including languages not seen during fine-tuning (zero-shot languages).
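To make the described setup concrete, the sketch below shows one way a Multilingual BERT encoder can be combined with a pooling strategy and a linear multi-label head that scores all 268 ENE categories, followed by a simple threshold-based selection criterion. This is a minimal PyTorch/HuggingFace illustration under stated assumptions, not the paper's implementation: the pooling options, the 0.5 threshold, and the bert-base-multilingual-cased checkpoint are choices made for the example.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class BertMultiLabelClassifier(nn.Module):
    """Multilingual BERT encoder, a pooled sentence representation, and a
    linear multi-label head producing one sigmoid score per ENE category."""

    def __init__(self, num_labels: int = 268, pooling: str = "cls"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
        self.pooling = pooling
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        hidden = outputs.last_hidden_state            # (batch, seq_len, hidden)
        if self.pooling == "cls":
            pooled = hidden[:, 0]                     # [CLS] token embedding
        else:                                         # mean pooling over non-padding tokens
            mask = attention_mask.unsqueeze(-1).float()
            pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
        return torch.sigmoid(self.classifier(pooled))  # per-label scores in [0, 1]


tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertMultiLabelClassifier(num_labels=268, pooling="cls")

batch = tokenizer(["Lisbon is the capital of Portugal."],
                  padding=True, truncation=True, max_length=512,
                  return_tensors="pt")
scores = model(batch["input_ids"], batch["attention_mask"])

# Illustrative selection criterion: keep every category whose score exceeds a
# fixed threshold (0.5 here is an assumed value, not the one tuned in the paper).
predicted = (scores > 0.5).nonzero(as_tuple=True)[1].tolist()
```

A hierarchy-aware variant could additionally use the ontology's parent-child structure to constrain or inform these per-label scores, and a GRU layer could replace the single linear head over the token embeddings; the sketch above omits both for brevity.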
