Polyglot Prompt: Multilingual Multitask Prompt Training

This paper aims at a potential architectural breakthrough for multilingual learning and asks: can different tasks from different languages be modeled in a monolithic framework, without any task- or language-specific module? The benefit of achieving this is not only that systems trained in low-resource scenarios can be assisted by other languages and tasks, but also that it opens new doors for future multilingual research. We approach this goal by developing a learning framework, Polyglot Prompt, in which prompting methods are introduced to learn a unified semantic space for different languages and tasks after proper multilingual prompt engineering. Experimentally, we perform a comprehensive evaluation on 6 tasks (topic classification, sentiment classification, named entity recognition, question answering, natural language inference, and summarization), 24 datasets, and 49 languages, which shows the efficacy of multilingual multitask prompt training and suggests several interesting observations, e.g., English prompts are polyglots: directly applying them to task samples in other languages can yield a larger improvement. We also present an interpretable multilingual evaluation methodology and show how the proposed framework, multilingual multitask prompt training, works. We release all datasets prompted in the best setting and will release our code soon.
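The "English prompts are polyglots" observation can be made concrete with a small sketch: an English prompt template is filled with a task sample in another language, yielding the kind of text-to-text input a multilingual encoder-decoder model (e.g., mT5) would consume. The template wording and function names below are hypothetical illustrations, not the paper's actual prompts.

```python
# Hypothetical sketch: applying English prompt templates to task samples
# in other languages, producing text-to-text inputs for a multilingual model.

TEMPLATES = {
    "nli": 'Premise: "{premise}" Hypothesis: "{hypothesis}" '
           "Does the premise entail the hypothesis? Answer yes, no, or maybe.",
    "sentiment": 'Review: "{text}" Is this review positive or negative?',
}

def build_prompt(task: str, **fields: str) -> str:
    """Fill the English template for `task` with a (possibly non-English) sample."""
    return TEMPLATES[task].format(**fields)

# A Spanish NLI example rendered with an English prompt:
example = build_prompt(
    "nli",
    premise="El gato duerme en el sofá.",
    hypothesis="El gato está despierto.",
)
print(example)
```

Because every task and language is cast into the same prompted text-to-text format, a single model can be trained jointly on all of them with no task- or language-specific modules.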
