UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training
Jingjing Liu | Shuohang Wang | Yu Cheng | Luowei Zhou | Mingyang Zhou | Linjie Li | Zhou Yu
[1] Qun Liu,et al. Sentence-Level Multilingual Multi-modal Embedding for Natural Language Processing , 2017, RANLP.
[2] Orhan Firat,et al. Massively Multilingual Neural Machine Translation , 2019, NAACL.
[3] Radu Soricut,et al. Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning , 2018, ACL.
[4] Khalil Sima'an,et al. Multi30K: Multilingual English-German Image Descriptions , 2016, VL@ACL.
[5] Guillaume Lample,et al. Cross-lingual Language Model Pretraining , 2019, NeurIPS.
[6] Gholamreza Anbarjafari,et al. Doubly Attentive Transformer Machine Translation , 2018, ArXiv.
[7] Nobuyuki Shimizu,et al. Visual Question Answering Dataset for Bilingual Image Understanding: A Study of Cross-Lingual Transfer Using Attention Maps , 2018, COLING.
[8] Desmond Elliott,et al. Imagination Improves Multimodal Translation , 2017, IJCNLP.
[9] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[10] Jianfeng Gao,et al. Unified Vision-Language Pre-Training for Image Captioning and VQA , 2020, AAAI.
[11] Xiaojun Wan,et al. Multimodal Transformer for Multimodal Machine Translation , 2020, ACL.
[12] Taku Kudo,et al. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing , 2018, EMNLP.
[13] Kaiming He,et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks , 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[14] Akikazu Takeuchi,et al. STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset , 2017, ACL.
[15] Desmond Elliott,et al. Findings of the Third Shared Task on Multimodal Machine Translation , 2018, WMT.
[16] Yong Jae Lee,et al. A Visual Attention Grounding Neural Model for Multimodal Machine Translation , 2018, EMNLP.
[17] Xirong Li,et al. COCO-CN for Cross-Lingual Image Tagging, Captioning, and Retrieval , 2018, IEEE Transactions on Multimedia.
[18] Michael S. Bernstein,et al. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations , 2016, International Journal of Computer Vision.
[19] Jindřich Helcl,et al. CUNI System for the WMT18 Multimodal Translation Task , 2018, WMT.
[20] Bryan A. Plummer,et al. Learning to Scale Multilingual Representations for Vision-Language Tasks , 2020, ECCV.
[21] Jonatas Wehrmann,et al. Language-Agnostic Visual-Semantic Embeddings , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[22] Jianlong Fu,et al. Pixel-BERT: Aligning Image Pixels with Text by Deep Multi-Modal Transformers , 2020, ArXiv.
[23] Graham Neubig,et al. XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization , 2020, ICML.
[24] Liwei Wang,et al. Learning Two-Branch Neural Networks for Image-Text Matching Tasks , 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[25] Noah A. Smith,et al. A Simple, Fast, and Effective Reparameterization of IBM Model 2 , 2013, NAACL.
[26] Andrew Zisserman,et al. Visual Grounding in Video for Unsupervised Word Translation , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Furu Wei,et al. VL-BERT: Pre-training of Generic Visual-Linguistic Representations , 2019, ICLR.
[28] Chris Callison-Burch,et al. Learning Translations via Images with a Massively Multilingual Image Dataset , 2018, ACL.
[29] Balaraman Ravindran,et al. Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning , 2015, NAACL.
[30] Yash Goyal,et al. Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[31] Florian Metze,et al. How2: A Large-scale Dataset for Multimodal Language Understanding , 2018, NIPS.
[32] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[33] Marcus Rohrbach,et al. 12-in-1: Multi-Task Vision and Language Representation Learning , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[34] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[35] Stefan Lee,et al. ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks , 2019, NeurIPS.
[36] Peter Young,et al. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions , 2014, TACL.
[37] Alexander Hauptmann,et al. Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting , 2020, ACL.
[38] Louis-Philippe Morency,et al. MOSEAS: A Multimodal Language Dataset for Spanish, Portuguese, German and French , 2020, EMNLP.
[39] Veselin Stoyanov,et al. Unsupervised Cross-lingual Representation Learning at Scale , 2019, ACL.
[40] Frank Keller,et al. Image Pivoting for Learning Multilingual Multimodal Representations , 2017, EMNLP.
[41] Jianfeng Gao,et al. Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks , 2020, ECCV.
[42] V. Rodríguez-Doncel,et al. RDF Representation of Licenses for Language Resources , 2015, LDL@IJCNLP.
[43] Fei-Fei Li,et al. Deep visual-semantic alignments for generating image descriptions , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[44] Xinlei Chen,et al. Microsoft COCO Captions: Data Collection and Evaluation Server , 2015, ArXiv.
[45] Jianfeng Gao,et al. M3P: Learning Universal Representations via Multitask Multilingual Multimodal Pre-training , 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[46] Desmond Elliott,et al. Findings of the Second Shared Task on Multimodal Machine Translation and Multilingual Image Description , 2017, WMT.
[47] Donghyun Kim,et al. MULE: Multimodal Universal Language Embedding , 2020, AAAI.
[48] Xin Wang,et al. VaTeX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[49] Nan Duan,et al. Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training , 2019, AAAI.
[50] Qun Liu,et al. Incorporating Global Visual Features into Attention-based Neural Machine Translation , 2017, EMNLP.
[51] Mohit Bansal,et al. LXMERT: Learning Cross-Modality Encoder Representations from Transformers , 2019, EMNLP.
[52] Zhou Yu,et al. Deep Modular Co-Attention Networks for Visual Question Answering , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).