An Embarrassingly Simple Model for Dialogue Relation Extraction
E. Chng, Aixin Sun, Hao Zhang, Fuzhao Xue