Duluth at SemEval-2020 Task 7: Using Surprise as a Key to Unlock Humorous Headlines

We apply pretrained transformer-based language models to SemEval-2020 Task 7: Assessing the Funniness of Edited News Headlines. Inspired by the incongruity theory of humor, we take a contrastive approach that captures the surprise introduced by the edit. In the official evaluation, our system achieves an RMSE of 0.531 on Subtask 1, ranking 11th among 49 submissions, and an accuracy of 0.632 on Subtask 2, ranking 9th among 32 submissions.
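Below is a minimal sketch, assuming PyTorch and the HuggingFace transformers library, of one way such a contrastive (Siamese-style) scorer could be wired up: the original and edited headlines are encoded with a shared pretrained BERT model, and a small regression head predicts a funniness score from the two sentence embeddings and their difference. The class name, layer sizes, and example headlines are illustrative assumptions, not the exact architecture of the submitted system.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ContrastiveHumorScorer(nn.Module):
    """Siamese-style regressor: a shared encoder over the original and edited headlines."""

    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Regression head over [original; edited; difference] sentence vectors.
        self.regressor = nn.Sequential(
            nn.Linear(hidden * 3, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def encode(self, batch):
        # Use the [CLS] token representation as a fixed-size sentence embedding.
        out = self.encoder(input_ids=batch["input_ids"],
                           attention_mask=batch["attention_mask"])
        return out.last_hidden_state[:, 0]

    def forward(self, orig, edit):
        h_orig = self.encode(orig)
        h_edit = self.encode(edit)
        # The "surprise" signal lives in the contrast between the two headlines.
        contrast = torch.cat([h_orig, h_edit, h_edit - h_orig], dim=-1)
        return self.regressor(contrast).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = ContrastiveHumorScorer()

# Hypothetical headline pair for illustration only.
orig = tokenizer("President vows to cut taxes", return_tensors="pt")
edit = tokenizer("President vows to cut hair", return_tensors="pt")

with torch.no_grad():
    score = model(orig, edit)  # would be trained with MSE loss against the mean funniness grade
print(float(score))
```

For Subtask 1 such a model could be trained directly on the task's 0-3 funniness grades with a mean-squared-error loss; for Subtask 2 the same scorer could be applied to both edited headlines of a pair and the higher-scoring one predicted as funnier.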
