Gradient-Boosted Decision Tree for Listwise Context Model in Multimodal Review Helpfulness Prediction

Multimodal Review Helpfulness Prediction (MRHP) aims to rank product reviews based on predicted helpfulness scores and has been widely applied in e-commerce by presenting customers with useful reviews. Previous studies commonly employ fully-connected neural networks (FCNNs) as the final score predictor and a pairwise loss as the training objective. However, FCNNs have been shown to split review features inefficiently, making it difficult for the model to clearly differentiate helpful from unhelpful reviews. Furthermore, the pairwise objective, which operates on review pairs, does not fully capture the MRHP goal of producing a ranking over the entire review list, and may induce poor generalization at test time. To address these issues, we propose a listwise attention network that explicitly captures the MRHP ranking context, together with a listwise optimization objective that enhances model generalization. We further propose a gradient-boosted decision tree as the score predictor to effectively partition product review representations. Extensive experiments demonstrate that our method achieves state-of-the-art results and improved generalization performance on two large-scale MRHP benchmark datasets.
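
To make the pairwise-versus-listwise distinction concrete, the minimal PyTorch sketch below contrasts a generic pairwise hinge loss, which scores one review pair at a time, with a ListNet-style listwise loss computed over one product's entire review list. This is an illustration under assumed inputs (per-product tensors of predicted scores and helpfulness labels), not the paper's actual architecture or training objective; the function names and the toy values are hypothetical.

    import torch
    import torch.nn.functional as F

    def pairwise_hinge_loss(scores, labels, margin=1.0):
        # Pairwise objective: penalize each (more helpful, less helpful) pair
        # whose predicted scores are not separated by at least `margin`.
        # scores, labels: 1-D tensors over a single product's review list.
        diff_scores = scores.unsqueeze(1) - scores.unsqueeze(0)   # s_i - s_j
        diff_labels = labels.unsqueeze(1) - labels.unsqueeze(0)   # y_i - y_j
        pair_mask = (diff_labels > 0).float()                     # review i is more helpful than j
        losses = F.relu(margin - diff_scores) * pair_mask
        return losses.sum() / pair_mask.sum().clamp(min=1.0)

    def listwise_softmax_loss(scores, labels):
        # Listwise objective (ListNet-style): match the softmax distribution of
        # predicted scores to the softmax distribution of helpfulness labels
        # over the entire review list of one product.
        return -(F.softmax(labels, dim=-1) * F.log_softmax(scores, dim=-1)).sum()

    # Toy usage: five reviews of one product with helpfulness labels 0-4.
    # In the paper's setting the scores would come from the score predictor
    # over multimodal review representations; here they are arbitrary values.
    scores = torch.tensor([0.3, 1.2, -0.5, 2.0, 0.1], requires_grad=True)
    labels = torch.tensor([1.0, 3.0, 0.0, 4.0, 2.0])
    print(pairwise_hinge_loss(scores, labels), listwise_softmax_loss(scores, labels))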
