UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis

Global models are trained to be as generalizable as possible, and user invariance is considered desirable because the models are shared across multitudes of users. As a result, these models are often unable to produce personalized responses for individual users based on their data. In contrast to widely used personalization techniques based on few-shot learning, we propose UserIdentifier, a novel scheme for training a single shared model for all users. Our approach produces personalized responses by adding fixed, non-trainable user identifiers to the input data. We empirically demonstrate that the proposed method outperforms the prefix-tuning-based state-of-the-art approach by up to 13% on a suite of sentiment analysis datasets. We also show that, unlike prior work, this method requires neither additional model parameters nor extra rounds of few-shot fine-tuning.
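The core idea above can be sketched in a few lines: derive a fixed token sequence per user and prepend it to every input before tokenization, so a single shared model can condition on the user without any per-user parameters. This is a minimal illustration, not the paper's implementation; the helper names (`user_identifier`, `personalize`), the identifier length, and the toy token vocabulary are all assumptions.

```python
import hashlib
import random

def user_identifier(user_id: str, length: int = 5) -> str:
    """Derive a fixed, non-trainable identifier string for a user.

    The identifier is a deterministic pseudo-random token sequence seeded
    by the user ID, so the same user always receives the same prefix and
    no parameters are learned for it.
    """
    # Hypothetical identifier-token vocabulary; the paper's actual token
    # choice (e.g. random wordpieces from the model vocabulary) may differ.
    vocab = [f"tok{i}" for i in range(100)]
    seed = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return " ".join(rng.choice(vocab) for _ in range(length))

def personalize(user_id: str, text: str) -> str:
    """Prepend the user's fixed identifier to the raw input text."""
    return f"{user_identifier(user_id)} {text}"
```

Because the identifier is a deterministic function of the user ID, the shared model sees a consistent per-user context at both training and inference time, with no extra fine-tuning step.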
