SSN_MLRG1@LT-EDI-ACL2022: Multi-Class Classification using BERT models for Detecting Depression Signs from Social Media Text

DepSign-LT-EDI@ACL-2022 aims to ascertain the signs of depression of a person from their messages and posts on social media, where people share their feelings and emotions. Given social media postings in English, the system should classify the signs of depression into three labels, namely "not depressed", "moderately depressed", and "severely depressed". To achieve this objective, we have adopted a fine-tuned BERT model. This solution from team SSN_MLRG1 achieves 58.5% accuracy on the DepSign-LT-EDI@ACL-2022 test set.
