ChatGPT outperforms crowd workers for text-annotation tasks

Many NLP applications require manual text annotation for a variety of tasks, notably to train classifiers or to evaluate the performance of unsupervised models. Depending on the size and complexity of the task, annotation may be conducted by crowd workers on platforms such as MTurk or by trained annotators, such as research assistants. Using four samples of tweets and news articles (n = 6,183), we show that ChatGPT outperforms crowd workers for several annotation tasks, including relevance, stance, topic, and frame detection. Across the four datasets, the zero-shot accuracy of ChatGPT exceeds that of crowd workers by about 25 percentage points on average, while ChatGPT's intercoder agreement exceeds that of both crowd workers and trained annotators for all tasks. Moreover, the per-annotation cost of ChatGPT is less than $0.003, about thirty times cheaper than MTurk. These results demonstrate the potential of large language models to drastically increase the efficiency of text classification.
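The zero-shot annotation setup described above can be illustrated in a few lines of code. The sketch below is a minimal, hypothetical example using the OpenAI chat completions API: the model choice (gpt-3.5-turbo), the prompt wording, the label set, and the helper name annotate_stance are assumptions for illustration, not the paper's exact prompts or pipeline.

```python
# Hypothetical sketch of zero-shot stance annotation with the OpenAI chat API.
# Model, prompt, and labels are illustrative assumptions, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def annotate_stance(tweet: str) -> str:
    """Request a single zero-shot stance label for one tweet."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        temperature=0,          # deterministic output for annotation consistency
        messages=[
            {
                "role": "system",
                "content": "You are a text annotator. Answer with exactly one label.",
            },
            {
                "role": "user",
                "content": (
                    "Classify the stance of the following tweet toward content "
                    "moderation as POSITIVE, NEGATIVE, or NEUTRAL.\n\n"
                    f"Tweet: {tweet}"
                ),
            },
        ],
    )
    return response.choices[0].message.content.strip()


print(annotate_stance("Platforms should be held accountable for harmful content."))
```

In a setup like this, accuracy would be computed against gold labels from trained annotators, and intercoder agreement estimated by repeating each annotation request and comparing the resulting labels.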
