X-Class: Text Classification with Extremely Weak Supervision

In this paper, we explore text classification under extremely weak supervision, i.e., relying only on the surface text of the class names. This setting is more challenging than seed-driven weak supervision, which allows a few seed words per class. We approach the problem from a representation learning perspective: ideal document representations should make clustering results nearly identical to the desired classification. In particular, the same corpus can be classified along different dimensions (e.g., by topic or by location), so document representations must adapt to the given class names. We propose a novel framework, X-Class, to realize this. Specifically, we first estimate comprehensive class representations by incrementally adding the most similar word to each class until inconsistency appears. Following a tailored mixture of class-attention mechanisms, we obtain each document's representation as a weighted average of contextualized token representations. We then cluster the documents and align them to classes, using the prior that each document is initially assigned to its nearest class. Finally, we pick the most confident documents from each cluster to train a text classifier. Extensive experiments demonstrate that X-Class can rival and even outperform seed-driven weakly supervised methods on 7 benchmark datasets.
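To make the pipeline described above concrete, the following is a minimal Python sketch of its main steps: incremental class-representation estimation, class-attention-weighted document representations, clustering with a class-based prior, and confidence-based selection of pseudo-labeled documents. It assumes that static word vectors (`word_vecs`, a word-to-vector mapping) and per-document contextualized token vectors (e.g., from BERT) are already computed; these names, the consistency threshold, and the use of scikit-learn's GaussianMixture are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.mixture import GaussianMixture


def class_representation(class_word, word_vecs, max_words=100):
    """Incrementally add the most similar word to a class and average;
    stop when the running representation becomes inconsistent (here,
    approximated by a drift threshold on the averaged vector)."""
    members = [class_word]
    rep = word_vecs[class_word].copy()
    for _ in range(max_words - 1):
        sims = {w: cosine_similarity(rep[None], v[None])[0, 0]
                for w, v in word_vecs.items() if w not in members}
        best = max(sims, key=sims.get)
        new_rep = np.mean([word_vecs[w] for w in members + [best]], axis=0)
        if cosine_similarity(rep[None], new_rep[None])[0, 0] < 0.99:  # hypothetical threshold
            break
        members.append(best)
        rep = new_rep
    return rep


def document_representation(token_vecs, class_reps):
    """Weighted average of contextualized token vectors; each token is
    weighted by its maximum similarity to any class representation
    (one simple instance of a class-attention mechanism)."""
    sims = cosine_similarity(token_vecs, class_reps)   # (tokens, classes)
    weights = np.maximum(sims.max(axis=1), 0)
    weights = weights / (weights.sum() + 1e-12)
    return weights @ token_vecs


def cluster_and_select(doc_reps, class_reps, keep_ratio=0.5):
    """Cluster documents with a Gaussian mixture whose means are
    initialized at the class representations (the prior: each document
    starts at its nearest class), then keep the most confident documents
    per cluster as pseudo-labeled training data."""
    k = len(class_reps)
    gmm = GaussianMixture(n_components=k, means_init=class_reps, random_state=0)
    gmm.fit(doc_reps)
    probs = gmm.predict_proba(doc_reps)
    labels, conf = probs.argmax(axis=1), probs.max(axis=1)
    selected = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        top = idx[np.argsort(-conf[idx])][: max(1, int(keep_ratio * len(idx)))]
        selected.extend(top.tolist())
    return labels, sorted(selected)
```

The selected documents and their cluster labels would then be used to fine-tune an ordinary supervised text classifier; the keep_ratio and stopping criterion above are placeholders for the confidence and consistency rules used in the actual framework.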
