A Novel Text Classification Algorithm Based on Naïve Bayes and KL-Divergence
The Naive Bayes classifier is a popular machine learning method for text classification because it is fast, easy to implement, and performs well. Its strong assumption that each feature word in a document is independent of the other feature words makes its high efficiency possible, but it also degrades the quality of its results, because in practice many feature words are interrelated. In this paper, in order to enhance text classification performance, we propose solutions to some of the problems with Naive Bayes classifiers. Building on the original Naive Bayes algorithm, we take feature weight into account as a factor and combine it with the KL-divergence (relative entropy) between words to improve the classifier. The improved Naive Bayes classification algorithm is called INBA. Theoretical and experimental analyses show that INBA not only retains the advantages of the Naive Bayes classifier but also achieves higher classification accuracy, and that the proposed solutions are feasible, practical, and effective.
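To make the idea concrete, below is a minimal sketch of a KL-weighted Naive Bayes classifier in the spirit of INBA. The abstract does not give the paper's exact weighting formula, so the scheme here is an assumption for illustration: each word's weight is derived from its contribution to the KL-divergence between the class-conditional word distributions P(w|c) and the global corpus distribution P(w), on the premise that words whose usage diverges more across classes are more discriminative. The function names (train_inba, predict_inba) and the weight scaling are hypothetical, not taken from the paper.

```python
import numpy as np

def train_inba(X, y, alpha=1.0):
    """Train a weighted Naive Bayes model in the spirit of INBA.

    X : (n_docs, n_words) term-count matrix
    y : (n_docs,) integer class labels

    NOTE: the KL-based weighting below is an illustrative assumption;
    the paper's exact formula is not given in the abstract.
    """
    classes = np.unique(y)
    n_classes, n_words = len(classes), X.shape[1]

    # Laplace-smoothed class-conditional word probabilities P(w | c)
    # and class priors P(c), as in standard multinomial Naive Bayes.
    cond = np.empty((n_classes, n_words))
    prior = np.empty(n_classes)
    for i, c in enumerate(classes):
        counts = X[y == c].sum(axis=0) + alpha
        cond[i] = counts / counts.sum()
        prior[i] = (y == c).mean()

    # Global word distribution P(w) over the whole corpus.
    global_p = X.sum(axis=0) + alpha
    global_p = global_p / global_p.sum()

    # Per-word contribution to KL(P(.|c) || P(.)), summed over classes:
    # words whose usage varies strongly across classes score higher.
    kl = (cond * np.log(cond / global_p)).sum(axis=0)
    kl = np.maximum(kl, 0.0)               # clip negative contributions
    weights = 1.0 + kl / kl.max()          # scale weights into [1, 2]

    return classes, np.log(prior), np.log(cond), weights

def predict_inba(model, X):
    classes, log_prior, log_cond, weights = model
    # Weighted log-likelihood: each word's contribution is scaled by
    # its KL-derived weight before the usual Naive Bayes summation.
    scores = X @ (weights * log_cond).T + log_prior
    return classes[scores.argmax(axis=1)]

if __name__ == "__main__":
    # Toy corpus: word 0 marks class 0, word 1 marks class 1,
    # word 2 is shared and therefore receives a lower weight.
    X = np.array([[3, 0, 1], [2, 0, 2], [0, 4, 1], [0, 3, 2]])
    y = np.array([0, 0, 1, 1])
    model = train_inba(X, y)
    print(predict_inba(model, X))  # -> [0 0 1 1]
```

Compared with plain multinomial Naive Bayes, the only change is the per-word weight vector applied to the log-likelihood terms, which preserves the classifier's linear-time scoring while letting discriminative words dominate the decision.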