A Novel Text Classification Algorithm Based on Naïve Bayes and KL-Divergence

The Naïve Bayes classifier is a popular machine learning method for text classification because it is fast, easy to implement, and performs well. Its strong assumption that each feature word in a document is independent of the other feature words makes this efficiency possible, but it also degrades the quality of the results, because some feature words are in fact interrelated. In this paper, to enhance text classification performance, we propose solutions to several of these problems with Naïve Bayes classifiers. Building on the original Naïve Bayes algorithm, we introduce feature weights as a factor and combine them with the KL-divergence (relative entropy) between words to improve the classifier. We call the improved algorithm INBA. Theoretical analysis and experiments show that INBA retains the advantages of the Naïve Bayes classifier while achieving higher classification accuracy, and that the proposed solutions are feasible, practical, and effective.
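
The abstract does not give INBA's exact weighting formula, so the sketch below is only an illustration of the general idea: each word receives a weight derived from a KL-divergence, and that weight scales the word's likelihood term in the Naïve Bayes score. Here the weight is taken to be the KL-divergence between the posterior class distribution P(c|w) and the class prior P(c); this choice, along with all function names, is an assumption for illustration, not the paper's definition.

    # A minimal sketch of a KL-divergence-weighted Naive Bayes classifier.
    # ASSUMPTION: the abstract does not specify INBA's weighting scheme; here
    # each word's weight is the KL-divergence between P(c|w) and the class
    # prior P(c), used as an exponent on that word's likelihood term.
    import math
    from collections import Counter

    def train(docs, labels, alpha=1.0):
        """docs: list of token lists; labels: parallel list of class labels."""
        classes = sorted(set(labels))
        vocab = sorted({w for d in docs for w in d})
        prior = {c: labels.count(c) / len(labels) for c in classes}
        # Laplace-smoothed word counts per class.
        counts = {c: Counter() for c in classes}
        for d, y in zip(docs, labels):
            counts[y].update(d)
        total = {c: sum(counts[c].values()) for c in classes}
        likelihood = {c: {w: (counts[c][w] + alpha) / (total[c] + alpha * len(vocab))
                          for w in vocab} for c in classes}
        # KL-divergence weight D( P(c|w) || P(c) ): larger for words whose
        # presence shifts the class distribution away from the prior.
        weight = {}
        for w in vocab:
            p_w = sum(prior[c] * likelihood[c][w] for c in classes)
            post = {c: prior[c] * likelihood[c][w] / p_w for c in classes}
            weight[w] = sum(post[c] * math.log(post[c] / prior[c]) for c in classes)
        return prior, likelihood, weight, classes

    def classify(doc, model):
        prior, likelihood, weight, classes = model
        def score(c):
            s = math.log(prior[c])
            for w in doc:
                if w in weight:  # unseen words are simply skipped in this sketch
                    s += weight[w] * math.log(likelihood[c][w])
            return s
        return max(classes, key=score)

    # Tiny usage example with two toy classes.
    docs = [["cheap", "pills", "buy"], ["meeting", "notes", "agenda"],
            ["buy", "cheap", "now"], ["agenda", "for", "meeting"]]
    labels = ["spam", "ham", "spam", "ham"]
    model = train(docs, labels)
    print(classify(["cheap", "meeting", "buy"], model))  # -> "spam"

In this sketch the weight acts as an exponent on P(w|c) in log space, so class-indicative words (high KL-divergence) dominate the decision while near-uniform words are discounted, which is one way to soften the independence assumption the abstract criticizes.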