Learning an Optimal Naive Bayes Classifier

The naive Bayes classifier is an efficient classification model that is easy to learn and achieves high accuracy in many domains. However, it has two main drawbacks: (i) its classification accuracy degrades when the attributes are not independent, and (ii) it cannot handle nonparametric continuous attributes. In this work we propose a method that addresses both problems and learns an optimal naive Bayes classifier. The method alternates between two phases, discretization and structural improvement, which are repeated until the classification accuracy can no longer be improved. Discretization is based on the minimum description length (MDL) principle. To deal with dependent and irrelevant attributes, we apply a structural improvement method that eliminates and/or joins attributes, based on mutual and conditional information measures. The method has been tested in two different domains with good results.
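The structural improvement phase can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the threshold `eps`, the sample-based estimators, and the one-pair-per-pass joining policy are all assumptions made here for illustration. It only shows the two information measures the abstract names, mutual information I(A;C) to flag irrelevant attributes for elimination, and conditional mutual information I(Ai;Aj|C) to flag dependent attribute pairs for joining into a single compound attribute.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * math.log2(p / ((px[x] / n) * (py[y] / n)))
    return mi

def conditional_mutual_information(xs, ys, zs):
    """Estimate I(X;Y|Z) = sum_z p(z) * I(X;Y | Z=z)."""
    n = len(zs)
    cmi = 0.0
    for z, cz in Counter(zs).items():
        idx = [i for i in range(n) if zs[i] == z]
        cmi += (cz / n) * mutual_information([xs[i] for i in idx],
                                             [ys[i] for i in idx])
    return cmi

def structural_improvement(columns, labels, eps=0.01):
    """One illustrative pass (eps is a hypothetical threshold):
    drop attributes with I(A;C) < eps as irrelevant, then join the
    single most class-conditionally dependent pair, if any pair has
    I(Ai;Aj|C) >= eps, into one compound attribute of value tuples."""
    kept = [col for col in columns if mutual_information(col, labels) >= eps]
    best = None
    for i in range(len(kept)):
        for j in range(i + 1, len(kept)):
            d = conditional_mutual_information(kept[i], kept[j], labels)
            if d >= eps and (best is None or d > best[0]):
                best = (d, i, j)
    if best is not None:
        _, i, j = best
        merged = list(zip(kept[i], kept[j]))  # compound attribute
        kept = [c for k, c in enumerate(kept) if k not in (i, j)] + [merged]
    return kept
```

In the full method this pass would alternate with MDL-based discretization, repeating until classification accuracy stops improving; here a constant attribute is eliminated (zero mutual information with the class) and two attributes that remain dependent given the class are joined.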