Text Classification by Combining Different Distance Functions with Weights

Text classification is an important subject in data mining. Several methods have been developed for text classification, such as nearest neighbor analysis and latent semantic analysis. The k-nearest neighbor (kNN) classifier is a well-known, simple, and effective method for classifying data in many domains. In kNN, the distance function used to measure the distance or similarity between data points is crucial. To improve the performance of the kNN classifier, a new approach that combines multiple distance functions is proposed here. The weighting factors of the elements in the combined distance function are computed by a genetic algorithm (GA) to make the measurement more effective. Furthermore, an ensemble procedure was developed to improve classification accuracy. Finally, experiments show that the methods developed here are effective for text classification.
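
To make the central idea concrete, the minimal Python sketch below classifies a query document with kNN using a weighted sum of several distance functions. The function names, the particular pair of distances (Euclidean and Manhattan over term-frequency vectors), and the fixed weights are assumptions for illustration only; in the paper the per-element weights are tuned by a GA and an additional ensemble step is applied, neither of which is shown here.

```python
from collections import Counter
import math

# Two example distance functions over term-frequency vectors (dicts).
# The choice of functions and the fixed weights below are illustrative;
# the paper tunes the weights with a genetic algorithm.

def euclidean(a, b):
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys))

def manhattan(a, b):
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in keys)

def combined_distance(a, b, weights):
    """Weighted sum of several distance functions."""
    funcs = (euclidean, manhattan)
    return sum(w * f(a, b) for w, f in zip(weights, funcs))

def knn_classify(query, train, k=3, weights=(0.5, 0.5)):
    """Classify `query` by majority vote among its k nearest neighbors."""
    ranked = sorted(train, key=lambda item: combined_distance(query, item[0], weights))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    # Tiny toy corpus: term-frequency dicts with class labels.
    train = [
        ({"ball": 3, "goal": 2}, "sport"),
        ({"match": 2, "goal": 1}, "sport"),
        ({"stock": 3, "market": 2}, "finance"),
        ({"market": 1, "price": 2}, "finance"),
    ]
    print(knn_classify({"goal": 1, "ball": 1}, train, k=3))  # expected: "sport"
```

In the approach described by the abstract, the `weights` tuple would be the object of the GA search (evaluated, for example, by classification accuracy on held-out data), and several such weighted classifiers could then be combined in an ensemble.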
