Lazy Classifiers Using P-trees
Lazy classifiers store all of the training samples and do not build a classifier until a new sample needs to be classified. They differ from eager classifiers, such as decision tree induction, which build a general model (e.g., a decision tree) before receiving new samples. K-nearest neighbor (KNN) classification is a typical lazy classifier: given a set of training data, a k-nearest neighbor classifier predicts the class value for an unknown tuple X by searching the training set for the k nearest neighbors of X and assigning X the most common class among them. Lazy classifiers are faster than eager classifiers at training time but slower at prediction time, since all computation is deferred until then. In this paper, we introduce approaches to the efficient construction of lazy classifiers using a data structure, the Peano Count Tree (P-tree). A P-tree is a lossless, compressed representation of the original data that records count information to facilitate efficient data mining. Using the P-tree structure, we introduce two classifiers: a P-tree-based k-nearest neighbor classifier (PKNN) and the Podium Incremental Neighbor Evaluator (PINE). Performance analysis shows that our algorithms outperform classical KNN methods.
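The baseline method the paper builds on can be sketched as follows. This is a minimal, plain majority-vote KNN in Python, not the paper's P-tree-accelerated PKNN or PINE (the P-tree count structure and podium weighting are not reproduced here); the function name `knn_predict` and the toy data are illustrative assumptions.

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training samples,
    using squared Euclidean distance. All work happens at prediction
    time, which is what makes the classifier 'lazy'."""
    # Rank every training sample by distance to the query tuple x.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(t, x)), lbl)
        for t, lbl in zip(train, labels)
    )
    # Take the k nearest and return the most common class among them.
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters in 2-D.
train = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
         (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(train, labels, (0.15, 0.1), k=3))  # → a
```

A P-tree-based variant avoids this linear scan by answering neighbor counts from the compressed count tree, and PINE replaces the hard k-neighbor cutoff with distance-dependent (podium) weights.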