Cached Sufficient Statistics for Efficient Machine Learning with Large Datasets

This paper introduces new algorithms and data structures for quick counting in machine learning datasets. We focus on the counting task of constructing contingency tables, but our approach is also applicable to counting the number of records in a dataset that match conjunctive queries. Subject to certain assumptions, the costs of these operations can be shown to be independent of the number of records in the dataset and loglinear in the number of non-zero entries in the contingency table. We provide a very sparse data structure, the ADtree, to minimize memory use. We provide analytical worst-case bounds for this structure for several models of data distribution. We empirically demonstrate that tractably-sized data structures can be produced for large real-world datasets by (a) using a sparse tree structure that never allocates memory for counts of zero, (b) never allocating memory for counts that can be deduced from other counts, and (c) not bothering to expand the tree fully near its leaves. We show how the ADtree can be used to accelerate Bayes net structure-finding algorithms, rule learning algorithms, and feature selection algorithms, and we provide a number of empirical results comparing ADtree methods against traditional direct counting approaches. We also discuss the possible uses of ADtrees in other machine learning methods, and discuss the merits of ADtrees in comparison with alternative representations such as kd-trees, R-trees, and Frequent Sets.

1. Caching Sufficient Statistics

Computational efficiency is an important concern for machine learning algorithms, especially when applied to large datasets (Fayyad, Mannila, & Piatetsky-Shapiro, 1997; Fayyad & Uthurusamy, 1996) or in real-time scenarios. In earlier work we showed how kd-trees with multiresolution cached regression matrix statistics can enable very fast locally weighted and instance-based regression (Moore, Schneider, & Deng, 1997). In this paper, we attempt to accelerate predictions for symbolic attributes using a kind of kd-tree that splits on all dimensions at all nodes. Many machine learning algorithms operating on datasets of symbolic attributes need to do frequent counting. This work is also applicable to Online Analytical Processing (OLAP) applications in data mining, where operations on large datasets such as multidimensional database access, DataCube operations (Harinarayan, Rajaraman, & Ullman, 1996), and association rule learning (Agrawal, Mannila, Srikant, Toivonen, & Verkamo, 1996) could be accelerated by fast counting.

Let us begin by establishing some notation. We are given a data set with R records and M attributes. The attributes are called a_1, a_2, ..., a_M. The value of attribute a_i in the
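To make the counting task concrete, the following is a minimal Python sketch, using a hypothetical toy dataset and not the paper's ADtree implementation, of the two operations the paper sets out to accelerate: counting the records that match a conjunctive query, and building a contingency table that stores only non-zero counts.

```python
from collections import Counter

# Hypothetical toy dataset: R = 4 records over M = 3 symbolic attributes,
# each value a small integer (following the paper's notation a_1, ..., a_M).
records = [
    (1, 2, 1),
    (1, 1, 2),
    (2, 2, 1),
    (1, 2, 2),
]

def count(query, data):
    """Count records matching a conjunctive query.

    `query` maps attribute index -> required value, e.g. {0: 1, 2: 2}
    means "a_1 = 1 and a_3 = 2". This plain scan costs O(R) per query;
    the ADtree is designed to answer the same question in time
    independent of R by caching counts ahead of time.
    """
    return sum(all(rec[i] == v for i, v in query.items()) for rec in data)

def contingency_table(attr_indices, data):
    """Build a sparse contingency table over a subset of attributes.

    Only value combinations that actually occur are stored, mirroring
    the paper's point (a): never allocate memory for counts of zero.
    """
    return Counter(tuple(rec[i] for i in attr_indices) for rec in data)

print(count({0: 1, 1: 2}, records))        # records with a_1 = 1 and a_2 = 2
print(contingency_table((0, 2), records))  # joint counts over (a_1, a_3)
```

The sketch only illustrates the queries themselves; the paper's contribution is a cached structure that answers them without scanning the records at query time.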

[1] J. Ross Quinlan, et al. Learning Efficient Classification Procedures and Their Application to Chess End Games, 1983.

[2] Antonin Guttman, et al. R-trees: a dynamic index structure for spatial searching, 1984, SIGMOD '84.

[3] Nick Roussopoulos, et al. Direct spatial search on pictorial databases using packed R-trees, 1985, SIGMOD Conference.

[4] Thomas M. Cover, et al. Elements of Information Theory, 2005.

[5] Ron Rymon. An SE-tree based Characterization of the Induction Problem, 1993, ICML.

[6] Ron Kohavi, et al. Irrelevant Features and the Subset Selection Problem, 1994, ICML.

[7] Ron Kohavi, et al. The Power of Decision Tables, 1995, ECML.

[8] Ron Kohavi, et al. Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid, 1996, KDD.

[9] Venky Harinarayan, et al. Implementing Data Cubes Efficiently, 1996.

[10] Nir Friedman, et al. On the Sample Complexity of Learning Bayesian Networks, 1996, UAI.

[11] Heikki Mannila, et al. Multiple Uses of Frequent Sets and Condensed Representations (Extended Abstract), 1996, KDD.

[12] Heikki Mannila, et al. Fast Discovery of Association Rules, 1996, Advances in Knowledge Discovery and Data Mining.

[13] Eduardo Sontag, et al. Sample Complexity for Learning, 1996.

[14] Andrew W. Moore, et al. Efficient Locally Weighted Polynomial Regression Predictions, 1997, ICML.

[15] Andrew W. Moore, et al. Cached Sufficient Statistics for Efficient Machine Learning with Large Datasets, 1998, J. Artif. Intell. Res.