Feature selection in Web applications by ROC inflections and powerset pruning

A basic problem in information processing is selecting enough features to represent events accurately for classification, while minimizing the storage and processing devoted to irrelevant or marginally important features. To address this problem, feature selection procedures search the feature power set for the smallest subset that meets performance requirements. Existing procedures suffer two major restrictions: they typically assume, explicitly or implicitly, a fixed operating point, and they make limited use of the statistical structure of the feature power set. We present a method that combines the Neyman-Pearson design procedure on finite data with the directed-set structure of Receiver Operating Characteristic (ROC) curves over feature subsets to determine the maximal size of the feature subsets that can be ranked in a given problem. The search can then be restricted to smaller subsets, yielding significant reductions in computational complexity. Optimizing the overall ROC curve also allows end users to select different operating points and cost functions to optimize. The algorithm further yields a natural Boolean representation of the minimal feature combinations that best describe the data near a given operating point. Such representations are especially appropriate for common text-related features on the Web, such as thresholded TFIDF values. We show how to use these results to generate automatic Boolean query modifications for distributed databases, such as niche metasearch engines.
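The pruned power-set search described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a Neyman-Pearson style criterion (detection rate at a bounded false-alarm rate) for ranking subsets, sums the selected feature values as a stand-in scoring rule, and uses hypothetical parameter names (`max_far`, `required_tpr`, `max_subset_size`).

```python
# Sketch: prune the feature power set by searching subsets in order of
# increasing size and stopping at the first size that meets a
# Neyman-Pearson style operating-point requirement.
from itertools import combinations

def detection_rate(scores, labels, max_far):
    """Largest true-positive rate achievable with false-positive rate <= max_far."""
    pos = sum(labels)
    neg = len(labels) - pos
    best = 0.0
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        if neg == 0 or fp / neg <= max_far:
            best = max(best, tp / pos if pos else 0.0)
    return best

def select_features(X, y, max_far=0.1, required_tpr=0.9, max_subset_size=2):
    """Return the smallest feature subsets meeting the operating point."""
    n_features = len(X[0])
    for k in range(1, max_subset_size + 1):
        winners = []
        for subset in combinations(range(n_features), k):
            # Score each example by summing its selected (e.g.
            # thresholded TFIDF) feature values.
            scores = [sum(row[i] for i in subset) for row in X]
            if detection_rate(scores, y, max_far) >= required_tpr:
                winners.append(subset)
        if winners:
            return winners  # restrict the search: no larger subsets examined
    return []
```

Because the search stops at the smallest subset size that satisfies the requirement, larger regions of the power set are never enumerated, which is the source of the complexity reduction claimed above.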
