Dynamic integration of data mining methods in knowledge discovery systems
[1] Leo Breiman,et al. Bagging Predictors , 1996, Machine Learning.
[2] Richard Bellman,et al. Adaptive Control Processes: A Guided Tour , 1961, The Mathematical Gazette.
[3] Christopher J. Merz,et al. Using Correspondence Analysis to Combine Classifiers , 1999, Machine Learning.
[4] Catherine Blake,et al. UCI Repository of machine learning databases , 1998 .
[5] Padraig Cunningham,et al. Diversity versus Quality in Classification Ensembles Based on Feature Selection , 2000, ECML.
[6] Robert Tibshirani,et al. Discriminant Adaptive Nearest Neighbor Classification , 1995, IEEE Trans. Pattern Anal. Mach. Intell..
[7] Alexey Tsymbal,et al. Dynamic integration of multiple data mining techniques in a knowledge discovery management system , 1999, Defense, Security, and Sensing.
[8] Alexey Tsymbal,et al. Ensemble Feature Selection with Dynamic Integration of Classifiers , 2001 .
[9] Andrew W. Moore,et al. Locally Weighted Learning , 1997, Artificial Intelligence Review.
[10] Huan Liu,et al. Feature Selection for Classification , 1997, Intell. Data Anal..
[11] Michael J. Pazzani,et al. Classification and regression by combining models , 1998 .
[12] Salvatore J. Stolfo,et al. An extensible meta-learning approach for scalable and accurate inductive learning , 1996 .
[13] Richard O. Duda,et al. Pattern classification and scene analysis , 1974, A Wiley-Interscience publication.
[14] Mark A. Hall,et al. Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning , 1999, ICML.
[15] Alexey Tsymbal,et al. Learning feature selection for medical databases , 1999, Proceedings 12th IEEE Symposium on Computer-Based Medical Systems (Cat. No.99CB36365).
[16] Saso Dzeroski,et al. Combining Multiple Models with Meta Decision Trees , 2000, PKDD.
[17] Ron Kohavi,et al. Wrappers for performance enhancement and oblivious decision graphs , 1995 .
[18] Pedro M. Domingos. Control-Sensitive Feature Selection for Lazy Learners , 1997, Artificial Intelligence Review.
[19] Christopher J. Merz,et al. UCI Repository of Machine Learning Databases , 1996 .
[20] Marvin Minsky,et al. Perceptrons: An Introduction to Computational Geometry , 1969 .
[21] Ron Kohavi,et al. Data Mining Using MLC++, a Machine Learning Library in C++ , 1996, Int. J. Artif. Intell. Tools.
[22] Ron Kohavi,et al. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection , 1995, IJCAI.
[23] Moshe Koppel,et al. Integrating Multiple Classifiers By Finding Their Areas of Expertise , 1996 .
[24] Alexey Tsymbal,et al. Advanced dynamic selection of diagnostic methods , 1998, Proceedings. 11th IEEE Symposium on Computer-Based Medical Systems (Cat. No.98CB36237).
[25] Charles Elkan,et al. Boosting and Naive Bayesian learning , 1997 .
[26] Michael J. Pazzani,et al. Error reduction through learning multiple descriptions , 2004, Machine Learning.
[27] J. Ross Quinlan,et al. Induction of Decision Trees , 1986, Machine Learning.
[28] Robert E. Schapire,et al. A Brief Introduction to Boosting , 1999, IJCAI.
[29] Cullen Schaffer,et al. Selecting a classification method by cross-validation , 1993, Machine Learning.
[30] Kagan Tumer,et al. Classifier Combining: Analytical Results and Implications , 1995 .
[31] Anders Krogh,et al. Neural Network Ensembles, Cross Validation, and Active Learning , 1994, NIPS.
[32] Se June Hong,et al. Use of Contextual Information for Feature Ranking and Discretization , 1997, IEEE Trans. Knowl. Data Eng..
[33] Tin Kam Ho,et al. The Random Subspace Method for Constructing Decision Forests , 1998, IEEE Trans. Pattern Anal. Mach. Intell..
[34] D. Kibler,et al. Instance-based learning algorithms , 2004, Machine Learning.
[35] Thomas G. Dietterich,et al. A study of distance-based machine learning algorithms , 1994 .
[36] Wei-Yin Loh,et al. A Comparison of Prediction Accuracy, Complexity, and Training Time of Thirty-Three Old and New Classification Algorithms , 2000, Machine Learning.
[37] Thomas G. Dietterich,et al. Solving Multiclass Learning Problems via Error-Correcting Output Codes , 1994, J. Artif. Intell. Res..
[38] David H. Wolpert,et al. Stacked generalization , 1992, Neural Networks.
[39] Leo Breiman,et al. Classification and Regression Trees , 1984 .
[40] Elie Bienenstock,et al. Neural Networks and the Bias/Variance Dilemma , 1992, Neural Computation.
[41] Alexey Tsymbal,et al. Arbiter Meta-Learning with Dynamic Selection of Classifiers and Its Experimental Investigation , 1999, ADBIS.
[42] Peter Kokol,et al. Comparison of Three Databases with a Decision Tree Approach in the Medical Field of Acute Appendicitis , 2001, MedInfo.
[43] David W. Opitz,et al. Feature Selection for Ensembles , 1999, AAAI/IAAI.
[44] Eric Bauer,et al. An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants , 1999, Machine Learning.
[45] Edwin P. D. Pednault,et al. Decomposition of Heterogeneous Classification Problems , 1997, IDA.
[46] Pedro M. Domingos,et al. Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier , 1996, ICML.
[47] Sebastian Thrun,et al. The MONK's Problems - A Performance Comparison of Different Learning Algorithms , 1991, Technical Report CMU-CS-91-197, Carnegie Mellon University.
[48] Thomas G. Dietterich. Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms , 1998, Neural Computation.
[49] Alexey Tsymbal,et al. Distance functions in dynamic integration of data mining techniques , 2000, SPIE Defense + Commercial Sensing.
[50] V. Terziyan,et al. Dynamic Integration of Data Mining Methods Using Selection in a Knowledge Discovery Management System , 1999 .
[51] Richard Maclin,et al. Ensembles as a Sequence of Classifiers , 1997, IJCAI.
[52] Alexey Tsymbal,et al. Decision Committee Learning with Dynamic Integration of Classifiers , 2000, ADBIS-DASFAA.
[53] Robert E. Schapire,et al. Using output codes to boost multiclass learning problems , 1997, ICML.
[54] Ron Kohavi,et al. MineSet: An Integrated System for Data Mining , 1997, KDD.
[55] Mario Vento,et al. Reliability Parameters to Improve Combination Strategies in Multi-Expert Systems , 1999, Pattern Analysis & Applications.
[56] Alexey Tsymbal,et al. A Dynamic Integration Algorithm for an Ensemble of Classifiers , 1999, ISMIS.
[57] João Gama,et al. Combining classification algorithms , 2000 .
[58] Pedro M. Domingos,et al. On the Optimality of the Simple Bayesian Classifier under Zero-One Loss , 1997, Machine Learning.
[59] Ron Kohavi,et al. Feature Selection for Knowledge Discovery and Data Mining , 1998 .
[60] Alexey Tsymbal,et al. Ensemble feature selection with the simple Bayesian classification , 2003, Inf. Fusion.
[61] Claire Cardie,et al. Improving Minority Class Prediction Using Case-Specific Feature Weights , 1997, ICML.
[62] Heikki Mannila,et al. A database perspective on knowledge discovery , 1996, CACM.
[63] Alexey Tsymbal,et al. Local Feature Selection with Dynamic Integration of Classifiers , 2001, Fundam. Informaticae.
[64] Peter W. Eklund. Comparative study of public-domain supervised machine-learning accuracy on the UCI database , 1999, Defense, Security, and Sensing.
[65] Thomas G. Dietterich. What is machine learning? , 2020, Archives of Disease in Childhood.
[66] Alexander Schnabl,et al. Development of Multi-Criteria Metrics for Evaluation of Data Mining Algorithms , 1997, KDD.
[67] David W. Opitz,et al. Generating Accurate and Diverse Members of a Neural-Network Ensemble , 1995, NIPS.
[68] Salvatore J. Stolfo,et al. On the Accuracy of Meta-learning for Scalable Data Mining , 2004, Journal of Intelligent Information Systems.
[69] Padraig Cunningham,et al. Using Diversity in Preparing Ensembles of Classifiers Based on Different Feature Subsets to Minimize Generalization Error , 2001, ECML.
[70] Tony R. Martinez,et al. Improved Heterogeneous Distance Functions , 1996, J. Artif. Intell. Res..
[71] B. Gorayska,et al. Cognitive Technology: In Search of a Humane Interface , 1995 .
[72] Wolfgang Gaul,et al. Classification and Positioning of Data Mining Tools , 1999 .
[73] William G. Baxt,et al. Improving the Accuracy of an Artificial Neural Network Using Multiple Differently Trained Networks , 1992, Neural Computation.
[74] Padhraic Smyth,et al. Knowledge Discovery and Data Mining: Towards a Unifying Framework , 1996, KDD.
[75] Ted Pedersen,et al. A Simple Approach to Building Ensembles of Naive Bayesian Classifiers for Word Sense Disambiguation , 2000, ANLP.
[76] Kagan Tumer,et al. Error Correlation and Error Reduction in Ensemble Classifiers , 1996, Connect. Sci..
[77] Christopher J. Merz,et al. Dynamical Selection of Learning Algorithms , 1995, AISTATS.
[78] Steven L. Salzberg. On Comparing Classifiers: A Critique of Current Research and Methods , 1999 .
[79] David A. Bell,et al. Designing a Kernel for Data Mining , 1997, IEEE Expert.
[80] George H. John. Enhancements to the data mining process , 1997 .
[81] Padhraic Smyth,et al. From Data Mining to Knowledge Discovery: An Overview , 1996, Advances in Knowledge Discovery and Data Mining.
[82] Alexey Tsymbal,et al. The decision support system for telemedicine based on multiple expertise , 1998, Int. J. Medical Informatics.
[83] Kevin W. Bowyer,et al. Combination of multiple classifiers using local accuracy estimates , 1996, Proceedings CVPR IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
[84] Kagan Tumer,et al. Dimensionality Reduction Through Classifier Ensembles , 1999 .
[85] D. Opitz,et al. Popular Ensemble Methods: An Empirical Study , 1999, J. Artif. Intell. Res..
[86] Pedro M. Domingos. Knowledge Discovery Via Multiple Models , 1998, Intell. Data Anal..
[87] Geoffrey I. Webb,et al. MultiBoosting: A Technique for Combining Boosting and Wagging , 2000, Machine Learning.
[88] Fabio Roli,et al. Methods for dynamic classifier selection , 1999, Proceedings 10th International Conference on Image Analysis and Processing.
[89] Steven Salzberg,et al. A Weighted Nearest Neighbor Algorithm for Learning with Symbolic Features , 2004, Machine Learning.
[90] Alexey Tsymbal,et al. Ensemble feature selection with the simple Bayesian classification in medical diagnostics , 2002, Proceedings of 15th IEEE Symposium on Computer-Based Medical Systems (CBMS 2002).
[91] J. Ross Quinlan,et al. Bagging, Boosting, and C4.5 , 1996, AAAI/IAAI, Vol. 1.
[92] Alexey Tsymbal,et al. Bagging and Boosting with Dynamic Integration of Classifiers , 2000, PKDD.
[93] Thomas G. Dietterich. Machine-Learning Research Four Current Directions , 1997 .
[94] J. Ross Quinlan,et al. C4.5: Programs for Machine Learning , 1992 .
[95] Fabio Roli,et al. Dynamic classifier selection based on multiple classifier behaviour , 2001, Pattern Recognit..