A Framework to Adjust Dependency Measure Estimates for Chance
James Bailey | Karin M. Verspoor | Xuan Vinh Nguyen | Simone Romano