A common approach to feature selection is to examine the variable importance scores of a machine learning model, as a way to understand which features are most relevant for making predictions. Given the significance of feature selection, it is crucial that the calculated importance scores reflect reality. Falsely overestimating the importance of irrelevant features can lead to false discoveries, while underestimating the importance of relevant features may lead us to discard them, resulting in poor model performance. Additionally, black-box models like XGBoost provide state-of-the-art predictive performance but cannot be easily understood by humans, so we rely on variable importance scores or explainability methods like SHAP to offer insight into their behavior. In this paper, we investigate the performance of variable importance as a feature selection method across various black-box and interpretable machine learning methods. We compare the ability of CART, Optimal Trees, XGBoost, and SHAP to correctly identify the relevant subset of variables across a number of experiments. The results show that, regardless of whether we use the native variable importance method or SHAP, XGBoost fails to clearly distinguish between relevant and irrelevant features. On the other hand, the interpretable methods are able to correctly and efficiently identify irrelevant features, and thus offer significantly better performance for feature selection.
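The setup described above can be illustrated with a minimal sketch. The snippet below is not the paper's experimental code; it merely shows the general approach using scikit-learn's CART-style `DecisionTreeClassifier`, with a synthetic dataset (one relevant feature, two pure-noise features) chosen for illustration. The impurity-based `feature_importances_` scores should concentrate on the relevant feature.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500

# Synthetic data: the label depends only on the first feature;
# the other two features are pure noise.
x_rel = rng.normal(size=n)
X = np.column_stack([x_rel, rng.normal(size=n), rng.normal(size=n)])
y = (x_rel > 0).astype(int)

# Fit a shallow CART tree and inspect impurity-based importances.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
importances = tree.feature_importances_

# A well-behaved importance measure assigns nearly all weight to
# feature 0 and close to zero to the irrelevant features, which is
# the behavior the paper evaluates across methods.
```

Selecting features by thresholding such scores is the procedure whose reliability the paper evaluates; the experiments replace this toy setup with CART, Optimal Trees, XGBoost's native importances, and SHAP values.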
[1] Tianqi Chen et al., "XGBoost: A Scalable Tree Boosting System," KDD, 2016.
[2] Dimitris Bertsimas et al., "Multivariate Statistics and Machine Learning Under a Modern Optimization Lens," 2015.
[3] Scott Lundberg et al., "A Unified Approach to Interpreting Model Predictions," NIPS, 2017.
[4] Steven Salzberg et al., "Decision Tree Induction: How Effective is the Greedy Heuristic?," KDD, 1995.
[5] Hyunjoong Kim et al., "Classification Trees With Unbiased Multiway Splits," 2001.
[6] Dimitris Bertsimas et al., "Optimal classification trees," Machine Learning, 2017.
[7] Igor Kononenko et al., "On Biases in Estimating Multi-Valued Attributes," IJCAI, 1995.
[8] K. Hornik et al., "Unbiased Recursive Partitioning: A Conditional Inference Framework," 2006.
[9] Carolin Strobl et al., "Unbiased split selection for classification trees based on the Gini Index," Comput. Stat. Data Anal., 2007.
[10] Wei-Yin Loh et al., "Classification and regression trees," WIREs Data Mining Knowl. Discov., 2011.