Practitioners use feature importance to rank and eliminate weak predictors during model development, aiming to simplify models and improve generalization. Unfortunately, they also routinely conflate such feature importance measures with feature impact, the isolated effect of an explanatory variable on the response variable. This can have real-world consequences when importance is misinterpreted as impact for business or medical insights. The dominant approach for computing importances is to interrogate a fitted model, which works well for feature selection but gives distorted measures of feature impact. The same method applied to the same data set can yield different feature importances depending on the model, leading us to conclude that impact should be computed directly from the data. While nonparametric feature selection algorithms exist, they typically provide feature rankings rather than measures of impact or importance, and they typically focus on single-variable associations with the response. In this paper, we give mathematical definitions of feature impact and importance, derived from partial dependence curves, that operate directly on the data. To assess quality, we show that features ranked by these definitions are competitive with existing feature selection techniques on three real data sets for predictive tasks.
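For context, the model-based partial dependence the abstract contrasts against (Friedman, 2001) is computed by fixing one feature at a grid value across all rows and averaging the model's predictions. The sketch below is illustrative only: the model function, feature names, and data are hypothetical stand-ins, not the paper's method or data.

```python
import numpy as np

def partial_dependence(predict, X, j, grid):
    """Model-based partial dependence (Friedman, 2001): for each grid
    value v, fix feature j at v across all rows and average the model's
    predictions over the data."""
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v
        pd_vals.append(predict(Xv).mean())
    return np.array(pd_vals)

# Hypothetical "fitted model" standing in for an estimator's predict():
# y = 2*x0 + x1**2. Data is synthetic for illustration.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
predict = lambda X: 2 * X[:, 0] + X[:, 1] ** 2

grid = np.linspace(-1, 1, 5)
pd_x0 = partial_dependence(predict, X, 0, grid)
# For this additive model, pd_x0 rises linearly with slope 2,
# recovering x0's isolated effect on the response.
```

Because curves like this are read off a fitted model, two different models trained on the same data can disagree, which is the distortion the abstract argues motivates computing impact directly from the data instead.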