An experimental study of several decision issues for wrapper Feature Selection with Multi-Layer Perceptrons is presented, namely the stopping criterion, the data set where the saliency is measured, and the network retraining performed before computing the saliency. Experimental results with the Sequential Backward Selection procedure indicate that the increase in computational cost associated with retraining the network with every feature temporarily removed before computing the saliency is rewarded with a significant performance improvement. Despite being quite intuitive, this idea has hardly been used in practice. Regarding the stopping criterion and the data set where the saliency is measured, the procedure profits from measuring the saliency on a validation set, as reasonably expected. A somewhat non-intuitive conclusion can be drawn from the stopping criterion, where it is suggested that forcing overtraining may be as useful as early stopping.
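The procedure described above can be sketched as follows. This is a minimal, illustrative implementation of Sequential Backward Selection, not the authors' exact method: `train_and_score` is a hypothetical callback standing in for retraining the Multi-Layer Perceptron on a candidate feature subset and returning its accuracy on a held-out validation set, which mirrors the paper's finding that retraining before computing each saliency pays off.

```python
def backward_selection(features, train_and_score, min_features=1):
    """Sequential Backward Selection (illustrative sketch).

    `train_and_score(subset)` is assumed to retrain the network using
    only the given feature subset and return its validation-set score.
    At each step, every remaining feature is temporarily removed, the
    network is retrained, and the feature whose removal hurts the
    validation score least is discarded permanently.
    """
    current = list(features)
    # Record (subset, score) at each step so a stopping point can be
    # picked afterwards from the validation curve.
    history = [(tuple(current), train_and_score(current))]
    while len(current) > min_features:
        scores = {}
        for f in current:
            candidate = [g for g in current if g != f]
            scores[f] = train_and_score(candidate)  # retrain per candidate
        # Drop the least useful feature (highest score without it).
        worst = max(scores, key=scores.get)
        current.remove(worst)
        history.append((tuple(current), scores[worst]))
    return history
```

Note that the inner loop retrains the network once per surviving feature at every step, which is exactly the computational cost the abstract argues is worthwhile.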