Experimental results reported in the machine learning and artificial intelligence literature can be misleading. This paper investigates the common practices of data averaging (reporting results as the mean and standard deviation over multiple trials) and data snooping in the context of neural networks, one of the most popular machine learning models. Both practices can produce misleading results and inaccurate conclusions. We demonstrate how easily this can happen and propose techniques for avoiding these problems. For data averaging, the common presentation assumes that the distribution of individual results is Gaussian. However, we examine this distribution for common problems and find that it often does not approximate a Gaussian: it may be asymmetric and may be multimodal. We show that assuming Gaussian distributions can significantly affect the interpretation of results, especially in comparison studies. For a controlled task, we find that the distribution of performance is skewed towards better performance for smoother target functions and towards worse performance for more complex target functions. We propose new guidelines for reporting performance which convey more information about the actual distribution (e.g. box-whiskers plots). For data snooping, we demonstrate that optimizing performance by experimenting with multiple parameters can lead to significance being assigned to results which are due to chance. We suggest that precise descriptions of experimental techniques are very important to the evaluation of results, and that potential data snooping biases must be kept in mind when formulating these techniques (e.g. when selecting the test procedure). Additionally, it is important to rely only on appropriate statistical tests and to ensure that any assumptions made in those tests are valid (e.g. normality of the distribution).
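The data snooping effect described above can be illustrated with a minimal simulation, assuming a hypothetical setting in which 50 hyperparameter configurations are tried on a task where every configuration is in fact no better than chance. The names, sample sizes, and the choice of a Bonferroni correction here are illustrative assumptions, not the paper's procedure:

```python
# Illustrative sketch (not the paper's method): searching over many
# hyperparameter settings and reporting only the best result can make a
# pure-chance outcome look statistically significant. Every "configuration"
# below is a coin-flip classifier with true accuracy 0.5.
import math
import random

random.seed(0)

N_TRIALS = 100   # test examples evaluated per configuration (assumed)
N_CONFIGS = 50   # hyperparameter settings tried (assumed)

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Each configuration's measured accuracy is the mean of N_TRIALS coin flips.
accuracies = [
    sum(random.random() < 0.5 for _ in range(N_TRIALS)) / N_TRIALS
    for _ in range(N_CONFIGS)
]

best_acc = max(accuracies)

# Naive one-sided z-test of the *selected* best result against chance,
# ignoring that N_CONFIGS settings were searched.
se = math.sqrt(0.25 / N_TRIALS)
z = (best_acc - 0.5) / se
naive_p = 1.0 - phi(z)

# Bonferroni correction accounting for the N_CONFIGS-way search.
corrected_p = min(1.0, naive_p * N_CONFIGS)

print(f"best accuracy over {N_CONFIGS} configs: {best_acc:.2f}")
print(f"naive p-value (ignores the search): {naive_p:.4f}")
print(f"corrected p-value (accounts for it): {corrected_p:.4f}")
```

The naive test typically declares the best configuration significantly better than chance, while the corrected test does not; this is one simple way the selection step (the snooping) invalidates the nominal significance level.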