The capabilities of artificial neural networks in body composition research

Abstract. When estimating in vivo body composition, or when combining such estimates with other results, multiple variables must be taken into account (e.g. binary attributes such as gender, or continuous attributes such as most biosignals). Standard statistical models, such as logistic regression and multivariate analysis, assume well-defined distributions (e.g. the normal distribution), independence among all inputs, and purely linear relationships; these requirements are rarely met in real life. Artificial neural networks can be used as an alternative to these models. In the present work, we describe the pre-processing and multivariate analysis of data using neural network techniques, providing examples from the medical field and drawing comparisons with classical statistical approaches. We also address the criticisms raised against neural network techniques and discuss how they might be improved.
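To make the contrast concrete, the sketch below (ours, not taken from the paper; it assumes scikit-learn, and the synthetic data, network size, and other settings are purely illustrative) fits a logistic regression and a small feed-forward network to a binary outcome driven by an interaction between two inputs. The linear model cannot capture the interaction, while the network can; pre-processing, early stopping, cross-validation, and ROC analysis appear here only as stand-ins for the techniques discussed in the paper.

```python
# Minimal illustrative sketch (assumptions: scikit-learn available;
# data and hyperparameters are invented for demonstration only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 2))              # two continuous "biosignal"-like inputs
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # XOR-style nonlinear binary outcome

# Identical pre-processing (standardization) feeds both models.
logreg = make_pipeline(StandardScaler(), LogisticRegression())
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), early_stopping=True,
                  max_iter=2000, random_state=0),
)

# Cross-validated ROC AUC: the linear model stays near chance (~0.5),
# the small network resolves the interaction.
for name, model in [("logistic regression", logreg), ("neural network", mlp)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: ROC AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```

The design point is the one made in the abstract: once a relationship between inputs and outcome is nonlinear or interaction-driven, a model that assumes linearity and input independence cannot recover it, whereas even a very small feed-forward network can, provided overfitting is controlled (here via early stopping and cross-validation).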
