Dealing with limited data in ballistic impact scenarios: an empirical comparison of different neural network approaches

In the domain of high-speed impact between solids, simulating a single trial demands substantial resources and a high computational cost. The objective of this research is to find the neural network best suited to a new ballistic impact problem, making the most of the limited number of trials available while keeping the network architecture simple. To achieve this goal, this paper proposes a four-stage performance tuning process. These stages combine existing statistical techniques with proposals to improve performance and to analyze the influence of each input variable. To measure the quality of the candidate networks, two information-theoretic criteria have been incorporated that weigh goodness of fit against model complexity. The results show that applying an integrated tuning process in this domain improves the performance and efficiency of a neural network compared with several machine learning alternatives.
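The abstract does not name the two information-theoretic criteria. As a hedged illustration only, criteria of this kind typically take the form of Akaike's information criterion (AIC) or a description-length (BIC/MDL-style) penalty, both of which trade training fit against the number of free parameters. For a regression network with k adjustable weights fitted to n training examples with residual sum of squares SSE, the standard forms are

AIC = n \ln(\mathrm{SSE}/n) + 2k
BIC = n \ln(\mathrm{SSE}/n) + k \ln n

The network minimizing the chosen criterion offers the best compromise between data fit and architectural complexity: a larger hidden layer is only justified if the reduction in SSE outweighs the growth of the penalty term. The exact criteria and penalty forms used by the authors may differ from this sketch.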
