Deterministic Numeric Simulation and Surrogate Models with White and Black Machine Learning Methods: A Case Study on Direct Mappings

The approximation and emulation of first-principles-based deterministic models are important problems in many disciplines, including the physical and natural sciences as well as engineering (industrial design, the creation of digital twins, and other tasks). Typically, they involve complex systems described by partial differential or integral equations that must be solved for a variety of spatial and temporal boundary conditions. Finding these solutions is usually costly in terms of both computational resources and time. Surrogate models are an effective way of building approximations that can replace the complex and costly original models, expediting operations. Computational intelligence techniques have proven suitable for surrogate modeling, and this paper explores the characterization of a relatively simple deterministic system described by a partial differential equation, using white-box as well as black-box approaches for direct supervised mappings (inverse mappings are explored elsewhere). In addition, unsupervised methods are used to gain insight into the properties of the input and output state spaces. White-box machine learning techniques exposed the nature of the inter-dependencies and the importance of the predictor variables. Individually, support vector regression outperformed all other models for both the fixed-location, fixed-time scenario and the fixed-location, time-dependent scenario. However, the ensemble composed of white-box techniques outperformed the one built from black-box methods in terms of error and correlation measures.
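As a concrete illustration of the surrogate-modeling workflow described above, the following minimal sketch fits a support vector regression surrogate to samples of a one-dimensional heat-equation solution. The specific PDE, its analytic solution, the sampling ranges, and the SVR hyperparameters below are illustrative assumptions, not the system or settings used in the study.

```python
# Minimal sketch (illustrative, not the paper's actual setup): train an
# SVR surrogate on samples of the 1D heat-equation solution
# u(x, t) = exp(-pi^2 * t) * sin(pi * x), standing in for an
# "expensive" deterministic solver.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Sample (x, t) inputs and evaluate the analytic solution, playing the
# role of the costly first-principles model being surrogated.
X = rng.uniform(low=[0.0, 0.0], high=[1.0, 0.5], size=(2000, 2))
y = np.exp(-np.pi**2 * X[:, 1]) * np.sin(np.pi * X[:, 0])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBF-kernel SVR as the direct supervised mapping (x, t) -> u(x, t);
# C and epsilon are assumed values, not tuned results from the study.
surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.001)
surrogate.fit(X_train, y_train)

print("test R^2:", surrogate.score(X_test, y_test))
```

Once trained, the surrogate replaces solver calls with fast `predict` evaluations over the same input domain, which is the source of the speedup that motivates the approach.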
