Using Machine Learning Methods to Predict Bias in Nuclear Criticality Safety

Abstract This paper describes the application of machine learning (ML) tools to the prediction of bias in criticality safety analysis. A set of over 1000 experiments included in the Whisper package was used to train a variety of ML algorithms (notably the Random Forest and AdaBoost regressors implemented in scikit-learn), with neutron multiplication (keff) sensitivities for individual nuclides (with and without energy dependence) and, optionally, the simulated keff serving as the training features. The trained models were then used to predict the bias (simulated minus experimental keff). Energy-integrated sensitivity profiles combined with the simulated keff as training features led to the best predictions, as quantified by root-mean-square and mean absolute errors. The best-case estimates came from AdaBoost, with a mean absolute error of 0.00174, which is less than the mean experimental uncertainty of 0.00328 for the experiments included.
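For concreteness, the sketch below illustrates the kind of regression workflow the abstract describes: scikit-learn's RandomForestRegressor and AdaBoostRegressor are trained on a feature matrix of per-nuclide, energy-integrated keff sensitivities augmented with the simulated keff, with the bias (simulated minus experimental keff) as the target, and scored by mean absolute and root-mean-square error. The synthetic stand-in data, feature dimensions, and hyperparameters are all assumptions for illustration, not the paper's actual Whisper benchmark set or configuration.

```python
# Hypothetical sketch of bias prediction from sensitivity features with
# scikit-learn. The random data below stands in for Whisper's nuclide
# sensitivity profiles; all sizes and hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_experiments = 1100   # abstract: "over 1000 experiments"
n_nuclides = 50        # hypothetical number of sensitivity features

# Energy-integrated keff sensitivities per nuclide (stand-in data),
# with the simulated keff appended as an optional extra feature.
sensitivities = rng.normal(size=(n_experiments, n_nuclides))
simulated_keff = 1.0 + 0.01 * rng.normal(size=n_experiments)
X = np.column_stack([sensitivities, simulated_keff])

# Target: bias = simulated keff - experimental keff (synthetic here).
y = 0.003 * rng.normal(size=n_experiments)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

models = {
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostRegressor(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f"{name}: MAE={mae:.5f}, RMSE={rmse:.5f}")
```

A real application would replace the random arrays with the sensitivity vectors and simulated keff values from the validation suite and would compare the resulting MAE against the experimental uncertainty, as the paper does.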
