Simulated annealing and neural networks as alternative methods for nonlinear constrained optimization

Abstract. Using a particular example of track fitting with geometrical and kinematical constraints for events from a particle physics experiment, simulated annealing and backpropagation neural networks are investigated as alternative optimization methods. The two methods differ in that simulated annealing suffers from the enormous number of function evaluations required, whereas a trained neural network requires negligible computing time. This makes the former impractical for the analysis of large amounts of data, but we find that using either of these methods to compute start values has a significant impact on the convergence of the constrained fits. Furthermore, it enables us to reduce and estimate the effects introduced by the optimization algorithm. For our case of kinematic and geometric fitting of pp → K⁰K±π± Monte Carlo events, we find that the rate of non-convergence, in cases where the standard analytical minimization fails when the measured values are used as start values, changes by up to 40% when a second attempt is made with start values from a global optimizer.
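
The following is a minimal sketch, not the implementation used in this work, illustrating the idea described above: a global optimizer (here SciPy's dual_annealing, standing in for simulated annealing) supplies start values for a subsequent local constrained least-squares fit. The measurements y, errors sigma, the toy constraint, and the penalty weight are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): a simulated-annealing-style
# global search provides start values for a local constrained chi-square fit.
import numpy as np
from scipy.optimize import dual_annealing, minimize

# Hypothetical "measured" quantities and their uncertainties (toy values).
y = np.array([1.2, 0.8, 2.1])
sigma = np.array([0.1, 0.1, 0.2])

def chi2(x):
    """Least-squares distance of the fitted values x from the measurements."""
    return np.sum(((x - y) / sigma) ** 2)

def constraint(x):
    """Toy nonlinear equality constraint standing in for a kinematic relation."""
    return x[0] ** 2 + x[1] ** 2 - x[2]

def penalized(x):
    # The annealer handles no equality constraints directly, so the
    # constraint enters the global objective as a quadratic penalty term.
    return chi2(x) + 1e3 * constraint(x) ** 2

bounds = [(-5.0, 5.0)] * 3

# Global step: simulated annealing over the bounded parameter space.
ann = dual_annealing(penalized, bounds, seed=0, maxiter=500)

# Local step: constrained fit started from the annealing result
# (a first attempt would instead start from the measured values y).
fit = minimize(chi2, ann.x, method="SLSQP",
               constraints=[{"type": "eq", "fun": constraint}])

print("start values from annealing:", ann.x)
print("constrained fit converged:", fit.success, "x =", fit.x)
```

In the same spirit, a network trained to map measured values to good start values could replace the annealing step at negligible per-event cost once trained.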