On algorithms for nonconvex optimization in the calculus of variations

A simple algorithm is proposed for computing local minima of nonconvex variational problems in one dimension. The algorithm avoids certain well-known local minima that are distant from the global minimum and can be used to improve the minimizers obtained by classical descent methods. The computed minimizers exhibit mesh-scale oscillations typical of weakly convergent sequences. For certain simple problems these minimizers are shown to converge weakly to the correct limit; however, numerical evidence suggests that their gradients may not converge to the correct weak limits, and this is verified using asymptotic expansions. Extensions to multiple dimensions and some numerical examples in two dimensions are considered.
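The abstract specifies neither the proposed algorithm nor a model problem, so the following sketch is only illustrative: it applies a classical descent method, the baseline the proposed algorithm is meant to improve on, to a standard double-well functional E(u) = ∫₀¹ ((u′)² − 1)² + u² dx with u(0) = u(1) = 0, discretized with piecewise linear finite elements. The model problem, function names, and parameters are all assumptions, not taken from the paper. From a rough initial guess, descent typically freezes into a local minimizer whose slopes oscillate between the wells ±1 at the mesh scale, with energy well above the infimum of zero.

```python
import numpy as np

# Hypothetical model problem (not taken from the paper):
#   E(u) = \int_0^1 ((u')^2 - 1)^2 + u^2 dx,  u(0) = u(1) = 0.
# Its infimum is 0 but is not attained: minimizing sequences are fine
# sawtooths with slopes +-1 converging weakly (not strongly) to u = 0.

def energy(u, h):
    s = np.diff(u) / h               # elementwise slopes u'
    m = 0.5 * (u[:-1] + u[1:])       # elementwise midpoint values of u
    return h * np.sum((s**2 - 1.0)**2 + m**2)

def gradient(u, h):
    s = np.diff(u) / h
    m = 0.5 * (u[:-1] + u[1:])
    w = 4.0 * s * (s**2 - 1.0)       # derivative of (s^2 - 1)^2 in s
    g = np.zeros_like(u)
    g[:-1] -= w                      # element i pulls on node i ...
    g[1:] += w                       # ... and on node i + 1
    g[:-1] += h * m                  # lower-order u^2 term (midpoint rule)
    g[1:] += h * m
    g[0] = g[-1] = 0.0               # keep Dirichlet data u(0) = u(1) = 0
    return g

def descend(n=64, steps=20000, lr=2e-4, seed=0):
    """Classical gradient descent on a P1 finite element discretization."""
    rng = np.random.default_rng(seed)
    h = 1.0 / n
    u = 0.1 * h * rng.standard_normal(n + 1)   # rough mesh-scale start
    u[0] = u[-1] = 0.0
    for _ in range(steps):
        u -= lr * gradient(u, h)
    return u, h

u, h = descend()
s = np.diff(u) / h
print(f"energy at descent's fixed point : {energy(u, h):.3e}")
print(f"slopes within 0.1 of +-1        : {np.mean(np.abs(np.abs(s) - 1.0) < 0.1):.0%}")
```

Each element's slope falls into whichever well the initial guess selects, so descent stalls in a local minimizer whose gradient oscillates at the mesh scale while its energy stays far from the infimum; escaping such minimizers is the role the abstract assigns to the proposed algorithm.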