Nonlinear Optimization
The nonlinear minimization problem is to find a (local) minimizer of an objective function $f(\cdot)$, which takes a vector $x \in \mathbb{R}^n$ as input and returns a scalar $f(x)$. A broad and important class of algorithms takes the iterative form $x_{k+1} = x_k + \alpha_k h_k$, where, if $\nabla f(x_k) \neq 0$, we choose the direction $h_k$ so that $\nabla f(x_k)^\top h_k < 0$ and the step size $\alpha_k$ so that $f(x_k + \alpha_k h_k) < f(x_k)$. At the current point $x$, consider the second-order Taylor expansion of $f$: $f(x + h) \approx q(h) = f(x) + h^\top \nabla f(x) + \tfrac{1}{2}\, h^\top \nabla^2 f(x)\, h$.
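The iteration above can be sketched in code. The following is a minimal illustration, not the specific method of any cited work: it uses the steepest-descent direction $h_k = -\nabla f(x_k)$ (which trivially satisfies $\nabla f(x_k)^\top h_k < 0$) and backtracking on $\alpha_k$ until $f(x_k + \alpha_k h_k) < f(x_k)$ with an Armijo-style sufficient-decrease margin. The function name `descent_minimize` and all parameter defaults are assumptions for the sketch.

```python
import numpy as np

def descent_minimize(f, grad, x0, alpha0=1.0, rho=0.5, c=1e-4,
                     tol=1e-8, max_iter=5000):
    """Iterate x_{k+1} = x_k + alpha_k * h_k with h_k = -grad f(x_k)
    and a backtracking line search on alpha_k (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # gradient ~ 0: stop at a stationary point
            break
        h = -g                        # descent direction: grad(x)^T h = -||g||^2 < 0
        alpha = alpha0
        # shrink alpha until f decreases sufficiently (Armijo condition)
        while f(x + alpha * h) > f(x) + c * alpha * (g @ h):
            alpha *= rho
        x = x + alpha * h
    return x

# Example: minimize f(x) = (x1 - 1)^2 + 10 (x2 + 2)^2, minimizer (1, -2)
f = lambda x: (x[0] - 1)**2 + 10 * (x[1] + 2)**2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
x_star = descent_minimize(f, grad, np.zeros(2))
```

Minimizing the quadratic model $q(h)$ exactly instead of taking $h = -\nabla f(x)$ gives the Newton direction $h = -\nabla^2 f(x)^{-1} \nabla f(x)$, which converges much faster near the minimizer when $\nabla^2 f(x)$ is positive definite.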