For given data $(t_i, y_i)$, $i = 1, \ldots, m$, we consider the least squares fit of nonlinear models of the form $F(\mathbf{a}, \boldsymbol{\alpha}; t) = \sum_{j=1}^{n} g_j(\mathbf{a})\, \varphi_j(\boldsymbol{\alpha}; t)$, $\mathbf{a} \in \mathbb{R}^s$, $\boldsymbol{\alpha} \in \mathbb{R}^k$. For this purpose we study the minimization of the nonlinear functional $r(\mathbf{a}, \boldsymbol{\alpha}) = \sum_{i=1}^{m} \left(y_i - F(\mathbf{a}, \boldsymbol{\alpha}; t_i)\right)^2$. It is shown that by defining the matrix $\{\Phi(\boldsymbol{\alpha})\}_{i,j} = \varphi_j(\boldsymbol{\alpha}; t_i)$ and the modified functional $r_2(\boldsymbol{\alpha}) = \|\mathbf{y} - \Phi(\boldsymbol{\alpha})\, \Phi^{+}(\boldsymbol{\alpha})\, \mathbf{y}\|_2^2$, it is possible to optimize first with respect to the parameters $\boldsymbol{\alpha}$ and then to obtain, a posteriori, the optimal parameters $\hat{\mathbf{a}}$. The matrix $\Phi^{+}(\boldsymbol{\alpha})$ is the Moore-Penrose generalized inverse of $\Phi(\boldsymbol{\alpha})$, and we develop formulas for its Fréchet derivative under the hypothesis that $\Phi(\boldsymbol{\alpha})$ is of constant (though not necessarily full) rank. From these formulas we readily obtain the derivatives of the orthogonal projectors associated with $\Phi(\boldsymbol{\alpha})$, and also that of the functional $r_2(\boldsymbol{\alpha})$. Detailed algorithms are presented which make extensive use of well-known reliable linear least squares techniques, and numerical results and comparisons are given. These results are generalizations of those of H. D. Scolnik [1971].
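As a concrete illustration of the variable projection idea summarized above, the following minimal sketch assumes the common special case $g_j(\mathbf{a}) = a_j$ (so the model is linear in the coefficients) and a hypothetical basis of two decaying exponentials. The names `phi_matrix` and `varpro_residual` and the synthetic data are assumptions for illustration only; the reduced residual is evaluated through a dense linear least squares solve rather than by forming $\Phi^{+}(\boldsymbol{\alpha})$ explicitly, and a generic solver with finite-difference derivatives stands in for the paper's analytic Fréchet-derivative formulas and Householder-based algorithms.

```python
import numpy as np
from scipy.optimize import least_squares

def phi_matrix(alpha, t):
    # Hypothetical basis phi_j(alpha; t) = exp(-alpha_j * t), giving the m x k matrix Phi(alpha).
    return np.exp(-np.outer(t, alpha))

def varpro_residual(alpha, t, y):
    # Residual vector of the reduced functional r2(alpha) = ||y - Phi(alpha) Phi^+(alpha) y||_2^2,
    # evaluated via a linear least squares solve instead of forming Phi^+ explicitly.
    Phi = phi_matrix(alpha, t)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # a = Phi^+(alpha) y
    return y - Phi @ a

# Synthetic data, for illustration only.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 50)
true_alpha, true_a = np.array([0.5, 2.0]), np.array([1.0, 3.0])
y = phi_matrix(true_alpha, t) @ true_a + 0.01 * rng.standard_normal(t.size)

# Optimize first with respect to the nonlinear parameters alpha
# (generic solver with finite-difference derivatives, not the paper's algorithm) ...
sol = least_squares(varpro_residual, x0=np.array([1.0, 1.0]), args=(t, y))
alpha_hat = sol.x

# ... then recover the linear coefficients a posteriori from a linear least squares fit.
a_hat, *_ = np.linalg.lstsq(phi_matrix(alpha_hat, t), y, rcond=None)
print("alpha:", alpha_hat, "a:", a_hat)
```

Solving the linear subproblem at each trial $\boldsymbol{\alpha}$, rather than carrying the linear coefficients as optimization variables, is what reduces the problem to the smaller functional $r_2(\boldsymbol{\alpha})$ described in the abstract.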