Algorithms for Solving Equations

The main goal of this work is an attempt to understand the efficiency of algorithms for solving systems of equations. For example, consider a map f from complex Cartesian space C^n to C^n given by polynomials. How fast can one find good approximations to one or all of the zeros of f? However, for clarity of exposition and development we will usually swing from one extreme to the other about this example, proving theorems on the one hand for a single complex polynomial and on the other for an analytic map f : E -> F from one Banach space to another (both real or both complex).

This subject should undoubtedly be considered as numerical analysis. However, my point of view has been especially influenced by the complexity theory of computer science. Thus the emphasis is on the algorithms themselves and a global study of their speed; less emphasis is given to the results obtained by algorithms and to the asymptotic criteria of efficiency often found in the numerical analysis literature. This global study, or complexity theory, gives a more systematic, more abstract, more theoretical flavor to our approach, and thus we are less concerned with producing immediately faster methods for problem solving. A basic understanding of the tried and effective is sought. The global study of the algorithm forces the introduction of topology and geometry into the subject.

If any algorithm has proved itself for the problem of nonlinear systems, it is Newton's method and its many modifications. The Greeks essentially used it for finding square roots, and for that purpose it is still a top method today. On the other hand, for nonlinear functional equations framed in a Banach space setting, Newton's method finds a central place. In the account here, the one theme is Newton's method. We will prove theorems about it for one variable (dealing with the fundamental theorem of algebra) and for maps of Banach spaces; we will approximate Dantzig's [8] simplex method for the linear programming problem by Newton's method.
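
To make the iteration concrete, the following is a minimal sketch, not drawn from the paper itself, of Newton's method for one complex polynomial and for a small system f : C^n -> C^n; the example polynomials, starting points, and tolerances are arbitrary choices for illustration.

# Illustrative sketch (not from the paper): Newton's method for a single
# complex polynomial and for a small system F: C^n -> C^n.  The example
# polynomials, starting points, and tolerances are arbitrary choices.
import numpy as np


def newton_scalar(f, df, z0, tol=1e-12, max_iter=100):
    """Iterate z <- z - f(z)/f'(z) until |f(z)| <= tol or max_iter is reached."""
    z = z0
    for _ in range(max_iter):
        fz = f(z)
        if abs(fz) <= tol:
            break
        z = z - fz / df(z)
    return z


def newton_system(F, J, x0, tol=1e-12, max_iter=100):
    """Newton's method for F: C^n -> C^n; each step solves J(x) dx = -F(x)."""
    x = np.asarray(x0, dtype=complex)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        x = x - np.linalg.solve(J(x), Fx)
    return x


if __name__ == "__main__":
    # One polynomial: a cube root of unity from a complex starting point.
    print(newton_scalar(lambda z: z**3 - 1, lambda z: 3 * z**2, 0.5 + 0.5j))

    # The square-root recipe mentioned above is the special case
    # f(z) = z^2 - a, i.e. the iteration z <- (z + a/z) / 2.
    print(newton_scalar(lambda z: z * z - 2, lambda z: 2 * z, 1.0))

    # A 2x2 polynomial system in C^2.
    F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1, x[0] - x[1] ** 2])
    J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1, -2 * x[1]]])
    print(newton_system(F, J, np.array([0.8, 0.6])))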

[1] F. J. Gould, et al., Relations Between Several Path Following Algorithms and Local and Global Newton Methods, 1980.

[2] J. Dieudonné, Foundations of Modern Analysis, 1969.

[3] A. Ostrowski, Solution of equations in Euclidean and Banach spaces, 1973.

[4] C. McMullen, Families of Rational Maps and Iterative Root-Finding Algorithms, 1987.

[5] Michael Shub, et al., The Newtonian graph of a complex polynomial, 1988.

[6] Steve Smale, et al., The Problem of the Average Speed of the Simplex Method, 1982, ISMP.

[7] S. Smale, et al., Computational complexity: on the geometry of polynomials and a theory of cost. I, 1985.

[8] George B. Dantzig, et al., Linear programming and extensions, 1965.

[9] H. T. Kung, The complexity of obtaining starting points for solving operator equations by Newton's method, 1976.

[10] Narendra Karmarkar, et al., A new polynomial-time algorithm for linear programming, 1984, STOC '84.

[11] Herbert E. Scarf, et al., The Solution of Systems of Piecewise Linear Equations, 1976, Math. Oper. Res.

[12] R. Cottle, Note on a Fundamental Theorem in Quadratic Programming, 1964.

[13] V. Pan, Algebraic complexity of computing polynomial zeros, 1987.

[14] Michael A. Saunders, et al., On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's projective method, 1986, Math. Program.

[15] Stephen Smale, et al., On the existence of generally convergent algorithms, 1986, J. Complex.

[16] Jean-Philippe Vial, et al., A polynomial Newton method for linear programming, 2005, Algorithmica.

[17] Sherman Wong, Newton's method and symbolic dynamics, 1984.

[18] James Renegar, et al., On the Efficiency of Newton's Method in Approximating All Zeros of a System of Complex Polynomials, 1987, Math. Oper. Res.

[19] S. Smale, Newton's Method Estimates from Data at One Point, 1986.

[20] James Renegar, et al., A polynomial-time algorithm, based on Newton's method, for linear programming, 1988, Math. Program.

[21] Paul E. Wright, et al., Statistical complexity of the power method for Markov chains, 1989, J. Complex.

[22] G. Dantzig, et al., Complementary pivot theory of mathematical programming, 1968.

[23] P. Jonker, et al., The continuous, desingularized Newton method for meromorphic functions, 1988.

[24] J. Yorke, et al., Finding zeroes of maps: homotopy methods that are constructive with probability one, 1978.

[25] Jean Abadie, et al., Méthode du GRG, méthode de Newton globale et application à la programmation mathématique, 1984.

[26] Andrzej P. Wierzbicki, Note on the equivalence of Kuhn-Tucker complementarity conditions to an equation, 1982.

[27] S. Smale, On the efficiency of algorithms of analysis, 1985.

[28] S. Smale, The fundamental theorem of algebra and complexity theory, 1981.

[29] Henry C. Thacher, et al., Applied and Computational Complex Analysis, 1988.

[30] M. Marden, Geometry of Polynomials, 1970.

[31] S. Smale, Convergent process of price adjustment and global Newton methods, 1976.

[32] James H. Curry, et al., On zero finding methods of higher order from data at one point, 1989, J. Complex.

[33] John L. Nazareth, et al., Homotopy techniques in linear programming, 1986, Algorithmica.

[34] Nimrod Megiddo, et al., Boundary Behavior of Interior Point Algorithms in Linear Programming, 1989, Math. Oper. Res.

[35] M. Hirsch, et al., On Algorithms for Solving f(x)=0, 1979.

[36] Lenore Blum, et al., Towards an Asymptotic Analysis of Karmarkar's Algorithm, 1986, Inf. Process. Lett.

[37] Henryk Wozniakowski, A survey of information complexity, 1985.

[38] L. Kantorovich, et al., Functional analysis in normed spaces, 1952.

[39] O. Mangasarian, Equivalence of the Complementarity Problem to a System of Nonlinear Equations, 1976.