The Conditioning of Linearizations of Matrix Polynomials

The standard way of solving the polynomial eigenvalue problem of degree $m$ in $n\times n$ matrices is to "linearize" to a pencil in $mn\times mn$ matrices and solve the generalized eigenvalue problem. For a given polynomial $P$, infinitely many linearizations exist, and they can have widely varying eigenvalue condition numbers. We investigate the conditioning of linearizations from a vector space $\mathbb{DL}(P)$ of pencils recently identified and studied by Mackey, Mackey, Mehl, and Mehrmann. We look for the best conditioned linearization and compare its conditioning with that of the original polynomial. Two particular pencils are shown always to be almost optimal over linearizations in $\mathbb{DL}(P)$ for eigenvalues of modulus greater than or less than 1, respectively, provided that the problem is not too badly scaled and that the pencils are linearizations. Moreover, under this scaling assumption, these pencils are shown to be about as well conditioned as the original polynomial. For quadratic eigenvalue problems that are not too heavily damped, a simple scaling is shown to convert the problem to one that is well scaled. We also analyze the eigenvalue conditioning of the widely used first and second companion linearizations. The conditioning of the first companion linearization relative to that of $P$ is shown to depend on the coefficient matrix norms, the eigenvalue, and the left eigenvectors of the linearization and of $P$. The companion form is found to be potentially much more ill conditioned than $P$, but if the 2-norms of the coefficient matrices are all approximately 1 then the companion form and $P$ are guaranteed to have similar condition numbers. Analogous results hold for the second companion form. Our results are phrased in terms of both the standard relative condition number and the condition number of Dedieu and Tisseur [Linear Algebra Appl., 358 (2003), pp. 71-94] for the problem in homogeneous form, this latter condition number having the advantage of applying to zero and infinite eigenvalues.
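As a concrete illustration of the linearization process the abstract describes, the following sketch builds a first companion linearization of a quadratic eigenvalue problem $Q(\lambda)x = (\lambda^2 M + \lambda C + K)x = 0$ and solves the resulting $2n\times 2n$ generalized eigenvalue problem. The $2\times 2$ coefficient matrices are illustrative assumptions chosen so the eigenvalues are known, not data from the paper.

```python
import numpy as np
from scipy.linalg import eig

# Quadratic eigenvalue problem Q(lam) x = (lam^2 M + lam C + K) x = 0.
# Illustrative data (an assumption for this sketch): the eigenvalues
# satisfy lam^2 = 1 and lam^2 = 4, i.e. lam = +-1, +-2.
M = np.eye(2)
C = np.zeros((2, 2))
K = -np.diag([1.0, 4.0])

n = M.shape[0]
Z = np.zeros((n, n))
I = np.eye(n)

# First companion linearization L(lam) = lam*X + Y with
#   X = [[M, 0], [0, I]],   Y = [[C, K], [-I, 0]],
# so that L(lam) [lam*x; x] = 0 whenever Q(lam) x = 0.
X = np.block([[M, Z], [Z, I]])
Y = np.block([[C, K], [-I, Z]])

# det(lam*X + Y) = 0  <=>  -Y v = lam X v: a generalized eigenproblem.
lam = eig(-Y, X, right=False)
print(np.sort_complex(lam))  # the four eigenvalues +-1, +-2 of Q
```

Note that this pencil is well scaled only when the norms of $M$, $C$, and $K$ are balanced; for badly scaled coefficients, the abstract's conditioning results indicate that the companion form can be much more ill conditioned than $Q$ itself.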

[1] D. Day, et al., Quadratic eigenvalue problems, 2007.

[2] K. E. Chu, et al., Derivatives of Eigenvalues and Eigenvectors of Matrix Functions, SIAM J. Matrix Anal. Appl., 1993.

[3] Peter Lancaster, et al., The Theory of Matrices, 1969.

[4] P. Lancaster, et al., Factorization of selfadjoint matrix polynomials with constant signature, 1982.

[5] N. Higham, et al., Detecting a definite Hermitian pair and a hyperbolic or elliptic quadratic eigenvalue problem, and associated nearness problems, 2002.

[6] Rembert Reemtsen, et al., Numerical Methods for Semi-Infinite Programming: A Survey, 1998.

[7] Françoise Tisseur, Backward Error and Condition of Polynomial Eigenvalue Problems, 1999.

[8] F. R. Gantmakher, The Theory of Matrices, 1984.

[9] J. H. Wilkinson, The Algebraic Eigenvalue Problem, 1966.

[10] Jean-Pierre Dedieu, et al., Condition operators, condition numbers, and condition number theorem for the generalized eigenvalue problem, 1997.

[11] Paul Van Dooren, et al., Normwise Scaling of Second Order Polynomial Matrices, SIAM J. Matrix Anal. Appl., 2004.

[12] Karl Meerbergen, et al., The Quadratic Eigenvalue Problem, SIAM Rev., 2001.

[13] Volker Mehrmann, et al., Vector Spaces of Linearizations for Matrix Polynomials, SIAM J. Matrix Anal. Appl., 2006.

[14] Françoise Tisseur, et al., Perturbation theory for homogeneous polynomial eigenvalue problems, 2003.

[15] Nicholas J. Higham, et al., Symmetric Linearizations for Matrix Polynomials, SIAM J. Matrix Anal. Appl., 2006.

[16] J. Eisenfeld, Quadratic eigenvalue problems, 1968.

[17] G. Stewart, et al., Matrix Perturbation Theory, 1990.

[18] Tetsuji Itoh, Damped vibration mode superposition method for dynamic response analysis, 1973.