The higher-order power method revisited: convergence proofs and effective initialization

We revisit the higher-order power method of De Lathauwer et al. (1995) for rank-one tensor approximation, and its relation to contrast maximization as used in blind deconvolution. We establish a simple convergence proof for the general nonsymmetric tensor case. We also show that a symmetric version of the algorithm, which offers an order-of-magnitude reduction in computational complexity but was discarded by De Lathauwer et al. as unpredictable, is likewise provably convergent. Finally, we develop a new initialization scheme which, unlike the TSVD-based initialization, leads to quantifiable proximity to the globally optimal solution.
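For concreteness, the basic iteration of the higher-order power method for rank-one approximation of a third-order tensor can be sketched as follows. This is a minimal illustration of the standard alternating update, not the exact algorithm or initialization analyzed in the paper; the function name, random initialization, and fixed iteration count are our own choices.

```python
import numpy as np

def hopm(T, iters=50, seed=0):
    """Rank-one approximation of a third-order tensor T by the
    higher-order power method: alternately update unit vectors
    u, v, w to increase |T x1 u x2 v x3 w| (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Random unit-norm starting vectors (the paper studies better
    # initializations; random start is used here only for brevity).
    u = rng.standard_normal(T.shape[0]); u /= np.linalg.norm(u)
    v = rng.standard_normal(T.shape[1]); v /= np.linalg.norm(v)
    w = rng.standard_normal(T.shape[2]); w /= np.linalg.norm(w)
    for _ in range(iters):
        # Contract T against the other two modes, then renormalize.
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    # Scalar coefficient of the rank-one approximant sigma * (u o v o w).
    sigma = np.einsum('ijk,i,j,k->', T, u, v, w)
    return sigma, u, v, w
```

On an exactly rank-one tensor, the iteration recovers the factors (up to sign) in a few sweeps; in general it converges to a local maximizer of the multilinear functional, which is the behavior the paper's convergence proofs address.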