Approximating nonlinear transformations of probability distributions for nonlinear independent component analysis

The nonlinear independent component analysis method introduced by Lappalainen and Honkela in 2000 uses a truncated Taylor series representation to approximate the nonlinear transformation from sources to observations. The approach uses information only at a single point, the input mean, and can produce poor results if the input variance is large. This feature has recently been identified as the cause of instability of the algorithm with large source dimensionality. In this paper, an improved approximation is presented. The derivatives used in the Taylor scheme are replaced with slopes evaluated by global Gauss-Hermite quadrature. The resulting approximation is more accurate under high input variance, and the new learning algorithm is more stable with high source dimensionality.
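To illustrate the contrast between the two kinds of approximation, the following sketch propagates a scalar Gaussian variable through a nonlinearity and estimates the output mean and variance first with a local Taylor expansion around the input mean and then with Gauss-Hermite quadrature. This is not the paper's implementation; the choice of tanh as the nonlinearity, the input moments, and the number of quadrature points are illustrative assumptions.

```python
import numpy as np

def taylor_moments(f, df, d2f, mu, var):
    """Mean and variance of f(x), x ~ N(mu, var), from a local Taylor
    expansion around the input mean (second order for the mean,
    first order for the variance)."""
    mean = f(mu) + 0.5 * d2f(mu) * var
    variance = df(mu) ** 2 * var
    return mean, variance

def gauss_hermite_moments(f, mu, var, n_points=20):
    """Mean and variance of f(x), x ~ N(mu, var), by Gauss-Hermite
    quadrature. The nodes/weights target the weight exp(-t^2), so the
    integration points are x = mu + sqrt(2 var) * t."""
    t, w = np.polynomial.hermite.hermgauss(n_points)
    x = mu + np.sqrt(2.0 * var) * t
    fx = f(x)
    mean = np.sum(w * fx) / np.sqrt(np.pi)
    second_moment = np.sum(w * fx ** 2) / np.sqrt(np.pi)
    return mean, second_moment - mean ** 2

# Illustrative nonlinearity and its derivatives (assumed, not from the paper).
f = np.tanh
df = lambda x: 1.0 - np.tanh(x) ** 2
d2f = lambda x: -2.0 * np.tanh(x) * (1.0 - np.tanh(x) ** 2)

# Large input variance: the single-point Taylor estimate degrades badly,
# while the quadrature-based estimate stays close to a Monte Carlo reference.
mu, var = 0.5, 4.0
print("Taylor:       ", taylor_moments(f, df, d2f, mu, var))
print("Gauss-Hermite:", gauss_hermite_moments(f, mu, var))

x = np.random.default_rng(0).normal(mu, np.sqrt(var), 1_000_000)
print("Monte Carlo:  ", (np.tanh(x).mean(), np.tanh(x).var()))
```

With these settings the Taylor estimate even predicts an output variance larger than one, which is impossible for a tanh output, while the quadrature uses information from the whole effective input range and remains consistent with the sampled reference.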