A gradient-based variable step-size LMS algorithm

An algorithm is presented that uses the power of the filtered gradient estimate to form the step-size parameter μ. Each weight coordinate has its own variable step-size parameter, so the weight vector is updated more accurately in the direction of the minimum of the performance surface. Performance results show that the gradient-based variable step-size algorithm converges faster, with less misadjustment, than the classic LMS (least-mean-square) algorithm. This improvement holds in general across various signal-to-noise ratios and eigenvalue spreads, and for both stationary and nonstationary signals. The computational complexity is only 2-3 times that of the LMS algorithm, making it an attractive choice for many adaptive filtering applications. A drawback of the algorithm is that its misadjustment is directly proportional to the minimum mean square error, making it dependent upon the mean value of the signals to be filtered.
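The abstract describes the idea only at a high level; the sketch below is one plausible reading, not the paper's exact update rule. It assumes the per-coordinate step size is formed by low-pass filtering the instantaneous gradient power e²·x² for each tap and scaling the result (the constants `beta`, `alpha`, `mu_min`, and `mu_max` are illustrative choices, not from the source), demonstrated on a simple FIR system-identification task:

```python
import numpy as np

rng = np.random.default_rng(0)

def vss_lms(x, d, n_taps=4, beta=0.97, alpha=1e-2, mu_min=1e-4, mu_max=0.05):
    """Variable step-size LMS sketch: each tap gets its own step size,
    derived from a low-pass filtered estimate of its gradient power."""
    w = np.zeros(n_taps)   # adaptive weight vector
    p = np.zeros(n_taps)   # filtered gradient-power estimate, one per tap
    errs = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # input vector, most recent first
        e = d[n] - w @ u                   # a priori output error
        g = e * u                          # instantaneous gradient estimate
        p = beta * p + (1 - beta) * g**2   # filter the gradient power
        mu = np.clip(alpha * p, mu_min, mu_max)  # per-coordinate step size
        w = w + mu * g                     # coordinate-wise weight update
        errs.append(e)
    return w, np.asarray(errs)

# Demo: identify an unknown 4-tap FIR filter observed in light noise.
h = np.array([0.6, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, errs = vss_lms(x, d)
```

Taps whose gradient components carry more power receive larger step sizes, which is one way each coordinate can adapt at its own rate toward the minimum of the performance surface.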