An Adaptive Algorithm Based On The Sigmoidal Function

In this work, we develop an adaptive algorithm, called the Sigmoidal Algorithm (SA), that uses ln(cosh ε) as the cost function applied to the error ε. This cost function generates an error surface that yields fast convergence together with low misadjustment. The SA is related to the family of algorithms proposed by Walach and Widrow [6]; the latter were shown to perform worse than the LMS algorithm when the noise is Gaussian [2]. We study the convergence behavior of the SA and derive expressions for its misadjustment and learning time. Results show that the SA outperformed the LMS even when the noise had a Gaussian distribution.
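The stochastic-gradient update implied by the ln(cosh ε) cost can be sketched as follows. Since d/dε ln(cosh ε) = tanh ε, the weight update is an LMS-like rule with a tanh error nonlinearity. This is a minimal illustration, not the authors' exact implementation: the step size, filter length, and the system-identification test signal below are assumptions made for the demo.

```python
import numpy as np

def sigmoidal_lms(x, d, n_taps=4, mu=0.05):
    """Adaptive FIR filter minimizing ln(cosh(e)) by stochastic gradient.

    Because d/de ln(cosh(e)) = tanh(e), the gradient-descent weight
    update is w <- w + mu * tanh(e[n]) * x_vec: an LMS-like rule whose
    error nonlinearity is roughly linear for small errors and
    saturates for large ones.
    """
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1 : n + 1][::-1]  # most recent sample first
        y = w @ x_vec                            # filter output
        e[n] = d[n] - y                          # instantaneous error
        w += mu * np.tanh(e[n]) * x_vec          # gradient of ln(cosh) is tanh
    return w, e

# System-identification demo: identify an unknown FIR channel
# (channel taps, noise level, and signal length are assumed here).
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = sigmoidal_lms(x, d, n_taps=4, mu=0.05)
```

After adaptation, `w` should approximate the channel `h`, and the error sequence `e` should settle near the additive-noise floor.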

[1] T. H. Y. Meng et al., "Stochastic gradient adaptation under general error criteria," IEEE Trans. Signal Process., 1994.

[2] A. K. Barros et al., "An algorithm based on the even moments of the error," 2003 IEEE XIII Workshop on Neural Networks for Signal Processing, 2003.

[3] A. Constantinides et al., "Least mean mixed-norm adaptive filtering," 1994.

[4] A. K. Barros et al., "Analysis of the Time Constant for the Sigmoidal Algorithm Applied to Biomedical Signals," IEEE International Workshop on Medical Measurement and Applications (MeMea), 2006.

[5] S. Haykin, Adaptive Filter Theory, 1986.

[6] B. Widrow et al., "The least mean fourth (LMF) adaptive algorithm and its family," IEEE Trans. Inf. Theory, 1984.