Harnessing Chaotic Activation Functions in Training Neural Networks

We propose 'Harnessed Chaotic Activation Functions' (HCAF) for computing the final activation of a neural network, an approach that is biologically plausible given the chaotic dynamics observed in real neurons. Multilayer feed-forward neural networks are typically trained with supervised algorithms that are only loosely connected with biological learning, and bio-inspired system development has recently become a challenging topic in intelligent system design. We investigate whether HCAF can make learning faster. The validity of the proposed method is examined through simulations on five challenging real-world benchmark classification problems: the 2-bit, Diabetes, Wine, Glass, and Soybean problems. The algorithm is shown to outperform backpropagation (BP) with other activation functions used independently, such as the sigmoid (SIG), arctangent (ATAN), logarithmic (LOG), and robust chaos in neural networks (RCNN) functions, as well as used jointly, as in the fusion of activation functions (FAF).
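The abstract does not give the exact HCAF formula, but the general idea of a chaos-based activation can be illustrated with a minimal sketch: a standard sigmoid whose output is mixed with an iterate of the logistic map, a classic chaotic map for parameter values near 4. The function name `hcaf`, the logistic-map choice, and the mixing weight `alpha` are all illustrative assumptions, not the paper's actual construction.

```python
import math

def logistic_map(x, r=3.9, n_iter=1):
    # Iterate the logistic map x <- r * x * (1 - x), which behaves
    # chaotically on (0, 1) for r close to 4.
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
    return x

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def hcaf(z, alpha=0.1):
    # Hypothetical chaotic activation (NOT the paper's definition):
    # blend the sigmoid output with a logistic-map iterate of itself.
    # alpha is an illustrative mixing weight; both terms lie in (0, 1),
    # so the result stays in (0, 1).
    s = sigmoid(z)
    return (1.0 - alpha) * s + alpha * logistic_map(s)
```

With `alpha = 0`, this reduces to the plain sigmoid; increasing `alpha` injects a bounded chaotic perturbation that depends on the neuron's own output, which is the flavor of mechanism the chaos-injection literature cited below explores.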

[1] Kazuyuki Murase, et al., "Faster Training Using Fusion of Activation Functions for Feed Forward Neural Networks," Int. J. Neural Syst., 2009.

[2] M. K. Ali, et al., "Robust chaos in neural networks," 2000.

[3] G. Cheng, et al., "On the efficiency of chaos optimization algorithms for global optimization," 2007.

[4] Laxmidhar Behera, et al., "On Adaptive Learning Rate That Guarantees Convergence in Feedforward Networks," IEEE Transactions on Neural Networks, 2006.

[5] Jia-hai Zhang, et al., "Activation Function of Wavelet Chaotic Neural Networks," 2010 International Conference on Computing, Control and Industrial Engineering, 2010.

[6] Chi-Chung Cheung, et al., "Magnified gradient function with deterministic weight modification in adaptive learning," IEEE Transactions on Neural Networks, 2004.

[7] Kazuyuki Murase, et al., "Injecting Chaos in Feedforward Neural Networks," Neural Processing Letters, 2011.

[8] Kazuyuki Aihara, et al., "Deterministic prediction and chaos in squid axon response," 1992.

[9] Masahiro Nakagawa, et al., "Training Multilayer Neural Network by Global Chaos Optimization Algorithms," 2007 International Joint Conference on Neural Networks, 2007.

[10] Kazuyuki Aihara, et al., "Global searching ability of chaotic neural networks," 1999.

[11] Kunikazu Kobayashi, et al., "Shapes of nonmonotonic activation functions in a chaotic neural network associative memory model and its evaluation," 2008.

[12] Hojjat Adeli, et al., "Object-oriented backpropagation and its application to structural design," Neurocomputing, 1994.