Direct Error-Driven Learning for Deep Neural Networks with Applications to Big Data

Abstract In this paper, the generalization error of classification under traditional learning regimes is shown to increase in the presence of big-data challenges such as noise and heterogeneity. To reduce this error while also mitigating vanishing gradients, a deep neural network (NN)-based framework with a direct error-driven learning scheme is proposed. To reduce the impact of heterogeneity, an overall cost comprising the learning error and an approximate generalization error is defined, and two NNs are utilized to estimate these two cost components, respectively. To mitigate vanishing gradients, a direct error-driven learning regime is proposed in which the error is used directly to update each layer rather than being propagated backward through the network. The proposed approach is demonstrated to improve accuracy by 7% over traditional learning regimes, mitigate the vanishing-gradient problem, and improve generalization by 6%.
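The direct error-driven update can be illustrated with a short sketch. The Python code below is a minimal illustration in the spirit of direct feedback alignment (Nøkland, 2016), which the scheme described above resembles: each hidden layer is updated from the output error projected through a fixed random matrix, so no chain of transposed weights (the usual source of vanishing gradients) is required. The layer sizes, activation function, learning rate, and feedback matrices `B` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Minimal sketch of a direct error-driven update (assumed here to follow
# the direct-feedback-alignment pattern; sizes and activations are
# illustrative, not the paper's exact architecture).

rng = np.random.default_rng(0)
sizes = [784, 256, 128, 10]  # input, two hidden layers, output

# Forward weights W are trained; feedback matrices B are fixed and random.
W = [rng.standard_normal((m, n)) * 0.05 for n, m in zip(sizes[:-1], sizes[1:])]
B = [rng.standard_normal((m, sizes[-1])) * 0.05 for m in sizes[1:-1]]

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2

def train_step(x, y, lr=0.01):
    # Forward pass, caching pre-activations for the local derivative terms.
    a, pre, acts = x, [], [x]
    for l, Wl in enumerate(W):
        z = Wl @ a
        pre.append(z)
        a = z if l == len(W) - 1 else np.tanh(z)  # linear output layer
        acts.append(a)
    e = a - y  # output error for a squared-error loss

    # Direct error-driven updates: each hidden layer receives the OUTPUT
    # error through its own fixed projection B[l], instead of the error
    # backpropagated through the chain of transposed weights. Bypassing
    # that chain is what sidesteps vanishing gradients.
    for l in range(len(W)):
        delta = e if l == len(W) - 1 else (B[l] @ e) * tanh_grad(pre[l])
        W[l] -= lr * np.outer(delta, acts[l])
    return 0.5 * float(e @ e)
```

Because the feedback matrices are fixed and random, the error signal delivered to each layer does not shrink with depth the way a backpropagated gradient can, which is the mechanism by which this family of methods avoids vanishing gradients; the 7% accuracy and 6% generalization improvements stated above are reported relative to traditional learning regimes.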
