Backpropagation method modification using Taylor series to improve accuracy of offline neural network training
[1] Yoram Singer, et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, 2011, J. Mach. Learn. Res.
[2] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[3] Sebastian Ruder, et al. An overview of gradient descent optimization algorithms, 2016, Vestnik komp'iuternykh i informatsionnykh tekhnologii.
[4] Quoc V. Le, et al. On optimization methods for deep learning, 2011, ICML.
[5] Jorge Nocedal, et al. On the limited memory BFGS method for large scale optimization, 1989, Math. Program.
[6] Geoffrey E. Hinton, et al. Learning internal representations by error propagation, 1986.
[7] Ken-ichi Funahashi, et al. On the approximate realization of continuous mappings by neural networks, 1989, Neural Networks.
[8] Hao Yu, et al. Levenberg–Marquardt Training, 2011.