Fundamentals and Learning of Artificial Neural Networks