Richard G. Baraniuk | Anima Anandkumar | Animesh Garg | Tan M. Nguyen
[1] Stanley Osher, et al. ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies, 2018, NeurIPS.
[2] Shakir Mohamed, et al. Variational Inference with Normalizing Flows, 2015, ICML.
[3] Frederick Tung, et al. Multi-level Residual Networks from Dynamical Systems View, 2017, ICLR.
[4] Hui Liu, et al. On-Demand Deep Model Compression for Mobile Devices: A Usage-Driven Model Selection Framework, 2018, MobiSys.
[5] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[6] Yingyan Lin, et al. EnergyNet: Energy-Efficient Dynamic Inference, 2018.
[7] M. Hutchinson. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines, 1989.
[8] Misha Denil, et al. Noisy Activation Functions, 2016, ICML.
[9] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, IEEE Trans. Neural Networks.
[10] Jorge Nocedal, et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2016, ICLR.
[11] David Duvenaud, et al. FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models, 2018, ICLR.
[12] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, arXiv.
[13] David Duvenaud, et al. Neural Ordinary Differential Equations, 2018, NeurIPS.
[14] Xin Wang, et al. SkipNet: Learning Dynamic Routing in Convolutional Networks, 2017, ECCV.
[15] Yang You, et al. Scaling SGD Batch Size to 32K for ImageNet Training, 2017, arXiv.
[16] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[17] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[18] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[19] Varun Chandola, et al. Anomaly detection: A survey, 2009, CSUR.
[20] Stanley Osher, et al. EnResNet: ResNet Ensemble via the Feynman-Kac Formalism, 2018, arXiv.
[21] Ullrich Köthe, et al. Analyzing Inverse Problems with Invertible Neural Networks, 2018, ICLR.
[22] Li Zhang, et al. Spatially Adaptive Computation Time for Residual Networks, 2017, CVPR.
[23] Rui Liu, et al. Conditional Adversarial Generative Flow for Controllable Image Synthesis, 2019, CVPR.
[24] Yoshua Bengio, et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, 2013, arXiv.
[25] Yee Whye Teh, et al. Hybrid Models with Deep and Invertible Features, 2019, ICML.
[26] David Duvenaud, et al. Latent ODEs for Irregularly-Sampled Time Series, 2019, arXiv.
[27] R. J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.
[28] Prafulla Dhariwal, et al. Glow: Generative Flow with Invertible 1x1 Convolutions, 2018, NeurIPS.
[29] Samy Bengio, et al. Density estimation using Real NVP, 2016, ICLR.
[30] Pieter Abbeel, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016, NIPS.
[31] Quoc V. Le, et al. Don't Decay the Learning Rate, Increase the Batch Size, 2017, ICLR.
[32] Christopher M. Bishop, et al. Novelty detection and neural network validation, 1994.
[33] Athanasios S. Polydoros, et al. Survey of Model-Based Reinforcement Learning: Applications on Robotics, 2017, J. Intell. Robotic Syst.
[34] Alex Graves, et al. Adaptive Computation Time for Recurrent Neural Networks, 2016, arXiv.
[35] Tomas Mikolov, et al. Variable Computation in Recurrent Neural Networks, 2016, ICLR.