Bidyut Baran Chaudhuri | Shiv Ram Dubey | S. H. Shabbeer Basha | Satish Kumar Singh
[1] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[2] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Jun Yu, et al. Coupled Deep Autoencoder for Single Image Super-Resolution, 2017, IEEE Transactions on Cybernetics.
[4] Léon Bottou, et al. Large-Scale Machine Learning with Stochastic Gradient Descent, 2010, COMPSTAT.
[5] Zhuowen Tu, et al. Aggregated Residual Transformations for Deep Neural Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Haibin Ling, et al. Siamese Cascaded Region Proposal Networks for Real-Time Visual Tracking, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Roland Vollgraf, et al. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[8] James Demmel, et al. Large Batch Optimization for Deep Learning: Training BERT in 76 Minutes, 2019, ICLR.
[9] Timothy Dozat, et al. Incorporating Nesterov Momentum into Adam, 2016.
[10] Ming Yang, et al. DeepFace: Closing the Gap to Human-Level Performance in Face Verification, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition.
[11] Qionghai Dai, et al. A PID Controller Approach for Stochastic Optimization of Deep Networks, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[12] Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method, 2012, ArXiv.
[13] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[14] Yoram Singer, et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, 2011, J. Mach. Learn. Res.
[15] Bidyut Baran Chaudhuri, et al. diffGrad: An Optimization Method for Convolutional Neural Networks, 2019, IEEE Transactions on Neural Networks and Learning Systems.
[16] Takumi Sugiyama, et al. A study report on "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks", 2017.
[17] Ali Farhadi, et al. You Only Look Once: Unified, Real-Time Object Detection, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[19] Xuancheng Ren, et al. An Adaptive and Momental Bound Method for Stochastic Learning, 2019, ArXiv.
[20] Jianbing Shen, et al. Triplet Loss in Siamese Network for Object Tracking, 2018, ECCV.
[21] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[22] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[23] James Philbin, et al. FaceNet: A Unified Embedding for Face Recognition and Clustering, 2015, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[24] Kurt Keutzer, et al. ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning, 2020, AAAI.
[25] Mercedes Eugenia Paoletti, et al. AngularGrad: A New Optimization Technique for Angular Convergence of Convolutional Neural Networks, 2021, ArXiv.
[26] Kaiming He, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[27] Ross B. Girshick, et al. Fast R-CNN, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[28] Kancharagunta Kishan Babu, et al. PCSGAN: Perceptual Cyclic-Synthesized Generative Adversarial Networks for Thermal and NIR to Visible Image Transformation, 2020, Neurocomputing.
[29] Siddhartha Chaudhuri, et al. BAE-NET: Branched Autoencoder for Shape Co-Segmentation, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[30] Geoffrey E. Hinton, et al. On the Importance of Initialization and Momentum in Deep Learning, 2013, ICML.
[31] J. Duncan, et al. AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients, 2020, NeurIPS.
[32] Ya Le, et al. Tiny ImageNet Visual Recognition Challenge, 2015.
[33] Enhua Wu, et al. Squeeze-and-Excitation Networks, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[34] Xu Sun, et al. Adaptive Gradient Methods with Dynamic Bound of Learning Rate, 2019, ICLR.
[35] Liyuan Liu, et al. On the Variance of the Adaptive Learning Rate and Beyond, 2019, ICLR.
[36] Richard S. Zemel, et al. Aggregated Momentum: Stability Through Passive Damping, 2018, ICLR.
[37] Geoffrey E. Hinton, et al. ImageNet Classification with Deep Convolutional Neural Networks, 2012, Commun. ACM.
[38] Boris Polyak. Some Methods of Speeding Up the Convergence of Iteration Methods, 1964.
[39] Seong Joon Oh, et al. Slowing Down the Weight Norm Increase in Momentum-based Optimizers, 2020, ArXiv.
[40] Bin Dong, et al. Nostalgic Adam: Weighing More of the Past Gradients When Designing the Adaptive Learning Rate, 2019, IJCAI.
[41] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[42] Lei Zhang, et al. Gradient Centralization: A New Optimization Technique for Deep Neural Networks, 2020, ECCV.
[43] Sanjiv Kumar, et al. Adaptive Methods for Nonconvex Optimization, 2018, NeurIPS.