Self-paced hybrid dilated convolutional neural networks

Convolutional neural networks (CNNs) can learn sample features in a supervised manner and have achieved outstanding results in many application fields. To improve the performance and generalization of CNNs, we propose a self-paced hybrid dilated convolutional neural network (SPHDCNN), which selects relatively reliable samples according to its current learning ability during training. To avoid the loss of useful feature-map information caused by pooling, we introduce hybrid dilated convolution. In the proposed SPHDCNN, a weight is assigned to each sample to reflect its easiness. SPHDCNN trains on easier samples first and then gradually adds more difficult samples as its learning ability grows, improving its performance through this learning mechanism. Experimental results show that SPHDCNN has strong generalization ability and achieves better performance than the baseline method.
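To make the two ingredients concrete, the following is a minimal PyTorch sketch of how hybrid dilated convolution and self-paced sample weighting typically fit together. It is not the authors' exact implementation: the dilation rates (1, 2, 5), the hard (binary) easiness weights, the age parameter lam and its growth rule, and the cross-entropy loss are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HDCBlock(nn.Module):
    """Hybrid dilated convolution block: stacked 3x3 convolutions with
    increasing, non-multiple dilation rates (assumed 1, 2, 5) enlarge the
    receptive field without pooling and avoid the gridding artifact of a
    single fixed rate."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                      kernel_size=3, padding=d, dilation=d)
            for i, d in enumerate((1, 2, 5))  # assumed dilation rates
        ])

    def forward(self, x):
        for conv in self.convs:
            x = F.relu(conv(x))
        return x

def self_paced_step(model, optimizer, images, labels, lam):
    """One training step with hard self-paced weights: samples whose current
    loss is below the age parameter lam get weight 1 (easy), the rest get
    weight 0 and are temporarily excluded from the objective."""
    logits = model(images)
    losses = F.cross_entropy(logits, labels, reduction="none")
    v = (losses.detach() < lam).float()          # per-sample easiness weights
    weighted_loss = (v * losses).sum() / v.sum().clamp(min=1.0)
    optimizer.zero_grad()
    weighted_loss.backward()
    optimizer.step()
    return weighted_loss.item()

As training proceeds, lam is increased (for example lam *= mu with an assumed growth factor mu > 1) so that progressively harder samples are admitted, which is the "easy samples first, harder samples later" schedule described in the abstract.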
