PCGAN-CHAR: Progressively Trained Classifier Generative Adversarial Networks for Classification of Noisy Handwritten Bangla Characters

Due to the sparsity of features, noise is a major obstacle to the classification of handwritten characters. To combat this, most techniques denoise the data before classification. In this paper, we consolidate this pipeline into a single all-in-one model that can classify even noisy characters. For classification, we progressively train a classifier generative adversarial network on the characters from low to high resolution. We show that by learning the features at each resolution independently, the trained model can accurately classify characters even in the presence of noise. We experimentally demonstrate the effectiveness of our approach by classifying noisy versions of the MNIST, handwritten Bangla Numeral, and Basic Character datasets.
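As a rough illustration of the idea (not the authors' implementation), the PyTorch sketch below trains an auxiliary-classifier discriminator stage by stage on progressively higher-resolution inputs. The resolutions, layer sizes, and training loop are illustrative assumptions, and only the auxiliary classification loss is shown; the adversarial loss, generator, and the fade-in schedule used in progressive GAN training are omitted.

    # Minimal sketch, assuming 28x28 single-channel character images and an
    # illustrative 7 -> 14 -> 28 resolution schedule (not taken from the paper).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ACDiscriminator(nn.Module):
        """Shared convolutional trunk with a real/fake head and a class head."""
        def __init__(self, n_classes=10):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 3, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1),              # resolution-agnostic pooling
            )
            self.adv_head = nn.Linear(64, 1)          # real vs. fake (GAN loss, unused here)
            self.cls_head = nn.Linear(64, n_classes)  # character label (auxiliary classifier)

        def forward(self, x):
            h = self.trunk(x).flatten(1)
            return self.adv_head(h), self.cls_head(h)

    def train_progressively(model, loader, resolutions=(7, 14, 28), epochs_per_stage=1):
        """Train the classification head stage by stage, feeding images
        downsampled to each resolution before moving to the next one."""
        opt = torch.optim.Adam(model.parameters(), lr=2e-4)
        for res in resolutions:
            for _ in range(epochs_per_stage):
                for imgs, labels in loader:
                    x = F.interpolate(imgs, size=(res, res), mode="bilinear",
                                      align_corners=False)
                    _, logits = model(x)
                    loss = F.cross_entropy(logits, labels)
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
        return model

The design point being illustrated is that a resolution-agnostic trunk lets the same classifier learn features at each resolution in turn, which is the progressive, low-to-high-resolution training described in the abstract.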
