Generation of an Effective Training Feature Vector using VQ for Classification of Image Database

In supervised classification of an image database, feature vectors of images with known class labels are used for training. Feature vectors are extracted so that they represent the maximum information in the minimum number of elements. Classification accuracy depends strongly on both the content and the number of training feature vectors. Increasing the number of training images improves classification performance, but it also demands more storage space and computation time. The main aim of this research is to reduce the number of training feature vectors in an effective way, so as to reduce the required memory space and computation time while also increasing accuracy. This paper proposes three major steps for automatic classification of an image database. The first step is the generation of an image's feature vector using a column transform, the row mean vector, and a fusion method. Vector Quantization (with codebook sizes 4, 8, and 16) is then applied to reduce the number of training feature vectors per class and to generate an effective, compact representation of them. Finally, the nearest-neighbor algorithm is used as the classifier. Experiments are conducted on an augmented Wang database. Results for various transforms, different similarity measures, varying feature vector sizes, three codebook sizes, and different numbers of training images are analyzed and compared. They show that the proposed method increases accuracy in most cases.

General Terms: Image Classification, Vector Quantization, Algorithms, Image Database.
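The three steps above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes NumPy, uses a simple row-mean feature (optionally preceded by a caller-supplied column transform such as a DCT), designs a per-class codebook with the splitting-based LBG / generalized Lloyd procedure, and classifies a query by its nearest codevector over all class codebooks using Euclidean distance. The function names and iteration counts are illustrative choices.

```python
import numpy as np

def row_mean_feature(image, transform=None):
    """Row mean vector of a (column-transformed) grayscale image.

    Applying a column transform first (e.g. a DCT over each column) and
    then averaging across columns compacts image energy into a short
    feature vector, one element per row.
    """
    img = image.astype(float)
    if transform is not None:
        img = transform(img)          # hypothetical column transform
    return img.mean(axis=1)           # one value per row

def lbg_codebook(vectors, size, iters=20):
    """LBG codebook design: split every codevector, then Lloyd-iterate.

    `vectors` is an (n, d) array of training feature vectors for one
    class; `size` (a power of two here) is the target codebook size.
    """
    book = vectors.mean(axis=0, keepdims=True)
    while len(book) < size:
        # Split each codevector into a perturbed pair.
        book = np.vstack([book * 1.01, book * 0.99])
        for _ in range(iters):
            # Assign each training vector to its nearest codevector.
            dists = np.linalg.norm(vectors[:, None] - book[None], axis=2)
            nearest = dists.argmin(axis=1)
            # Move each codevector to the centroid of its cluster.
            for k in range(len(book)):
                members = vectors[nearest == k]
                if len(members):
                    book[k] = members.mean(axis=0)
    return book

def classify(query, codebooks):
    """Nearest-neighbor over all class codebooks (Euclidean distance)."""
    best, label = np.inf, None
    for cls, book in codebooks.items():
        d = np.linalg.norm(book - query, axis=1).min()
        if d < best:
            best, label = d, cls
    return label
```

A codebook of size 4 replaces all training vectors of a class with just four representative codevectors, which is how the method cuts storage and nearest-neighbor search time while keeping a compact summary of each class.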
