Clothing Retrieval Based on Local Similarity with Multiple Images

The recent expansion of the online shopping market has advanced studies of clothing retrieval via image search. In this study, we develop a novel clothing retrieval system based on local similarity, with which users can retrieve desired clothes that are globally similar to one image and partially similar to another. We propose a method of coding global features by merging local descriptors extracted from multiple images, and we design a system that re-evaluates the output of a similar-image search according to the similarity of local regions. Experiments showed that our method increased the probability of users finding their desired clothes from 39.7% to 55.1%, compared with a standard similar-image search using global features of a single image. The improvement was statistically significant under t-tests.
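
As a rough illustration of the feature-coding step, the sketch below pools local descriptors from multiple query images and codes them into a single global histogram over a visual codebook. Every concrete choice here is an assumption rather than the paper's implementation: ORB descriptors via OpenCV, hard assignment to a pre-trained k-means codebook, and L1 normalization stand in for whatever descriptor and coding scheme the authors actually use.

```python
# Minimal sketch: merge local descriptors from multiple images and
# code them into one global bag-of-visual-words histogram.
# Assumptions (not from the paper): ORB descriptors and a pre-trained
# float32 codebook of shape (num_words, 32).
import cv2
import numpy as np

def extract_descriptors(image_path):
    """Detect keypoints in one image and return their local descriptors."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()
    _, descriptors = orb.detectAndCompute(img, None)
    return descriptors  # shape (num_keypoints, 32), or None if none found

def merged_global_feature(image_paths, codebook):
    """Pool local descriptors from several images, then code the pooled
    set into a single normalized histogram over the visual codebook."""
    pooled = [d for p in image_paths
              if (d := extract_descriptors(p)) is not None]
    descriptors = np.vstack(pooled).astype(np.float32)
    # Hard-assign each descriptor to its nearest codeword.
    dists = np.linalg.norm(
        descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float32)
    return hist / (hist.sum() + 1e-8)
```

The re-evaluation step can be sketched in the same spirit: take the first-stage results ranked by global similarity, compute a feature for a user-chosen local region, and re-sort by a blend of global and local distances. Again, the box-shaped region, the reuse of the same histogram feature, and the blending weight alpha are illustrative assumptions, not the authors' method; in particular, comparing the same box position in each candidate sidesteps the region alignment a real system would need.

```python
# Minimal sketch of the re-ranking step, reusing the imports and the
# codebook convention from the sketch above.
def local_feature(img, region, codebook):
    """Code the descriptors found inside one rectangular region."""
    x, y, w, h = region
    orb = cv2.ORB_create()
    _, desc = orb.detectAndCompute(img[y:y + h, x:x + w], None)
    if desc is None:
        return np.zeros(len(codebook), dtype=np.float32)
    desc = desc.astype(np.float32)
    dists = np.linalg.norm(desc[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(codebook))
    return hist.astype(np.float32) / (hist.sum() + 1e-8)

def rerank_by_local_region(query_path, region, candidates, codebook,
                           alpha=0.5):
    """Re-sort first-stage search results by blending each candidate's
    global distance with a local-region distance.
    candidates: list of (image_path, global_distance) pairs."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    q_local = local_feature(query, region, codebook)
    rescored = []
    for path, g_dist in candidates:
        cand = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Simplification: compare the same box position in the candidate.
        c_local = local_feature(cand, region, codebook)
        l_dist = np.linalg.norm(q_local - c_local)
        rescored.append((path, alpha * g_dist + (1 - alpha) * l_dist))
    return sorted(rescored, key=lambda t: t[1])
```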
