Unsupervised Cross-Media Retrieval Using Domain Adaptation With Scene Graph

Existing cross-media retrieval methods are usually conducted under the supervised setting, which requires large amounts of annotated training data. Since annotating cross-media data is extremely labor-intensive, unsupervised cross-media retrieval is in high demand; it is also very challenging, because heterogeneous distributions across different media types must be handled without any annotated information. To address this challenge, this paper proposes the <italic>Domain Adaptation with Scene Graph (DASG)</italic> approach, which transfers knowledge from the source domain to improve cross-media retrieval in the target domain. Our DASG approach takes Visual Genome as the source domain, which contains image knowledge in the form of scene graphs. The main contributions of this paper are as follows. First, we propose to address <italic>unsupervised cross-media retrieval</italic> by domain adaptation. Instead of relying on labor-intensive annotation of cross-media data during training, our DASG approach learns cross-media correlation knowledge from Visual Genome and then transfers this knowledge to cross-media retrieval through media alignment and distribution alignment. Second, our DASG approach exploits fine-grained information via <italic>scene graph representation</italic> to enhance generalization capability across domains. The generated scene graph representation builds (<italic>subject</italic> <inline-formula> <tex-math notation="LaTeX">$\rightarrow $ </tex-math></inline-formula> <italic>relationship</italic> <inline-formula> <tex-math notation="LaTeX">$\rightarrow $ </tex-math></inline-formula> <italic>object</italic>) triplets by exploiting the objects and relationships within images and text, which makes the cross-media correlation more precise and promotes unsupervised cross-media retrieval. Third, we exploit related tasks, including <italic>object and relationship detection</italic>, to learn more discriminative features across domains.
Leveraging the semantic information of objects and relationships improves cross-media correlation learning for retrieval. Experiments on two widely used cross-media retrieval datasets, Flickr-30K and MS-COCO, demonstrate the effectiveness of our DASG approach.
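The (subject → relationship → object) triplets described above can be sketched as plain tuples. The following minimal Python illustration is an assumption-laden stand-in, not the paper's implementation: the triplet sets and the simple overlap score are hypothetical, whereas DASG learns cross-media correlation through media and distribution alignment.

```python
# Illustrative sketch only: scene-graph triplets as (subject, relationship,
# object) tuples, compared with a simple Jaccard-style overlap. All names
# and the scoring rule are hypothetical examples, not the DASG method.
from typing import Set, Tuple

Triplet = Tuple[str, str, str]  # (subject, relationship, object)

def triplet_overlap(image_triplets: Set[Triplet],
                    text_triplets: Set[Triplet]) -> float:
    """Jaccard similarity between two triplet sets; 0.0 if both are empty."""
    union = image_triplets | text_triplets
    if not union:
        return 0.0
    return len(image_triplets & text_triplets) / len(union)

# Triplets a scene-graph generator might produce for an image and a caption.
img = {("man", "riding", "horse"), ("horse", "on", "beach")}
txt = {("man", "riding", "horse"), ("man", "wearing", "hat")}
print(triplet_overlap(img, txt))  # 1 shared triplet out of 3 distinct
```

Matching at the triplet level, rather than on whole images or sentences, is what makes the correlation fine-grained: a shared (subject, relationship, object) tuple is a much stronger alignment signal than co-occurring words or objects alone.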
