Bidirectional Joint Representation Learning with Symmetrical Deep Neural Networks for Multimodal and Crossmodal Applications