Inspired by recent advances in leveraging multiple modalities for machine translation, we introduce an encoder-decoder pipeline that uses (1) specific objects detected within an image together with their object labels, and (2) a language model that decodes a joint embedding of the object features and object labels. The pipeline merges the detected objects and their labels, and then learns to generate caption sequences describing the image. The decoder learns to produce descriptions from scratch by decoding the joint representation of the objects' visual features and their classes produced by the encoder. The key idea is to condition caption generation only on the specific objects in the image and their labels, rather than on visual features of the entire image. The model still requires further calibration of its parameters and settings to reach better accuracy and performance.
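To make the described architecture concrete, the following is a minimal sketch in PyTorch, assuming pre-extracted per-object visual features (e.g. from an off-the-shelf detector) and integer object-label ids. All layer names, dimensions, and the mean-pooling fusion are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of the object-feature + object-label captioning pipeline.
import torch
import torch.nn as nn


class ObjectCaptioner(nn.Module):
    def __init__(self, num_labels, vocab_size,
                 feat_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Encoder: project object visual features, embed their labels,
        # and fuse both into a single joint representation per image.
        self.feat_proj = nn.Linear(feat_dim, embed_dim)
        self.label_embed = nn.Embedding(num_labels, embed_dim)
        self.fuse = nn.Linear(2 * embed_dim, hidden_dim)
        # Decoder: an LSTM language model conditioned on the joint embedding.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, obj_feats, obj_labels):
        # obj_feats: (batch, num_objects, feat_dim); obj_labels: (batch, num_objects)
        joint = torch.cat([self.feat_proj(obj_feats),
                           self.label_embed(obj_labels)], dim=-1)
        # Mean-pool over detected objects to obtain one vector per image
        # (an assumed fusion choice for this sketch).
        return torch.tanh(self.fuse(joint)).mean(dim=1)

    def forward(self, obj_feats, obj_labels, captions):
        # captions: (batch, seq_len) token ids; teacher forcing during training.
        ctx = self.encode(obj_feats, obj_labels)   # (batch, hidden_dim)
        h0 = ctx.unsqueeze(0)                      # (1, batch, hidden_dim)
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(self.word_embed(captions), (h0, c0))
        return self.out(out)                       # (batch, seq_len, vocab_size)
```

In this sketch the encoder's joint object representation initializes the decoder's hidden state, so caption words are generated conditioned only on the detected objects and their labels, mirroring the pipeline described above; whole-image features are deliberately not used.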