Visual question answering (VQA) requires a high-level understanding of both the question and the image, along with visual reasoning to predict the correct answer. It is therefore important to design an effective attention model that associates key regions in the image with key words in the question. To date, most attention-based approaches model only the relationships between individual image regions and individual question words. This is insufficient for VQA, because humans reason over global information rather than local information alone. In this paper, we propose a novel multi-modality global fusion attention network (MGFAN) consisting of stacked global fusion attention (GFA) blocks, which capture information from a global perspective. The proposed method computes co-attention and self-attention jointly, rather than computing them separately. We validate our approach on the most commonly used benchmark, the VQA-v2 dataset. Experimental results show that the proposed method outperforms the previous state of the art. Our best single model achieves 70.67% accuracy on the VQA-v2 test-dev set.
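To make the idea of computing co-attention and self-attention jointly more concrete, the following is a minimal PyTorch sketch of what a stacked GFA-style block could look like: image-region features and question-word features are concatenated into one joint sequence, and a single multi-head attention pass lets every token attend to both modalities at once. The class name, dimensions, and layer layout are illustrative assumptions; the abstract does not specify the authors' exact formulation.

```python
import torch
import torch.nn as nn


class GlobalFusionAttentionBlock(nn.Module):
    """Hypothetical sketch of a global-fusion-attention-style block.

    Rather than running intra-modality self-attention and cross-modality
    co-attention as separate steps, the image regions and question words are
    fused into one sequence so a single attention layer covers both. This is
    an assumed reading of the abstract, not the paper's exact architecture.
    """

    def __init__(self, dim: int = 512, num_heads: int = 8, dropout: float = 0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, dropout=dropout,
                                          batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim),
            nn.ReLU(inplace=True),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, img_feats: torch.Tensor, q_feats: torch.Tensor):
        # img_feats: (batch, num_regions, dim); q_feats: (batch, num_words, dim)
        joint = torch.cat([img_feats, q_feats], dim=1)   # one joint sequence
        attended, _ = self.attn(joint, joint, joint)     # self- and co-attention in one pass
        joint = self.norm1(joint + attended)             # residual + layer norm
        joint = self.norm2(joint + self.ffn(joint))      # position-wise feed-forward
        # Split back into the two modalities so blocks can be stacked.
        n = img_feats.size(1)
        return joint[:, :n], joint[:, n:]


if __name__ == "__main__":
    block = GlobalFusionAttentionBlock(dim=512, num_heads=8)
    regions = torch.randn(2, 36, 512)   # e.g. 36 bottom-up region features
    words = torch.randn(2, 14, 512)     # e.g. 14 question-token embeddings
    v, q = block(regions, words)
    print(v.shape, q.shape)             # (2, 36, 512) and (2, 14, 512)
```

In this sketch, stacking several such blocks and pooling the two output sequences would yield the fused representation fed to an answer classifier; the pooling and classifier heads are likewise left out as unspecified details.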