Tlemcen University at ImageCLEF 2019 Visual Question Answering Task

In this paper we describe the participation of the Techno team in the ImageCLEF 2019 Medical Visual Question Answering (VQA-Med) task. VQA-Med is a challenge that combines computer vision with Natural Language Processing (NLP) in order to build a system that generates answers from a set of medical images and the questions associated with them. To solve the task, we used a joint learning method for text and images and tested a publicly available VQA network, applying a neural network with visual-semantic embeddings. Our approach, based on a CNN and an RNN model, achieves a BLEU score of 0.486.
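As a rough illustration of the joint text-image embedding idea mentioned above, the sketch below fuses a (pretend) CNN image feature with an RNN encoding of the question and scores candidate answers. All dimensions, the toy vocabulary, and the random weights are hypothetical placeholders, not the paper's actual architecture or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 512-d image features,
# 300-d word embeddings, 512-d question encoding, 100 candidate answers.
IMG_DIM, EMB_DIM, HID_DIM, N_ANSWERS = 512, 300, 512, 100

# Stand-in for a pooled CNN feature vector of one medical image.
image_feat = rng.standard_normal(IMG_DIM)

# Toy vocabulary and word-embedding table.
vocab = {"what": 0, "organ": 1, "is": 2, "shown": 3}
embeddings = rng.standard_normal((len(vocab), EMB_DIM))

# Simple Elman-style RNN over question tokens (a stand-in for an LSTM/GRU).
W_xh = rng.standard_normal((HID_DIM, EMB_DIM)) * 0.01
W_hh = rng.standard_normal((HID_DIM, HID_DIM)) * 0.01

def encode_question(tokens):
    h = np.zeros(HID_DIM)
    for tok in tokens:
        x = embeddings[vocab[tok]]
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h

# Project the image feature into the same space, fuse by element-wise
# product (a common joint-embedding choice), then score candidate answers.
W_img = rng.standard_normal((HID_DIM, IMG_DIM)) * 0.01
W_out = rng.standard_normal((N_ANSWERS, HID_DIM)) * 0.01

def answer_probs(image_feat, tokens):
    q = encode_question(tokens)
    v = np.tanh(W_img @ image_feat)
    fused = q * v                       # joint text-image embedding
    logits = W_out @ fused
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()              # softmax over candidate answers

probs = answer_probs(image_feat, ["what", "organ", "is", "shown"])
print(probs.shape)
```

In a trained system the weights would of course be learned end to end, and the argmax over `probs` would pick the predicted answer; here the weights are random, so the output is only structurally meaningful.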