Question answering (QA) with artificial-intelligence models is in a methodological transition period, and one architecture, the dynamic memory network (DMN), is drawing attention for two key attributes: an attention mechanism defined by neural-network operations and a modular architecture that imitates human cognitive processes during question answering. In this paper, we improve the accuracy of the inferred answers by adapting an automatic data-augmentation method to compensate for the limited amount of training data and by strengthening the model's perception of time. The experimental results show that on the 1K-bAbI tasks, the modified DMN achieves 89.21% accuracy and passes twelve tasks, which is 13.58 percentage points higher, with four more tasks passed, than a reference DMN implementation. Additionally, the DMN's word-embedding vectors form strong clusters after training. Moreover, the number of episodic passes and the number of supporting facts are directly correlated, which affects performance significantly.
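The episodic-memory mechanism referred to above can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's implementation: a real DMN scores facts with a learned gating network and updates memory with a GRU, whereas here attention is a plain dot-product softmax and the update is a fixed interpolation. The function name `episodic_passes` and all vectors are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def episodic_passes(facts, question, num_passes):
    """Simplified DMN-style episodic memory.

    On each pass, attention over the fact vectors (scored against the
    question and the current memory) yields an episode vector, which is
    folded into the memory. More passes let the memory chain through
    multiple supporting facts, matching the correlation noted above.
    """
    memory = question[:]  # memory is initialised with the question vector
    for _ in range(num_passes):
        # Score each fact by similarity to the question and current memory
        # (a real DMN uses a learned two-layer gate here, not raw dots).
        scores = [dot(f, question) + dot(f, memory) for f in facts]
        gates = softmax(scores)
        # Episode = attention-weighted sum of fact vectors.
        episode = [sum(g * f[i] for g, f in zip(gates, facts))
                   for i in range(len(question))]
        # Simplified memory update (a real DMN applies a GRU instead).
        memory = [0.5 * m + 0.5 * e for m, e in zip(memory, episode)]
    return memory
```

With a question vector aligned to one fact, repeated passes pull the memory toward that fact, which is the intuition behind needing more passes when more supporting facts must be chained.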