Selecting a credible answer source from the massive number of sentences spread across a large collection of documents is essential for real-world factoid Question Answering (QA) systems. Neural text matching, which measures the relevance between the target question and each candidate answer source, has been widely studied for answer source selection. Interaction-based matching methods have achieved remarkable performance. However, the interaction is conditioned on either the coarse-grained word embeddings or the fine-grained encoder states; the interaction between latent information of different granularities is overlooked. We propose a multi-granularity interaction fusion model. It learns to perceive the interaction not only between latent information of the same granularity, but also between latent information of different granularities. We reconcile these diverse interactions to compute a global question-source relevance. We experiment on WikiQA. The results show that our method achieves competitive performance compared to the state of the art.
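To make the idea of multi-granularity interaction concrete, the following is a minimal sketch, not the paper's actual architecture: it builds dot-product interaction matrices between a question and a candidate sentence at two granularities (word embeddings vs. encoder states), adds the cross-granularity pairings, and pools each matrix into a feature before fusing them into one relevance score. All tensor shapes, the max-then-mean pooling, and the uniform fusion weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent representations; shapes and values are purely illustrative.
d = 8
q_emb = rng.standard_normal((5, d))  # question word embeddings (coarse)
a_emb = rng.standard_normal((7, d))  # answer word embeddings (coarse)
q_enc = rng.standard_normal((5, d))  # question encoder states (fine)
a_enc = rng.standard_normal((7, d))  # answer encoder states (fine)


def interaction(x, y):
    """Dot-product interaction matrix between two token sequences."""
    return x @ y.T


# Same-granularity interactions ...
m_coarse = interaction(q_emb, a_emb)
m_fine = interaction(q_enc, a_enc)
# ... and the cross-granularity interactions that the paper argues
# are usually omitted by interaction-based matchers.
m_cross_qa = interaction(q_emb, a_enc)
m_cross_aq = interaction(q_enc, a_emb)


def pool(m):
    """Reduce an interaction matrix to a scalar feature (max-then-mean)."""
    return m.max(axis=1).mean()


# Fuse the four interaction views into one relevance score; a real model
# would learn the fusion weights rather than average them uniformly.
features = np.array([pool(m) for m in (m_coarse, m_fine, m_cross_qa, m_cross_aq)])
relevance = float(features.mean())
print(relevance)
```

Ranking candidates then amounts to scoring each question-sentence pair this way and sorting by the fused relevance.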