With the great success of convolutional neural networks (CNNs), interpreting their internal mechanisms has become increasingly important, yet the decision-making logic of these networks remains an open issue. In the bottom-up hierarchy observed in neuroscience, the decision-making process can be deduced from a series of sub-decisions proceeding from low to high levels. Inspired by this, we propose the Concept-harmonized HierArchical INference (CHAIN) interpretation scheme. In CHAIN, the network decision-making process from shallow to deep layers is interpreted through hierarchical backward inference over visual concepts ranging from high to low semantic levels. First, we learn a general hierarchical visual-concept representation in the layered feature space of a CNN by training a concept-harmonizing model on a large concept dataset. Second, to interpret a specific network decision-making process, we conduct concept-harmonized hierarchical inference backward from the highest to the lowest semantic level. Specifically, what the network learns for a target concept at a deeper layer is disassembled into what it learns for concepts at shallower layers. Finally, a specific network decision-making process is explained as a form of concept-harmonized hierarchical inference, which is intuitively comparable to bottom-up hierarchical visual recognition. Quantitative and qualitative experiments demonstrate the effectiveness of the proposed CHAIN at both the instance and class levels.
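
To make the backward disassembly step concrete, the following is a minimal NumPy sketch of the general idea: a concept signature at a deeper layer is expressed as a combination of concept signatures at a shallower layer, applied recursively from the decision down to the lowest semantic level. This is an illustrative simplification, not the paper's actual formulation: the names `harmonize` and `chain_inference` are hypothetical, a plain least-squares fit stands in for the concept-harmonizing model, and concept signatures at adjacent levels are assumed to have been projected into a shared feature space.

```python
import numpy as np

def harmonize(deep_concept, shallow_concepts):
    """Express one deeper-layer concept signature as a linear
    combination of shallower-layer concept signatures.

    deep_concept:     (d,) activation signature of the target concept.
    shallow_concepts: (k, d) signatures of candidate lower-level concepts,
                      assumed projected into the same feature space.
    Returns the (k,) combination weights (least-squares stand-in for the
    concept-harmonizing model).
    """
    weights, *_ = np.linalg.lstsq(shallow_concepts.T, deep_concept, rcond=None)
    return weights

def chain_inference(decision_vector, concept_banks):
    """Backward hierarchical inference: starting from the decision,
    recursively attribute each level's concept to the level below.

    concept_banks: list of (k, d) concept-signature arrays, ordered from
                   the highest semantic level to the lowest.
    Returns one weight vector per level, tracing the decision down the
    concept hierarchy.
    """
    target = decision_vector
    trace = []
    for bank in concept_banks:
        weights = harmonize(target, bank)
        trace.append(weights)
        # carry the dominant lower-level concept down as the next target
        target = bank[np.argmax(np.abs(weights))]
    return trace
```

Under these assumptions, calling `chain_inference(decision, [high_level_bank, mid_level_bank, low_level_bank])` would yield one attribution vector per semantic level, mirroring the high-to-low backward inference described in the abstract.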