Understanding the predictions made by sophisticated machine learning models is a challenging problem for humans. This paper proposes a method to interpret complex machine learning models on individual samples. We first transfer the generalization ability of the target model to be interpreted, such as a deep neural network, to a GBDT (Gradient Boosting Decision Tree) through knowledge distillation. We then present a local explanation method for the GBDT that faithfully ranks the relative importance of interpretable predicates for explaining an individual sample. To show the effectiveness of our method, we study and explain machine learning tasks across diverse domains, including sentiment analysis, tabular classification, and image classification. Experiments show that our explanations are competitive with those of related state-of-the-art explanation methods and, in some cases, help users better understand the predictions made by black-box models.
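As a rough illustration of the two-step pipeline summarized above (distill a black-box teacher into a GBDT student, then rank feature-level evidence for one sample), the following sketch uses a small scikit-learn MLP as a stand-in teacher and LightGBM as the student; the per-sample contribution ranking via `pred_contrib=True` is only an assumed proxy for the paper's predicate-ranking procedure, and all variable and function names here are illustrative, not taken from the paper.

```python
import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy tabular data standing in for the task to be explained.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Step 1: train the black-box teacher (here a small MLP) ...
teacher = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
teacher.fit(X, y)

# ... and distill it: the GBDT student regresses on the teacher's soft
# predictions (class-1 probabilities) instead of the hard labels.
soft_targets = teacher.predict_proba(X)[:, 1]
student = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05, random_state=0)
student.fit(X, soft_targets)

# Step 2: local explanation for one sample. As an assumed stand-in for the
# paper's predicate ranking, use LightGBM's per-sample feature contributions
# and rank features by absolute contribution.
x = X[:1]
contrib = student.predict(x, pred_contrib=True)[0]  # last entry is the bias term
feature_contrib = contrib[:-1]
ranking = np.argsort(-np.abs(feature_contrib))
print("Top features for this sample:", ranking[:5])
print("Student vs. teacher prediction:",
      student.predict(x)[0], teacher.predict_proba(x)[0, 1])
```

Comparing the student's and teacher's outputs on held-out samples is one way to check that the distilled GBDT is faithful enough for its explanations to be meaningful.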