Deep learning techniques such as convolutional neural networks (CNNs) have been widely adopted in fields such as image classification, autonomous driving, and natural language processing due to their superior performance. However, recent work shows that deep learning models are vulnerable to adversarial samples, which are crafted by adding small perturbations to normal samples; these perturbations are imperceptible to humans yet mislead the models into producing incorrect results. Many adversarial attack models have been proposed, and many detection methods have been developed to identify the adversarial samples these attacks generate. However, the evaluations of these detection methods are fragmented and scattered across separate studies, and the community still lacks a comprehensive understanding of how existing detection methods perform against different attack models on different datasets. In this paper, using image classification as the example application scenario, we conduct a comprehensive study of five mainstream adversarial detection methods against five major attack models on four widely used benchmark datasets. We find that the detection accuracy of different methods interleaves across attack models and datasets, i.e., no single method consistently outperforms the others. Beyond detection accuracy, we also evaluate the time efficiency of the detection methods. The findings reported in this paper provide useful insights for building systems that detect adversarial samples and can serve as a guideline for designing new adversarial detection methods.