Exploration remains one of the major challenges in deep reinforcement learning (RL). This paper proposes an approach to improve exploration efficiency in distributional RL. First, it introduces a novel method, inspired by Bayesian deep learning, that uses deep ensembles to estimate the epistemic and aleatoric uncertainty of distributional RL value estimates. Second, it presents a method that uses the estimated epistemic uncertainty to improve exploration efficiency in deep distributional RL. Experimental results show that the proposed approach outperforms the baseline on Atari games.
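The decomposition hinted at above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a hypothetical ensemble of M distributional heads, each predicting N return quantiles for one state-action pair, and uses the common ensemble decomposition: within-member variance as aleatoric uncertainty, between-member variance as epistemic uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: M distributional heads, each predicting N return
# quantiles for a single state-action pair (values here are synthetic).
M, N = 5, 32
quantiles = rng.normal(loc=rng.normal(0.0, 0.5, size=(M, 1)),
                       scale=1.0, size=(M, N))

member_means = quantiles.mean(axis=1)   # each member's expected return

# Aleatoric uncertainty: average intrinsic spread of each member's
# predicted return distribution.
aleatoric = quantiles.var(axis=1).mean()

# Epistemic uncertainty: disagreement between ensemble members' means;
# an exploration bonus could be proportional to this quantity.
epistemic = member_means.var()
```

Under this decomposition, the epistemic term shrinks as the ensemble members agree (i.e., as more data is seen), which is what makes it a plausible exploration signal.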