We consider a human-in-the-loop scenario in the context of low-shot learning. Our approach is motivated by the observation that the variability of samples in novel categories cannot be sufficiently captured by the limited labeled observations, so heterogeneous samples that differ markedly from the existing labeled novel data inevitably emerge in the testing phase. To this end, we augment the low-shot learning system with an uncertainty assessment module that accounts for the disturbance caused by such out-of-distribution (OOD) samples. Once detected, these OOD samples are passed to human annotators for active labeling. Because this uncertainty assessment involves a discrete decision, the whole Human-In-the-Loop Low-shot (HILL) learning framework is not end-to-end trainable. We therefore revisit the learning system from the perspective of reinforcement learning and introduce the REINFORCE algorithm to optimize the model parameters via policy gradient. The whole system achieves noticeable improvements over existing low-shot learning approaches.
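To make the policy-gradient step concrete, the sketch below shows a minimal REINFORCE update for the discrete "defer to human vs. predict" decision described above. It is an illustrative PyTorch example under simplifying assumptions, not the authors' implementation: the module name `UncertaintyPolicy`, the feature dimension, and the placeholder reward are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn


class UncertaintyPolicy(nn.Module):
    """Scores how likely a query sample should be deferred to a human (illustrative)."""

    def __init__(self, feat_dim):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)

    def forward(self, features):
        # Probability of the discrete "defer to human" action.
        return torch.sigmoid(self.scorer(features)).squeeze(-1)


policy = UncertaintyPolicy(feat_dim=64)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Toy batch of query embeddings (stand-ins for the low-shot feature extractor output).
features = torch.randn(32, 64)

# 1) Sample the discrete action a ~ pi(a | x): 1 = ask a human, 0 = predict directly.
dist = torch.distributions.Bernoulli(probs=policy(features))
actions = dist.sample()

# 2) Observe a reward per sample. In the real system this would reflect whether the
#    final label (classifier or human) is correct, possibly minus an annotation cost
#    when a human is queried; here it is a random placeholder.
rewards = torch.where(actions.bool(),
                      torch.full_like(actions, 0.7),
                      (torch.rand_like(actions) > 0.5).float())

# 3) REINFORCE update: ascend E[(R - b) * grad log pi(a | x)], with the batch mean
#    reward b as a simple variance-reduction baseline.
baseline = rewards.mean()
loss = -((rewards - baseline) * dist.log_prob(actions)).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the gradient flows only through `log pi(a | x)`, the discrete routing of OOD samples to annotators does not need to be differentiable, which is precisely why the policy-gradient formulation is used here.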