Abstract In recent years, research on single-image super-resolution has progressed rapidly with the development of deep convolutional neural networks (DCNNs). Among current techniques, models based on residual learning have demonstrated great progress. Despite their strong performance, the depth and width of super-resolution models have grown considerably, which brings challenges of computational complexity and memory consumption. To address these issues, increasing attention has been paid to improving model efficiency. In this work, we tackle the problem by proposing a novel model with a new residual block and a new training method. By introducing the squeeze-and-excitation (SE) module and depthwise separable convolution, we obtain a slimmer and more efficient model. In addition, we apply a cascade training approach to train our model. Experiments on benchmark datasets show that the proposed image super-resolution model achieves state-of-the-art performance with fewer parameters and lower time cost.
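As a rough illustration of the two efficiency techniques named above, the sketch below combines a depthwise separable convolution with an SE module inside a residual block. This is not the paper's exact architecture; the channel count, reduction ratio, and layer ordering are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SESepResBlock(nn.Module):
    """Illustrative residual block: depthwise separable convolution
    followed by squeeze-and-excitation channel attention.
    Channel count and reduction ratio are assumptions, not the
    paper's reported configuration."""

    def __init__(self, channels=64, reduction=16):
        super().__init__()
        # Depthwise separable conv = depthwise 3x3 + pointwise 1x1,
        # far fewer parameters than a dense 3x3 convolution.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        # SE module: global average pool -> bottleneck MLP -> per-channel gates.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        out = self.relu(self.pointwise(self.depthwise(x)))
        out = out * self.se(out)  # rescale channels by SE attention weights
        return x + out            # residual connection

x = torch.randn(1, 64, 32, 32)
y = SESepResBlock()(x)
print(y.shape)  # same spatial size and channel count as the input
```

The residual connection keeps the block a drop-in replacement for a standard residual block, while the depthwise/pointwise split and the SE bottleneck account for the parameter savings the abstract claims.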