Semantic segmentation of surgical instruments provides essential priors for autonomous surgery. The task is challenging, however, because the fine structure of surgical instruments demands accurate segmentation of detailed image regions. As the visual guidance for autonomous surgery, the algorithm must also run in real time and be suitable for embedded systems. In this paper, a discriminative asymmetric learning framework is proposed to balance the efficiency and effectiveness of surgical instrument segmentation. Two convolutional neural networks with dedicated designs extract the detail and semantic features of instruments, respectively. To reduce redundancy in the visual representation, an aggregator-discriminator mechanism is proposed to distinguish the features learned at different levels. Experiments demonstrate that the proposed method achieves competitive segmentation accuracy with higher efficiency than existing methods.
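
To make the asymmetric two-branch design concrete, the sketch below shows one plausible instantiation in PyTorch. It is a minimal illustration, not the authors' implementation: all module names, channel widths, strides, and the exact gating rule of the aggregator-discriminator are assumptions introduced here for exposition.

```python
# Minimal sketch (assumed, not the paper's actual architecture) of an
# asymmetric two-branch segmentation network with a gated
# aggregator-discriminator fusion. Channel widths and strides are illustrative.
import torch
import torch.nn as nn


def conv_bn_relu(in_ch, out_ch, stride=1):
    """3x3 convolution followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DetailBranch(nn.Module):
    """Shallow, wide branch preserving spatial detail (output at 1/8 scale)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_bn_relu(3, 64, stride=2),
            conv_bn_relu(64, 64),
            conv_bn_relu(64, 128, stride=2),
            conv_bn_relu(128, 128, stride=2),
        )

    def forward(self, x):
        return self.net(x)


class SemanticBranch(nn.Module):
    """Deeper, narrower branch capturing context (output at 1/32 scale)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_bn_relu(3, 16, stride=2),
            conv_bn_relu(16, 32, stride=2),
            conv_bn_relu(32, 64, stride=2),
            conv_bn_relu(64, 128, stride=2),
            conv_bn_relu(128, 128, stride=2),
        )

    def forward(self, x):
        return self.net(x)


class AggregatorDiscriminator(nn.Module):
    """Fuses the two feature streams; a learned sigmoid gate decides, per
    position, how much each stream contributes, suppressing redundancy."""
    def __init__(self, ch=128):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 1, bias=False),
            nn.BatchNorm2d(ch),
            nn.Sigmoid(),
        )
        self.fuse = conv_bn_relu(ch, ch)

    def forward(self, detail, semantic):
        # Upsample semantic features to the detail-branch resolution.
        semantic = nn.functional.interpolate(
            semantic, size=detail.shape[2:], mode="bilinear", align_corners=False
        )
        w = self.gate(torch.cat([detail, semantic], dim=1))
        return self.fuse(w * detail + (1.0 - w) * semantic)


class InstrumentSegNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.detail = DetailBranch()
        self.semantic = SemanticBranch()
        self.agg = AggregatorDiscriminator(128)
        self.head = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        fused = self.agg(self.detail(x), self.semantic(x))
        logits = self.head(fused)
        # Restore full input resolution for dense prediction.
        return nn.functional.interpolate(
            logits, size=x.shape[2:], mode="bilinear", align_corners=False
        )
```

In this sketch the asymmetry is between a shallow high-resolution branch and a deep low-resolution one, and the discriminator role is played by a convex gating of the two streams; the paper's actual fusion rule may differ.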