Target region segmentation of synthetic aperture radar (SAR) images is one of the challenging problems in SAR image interpretation. Existing conventional segmentation methods require careful parameter selection for different backgrounds. Compared with traditional methods, deep-learning-based methods reduce this dependence on parameters and achieve more accurate results. However, the lack of annotated data limits the application of deep-learning-based methods to SAR chip image segmentation. To address these problems, a refined network for SAR vehicle image semantic segmentation, namely, the All-Convolutional networks (A-ConvNets)-based Mask (ACM) net, is proposed. The masks in the training dataset are extracted from image reconstructions based on the Attribute Scattering Center (ASC) model, which overcomes the lack of manual annotation in deep-learning-based segmentation. The proposed ACM Net consists of a modified A-ConvNets backbone and two decoupled head branches, which produce the target segmentation masks and label predictions, respectively. Experiments on the moving and stationary target acquisition and recognition (MSTAR) dataset show that the overall segmentation performance of ACM Net surpasses both traditional segmentation methods and deep-learning-based segmentation methods. Its classification results also outperform those of other instance and semantic segmentation methods, achieving state-of-the-art recognition accuracy.
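The abstract describes a shared backbone feeding two decoupled heads, one for segmentation and one for label prediction. The following is a minimal NumPy sketch of that decoupled two-head pattern only; the chip size (88×88), feature dimension (64), and layer forms are illustrative assumptions and are not the paper's actual A-ConvNets architecture (only the 10-class count matches MSTAR).

```python
import numpy as np

rng = np.random.default_rng(0)
CHIP = 88          # assumed SAR chip side length (illustrative)
FEAT = 64          # assumed shared feature dimension (illustrative)
NUM_CLASSES = 10   # MSTAR contains 10 vehicle target classes

# Random weights stand in for the trained backbone and head parameters.
W_backbone = rng.standard_normal((CHIP * CHIP, FEAT)) * 0.01
W_seg = rng.standard_normal((FEAT, CHIP * CHIP)) * 0.01
W_cls = rng.standard_normal((FEAT, NUM_CLASSES)) * 0.01

def backbone(x):
    """Shared feature extractor (stand-in for the modified A-ConvNets)."""
    return np.tanh(x.reshape(-1) @ W_backbone)

def forward(x):
    """Run both decoupled heads on one chip: (segmentation mask, class logits)."""
    f = backbone(x)
    mask = (f @ W_seg).reshape(CHIP, CHIP)  # per-pixel segmentation logits
    logits = f @ W_cls                      # per-chip label prediction
    return mask, logits

chip = rng.standard_normal((CHIP, CHIP))
mask, logits = forward(chip)
print(mask.shape, logits.shape)  # (88, 88) (10,)
```

The point of decoupling is that the two heads share features but are optimized for different outputs: a dense per-pixel map versus a single class vector per chip.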