SAR Image Representation Learning With Adversarial Autoencoder Networks
This paper focuses on the generalization ability of models for SAR automatic target recognition (ATR). An object-based similarity evaluation method for the MSTAR dataset is first proposed to show the relationship between classification accuracy and the orientation difference between training and test images. It reveals the poor orientation generalization of traditional methods when the orientation interval exceeds 10°. To improve orientation generalization, a novel adversarial autoencoder network (AAN) is proposed. It learns a code-image-code cyclic network through adversarial training in order to generate new samples at different azimuth angles, and the learned orientation predictor and classifier are then applied to test samples. The proposed network achieves over 86% classification accuracy on the 7-class MSTAR dataset when the minimum orientation interval is limited to 25°, about 4% higher than the baseline A-ConvNets model under the same condition.
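The code-image-code cycle described above can be pictured as an encoder (image → class/orientation code), a decoder (code → image), and a discriminator for adversarial training, with reconstruction and cycle-consistency terms. The PyTorch sketch below is a minimal illustration under those assumptions only; the layer sizes, the (cos, sin) azimuth encoding, and the `cycle_losses` helper are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a code-image-code cyclic adversarial autoencoder for
# 128x128 single-channel SAR chips. Dimensions and module names are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_CLASS, LATENT_ORIENT = 7, 2   # 7 MSTAR classes; (cos, sin) of azimuth

class Encoder(nn.Module):
    """Image -> code: class logits plus an orientation vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_class = nn.Linear(64, LATENT_CLASS)
        self.to_orient = nn.Linear(64, LATENT_ORIENT)

    def forward(self, x):
        h = self.features(x)
        return self.to_class(h), self.to_orient(h)

class Decoder(nn.Module):
    """Code -> image: generates a chip from class + orientation codes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_CLASS + LATENT_ORIENT, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, c_class, c_orient):
        return self.net(torch.cat([c_class, c_orient], dim=1))

class Discriminator(nn.Module):
    """Real vs. generated chips, for the adversarial term."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

def cycle_losses(enc, dec, x):
    """Image -> code -> image reconstruction and code -> image -> code cycle."""
    c_cls, c_ori = enc(x)
    x_hat = dec(torch.softmax(c_cls, dim=1), c_ori)
    rec = F.l1_loss(x_hat, x)                      # image reconstruction
    c_cls2, c_ori2 = enc(x_hat)
    cyc = F.mse_loss(c_ori2, c_ori) + F.mse_loss(c_cls2, c_cls)  # code cycle
    return rec, cyc
```

In a full training loop these reconstruction and cycle terms would be combined with a standard adversarial loss from the discriminator; replacing the orientation code while keeping the class code fixed is one plausible way to synthesize samples at new azimuth angles, in the spirit of the abstract.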