Abandon Locality: Frame-Wise Embedding Aided Transformer for Automatic Modulation Recognition

Automatic modulation recognition (AMR) is regarded as an effective technique for non-cooperative and intelligent communication. In this work, we propose a modified transformer-based method for AMR, called the frame-wise embedding aided transformer (FEA-T), which extracts global correlation features of the signal to achieve higher classification accuracy at a lower time cost. To strengthen the global modeling capability of the transformer, we design a frame-wise embedding module (FEM) that aggregates more samples into each token at the embedding stage, producing a more efficient token sequence. We also derive the optimal frame length by analyzing the representation ability of each transformer layer, yielding a better trade-off between speed and performance. Moreover, we design a novel dual-branch gated linear unit (DB-GLU) for the feed-forward network of the transformer to reduce the model size and improve performance. Experimental results on the RadioML2018.01A dataset demonstrate that the proposed method outperforms state-of-the-art approaches in both recognition accuracy and running speed.
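The frame-wise embedding idea described above (grouping consecutive I/Q samples into frames so that each transformer token summarizes a longer stretch of the signal) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the frame length, embedding dimension, and the random projection standing in for a learned linear layer are all assumptions for demonstration.

```python
import numpy as np

def frame_wise_embed(iq, frame_len, d_model, seed=0):
    """Split a 2 x N I/Q signal into non-overlapping frames of length
    frame_len and linearly project each flattened frame to a d_model token.
    The projection matrix is random here, standing in for a learned layer."""
    _, n = iq.shape
    n_frames = n // frame_len  # trailing samples that don't fill a frame are dropped
    # (2, N) -> (n_frames, 2 * frame_len): each row holds one frame's I and Q samples
    frames = iq[:, :n_frames * frame_len].reshape(2, n_frames, frame_len)
    frames = frames.transpose(1, 0, 2).reshape(n_frames, 2 * frame_len)
    w = np.random.default_rng(seed).standard_normal((2 * frame_len, d_model))
    return frames @ w  # token sequence of shape (n_frames, d_model)

# RadioML2018.01A signals are 2 x 1024 (I and Q channels); frame_len=32 is illustrative
iq = np.random.default_rng(1).standard_normal((2, 1024))
tokens = frame_wise_embed(iq, frame_len=32, d_model=64)
print(tokens.shape)  # (32, 64): 32 tokens instead of 1024 per-sample tokens
```

A larger frame length shortens the token sequence, which cuts the quadratic cost of self-attention; this is the speed/performance trade-off the abstract tunes via the optimal frame length.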
