Deep-Learning-Enhanced NOMA Transceiver Design for Massive MTC: Challenges, State of the Art, and Future Directions

Non-orthogonal multiple access (NOMA) is a promising evolution path to meet the requirements of massive machine type communications (mMTC) in 5G and beyond. However, the deployment of NOMA is hindered by the non-unified signal processing architectures of the various NOMA schemes and by the inflexibility of the offline design paradigm. Moreover, block-wise optimized transceivers leave performance far from the theoretical limit. The recent breakthroughs in deep learning and its successful applications to wireless communications have paved the way to tackle these challenges. This article studies the effectiveness and efficiency of deep learning in enhancing NOMA performance. Specifically, we first present the deep neural network (DNN), which provides a uniform signal processing architecture, and use it as a unified multiuser receiver in both data-driven and model-driven approaches. Thanks to the universal function approximation property of DNNs, this enables end-to-end optimization of NOMA transceivers. In addition, DNNs can automatically extract user access behaviors from time-series signals, so the transceivers can be optimized to match these cross-layer behaviors. We further analyze the integration of non-orthogonal communication and neural computation to accomplish high-efficiency data transmission at low cost. Finally, we identify essential future directions for deep-learning-enhanced NOMA from the perspectives of online reconfigurability and adaptability toward the ever-changing environment in future mMTC.
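To make the end-to-end idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a DNN multiuser receiver for a toy two-user power-domain NOMA downlink over an AWGN channel, trained end to end with PyTorch. The network, layer sizes, power split, and SNR below are all assumptions for demonstration only.

```python
# Minimal sketch, assuming a toy two-user power-domain NOMA downlink over AWGN.
# A single DNN jointly detects both users' BPSK bits from the superimposed signal.
import torch
import torch.nn as nn

torch.manual_seed(0)

NUM_USERS = 2
POWERS = (0.8, 0.2)   # assumed power split: "far" user gets more power
SNR_DB = 10.0

class DNNReceiver(nn.Module):
    """Unified multiuser receiver: maps one received sample to one logit per user's bit."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_USERS),
        )

    def forward(self, y):
        return self.net(y)

def simulate_batch(batch_size):
    """Superimpose both users' BPSK symbols with fixed powers and add Gaussian noise."""
    bits = torch.randint(0, 2, (batch_size, NUM_USERS)).float()
    symbols = 2.0 * bits - 1.0                                  # BPSK mapping
    weights = torch.tensor([[POWERS[0] ** 0.5], [POWERS[1] ** 0.5]])
    tx = symbols @ weights                                      # superposition coding
    noise_std = 10.0 ** (-SNR_DB / 20.0)                        # total signal power is 1
    rx = tx + noise_std * torch.randn_like(tx)
    return rx, bits

receiver = DNNReceiver()
optimizer = torch.optim.Adam(receiver.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# End-to-end training: the receiver learns joint detection directly from data,
# without an explicit successive interference cancellation stage.
for step in range(2000):
    rx, bits = simulate_batch(512)
    loss = loss_fn(receiver(rx), bits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    rx, bits = simulate_batch(100_000)
    ber = ((receiver(rx) > 0).float() != bits).float().mean()
    print(f"joint BER over both users: {ber:.4f}")
```

In this sketch only the receiver is learned; the full end-to-end paradigm discussed in the article would also parameterize the transmitter-side mapping and optimize both jointly.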