Biologically Inspired Radio Signal Feature Extraction with Sparse Denoising Autoencoders

Automatic modulation classification (AMC) is an important task for modern communication systems; however, it is challenging when the signal features and the precise models generating each modulation are unknown. We present a new biologically inspired AMC method that requires neither models nor manually specified features, removing the need for expert prior knowledge. We accomplish this using regularized stacked sparse denoising autoencoders (SSDAs). Our method selects efficient classification features directly from raw in-phase/quadrature (I/Q) radio signals in an unsupervised manner. These features are then used to construct higher-complexity abstract features for automatic modulation classification. We demonstrate this process on a dataset generated with a software-defined radio, consisting of random input bits encoded into 100-sample segments of several common digital modulations. Our results show correct classification rates of > 99% at 7.5 dB signal-to-noise ratio (SNR) and > 92% at 0 dB SNR in a six-way classification test. These experiments demonstrate a fundamentally new and broadly applicable mechanism for performing AMC and related tasks without expert-defined or modulation-specific signal information.
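To make the core idea concrete, the following is a minimal sketch, not the authors' exact architecture, of a single sparse denoising autoencoder layer of the kind stacked in an SSDA. Every detail here (layer sizes, learning rate, noise level, the toy BPSK-like signal generator) is an illustrative assumption: each 100-sample complex I/Q segment is flattened to a 200-dimensional real vector, corrupted with Gaussian noise, and the layer learns to reconstruct the clean input while an L1 penalty on the hidden activations encourages sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SparseDenoisingAE:
    """One sparse denoising autoencoder layer with tied weights (sketch)."""

    def __init__(self, n_in, n_hidden, noise_std=0.1, l1=1e-4, lr=0.01):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias
        self.noise_std = noise_std
        self.l1 = l1
        self.lr = lr

    def train_step(self, x):
        # Denoising criterion: corrupt the input, reconstruct the clean signal.
        x_noisy = x + rng.normal(0.0, self.noise_std, x.shape)
        h = sigmoid(x_noisy @ self.W + self.b)   # encode
        x_hat = h @ self.W.T + self.c            # decode (tied weights)
        err = x_hat - x
        # Gradients of 0.5*||x_hat - x||^2 + l1*sum(h) w.r.t. parameters
        # (sigmoid activations are positive, so |h| reduces to h).
        dh = err @ self.W + self.l1
        dpre = dh * h * (1.0 - h)
        gW = np.outer(x_noisy, dpre) + np.outer(err, h)
        self.W -= self.lr * gW
        self.b -= self.lr * dpre
        self.c -= self.lr * err
        return 0.5 * np.sum(err ** 2)

def random_segment(n=100):
    """Toy BPSK-like I/Q segment: random +/-1 symbols plus small noise."""
    bits = rng.integers(0, 2, n) * 2 - 1
    iq = bits + 1j * 0.05 * rng.normal(size=n)
    return np.concatenate([iq.real, iq.imag])  # interleave-free flattening

ae = SparseDenoisingAE(n_in=200, n_hidden=64)
losses = [np.mean([ae.train_step(random_segment()) for _ in range(50)])
          for _ in range(10)]
print(losses[0], "->", losses[-1])  # reconstruction error should fall
```

In the full method, several such layers would be stacked (each trained on the previous layer's hidden activations) and topped with a supervised classifier; the sketch above shows only the unsupervised feature-learning step that removes the need for hand-designed modulation features.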
