Recent advances in machine learning for non-Gaussian data processing

With the rapid proliferation of sensing and computing, an increasing number of industrial applications and an ever-growing body of academic research generate massive multi-modal data from multiple sources. The Gaussian distribution is the probability distribution most ubiquitously used in statistics, signal processing, and pattern recognition. In reality, however, data are not always Gaussian, nor can they always be safely assumed to be Gaussian distributed. In many real-life applications the data distribution is, for example, bounded or asymmetric, and therefore not Gaussian. Recent studies have found that explicitly exploiting the non-Gaussian characteristics of data (e.g., data with bounded support, data with semi-bounded support, and data with an L1/L2-norm constraint) can significantly improve the performance of practical systems. Hence, it is of particular importance and interest to thoroughly study non-Gaussian data and the corresponding non-Gaussian statistical models (e.g., the beta distribution for bounded support data, the gamma distribution for semi-bounded support data, and the Dirichlet/von Mises-Fisher (vMF) distribution for data with an L1/L2-norm constraint). In order to analyze and understand such non-Gaussian distributed data, the development of related learning theories, statistical models, and efficient algorithms is crucial. The scope of this special issue of Elsevier's journal Neurocomputing is to provide theoretical foundations and ground-breaking models and algorithms to address this challenge.
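As a minimal illustrative sketch (not part of the call itself), the mapping between data support and non-Gaussian models mentioned above can be exercised with SciPy; the synthetic data, parameter values, and random seed below are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Bounded support on (0, 1): model with a beta distribution.
bounded = rng.beta(2.0, 5.0, size=1000)
a, b, loc, scale = stats.beta.fit(bounded, floc=0, fscale=1)
print(f"beta fit:  a={a:.2f}, b={b:.2f}")

# Semi-bounded support on (0, inf): model with a gamma distribution.
semi_bounded = rng.gamma(shape=3.0, scale=2.0, size=1000)
k, loc, theta = stats.gamma.fit(semi_bounded, floc=0)
print(f"gamma fit: shape={k:.2f}, scale={theta:.2f}")

# L1-norm constrained data (points on the probability simplex) are naturally
# Dirichlet-distributed; maximum-likelihood fitting requires an iterative
# scheme, so here we only draw samples to show the constrained support.
simplex = rng.dirichlet(alpha=[2.0, 3.0, 5.0], size=5)
print("simplex rows sum to 1:", np.allclose(simplex.sum(axis=1), 1.0))
```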