A Shift Tolerant Dictionary Training Method

Traditional dictionary learning methods work by vectorizing long signals and training on frames of the data, thereby restricting the learned atoms to be time-localized. We study a shift-tolerant approach to dictionary learning, in which features are learned by training on shifted versions of the signal of interest. We propose an optimized Subspace Clustering learning method to accommodate the larger training set that shift-tolerant training requires. We show up to a 50% improvement in sparsity on training data for both the Subspace Clustering method [1] and the K-SVD method [8] with only a few integer shifts. We also demonstrate improved sparsity on data outside the training set, and show that the improved sparsity translates into improved source separation of instantaneous audio mixtures.
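As a rough illustration of the shift-tolerant training idea (not the paper's Subspace Clustering algorithm), the sketch below builds a training matrix from frames of integer-shifted copies of a signal and fits a dictionary with scikit-learn's DictionaryLearning as a generic stand-in for K-SVD or Subspace Clustering; the helper name, frame length, shifts, and sparsity target are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning


def shift_augmented_frames(signal, frame_len, shifts):
    """Hypothetical helper: stack frames taken from integer-shifted copies
    of `signal`, so the learner sees the same events at several offsets."""
    frames = []
    for s in shifts:
        shifted = np.roll(signal, s)               # circular integer shift
        n_frames = len(shifted) // frame_len
        frames.append(shifted[:n_frames * frame_len].reshape(n_frames, frame_len))
    return np.vstack(frames)                       # (n_shifts * n_frames, frame_len)


# Toy usage on a random placeholder signal (the settings below are
# illustrative choices, not the paper's experimental configuration).
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
X = shift_augmented_frames(x, frame_len=64, shifts=[0, 1, 2, 3])

# DictionaryLearning stands in here for K-SVD / Subspace Clustering.
learner = DictionaryLearning(n_components=128,
                             transform_algorithm="omp",
                             transform_n_nonzero_coefs=5,
                             max_iter=10)
codes = learner.fit_transform(X)      # sparse codes of the training frames
D = learner.components_               # learned overcomplete dictionary (128 x 64)
print(D.shape, float(np.count_nonzero(codes, axis=1).mean()))
```

Training on the shifted copies enlarges the training set by the number of shifts, which is why the paper optimizes the Subspace Clustering learner to handle the larger set.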

[1] Ahmed H. Tewfik, et al., "A Novel Subspace Clustering Method for Dictionary Design," ICA, 2009.

[2] Pierre Vandergheynst, et al., "Shift-invariant dictionary learning for sparse representations: Extending K-SVD," 16th European Signal Processing Conference (EUSIPCO), 2008.

[3] S. Mallat, A Wavelet Tour of Signal Processing, 1998.

[4] Rémi Gribonval, et al., "Performance measurement in blind audio source separation," IEEE Transactions on Audio, Speech, and Language Processing, 2006.

[5] Ahmed H. Tewfik, et al., "Blind source separation using monochannel overcomplete dictionaries," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2008.

[6] Ahmed H. Tewfik, et al., "Two Improved Sparse Decomposition Methods for Blind Source Separation," ICA, 2007.

[7] P. Laguna, et al., "Signal Processing," Yearbook of Medical Informatics, 2002.

[8] A. Bruckstein, et al., "K-SVD: An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation," 2005.

[9] Shang-Liang Chen, et al., "Orthogonal least squares learning algorithm for radial basis function networks," IEEE Transactions on Neural Networks, 1991.