Learning a non-structured, overcomplete and sparsifying transform

Transform learning has been introduced and studied in [1], [2], [3], and [4]. An optimal learning method for a structured, overcomplete transform matrix was proposed in [5]. However, several issues (optimality, convergence, and computational complexity) related to learning an incoherent, well-conditioned, non-structured, overcomplete sparsifying transform remain open. Let $X \in \mathbb{R}^{N \times L}$ be a data matrix whose columns are the data samples $x_i \in \mathbb{R}^{N}$, $i \in I = \{1, 2, \ldots, L\}$. Under the sparsifying transform model [1], the problem of learning the overcomplete transform matrix $A \in \mathbb{R}^{M \times N}$ ($M > N$) takes the form of a regularized sparse-coding problem.
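For concreteness, a hedged sketch of this formulation, following the generic objective used in the cited transform-learning works (the regularizer $R(A)$ and the column-wise sparsity constraint below are assumptions about the exact form intended here), is
$$\min_{A \in \mathbb{R}^{M \times N},\; S \in \mathbb{R}^{M \times L}} \; \|AX - S\|_F^2 + \lambda\, R(A) \quad \text{subject to} \quad \|s_i\|_0 \le s, \;\; i \in I,$$
where the columns $s_i$ of $S$ are the sparse codes of the samples $x_i$, $s$ is the target sparsity level, and $R(A)$ is a regularizer that rules out trivial and badly conditioned solutions, for instance a term such as $\mu \|A\|_F^2 - \log\det(A^{\top}A)$ used in related square and overcomplete formulations.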

[1]  M. Elad et al., "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation," IEEE Transactions on Signal Processing, 2006.

[2]  Y. Bresler et al., "Structured Overcomplete Sparsifying Transform Learning with Convergence Guarantees and Applications," International Journal of Computer Vision, 2015.

[3]  Y. Bresler et al., "Doubly Sparse Transform Learning with Convergence Guarantees," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.

[4]  Y. Bresler et al., "Learning Sparsifying Transforms for Image Processing," 2012 19th IEEE International Conference on Image Processing (ICIP), 2012.

[5]  Y. Bresler et al., "$\ell_0$ Sparsifying Transform Learning with Efficient Optimal Updates and Convergence Guarantees," IEEE Transactions on Signal Processing, 2015.

[6]  Y. Bresler et al., "Learning Overcomplete Sparsifying Transforms for Signal Processing," 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.

[7]  Y. Bresler et al., "Closed-Form Solutions Within Sparsifying Transform Learning," 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.