Robust transform learning

Dictionary learning follows a synthesis framework: the dictionary is learnt such that the data can be synthesized (re-generated) from the coefficients. Transform learning, on the other hand, is based on the analysis formulation: it learns a transform that operates on the data to generate the coefficients. The basic formulations of both dictionary learning and transform learning employ a Euclidean cost function for the data-fidelity term. Such cost functions are optimal when the noise / error in the system is Normally distributed, but not in the presence of sparse but large outliers. For such heavy-tailed noise distributions, minimizing the absolute distance is more robust. While there are several papers on robust dictionary learning, this work introduces robust transform learning. Experiments on image analysis and impulse denoising demonstrate the superiority of our method.
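The robustness claim above can be illustrated with a minimal sketch (not the paper's algorithm): fitting a linear model under sparse, large impulse noise, comparing the ordinary least-squares (Euclidean) fit against a least-absolute-deviations fit computed by iteratively reweighted least squares, the classical scheme of the kind cited in [18]. All names and parameter values here are illustrative.

```python
# Sketch: l2 (least squares) vs l1 (least absolute deviations via IRLS)
# estimation when a few observations carry large impulse outliers.
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, 2))          # measurement matrix
x_true = np.array([1.0, -2.0])           # ground-truth parameters
y = A @ x_true
y[rng.choice(n, 10, replace=False)] += 50.0  # sparse but large outliers

# Euclidean data fidelity: closed-form least-squares solution
x_l2 = np.linalg.lstsq(A, y, rcond=None)[0]

# Absolute-distance fidelity: IRLS for min_x ||A x - y||_1,
# i.e. weighted least squares with weights 1/|residual|
x_l1 = x_l2.copy()
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(A @ x_l1 - y), 1e-8)
    Aw = A * w[:, None]                  # W A
    x_l1 = np.linalg.solve(A.T @ Aw, Aw.T @ y)

print("l2 error:", np.linalg.norm(x_l2 - x_true))  # pulled off by outliers
print("l1 error:", np.linalg.norm(x_l1 - x_true))  # largely ignores outliers
```

The l1 fit recovers the true parameters far more accurately because the absolute-value penalty grows only linearly in the outlier residuals, whereas the squared penalty lets a handful of large errors dominate the objective.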

[1] Yoram Bresler et al., "Learning Sparsifying Transforms," IEEE Transactions on Signal Processing, 2013.

[2] Jeffrey A. Fessler et al., "A convergence proof of the split Bregman method for regularized least-squares problems," arXiv, 2014.

[3] Daniel Rueckert et al., "Dictionary Learning and Time Sparsity for Dynamic MR Data Reconstruction," IEEE Transactions on Medical Imaging, 2014.

[4] Angshul Majumdar et al., "Nuclear norm regularized robust dictionary learning for energy disaggregation," 24th European Signal Processing Conference (EUSIPCO), 2016.

[5] Mike E. Davies et al., "Dictionary Learning for Sparse Approximations With the Majorization Method," IEEE Transactions on Signal Processing, 2009.

[6] Michael Elad et al., "Dictionaries for Sparse Representation Modeling," Proceedings of the IEEE, 2010.

[7] Yoram Bresler et al., "Efficient Blind Compressed Sensing Using Sparsifying Transforms with Convergence Guarantees and Application to Magnetic Resonance Imaging," SIAM Journal on Imaging Sciences, 2015.

[8] Guillermo Sapiro et al., "Discriminative learned dictionaries for local image analysis," IEEE Conference on Computer Vision and Pattern Recognition, 2008.

[9] I. Barrodale et al., "An Improved Algorithm for Discrete $l_1$ Linear Approximation," 1973.

[10] Yoram Bresler et al., "Online Sparsifying Transform Learning, Part I: Algorithms," IEEE Journal of Selected Topics in Signal Processing, 2015.

[11] Bidyut Baran Chaudhuri et al., "Databases for research on recognition of handwritten characters of Indian scripts," Eighth International Conference on Document Analysis and Recognition (ICDAR), 2005.

[12] Rabab Kreidieh Ward et al., "Robust dictionary learning: Application to signal disaggregation," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016.

[13] Michael Elad et al., "Image Sequence Denoising via Sparse and Redundant Representations," IEEE Transactions on Image Processing, 2009.

[14] Tom Goldstein et al., "The Split Bregman Method for L1-Regularized Problems," SIAM Journal on Imaging Sciences, 2009.

[15] David J. Field et al., "Sparse coding with an overcomplete basis set: A strategy employed by V1?," Vision Research, 1997.

[16] Guillermo Sapiro et al., "Supervised Dictionary Learning," NIPS, 2008.

[17] David Dagan Feng et al., "Dictionary learning based impulse noise removal via L1-L1 minimization," Signal Processing, 2013.

[18] E. Schlossmacher, "An Iterative Technique for Absolute Deviations Curve Fitting," 1973.

[19] H. Sebastian Seung et al., "Learning the parts of objects by non-negative matrix factorization," Nature, 1999.

[20] Frederick R. Forst, "On robust estimation of the location parameter," 1980.

[21] J. Branham, "Alternatives to least squares," 1982.

[22] Michael D. Gordon et al., "Regularized Least Absolute Deviations Regression and an Efficient Algorithm for Parameter Tuning," Sixth International Conference on Data Mining (ICDM), 2006.

[23] Yoram Bresler et al., "Closed-form solutions within sparsifying transform learning," IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.

[24] Bidyut Baran Chaudhuri et al., "Handwritten Numeral Databases of Indian Scripts and Multistage Recognition of Mixed Numerals," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009.

[25] Yoram Bresler et al., "Blind compressed sensing using sparsifying transforms," International Conference on Sampling Theory and Applications (SampTA), 2015.

[26] Yoram Bresler et al., "Online Sparsifying Transform Learning, Part II: Convergence Analysis," IEEE Journal of Selected Topics in Signal Processing, 2015.

[27] Chang-Hwan Son et al., "Local Learned Dictionaries Optimized to Edge Orientation for Inverse Halftoning," IEEE Transactions on Image Processing, 2014.

[28] Chandra Sekhar Seelamantula et al., "ℓ1-K-SVD: A robust dictionary learning algorithm with simultaneous update," Signal Processing, 2014.

[29] Rama Chellappa et al., "Analysis sparse coding models for image-based classification," IEEE International Conference on Image Processing (ICIP), 2014.

[30] Rong Jin et al., "Unifying discriminative visual codebook generation with classifier training for object category recognition," IEEE Conference on Computer Vision and Pattern Recognition, 2008.

[31] Kjersti Engan et al., "Method of optimal directions for frame design," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 1999.

[32] Mark A. Lukas et al., "An L1 estimation algorithm with degeneracy and linear constraints," 2002.

[33] G. O. Wesolowsky, "A new descent algorithm for the least absolute value regression problem," 1981.

[34] Gonzalo R. Arce et al., "A Maximum Likelihood Approach to Least Absolute Deviation Regression," EURASIP Journal on Advances in Signal Processing, 2004.