Reconstructable and Interpretable Representations for Time Series with Time-Skip Sparse Dictionary Learning

Summarizing time series signals into essential patterns that preserve their original characteristics is challenging. A good summary allows the original signal to be reconstructed, while the reduced data size saves storage space and in turn accelerates subsequent processing. This paper proposes a dictionary learning method for time series signals with a mechanism that skips sparse codes along the time axis, exploiting temporal redundancy. The proposed method yields compact and accurate representations of time series. Experimental results demonstrate that the proposed method achieves low errors in both signal reconstruction and classification while reducing the representation size. The skipping mechanism increased the signal reconstruction error by only about 5% of the error magnitude, while the representation shrank to roughly one eighteenth of its original size. The classification accuracy of the proposed methods is consistently higher than that of the state-of-the-art dictionary learning method for time series. The proposed idea is thus an effective option when applying dictionary learning, a fundamental technique in signal processing with a wide range of applications.
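To make the time-skip idea concrete, the sketch below encodes a toy signal window by window with a greedy sparse coder, and reuses the previous window's code whenever it still reconstructs the current window acceptably, so no new code needs to be stored. This is an illustrative reading of the mechanism, not the paper's implementation: the random dictionary `D`, the simplified OMP routine, and the 0.3 relative-error skip threshold are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time series: two sinusoids plus mild noise.
t = np.arange(512)
signal = np.sin(0.1 * t) + 0.5 * np.sin(0.31 * t) + 0.05 * rng.standard_normal(t.size)

# Hypothetical setup: a fixed random dictionary of 32 unit-norm atoms
# over non-overlapping windows of length 16 (a learned dictionary would
# normally replace this).
win, n_atoms = 16, 32
D = rng.standard_normal((win, n_atoms))
D /= np.linalg.norm(D, axis=0)

def omp(D, x, k=3):
    """Simplified orthogonal matching pursuit: greedily pick k atoms,
    refitting coefficients by least squares at each step."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

# Time-skip mechanism (illustrative): if the previous code already
# reconstructs the current window within 30% relative error, reuse it
# instead of storing a new sparse code.
windows = signal.reshape(-1, win)
codes, stored, prev = [], 0, None
for x in windows:
    if prev is not None and np.linalg.norm(x - D @ prev) < 0.3 * np.linalg.norm(x):
        codes.append(prev)      # skipped: no new code stored
    else:
        prev = omp(D, x)
        codes.append(prev)
        stored += 1

recon = np.concatenate([D @ c for c in codes])
err = np.linalg.norm(signal - recon) / np.linalg.norm(signal)
print(f"stored {stored} of {len(windows)} codes, relative error {err:.3f}")
```

The trade-off the abstract reports (a small reconstruction-error penalty for a much smaller representation) corresponds here to the gap between `stored` and the total number of windows: each skipped window costs some accuracy but contributes nothing to the stored representation.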
