Matrix factorization methods can be viewed as decomposing each input example into a linear combination of basis vectors: X = BS, where X ∈ R^(d×m), B ∈ R^(d×n), and S ∈ R^(n×m). Here d is the dimension of the data, m is the number of examples, and n is the number of basis vectors; each column of X corresponds to an individual input example, and each column of B corresponds to an individual basis vector. Therefore, each column of S holds the "coefficients" of the basis vectors for the corresponding input example. Algorithms such as PCA can be viewed as matrix factorization methods. There are several variants of matrix factorization, such as sparse matrix factorization (e.g., assuming the entries of B and S are sparse), factor analysis, and probabilistic matrix factorization. Other notable matrix factorization methods include independent component analysis (ICA) and non-negative matrix factorization (NMF).
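As a minimal sketch of the X = BS decomposition, the snippet below uses scikit-learn's NMF (one of the variants mentioned above) on a small random matrix; the data sizes (d = 4, m = 6, n = 2) are arbitrary choices for illustration, and note that scikit-learn's convention treats rows as examples, so we factorize the transpose.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy data: d = 4 features, m = 6 examples; columns of X are examples.
rng = np.random.default_rng(0)
X = rng.random((4, 6))

# scikit-learn factorizes rows-as-examples: X.T ≈ W H.
# Transposing back gives X ≈ H.T W.T, so B = H.T and S = W.T.
model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(X.T)   # shape (m, n): coefficients per example
H = model.components_          # shape (n, d): basis vectors as rows

B = H.T                        # (d, n): each column is a basis vector
S = W.T                        # (n, m): column j holds the coefficients of example j

X_hat = B @ S                  # approximate reconstruction of X
print(B.shape, S.shape, np.linalg.norm(X - X_hat))
```

Swapping NMF for PCA (or a sparse coding solver) changes the constraints placed on B and S, but the decomposition X ≈ BS has the same shape in each case.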