Incremental Learning in the Non-negative Matrix Factorization
The non-negative matrix factorization (NMF) factorizes strictly positive data into strictly positive activations and base vectors. In its standard form, the input data must be presented as a single batch, so the NMF can only represent the input space contained in that batch and cannot adapt to changes afterwards. In this paper we propose a method to overcome this limitation and enable the NMF to incrementally and continuously adapt to new data. The proposed algorithm covers the (possibly growing) input space without placing further constraints on the algorithm. We show that with our method the NMF approximates the dimensionality of a dataset and is therefore capable of determining the required number of base vectors automatically.
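To make the setting concrete, the sketch below shows the standard batch NMF via Lee–Seung multiplicative updates, plus a hypothetical incremental step that fits activations for a new sample with the bases fixed and then nudges the bases toward that sample. The blending rate `eta` and the function names are illustrative assumptions; this is not the paper's actual algorithm, and in particular the mechanism for growing the number of base vectors is not reproduced here.

```python
import numpy as np

def nmf_batch(V, r, n_iter=300, seed=0):
    """Batch NMF: find non-negative W (bases) and H (activations)
    minimizing ||V - W H||_F^2 via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    eps = 1e-9  # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def incremental_step(W, v, n_iter=50, eta=0.1, seed=0):
    """Hypothetical incremental update (assumption, not the paper's method):
    1) fit the activation h of a new sample v with the bases W held fixed;
    2) blend W toward its multiplicative update for this single sample."""
    eps = 1e-9
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    rng = np.random.default_rng(seed)
    h = rng.random((W.shape[1], 1)) + 1e-3
    for _ in range(n_iter):
        h *= (W.T @ v) / (W.T @ W @ h + eps)
    # damped multiplicative adaptation of the bases toward the new sample
    W_new = W * (v @ h.T + eps) / (W @ h @ h.T + eps)
    W = (1.0 - eta) * W + eta * W_new
    return W, h
```

A typical usage pattern would be to train `nmf_batch` on an initial batch and then call `incremental_step` once per incoming sample; because all updates are multiplicative on positive initializations, `W` and `h` stay non-negative throughout.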