Memory-efficient parallel computation of tensor and matrix products for big tensor decomposition
Nikos D. Sidiropoulos | George Karypis | Niranjay Ravindran | Shaden Smith
[1] J. Kruskal. Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics, 1977.
[2] Richard A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis, 1970.
[3] J. Chang, et al. Analysis of individual differences in multidimensional scaling via an N-way generalization of "Eckart-Young" decomposition, 1970.
[4] Nikos D. Sidiropoulos, et al. Parallel Randomly Compressed Cubes: A scalable distributed architecture for big tensor decomposition, 2014, IEEE Signal Processing Magazine.
[5] Tamara G. Kolda, et al. Scalable Tensor Decompositions for Multi-aspect Data Mining, 2008, Eighth IEEE International Conference on Data Mining.
[6] Nikos D. Sidiropoulos, et al. ParCube: Sparse Parallelizable Tensor Decompositions, 2012, ECML/PKDD.
[7] Anastasios Kyrillidis, et al. Multi-Way Compressed Sensing for Sparse Low-Rank Tensors, 2012, IEEE Signal Processing Letters.
[8] Christos Faloutsos, et al. GigaTensor: scaling tensor analysis up by 100 times - algorithms and discoveries, 2012, KDD.
[9] N. Sidiropoulos, et al. On the uniqueness of multilinear decomposition of N-way arrays, 2000.
[10] A. Stegeman, et al. On Kruskal's uniqueness condition for the Candecomp/Parafac decomposition, 2007.
[11] Tamara G. Kolda, et al. Efficient MATLAB Computations with Sparse and Factored Tensors, 2007, SIAM J. Sci. Comput.
[12] Estevam R. Hruschka, et al. Toward an Architecture for Never-Ending Language Learning, 2010, AAAI.