Scalable Kernel K-Means Clustering with Nyström Approximation: Relative-Error Bounds
[1] Petros Drineas,et al. Structural Properties Underlying High-Quality Randomized Numerical Linear Algebra Algorithms , 2016, Handbook of Big Data.
[2] Sariel Har-Peled,et al. On coresets for k-means and k-median clustering , 2004, STOC '04.
[3] A. Hoffman,et al. Lower bounds for the partitioning of graphs , 1973 .
[4] A. Raftery,et al. Model‐based clustering for social networks , 2007 .
[5] Christos Boutsidis,et al. Near Optimal Column-Based Matrix Reconstruction , 2011, 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science.
[6] Dean P. Foster,et al. Faster Ridge Regression via the Subsampled Randomized Hadamard Transform , 2013, NIPS.
[7] Ameet Talwalkar,et al. MLlib: Machine Learning in Apache Spark , 2015, J. Mach. Learn. Res..
[8] M. Fiedler. Algebraic connectivity of graphs , 1973 .
[9] Oliver Rübel,et al. A Multi-Platform Evaluation of the Randomized CX Low-Rank Matrix Factorization in Spark , 2016, 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW).
[10] Edward Y. Chang,et al. Parallel Spectral Clustering in Distributed Systems , 2011, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[11] James T. Kwok,et al. Large-Scale Nyström Kernel Matrix Approximation Using Randomized SVD , 2015, IEEE Transactions on Neural Networks and Learning Systems.
[12] Zhihua Zhang,et al. Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling , 2013, J. Mach. Learn. Res..
[13] Ke Chen,et al. On Coresets for k-Median and k-Means Clustering in Metric and Euclidean Spaces and Their Applications , 2009, SIAM J. Comput..
[14] Michael J. Franklin,et al. Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing , 2012, NSDI.
[15] Antonio Torralba,et al. Spectral Hashing , 2008, NIPS.
[16] James T. Kwok,et al. Clustered Nyström Method for Large Scale Manifold Learning and Dimension Reduction , 2010, IEEE Transactions on Neural Networks.
[17] Dean P. Foster,et al. Clustering Methods for Collaborative Filtering , 1998, AAAI 1998.
[18] Benjamin Recht,et al. Random Features for Large-Scale Kernel Machines , 2007, NIPS.
[19] Bernhard Schölkopf,et al. Nonlinear Component Analysis as a Kernel Eigenvalue Problem , 1998, Neural Computation.
[20] Gary L. Miller,et al. On the performance of spectral graph partitioning methods , 1995, SODA '95.
[21] Sergei Vassilvitskii,et al. k-means++: the advantages of careful seeding , 2007, SODA '07.
[22] Michael B. Cohen,et al. Dimensionality Reduction for k-Means Clustering and Low Rank Approximation , 2014, STOC.
[23] Christos Boutsidis,et al. Unsupervised Feature Selection for the k-means Clustering Problem , 2009, NIPS.
[24] Shang-Hua Teng,et al. Spectral partitioning works: planar graphs and finite element meshes , 1996, Proceedings of 37th Conference on Foundations of Computer Science.
[25] Dan Feldman,et al. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering , 2013, SODA.
[26] Christos Boutsidis,et al. Randomized Dimensionality Reduction for k-Means Clustering , 2011, IEEE Transactions on Information Theory.
[27] David P. Woodruff,et al. Low rank approximation and regression in input sparsity time , 2012, STOC '13.
[28] Inderjit S. Dhillon,et al. Kernel k-means: spectral clustering and normalized cuts , 2004, KDD.
[29] Ravishankar Krishnaswamy,et al. The Hardness of Approximation of Euclidean k-Means , 2015, SoCG.
[30] Petros Drineas,et al. On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning , 2005, J. Mach. Learn. Res..
[31] Sanjoy Dasgupta,et al. Random projection trees for vector quantization , 2008, 2008 46th Annual Allerton Conference on Communication, Control, and Computing.
[32] Rong Jin,et al. Approximate kernel k-means: solution to large scale kernel clustering , 2011, KDD.
[33] Shusen Wang,et al. Sketched Ridge Regression: Optimization Perspective, Statistical Perspective, and Model Averaging , 2017, ICML.
[34] Chris H. Q. Ding,et al. Spectral Relaxation for K-means Clustering , 2001, NIPS.
[35] Michael W. Mahoney,et al. Fast Randomized Kernel Ridge Regression with Statistical Guarantees , 2015, NIPS.
[36] Joydeep Ghosh,et al. Cluster Ensembles --- A Knowledge Reuse Framework for Combining Multiple Partitions , 2002, J. Mach. Learn. Res..
[37] M. Rudelson,et al. The Littlewood-Offord problem and invertibility of random matrices , 2007, math/0703503.
[38] W. B. Johnson,et al. Extensions of Lipschitz mappings into Hilbert space , 1984 .
[39] Michael W. Mahoney,et al. Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression , 2012, STOC '13.
[40] Meena Mahajan,et al. The Planar k-means Problem is NP-hard , 2009 .
[41] Christos Boutsidis,et al. Random Projections for k-means Clustering , 2010, NIPS.
[42] J. Carroll,et al. A Feature-Based Approach to Market Segmentation via Overlapping K-Centroids Clustering , 1997 .
[43] Robert M. Haralick,et al. Image segmentation techniques , 1985, Comput. Vis. Graph. Image Process..
[44] Huy L. Nguyen,et al. OSNAP: Faster Numerical Linear Algebra Algorithms via Sparser Subspace Embeddings , 2012, 2013 IEEE 54th Annual Symposium on Foundations of Computer Science.
[45] David M. Mount,et al. A local search approximation algorithm for k-means clustering , 2002, SCG '02.
[46] David A. Forsyth,et al. Who's in the Picture? , 2004, NIPS.
[47] Jitendra Malik,et al. Spectral grouping using the Nyström method , 2004, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[48] Cameron Musco,et al. Recursive Sampling for the Nyström Method , 2016, NIPS.
[49] Michael W. Mahoney,et al. Revisiting the Nyström Method for Improved Large-scale Machine Learning , 2013, J. Mach. Learn. Res..
[50] Inderjit S. Dhillon,et al. Memory Efficient Kernel Approximation , 2014, ICML.
[51] J. Cheeger. A lower bound for the smallest eigenvalue of the Laplacian , 1969 .
[52] F. Quimby. What's in a picture? , 1993, Laboratory animal science.
[53] David P. Woodruff,et al. Fast approximation of matrix coherence and statistical leverage , 2011, ICML.
[54] R. Sharan,et al. Cluster analysis and its applications to gene expression data , 2002, Ernst Schering Research Foundation workshop.
[55] Scott Shenker,et al. Spark: Cluster Computing with Working Sets , 2010, HotCloud.
[56] Michael Randolph Garey,et al. The complexity of the generalized Lloyd-Max problem , 1982, IEEE Trans. Inf. Theory.
[57] David P. Woodruff,et al. Improved Distributed Principal Component Analysis , 2014, NIPS.
[58] Okan Arikan. Compression of motion capture databases , 2006, ACM Trans. Graph..
[59] T. Tao,et al. Random Matrices: the Distribution of the Smallest Singular Values , 2009, 0903.0614.
[60] Chris H. Q. Ding,et al. On the Equivalence of Nonnegative Matrix Factorization and Spectral Clustering , 2005, SDM.
[61] S. P. Lloyd,et al. Least squares quantization in PCM , 1982, IEEE Trans. Inf. Theory.
[62] David P. Woodruff. Sketching as a Tool for Numerical Linear Algebra , 2014, Found. Trends Theor. Comput. Sci..
[63] Matthias W. Seeger,et al. Using the Nyström Method to Speed Up Kernel Machines , 2000, NIPS.
[64] Sanguthevar Rajasekaran,et al. Fast Algorithms for Constant Approximation k-Means Clustering , 2010, Trans. Mach. Learn. Data Min..
[65] Zhihua Zhang,et al. The Singular Value Decomposition, Applications and Beyond , 2015, ArXiv.
[66] Michael I. Jordan,et al. On Spectral Clustering: Analysis and an algorithm , 2001, NIPS.
[67] S. Muthukrishnan,et al. Relative-Error CUR Matrix Decompositions , 2007, SIAM J. Matrix Anal. Appl..
[68] VARUN CHANDOLA,et al. Anomaly detection: A survey , 2009, CSUR.
[69] Stephen Becker,et al. Randomized Clustered Nyström for Large-Scale Kernel Machines , 2016, AAAI.
[70] Zhihua Zhang,et al. Towards More Efficient SPSD Matrix Approximation and CUR Matrix Decomposition , 2015, J. Mach. Learn. Res..
[71] C. Ding,et al. On the Equivalence of Nonnegative Matrix Factorization and K-means - Spectral Clustering , 2005 .
[72] Amit Kumar,et al. A simple linear time (1 + ε)-approximation algorithm for k-means clustering in any dimensions , 2004, 45th Annual IEEE Symposium on Foundations of Computer Science.
[73] Rong Jin,et al. Improved Bounds for the Nyström Method With Application to Kernel Classification , 2011, IEEE Transactions on Information Theory.
[74] Nathan Halko,et al. Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions , 2009, SIAM Rev..
[75] Jitendra Malik,et al. Normalized cuts and image segmentation , 1997, Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
[76] Michael B. Cohen,et al. Input Sparsity Time Low-rank Approximation via Ridge Leverage Score Sampling , 2015, SODA.
[77] E. Nyström. Über die praktische Auflösung von Integralgleichungen mit Anwendungen auf Randwertaufgaben , 1930 .
[78] Zhihua Zhang,et al. SPSD Matrix Approximation via Column Selection: Theories, Algorithms, and Extensions , 2014, J. Mach. Learn. Res..
[79] Joel A. Tropp,et al. Improved Analysis of the subsampled Randomized Hadamard Transform , 2010, Adv. Data Sci. Adapt. Anal..
[80] Roman Vershynin,et al. Introduction to the non-asymptotic analysis of random matrices , 2010, Compressed Sensing.
[81] Ameet Talwalkar,et al. Sampling Methods for the Nyström Method , 2012, J. Mach. Learn. Res..
[82] Anthony Widjaja,et al. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond , 2003, IEEE Transactions on Neural Networks.
[83] Petros Drineas,et al. Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication , 2006, SIAM J. Comput..
[84] S. Muthukrishnan,et al. Faster least squares approximation , 2007, Numerische Mathematik.
[85] Ameet Talwalkar,et al. On the Impact of Kernel Approximation on Learning Accuracy , 2010, AISTATS.
[86] Rong Jin,et al. Efficient Kernel Clustering Using Random Fourier Features , 2012, 2012 IEEE 12th International Conference on Data Mining.
[87] Michael W. Mahoney. Randomized Algorithms for Matrices and Data , 2011, Found. Trends Mach. Learn..
[88] Pierre Hansen,et al. NP-hardness of Euclidean sum-of-squares clustering , 2008, Machine Learning.
[89] Volkan Cevher,et al. Fixed-Rank Approximation of a Positive-Semidefinite Matrix from Streaming Data , 2017, NIPS.
[90] J. Matoušek. On Approximate Geometric k-Clustering , 1999 .
[91] Rong Jin,et al. Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison , 2012, NIPS.
[92] Laurent Demanet,et al. Sublinear Randomized Algorithms for Skeleton Decompositions , 2011, SIAM J. Matrix Anal. Appl..
[93] Francis R. Bach,et al. Sharp analysis of low-rank kernel matrix approximations , 2012, COLT.
[94] Yoshua Bengio,et al. Gradient-based learning applied to document recognition , 1998, Proc. IEEE.