Variational Information Bottleneck for Unsupervised Clustering: Deep Gaussian Mixture Embedding
Abdellatif Zaidi | George Arvanitakis | Yiğit Uğur
[1] Qiang Liu et al. A Survey of Clustering With Deep Learning: From the Perspective of Network Architecture, 2018, IEEE Access.
[2] Naftali Tishby et al. The information bottleneck method, 2000, ArXiv.
[3] Huachun Tan et al. Variational Deep Embedding: An Unsupervised and Generative Approach to Clustering, 2016, IJCAI.
[4] Ali Farhadi et al. Unsupervised Deep Embedding for Clustering Analysis, 2015, ICML.
[5] Jian Sun et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Stefano Soatto et al. Information Dropout: Learning Optimal Representations Through Noisy Computation, 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[7] Inaki Estella Aguerri et al. Distributed Variational Representation Learning, 2018, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[8] John R. Hershey et al. Approximating the Kullback Leibler Divergence Between Gaussian Mixture Models, 2007, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07).
[9] Murray Shanahan et al. Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders, 2016, ArXiv.
[10] Yoshua Bengio et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[11] Pascal Vincent et al. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion, 2010, J. Mach. Learn. Res.
[12] Jimmy Ba et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[13] Yiming Yang et al. RCV1: A New Benchmark Collection for Text Categorization Research, 2004, J. Mach. Learn. Res.
[14] Michael Rabadi et al. Kernel Methods for Machine Learning, 2015.
[15] Jianping Yin et al. Improved Deep Embedded Clustering with Local Structure Preservation, 2017, IJCAI.
[16] Max Welling et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[17] D. Rubin et al. Maximum likelihood from incomplete data via the EM algorithm (with discussion), 1977.
[18] Heng Tao Shen et al. Principal Component Analysis, 2009, Encyclopedia of Biometrics.
[19] Shlomo Shamai et al. On the Information Bottleneck Problems: Models, Connections, Applications and Information Theoretic Views, 2020, Entropy.
[20] Naftali Tishby et al. Opening the Black Box of Deep Neural Networks via Information, 2017, ArXiv.
[21] Sam T. Roweis et al. EM Algorithms for PCA and SPCA, 1997, NIPS.
[22] Honglak Lee et al. An Analysis of Single-Layer Networks in Unsupervised Feature Learning, 2011, AISTATS.
[23] Chris H. Q. Ding et al. K-means clustering via principal component analysis, 2004, ICML.
[24] Inaki Estella Aguerri et al. Distributed Information Bottleneck Method for Discrete and Gaussian Sources, 2017, ArXiv.
[25] Daan Wierstra et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models, 2014, ICML.
[26] Noam Slonim et al. The Information Bottleneck: Theory and Applications, 2006.
[27] Karl Pearson. LIII. On lines and planes of closest fit to systems of points in space, 1901.
[28] Joshua Zhexue Huang et al. Extensions to the k-Means Algorithm for Clustering Large Data Sets with Categorical Values, 1998, Data Mining and Knowledge Discovery.
[29] Geoffrey E. Hinton et al. Visualizing Data using t-SNE, 2008.
[30] Naftali Tishby et al. Document clustering using word clusters via the information bottleneck method, 2000, SIGIR '00.
[31] J. A. Hartigan et al. A k-means clustering algorithm, 1979.
[32] Alexander A. Alemi et al. Deep Variational Information Bottleneck, 2017, ICLR.
[33] D. Sculley et al. Web-scale k-means clustering, 2010, WWW '10.