Michal Valko | Eva L. Dyer | William Gray-Roncal | Mohammad Gheshlaghi Azar | Kiran Bhaskaran-Nair | Keith B. Hengen | Mehdi Azabou | Max Dabagia | Chi-Heng Lin | Ran Liu | Erik C. Johnson
[1] Sung Ju Hwang, et al. Self-supervised Label Augmentation via Input Transformations, 2019, ICML.
[2] Sergey Levine, et al. Time-Contrastive Networks: Self-Supervised Learning from Video, 2017, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[3] Aapo Hyvärinen, et al. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models, 2010, AISTATS.
[4] Oncel Tuzel, et al. Subject-Aware Contrastive Learning for Biosignals, 2020, arXiv.
[5] Bharath Hariharan, et al. Extending and Analyzing Self-Supervised Learning Across Domains, 2020, ECCV.
[6] Ronald M. Summers, et al. Anatomy-specific classification of medical images using deep convolutional nets, 2015, 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI).
[7] Chao Zhang, et al. Self-Adaptive Training: Bridging the Supervised and Self-Supervised Learning, 2021, arXiv.
[8] Abhinav Gupta, et al. Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases, 2020, NeurIPS.
[9] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[10] Stefano Ermon, et al. Multi-label Contrastive Predictive Coding, 2020, NeurIPS.
[11] Luis Perez, et al. The Effectiveness of Data Augmentation in Image Classification using Deep Learning, 2017, arXiv.
[12] Daniel Guo, et al. Bootstrap Latent-Predictive Representations for Multitask Reinforcement Learning, 2020, ICML.
[13] Robert P. Sheridan, et al. Time-Split Cross-Validation as a Method for Estimating the Goodness of Prospective Prediction, 2013, J. Chem. Inf. Model.
[14] Cordelia Schmid, et al. What Makes for Good Views for Contrastive Learning?, 2020, NeurIPS.
[15] Alexei A. Efros, et al. What Should Not Be Contrastive in Contrastive Learning, 2020, ICLR.
[16] Ching-Yao Chuang, et al. Contrastive Learning with Hard Negative Samples, 2020, arXiv.
[17] Julien Mairal, et al. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020, NeurIPS.
[18] Xinlei Chen, et al. Understanding Self-supervised Learning with Dual Deep Networks, 2020, arXiv.
[19] Michal Valko, et al. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning, 2020, NeurIPS.
[20] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[21] Alan F. Smeaton, et al. Contrastive Representation Learning: A Framework and Review, 2020, IEEE Access.
[22] Qiang Liu, et al. Deep Graph Contrastive Representation Learning, 2020, arXiv.
[23] Eva L. Dyer, et al. A cryptography-based approach for movement decoding, 2016, Nature Biomedical Engineering.
[24] Alexei A. Efros, et al. Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Matthias H. Hennig, et al. SpikeInterface, a unified framework for spike sorting, 2019, bioRxiv.
[26] Phillip Isola, et al. Contrastive Multiview Coding, 2019, ECCV.
[27] Thomas Brox, et al. Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks, 2014, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[28] Kaiming He, et al. Momentum Contrast for Unsupervised Visual Representation Learning, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[29] Guo-Jun Qi, et al. Contrastive Learning With Stronger Augmentations, 2021, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[30] René Vidal, et al. Sparse Subspace Clustering: Algorithm, Theory, and Applications, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[31] Stephen D. Van Hooser, et al. Neuronal Firing Rate Homeostasis Is Inhibited by Sleep and Promoted by Wake, 2016, Cell.
[32] Nicu Sebe, et al. Whitening for Self-Supervised Representation Learning, 2020, ICML.
[33] Yu Wang, et al. Joint Contrastive Learning with Infinite Possibilities, 2020, NeurIPS.
[34] Frédo Durand, et al. Data augmentation using learned transforms for one-shot medical image segmentation, 2019, arXiv.
[35] Kevin M. Cury, et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning, 2018, Nature Neuroscience.
[36] Ali Etemad, et al. Self-Supervised Learning for ECG-Based Emotion Recognition, 2020, ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[37] Yann LeCun, et al. Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021, ICML.
[38] Razvan Pascanu, et al. BYOL works even without batch statistics, 2020, arXiv.
[39] Daniel Cremers, et al. Learning by Association - A Versatile Semi-Supervised Training Method for Neural Networks, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[40] Jonathan Tompson, et al. Learning Actionable Representations from Visual Observations, 2018, 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[41] Petros Drineas, et al. CUR matrix decompositions for improved data analysis, 2009, Proceedings of the National Academy of Sciences.
[42] Bernard Ghanem, et al. FLAG: Adversarial Data Augmentation for Graph Neural Networks, 2020, arXiv.
[43] Chengxu Zhuang, et al. Local Aggregation for Unsupervised Learning of Visual Embeddings, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[44] Xue-Xin Wei, et al. Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE, 2020, NeurIPS.
[45] Pascal Vincent, et al. Dropout as data augmentation, 2015, arXiv.
[46] R Devon Hjelm, et al. Data-Efficient Reinforcement Learning with Momentum Predictive Representations, 2020, arXiv.
[47] Matthijs Douze, et al. Deep Clustering for Unsupervised Learning of Visual Features, 2018, ECCV.
[48] Yannis Kalantidis, et al. Hard Negative Mixing for Contrastive Learning, 2020, NeurIPS.
[49] Pietro Liò, et al. Deep Graph Infomax, 2018, ICLR.
[50] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[51] Rajesh P. N. Rao, et al. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects, 1999, Nature Neuroscience.
[52] Jeremy F. Magland, et al. A Fully Automated Approach to Spike Sorting, 2017, Neuron.
[53] Aapo Hyvärinen, et al. Self-Supervised Representation Learning from Electroencephalography Signals, 2019, 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP).
[54] Xinlei Chen, et al. Exploring Simple Siamese Representation Learning, 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[55] Nikos Komodakis, et al. Unsupervised Representation Learning by Predicting Image Rotations, 2018, ICLR.
[56] Oriol Vinyals, et al. Representation Learning with Contrastive Predictive Coding, 2018, arXiv.
[57] Laurens van der Maaten, et al. Self-Supervised Learning of Pretext-Invariant Representations, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[58] Geoffrey E. Hinton, et al. A Simple Framework for Contrastive Learning of Visual Representations, 2020, ICML.
[59] Alexei A. Efros, et al. Unsupervised Visual Representation Learning by Context Prediction, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[60] Ralf Wessel, et al. Cortical Circuit Dynamics Are Homeostatically Tuned to Criticality In Vivo, 2019, Neuron.
[61] Aapo Hyvärinen, et al. Uncovering the structure of clinical EEG signals with self-supervised learning, 2020, Journal of Neural Engineering.