[1] Preetum Nakkiran, et al. Distributional Generalization: A New Kind of Generalization, 2020, arXiv.
[2] Simon Kornblith, et al. Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth, 2021, ICLR.
[3] Liwei Wang, et al. Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation, 2018, NeurIPS.
[4] Hao Li, et al. Visualizing the Loss Landscape of Neural Nets, 2018, NeurIPS.
[5] Georg Heigold, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, 2021, ICLR.
[6] Fred A. Hamprecht, et al. Essentially No Barriers in Neural Network Energy Landscape, 2018, ICML.
[7] Bolei Zhou, et al. Learning Deep Features for Scene Recognition using Places Database, 2014, NIPS.
[8] Geoffrey E. Hinton, et al. Similarity of Neural Network Representations Revisited, 2019, ICML.
[9] Gintare Karolina Dziugaite, et al. Linear Mode Connectivity and the Lottery Ticket Hypothesis, 2020, ICML.
[10] Geoffrey E. Hinton, et al. A Simple Framework for Contrastive Learning of Visual Representations, 2020, ICML.
[11] Matthias Bethge, et al. On the surprising similarities between supervised and self-supervised models, 2020, arXiv.
[12] Behnam Neyshabur, et al. The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers, 2021, ICLR.
[13] Julien Mairal, et al. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020, NeurIPS.
[14] Alec Radford, et al. Scaling Laws for Neural Language Models, 2020, arXiv.
[15] Mark Chen, et al. Generative Pretraining From Pixels, 2020, ICML.
[16] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Andrea Vedaldi, et al. Understanding Image Representations by Measuring Their Equivariance and Equivalence, 2014, International Journal of Computer Vision.
[18] Andrew Gordon Wilson, et al. Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs, 2018, NeurIPS.
[19] John Shawe-Taylor, et al. Canonical Correlation Analysis: An Overview with Application to Learning Methods, 2004, Neural Computation.
[20] Samy Bengio, et al. Insights on representational similarity in neural networks with canonical correlation, 2018, NeurIPS.
[21] Boaz Barak, et al. For Self-supervised Learning, Rationality Implies Generalization, Provably, 2021, ICLR.
[22] Nikolaus Kriegeskorte, et al. Representational similarity analysis - connecting the branches of systems neuroscience, 2008, Frontiers in Systems Neuroscience.
[23] Nick Cammarata, et al. An Overview of Early Vision in InceptionV1, 2020, Distill.
[24] Yoshua Bengio, et al. Understanding intermediate layers using linear classifier probes, 2016, ICLR.
[25] Geoffrey E. Hinton, et al. Learning internal representations by error propagation, 1986.
[26] Joan Bruna, et al. Topology and Geometry of Half-Rectified Network Optimization, 2017, ICLR.
[27] Julien Mairal, et al. Emerging Properties in Self-Supervised Vision Transformers, 2021, IEEE/CVF International Conference on Computer Vision (ICCV).
[28] Adrián Csiszárik, et al. Similarity and Matching of Neural Network Representations, 2021, NeurIPS.
[29] Hod Lipson, et al. Convergent Learning: Do different neural networks learn the same representations?, 2015, FE@NIPS.
[30] Alec Radford, et al. Multimodal Neurons in Artificial Neural Networks, 2021, Distill.
[31] Dimitris Achlioptas, et al. Bad Global Minima Exist and SGD Can Reach Them, 2020, NeurIPS.
[32] Jascha Sohl-Dickstein, et al. SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability, 2017, NIPS.