A high-bias, low-variance introduction to Machine Learning for physicists
Pankaj Mehta | Marin Bukov | Ching-Hao Wang | Alexandre G. R. Day | Clint Richardson | Charles K. Fisher | David J. Schwab
[1] Jens Eisert,et al. Reinforcement learning decoders for fault-tolerant quantum computation , 2018, Mach. Learn. Sci. Technol..
[2] Huichao Song,et al. Applications of deep learning to relativistic hydrodynamics , 2018, Physical Review Research.
[3] Ronald Davis,et al. Neural networks and deep learning , 2017 .
[4] T. Tohyama,et al. Characterization of photoexcited states in the half-filled one-dimensional extended Hubbard model assisted by machine learning , 2019, 1901.07900.
[5] Shotaro Shiba Funai,et al. Thermodynamics and Feature Extraction by Machine Learning , 2018, Physical Review Research.
[6] Mauro Paternostro,et al. Supervised learning of time-independent Hamiltonians for gate design , 2018, New Journal of Physics.
[7] Andrew M. Saxe,et al. High-dimensional dynamics of generalization error in neural networks , 2017, Neural Networks.
[8] A. Prakash,et al. Quantum gradient descent for linear systems and least squares , 2017, Physical Review A.
[9] Shikha Verma,et al. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy , 2019, Vikalpa: The Journal for Decision Makers.
[10] Stefan Steinerberger,et al. Fast Interpolation-based t-SNE for Improved Visualization of Single-Cell RNA-Seq Data , 2017, Nature Methods.
[11] Pankaj Mehta,et al. The Minimum Environmental Perturbation Principle: A New Perspective on Niche Theory , 2019, bioRxiv.
[12] Jun-Jie Chen,et al. Manipulation of Spin Dynamics by Deep Reinforcement Learning Agent. , 2019 .
[13] Timo Hyart,et al. Machine learning assisted measurement of local topological invariants , 2019, 1901.03346.
[14] Michael Wilson,et al. Machine learning determination of dynamical parameters: The Ising model case , 2018, Physical Review B.
[15] Roger G. Melko,et al. Reconstructing quantum states with generative models , 2018, Nature Machine Intelligence.
[16] X. Wang,et al. Spin-qubit noise spectroscopy from randomized benchmarking by supervised learning , 2018, Physical Review A.
[17] Tom Rudelius. Learning to inflate. A gradient ascent approach to random inflation , 2018, Journal of Cosmology and Astroparticle Physics.
[18] B. Nord,et al. DeepCMB: Lensing Reconstruction of the Cosmic Microwave Background with Deep Neural Networks , 2018, Astron. Comput..
[19] J. Rottler,et al. Correlations in the shear flow of athermal amorphous solids: a principal component analysis , 2018, Journal of Statistical Mechanics: Theory and Experiment.
[20] Christoph Becker,et al. Identifying quantum phase transitions using artificial neural networks on experimental data , 2018, Nature Physics.
[21] Gregor Kasieczka,et al. QCD or what? , 2018, SciPost Physics.
[22] Rafael Chaves,et al. Machine Learning Nonlocal Correlations. , 2018, Physical review letters.
[23] K. Birgitta Whaley,et al. Towards quantum machine learning with tensor networks , 2018, Quantum Science and Technology.
[24] Pankaj Mehta,et al. Glassy Phase of Optimal Quantum Control. , 2018, Physical review letters.
[25] Thomas R. Bromley,et al. Batched quantum state exponentiation and quantum Hebbian learning , 2018, Quantum Mach. Intell..
[26] Maria Schuld,et al. Quantum Machine Learning in Feature Hilbert Spaces. , 2018, Physical review letters.
[27] Hartmut Neven,et al. Universal quantum control through deep reinforcement learning , 2018, npj Quantum Information.
[28] N. Maskara,et al. Advantages of versatile neural-network decoding for topological codes , 2018, Physical Review A.
[29] Yan Lu,et al. Deep convolutional neural networks for eigenvalue problems in mechanics , 2018, International Journal for Numerical Methods in Engineering.
[30] Jacob M. Taylor,et al. Machine learning techniques for state recognition and auto-tuning in quantum dots , 2017, npj Quantum Information.
[31] Simone Severini,et al. Experimental learning of quantum states , 2017, Science Advances.
[32] Michael I. Jordan,et al. First-order methods almost always avoid saddle points: The case of vanishing step-sizes , 2019, NeurIPS.
[33] S. Lloyd,et al. Quantum gradient descent and Newton’s method for constrained polynomial optimization , 2016, New Journal of Physics.
[34] Partha P. Mitra,et al. Critical Behavior and Universality Classes for an Algorithmic Phase Transition in Sparse Reconstruction , 2015, Journal of Statistical Physics.
[35] Peter Gerstoft,et al. Machine Learning in Seismology: Turning Data into Insights , 2018, Seismological Research Letters.
[36] Vladimir V. Mazurenko,et al. Supervised learning approach for recognizing magnetic skyrmion phases , 2018, Physical Review B.
[37] Marin Bukov,et al. Reinforcement learning for autonomous preparation of Floquet-engineered states: Inverting the quantum Kapitza oscillator , 2018, Physical Review B.
[38] H. Puszkarski,et al. Ferromagnetic resonance in thin films studied via cross-validation of numerical solutions of the Smit-Beljers equation: Application to (Ga,Mn)As , 2018, Physical Review B.
[39] Yi-Nan Wang,et al. Learning non-Higgsable gauge groups in 4D F-theory , 2018, Journal of High Energy Physics.
[40] Yann LeCun,et al. Comparing dynamics: deep neural networks versus glassy systems , 2018, ICML.
[41] Enrique Solano,et al. Measurement-based adaptation protocol with quantum reinforcement learning , 2018, Quantum Reports.
[42] Vedran Dunjko,et al. Neural network operations and Suzuki–Trotter evolution of neural network states , 2018, International Journal of Quantum Information.
[43] Hui Zhai,et al. Machine learning of frustrated classical spin models (II): Kernel principal component analysis , 2018, Frontiers of Physics.
[44] Keisuke Fujii,et al. Quantum circuit learning , 2018, Physical Review A.
[45] Eric Mjolsness,et al. Learning dynamic Boltzmann distributions as reduced models of spatial chemical kinetics. , 2018, The Journal of chemical physics.
[46] Stefan Wessel,et al. Parameter diagnostics of phases and phase transition learning by neural networks , 2018, Physical Review B.
[47] Xin Wang,et al. Automatic spin-chain learning to explore the quantum speed limit , 2018, Physical Review A.
[48] Yusuke Nomura,et al. Constructing exact representations of quantum many-body systems with deep neural networks , 2018, Nature Communications.
[49] Daniel A. Lidar,et al. Quantum annealing versus classical machine learning applied to a simplified computational biology problem , 2018, npj Quantum Information.
[50] Pooya Ronagh,et al. Deep neural decoders for near term fault-tolerant experiments , 2018, Quantum Science and Technology.
[51] Cédric Bény,et al. Inferring relevant features: from QFT to PCA , 2018, International Journal of Quantum Information.
[52] Florian Marquardt,et al. Reinforcement Learning with Neural Networks for Quantum Feedback , 2018, Physical Review X.
[53] José Miguel Hernández-Lobato,et al. Taking gradients through experiments: LSTMs and memory proximal policy optimization for black-box quantum control , 2018, ISC Workshops.
[54] Pengfei Zhang,et al. Visualizing Neural Network Developing Perturbation Theory , 2018, Physical Review A.
[55] Leland McInnes,et al. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction , 2018, ArXiv.
[56] Lei Wang,et al. Neural Network Renormalization Group , 2018, Physical review letters.
[57] Michael C. Abbott,et al. Maximizing the information learned from finite data selects a simple model , 2017, Proceedings of the National Academy of Sciences.
[58] Satoshi Iso,et al. Scale-invariant Feature Extraction of Neural Network and Renormalization Group Flow , 2018, Physical review. E.
[59] W. Detmold,et al. Machine learning action parameters in lattice quantum chromodynamics , 2018, 1801.05784.
[60] Keisuke Fujii,et al. General framework for constructing fast and near-optimal machine-learning-based decoder of the topological stabilizer codes , 2018, 1801.04377.
[61] Giancarlo Fissore,et al. Thermodynamics of Restricted Boltzmann Machines and Related Learning Dynamics , 2018, Journal of Statistical Physics.
[62] Liang Fu,et al. Self-learning Monte Carlo with deep neural networks , 2018, Physical Review B.
[63] E. Miles Stoudenmire,et al. Learning relevant features of data with multi-scale tensor networks , 2017, ArXiv.
[64] Jing Chen,et al. Information Perspective to Probabilistic Modeling: Boltzmann Machines versus Born Machines , 2017, Entropy.
[65] Hythem Sidky,et al. Learning free energy landscapes using artificial neural networks. , 2017, The Journal of chemical physics.
[66] Jun Gao,et al. Experimental Machine Learning of Quantum States. , 2017, Physical review letters.
[67] Eyal Bairey,et al. Learning phase transitions from dynamics , 2017, Physical Review B.
[68] Michael I. Jordan,et al. Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent , 2017, COLT.
[69] Vedika Khemani,et al. Machine Learning Out-of-Equilibrium Phases of Matter. , 2017, Physical review letters.
[70] Hilbert J. Kappen,et al. On the role of synaptic stochasticity in training low-precision neural networks , 2017, Physical review letters.
[71] S. Lloyd,et al. Quantum Hopfield neural network , 2017, Physical Review A.
[72] Simone Severini,et al. Learning hard quantum distributions with variational autoencoders , 2017, npj Quantum Information.
[73] Samuel S. Schoenholz,et al. Combining Machine Learning and Physics to Understand Glassy Systems , 2017, Journal of Physics: Conference Series.
[74] Andrea Grisafi,et al. Symmetry-Adapted Machine Learning for Tensorial Properties of Atomistic Systems. , 2017, Physical review letters.
[75] Hans-J. Briegel,et al. Machine learning & artificial intelligence in the quantum domain , 2017, ArXiv.
[76] Jun Wang,et al. Unsupervised Generative Modeling Using Matrix Product States , 2017, Physical Review X.
[77] Nicolai Friis,et al. Speeding-up the decision making of a learning agent using an ion trap quantum processor , 2017, Quantum Science and Technology.
[78] Rupak Biswas,et al. Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers , 2017, Quantum Science and Technology.
[79] Pengfei Zhang,et al. Machine Learning Topological Invariants with Neural Networks , 2017, Physical review letters.
[80] David Von Dollen,et al. Quantum-Enhanced Reinforcement Learning for Finite-Episode Games with Discrete State Spaces , 2017, Front. Phys..
[81] Kelvin George Chng,et al. Unsupervised machine learning account of magnetic transitions in the Hubbard model. , 2017, Physical review. E.
[82] Bo Li,et al. Exploring the Function Space of Deep-Learning Machines , 2017, Physical review letters.
[83] M. Yung,et al. Neural-network-designed pulse sequences for robust control of singlet-triplet qubits , 2017, 1708.00238.
[84] Simone Severini,et al. Quantum machine learning: a classical perspective , 2017, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences.
[85] Mario Krenn,et al. Active learning machine learns to create new quantum experiments , 2017, Proceedings of the National Academy of Sciences.
[86] Jun Li,et al. Separability-entanglement classifier via machine learning , 2017, Physical Review A.
[87] Pankaj Mehta,et al. Reinforcement Learning in Different Phases of Quantum Control , 2017, Physical Review X.
[88] Maria Schuld,et al. Quantum ensembles of quantum classifiers , 2017, Scientific Reports.
[89] Amnon Shashua,et al. Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design , 2017, ICLR.
[90] Adriano Barra,et al. Phase Diagram of Restricted Boltzmann Machines and Generalised Hopfield Networks with Arbitrary Priors , 2017, Physical review. E.
[91] Florent Krzakala,et al. A Deterministic and Generalized Framework for Unsupervised Learning with Restricted Boltzmann Machines , 2017, Physical Review X.
[92] J. Chen,et al. Equivalence of restricted Boltzmann machines and tensor network states , 2017, 1701.04831.
[93] Carlo Baldassi,et al. From inverse problems to learning: a Statistical Mechanics approach , 2018 .
[94] Y. Kluger,et al. Efficient Algorithms for t-distributed Stochastic Neighborhood Embedding , 2017, ArXiv.
[95] Akinori Tanaka,et al. Towards reduction of autocorrelation in HMC by machine learning , 2017, 1712.03893.
[96] Kazuyuki Tanaka,et al. Deep Neural Network Detects Quantum Phase Transition , 2017, ArXiv.
[97] Yarden Katz. Manufacturing an Artificial Intelligence Revolution , 2017 .
[98] Lei Wang,et al. Exploring cluster Monte Carlo updates with Boltzmann machines. , 2017, Physical review. E.
[99] C. K. Andersen,et al. Quantum parameter estimation with a neural network , 2017, 1711.05238.
[100] Yi-Kai Liu,et al. Super-polynomial and exponential improvements for quantum-enhanced reinforcement learning , 2017 .
[101] Xiaotong Ni,et al. Scalable Neural Network Decoders for Higher Dimensional Quantum Codes , 2017, 1710.09489.
[102] Zhaocheng Liu,et al. Simulating the Ising Model with a Deep Convolutional Generative Adversarial Network , 2017, 1710.04987.
[103] S. Foreman,et al. RG-inspired machine learning for lattice field theory , 2017, 1710.02079.
[104] Haiping Huang,et al. Mean-field theory of input dimensionality reduction in unsupervised deep neural networks , 2017, ArXiv.
[105] Andrew C. E. Reid,et al. Learning crystal plasticity using digital image correlation: Examples from discrete dislocation dynamics , 2017, ArXiv.
[106] Enrique Solano,et al. Generalized Quantum Reinforcement Learning with Quantum Technologies , 2017, ArXiv.
[107] Andrew S. Darmawan,et al. Restricted Boltzmann machine learning for solving strongly correlated quantum systems , 2017, 1709.06475.
[108] Nobuyuki Yoshioka,et al. Learning disordered topological phases by statistical recovery of symmetry , 2017, 1709.05790.
[109] Hiroki Saito,et al. Machine Learning Technique to Find Quantum Many-Body Ground States of Bosons on a Lattice , 2017, 1709.05468.
[110] Qiong Zhu,et al. Identifying Product Order with Restricted Boltzmann Machines , 2017, 1709.02597.
[111] Zhao Yang,et al. Machine Learning Spatial Geometry from Entanglement Features , 2017, 1709.01223.
[112] Alejandro Perdomo-Ortiz,et al. Quantum-assisted Helmholtz machines: A quantum–classical deep learning framework for industrial datasets in near-term devices , 2017, ArXiv.
[113] Roger G. Melko,et al. Deep Learning the Ising Model Near Criticality , 2017, J. Mach. Learn. Res..
[114] Giancarlo Fissore,et al. Spectral Learning of Restricted Boltzmann Machines , 2017, ArXiv.
[115] Chao-Hua Yu,et al. Quantum algorithms for ridge regression , 2017 .
[116] Steven Weinstein,et al. Learning the Einstein-Podolsky-Rosen correlations on a Restricted Boltzmann Machine , 2017, 1707.03114.
[117] Simon Trebst,et al. Quantum phase recognition via unsupervised machine learning , 2017, 1707.00663.
[118] U. Seifert,et al. Thermodynamic efficiency of learning a rule in neural networks , 2017, 1706.09713.
[119] A. Ramezanpour,et al. Optimization by a quantum reinforcement algorithm , 2017, ArXiv.
[120] Michael I. Jordan,et al. Gradient Descent Can Take Exponential Time to Escape Saddle Points , 2017, NIPS.
[121] Cesare Furlanello,et al. Towards meaningful physics from generative models , 2017, ArXiv.
[122] Liang Jiang,et al. Deep Neural Network Probabilistic Decoder for Stabilizer Codes , 2017, Scientific Reports.
[123] Nathan Srebro,et al. The Marginal Value of Adaptive Gradient Methods in Machine Learning , 2017, NIPS.
[124] P. Baireuther,et al. Machine-learning-assisted correction of correlated qubit errors in a topological code , 2017, 1705.07855.
[125] Alireza Alemi,et al. Exponential Capacity in an Autoencoder Neural Network with a Hidden Layer , 2017, 1705.07441.
[126] Yang Qi,et al. Self-learning Monte Carlo method: Continuous-time algorithm , 2017, 1705.06724.
[127] Manuel Scherzer,et al. Machine Learning of Explicit Order Parameters: From the Ising Model to SU(2) Lattice Gauge Theory , 2017, 1705.05582.
[128] Yi Zhang,et al. Machine learning Z2 quantum spin liquids with quasiparticle statistics , 2017, 1705.01947.
[129] Akinori Tanaka,et al. Detection of Phase Transition via Convolutional Neural Networks , 2016, 1609.09087.
[130] Domingos S P Salazar,et al. Nonequilibrium thermodynamics of restricted Boltzmann machines. , 2017, Physical review. E.
[131] Z. Ringel,et al. Mutual information, neural networks and the renormalization group , 2017, Nature Physics.
[132] Titus Neupert,et al. Probing many-body localization with neural networks , 2017, 1704.01578.
[133] Maria Schuld,et al. Implementing a distance-based classifier with a quantum interference circuit , 2017, 1703.10793.
[134] Lyle Noakes,et al. Generating three-qubit quantum circuits with neural networks , 2017, 1703.10743.
[135] Chian-De Li,et al. Applications of neural networks to the studies of phase transitions of two-dimensional Potts models , 2017, 1703.02369.
[136] Sebastian Johann Wetzel,et al. Unsupervised learning of phase transitions: from principal component analysis to variational autoencoders , 2017, Physical review. E.
[137] Naftali Tishby,et al. Opening the Black Box of Deep Neural Networks via Information , 2017, ArXiv.
[138] Lei Wang,et al. Can Boltzmann Machines Discover Cluster Updates? , 2017, Physical review. E.
[139] R. Zecchina,et al. Inverse statistical problems: from the inverse Ising problem to data science , 2017, 1702.01522.
[140] Antonio Celani,et al. Flow Navigation by Smart Microswimmers via Reinforcement Learning , 2017, Physical review letters.
[141] Ronald de Wolf,et al. A Survey of Quantum Learning Theory , 2017, ArXiv.
[142] Lucas Lamata,et al. Basic protocols in quantum reinforcement learning with superconducting circuits , 2017, Scientific Reports.
[143] Lu-Ming Duan,et al. Efficient representation of quantum many-body states with deep neural networks , 2017, Nature Communications.
[144] D. Deng,et al. Quantum Entanglement in Neural Network States , 2017, 1701.04844.
[145] Jeff Z Y Chen,et al. Identifying polymer states by machine learning. , 2017, Physical review. E.
[146] Ian J. Goodfellow,et al. NIPS 2016 Tutorial: Generative Adversarial Networks , 2016, ArXiv.
[147] Adriano Barra,et al. Phase transitions in Restricted Boltzmann Machines with generic priors , 2016, Physical review. E.
[148] Alexander A. Alemi,et al. Deep Variational Information Bottleneck , 2017, ICLR.
[149] Rémi Monasson,et al. Emergence of Compositional Representations in Restricted Boltzmann Machines , 2016, Physical review letters.
[150] Christopher Burgess,et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework , 2016, ICLR 2016.
[151] Samy Bengio,et al. Understanding deep learning requires rethinking generalization , 2016, ICLR.
[152] Serena Bradde,et al. PCA Meets RG , 2016, Journal of Statistical Physics.
[153] Giacomo Torlai,et al. Neural Decoder for Topological Codes. , 2016, Physical review letters.
[154] S. Huber,et al. Learning phase transitions by confusion , 2016, Nature Physics.
[155] Jon M. Kleinberg,et al. Inherent Trade-Offs in the Fair Determination of Risk Scores , 2016, ITCS.
[156] Jorge Nocedal,et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima , 2016, ICLR.
[157] Barnabás Póczos,et al. Enabling Dark Energy Science with Deep Generative Models of Galaxy Images , 2016, AAAI.
[158] Matthias Troyer,et al. Solving the quantum many-body problem with artificial neural networks , 2016, Science.
[159] David J. Schwab,et al. The Deterministic Information Bottleneck , 2015, Neural Computation.
[160] T. Ohtsuki,et al. Deep Learning the Quantum Phase Transitions in Random Electron Systems: Applications to Three Dimensions , 2016, 1612.04909.
[161] Haiping Huang,et al. Statistical mechanics of unsupervised feature learning in a restricted Boltzmann machine with binary synapses , 2016, ArXiv.
[162] Peter E. Latham,et al. Zipf’s Law Arises Naturally When There Are Underlying, Unobserved Variables , 2016, PLoS Comput. Biol..
[163] Martin Wattenberg,et al. How to Use t-SNE Effectively , 2016 .
[164] A. Tanaka,et al. Detection of phase transition via convolutional neural network , 2016, 1609.09087.
[165] Ammar Daskin,et al. A Quantum Implementation Model for Artificial Neural Networks , 2016, ArXiv.
[166] Sebastian Ruder,et al. An overview of gradient descent optimization algorithms , 2016, Vestnik komp'iuternykh i informatsionnykh tekhnologii.
[167] Tom White,et al. Sampling Generative Networks: Notes on a Few Effective Techniques , 2016, ArXiv.
[169] M. Benedetti,et al. Quantum-assisted learning of graphical models with arbitrary pairwise connectivity , 2016, ArXiv.
[170] Cathy O'Neil. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy , 2016 .
[171] Surya Ganguli,et al. Statistical Mechanics of Optimal Convex Inference in High Dimensions , 2016 .
[172] Max Tegmark,et al. Why Does Deep and Cheap Learning Work So Well? , 2016, Journal of Statistical Physics.
[173] Gautam Reddy,et al. Learning to soar in turbulent environments , 2016, Proceedings of the National Academy of Sciences.
[174] Olivier Marre,et al. Relevant sparse codes with variational information bottleneck , 2016, NIPS.
[175] P. Vahle,et al. A convolutional neural network neutrino event classifier , 2016, ArXiv.
[176] Joshua B. Tenenbaum,et al. Building machines that learn and think like people , 2016, Behavioral and Brain Sciences.
[177] Roger G. Melko,et al. Machine learning phases of matter , 2016, Nature Physics.
[178] Tianqi Chen,et al. XGBoost: A Scalable Tree Boosting System , 2016, KDD.
[179] Tomaso Poggio,et al. Learning Functions: When Is Deep Better Than Shallow , 2016, 1603.00988.
[180] Ole Winther,et al. Ladder Variational Autoencoders , 2016, NIPS.
[181] Gautam Reddy,et al. Infomax Strategies for an Optimal Balance Between Exploration and Exploitation , 2016, Journal of Statistical Physics.
[182] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[183] Samy Bengio,et al. Generating Sentences from a Continuous Space , 2015, CoNLL.
[184] Soumith Chintala,et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks , 2015, ICLR.
[185] Ali Farhadi,et al. Unsupervised Deep Embedding for Clustering Analysis , 2015, ICML.
[186] David J. Schwab,et al. Supervised Learning with Tensor Networks , 2016, NIPS.
[187] Florent Krzakala,et al. Statistical physics of inference: thresholds and algorithms , 2015, ArXiv.
[188] Nadav Cohen,et al. On the Expressive Power of Deep Learning: A Tensor Analysis , 2015, COLT 2016.
[189] Charles K. Fisher,et al. Bayesian feature selection for high-dimensional linear regression via the Ising approximation with applications to genomics , 2015, Bioinform..
[190] Geoffrey E. Hinton,et al. Deep Learning , 2015, Nature.
[191] Shane Legg,et al. Human-level control through deep reinforcement learning , 2015, Nature.
[192] Sergey Ioffe,et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift , 2015, ICML.
[193] Jian Sun,et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[194] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[195] Thomas Brox,et al. Striving for Simplicity: The All Convolutional Net , 2014, ICLR.
[196] Charles K. Fisher,et al. Bayesian Feature Selection with Strongly Regularizing Priors Maps to the Ising Model , 2014, Neural Computation.
[197] Jürgen Schmidhuber,et al. Deep learning in neural networks: An overview , 2014, Neural Networks.
[198] S. Frick,et al. Compressed Sensing , 2014, Computer Vision, A Reference Guide.
[199] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[200] David J. Schwab,et al. An exact mapping between the Variational Renormalization Group and Deep Learning , 2014, ArXiv.
[201] F. Petruccione,et al. An introduction to quantum machine learning , 2014, Contemporary Physics.
[202] O. A. von Lilienfeld,et al. Machine learning for many-body physics: The case of the Anderson impurity model , 2014, 1408.1143.
[203] Gilles Louppe,et al. Understanding Random Forests: From Theory to Practice , 2014, 1407.7502.
[204] Alessandro Laio,et al. Clustering by fast search and find of density peaks , 2014, Science.
[205] Tzyh Jong Tarn,et al. Fidelity-Based Probabilistic Q-Learning for Control of Quantum Systems , 2014, IEEE Transactions on Neural Networks and Learning Systems.
[206] Jonathon Shlens,et al. A Tutorial on Principal Component Analysis , 2014, ArXiv.
[207] P. Baldi,et al. Searching for exotic particles in high-energy physics with deep learning , 2014, Nature Communications.
[208] Florent Krzakala,et al. Variational free energies for compressed sensing , 2014, 2014 IEEE International Symposium on Information Theory.
[209] Daan Wierstra,et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models , 2014, ICML.
[210] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[211] Surya Ganguli,et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks , 2013, ICLR.
[212] David J Schwab,et al. Zipf's law and criticality in multivariate data without fine-tuning. , 2013, Physical review letters.
[213] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res..
[214] Laurens van der Maaten,et al. Accelerating t-SNE using tree-based algorithms , 2014, J. Mach. Learn. Res..
[215] Geoffrey E. Hinton,et al. On the importance of initialization and momentum in deep learning , 2013, ICML.
[216] S. Ganguli,et al. Statistical mechanics of complex neural systems and high dimensional data , 2013, 1301.7115.
[217] Marc'Aurelio Ranzato,et al. Building high-level features using large scale unsupervised learning , 2011, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
[218] Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method , 2012, ArXiv.
[219] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[220] A. Pettitt,et al. Introduction to MCMC , 2012 .
[221] Hans-Peter Kriegel,et al. A survey on unsupervised outlier detection in high‐dimensional numerical data , 2012, Stat. Anal. Data Min..
[222] Pedro M. Domingos. A few useful things to know about machine learning , 2012, Commun. ACM.
[223] Kevin P. Murphy,et al. Machine learning - a probabilistic perspective , 2012, Adaptive computation and machine learning series.
[224] Yoshua Bengio,et al. Practical Recommendations for Gradient-Based Training of Deep Architectures , 2012, Neural Networks: Tricks of the Trade.
[225] Florent Krzakala,et al. Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices , 2012, ArXiv.
[226] R. Tibshirani. The Lasso Problem and Uniqueness , 2012, 1206.0313.
[227] Yoav Freund,et al. Boosting: Foundations and Algorithms , 2012 .
[228] David Barber,et al. Bayesian reasoning and machine learning , 2012 .
[229] Adriano Barra,et al. On the equivalence of Hopfield networks and Boltzmann Machines , 2011, Neural Networks.
[230] Steven R. White,et al. Studying Two Dimensional Systems With the Density Matrix Renormalization Group , 2011, 1105.1374.
[231] Geoffrey E. Hinton. A Practical Guide to Training Restricted Boltzmann Machines , 2012, Neural Networks: Tricks of the Trade.
[232] Léon Bottou,et al. Stochastic Gradient Descent Tricks , 2012, Neural Networks: Tricks of the Trade.
[233] Hsuan-Tien Lin,et al. Learning From Data , 2012 .
[234] Klaus-Robert Müller,et al. Efficient BackProp , 2012, Neural Networks: Tricks of the Trade.
[235] Florent Krzakala,et al. Statistical physics-based reconstruction in compressed sensing , 2011, ArXiv.
[236] Daniel Müllner,et al. Modern hierarchical, agglomerative clustering algorithms , 2011, ArXiv.
[237] Radford M. Neal. MCMC Using Hamiltonian Dynamics , 2011, 1206.1901.
[238] Yoram Singer,et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization , 2011, J. Mach. Learn. Res..
[239] Gaël Varoquaux,et al. Scikit-learn: Machine Learning in Python , 2011, J. Mach. Learn. Res..
[240] Trevor Hastie,et al. A statistical explanation of MaxEnt for ecologists , 2011 .
[241] Anirvan M. Sengupta,et al. Statistical Mechanics of Transcription-Factor Binding Site Discovery Using Hidden Markov Models , 2010, Journal of statistical physics.
[242] Nathan Halko,et al. An Algorithm for the Principal Component Analysis of Large Data Sets , 2010, SIAM J. Sci. Comput..
[243] Wei-Yin Loh,et al. Classification and regression trees , 2011, WIREs Data Mining Knowl. Discov..
[244] Geoffrey E. Hinton,et al. Phone Recognition with the Mean-Covariance Restricted Boltzmann Machine , 2010, NIPS.
[245] Yoshua Bengio,et al. Understanding the difficulty of training deep feedforward neural networks , 2010, AISTATS.
[246] Massimo Vergassola,et al. Chasing information to search in random environments , 2009 .
[247] Geoffrey E. Hinton,et al. Using fast weights to improve persistent contrastive divergence , 2009, ICML '09.
[248] M. Mézard,et al. Information, Physics, and Computation , 2009 .
[249] Hans-Peter Kriegel,et al. Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering , 2009, TKDD.
[250] T. Hwa,et al. Identification of direct residue contacts in protein–protein interaction by message passing , 2009, Proceedings of the National Academy of Sciences.
[251] Michael I. Jordan,et al. Graphical Models, Exponential Families, and Variational Inference , 2008, Found. Trends Mach. Learn..
[252] Chuong B Do,et al. What is the expectation maximization algorithm? , 2008, Nature Biotechnology.
[253] Geoffrey E. Hinton,et al. Visualizing Data using t-SNE , 2008 .
[254] Massimo Vergassola,et al. ‘Infotaxis’ as a strategy for searching without gradients , 2007, Nature.
[255] Nasser M. Nasrabadi,et al. Pattern Recognition and Machine Learning , 2006, Technometrics.
[256] G. Vidal. Entanglement renormalization. , 2005, Physical review letters.
[257] J. Sethna. Statistical Mechanics: Entropy, Order Parameters, and Complexity , 2021 .
[258] S. Geer,et al. Regularization in statistics , 2006 .
[259] Geoffrey E. Hinton,et al. Reducing the Dimensionality of Data with Neural Networks , 2006, Science.
[260] Yee Whye Teh,et al. A Fast Learning Algorithm for Deep Belief Nets , 2006, Neural Computation.
[261] Pierre Geurts,et al. Extremely randomized trees , 2006, Machine Learning.
[262] Michael J. Berry,et al. Weak pairwise correlations imply strongly correlated network states in a neural population , 2005, Nature.
[263] Stephen P. Boyd,et al. Convex Optimization , 2004, Algorithms and Theory of Computation Handbook.
[264] Michael C. Fu,et al. Chapter 19 Gradient Estimation , 2006, Simulation.
[265] L. McMillan,et al. A Fast Approximation to Multidimensional Scaling , 2006 .
[266] H. Zou,et al. Regularization and variable selection via the elastic net , 2005 .
[267] William Bialek,et al. Estimating mutual information and multi-information in large networks , 2005, ArXiv.
[268] Lior Rokach,et al. Data Mining And Knowledge Discovery Handbook , 2005 .
[269] Richard S. Sutton,et al. Reinforcement Learning: An Introduction , 1998, IEEE Trans. Neural Networks.
[270] Lior Rokach,et al. Clustering Methods , 2005, The Data Mining and Knowledge Discovery Handbook.
[271] Larry Wasserman,et al. All of Statistics: A Concise Course in Statistical Inference , 2004 .
[272] B. Turlach. Discussion of "Least Angle Regression" by Efron et al. , 2004, math/0406472.
[273] R. Tibshirani,et al. Least angle regression , 2004, math/0406456.
[274] Leo Breiman,et al. Random Forests , 2001, Machine Learning.
[275] Wei-Yin Loh,et al. A Comparison of Prediction Accuracy, Complexity, and Training Time of Thirty-Three Old and New Classification Algorithms , 2000, Machine Learning.
[276] Michael I. Jordan,et al. An Introduction to Variational Methods for Graphical Models , 1999, Machine Learning.
[277] Christopher J. C. Burges,et al. A Tutorial on Support Vector Machines for Pattern Recognition , 1998, Data Mining and Knowledge Discovery.
[278] Hans-Peter Kriegel,et al. Density-Based Clustering in Spatial Databases: The Algorithm GDBSCAN and Its Applications , 1998, Data Mining and Knowledge Discovery.
[279] Nando de Freitas,et al. An Introduction to MCMC for Machine Learning , 2004, Machine Learning.
[280] Leo Breiman,et al. Bagging Predictors , 1996, Machine Learning.
[281] David J. C. MacKay,et al. Information Theory, Inference, and Learning Algorithms , 2004, IEEE Transactions on Information Theory.
[282] Gal Chechik,et al. Information Bottleneck for Gaussian Variables , 2003, J. Mach. Learn. Res..
[283] M. Tribus,et al. Probability theory: the logic of science , 2003 .
[284] William T. Freeman,et al. Understanding belief propagation and its generalizations , 2003 .
[285] Bogdan E. Popescu,et al. Importance Sampled Learning Ensembles , 2003 .
[286] Geoffrey E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence , 2002, Neural Computation.
[287] J. Friedman. Stochastic gradient boosting , 2002 .
[288] J. Friedman. Greedy function approximation: A gradient boosting machine. , 2001 .
[289] Radford M. Neal. Annealed importance sampling , 1998, Stat. Comput..
[290] M. Opper,et al. An Idiosyncratic Journey Beyond Mean Field Theory , 2001 .
[291] Trevor Hastie,et al. The Elements of Statistical Learning , 2001 .
[292] J. Tenenbaum,et al. A global geometric framework for nonlinear dimensionality reduction. , 2000, Science.
[293] S T Roweis,et al. Nonlinear dimensionality reduction by locally linear embedding. , 2000, Science.
[294] Thomas G. Dietterich. Ensemble Methods in Machine Learning , 2000, Multiple Classifier Systems.
[295] Naftali Tishby,et al. The information bottleneck method , 2000, ArXiv.
[296] Chinatsu Aone,et al. Fast and effective text mining using linear-time document clustering , 1999, KDD '99.
[297] John P. Sullins. Artificial knowing: gender and the thinking machine , 1999, CSOC.
[298] Ning Qian,et al. On the momentum term in gradient descent learning algorithms , 1999, Neural Networks.
[299] Yoav Freund,et al. A Short Introduction to Boosting , 1999 .
[300] D. Botstein,et al. Cluster analysis and display of genome-wide expression patterns. , 1998, Proceedings of the National Academy of Sciences of the United States of America.
[301] Yoshua Bengio,et al. Convolutional networks for images, speech, and time series , 1998 .
[302] Tin Kam Ho,et al. The Random Subspace Method for Constructing Decision Forests , 1998, IEEE Trans. Pattern Anal. Mach. Intell..
[303] Christopher J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition , 1998 .
[304] Geoffrey E. Hinton,et al. A View of the Em Algorithm that Justifies Incremental, Sparse, and other Variants , 1998, Learning in Graphical Models.
[305] Yoshua Bengio,et al. Gradient-based learning applied to document recognition , 1998, Proc. IEEE.
[306] C. Jarzynski. Nonequilibrium Equality for Free Energy Differences , 1996, cond-mat/9610209.
[307] Hans-Peter Kriegel,et al. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise , 1996, KDD.
[308] Jack P. C. Kleijnen,et al. Optimization and Sensitivity Analysis of Computer Simulation Models by the Score Function Method , 1996 .
[309] Yoav Freund,et al. A decision-theoretic generalization of on-line learning and an application to boosting , 1997, EuroCOLT.
[311] Christopher M. Bishop,et al. Neural networks for pattern recognition , 1995 .
[312] White,et al. Density matrix formulation for quantum renormalization groups. , 1992, Physical review letters.
[313] Roberto Battiti,et al. First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method , 1992, Neural Computation.
[314] Allen Gersho,et al. Vector quantization and signal compression , 1991, The Kluwer international series in engineering and computer science.
[315] Teuvo Kohonen,et al. The self-organizing map , 1990, Neurocomputing.
[316] D. Amit. Modelling Brain Function: The World of Attractor Neural Networks , 1989 .
[317] P. Howe,et al. Multicritical points in two dimensions, the renormalization group and the ϵ expansion , 1989 .
[318] Lawrence R. Rabiner,et al. A tutorial on hidden Markov models and selected applications in speech recognition , 1989, Proc. IEEE.
[319] Geoffrey E. Hinton,et al. Learning representations by back-propagating errors , 1986, Nature.
[320] Piet Hut,et al. A hierarchical O(N log N) force-calculation algorithm , 1986, Nature.
[321] Drew McDermott,et al. The Dark Ages of AI: A Panel Discussion at AAAI-84 , 1985, AI Mag..
[322] Sompolinsky,et al. Spin-glass models of neural networks. , 1985, Physical review. A, General physics.
[323] David Zipser,et al. Feature Discovery by Competitive Learning , 1985, Cogn. Sci..
[324] Geoffrey E. Hinton,et al. A Learning Algorithm for Boltzmann Machines , 1985, Cogn. Sci..
[325] Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2) , 1983 .
[326] J J Hopfield,et al. Neural networks and physical systems with emergent collective computational abilities. , 1982, Proceedings of the National Academy of Sciences of the United States of America.
[327] D. Freedman,et al. Some Asymptotic Theory for the Bootstrap , 1981 .
[328] K. Singh,et al. On the Asymptotic Accuracy of Efron's Bootstrap , 1981 .
[329] B. Efron. Bootstrap Methods: Another Look at the Jackknife , 1979 .
[330] D. Rubin,et al. Maximum likelihood from incomplete data via the EM algorithm plus discussions on the paper , 1977 .
[331] K. Wilson,et al. The Renormalization group and the epsilon expansion , 1973 .
[332] Richard E. Blahut,et al. Computation of channel capacity and rate-distortion functions , 1972, IEEE Trans. Inf. Theory.
[333] Suguru Arimoto,et al. An algorithm for computing the capacity of arbitrary discrete memoryless channels , 1972, IEEE Trans. Inf. Theory.
[334] Robert S. Bennett,et al. The intrinsic dimensionality of signal collections , 1969, IEEE Trans. Inf. Theory.
[335] Edwin T. Jaynes,et al. Prior Probabilities , 1968, Encyclopedia of Machine Learning.
[336] V. N. Popov,et al. Feynman Diagrams for the Yang-Mills Field , 1967 .
[337] Boris Polyak. Some methods of speeding up the convergence of iteration methods , 1964 .
[338] S. Kullback. Information Theory and Statistics , 1959 .
[339] J. Hubbard. Calculation of Partition Functions , 1959 .
[340] Joseph L. Zinnes,et al. Theory and Methods of Scaling. , 1958 .
[341] E. Jaynes. Information Theory and Statistical Mechanics , 1957 .
[342] R. L. Stratonovich. On a Method of Calculating Quantum Distribution Functions , 1957 .
[343] E. L. Lehmann,et al. Theory of point estimation , 1950 .
[344] Claude E. Shannon,et al. Communication theory of secrecy systems , 1949, Bell Syst. Tech. J..
[345] H. Jeffreys. An invariant form for the prior probability in estimation problems , 1946, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences.