Rigorous learning curve bounds from statistical mechanics

[1] V. Vapnik. Estimation of Dependences Based on Empirical Data, 2006.

[2] Michael J. Pazzani, et al. A Framework for Average Case Analysis of Conjunctive Learning Algorithms. Machine Learning, 1992.

[3] E. M. Oblow. Implementing Valiant's Learnability Theory Using Random Sets. Machine Learning, 1992.

[4] David Haussler, Michael Kearns, Robert E. Schapire. Bounds on the Sample Complexity of Bayesian Learning Using Information Theory and the VC Dimension. Machine Learning, 1994.

[5] Luc Devroye, et al. Lower bounds in pattern recognition and learning. Pattern Recognition, 1995.

[6] Yann LeCun, et al. Measuring the VC-Dimension of a Learning Machine. Neural Computation, 1994.

[7] A. Engel, et al. Statistical mechanics calculation of Vapnik-Chervonenkis bounds for perceptrons, 1993.

[8] A. Engel, et al. Systems that can learn from examples: Replica calculation of uniform convergence bounds for perceptrons. Physical Review Letters, 1993.

[9] Hans Ulrich Simon, et al. General bounds on the number of examples needed for learning probabilistic concepts. COLT '93, 1993.

[10] T. Watkin, et al. The statistical mechanics of learning a rule. Reviews of Modern Physics, 1993.

[11] Yuh-Dauh Lyuu, et al. Tight Bounds on Transition to Perfect Generalization in Perceptrons. Neural Computation, 1992.

[12] David Haussler, et al. Decision Theoretic Generalizations of the PAC Model for Neural Net and Other Learning Applications. Information and Computation, 1992.

[13] Shun-ichi Amari, et al. Four Types of Learning Curves. Neural Computation, 1992.

[14] H. Sompolinsky, et al. Statistical mechanics of learning from examples. Physical Review A, 1992.

[15] Gerald Tesauro, et al. How Tight Are the Vapnik-Chervonenkis Bounds? Neural Computation, 1992.

[16] G. M. Benedek, A. Itai. Learnability with respect to fixed distributions. Theoretical Computer Science, 1991.

[17] Yuh-Dauh Lyuu, et al. The Transition to Perfect Generalization in Perceptrons. Neural Computation, 1991.

[18] H. Sebastian Seung, et al. Learning curves in large neural networks. COLT '91, 1991.

[19] James A. Pittman, et al. Recognizing Hand-Printed Letters and Digits Using Backpropagation Learning. Neural Computation, 1991.

[20] Thomas M. Cover, Joy A. Thomas. Elements of Information Theory. Wiley, 1991.

[21] H. Sompolinsky, et al. Learning from examples in large neural networks. Physical Review Letters, 1990.

[22] Vijay K. Samalam, et al. Exhaustive Learning. Neural Computation, 1990.

[23] Robert E. Schapire, et al. On the sample complexity of weak learning. COLT '90, 1990.

[24] G. Györgyi. First-order transition to perfect generalization in a neural network with binary synapses. Physical Review A, 1990.

[25] Esther Levin, et al. A statistical approach to learning and generalization in layered neural networks. Proceedings of the IEEE, 1990.

[26] Michael J. Pazzani, et al. Average case analysis of empirical and explanation-based learning algorithms, 1989.

[27] E. Gardner, et al. Three unfinished works on the optimal storage capacity of networks. Journal of Physics A, 1989.

[28] Leslie G. Valiant, et al. A general lower bound on the number of examples needed for learning. COLT '88, 1988.

[29] E. Gardner. The space of interactions in neural network models. Journal of Physics A, 1988.

[30] D. Pollard. Convergence of Stochastic Processes. Springer, 1984.

[31] Vladimir Vapnik. Estimation of Dependences Based on Empirical Data. Springer Series in Statistics, 1982.

[32] R. M. Dudley. Central Limit Theorems for Empirical Measures. Annals of Probability, 1978.

[33] V. N. Vapnik, A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, 1971.