Separating Models of Learning from Correlated and Uncorrelated Data

We consider a natural framework of learning from correlated data, in which the successive examples used for learning are generated by a random walk over the space of possible examples. Previous work has suggested that the Random Walk model is more powerful than comparable standard models of learning from independent examples, by exhibiting learning algorithms in the Random Walk framework that have no known counterparts in the standard model. We give strong evidence that the Random Walk model is indeed strictly more powerful: we show that if any cryptographic one-way function exists (a widely held assumption underlying modern cryptography), then there is a class of functions that can be learned efficiently in the Random Walk setting but not in the standard setting, where all examples are independent.
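To make the contrast between the two example oracles concrete, the following sketch generates labeled examples over the Boolean hypercube {0,1}^n in both settings. The target function, the choice of a coordinate-flipping walk (one common variant; others resample a coordinate uniformly), and all names here are illustrative assumptions, not the paper's construction.

```python
import random

def iid_examples(n, target, rng, count):
    """Standard model: each example is drawn independently and uniformly from {0,1}^n."""
    for _ in range(count):
        x = [rng.randint(0, 1) for _ in range(n)]
        yield tuple(x), target(x)

def random_walk_examples(n, target, rng, count):
    """Random Walk model: successive examples are correlated -- each step of the
    walk flips one uniformly chosen coordinate of the previous example."""
    x = [rng.randint(0, 1) for _ in range(n)]  # uniform starting point
    for _ in range(count):
        yield tuple(x), target(x)
        i = rng.randrange(n)  # pick a coordinate uniformly at random
        x[i] ^= 1             # flip it to obtain the next example

# Illustrative target: parity of the first two coordinates (not from the paper).
target = lambda x: x[0] ^ x[1]
rng = random.Random(0)
walk = list(random_walk_examples(4, target, rng, 5))
iid = list(iid_examples(4, target, rng, 5))
```

Under this walk, consecutive examples differ in exactly one coordinate, so a learner can observe how the label changes under single-bit flips; in the i.i.d. stream no such local information is available.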
