The consistency of least-square regularized regression with negative association sequence

In recent years, many works in learning theory have stepped beyond the classical assumption that samples are independent and identically distributed (i.i.d.) and have investigated learning performance with non-independent samples, such as mixing sequences (e.g., α-mixing, β-mixing, ϕ-mixing), deriving results similar to those obtained under the classical sampling assumption. Negatively associated (NA) sequences form a significant class of dependent random variables and play an important role among non-independent sequences. They are widely applied in probability theory, statistics, and stochastic processes. It is therefore essential to study the learning performance of algorithms trained on dependent samples drawn from an NA process. Samples in this setting are clearly not i.i.d., so the results of classical learning theory do not apply directly. In this paper, we study the consistency of least-square regularized regression with NA samples. We establi...
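For context, the two objects the abstract discusses can be written out explicitly. These are the standard formulations from the learning-theory and NA literature (the truncated abstract does not fix its own notation, so the symbols below are conventional, not the authors'):

```latex
% Negative association (Joag-Dev and Proschan): random variables
% X_1, \dots, X_n are negatively associated if, for every pair of
% disjoint index sets A, B \subset \{1,\dots,n\},
\operatorname{Cov}\bigl( f(X_i, i \in A),\; g(X_j, j \in B) \bigr) \le 0
% for all coordinatewise nondecreasing functions f, g for which the
% covariance exists.

% Least-square regularized regression over a reproducing kernel
% Hilbert space \mathcal{H}_K with regularization parameter \lambda > 0,
% given a sample z = \{(x_i, y_i)\}_{i=1}^m:
f_{z,\lambda} \;=\; \arg\min_{f \in \mathcal{H}_K}
  \frac{1}{m} \sum_{i=1}^{m} \bigl( f(x_i) - y_i \bigr)^2
  \;+\; \lambda \, \| f \|_K^2 .
```

Consistency then asks whether the excess risk of $f_{z,\lambda}$ tends to zero as $m \to \infty$ (with $\lambda$ chosen suitably), here with the sample $z$ drawn from an NA process rather than i.i.d.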