Analysis of Regularized Least Square Algorithms with Beta-Mixing Input Sequences

Generalization performance is a key property of learning machines. It has been shown previously by Vapnik, Cucker and Smale, and others that the empirical risks of learning machines trained on an i.i.d. sample converge uniformly to their expected risks as the number of samples approaches infinity. This paper considers regularization schemes associated with the least square loss and reproducing kernel Hilbert spaces, and develops a theoretical analysis of the generalization performance of regularized least squares on reproducing kernel Hilbert spaces for supervised learning with beta-mixing input sequences.
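
For concreteness, the regularized least squares scheme referred to here is of the following standard form (a minimal sketch; the symbols used below are illustrative and may differ from those adopted in the body of the paper):

$$
f_{\mathbf{z},\lambda} \;=\; \arg\min_{f \in \mathcal{H}_K} \left\{ \frac{1}{m} \sum_{i=1}^{m} \bigl(f(x_i) - y_i\bigr)^2 \;+\; \lambda \|f\|_K^2 \right\},
$$

where $\mathcal{H}_K$ denotes the reproducing kernel Hilbert space induced by a Mercer kernel $K$, $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$ is the observed sample (here drawn from a beta-mixing, rather than i.i.d., input sequence), and $\lambda > 0$ is the regularization parameter.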