Penalized empirical relaxed greedy algorithm for fixed design Gaussian regression

Compared with l1-regularization algorithms, greedy algorithms have a considerable advantage in computational complexity. In this paper, we consider the penalized empirical relaxed greedy algorithm and analyze its efficiency in the fixed design Gaussian regression problem. Through a careful analysis, we establish oracle inequalities for finite and infinite dictionaries, respectively, by choosing an appropriate number of greedy iterations. Relying on these oracle inequalities, we derive the learning rate of the algorithm when the target function lies in the convex hull of the dictionary. Our results show that the error decays as O((ln n/n)^{1/2}), which is the near-optimal convergence rate in the literature.
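To make the iteration concrete, the following is a minimal sketch of a plain (unpenalized) empirical relaxed greedy step, which the penalized version builds on: at step k the current approximant is shrunk by a relaxation weight alpha_k and combined with the dictionary element that minimizes the empirical risk. The function name, the concrete weight alpha_k = 2/(k+1), and the data layout are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def relaxed_greedy(y, dictionary, n_iter):
    """Sketch of an empirical relaxed greedy iteration (unpenalized).

    y          : responses at the fixed design points, shape (n,)
    dictionary : dictionary elements evaluated at the design points,
                 shape (m, n) -- one row per element
    n_iter     : number of greedy iterations
    """
    f = np.zeros_like(y)                     # approximant f_0 = 0
    for k in range(1, n_iter + 1):
        alpha = 2.0 / (k + 1)                # illustrative relaxation weight
        # choose the element g minimizing the empirical risk of the
        # relaxed update (1 - alpha) * f + alpha * g
        resid = y - (1.0 - alpha) * f
        errs = np.sum((resid - alpha * dictionary) ** 2, axis=1)
        g = dictionary[np.argmin(errs)]
        f = (1.0 - alpha) * f + alpha * g
    return f
```

Each step solves only an m-way minimum over the dictionary rather than a full l1-penalized program, which is the computational advantage referred to above; a penalized variant would add a complexity term to `errs` before taking the argmin.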