Learning with Interdependent Data
In this chapter, we present a statistical framework for obtaining guarantees on the generalization performance of classification algorithms when the examples to classify cannot be assumed to be independently sampled from a fixed distribution. This work is motivated by the statistical analysis of algorithms for learning to rank, which can be reduced to the binary classification of interdependent pairs of objects. We first describe two formal frameworks for ranking, corresponding to different application scenarios. We then present a unifying framework for learning classifiers from non-i.i.d. data, prove a generic generalization error bound based on an extension of the Rademacher complexity, and show how this bound specializes to different settings of learning to rank.
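To make the reduction concrete, here is a minimal sketch of turning a ranking problem into binary classification of object pairs. The function name and data layout are illustrative assumptions, not the chapter's notation; the point is that pairs built from the same sample share objects, so the resulting binary examples are interdependent rather than i.i.d.

```python
from itertools import combinations

def to_pairs(objects, scores):
    """Turn objects with relevance scores into labeled binary examples.

    Each pair (x_i, x_j) with distinct scores becomes one example:
    label +1 if x_i should be ranked above x_j, and -1 otherwise.
    Pairs sharing an object are statistically dependent: this is the
    interdependence the chapter's framework is designed to handle.
    """
    examples = []
    for i, j in combinations(range(len(objects)), 2):
        if scores[i] == scores[j]:
            continue  # ties carry no ordering information
        label = 1 if scores[i] > scores[j] else -1
        examples.append(((objects[i], objects[j]), label))
    return examples

# Three objects yield three ordered pairs, each sharing an object
# with the other two.
pairs = to_pairs(["a", "b", "c"], [3, 1, 2])
```

A pairwise classifier trained on such examples induces a ranking; the generalization analysis must then account for the dependence between pairs.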
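For intuition about the capacity measure underlying the bound, the following is a Monte Carlo estimate of the standard empirical Rademacher complexity of a finite hypothesis class on a fixed sample. This is the classical i.i.d. quantity only; the chapter's extension to interdependent data is not reproduced here, and all names are illustrative assumptions.

```python
import random

def empirical_rademacher(hypotheses, sample, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity:
    E_sigma[ sup_h (1/m) * sum_i sigma_i * h(x_i) ],
    where the sigma_i are independent uniform +/-1 signs.
    """
    rng = random.Random(seed)
    m = len(sample)
    total = 0.0
    for _ in range(n_draws):
        sigma = [rng.choice((-1, 1)) for _ in range(m)]
        # sup over the (finite) hypothesis class of the signed average
        total += max(sum(s * h(x) for s, x in zip(sigma, sample)) / m
                     for h in hypotheses)
    return total / n_draws
```

A richer class correlates better with random signs and so has higher complexity; the generalization bound penalizes this capacity term.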