ON SURROGATE LOSS FUNCTIONS AND f-DIVERGENCES

The goal of binary classification is to estimate a discriminant function γ from observations of covariate vectors and corresponding binary labels. We consider an elaboration of this problem in which the covariates are not available directly but are transformed by a dimensionality-reducing quantizer Q. We present conditions on loss functions such that empirical risk minimization yields Bayes consistency when both the discriminant function and the quantizer are estimated. These conditions are stated in terms of a general correspondence between loss functions and a class of functionals known as Ali-Silvey or f-divergence functionals. Whereas this correspondence was established by Blackwell [Proc. 2nd Berkeley Symp. Probab. Statist. 1 (1951) 93–102. Univ. California Press, Berkeley] for the 0–1 loss, we extend the correspondence to the broader class of surrogate loss functions that play a key role in the general theory of Bayes consistency for binary classification. Our result makes it possible to pick out the (strict) subset of surrogate loss functions that yield Bayes consistency for joint estimation of the discriminant function and the quantizer.
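To make the correspondence concrete, here is a brief sketch in notation chosen for illustration (the symbols $\mu$, $\pi$, $R_\phi$ and the normalizations below are our assumptions, not fixed by the abstract). Let $\mu$ and $\pi$ denote the measures induced on the quantized observation $Z = Q(X)$ by the two class-conditional distributions. For a convex function $f$, the associated f-divergence is

$$ I_f(\mu, \pi) \;=\; \int f\!\left( \frac{d\mu}{d\pi} \right) d\pi, $$

and the correspondence asserts that the optimal $\phi$-risk of a margin-based surrogate loss $\phi$, with label $Y \in \{-1, +1\}$,

$$ R_\phi(Q) \;=\; \inf_{\gamma} \, \mathbb{E}\big[ \phi\big( Y\,\gamma(Q(X)) \big) \big], $$

equals $-I_f(\mu, \pi)$ for a convex $f$ determined by $\phi$. For instance, up to additive and multiplicative constants, the hinge loss corresponds to the variational distance and the exponential loss to the Hellinger distance.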

[1] D. Blackwell. Comparison of Experiments. Proc. 2nd Berkeley Symp. Probab. Statist. 1 (1951) 93–102. Univ. California Press, Berkeley.

[2] D. Blackwell. Equivalent Comparisons of Experiments. Ann. Math. Statist. 24 (1953).

[3] R. N. Bradt. On the Design and Comparison of Certain Dichotomous Experiments. 1954.

[4] S. M. Ali and S. D. Silvey. A General Class of Coefficients of Divergence of One Distribution from Another. J. Roy. Statist. Soc. Ser. B 28 (1966).

[5] T. Kailath. The Divergence and Bhattacharyya Distance Measures in Signal Selection. IEEE Trans. Commun. Technol., 1967.

[6] H. V. Poor and J. B. Thomas. Applications of Ali-Silvey Distance Measures in the Design of Generalized Quantizers for Binary Decision Systems. IEEE Trans. Commun., 1977.

[7] T. Maruyama. On a Few Developments in Convex Analysis (in Japanese). 1977.

[8] C. McDiarmid. On the Method of Bounded Differences. In Surveys in Combinatorics, 1989. Cambridge Univ. Press.

[9] R. Phelps. Convex Functions, Monotone Operators and Differentiability. Springer, Berlin, 1989.

[10] M. Longo, T. D. Lookabaugh and R. M. Gray. Quantization for Decentralized Hypothesis Testing under Communication Constraints. IEEE Trans. Inform. Theory 36 (1990).

[11] J. N. Tsitsiklis. Decentralized Detection. In Advances in Statistical Signal Processing 2 (1993).

[12] F. Topsøe. Some Inequalities for Information Divergence and Related Measures of Discrimination. IEEE Trans. Inform. Theory 46 (2000).

[13] W. Jiang. Process Consistency for AdaBoost. Ann. Statist. 32 (2004).

[14] T. Zhang. Statistical Behavior and Consistency of Classification Methods Based on Convex Risk Minimization. Ann. Statist. 32 (2004).

[15] S. Mannor, R. Meir and T. Zhang. Greedy Algorithms for Classification: Consistency, Convergence Rates, and Adaptivity. J. Mach. Learn. Res. 4 (2003).

[16] G. Lugosi and N. Vayatis. On the Bayes-Risk Consistency of Regularized Boosting Methods. Ann. Statist. 32 (2004).

[17] C. Cortes and V. Vapnik. Support-Vector Networks. Machine Learning 20 (1995).

[18] X. Nguyen, M. J. Wainwright and M. I. Jordan. Nonparametric Decentralized Detection Using Kernel Methods. IEEE Trans. Signal Process. 53 (2005).

[19] P. L. Bartlett, M. I. Jordan and J. D. McAuliffe. Convexity, Classification, and Risk Bounds. J. Amer. Statist. Assoc. 101 (2006).