Scale-Space Kernels for Additive Modeling

Additive modeling techniques are widely used in pattern recognition and machine learning applications. The effectiveness of any additive modeling technique depends strongly on the choice of the weak learner and the form of the loss function. In this paper, we propose a novel scale-space kernel based approach for additive modeling. Our method draws on insights from well-studied scale-space theory to choose optimal learners at different iterations of boosting algorithms, which are simple yet powerful additive modeling methods. At each iteration of the additive modeling, a weak learner that best fits the data at the current resolution is chosen, and the resolution is then increased systematically. We demonstrate the results of the proposed framework on both synthetic and real datasets taken from the UCI machine learning repository. Though demonstrated specifically in the context of boosting algorithms, our approach is generic enough to be accommodated in general additive modeling techniques. Similarities and distinctions between the proposed algorithm and the widely used radial basis function networks and wavelet decomposition methods are also discussed.
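The coarse-to-fine additive fitting described above can be sketched as follows. This is a minimal illustration only, assuming squared-error loss and Gaussian (RBF) kernel smoothers as the weak learners; the function names and the fixed, decreasing bandwidth schedule are our own assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gaussian_kernel(A, B, h):
    """Gaussian (RBF) kernel matrix between rows of A and B at bandwidth h."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h ** 2))

def scale_space_boost(X, y, bandwidths, shrinkage=0.5, ridge=1e-6):
    """Greedy additive fit: at each stage, fit the current residuals with a
    kernel smoother whose bandwidth shrinks over iterations, moving from a
    coarse to a fine resolution of the data (squared-error loss assumed)."""
    residual = y.astype(float).copy()
    stages = []                         # (bandwidth, coefficients) per stage
    for h in bandwidths:                # coarse -> fine, e.g. [4.0, 2.0, 1.0]
        K = gaussian_kernel(X, X, h)
        # Ridge-regularised least-squares fit of the residuals at this scale
        alpha = np.linalg.solve(K + ridge * np.eye(len(X)), residual)
        residual -= shrinkage * K @ alpha
        stages.append((h, shrinkage * alpha))
    return stages

def predict(stages, X_train, X_new):
    """Sum the contributions of all scales to form the additive model."""
    out = np.zeros(len(X_new))
    for h, alpha in stages:
        out += gaussian_kernel(X_new, X_train, h) @ alpha
    return out
```

For example, fitting a sine curve with bandwidths `[4.0, 2.0, 1.0, 0.5]` lets the early, wide-bandwidth stages capture the overall trend while the later, narrow-bandwidth stages refine local detail.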