Empirical Analysis of Collective Stability
When learning structured predictors, collective stability is an important factor for generalization. London et al. (2013) provide the first analysis of this effect, proving that collectively stable hypotheses exhibit a smaller deviation between empirical risk and true risk, i.e., a smaller defect. We test this effect empirically using a collectively stable variant of max-margin Markov networks. Our experiments on webpage classification confirm that increasing collective stability reduces the defect and can thus lead to lower overall test error.
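For concreteness, the defect mentioned above can be read as the gap between a hypothesis's true risk and its empirical risk; the display below is a paraphrase of the quantity bounded by London et al. (2013), with the notation (D, R, \hat{R}, h) chosen here for illustration rather than taken verbatim from that paper:

\[
D(h) \;=\; R(h) \;-\; \hat{R}(h),
\]

where R(h) denotes the expected (true) risk of hypothesis h and \hat{R}(h) its empirical risk on the training sample. Collective stability controls how large this gap can grow, which is the effect our experiments measure.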
[1] Ben London et al. Collective Stability in Structured Prediction: Generalization from One Example, 2013, ICML.
[2] Ben Taskar et al. Discriminative Probabilistic Models for Relational Data, 2002, UAI.
[3] Ben Taskar et al. Max-Margin Markov Networks, 2003, NIPS.
[4] Lise Getoor et al. Collective Classification in Network Data, 2008, AI Mag.