Semi-supervised mixture-of-experts classification
[1] Elie Bienenstock, et al. Neural Networks and the Bias/Variance Dilemma, 1992, Neural Computation.
[2] Tom M. Mitchell, et al. Learning to Extract Symbolic Knowledge from the World Wide Web, 1998, AAAI/IAAI.
[3] J. Heckman. Sample selection bias as a specification error, 1979.
[4] Fabio Gagliardi Cozman, et al. Semi-Supervised Learning of Mixture Models and Bayesian Networks, 2003.
[5] Catherine Blake, et al. UCI Repository of machine learning databases, 1998.
[6] Tom Heskes, et al. Bias/Variance Decompositions for Likelihood-Based Estimators, 1998, Neural Computation.
[7] Geoffrey E. Hinton, et al. Adaptive Mixtures of Local Experts, 1991, Neural Computation.
[8] Tong Zhang, et al. The Value of Unlabeled Data for Classification Problems, 2000, ICML.
[9] Ron Kohavi, et al. Bias Plus Variance Decomposition for Zero-One Loss Functions, 1996, ICML.
[10] Michael I. Jordan, et al. Learning from Incomplete Data, 1994.
[11] Sebastian Thrun, et al. Text Classification from Labeled and Unlabeled Documents using EM, 2000, Machine Learning.
[12] Avrim Blum, et al. Combining Labeled and Unlabeled Data with Co-Training, 1998, COLT.
[13] David J. Miller, et al. A Mixture of Experts Classifier with Learning Based on Both Labelled and Unlabelled Data, 1996, NIPS.
[14] David A. Landgrebe, et al. The effect of unlabeled samples in reducing the small sample size problem and mitigating the Hughes phenomenon, 1994, IEEE Trans. Geosci. Remote Sens.