Training conditional random fields via gradient tree boosting