Empirical training for conditional random fields
In this paper (Zhu et al., 2013), we present a practically scalable training method for CRFs called Empirical Training (EP). We show that standard training with the unregularized log likelihood can have many maximum likelihood estimates (MLEs). Empirical training has a unique closed-form MLE that can be computed very quickly from the empirical distribution, and this MLE is also an MLE of the standard training; empirical training can therefore be competitive in precision with standard training and piecewise training. We also show that empirical training is unaffected by the label bias problem even though it is a locally normalized model. Experiments on two real-world NLP datasets show that empirical training reduces training time from weeks to seconds and obtains results competitive with standard and piecewise training on linear-chain CRFs, especially when training data are insufficient.
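To illustrate why a closed-form estimate computed from the empirical distribution is fast, the sketch below builds a locally normalized model from relative frequencies in a single pass over the training data. This is not the authors' code; the function names and the smoothing constant are hypothetical, and the sketch only assumes that the estimator reduces to (smoothed) empirical conditional frequencies of the current label given the previous label and the current observation.

```python
# Illustrative sketch (not the paper's implementation): a closed-form,
# locally normalized estimate obtained from empirical counts of a
# labeled corpus. One pass over the data; no iterative optimization.

from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

START = "<s>"  # hypothetical start-of-sequence label


def empirical_estimate(
    corpus: Iterable[Tuple[List[str], List[str]]], alpha: float = 1e-6
) -> Dict[Tuple[str, str], Dict[str, float]]:
    """Return P(label | previous label, observation) as relative frequencies.

    Each corpus item is (observations, labels) for one sequence.
    `alpha` is a small add-alpha smoothing constant (an assumption here,
    not something specified by the paper).
    """
    counts: Dict[Tuple[str, str], Dict[str, float]] = defaultdict(
        lambda: defaultdict(float)
    )
    for xs, ys in corpus:
        prev = START
        for x, y in zip(xs, ys):
            counts[(prev, x)][y] += 1.0
            prev = y

    probs: Dict[Tuple[str, str], Dict[str, float]] = {}
    for context, label_counts in counts.items():
        total = sum(label_counts.values()) + alpha * len(label_counts)
        probs[context] = {
            y: (c + alpha) / total for y, c in label_counts.items()
        }
    return probs


if __name__ == "__main__":
    # Toy tagging corpus: two short labeled sentences.
    corpus = [
        (["the", "dog", "barks"], ["DET", "NOUN", "VERB"]),
        (["the", "cat", "sleeps"], ["DET", "NOUN", "VERB"]),
    ]
    model = empirical_estimate(corpus)
    print(model[("DET", "dog")])  # NOUN gets essentially all the mass
```

Because the estimate is just counting followed by normalization, its cost is linear in the size of the training data, which is consistent with the reported reduction of training time from weeks to seconds.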