Improving Chinese Tokenization With Linguistic Filters On Statistical Lexical Acquisition
[1] Keh-Yih Su, et al. A Preliminary Study on Unknown Word Problem in Chinese Word Segmentation, 1993, ROCLING/IJCLCLP.
[2] Zimin Wu, Gwyneth Tseng. Chinese Text Segmentation for Text Retrieval: Achievements and Problems, 1993, J. Am. Soc. Inf. Sci.
[3] Chilin Shih, et al. A Stochastic Finite-State Word-Segmentation Algorithm for Chinese, 1994, ACL.
[4] Pascale Fung, et al. Statistical Augmentation of a Chinese Machine-Readable Dictionary, 1994, ArXiv.
[5] Dekai Wu, et al. Aligning a Parallel English-Chinese Corpus Statistically With Lexical Criteria, 1994, ACL.
[6] Keh-Yih Su, et al. Statistical Models for Word Segmentation and Unknown Word Resolution, 1992, ROCLING.
[7] Chao-Huang Chang, et al. HMM-Based Part-of-Speech Tagging for Chinese Corpora, 1993, VLC@ACL.