An Unsupervised Machine Learning Approach to Segmentation of Clinician-Entered Free Text

Natural language processing, an important tool in biomedicine, fails without successful segmentation of words and sentences. Tokenization is a form of segmentation that identifies the boundaries separating semantic units, such as words, dates, numbers, and symbols, within a text. We sought to construct a highly generalizable tokenization algorithm with no prior knowledge of characters or their function, based solely on the inherent statistical properties of token and sentence boundaries. Tokenizing clinician-entered free text, we achieved precision of 92% and recall of 93%, respectively, compared with a whitespace token boundary detection algorithm. We classified over 80% of punctuation characters correctly, based on manual disambiguation with high inter-rater agreement (kappa = 0.916). Our algorithm effectively discovered the properties of whitespace and punctuation in the corpus without prior knowledge of either. Given the dynamic nature of biomedical language and the variety of distinct sublanguages in use, the effectiveness and generalizability of our novel tokenization algorithm make it a valuable tool.
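One way to discover token boundaries purely from the statistics of a corpus, with no prior knowledge of whitespace or punctuation, is Harris-style successor variety: propose a boundary wherever many different characters may follow a given prefix. The sketch below illustrates that general idea only; it is not the authors' actual algorithm, and the toy corpus and threshold are assumptions for demonstration.

```python
from collections import defaultdict

def successor_varieties(corpus_words, text):
    """For each prefix of `text`, count the distinct characters that
    follow that prefix anywhere in the corpus (successor variety)."""
    successors = defaultdict(set)
    for word in corpus_words:
        for i in range(len(word)):
            successors[word[:i]].add(word[i])
    return [len(successors.get(text[:i], set())) for i in range(1, len(text) + 1)]

def segment(corpus_words, text, threshold=2):
    """Insert a token boundary wherever successor variety is high,
    i.e. where the corpus allows many different continuations."""
    sv = successor_varieties(corpus_words, text)
    pieces, start = [], 0
    # sv[i - 1] is the variety after the prefix of length i; never
    # split after the final character.
    for i in range(1, len(text)):
        if sv[i - 1] >= threshold:
            pieces.append(text[start:i])
            start = i
    pieces.append(text[start:])
    return pieces

# With no knowledge of whitespace, the statistics of this tiny corpus
# already suggest a boundary inside "thecat".
print(segment(["the", "then", "they", "cat", "cats"], "thecat"))
# → ['the', 'cat']
```

In a realistic setting the hard threshold would be replaced by a peak or entropy criterion over a large corpus, but the principle is the same: boundaries are positions of high uncertainty about the next character.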
