Improving Clinical Named-Entity Recognition with Transfer Learning.
Transfer learning is a powerful machine learning technique that allows prior knowledge to be internalized and reused for new tasks. It has become the standard starting point for recognition tasks in fields such as computer vision, yet in natural language processing (NLP) its application remains less prevalent. Our research investigates how, through transfer learning, existing knowledge can be used to build more accurate NLP models, which we then apply to a clinical named-entity recognition (NER) task. Our experimental results show that significantly better recognition performance can be obtained by leveraging knowledge from a base model trained on poorly annotated data.
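
To make the transfer step concrete, the following minimal sketch shows one common way of reusing a base model for a new NER task: copying the embedding and encoder weights of a tagger trained on a large, noisily annotated corpus into a fresh model with a new classification head, then fine-tuning on the clinical data. The abstract does not specify the architecture or label sets, so the BiLSTM tagger, dimensions, label counts, and checkpoint name below are illustrative assumptions, not the authors' actual setup.

    # Illustrative transfer-learning sketch (assumed architecture, not the paper's).
    import torch
    import torch.nn as nn

    class BiLSTMTagger(nn.Module):
        def __init__(self, vocab_size, embed_dim, hidden_dim, num_labels):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.encoder = nn.LSTM(embed_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * hidden_dim, num_labels)

        def forward(self, token_ids):
            states, _ = self.encoder(self.embed(token_ids))
            return self.classifier(states)

    # Base model: assumed to have been trained on a large, poorly annotated corpus.
    base = BiLSTMTagger(vocab_size=5000, embed_dim=64, hidden_dim=128, num_labels=9)
    # base.load_state_dict(torch.load("base_ner.pt"))  # hypothetical checkpoint

    # Target model: same encoder, new classifier head for the clinical label set.
    target = BiLSTMTagger(vocab_size=5000, embed_dim=64, hidden_dim=128, num_labels=5)
    target.embed.load_state_dict(base.embed.state_dict())      # transfer embeddings
    target.encoder.load_state_dict(base.encoder.state_dict())  # transfer encoder

    # Fine-tune all parameters on the (smaller) clinical NER training set.
    optimizer = torch.optim.Adam(target.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    tokens = torch.randint(0, 5000, (2, 12))  # dummy batch: 2 sentences x 12 tokens
    labels = torch.randint(0, 5, (2, 12))     # dummy BIO-style tags
    loss = loss_fn(target(tokens).view(-1, 5), labels.view(-1))
    loss.backward()
    optimizer.step()

In this sketch only the task-specific classification layer is initialized from scratch; whether the transferred layers are frozen or fine-tuned end to end is a design choice the paper's experiments would determine.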