This poster describes experiences processing the two-billion-word Hansard corpus using a fairly standard NLP pipeline on a high-performance cluster. We report how we parallelised and adapted a "traditional" single-threaded, batch-oriented application for a platform that differs greatly from the one for which it was originally designed. We start by discussing the tagging toolchain, its specific requirements and properties, and its performance characteristics. This is contrasted with a description of the cluster on which it was to run, including specific limitations such as the overhead of SAN-based storage. We then discuss the nature of the Hansard corpus and identify which of its properties prove particularly challenging on this system architecture. Our solution for tagging the corpus is then described, along with performance comparisons against a naive run on commodity hardware, and we weigh the gains of using high-performance machinery against relatively cheap commodity hardware. The poster provides a valuable scenario for large-scale NLP pipelines, along with lessons learnt from the experience.
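To illustrate the general approach of running a single-threaded, batch-oriented tagger over a very large corpus in parallel, the following is a minimal sketch only: the corpus is assumed to be pre-split into independent chunk files, and the `tagger` command, its flags, and the directory names are hypothetical placeholders rather than the actual toolchain or job-submission mechanism used on the cluster.

```python
# Minimal sketch: fan a single-threaded batch tagger out over corpus chunks,
# one OS process per chunk. All paths and the "tagger" command are assumptions.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

CHUNK_DIR = Path("corpus_chunks")   # pre-split input files (assumed to exist)
OUT_DIR = Path("tagged_chunks")
OUT_DIR.mkdir(exist_ok=True)

def tag_chunk(chunk: Path) -> Path:
    """Run the single-threaded tagger over one chunk as a separate process."""
    out_file = OUT_DIR / chunk.name
    # "tagger" stands in for whatever batch tool the pipeline invokes.
    subprocess.run(
        ["tagger", "--input", str(chunk), "--output", str(out_file)],
        check=True,
    )
    return out_file

if __name__ == "__main__":
    chunks = sorted(CHUNK_DIR.glob("*.txt"))
    # Because each chunk is independent, a strictly single-threaded tool can
    # still use all available cores when its instances run side by side.
    with ProcessPoolExecutor() as pool:
        for done in pool.map(tag_chunk, chunks):
            print(f"tagged {done}")
```

On an actual HPC cluster the same chunk-level decomposition would more likely be expressed as an array of scheduler jobs rather than a single multi-process script, but the underlying idea of exploiting document-level independence is the same.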