Introduction to special issue on example-based machine translation

This special issue on EBMT is largely based on contributions from the MT Summit X in Phuket, Thailand. Five of the papers here are extended, revised versions of presentations from the Second Workshop on Example-Based Machine Translation at the Summit (Cicekli, Lepage and Denoual, Langlais and Gotti, Quirk and Menezes, and Hutchins); one paper is an extended, revised version of research presented at the Summit itself (Liu, Wang and Wu); one paper (Wu) builds on a presentation from the panel session at the 2005 Workshop; and another (Carl) extends ideas from a panel session at the Summit proper. The remaining paper (Groves and Way) was submitted especially for this special issue.

In a broad categorization of these contributions, there are three discussion papers, which propose different definitions of and views on EBMT, as well as six technical papers which contain state-of-the-art system descriptions. Five years after the first EBMT workshop in 2001, which led to our book (Carl and Way 2003), the field has considerably matured and evolved, but what counts as essential methods and techniques for an EBMT system remains open to controversy (cf. also Turcato and Popowich (2003) and Somers (2003) for other views). That said, perhaps the most notable change is the increased usage of statistical techniques and structured representations. On the one hand we see a continuation and refinement of previous systems (Cicekli, Groves and Way, Quirk and Menezes, Liu et al.), but also some welcome new (Langlais and Gotti) and innovative methods (Lepage and Denoual).

We start this special issue with the three discussion papers, which shed light on how one might view EBMT from a number of different directions, especially given the obvious convergence between EBMT and phrase-based statistical MT (SMT).