Distributed Training for Multilingual Combined Tokenizer using Deep Learning Model and Simple Communication Protocol

In the big data era, text processing becomes harder as data volumes grow. At the same time, deep learning models increasingly solve natural language processing tasks without the need for hand-crafted rules. In this research, we present two contributions: one in text preprocessing and one in distributed training for neural-based models. We address the most common text preprocessing steps, word and sentence tokenization, with a combined tokenizer, and we compare a single-language model against a multilingual model. We also provide a simple communication layer built on the MQTT protocol to support distributed training.
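To make the two contributions concrete, two minimal sketches in Python follow; neither is the paper's implementation, and the names in them are illustrative assumptions.

The first sketch treats combined word and sentence tokenization as character tagging: a neural model is assumed to emit one label per character, and a small decoder turns the label sequence into tokens grouped by sentence. The I/T/S label set and the decode function are assumptions for illustration, not necessarily the paper's scheme.

    # Minimal sketch: joint word + sentence tokenization as character tagging.
    # Assumed label set (not the paper's): "I" = inside a token,
    # "T" = last character of a token, "S" = last character of a
    # sentence-final token. A neural tagger would predict `labels`.
    def decode(text, labels):
        sentences, sentence, current = [], [], []
        for ch, lab in zip(text, labels):
            current.append(ch)
            if lab in ("T", "S"):
                token = "".join(current).strip()
                if token:
                    sentence.append(token)
                current = []
            if lab == "S":
                sentences.append(sentence)
                sentence = []
        return sentences

    # Example: two sentences; whitespace carries an "I" label.
    print(decode("Hi. Go!", ["I", "T", "S", "I", "I", "T", "S"]))
    # [['Hi', '.'], ['Go', '!']]

The second sketch shows how a worker node could exchange model updates over MQTT with the paho-mqtt client (1.x API), in the spirit of the simple communication layer above: workers publish local updates to one topic and receive averaged weights on another. The broker address, topic names, and the model object's methods are hypothetical.

    # Minimal sketch: exchanging model updates over MQTT with paho-mqtt (1.x API).
    # Broker host, topic names, and the model's methods are hypothetical.
    import json
    import paho.mqtt.client as mqtt

    BROKER_HOST = "localhost"        # hypothetical Mosquitto broker
    TOPIC_WEIGHTS = "train/weights"  # aggregator -> workers (hypothetical)
    TOPIC_UPDATES = "train/updates"  # workers -> aggregator (hypothetical)

    def on_message(client, userdata, msg):
        # The aggregator node publishes averaged weights on TOPIC_WEIGHTS.
        weights = json.loads(msg.payload.decode("utf-8"))
        userdata["model"].set_weights(weights)    # hypothetical model API

    def run_worker(model):
        client = mqtt.Client(userdata={"model": model})
        client.on_message = on_message
        client.connect(BROKER_HOST, 1883)
        client.subscribe(TOPIC_WEIGHTS)
        client.loop_start()                       # network loop in background
        for batch in model.batches():             # hypothetical data iterator
            update = model.train_step(batch)      # local forward/backward pass
            client.publish(TOPIC_UPDATES, json.dumps(update))
        client.loop_stop()

One design observation: MQTT's broker-mediated publish/subscribe keeps workers decoupled, so nodes can join or leave training without the other nodes tracking connections; the trade-off is that updates must be serialized and routed through the broker rather than sent peer to peer.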
