FarSpeech: Arabic Natural Language Processing for Live Arabic Speech

This paper presents FarSpeech, QCRI's combined Arabic speech recognition, natural language processing (NLP), and dialect identification pipeline. It uses modern web technologies to capture live audio, transcribes the Arabic speech, processes the transcripts with NLP tools, and identifies the speaker's dialect. For transcription, we use QATS, a Kaldi-based ASR system built on Time Delay Neural Networks (TDNNs). For NLP, we use a state-of-the-art Arabic NLP toolkit that employs deep neural network and SVM-based models. Finally, our dialect identification system combines both acoustic and linguistic input. FarSpeech presents different screens to display the transcripts, text segmentation, part-of-speech tags, recognized named entities, diacritized text, and the identified dialect of the speech.
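To make the pipeline structure concrete, the following is a minimal Python sketch of how the FarSpeech stages could be chained, assuming hypothetical component interfaces (transcribe, analyze, identify_dialect) that stand in for the QATS ASR system, the Arabic NLP toolkit, and the dialect identification module described above; these names and interfaces are illustrative and do not come from the paper or its implementation.

    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class FarSpeechResult:
        """Container for the outputs shown on FarSpeech's result screens."""
        transcript: str
        segmentation: List[str] = field(default_factory=list)
        pos_tags: List[str] = field(default_factory=list)
        named_entities: List[str] = field(default_factory=list)
        diacritized: str = ""
        dialect: str = ""


    def transcribe(audio: bytes) -> str:
        """Hypothetical stand-in for the QATS (Kaldi TDNN) ASR component."""
        raise NotImplementedError("wire this to the ASR service")


    def analyze(transcript: str) -> Dict[str, object]:
        """Hypothetical stand-in for the Arabic NLP toolkit
        (segmentation, POS tagging, NER, diacritization)."""
        raise NotImplementedError("wire this to the NLP service")


    def identify_dialect(audio: bytes, transcript: str) -> str:
        """Hypothetical stand-in for dialect ID over acoustic and linguistic input."""
        raise NotImplementedError("wire this to the dialect ID service")


    def process_utterance(audio: bytes) -> FarSpeechResult:
        """Run captured audio through the ASR -> NLP -> dialect ID chain."""
        transcript = transcribe(audio)
        nlp = analyze(transcript)
        return FarSpeechResult(
            transcript=transcript,
            segmentation=nlp.get("segmentation", []),
            pos_tags=nlp.get("pos", []),
            named_entities=nlp.get("ner", []),
            diacritized=nlp.get("diacritized", ""),
            dialect=identify_dialect(audio, transcript),
        )

In this sketch, each stage is an independent callable, so the web front end only needs to submit captured audio to process_utterance and render the fields of the returned result on its respective screens.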