Towards Disorder-Independent Automatic Assessment of Emotional Competence in Neurological Patients with a Classical Emotion Recognition System: Application in Foreign Accent Syndrome

Emotive speech is a non-invasive and cost-effective biomarker across a wide spectrum of neurological disorders, and computational systems have been built to automate its diagnostic use. To explore the automation of routine speech analysis in the presence of hard-to-learn pathology patterns, we propose a framework for assessing a patient's level of competence in paralinguistic communication. Initially, the assessment relies on a perceptual experiment completed by human listeners; from their responses, a model we call the Aggregated Ear draws a conclusion about the level of competence the patient demonstrates. We then automate the Aggregated Ear, obtaining a computational model that summarizes the portfolio of speech evidence on the patient. The summarizing system has a classical emotion recognition system as its central component. The code and the data are available from the corresponding author on request.