Four-Features Evaluation of Text to Speech Systems for Three Social Robots

The success of social robots is directly linked to their ability to interact with people. Humans communicate both verbally and non-verbally, so both channels are essential for social robots to achieve natural human–robot interaction. This work focuses on verbal communication, since most social robots implement an interaction system endowed with verbal capabilities. To provide these capabilities, a social robot must be equipped with an artificial voice; in robotics, Text to Speech (TTS) is the most common speech synthesis technique. The performance of a speech synthesizer is mainly evaluated by its similarity to the human voice, particularly its intelligibility and expressiveness. In this paper, we present a comparative study of eight off-the-shelf TTS systems used in social robots. To carry out the study, 125 participants evaluated the performance of the following TTS systems: Google, Microsoft, Ivona, Loquendo, Espeak, Pico, AT&T, and Nuance. The evaluation was performed after the participants watched videos in which a social robot communicates verbally using one TTS system. The participants then completed a questionnaire rating each TTS system on four features: intelligibility, expressiveness, artificiality, and suitability. Four research questions were posed to determine whether the TTS systems can be ranked with respect to each evaluated feature or whether, on the contrary, there are no significant differences between them. Our study shows that participants perceived differences between the evaluated TTS systems in terms of intelligibility, expressiveness, and artificiality. The experiments also indicated a relationship between the physical appearance of the robots (embodiment) and the perceived suitability of the TTS systems.
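
To make the ranking question concrete, the following is a minimal sketch of how per-participant ratings for one feature could be compared across TTS systems. It is not the paper's actual analysis: the choice of a Kruskal-Wallis test, the 1-5 rating scale, and all of the data below are illustrative assumptions.

    # Minimal sketch (assumption): Likert ratings (1-5) for one feature,
    # e.g. intelligibility, with one list of participant ratings per TTS system.
    from statistics import mean

    from scipy import stats

    # Hypothetical ratings keyed by TTS system (values are made up).
    ratings = {
        "Google":    [4, 5, 4, 3, 5, 4],
        "Microsoft": [4, 4, 3, 4, 4, 5],
        "Espeak":    [2, 3, 2, 3, 2, 2],
        "Pico":      [3, 3, 2, 3, 3, 4],
    }

    # Non-parametric test for differences in rating distributions across systems.
    h_stat, p_value = stats.kruskal(*ratings.values())
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

    if p_value < 0.05:
        # A ranking is only meaningful when the systems differ significantly.
        ranking = sorted(ratings, key=lambda system: mean(ratings[system]), reverse=True)
        print("Ranking by mean rating:", ranking)
    else:
        print("No significant difference between systems for this feature.")

The same procedure would be repeated per feature (intelligibility, expressiveness, artificiality, suitability), producing either a ranking or a "no significant difference" outcome for each, as in the research questions above.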
