Designing a Personality-Shifting Agent for Speech Recognition Failure

This paper proposes a method that shifts an agent's personality during speech interaction to reduce users' negative impressions of a speech recognition system when recognition fails. Recognition failures make users uncomfortable, and rephrasing commands imposes a high cognitive load. In the proposed method, the agent has multiple personalities; when recognition fails, the active personality accepts responsibility for the failure and is removed from the task, which aims to dispel the user's negative impression of the agent. The system hardware stays the same, so the user can continue interacting with another of the agent's personalities. The personality shift is conveyed by a change in voice tone and LED color. Experimental results suggest that the proposed method reduces users' negative impressions by improving communication between users and the agent.
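The mechanism described above can be sketched as a simple controller: the agent holds an ordered set of personalities, and on a recognition failure the active one apologizes, takes the blame, and is retired in favor of the next. This is a minimal illustration, not the paper's implementation; the class and attribute names (`Personality`, `voice_tone`, `led_color`, `on_recognition_failure`) are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    name: str
    voice_tone: str  # hypothetical TTS style preset for this personality
    led_color: str   # LED color shown while this personality is active


class PersonalityShiftingAgent:
    """On a recognition failure, the active personality accepts
    responsibility and is removed; the next personality takes over."""

    def __init__(self, personalities):
        if not personalities:
            raise ValueError("at least one personality is required")
        self.personalities = list(personalities)

    @property
    def active(self):
        # The first personality in the list is the one currently interacting.
        return self.personalities[0]

    def on_recognition_failure(self):
        failed = self.personalities.pop(0)
        if not self.personalities:
            # Fallback: if no personality remains, keep the last one on duty.
            self.personalities.append(failed)
            return f"{failed.name}: Sorry, let me try that again."
        # The failed personality takes the blame and hands over to the next,
        # signaled externally by a new voice tone and LED color.
        return (f"{failed.name}: Sorry, that was my fault. "
                f"{self.active.name} will help you from now on.")
```

A usage example: an agent starts with two personalities; after one failure, the second personality (with its own voice tone and LED color) continues the task on the same hardware.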
