Although there has recently been significant progress in the general usage and acceptance of speech technology in several developed countries, major gaps still prevent the majority of potential users from relying on speech technology-based solutions in daily life. In this paper some of these gaps are listed and some directions for bridging them are proposed.

Perhaps the most important gap is the "black box" thinking of software developers. They suppose that inputting text into a text-to-speech (TTS) system will result in voice output that is relevant to the given context of the application. In the case of automatic speech recognition (ASR), they expect an accurate text transcription (even including punctuation). This ignores the fact that even human listeners are strongly influenced by a priori knowledge of the context, the communication partners, and so on. For example, in a speech-to-speech translation system built by serially combining ASR + machine translation + TTS, a male speaker with a slow speaking rate might be represented by a fast female voice at the other end. The science of semantic modelling is still in its infancy. In order to produce successful applications, researchers of speech technology should find ways to build a priori knowledge into the application environment and adapt their technologies and interfaces to the given scenario.

This leads us to the gap between generic and domain-specific solutions. For example, intelligibility and speaking-rate variability are the most important TTS evaluation factors for visually impaired users, while human-like announcements at a standard rate and speaking style are required for railway station information systems. A widening gap also separates "large" languages and markets from "small" ones. Another gap lies between closed and open application environments. For example, hardly any mobile operating system allows TTS output to be redirected into a live telephone conversation, which is a basic need for rehabilitation applications serving speech-impaired people. Creating an open platform where "smaller" and "bigger" players in the field could equally plug in their engines and solutions, with proper quality assurance and a fair share of income, could help the situation. The paper gives examples of how our teams at BME TMIT try to bridge the gaps listed.
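The information loss in a cascaded speech-to-speech translation pipeline can be sketched in a few lines of Python. All names below are hypothetical and purely illustrative (they do not correspond to any real toolkit): the point is only that a text-only interface between the stages discards the speaker attributes the listener perceives.

```python
from dataclasses import dataclass

# Illustrative data type: what a listener perceives besides the words.
@dataclass
class Speech:
    text: str
    gender: str     # perceived speaker gender
    rate_wpm: int   # speaking rate in words per minute

def asr(speech: Speech) -> str:
    # A plain ASR stage emits only the transcript; the speaker's
    # gender and speaking rate are discarded at this interface.
    return speech.text

def translate(text: str) -> str:
    # Stand-in for machine translation (identity, for the sketch).
    return text

def tts(text: str) -> Speech:
    # A generic TTS engine speaks with its own default voice,
    # unrelated to the original speaker.
    return Speech(text=text, gender="female", rate_wpm=190)

source = Speech(text="good morning", gender="male", rate_wpm=110)
output = tts(translate(asr(source)))

# The slow male speaker is now represented by a fast female voice.
assert output.gender != source.gender
assert output.rate_wpm > source.rate_wpm
```

Bridging this gap would mean carrying speaker metadata (gender, rate, style) across the stage boundaries so that the TTS stage can adapt its voice to the original speaker, as argued above.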