HMM-based speech synthesis with unsupervised labeling of accentual context based on F0 quantization and average voice model

This paper proposes an HMM-based speech synthesis technique that requires no manual labeling of accent information for the target speaker's training data. To model the fundamental frequency (F0) of speech appropriately, the proposed technique uses coarsely quantized F0 symbols instead of accent types in the context-dependent labels. F0 quantization makes it possible to label the F0 contexts of the training data automatically. At synthesis time, an average voice model, trained in advance on multiple speakers' speech data that has been manually labeled with accent information, is used to create the label sequence for synthesis. Specifically, the input text is converted to a full-context label sequence, and an F0 contour is generated from this label sequence and the average voice model. A label sequence including the quantized F0 symbols is then created from the generated F0 contour. We conduct objective and subjective evaluation tests and discuss the results.
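As a rough illustration of the labeling idea, the sketch below shows one way a frame-level F0 contour could be mapped to a small set of coarse symbols that replace accent types in the context labels. The details (log-F0 domain, uniform binning, the number of levels, and a separate unvoiced symbol) are assumptions for illustration only and are not specified in the abstract.

```python
# Minimal sketch of coarse F0 quantization for context labeling.
# Assumed details: log-F0 domain, uniform binning into a small fixed
# number of levels, and a separate "UV" symbol for unvoiced frames.
import numpy as np

def quantize_f0(f0_hz, num_levels=4):
    """Map a frame-level F0 contour (Hz, 0 for unvoiced) to coarse symbols.

    Returns strings such as "L0".."L3" for voiced frames and "UV" for
    unvoiced frames; such symbols would stand in for accent types in the
    context-dependent labels.
    """
    f0_hz = np.asarray(f0_hz, dtype=float)
    voiced = f0_hz > 0.0
    symbols = np.full(f0_hz.shape, "UV", dtype=object)
    if voiced.any():
        log_f0 = np.log(f0_hz[voiced])
        lo, hi = log_f0.min(), log_f0.max()
        # Uniform bins over the observed log-F0 range.
        width = (hi - lo) / num_levels or 1.0
        levels = np.minimum((log_f0 - lo) // width, num_levels - 1).astype(int)
        symbols[voiced] = [f"L{k}" for k in levels]
    return symbols.tolist()

# Example: a toy contour with unvoiced frames (0 Hz).
print(quantize_f0([0, 120, 140, 180, 220, 0, 200]))
```

At synthesis time, the same quantization would be applied to the F0 contour generated by the average voice model, so that the target speaker's model can be used with labels containing these F0 symbols rather than manually assigned accent types.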