Phonetically-based LPC vector quantization of high quality speech

In this paper, we present a phonetically-based LPC vector quantization (VQ) method for high-quality speech. Our objective is to quantize the LPC parameters at the lowest possible bit rate with no noticeable difference in listening tests when synthetic speech is obtained by exciting the quantized LPC synthesis filter with the unquantized excitation signal derived from inverse filtering. The proposed scheme uses speaker-independent speech classification to adapt the corresponding VQs. The classifier divides the speech into 7 phonetic categories, and the LPC parameters of each category are quantized by a dedicated VQ. In our experiments, listening tests are made to determine the just-unnoticeable-difference bit number for each category. Since the results show that some categories do not require as many bits as others, we further combine the 7 categories into 4 groups. The conclusion is that the synthetic speech is negligibly different from the original at 14 bits per frame, and that a total of 18 bits per frame can be sufficient without introducing any perceptible quantization noise.
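The core idea of the scheme can be sketched as class-conditional vector quantization: classify each frame into one of the 7 phonetic categories, then quantize its LPC parameter vector with that category's dedicated codebook. The sketch below illustrates this structure only; the classifier, the codebooks, the LPC order, and the per-category codebook sizes are all placeholder assumptions, not the paper's trained components.

```python
import numpy as np

NUM_CATEGORIES = 7   # phonetic categories, as in the paper
LPC_ORDER = 10       # assumed LPC order (not stated in the abstract)

rng = np.random.default_rng(0)

# Placeholder codebooks, one per category. Sizes are allowed to differ,
# reflecting the finding that some categories need fewer bits than others.
codebooks = {c: rng.standard_normal((2 ** (3 + c % 3), LPC_ORDER))
             for c in range(NUM_CATEGORIES)}

def classify(frame_features: np.ndarray) -> int:
    """Stand-in for the speaker-independent phonetic classifier."""
    return int(abs(int(frame_features.sum() * 1000)) % NUM_CATEGORIES)

def quantize(lpc_vector: np.ndarray, category: int):
    """Nearest-neighbor search in the codebook of the given category."""
    cb = codebooks[category]
    dists = np.sum((cb - lpc_vector) ** 2, axis=1)
    idx = int(np.argmin(dists))
    # The transmitted data are the category label and the codebook index.
    return idx, cb[idx]

frame = rng.standard_normal(LPC_ORDER)
cat = classify(frame)
index, reconstruction = quantize(frame, cat)
```

The receiver holds the same codebooks, so category label plus codebook index fully determine the reconstructed LPC vector; the bit cost per frame is the label bits plus log2 of the selected codebook's size.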