Use of an auditory model to improve speech coders

A method for incorporating an auditory masking model in a speech coder using traditional articulatory models is presented. The auditory model attempts to capture the frequency selectivity and masking properties of the human cochlea. Coding gain is achieved by analyzing the perceptual content of each spectral sample. The scheme is thus able to introduce selective distortion that is a direct function of human auditory perception and is therefore optimally matched to the hearing process. It is shown that good coding gain can be obtained while maintaining excellent speech quality. The algorithm can be used on its own or as a front end for traditional vocoders. It can also be implemented with very little computational overhead and low coding delay.
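
The abstract does not give the details of the masking computation. The following is a minimal sketch, assuming a Bark-scale spreading-function approach, of how spectral samples that fall below an estimated masking threshold might be identified and discarded before coding. All function names, the spreading slopes, and the `drop_db` parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bark(f_hz):
    """Approximate Bark-scale mapping of frequency in Hz (Zwicker-style formula)."""
    return 13.0 * np.arctan(0.00076 * f_hz) + 3.5 * np.arctan((f_hz / 7500.0) ** 2)

def masked_spectrum(frame, fs=8000, drop_db=20.0):
    """Zero spectral samples assumed to be masked by stronger neighbours.

    frame   : 1-D array of speech samples (one analysis frame)
    fs      : sampling rate in Hz
    drop_db : assumed masking skirt; a component more than drop_db below the
              spread masker level is treated as inaudible (illustrative value)
    """
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    power_db = 10.0 * np.log10(np.abs(spec) ** 2 + 1e-12)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    z = bark(freqs)

    # Triangular spreading function: each component masks its neighbours,
    # falling off ~25 dB per Bark below and ~10 dB per Bark above the masker.
    threshold = np.full_like(power_db, -np.inf)
    for zi, pi in zip(z, power_db):
        dz = z - zi
        spread = np.where(dz >= 0, pi - 10.0 * dz, pi + 25.0 * dz)
        threshold = np.maximum(threshold, spread - drop_db)

    keep = power_db >= threshold
    return np.where(keep, spec, 0.0), keep

# Example: count how many spectral samples survive for a random stand-in frame.
frame = np.random.randn(256)
coded_spec, keep_mask = masked_spectrum(frame)
print(f"{keep_mask.mean():.0%} of spectral samples retained")
```

In such a scheme, only the retained components would need to be quantized and transmitted, which is one plausible source of the coding gain described above; the masked components can be dropped or coarsely quantized without audible distortion.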