Speaker-dependent Malay Vowel Recognition for a Child with Articulation Disorder Using Multi-layer Perceptron

This paper investigates the use of a neural network to recognize six Malay vowels produced by a child with an articulation disorder, in a speaker-dependent manner. The child was identified as having articulation errors in producing consonant sounds but not vowel sounds. The speech sounds were recorded at a sampling rate of 20 kHz with 16-bit resolution. Linear Predictive Coding (LPC) was used to extract 24 coefficients from speech segments of 20 ms to 100 ms. The LPC coefficients were converted into cepstral coefficients before being fed into a Multi-layer Perceptron with one hidden layer for training and testing. The Multi-layer Perceptron was able to recognize all six vowel sounds.
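The LPC-to-cepstral conversion described above is commonly implemented with a standard recursion on the predictor coefficients. The sketch below, in Python with NumPy, illustrates that recursion under the assumed convention A(z) = 1 - Σ a_k z^{-k}; the function name, signature, and the example order are illustrative and not taken from the paper.

```python
import numpy as np

def lpc_to_cepstrum(a, n_ceps):
    """Convert LPC coefficients a_1..a_p to LPC-cepstral coefficients.

    Assumes the predictor polynomial A(z) = 1 - sum_k a_k z^{-k}, so that:
        c_1 = a_1
        c_n = a_n + sum_{k=1}^{n-1} (k/n) c_k a_{n-k},   1 < n <= p
        c_n =       sum_{k=n-p}^{n-1} (k/n) c_k a_{n-k}, n > p
    """
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        # Direct a_n term exists only while n is within the LPC order p.
        acc = a[n - 1] if n <= p else 0.0
        # Recursive term: only lags with 1 <= n-k <= p contribute.
        for k in range(max(1, n - p), n):
            acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

# Sanity check: for a first-order predictor A(z) = 1 - a z^{-1},
# the cepstrum is known in closed form as c_n = a^n / n.
ceps = lpc_to_cepstrum(np.array([0.5]), 3)
```

For a first-order predictor this recursion reproduces the closed-form cepstrum c_n = a^n / n, which is a convenient way to verify an implementation before applying it to a full 24-coefficient frame.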