This paper proposes a technique that automatically estimates speakers' age using only acoustic, not linguistic, information from their utterances. The method is based on speaker recognition techniques. We first divided the speakers of two databases, JNAS and S(senior)-JNAS, into two groups through listening tests: one group contains only speakers whose speech sounds so aged that special care should be taken when talking to them, while the other contains the remaining speakers of the two databases. Each speaker group was then modeled with a GMM. Experiments on automatic identification of elderly speakers showed a correct identification rate of 91%. To improve performance, two prosodic features were added, i.e., speech rate and local perturbation of power. With these features, the identification rate rose to 95%. Finally, using scores obtained by integrating the GMMs with the prosodic features, experiments were carried out to automatically estimate speakers' age. The results showed a high correlation between speakers' age as estimated subjectively by humans and the automatically calculated 'agedness' score.
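The two-group GMM classification described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic Gaussian data stands in for frame-level acoustic features (e.g. MFCCs) that would in practice be extracted from JNAS / S-JNAS utterances, and the feature dimension, component count, and decision rule are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for frame-level acoustic features of each speaker group;
# real features would come from JNAS / S-JNAS recordings.
elderly_frames = rng.normal(loc=1.0, scale=1.0, size=(500, 13))
other_frames = rng.normal(loc=-1.0, scale=1.0, size=(500, 13))

# One GMM per speaker group, mirroring the paper's setup.
gmm_elderly = GaussianMixture(n_components=4, random_state=0).fit(elderly_frames)
gmm_other = GaussianMixture(n_components=4, random_state=0).fit(other_frames)

def classify(utterance_frames):
    """Label an utterance 'elderly' if its mean per-frame log-likelihood
    is higher under the elderly-group GMM than under the other-group GMM."""
    ll_elderly = gmm_elderly.score(utterance_frames)
    ll_other = gmm_other.score(utterance_frames)
    return "elderly" if ll_elderly > ll_other else "other"

test_utt = rng.normal(loc=1.0, scale=1.0, size=(100, 13))
print(classify(test_utt))
```

The log-likelihood difference between the two models can also serve as a continuous score, which is the natural starting point for the graded 'agedness' estimate the paper derives by further integrating prosodic features.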