From sound to vocal gesture: learning to (co-)articulate with APEX

We report on two experiments illustrating the current performance of APEX, a computational model of articulation under development at SU and KTH. The main results are (i) that APEX reproduces observed formant data for vowels and voiced apical stops with high accuracy, and (ii) that it does so in an articulatorily natural manner. Although the relation between articulation and acoustic output is many-to-one, our observations suggest that recovering a unique mapping is significantly facilitated by invoking the natural physiological constraints embodied in a model such as APEX.