Confusion modelling for automated lip-reading using weighted finite-state transducers
[1] C. G. Fisher, et al. Confusions among visually perceived consonants, 1968, Journal of Speech and Hearing Research.
[2] Mehryar Mohri, et al. Finite-State Transducers in Language and Speech Processing, 1997, CL.
[3] M. Coleman, et al. Speechreading Skill and Visual Movement Sensitivity are Related in Deaf Speechreaders, 2005, Perception.
[4] Mehryar Mohri. Weighted Finite-State Transducer Algorithms: An Overview, 2004.
[5] Stephen J. Cox, et al. The challenge of multispeaker lip-reading, 2008, AVSP.
[6] Stephen J. Cox, et al. Application of weighted finite-state transducers to improve recognition accuracy for dysarthric speech, 2008, INTERSPEECH.
[7] Timothy F. Cootes, et al. Extraction of Visual Features for Lipreading, 2002, IEEE Trans. Pattern Anal. Mach. Intell.
[8] Timothy F. Cootes, et al. Active Appearance Models, 2001, IEEE Trans. Pattern Anal. Mach. Intell.
[9] Barry-John Theobald, et al. Comparing visual features for lipreading, 2009, AVSP.
[10] Fernando Pereira, et al. Weighted finite-state transducers in speech recognition, 2002, Comput. Speech Lang.
[11] Mehryar Mohri. Compact Representations by Finite-State Transducers, 1994, ACL.
[12] P. L. Jackson. The Theoretical Minimal Unit for Visual Speech Perception: Visemes and Coarticulation, 1988.
[13] Johan Schalkwyk, et al. OpenFst: A General and Efficient Weighted Finite-State Transducer Library, 2007, CIAA.