Human audio-visual consonant recognition analyzed with three bimodal integration models
[1] Sadaoki Furui et al. A stream-weight optimization method for audio-visual speech recognition using multi-stream HMMs, 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2004.
[2] Sabri Gurbuz et al. Multi-stream product modal audio-visual integration strategy for robust adaptive speech recognition, 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2002.
[3] Michael E. Tipping et al. Probabilistic Principal Component Analysis, 1999.
[4] G. A. Miller et al. An Analysis of Perceptual Confusions Among Some English Consonants, 1955.
[5] Christopher M. Bishop et al. Mixtures of Probabilistic Principal Component Analyzers, Neural Computation, 1999.
[6] L. Braida. Crossmodal Integration in the Identification of Consonant Segments, The Quarterly Journal of Experimental Psychology. A, Human Experimental Psychology, 1991.
[7] K. Grant et al. Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition, and auditory-visual integration, The Journal of the Acoustical Society of America, 1998.
[8] Zhanyu Ma et al. A probabilistic principal component analysis based hidden Markov model for audio-visual speech recognition, 2008 42nd Asilomar Conference on Signals, Systems and Computers, 2008.
[9] D. Massaro. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry, 1989.