MEI (Mandarin-English Information) is an English-Chinese crosslingual spoken document retrieval (CL-SDR) system developed during the Johns Hopkins University Summer Workshop 2000. We integrate speech recognition, machine translation, and information retrieval technologies to perform CL-SDR. MEI advocates a multi-scale paradigm in which both Chinese words and subwords (characters and syllables) are used for retrieval. Subword units complement word units in handling Chinese word tokenization ambiguity, Chinese homophone ambiguity, and out-of-vocabulary words in audio indexing. This paper focuses on multi-scale audio indexing in MEI. Experiments are based on the Topic Detection and Tracking corpora (TDT-2 and TDT-3), in which we indexed Voice of America Mandarin news broadcasts by speech recognition at both the word and subword scales. We discuss the development of the MEI syllable recognizer and the representation of spoken documents using overlapping subword n-grams and lattice structures. Results show that augmenting words with subwords improves CL-SDR performance.
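To make the idea of multi-scale indexing with overlapping subword n-grams concrete, here is a minimal sketch (not taken from the paper): it generates overlapping character and syllable bigrams that could serve as index terms alongside words. The helper name, the example phrase, the romanization, and the n-gram order are illustrative assumptions, not MEI's actual configuration.

```python
def overlapping_ngrams(units, n):
    """Return all overlapping n-grams over a sequence of subword units."""
    return [tuple(units[i:i + n]) for i in range(len(units) - n + 1)]

# A recognized Mandarin phrase represented at two subword scales:
# characters and (toneless) syllables. Both the phrase and its
# romanization are illustrative examples only.
characters = ["新", "闻", "广", "播"]          # "news broadcast"
syllables  = ["xin", "wen", "guang", "bo"]

# Multi-scale index terms: word-level terms would be added alongside these.
char_bigrams = overlapping_ngrams(characters, 2)
syl_bigrams  = overlapping_ngrams(syllables, 2)

print(char_bigrams)  # [('新', '闻'), ('闻', '广'), ('广', '播')]
print(syl_bigrams)   # [('xin', 'wen'), ('wen', 'guang'), ('guang', 'bo')]
```

Because overlapping n-grams do not commit to a single word segmentation, they can still match query terms when the recognizer or tokenizer splits a name or out-of-vocabulary word differently from the query.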