DIALOGUE CONTEXT-BASED RE-RANKING OF ASR HYPOTHESES

This paper shows how re-ranking of speech recognition (ASR) hypotheses can benefit from taking dialogue context into account. We carried out experiments with human subjects to investigate their ability to rank ASR hypotheses using dialogue context. Based on the results of these experiments, we explored how an automatic, machine-learnt ranker profits from dialogue context features. An evaluation of the ranking task shows that both the human subjects and the automatic classifier outperform the baseline (always choosing the topmost hypothesis of an N-best list) and that their performance improves as more dialogue context is made available. In fact, the automatic classifier performs slightly better than the human subjects and reduces sentence error rate by 53% compared to the baseline.
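
To make the re-ranking setup concrete, the following is a minimal Python sketch of the two conditions compared above: the baseline that always takes the top ASR hypothesis, and a re-ranker that combines ASR confidence with a dialogue-context feature. All names here (rerank, context_overlap, the data layout, the weights) are hypothetical illustrations; the paper's actual ranker is machine-learnt rather than hand-weighted.

    # Illustrative sketch, not the paper's implementation: re-ranking an
    # N-best list of ASR hypotheses using a simple dialogue-context feature.

    def baseline_choice(nbest):
        """Baseline: always pick the topmost hypothesis of the N-best list."""
        return nbest[0]

    def rerank(nbest, context, weights):
        """Pick the hypothesis maximising a weighted combination of ASR
        confidence and a context-match feature (both weights hypothetical)."""
        def score(hyp):
            s = weights["asr"] * hyp["confidence"]
            # Hypothetical context feature: lexical overlap between the
            # hypothesis and the preceding system prompt in the dialogue.
            overlap = len(set(hyp["words"]) & set(context["prev_prompt_words"]))
            return s + weights["context_overlap"] * overlap
        return max(nbest, key=score)

    # Example: the context makes the re-ranker prefer the second-best
    # ASR hypothesis over the (acoustically stronger) top one.
    nbest = [
        {"words": ["book", "a", "cable"], "confidence": 0.81},
        {"words": ["book", "a", "table"], "confidence": 0.79},
    ]
    context = {"prev_prompt_words": ["which", "restaurant", "table"]}
    weights = {"asr": 1.0, "context_overlap": 0.1}
    print(baseline_choice(nbest)["words"])                  # ['book', 'a', 'cable']
    print(rerank(nbest, context, weights)["words"])         # ['book', 'a', 'table']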