Adapting an Automatic Speech Recognition System to Event Classification of Electroencephalograms

Identification of clinically significant events in electroencephalograms (EEGs) is a time-consuming task for neurologists [1]. EEG signals contain a variety of morphologies that reflect a combination of underlying brain activity and noise/artifacts. Automated classification of such events has the potential to speed up the interpretation process and provide valuable input to other types of EEG decision-making software. Because of the similarities between EEGs and speech signals, both of which carry temporal/sequential information, one of our long-term goals has been to apply well-developed concepts from speech recognition to EEG processing. We have previously approached this by applying hidden Markov models (HMMs) [2], [3] using a toolkit known as HTK [4]. In this poster, we discuss the application of a new high-performance speech recognition system known as Kaldi [5] to this task. Adapting this technology to the EEG problem has not been as straightforward as we had anticipated.
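To make the HMM-based framing concrete, the sketch below shows one common way such a classifier can be organized: a separate HMM is trained for each EEG event class on frame-level features, and a new segment is assigned to the class whose model yields the highest log-likelihood. This is only an illustrative sketch, not the HTK- or Kaldi-based systems described in this work; the library (hmmlearn), the feature representation, and the class labels are assumptions made for the example.

    # Illustrative sketch only: one way to pose EEG event classification as an
    # HMM problem. Library (hmmlearn), features, and labels are assumptions.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_class_models(segments_by_class, n_states=3):
        """Train one Gaussian HMM per EEG event class.

        segments_by_class: dict mapping a class label (e.g., "spike",
        "background") to a list of 2-D arrays of shape (n_frames, n_features),
        holding frame-level features computed from the EEG signal.
        """
        models = {}
        for label, segments in segments_by_class.items():
            X = np.vstack(segments)                   # concatenate all frames
            lengths = [len(seg) for seg in segments]  # per-segment frame counts
            model = GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=20)
            model.fit(X, lengths)
            models[label] = model
        return models

    def classify_segment(models, segment):
        """Assign a segment to the class whose HMM gives the highest log-likelihood."""
        return max(models, key=lambda label: models[label].score(segment))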