Today's Recommendations

2012 - IEEE Transactions on Audio, Speech, and Language Processing

Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition

We propose a novel context-dependent (CD) model for large-vocabulary speech recognition (LVSR) that leverages recent advances in using deep belief networks for phone recognition. We describe a pre-trained deep neural network hidden Markov model (DNN-HMM) hybrid architecture that trains the DNN to produce a distribution over senones (tied triphone states) as its output. The deep belief network pre-training algorithm is a robust and often helpful way to initialize deep neural networks generatively; this initialization can aid optimization and reduce generalization error. We illustrate the key components of our model, describe the procedure for applying CD-DNN-HMMs to LVSR, and analyze the effects of various modeling choices on performance. Experiments on a challenging business search dataset demonstrate that CD-DNN-HMMs can significantly outperform conventional context-dependent Gaussian mixture model (GMM)-HMMs, with absolute sentence accuracy improvements of 5.8% and 9.2% (relative error reductions of 16.0% and 23.2%) over CD-GMM-HMMs trained using the minimum phone error rate (MPE) and maximum-likelihood (ML) criteria, respectively.
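Editor's note: below is a minimal sketch (not the authors' code) of the hybrid scoring idea this abstract describes. A DNN posterior over senones is converted into a scaled likelihood for the HMM decoder by dividing by the senone prior; the layer shapes, function names, and sigmoid hidden units are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dnn_senone_posteriors(frames, weights, biases):
    """Feedforward pass of a (hypothetically pre-trained) DNN.

    `frames` holds stacked acoustic feature vectors; hidden layers use
    sigmoid units, as in the DBN-initialized networks of this era.
    Returns p(senone | frame) for every frame.
    """
    h = frames
    for W, b in zip(weights[:-1], biases[:-1]):
        h = 1.0 / (1.0 + np.exp(-(h @ W + b)))  # sigmoid hidden layer
    return softmax(h @ weights[-1] + biases[-1])  # posterior over senones

def hmm_emission_scores(posteriors, senone_priors, floor=1e-8):
    # Hybrid trick: p(frame | senone) is proportional to
    # p(senone | frame) / p(senone), with priors estimated from
    # training alignments; the HMM decoder consumes these scaled scores.
    return posteriors / np.maximum(senone_priors, floor)
```

The division by the prior is what lets a discriminatively trained classifier stand in for the GMM emission densities in an otherwise unchanged HMM decoder.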

2013 - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Improving deep neural networks for LVCSR using rectified linear units and dropout

Recently, pre-trained deep neural networks (DNNs) have outperformed traditional acoustic models based on Gaussian mixture models (GMMs) on a variety of large-vocabulary speech recognition benchmarks. Deep neural nets have also achieved excellent results on various computer vision tasks using a random "dropout" procedure that drastically reduces generalization error by randomly omitting a fraction of the hidden units in all layers. Since dropout helps avoid over-fitting, it has also been successful on a small-scale phone recognition task using larger neural nets. However, training deep neural net acoustic models for large-vocabulary speech recognition takes a very long time, and dropout is likely to only increase training time. Neural networks with rectified linear unit (ReLU) non-linearities have been highly successful for computer vision tasks and proved faster to train than standard sigmoid units, sometimes also improving discriminative performance. In this work, we show on a 50-hour English Broadcast News task that modified deep neural networks using ReLUs trained with dropout during frame-level training provide a 4.2% relative improvement over a DNN trained with sigmoid units, and a 14.4% relative improvement over a strong GMM/HMM system. We were able to obtain our results with minimal human hyper-parameter tuning using publicly available Bayesian optimization code.
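Editor's note: a short, hedged sketch of the recipe this abstract outlines, a feedforward acoustic model with ReLU hidden units and dropout active during frame-level training. The layer sizes, senone count, and dropout rate below are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_relu_dropout_dnn(input_dim=440, hidden_dim=1024,
                          num_layers=4, num_senones=5000, p_drop=0.5):
    # Stacked Linear -> ReLU -> Dropout blocks; all sizes are illustrative.
    layers, dim = [], input_dim
    for _ in range(num_layers):
        layers += [nn.Linear(dim, hidden_dim), nn.ReLU(), nn.Dropout(p_drop)]
        dim = hidden_dim
    layers.append(nn.Linear(dim, num_senones))  # logits over tied HMM states
    return nn.Sequential(*layers)

model = make_relu_dropout_dnn()
model.train()                            # dropout is active only in train mode
frames = torch.randn(32, 440)            # a batch of stacked acoustic frames
targets = torch.randint(0, 5000, (32,))  # forced-alignment senone labels
loss = F.cross_entropy(model(frames), targets)
loss.backward()
```

At decode time the model is switched to `eval()` so dropout is disabled; PyTorch's inverted dropout already rescales activations during training, so no separate weight scaling is needed at test time.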

Paper Keywords

neural network, machine learning, artificial neural network, deep learning, convolutional neural network, convolutional neural, natural language, deep neural network, speech recognition, social media, neural network model, hidden markov model, markov model, deep neural, medical image, computer vision, object detection, image classification, conceptual design, generative adversarial network, gaussian mixture model, facial expression, generative adversarial, deep convolutional neural, deep reinforcement learning, network architecture, adversarial network, mutual information, deep learning model, speech recognition system, deep convolutional, cad system, image denoising, speech enhancement, neural network architecture, convolutional network, facial expression recognition, feedforward neural network, expression recognition, nash equilibrium, domain adaptation, single image, loss function, based on deep neural net, deep learning method, semi-supervised learning, deep learning algorithm, data augmentation, neural networks based, image super-resolution, deep belief network, deep network, feature learning, enhancement based, image synthesis, multilayer neural network, unsupervised domain adaptation, learning task, latent space, single image super-resolution, conditional generative adversarial, media service, neural networks trained, acoustic modeling, theoretic analysis, speech enhancement based, conditional generative, multi-layer neural network, quantitative structure-activity relationship, conversational speech, information bottleneck, generative adversarial net, training deep neural, noisy label, training deep, adversarial perturbation, adversarial net, generative network, batch normalization, convolutional generative adversarial, social media service, deep convolutional generative, update rule, adversarial neural network, deep neural net, sensing mri, convolutional generative adversarial sample, wasserstein gan, machine-learning algorithm, robust training, ventral stream, binary weight, gan training, train deep neural, ventral visual pathway, deep generative adversarial, current speech recognition, pre-trained deep neural, analysis of tweets, deep feedforward neural, improving deep learning, frechet inception distance, training generative adversarial, stimulus feature, medical image synthesis, training generative, community intelligence, acoustic input, overcoming catastrophic forgetting, social reporting, networks reveal, context-dependent deep neural, deep compression, ventral pathway, weights and activation, extremely noisy