Deep learning has gained much attention in recent years and is increasingly important for mining value from big data. However, to make deep learning practical for a wide range of applications at Tencent Inc., three requirements must be met: 1) substantial computational power is required to train a practical model with tens of millions of parameters and billions of samples for products such as automatic speech recognition (ASR), and the number of parameters and the volume of training data are still growing; 2) the capability to train larger models is necessary for better model quality; 3) easy-to-use frameworks are valuable for running the many experiments needed for model selection, such as finding an appropriate optimization algorithm and tuning hyper-parameters. To accelerate training, support large models, and make experimentation easier, we built Mariana, the Tencent deep learning platform, which utilizes GPU and CPU clusters to train models in parallel with three frameworks: 1) a multi-GPU data parallelism framework for deep neural networks (DNNs); 2) a multi-GPU model parallelism and data parallelism framework for deep convolutional neural networks (CNNs); 3) a CPU cluster framework for large-scale DNNs. Mariana also provides built-in algorithms and features to facilitate experiments. Mariana has been in production use for more than one year, achieves state-of-the-art acceleration performance, and plays a key role in training models and improving model quality for automatic speech recognition and image recognition in Tencent WeChat, a mobile social platform, and for ad click-through rate prediction (pCTR) in Tencent QQ, an instant messaging platform, and Tencent Qzone, a social networking service.
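To illustrate the data-parallelism idea the abstract refers to (not Mariana's actual implementation, which the paper describes), the following is a minimal sketch in which each simulated replica computes a gradient on its own shard of a mini-batch, the gradients are averaged as an all-reduce would do, and a single parameter update is applied; the toy least-squares model, shard sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

def data_parallel_sgd_step(w, shards, grad_fn, lr):
    """One data-parallel SGD step: each simulated replica ("GPU")
    computes a gradient on its shard, the gradients are averaged
    (an all-reduce), and one shared update is applied."""
    grads = [grad_fn(w, X, y) for (X, y) in shards]  # per-replica gradients
    avg_grad = np.mean(grads, axis=0)                # gradient averaging
    return w - lr * avg_grad

def lsq_grad(w, X, y):
    """Toy model: gradient of 0.5 * ||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

# Shard one mini-batch across two simulated replicas; with equal shard
# sizes the averaged gradient equals the full mini-batch gradient.
shards = [(X[:4], y[:4]), (X[4:], y[4:])]
w = np.zeros(3)
for _ in range(500):
    w = data_parallel_sgd_step(w, shards, lsq_grad, lr=0.2)
```

Because the averaged per-shard gradients reproduce the full mini-batch gradient, the trajectory matches single-device SGD on the whole batch; in a real multi-GPU setting the list comprehension becomes concurrent per-device work and the mean becomes a collective communication step.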