In recent years, distributed deep learning has become popular in both industry and academia. Although researchers want to use distributed systems for training, the communication cost of synchronizing gradients has been reported to be a bottleneck. Using low-precision gradients is a promising technique for reducing the bandwidth requirement. In this work, we propose Auto Precision Scaling (APS), an algorithm that improves accuracy when gradients are communicated as low-precision floating-point values. APS improves accuracy at all precisions with negligible communication overhead. Our experimental results show that, for both image classification and segmentation, applying APS can train state-of-the-art models with 8-bit floating-point gradients at no or only a tiny accuracy loss (<0.05%). Furthermore, we can avoid any accuracy loss by designing a hybrid-precision technique. Finally, we propose a performance model to evaluate the proposed method; our experiments show that APS achieves a significant speedup over the state-of-the-art method. To make it available to researchers and developers, we design and implement a high-performance system for customized precision deep learning (CPD), which can simulate the training process using an arbitrary low-precision customized floating-point format. We integrate CPD into PyTorch and make it open-source to the public.