- IEEE Member: US $11.00
- Society Member: US $0.00
- IEEE Student Member: US $11.00
- Non-IEEE Member: US $15.00
Videos in this product
Accelerating Distributed Deep Learning By Adaptive Gradient Quantization
To accelerate distributed deep learning, the gradient quantization technique is widely used to reduce communication cost. However, existing quantization schemes suffer from either model accuracy degradation or a low compression ratio (arising from a redu…
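The abstract describes gradient quantization only in general terms. The sketch below is a minimal, hedged illustration of plain uniform stochastic gradient quantization, not the adaptive scheme presented in this video; it only shows the underlying trade-off between compression ratio and gradient fidelity. All function names and the choice of 16 quantization levels are assumptions made for illustration.

```python
# Minimal sketch of uniform stochastic gradient quantization (illustrative only,
# not the adaptive method described in this product). Each worker would send
# the scale plus small integer codes instead of full float32 gradients.
import numpy as np


def quantize(grad: np.ndarray, num_levels: int = 16):
    """Stochastically quantize a gradient vector to `num_levels` uniform levels."""
    scale = float(np.max(np.abs(grad)))
    if scale == 0.0:
        return scale, np.zeros(grad.shape, dtype=np.int8)
    # Map |g| / scale into [0, num_levels] and round stochastically so the
    # quantizer is unbiased: E[dequantize(quantize(g))] == g.
    normalized = np.abs(grad) / scale * num_levels
    lower = np.floor(normalized)
    codes = lower + (np.random.rand(*grad.shape) < (normalized - lower))
    return scale, (np.sign(grad) * codes).astype(np.int8)


def dequantize(scale: float, codes: np.ndarray, num_levels: int = 16) -> np.ndarray:
    """Recover an approximate gradient from the transmitted integer codes."""
    return scale * codes.astype(np.float32) / num_levels


if __name__ == "__main__":
    g = np.random.randn(1_000_000).astype(np.float32)
    scale, codes = quantize(g, num_levels=16)
    g_hat = dequantize(scale, codes, num_levels=16)
    # int8 codes need 4x less bandwidth than float32 gradients; the
    # reconstruction error shrinks as num_levels grows, which is the
    # accuracy-versus-compression tension the abstract refers to.
    print("relative error:", np.linalg.norm(g - g_hat) / np.linalg.norm(g))
```

Fewer levels compress harder but perturb the gradients more; an adaptive scheme, as the title suggests, would tune this trade-off during training rather than fixing it in advance.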