Overlap Local-SGD: An Algorithmic Approach to Hide Communication Delays in Distributed SGD
Distributed stochastic gradient descent (SGD) is essential for scaling machine learning algorithms to a large number of computing nodes. However, infrastructure variability, such as high communication delays or random node slowdowns, greatly impedes …
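To make the setting concrete, below is a minimal single-process sketch of Local SGD (periodic model averaging), the base pattern that Overlap Local-SGD builds on: each worker runs several local SGD steps, then all workers average their models, which stands in for an all-reduce. The communication-hiding (overlap) mechanism of the paper itself is not modeled; the loss, learning rate, and all constants are illustrative assumptions, not the paper's settings.

```python
import random

def grad(x, target, noise):
    # Stochastic gradient of the toy loss 0.5 * (x - target)^2, with additive noise
    # standing in for minibatch sampling noise.
    return (x - target) + noise

def local_sgd(num_workers=4, tau=5, rounds=20, lr=0.1, target=3.0, seed=0):
    rng = random.Random(seed)
    models = [0.0] * num_workers              # each worker's local model copy
    for _ in range(rounds):
        for k in range(num_workers):          # tau local SGD steps per worker
            for _ in range(tau):
                models[k] -= lr * grad(models[k], target, rng.gauss(0.0, 0.1))
        avg = sum(models) / num_workers       # periodic synchronization (simulated all-reduce)
        models = [avg] * num_workers
    return models[0]

x = local_sgd()
print(abs(x - 3.0) < 0.1)
```

Because workers synchronize only every `tau` steps instead of after every gradient step, communication happens a factor of `tau` less often, which is what makes such methods attractive when communication is the bottleneck.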