On Distributed Stochastic Gradient Descent For Nonconvex Functions In The Presence Of Byzantines

We consider the distributed stochastic optimization problem of minimizing a nonconvex function $f$ in an adversarial setting. Each of the $w$ worker nodes in the network is expected to send its stochastic gradient vector to the fusion center (or server); a sketch of this setup follows.
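This excerpt does not specify the paper's aggregation or screening rule, so the sketch below is only illustrative: it assumes a standard Byzantine-robust aggregator (coordinate-wise median) at the fusion center, a hypothetical toy nonconvex objective, and a hypothetical sign-flip attack. All function names and parameters here are assumptions, not the paper's method.

```python
import numpy as np

def stochastic_gradient(x, rng, noise_std=0.1):
    """Noisy gradient of a toy nonconvex objective f(x) = sum((x_i^2 - 1)^2) / 4."""
    true_grad = x**3 - x  # double-well potential: genuinely nonconvex
    return true_grad + noise_std * rng.standard_normal(x.shape)

def byzantine_robust_sgd(dim=5, w=10, n_byzantine=3, steps=200, lr=0.05, seed=0):
    """Server-side SGD: each step collects w worker gradients, n_byzantine of
    which are arbitrarily corrupted, and aggregates by coordinate-wise median
    (one common Byzantine-robust rule; the paper's actual rule may differ)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    for _ in range(steps):
        # Honest workers compute stochastic gradients at the current iterate.
        grads = np.stack([stochastic_gradient(x, rng) for _ in range(w)])
        # Byzantine workers may report arbitrary vectors; here, scaled sign flips.
        grads[:n_byzantine] = -10.0 * grads[:n_byzantine]
        # Coordinate-wise median tolerates a minority of corrupted reports.
        agg = np.median(grads, axis=0)
        x -= lr * agg
    return x

print(byzantine_robust_sgd())  # converges near a minimizer (+/-1 per coordinate)
```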