Adaptive Knowledge Distillation Based On Entropy

The knowledge distillation (KD) approach is widely used in deep learning, mainly for model size reduction. KD utilizes the soft labels of a teacher model, which contain the dark knowledge that one-hot ground-truth labels do not. This knowledge can improve the performance of the student model.
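For readers unfamiliar with the mechanics, the sketch below shows the standard soft-label distillation objective the abstract refers to (Hinton-style KD with a temperature). The function name and hyperparameter values are illustrative, not taken from this talk; the entropy-based adaptive scheme of the title would presumably modulate a loss of this form per sample, but the video itself describes the actual method.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Standard KD objective: blend soft-label distillation with hard-label CE.

    T (temperature) and alpha (soft/hard mixing weight) are illustrative
    hyperparameters, not values from this talk.
    """
    # Soft labels from the teacher carry the "dark knowledge": the relative
    # probabilities the teacher assigns to the non-target classes.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    # T^2 rescales the soft term so its gradient magnitude stays comparable
    # to the hard-label term (Hinton et al., 2015).
    soft_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * T * T
    # Hard-label term keeps the student anchored to the ground truth.
    hard_loss = F.cross_entropy(student_logits, targets)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```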