Revealing Backdoors, Post-Training, in DNN Classifiers via Novel Inference on Optimized Perturbations Inducing Group Misclassification

Recently, a special type of data poisoning (DP) attack against deep neural network (DNN) classifiers, known as a backdoor, was proposed. These attacks do not seek to degrade classification accuracy, but rather to have the classifier learn to classify to a