Monday, July 25, 2022

Defensive dropout for hardening deep neural networks under adversarial attacks

Authors
Siyue Wang, Xiao Wang, Pu Zhao, Wujie Wen, David Kaeli, Peter Chin, Xue Lin
Publication date
2018/11/5
Book
(Best Paper Candidate, ICCAD 2018) Proceedings of the International Conference on Computer-Aided Design
Pages
1-8
Description
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to legitimate inputs, can mislead a DNN into classifying them as any target label. This work provides a solution for hardening DNNs against adversarial attacks through defensive dropout. Beyond using dropout during training for the best test accuracy, we propose applying dropout at test time as well to achieve a strong defense effect. We frame the problem of building robust DNNs as an attacker-defender two-player game, in which the attacker and the defender know each other's strategies and optimize their own strategies toward an equilibrium. Based on observations of how the test dropout rate affects test accuracy and attack success rate, we propose a defensive dropout algorithm that determines an optimal test dropout rate given the neural network model …
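The core idea of test-time dropout can be illustrated with a minimal sketch. The snippet below (not the paper's implementation; the tiny two-layer network, weights, and dropout rate are hypothetical) keeps a dropout layer active during inference, so each forward pass randomly zeroes hidden units and the model's decision boundary becomes a moving target for a gradient-based attacker:

```python
import numpy as np

def dropout(x, rate, rng):
    # Zero each unit with probability `rate`, scaling survivors
    # by 1/(1 - rate) to preserve the expected activation.
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def predict(x, W1, W2, test_dropout_rate, rng):
    # Hypothetical one-hidden-layer network for illustration only.
    h = np.maximum(0.0, x @ W1)             # ReLU hidden layer
    h = dropout(h, test_dropout_rate, rng)  # dropout kept ON at test time
    return h @ W2                           # class logits

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))
x = rng.standard_normal(4)

# Two forward passes on the same input generally differ, because each
# pass samples a fresh dropout mask; the attacker cannot rely on a
# single fixed set of gradients.
p1 = predict(x, W1, W2, 0.3, rng)
p2 = predict(x, W1, W2, 0.3, rng)
```

Choosing the test dropout rate is the crux of the paper's algorithm: too low a rate gives the attacker a nearly deterministic target, while too high a rate degrades clean test accuracy.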
