Stats: Members: 3,669 · Articles: 2,599,751 · Articles rated: 2,609
18 March 2025
Article overview
Rethinking Feature Uncertainty in Stochastic Neural Networks for Adversarial Robustness
Authors: Hao Yang; Min Wang; Zhengfei Yu; Yun Zhou
Date: 1 Jan 2022
Abstract: It is well known that deep neural networks (DNNs) have achieved remarkable success in many fields. However, adding a perturbation of imperceptible magnitude to the model input can cause model performance to drop sharply. To address this issue, a randomness technique called Stochastic Neural Networks (SNNs) has recently been proposed. Specifically, SNNs inject randomness into the model to defend against unseen attacks and improve adversarial robustness. However, existing studies on SNNs mainly focus on injecting fixed or learnable noise into model weights/activations. In this paper, we find that the performance of existing SNNs is largely bottlenecked by their feature representation ability. Surprisingly, simply maximizing the per-dimension variance of the feature distribution yields a considerable boost over all previous methods; we name the resulting model the maximize feature distribution variance stochastic neural network (MFDV-SNN). Extensive experiments with well-known white- and black-box attacks show that MFDV-SNN achieves a significant improvement over existing methods, indicating that it is a simple but effective way to improve model robustness.
Source: arXiv, 2201.00148
Services: Forum | Review | PDF | Favorites
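The abstract describes two ingredients: injecting randomness into intermediate features (the SNN part) and adding a regularizer that maximizes the per-dimension variance of the feature distribution (the MFDV part). A minimal NumPy sketch of both ideas follows; the function names, the fixed Gaussian noise scale, and the exact form of the penalty are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def stochastic_features(x, sigma=0.1, rng=None):
    """SNN-style noise injection: add Gaussian noise to a batch of
    intermediate features during training (identity at test time).
    `sigma` is a hypothetical fixed noise scale."""
    rng = rng or np.random.default_rng(0)
    return x + sigma * rng.standard_normal(x.shape)

def mfdv_penalty(features):
    """Regularizer capturing the MFDV idea: return the *negative* mean
    of the per-dimension feature variance, so that minimizing the total
    loss (task loss + lam * penalty) maximizes feature variance."""
    per_dim_var = features.var(axis=0)  # variance of each dimension over the batch
    return -per_dim_var.mean()

# Hypothetical training-loss combination:
#   total_loss = task_loss + lam * mfdv_penalty(features)
```

Because the penalty is the negated variance, spreading the feature distribution out (larger per-dimension variance) lowers the penalty, which is the "maximize feature distribution variance" objective in loss-minimization form.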
No review found.
Did you like this article?
Note: answers to reviews or questions about the article must be posted in the forum section.
Authors may not review their own articles; they can use the forum section instead.