Article overview
There is Limited Correlation between Coverage and Robustness for Deep Neural Networks

Authors: Yizhen Dong; Peixin Zhang; Jingyi Wang; Shuang Liu; Jun Sun; Jianye Hao; Xinyu Wang; Li Wang; Jin Song Dong; Dai Ting

Date: 14 Nov 2019

Abstract: Deep neural networks (DNNs) are increasingly applied in safety-critical systems, e.g., for face recognition, autonomous car control and malware detection. It has also been shown that DNNs are subject to attacks such as adversarial perturbation and thus must be properly tested. Many coverage criteria for DNNs have since been proposed, inspired by the success of code coverage criteria for software programs. The expectation is that if a DNN is well tested (and retrained) according to such coverage criteria, it is more likely to be robust. In this work, we conduct an empirical study to evaluate the relationship between coverage, robustness and attack/defense metrics for DNNs. Our study is the largest to date and is conducted systematically over 100 DNN models and 25 metrics. One of our findings is that there is limited correlation between coverage and robustness, i.e., improving coverage does not help improve robustness. Our dataset and implementation have been made available to serve as a benchmark for future studies on testing DNNs.

Source: arXiv, 1911.05904
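The coverage criteria discussed in the abstract generalize code coverage to a network's internals. The simplest such criterion, neuron coverage, counts a neuron as covered once its activation exceeds a threshold on at least one test input. A minimal illustrative sketch (the function name, threshold, and toy data are assumptions for this example; the paper's study spans many more criteria):

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons whose activation exceeds `threshold`
    on at least one test input. `activations` is an
    (n_inputs, n_neurons) array collected from one layer."""
    covered = (activations > threshold).any(axis=0)
    return covered.sum() / covered.size

# Toy example: 3 test inputs, 4 neurons.
acts = np.array([
    [0.9, 0.0, 0.0, 0.2],
    [0.0, 0.5, 0.0, 0.0],
    [0.1, 0.0, 0.0, 0.7],
])
print(neuron_coverage(acts))  # neuron 2 never fires -> 0.75
```

The paper's finding is that pushing such a ratio toward 1.0 by adding test inputs does not, by itself, make the model more robust to adversarial perturbation.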