Three of the attacks presented (EAD, CW2, and BLB) are unbounded attacks: rather than finding the “worst-case” (i.e., highest loss) example within some distortion bound, they seek to find the closest input subject to the constraint that it is misclassified. Unbounded attacks should always reach 100% “success” eventually, if only by actually changing an image from one class into an image from the other class; the correct and meaningful metric to report for unbounded attacks is the distortion required.
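To make the point concrete, here is a minimal sketch of the kind of reporting being suggested, assuming NumPy arrays `x_clean` and `x_adv` of paired inputs and a boolean `success` mask marking which adversarial examples are misclassified (all names here are hypothetical, not from either paper): instead of a success rate, summarize the per-example distortion of the successful examples.

```python
import numpy as np

def distortion_stats(x_clean, x_adv, success):
    """Median per-example Lp distortions for an unbounded attack.

    x_clean, x_adv : arrays of shape (n, ...) holding paired inputs
    success        : boolean array of shape (n,), True where the
                     adversarial example is misclassified
    """
    # Flatten each example so norms are taken over all pixels.
    delta = (x_adv - x_clean).reshape(len(x_clean), -1)

    l0 = np.count_nonzero(delta, axis=1)    # number of pixels changed
    l2 = np.linalg.norm(delta, axis=1)      # Euclidean distortion
    linf = np.abs(delta).max(axis=1)        # largest per-pixel change

    # Summarize distortion over the successful examples only; the
    # median is robust to the few inputs that need very large changes.
    return {name: np.median(vals[success])
            for name, vals in [("L0", l0), ("L2", l2), ("Linf", linf)]}
```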
We definitely agree with you that reporting only the success rate of attacks is not very informative. In fact, that was our initial motivation for evaluating other performance metrics of attacks, such as imperceptibility, robustness, and computation cost, in our paper. We also report the L0, L2, and L∞ distortion of EAD, CW2, and BLB in Table III.
It's definitely good that you report it somewhere, but it is still not meaningful to talk about the success rate of unbounded attacks. Again, you may want to read https://arxiv.org/abs/1902.06705.