It is a basic observation that when given strictly more power, the adversary should never do worse. However, in Table VII the paper reports that MNIST adversarial examples with their l_infinity norm constrained to be less than 0.2 are harder to detect than those constrained to be less than 0.5. The table shows this effect because FGSM, a single-step method, is used to generate the adversarial examples. The table should be re-generated with an optimization-based attack (one that actually targets the defense, not a transfer attack) to give meaningful numbers.
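(For illustration only, not part of the original report: a minimal sketch of what an optimization-based l_infinity attack of the PGD style might look like, assuming a PyTorch classifier and inputs in [0, 1]; `alpha` and `steps` are illustrative defaults. The projection step makes the monotonicity point concrete: every perturbation allowed at eps = 0.2 is also allowed at eps = 0.5, so a properly optimized attack can only get stronger as the budget grows.)

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, alpha=0.01, steps=40):
    """Iterative l_infinity attack: repeatedly ascend the classification
    loss, then project back into the eps-ball around the clean input x."""
    # random start inside the eps-ball, clipped to the valid pixel range
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()

    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)

        # signed gradient step, then project onto the eps-ball around x:
        # the feasible set for eps=0.2 is a subset of the one for eps=0.5,
        # so a larger budget never forces a weaker attack
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv.detach()
```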
Again, our observation is that there is no clear relationship between the perturbation magnitude of AEs and the detection AUC in non-adaptive scenarios, where the detector does not need to know which category an attack belongs to (gradient-based or optimization-based). The experiments with FGSM at least demonstrate that the correlation between the two is not always strictly positive.
I agree that's the observation you make. But the way you evaluate it is flawed. I'm not going to repeat my argument again, but instead refer you to Section 5.2 of https://arxiv.org/abs/1902.06705.
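(Editorial sketch, not text from either participant: the adaptive evaluation argued for above replaces the plain classification loss with a joint objective that also penalizes the detector's score, so the generated examples are optimized to evade detection as well. `detector` is a hypothetical module assumed to return a higher score for inputs it flags as adversarial, and the weight `lam` is illustrative; this loss could be dropped into the PGD loop sketched earlier in place of the plain cross-entropy.)

```python
import torch.nn.functional as F

def adaptive_loss(model, detector, x_adv, y, lam=1.0):
    """Joint objective: be misclassified AND look benign to the detector."""
    cls_loss = F.cross_entropy(model(x_adv), y)   # push toward misclassification
    det_score = detector(x_adv).mean()            # push toward the "benign" region
    return cls_loss - lam * det_score
```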