Adversarial attacks typically rely on a specific model's gradients — you're essentially searching for the input directions (pixels) the model is most sensitive to.
Adversarial noise that fools model A won't necessarily fool model B. That said, attacks often do transfer in practice, because most people fine-tune from the same well-trained backbones (ImageNet-pretrained nets, Inception, etc.), so the models share similar sensitivities.
Finally, not all SOTA architectures are equally susceptible to adversarial attacks — capsule networks, for example, have been reported to be more robust.
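To make the gradient idea concrete, here's a minimal sketch of the fast gradient sign method (FGSM) on a toy logistic-regression "model" — a hypothetical example using NumPy, not any particular library's attack API. The perturbation follows the sign of the loss gradient with respect to the *input*, i.e. the directions the model is most sensitive to:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, b, y):
    # binary cross-entropy for a single example
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, w, b, y, eps):
    # for logistic regression, dL/dx = (p - y) * w;
    # step along the sign of that gradient to increase the loss
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # this model's weights
b = 0.1
x = rng.normal(size=8)   # clean input
y = 1.0                  # true label

x_adv = fgsm(x, w, b, y, eps=0.5)
# the perturbed input raises the loss on *this* model's gradients;
# a model with different weights w may be unaffected by the same noise
print(loss(x, w, b, y), loss(x_adv, w, b, y))
```

Because the perturbation is computed from this model's weights `w`, there's no guarantee it moves the loss of a differently trained model — which is exactly the transferability caveat above.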