Adversarial attacks rely on a specific model's gradient (since you're essentially searching for the pixels the model's loss is most sensitive to).
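
To make the gradient dependence concrete, here is a minimal FGSM-style sketch in PyTorch. Everything in it is an illustrative assumption (the ResNet-18 choice, the epsilon, the helper name), not something from the thread:

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Illustrative model choice; any differentiable classifier works.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def fgsm(model, x, label, epsilon=0.03):
        # FGSM takes one gradient step on the *input*, not the weights.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # The gradient's sign marks, per pixel, the direction that
        # increases the loss fastest -- the "most sensitive" pixels.
        return (x + epsilon * x.grad.sign()).detach()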

Adversarial noise that affects model A won't necessarily work on model B. That said, most people transfer-learn from the same handful of well-trained nets (ImageNet-pretrained backbones, Inception, etc.), which makes perturbations more likely to transfer across models.
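
As a rough illustration of that transfer point, a hedged sketch (the two architectures, the random stand-in input, and the epsilon are all assumptions made for the example):

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    def fgsm(model, x, label, epsilon=0.03):
        # Same one-step sketch as above, repeated so this block runs alone.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), label).backward()
        return (x + epsilon * x.grad.sign()).detach()

    # Two independently trained ImageNet backbones -- exactly the
    # shared-pretraining situation the comment describes.
    model_a = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    model_b = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

    x = torch.rand(1, 3, 224, 224)       # stand-in for a real image
    label = model_a(x).argmax(dim=1)     # model A's clean prediction

    # Craft the perturbation against model A only, then test it on B.
    x_adv = fgsm(model_a, x, label)
    print("fools A:", (model_a(x_adv).argmax(dim=1) != label).item())
    print("fools B:", (model_b(x_adv).argmax(dim=1)
                       != model_b(x).argmax(dim=1)).item())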

Finally, not all SOTA methods are susceptible to adversarial attacks, e.g. capsule networks.



>Finally, not all SOTA methods are susceptible to adversarial attacks, e.g. capsule networks.

They appear to be susceptible: https://arxiv.org/pdf/1906.03612.pdf


That’s neat; hadn’t seen that paper. Thanks for sharing.



