Citation: Yunyi ZHOU, Haichang GAO, Jianping HE, et al., “Efficient Untargeted White-box Adversarial Attacks Based on Simple Initialization,” Chinese Journal of Electronics, vol. 33, no. 4, pp. 1–10, 2024. doi: 10.23919/cje.2022.00.449

[1]
S. D. Zhang, H. C. Gao, and Q. X. Rao, “Defense against adversarial attacks by reconstructing images,” IEEE Transactions on Image Processing, vol. 30, pp. 6117–6129, 2021. doi: 10.1109/TIP.2021.3092582
[2]
A. Madry, A. Makelov, L. Schmidt, et al., “Towards deep learning models resistant to adversarial attacks,” in Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada, https://arxiv.org/pdf/1706.06083.pdf, 2018.
[3]
Y. Tashiro, Y. Song, and S. Ermon, “Diversity can be transferred: Output diversification for white- and black-box attacks,” in Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, article no. 381, 2020.
[4]
G. Sriramanan, S. Addepalli, A. Baburaj, et al., “Guided adversarial attack for evaluating and enhancing adversarial defenses,” in Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, article no. 1704, 2020.
[5]
X. J. Ma, L. X. Jiang, H. X. Huang, et al., “Imbalanced gradients: A subtle cause of overestimated adversarial robustness,” arXiv preprint, arXiv: 2006.13726, 2020.
[6]
N. Antoniou, E. Georgiou, and A. Potamianos, “Alternating objectives generates stronger PGD-based adversarial attacks,” arXiv preprint, arXiv: 2212.07992, 2022.
[7]
I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, https://arxiv.org/pdf/1412.6572.pdf, 2015.
[8]
A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” arXiv preprint, arXiv: 1607.02533, 2017.
[9]
N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in Proceedings of 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA, pp. 39–57, 2017.
[10]
F. Croce and M. Hein, “Minimally distorted adversarial examples with a fast adaptive boundary attack,” https://openreview.net/forum?id=HJlzxgBtwH, 2020.
[11]
F. Croce and M. Hein, “Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks,” in Proceedings of the 37th International Conference on Machine Learning, Virtual Event, article no. 206, 2020.
[12]
M. Andriushchenko, F. Croce, N. Flammarion, et al., “Square attack: A query-efficient black-box adversarial attack via random search,” in Proceedings of the 16th European Conference on Computer Vision, Glasgow, UK, pp. 484–501, 2020.
[13]
Y. Liu, Y. Cheng, L. Gao, et al., “Practical evaluation of adversarial robustness via adaptive auto attack,” in Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, pp. 15105–15114, 2022.
[14]
H. C. Zhang and J. Y. Wang, “Defense against adversarial attacks using feature scattering-based adversarial training,” in Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, Canada, pp. 1829–1839, 2019.
[15]
H. C. Zhang and W. Xu, “Adversarial interpolation training: A simple approach for improving model robustness,” https://openreview.net/forum?id=Syejj0NYvr, 2020.
[16]
Y. Carmon, A. Raghunathan, L. Schmidt, et al., “Unlabeled data improves adversarial robustness,” in Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, Canada, pp. 11190–11201, 2019.
[17]
H. Y. Zhang, Y. D. Yu, J. T. Jiao, et al., “Theoretically principled trade-off between robustness and accuracy,” in Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, pp. 7472–7482, 2019.
[18]
D. Hendrycks, K. Lee, and M. Mazeika, “Using pre-training can improve model robustness and uncertainty,” in Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, pp. 2712–2721, 2019.
[19]
D. X. Wu, S. T. Xia, and Y. S. Wang, “Adversarial weight perturbation helps robust generalization,” in Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, article no. 249, 2020.
[20]
L. Rice, E. Wong, and J. Z. Kolter, “Overfitting in adversarially robust deep learning,” in Proceedings of the 37th International Conference on Machine Learning, Virtual Event, article no. 749, 2020.
[21]
S. A. Rebuffi, S. Gowal, D. A. Calian, et al., “Fixing data augmentation to improve adversarial robustness,” arXiv preprint, arXiv: 2103.01946, 2021.
[22]
S. Gowal, S. A. Rebuffi, O. Wiles, et al., “Improving robustness using generated data,” in Proceedings of the 35th International Conference on Neural Information Processing Systems, Virtual Event, pp. 4218–4233, 2021.
[23]
S. Addepalli, S. Jain, G. Sriramanan, et al., “Towards achieving adversarial robustness beyond perceptual limits,” in Proceedings of the ICLR 2022, Vienna, Austria, https://ieeexplore.ieee.org/document/9157734/, 2022.
[24]
H. Salman, A. Ilyas, L. Engstrom, et al., “Do adversarially robust ImageNet models transfer better?” in Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, article no. 208, 2020.
[25]
E. Wong, L. Rice, and J. Z. Kolter, “Fast is better than free: Revisiting adversarial training,” in Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, https://arxiv.org/abs/2001.03994, 2020.
[26]
C. X. Yin, J. Tang, Z. Y. Xu, et al., “Adversarial meta-learning,” https://openreview.net/forum?id=Z_3x5eFk1l-, 2021.
[27]
M. Goldblum, L. Fowl, and T. Goldstein, “Adversarially robust few-shot learning: A meta-learning approach,” in Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, article no. 1501, 2020.
[28]
R. Wang, K. D. Xu, S. J. Liu, et al., “On fast adversarial robustness adaptation in model-agnostic meta-learning,” in Proceedings of the 9th International Conference on Learning Representations, Vienna, Austria, https://arxiv.org/abs/2102.10454, 2021.
[29]
F. Croce, M. Andriushchenko, V. Sehwag, et al., “RobustBench: A standardized adversarial robustness benchmark,” in Proceedings of the 1st Neural Information Processing Systems Track on Datasets and Benchmarks, Virtual Event, https://arxiv.org/abs/2010.09670v1, 2021.
[30]
L. Engstrom, A. Ilyas, H. Salman, et al., “Robustness (python library),” Available at: https://github.com/MadryLab/robustness, 2019.