Citation: Kunlan XIANG, Haomiao YANG, Mengyu GE, et al., “Data Reconstruction Attacks Against Highly Compressed Gradients,” Chinese Journal of Electronics, vol. 33, no. 5, pp. 1–13, 2024. doi: 10.23919/cje.2022.00.457
[1] Q. Yang, Y. Liu, T. J. Chen, et al., “Federated machine learning: Concept and applications,” ACM Transactions on Intelligent Systems and Technology, vol. 10, no. 2, article no. 12, 2019. doi: 10.1145/3298981
[2] Y. J. Lin, S. Han, H. Z. Mao, et al., “Deep gradient compression: Reducing the communication bandwidth for distributed training,” in Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada, 2018.
[3] L. Abrahamyan, Y. M. Chen, G. Bekoulis, et al., “Learned gradient compression for distributed deep learning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 12, pp. 7330–7344, 2022. doi: 10.1109/TNNLS.2021.3084806
[4] C. Y. Chen, J. Choi, D. Brand, et al., “AdaComp: Adaptive residual gradient compression for data-parallel distributed training,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2018.
[5] L. Melis, C. Z. Song, E. De Cristofaro, et al., “Exploiting unintended feature leakage in collaborative learning,” in Proceedings of 2019 IEEE Symposium on Security and Privacy, San Francisco, CA, USA, pp. 691–706, 2019.
[6] B. Zhao, K. R. Mopuri, and H. Bilen, “iDLG: Improved deep leakage from gradients,” arXiv preprint, arXiv: 2001.02610, 2020.
[7] Z. B. Wang, M. K. Song, Z. F. Zhang, et al., “Beyond inferring class representatives: User-level privacy leakage from federated learning,” in Proceedings of IEEE Conference on Computer Communications, Paris, France, pp. 2512–2520, 2019.
[8] L. G. Zhu, Z. J. Liu, and S. Han, “Deep leakage from gradients,” in Proceedings of the 33rd Conference on Neural Information Processing Systems, Vancouver, Canada, pp. 14747–14756, 2019.
[9] J. Geiping, H. Bauermeister, H. Dröge, et al., “Inverting gradients - how easy is it to break privacy in federated learning?” in Proceedings of the 34th Conference on Neural Information Processing Systems, Vancouver, Canada, pp. 16937–16947, 2020.
[10] H. X. Yin, A. Mallya, A. Vahdat, et al., “See through gradients: Image batch recovery via GradInversion,” in Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, pp. 16332–16341, 2021.
[11] J. Jeon, J. Kim, K. Lee, et al., “Gradient inversion with generative image prior,” in Proceedings of the 35th Conference on Neural Information Processing Systems, pp. 29898–29908, 2021.
[12] A. Krizhevsky, “Learning multiple layers of features from tiny images,” Technical Report, University of Toronto, Toronto, Canada, 2009.
[13] A. Shafee and T. A. Awaad, “Privacy attacks against deep learning models and their countermeasures,” Journal of Systems Architecture, vol. 114, article no. 101940, 2021. doi: 10.1016/j.sysarc.2020.101940
[14] D. Enthoven and Z. Al-Ars, “An overview of federated deep learning privacy attacks and defensive strategies,” arXiv preprint, arXiv: 2004.04676, 2020.
[15] K. Ganju, Q. Wang, W. Yang, et al., “Property inference attacks on fully connected neural networks using permutation invariant representations,” in Proceedings of 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, Canada, pp. 619–633, 2018.
[16] B. Hitaj, G. Ateniese, and F. Perez-Cruz, “Deep models under the GAN: Information leakage from collaborative deep learning,” in Proceedings of 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, pp. 603–618, 2017.
[17] Z. H. Li, J. X. Zhang, L. Y. Liu, et al., “Auditing privacy defenses in federated learning via generative gradient leakage,” in Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, pp. 10122–10132, 2022.
[18] A. Hatamizadeh, H. X. Yin, H. Roth, et al., “GradViT: Gradient inversion of vision transformers,” in Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, pp. 10011–10020, 2022.
[19] H. C. Ren, J. J. Deng, and X. H. Xie, “GRNN: Generative regression neural network—a data leakage attack for federated learning,” ACM Transactions on Intelligent Systems and Technology, vol. 13, no. 4, article no. 65, 2022. doi: 10.1145/3510032
[20] J. Y. Zhu and M. Blaschko, “R-GAP: Recursive gradient attack on privacy,” in Proceedings of the 9th International Conference on Learning Representations, 2021.
[21] W. Q. Wei, L. Liu, M. Loper, et al., “A framework for evaluating gradient leakage attacks in federated learning,” arXiv preprint, arXiv: 2004.10397, 2020.
[22] M. Nasr, R. Shokri, and A. Houmansadr, “Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning,” in Proceedings of 2019 IEEE Symposium on Security and Privacy, San Francisco, CA, USA, pp. 739–753, 2019.
[23] R. Shokri, M. Stronati, C. Z. Song, et al., “Membership inference attacks against machine learning models,” in Proceedings of 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA, pp. 3–18, 2017.
[24] J. Qian, H. Nassar, and L. K. Hansen, “Minimal model structure analysis for input reconstruction in federated learning,” arXiv preprint, arXiv: 2010.15718, 2021.
[25] C. Z. Song and V. Shmatikov, “Overlearning reveals sensitive attributes,” in Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.
[26] T. Orekondy, S. J. Oh, Y. Zhang, et al., “Gradient-leaks: Understanding and controlling deanonymization in federated learning,” arXiv preprint, arXiv: 1805.05838, 2020.
[27] A. Wainakh, T. Müßig, T. Grube, et al., “Label leakage from gradients in distributed machine learning,” in Proceedings of the 2021 IEEE 18th Annual Consumer Communications & Networking Conference, Las Vegas, NV, USA, pp. 1–4, 2021.
[28] H. M. Yang, M. Y. Ge, K. L. Xiang, et al., “Using highly compressed gradients in federated learning for data reconstruction attacks,” IEEE Transactions on Information Forensics and Security, vol. 18, pp. 818–830, 2023. doi: 10.1109/TIFS.2022.3227761
[29] G. B. Huang, M. Mattar, T. Berg, et al., “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” in Proceedings of Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, Marseille, France, 2008.
[30] Z. W. Liu, P. Luo, X. G. Wang, et al., “Deep learning face attributes in the wild,” in Proceedings of 2015 IEEE International Conference on Computer Vision, Santiago, Chile, pp. 3730–3738, 2015.
[31] Y. LeCun, “The MNIST database of handwritten digits,” Available at: http://yann.lecun.com/exdb/mnist/, 1998.
[32] K. M. He, X. Y. Zhang, S. Q. Ren, et al., “Deep residual learning for image recognition,” in Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp. 770–778, 2016.
[33] Y. LeCun, L. Bottou, Y. Bengio, et al., “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998. doi: 10.1109/5.726791
[34] D. C. Liu and J. Nocedal, “On the limited memory BFGS method for large scale optimization,” Mathematical Programming, vol. 45, no. 1–3, pp. 503–528, 1989. doi: 10.1007/BF01589116
[35] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 2015.
[36] H. Y. Zhang, M. Cisse, Y. N. Dauphin, et al., “mixup: Beyond empirical risk minimization,” in Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada, 2018.
[37] Y. B. Huang, Z. Song, K. Li, et al., “InstaHide: Instance-hiding schemes for private distributed learning,” in Proceedings of the 37th International Conference on Machine Learning, pp. 4507–4518, 2020.
[38] T. Y. Pang, K. Xu, and J. Zhu, “Mixup inference: Better exploiting mixup to defend adversarial attacks,” in Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.
[39] A. Lamb, V. Verma, J. Kannala, et al., “Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy,” in Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, London, UK, pp. 95–103, 2019.
[40] Y. S. B. Huang, S. Gupta, Z. Song, et al., “Evaluating gradient inversion attacks and defenses in federated learning,” in Proceedings of the 35th Conference on Neural Information Processing Systems, pp. 7232–7241, 2021.
[41] Y. F. Han and X. L. Zhang, “Robust federated learning via collaborative machine teaching,” in Proceedings of the 34th AAAI Conference on Artificial Intelligence, New York, NY, USA, pp. 4075–4082, 2020.