Kunlan XIANG, Haomiao YANG, Mengyu GE, et al., “Data Reconstruction Attacks Against Highly Compressed Gradients,” Chinese Journal of Electronics, vol. 33, no. 5, pp. 1–13, 2024. doi: 10.23919/cje.2022.00.457

Data Reconstruction Attacks Against Highly Compressed Gradients

doi: 10.23919/cje.2022.00.457
More Information
  • Author Bio:

    Kunlan XIANG received her B.S. degree in Software Engineering from Xi’an University of Technology (XUT) in 2022. She is currently pursuing an M.S. degree at the School of Computer Science, University of Electronic Science and Technology of China (UESTC). Her research interests include cloud computing, IoT security, AI security, and federated learning security. (Email: kunlan_xiang@163.com)

    Haomiao YANG received the M.S. and Ph.D. degrees in Computer Applied Technology from the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 2004 and 2008, respectively. He worked as a Postdoctoral Fellow with the Research Center of Information Crossover Security, Kyungil University, Gyeongsan, South Korea, for one year until June 2013. He is currently a Professor with the School of Computer Science and Engineering and the Center for Cyber Security, UESTC. His research interests include cryptography, cloud security, and cybersecurity for aviation communication. (Email: haomyang@uestc.edu.cn)

    Mengyu GE received his B.S. degree in Communication Engineering from the University of Electronic Science and Technology of China (UESTC) in 2020, his M.S. degree in Electrical Engineering from Johns Hopkins University (JHU) in 2022, and his M.S. degree in Cyberspace Security from UESTC in 2023. His research interests include cloud computing, IoT security, and AI security. (Email: lomo123456@foxmail.com)

    Xiaofen WANG received the M.S. and Ph.D. degrees in Cryptography from Xidian University, Xi’an, China, in 2006 and 2009, respectively. She is currently an Associate Professor with the School of Computer Science and Engineering and the Center for Cyber Security, University of Electronic Science and Technology of China, Chengdu, China. Her research interests include public key cryptography and its applications in wireless networks, smart grid, and cloud computing. (Email: xfwang@uestc.edu.cn)

    Hongning DAI received the Ph.D. degree in Computer Science and Engineering from the Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, in 2008. He is currently an Associate Professor with the Department of Computer Science, Hong Kong Baptist University, Hong Kong. His current research interests include the Internet of Things, big data, and blockchain technology. He has served as an associate editor for IEEE Communications Surveys & Tutorials, IEEE Transactions on Intelligent Transportation Systems, IEEE Transactions on Industrial Informatics, IEEE Transactions on Industrial Cyber-Physical Systems, Ad Hoc Networks, and Connection Science. He is also a senior member of the Association for Computing Machinery (ACM). (Email: hndai@ieee.org)

  • Corresponding author: Email: haomyang@uestc.edu.cn
  • Received Date: 2022-12-31
  • Accepted Date: 2023-01-12
  • Available Online: 2024-03-08
  • Federated learning (FL) exchanges gradients instead of local training data and is therefore considered privacy-preserving. However, recent studies have shown that these gradients can be used to perform a data reconstruction attack (DRA). None of these attacks, however, is applicable to highly compressed gradients, even though most practical FL systems share highly compressed gradients to reduce communication bandwidth. In this work, we find that during Top-K gradient compression, the rows of the fully-connected layer gradient whose indices match the ground-truth labels are larger in absolute value than the rest of the gradients, so they are not compressed (we call this phenomenon Label-gradient-remain). Building upon the Label-gradient-remain phenomenon, we introduce a DRA method termed highly compressed gradient leakage attack (HLA), designed specifically for highly compressed gradients. In particular, we design two new initialization methods for this attack, Init-Attribute and Init-Feature. Compared with the commonly used noise initialization, Init-Attribute can compensate for the information loss caused by high gradient compression, thus improving the effectiveness of the DRA. Specifically, Init-Attribute first infers attributes from the gradients and then selects the data most similar to those attributes from an auxiliary dataset as the initialization data. Because Init-Attribute requires extensive manual annotation of the auxiliary dataset, we further develop Init-Feature, which generates initialization data directly by decoding gradients, thereby eliminating the need for manual annotation. Experiments on multiple benchmark datasets show that our proposed method remains effective even when 99.9% of the gradients are compressed to zero (i.e., a compression ratio of 0.1%).
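  • The Label-gradient-remain phenomenon the abstract describes can be seen in a toy model. The sketch below is not the paper's code; it assumes a single softmax/cross-entropy fully-connected layer and a simple magnitude-threshold Top-K compressor, and checks which gradient rows survive compression:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy final fully-connected layer of a 10-class classifier: logits z = W @ x.
    n_classes, n_features = 10, 512
    W = rng.normal(scale=0.01, size=(n_classes, n_features))
    x = rng.normal(size=n_features)   # input to the FC layer
    label = 3                         # ground-truth class index

    # Cross-entropy gradient w.r.t. W: dL/dW = (softmax(z) - one_hot(label)) x^T.
    # Only the label's row of the error term contains the large (p - 1) entry.
    z = W @ x
    p = np.exp(z - z.max())
    p /= p.sum()
    err = p.copy()
    err[label] -= 1.0
    grad = np.outer(err, x)           # shape (n_classes, n_features)

    # Top-K compression: keep the K largest-magnitude entries, zero the rest.
    def top_k_compress(g, ratio):
        k = max(1, int(g.size * ratio))
        thresh = np.partition(np.abs(g).ravel(), -k)[-k]
        return np.where(np.abs(g) >= thresh, g, 0.0)

    compressed = top_k_compress(grad, ratio=0.01)   # keep 1% of entries

    # Label-gradient-remain: the surviving entries concentrate in the label's row.
    surviving_rows = np.unique(np.nonzero(compressed)[0])
    print(surviving_rows)
    ```

    In this toy setting the label's row of the gradient is roughly an order of magnitude larger than the others, so even an aggressive Top-K threshold leaves it (and essentially only it) uncompressed, which is the signal HLA exploits.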
  • [1]
    Q. Yang, Y. Liu, T. J. Chen, et al., “Federated machine learning: Concept and applications,” ACM Transactions on Intelligent Systems and Technology, vol. 10, no. 2, article no. 12, 2019. doi: 10.1145/3298981
    [2]
    Y. J. Lin, S. Han, H. Z. Mao, et al., “Deep gradient compression: Reducing the communication bandwidth for distributed training,” in Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada, 2018.
    [3]
    L. Abrahamyan, Y. M. Chen, G. Bekoulis, et al., “Learned gradient compression for distributed deep learning,” IEEE Transactions on Neural Networks and learning Systems, vol. 33, no. 12, pp. 7330–7344, 2022. doi: 10.1109/TNNLS.2021.3084806
    [4]
    C. Y. Chen, J. Choi, D. Brand, et al., “AdaComp: Adaptive residual gradient compression for data-parallel distributed training,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2018.
    [5]
    L. Melis, C. Z. Song, E. De Cristofaro, et al., “Exploiting unintended feature leakage in collaborative learning,” in Proceedings of 2019 IEEE Symposium on Security and Privacy, San Francisco, CA, USA, pp. 691–706, 2019.
    [6]
    B. Zhao, K. R. Mopuri, and H. Bilen, “IDLG: Improved deep leakage from gradients,” arXiv preprint, arXiv: 2001.02610, 2020.
    [7]
    Z. B. Wang, M. K. Song, Z. F. Zhang, et al., “Beyond inferring class representatives: User-level privacy leakage from federated learning,” in Proceedings of IEEE Conference on Computer Communications, Paris, France, pp. 2512–2520, 2019.
    [8]
    L. G. Zhu, Z. J. Liu, and S. Han, “Deep leakage from gradients,” in Proceedings of the 33rd Conference on Neural Information Processing Systems, Vancouver, Canada, pp. 14747–14756, 2019.
    [9]
    J. Geiping, H. Bauermeister, H. Dröge, et al., “Inverting gradients-how easy is it to break privacy in federated learning?” in Proceedings of the 34th Conference on Neural Information Processing Systems, Vancouver, Canada, pp. 16937–16947, 2020.
    [10]
    H. X. Yin, A. Mallya, A. Vahdat, et al., “See through gradients: Image batch recovery via gradinversion,” in Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, pp. 16332–16341, 2021.
    [11]
    J. Jeon, J. Kim, K. Lee, et al., “Gradient inversion with generative image prior,” in Proceedings of the 35th Conference on Neural Information Processing Systems, pp. 29898–29908, 2021.
    [12]
    A. Krizhevsky, “Learning multiple layers of features from tiny images,” University of Toronto, 2009.
    [13]
    A. Shafee and T. A. Awaad, “Privacy attacks against deep learning models and their countermeasures,” Journal of Systems Architecture, vol. 114 article no. 101940, 2021. doi: 10.1016/j.sysarc.2020.101940
    [14]
    D. Enthoven and Z. Al-Ars, “An overview of federated deep learning privacy attacks and defensive strategies,” arXiv preprint, arXiv: 2004.04676, 2020.
    [15]
    K. Ganju, Q. Wang, W. Yang, et al., “Property inference attacks on fully connected neural networks using permutation invariant representations,” in Proceedings of 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, Canada, pp. 619–633, 2018.
    [16]
    B. Hitaj, G. Ateniese, and F. Perez-Cruz, “Deep models under the GAN: Information leakage from collaborative deep learning,” in Proceedings of 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, pp. 603–618, 2017.
    [17]
    Z. H. Li, J. X. Zhang, L. Y. Liu, et al., “Auditing privacy defenses in federated learning via generative gradient leakage,” in Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, pp. 10122-10132, 2022.
    [18]
    A. Hatamizadeh, H. X. Yin, H. Roth, et al., “GradViT: Gradient inversion of vision transformers,” in Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, pp. 10011-10020, 2022.
    [19]
    H. C. Ren, J. J. Deng, and X. H. Xie, “GRNN: Generative regression neural network—a data leakage attack for federated learning,” ACM Transactions on Intelligent Systems and Technology, vol. 13, no. 4, article no. 65, 2022. doi: 10.1145/3510032
    [20]
    J. Y. Zhu and M. Blaschko, “R-GAP: Recursive gradient attack on privacy,” in Proceedings of the 9th International Conference on Learning Representations, 2021.
    [21]
    W. Q. Wei, L. Liu, M. Loper, et al., “A framework for evaluating gradient leakage attacks in federated learning,” arXiv preprint, arXiv: 2004.10397, 2020.
    [22]
    M. Nasr, R. Shokri, and A. Houmansadr, “Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning,” in Proceedings of 2019 IEEE Symposium on Security and Privacy, San Francisco, CA, USA, pp. 739–753, 2019.
    [23]
    R. Shokri, M. Stronati, C. Z. Song, et al., “Membership inference attacks against machine learning models,” in Proceedings of 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA, pp. 3–18, 2017.
    [24]
    J. Qian, H. Nassar, and L. K. Hansen, “Minimal model structure analysis for input reconstruction in federated learning” arXiv preprint, arXiv: 2010.15718, 2021.
    [25]
    C. Z. Song and V. Shmatikov, “Overlearning reveals sensitive attributes,” in Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.
    [26]
    T. Orekondy, S. J. Oh, Y. Zhang, et al., “Gradient-leaks: Understanding and controlling deanonymization in federated learning,” arXiv preprint, arXiv: 1805.05838, 2020.
    [27]
    A. Wainakh, T. Müßig, T. Grube, et al., “Label leakage from gradients in distributed machine learning,” in Proceedings of the 2021 IEEE 18th Annual Consumer Communications & Networking Conference, Las Vegas, NV, USA, pp. 1–4, 2021.
    [28]
    H. M. Yang, M. Y. Ge, K. L. Xiang, et al., “Using highly compressed gradients in federated learning for data reconstruction attacks,” IEEE Transactions on Information Forensics and Security, vol. 18 pp. 818–830, 2023. doi: 10.1109/TIFS.2022.3227761
    [29]
    G. B. Huang, M. Mattar, T. Berg, et al., “Labeled faces in the wild: A database forstudying face recognition in unconstrained environments,” in Proceedings of Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, Marseille, France, 2008.
    [30]
    Z. W. Liu, P. Luo, X. G. Wang, et al., “Deep learning face attributes in the wild,” Proceedings of 2015 IEEE International Conference on Computer Vision, Santiago, Chile, pp. 3730–3738, 2015.
    [31]
    Y. LeCun, “The MNIST database of handwritten digits,” Available at: http://yann.lecun.com/exdb/mnist/, 1998.
    [32]
    K. M. He, X. Y. Zhang, S. Q. Ren, et al., “Deep residual learning for image recognition,” Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp. 770–778, 2016.
    [33]
    Y. Lecun, L. Bottou, Y. Bengio, et al., “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998. doi: 10.1109/5.726791
    [34]
    D. C. Liu and J. Nocedal, “On the limited memory BFGS method for large scale optimization,” Mathematical Programming, vol. 45, no. 1-3, pp. 503–528, 1989. doi: 10.1007/BF01589116
    [35]
    D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 2015.
    [36]
    H. Y. Zhang, M. Cisse, Y. N. Dauphin, et al., “mixup: Beyond empirical risk minimization,” Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada, 2018.
    [37]
    Y. B. Huang, Z. Song, K. Li, et al., “Instahide: Instance-hiding schemes for private distributed learning,” in Proceedings of the 37th International Conference on Machine Learning, pp. 4507–4518, 2020.
    [38]
    T. Y. Pang, K. Xu, and J. Zhu, “Mixup inference: Better exploiting mixup to defend adversarial attacks,” in Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.
    [39]
    A. Lamb, V. Verma, J. Kannala, et al., “Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy,” in Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, London, UK, pp. 95–103, 2019.
    [40]
    Y. S. B. Huang, S. Gupta, Z. Song, et al., “Evaluating gradient inversion attacks and defenses in federated learning,” in Proceedings of the 35th Conference on Neural Information Processing Systems, pp. 7232–7241, 2021.
    [41]
    Y. F. Han and X. L. Zhang, “Robust federated learning via collaborative machine teaching,” in Proceedings of the 34th AAAI Conference on Artificial Intelligence, New York, NY, USA, pp. 4075–4082, 2020.

    Figures(10)  / Tables(8)
