Volume 33, Issue 1
Jan. 2024
Wenbin YANG, Xueluan GONG, Yanjiao CHEN, et al., “SwiftTheft: A Time-Efficient Model Extraction Attack Framework Against Cloud-Based Deep Neural Networks,” Chinese Journal of Electronics, vol. 33, no. 1, pp. 90–100, 2024, doi: 10.23919/cje.2022.00.377.

SwiftTheft: A Time-Efficient Model Extraction Attack Framework Against Cloud-Based Deep Neural Networks

doi: 10.23919/cje.2022.00.377
More Information
  • Author Bios:

    Wenbin YANG was born in 1997. He received the B.S. degree from the School of Cyber Science and Engineering at Wuhan University in 2020. He is currently pursuing the M.S. degree at the School of Cyber Science and Engineering, Wuhan University, China. His research interests include machine learning and deep learning security. (Email: yangwenbin@whu.edu.cn)

    Xueluan GONG was born in 1996. She received the B.S. degree in computer science and electronic engineering from Hunan University in 2018. She is currently pursuing the Ph.D. degree at the School of Computer Science, Wuhan University, China. Her research interests include network security and AI security. (Email: xueluangong@whu.edu.cn)

    Yanjiao CHEN received the B.E. degree in electronic engineering from Tsinghua University in 2010 and the Ph.D. degree in computer science and engineering from the Hong Kong University of Science and Technology in 2015. She is currently a Bairen Researcher at Zhejiang University, China. Her research interests include computer networks, wireless system security, and network economy. She is a Member of the IEEE.

    Qian WANG was born in 1980. He received the Ph.D. degree from the Illinois Institute of Technology, USA. He is a Professor at the School of Cyber Science and Engineering, Wuhan University, China. His research interests include AI security, data storage, and search and computation outsourcing security. Prof. Wang received the National Science Fund for Excellent Young Scholars of China in 2018 and is a recipient of the 2016 IEEE Asia-Pacific Outstanding Young Researcher Award. He serves as an Associate Editor for IEEE Transactions on Dependable and Secure Computing (TDSC) and IEEE Transactions on Information Forensics and Security (TIFS). (Email: qianwang@whu.edu.cn)

    Jianshuo DONG is currently an undergraduate at the School of Cyber Science and Engineering, Wuhan University, China. His research interests include machine learning and deep learning security.

  • Corresponding author: Email: qianwang@whu.edu.cn
  • Received Date: 2022-11-05
  • Accepted Date: 2023-03-13
  • Available Online: 2023-07-14
  • Publish Date: 2024-01-05
  • With the rise of artificial intelligence and cloud computing, machine-learning-as-a-service platforms, such as those offered by Google, Amazon, and IBM, have emerged to provide sophisticated machine learning capabilities to cloud applications. The proprietary models behind these services are attractive targets for model extraction attacks because of their commercial value. In this paper, we propose a time-efficient model extraction attack framework called SwiftTheft that aims to steal the functionality of cloud-based deep neural network models. SwiftTheft differs from existing works in its novel distribution estimation algorithm and reference model settings, which identify the most informative query samples without querying the victim model. The selected query samples can be applied to various cloud models after a one-time selection. We evaluate our proposed method through extensive experiments on three victim models and six datasets, with up to 16 models for each dataset. Compared to existing attacks, SwiftTheft increases agreement (i.e., similarity to the victim model) by 8% while consuming 98% less sample-selection time.
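The extraction workflow described above can be illustrated with a minimal sketch. This is not the authors' SwiftTheft algorithm: the paper's distribution estimation and reference models are replaced here by a generic diversity-based query selection (farthest-point sampling), and `victim_api`, `select_queries`, and the toy linear victim are all illustrative stand-ins. The sketch shows the general pattern SwiftTheft improves upon: select query samples offline, query the black-box victim once for labels, train a substitute on the stolen labels, and score agreement on held-out inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a cloud-hosted victim: a fixed linear classifier.
# In a real attack this would be a remote API returning only predicted labels.
W_victim = rng.normal(size=(10, 3))

def victim_api(x):
    """Black-box oracle: returns hard labels only."""
    return (x @ W_victim).argmax(axis=1)

# Attacker-side public data pool (no victim queries needed to build it).
pool = rng.normal(size=(2000, 10))

def select_queries(pool, k):
    """Offline query selection via farthest-point sampling (a generic
    diversity heuristic standing in for the paper's selection method)."""
    chosen = [0]
    dist = np.linalg.norm(pool - pool[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pool - pool[nxt], axis=1))
    return pool[chosen]

# One-time selection, then a single round of victim queries.
queries = select_queries(pool, 200)
labels = victim_api(queries)

# Train a substitute by one-hot least squares on the stolen labels.
Y = np.eye(3)[labels]
W_sub, *_ = np.linalg.lstsq(queries, Y, rcond=None)

# Agreement: fraction of held-out inputs where substitute matches victim.
test = rng.normal(size=(1000, 10))
agreement = (victim_api(test) == (test @ W_sub).argmax(axis=1)).mean()
```

Because the selection step never touches `victim_api`, the same `queries` set could be reused against multiple cloud models, which is the property the abstract highlights.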

    Figures(3)  / Tables(5)
