Call for Papers: Explainability, Robustness, and Security in AI Systems

Release Date: 2022-11-04

Guest Editors

Prof. Qian Wang, Wuhan University

qianwang@whu.edu.cn

Prof. Chao Shen, Xi’an Jiaotong University

chaoshen@mail.xjtu.edu.cn

Prof. Qi Li, Tsinghua University

qli01@tsinghua.edu.cn

Prof. Cong Wang, City University of Hong Kong 

congwang@cityu.edu.hk

Introduction

Artificial Intelligence (AI) has become an integral part of our lives and has even demonstrated great value in military use. The ongoing Russia–Ukraine war is a clear demonstration of how AI technologies can be adopted in modern warfare, for example in battlefield surveillance, collecting and archiving signals intelligence, mounting cyber-attacks, and running misinformation campaigns. Despite the evident benefits brought by AI, recent research has revealed that improper use of AI technologies (e.g., Deepfakes) can amplify the negative impact of misinformation. For example, adversaries can use DL/ML tools to craft misleading and fake content, such as doctored videos of invading forces and fake live streams, and post it on social media platforms.

Beyond the abuse of AI technologies, another concern is the inherent vulnerabilities of AI systems themselves, which can be exploited to mislead the systems and/or steal private information. For example, carefully orchestrated inputs (e.g., adversarial examples and poisoned training examples) can derail AI models and exploit vulnerabilities in their algorithms. With these vulnerabilities, adversaries can actively manipulate or disrupt the operation of AI-enabled systems. Understanding and exploring the vulnerabilities of AI systems is therefore vital and urgent for developing robust and secure AI applications. However, due to the complexity of AI algorithms and models, understanding their behaviors and vulnerabilities remains a major challenge, which in turn makes it even more difficult to promote the wide adoption of AI technologies in many mission-critical applications. To push the reliability boundaries of AI systems, this special issue focuses on research into explainability, robustness, and security in AI systems.
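As a minimal illustration of the adversarial-example phenomenon mentioned above, the sketch below applies one step of the Fast Gradient Sign Method (FGSM) to a toy classifier. The model, input data, and perturbation budget are illustrative assumptions, not part of any system discussed in this call.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the loss.
# The toy linear model, random input, and epsilon value are assumptions for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(20, 2)        # toy classifier standing in for a deployed AI model
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20)          # a benign input
y = torch.tensor([1])           # its assumed true label
x_adv = x.clone().requires_grad_(True)

# One FGSM step: gradient of the loss w.r.t. the input, then a signed perturbation.
loss = loss_fn(model(x_adv), y)
loss.backward()
epsilon = 0.25                  # assumed perturbation budget
x_perturbed = x_adv + epsilon * x_adv.grad.sign()

print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_perturbed).argmax(dim=1).item())
```

On this randomly initialized toy model the prediction may or may not flip, but on trained deep networks such small, targeted perturbations are well known to change model outputs, which is the vulnerability this special issue seeks to understand and mitigate.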

Topics of Interest

This special issue seeks novel studies on designing, developing, presenting, testing, and evaluating approaches for explainability, robustness, and endogenous security in AI systems. Relevant topics include, but are not limited to, the following areas:

  • Attacks on machine learning and defense

  • Adversarial machine learning & robustness

  • Deep generative models for attacks and defenses

  • Image and video manipulation and detection

  • Speech synthesis and spoken language generation

  • Analysis and understanding of deep networks

  • Security and privacy of systems based on ML and AI

  • AI testing and testing for AI

  • Societal impact of AI

Submission Guidelines

1. The paper should present the authors' own original research results; the data must be true and reliable; the work should have important academic and application value; and it must not have been published or presented in any public publication or conference. There must also be no duplicate submission.

2. The paper should include a title, abstract, keywords, body text, and references; please refer to recent papers in the Chinese Journal of Electronics for formatting. The review is blind, so please remove author and funding information from the manuscript.

3. Please submit the paper through the official website (http://cje.ejournal.org.cn/). The corresponding author's contact address, telephone number, and e-mail address should be entered in the system when you register on the website; do not include them in the manuscript. Articles will be reviewed anonymously according to the same standards as regular submissions.

4. Please add "ERSinAI2023+" at the beginning of the paper title when submitting it.

Schedule

Submission Deadline: Jan. 1, 2023
Final Notification: Mar. 15, 2023

Contact

Yue LI

liyue@ejournal.org.cn
