Robust Regularization Design of Graph Neural Networks Against Adversarial Attacks Based on Lyapunov Theory
Abstract
The robustness of graph neural networks (GNNs) is a critical research topic in deep learning. Many researchers have designed regularization methods to enhance the robustness of neural networks, but theoretical analysis of the principles underlying robustness is still lacking. To address the weaknesses of current robustness design methods, this paper provides new insights into how to guarantee the robustness of GNNs. A novel regularization strategy, named Lya-Reg, is designed to guarantee the robustness of GNNs via Lyapunov theory. Our results offer new insight into how regularization can mitigate various adversarial effects on different graph signals. Extensive experiments on several public datasets demonstrate that the proposed regularization method is more robust against various types of graph adversarial attacks than state-of-the-art methods such as the L_1-norm, L_2-norm, L_21-norm, Pro-GNN, PA-GNN, and GARNET.