Dong, Xuewen; Li, Jiachen; Li, Shujun; You, Zhichao; Qu, Qiang; Kholodov, Yaroslav; Shen, Yulong (2025) Adaptive backdoor attacks with reasonable constraints on graph neural networks. IEEE Transactions on Dependable and Secure Computing, 22. ISSN 1545-5971. E-ISSN 1941-0018. (In press) (doi:10.1109/TDSC.2025.3543020) (KAR id:109705)
PDF (Author's Accepted Manuscript)
Language: English. Restricted to Repository staff only.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Official URL: https://doi.org/10.1109/TDSC.2025.3543020
Abstract
Recent studies show that graph neural networks (GNNs) are vulnerable to backdoor attacks. Existing backdoor attacks against GNNs use fixed-pattern triggers and lack reasonable trigger constraints, overlooking individual graph characteristics and resulting in insufficient evasiveness. To tackle these issues, we propose ABARC, the first Adaptive Backdoor Attack with Reasonable Constraints, applicable to both graph-level and node-level tasks in GNNs. For graph-level tasks, we propose a subgraph backdoor attack independent of the graph's topology. It dynamically selects trigger nodes for each target graph and modifies node features under constraints based on graph similarity, feature range, and feature type. For node-level tasks, our attack begins with an analysis of node features, followed by selecting and modifying trigger features, which are then constrained by node similarity, feature range, and feature type. Furthermore, an adaptive edge-pruning mechanism is designed to reduce the impact of neighbors on target nodes, ensuring a high attack success rate (ASR). Experimental results show that even with reasonable constraints for attack evasiveness, our attack achieves a high ASR while incurring a marginal clean accuracy drop (CAD). When combined with the state-of-the-art randomized smoothing (RS) defense, our attack maintains an ASR over 94%, surpassing existing attacks by more than 7%.
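To make the graph-level pipeline described in the abstract concrete, the sketch below illustrates the general shape of such an attack: adaptively pick trigger nodes per graph, perturb their features, and enforce feature-range and similarity constraints. All specifics here (the centroid-based node selection, the cosine-similarity check, the `perturb` and `sim_threshold` parameters) are illustrative assumptions, not the paper's actual ABARC algorithm.

```python
import math

def inject_trigger(X, n_trigger_nodes=2, perturb=0.2, sim_threshold=0.9):
    """Illustrative per-graph trigger injection on a node-feature matrix.

    X: list of feature vectors (one list of floats per node).
    Returns (poisoned features, trigger node indices, constraint_satisfied).
    """
    n, d = len(X), len(X[0])
    centroid = [sum(row[j] for row in X) / n for j in range(d)]
    # 1. Adaptive node selection (stand-in rule): pick the nodes whose
    #    features lie nearest the graph's mean feature vector.
    order = sorted(range(n), key=lambda i: math.dist(X[i], centroid))
    trigger_nodes = order[:n_trigger_nodes]
    # 2. Feature-range constraint: perturb trigger-node features, then
    #    clamp each dimension to the min/max observed in this graph.
    lo = [min(row[j] for row in X) for j in range(d)]
    hi = [max(row[j] for row in X) for j in range(d)]
    Xp = [row[:] for row in X]
    for i in trigger_nodes:
        Xp[i] = [min(max(Xp[i][j] + perturb * (hi[j] - lo[j]), lo[j]), hi[j])
                 for j in range(d)]
    # 3. Similarity constraint: accept the poisoned graph only if its mean
    #    feature vector stays close (cosine similarity) to the clean one.
    mean_p = [sum(row[j] for row in Xp) / n for j in range(d)]
    dot = sum(a * b for a, b in zip(centroid, mean_p))
    norm = (math.sqrt(sum(a * a for a in centroid))
            * math.sqrt(sum(b * b for b in mean_p)))
    if norm == 0 or dot / norm < sim_threshold:
        return X, trigger_nodes, False  # constraint violated; keep graph clean
    return Xp, trigger_nodes, True
```

A feature-type constraint (e.g. keeping binary features binary, rounding integer features) would be applied at step 2; it is omitted here for brevity.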
| Item Type: | Article |
|---|---|
| DOI/Identification number: | 10.1109/TDSC.2025.3543020 |
| Uncontrolled keywords: | graph neural networks; backdoor attacks; trigger constraint; backdoor evasiveness |
| Subjects: | Q Science > QA Mathematics (inc Computing science); Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming > QA76.87 Neural computers, neural networks |
| Divisions: | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing; University-wide institutes > Institute of Cyber Security for Society |
| Funders: | National Natural Science Foundation of China (https://ror.org/01h0zpd94) |
| Depositing User: | Shujun Li |
| Date Deposited: | 21 Apr 2025 09:08 UTC |
| Last Modified: | 23 Apr 2025 02:56 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/109705 |