Kent Academic Repository

Local Differential Privacy is Not Enough: A Sample Reconstruction Attack against Federated Learning with Local Differential Privacy

You, Zhichao, Dong, Xuewen, Li, Shujun, Liu, Ximeng, Ma, Siqi, Shen, Yulong (2025) Local Differential Privacy is Not Enough: A Sample Reconstruction Attack against Federated Learning with Local Differential Privacy. IEEE Transactions on Information Forensics & Security, 20 . pp. 1519-1534. ISSN 1556-6013. (doi:10.1109/TIFS.2024.3515793) (KAR id:108691)

Abstract

Reconstruction attacks against federated learning (FL) aim to reconstruct users' training samples from their uploaded gradients. Local differential privacy (LDP), under which gradients are clipped and perturbed, is widely regarded as an effective defense against various attacks, including sample reconstruction in FL. Existing attacks are ineffective against FL with LDP because clipping and perturbation destroy most of the sample information needed for reconstruction. Moreover, existing attacks embed additional sample information into gradients to improve their effectiveness, which causes gradient expansion and hence more severe gradient clipping under LDP. In this paper, we propose a sample reconstruction attack against LDP-based FL, applicable to any target model, that reconstructs victims' sensitive samples and thereby shows that FL with LDP is not flawless. To cope with gradient expansion in reconstruction attacks and noise in LDP, the core of the proposed attack is gradient compression and reconstructed-sample denoising. For gradient compression, an inference structure based on sample characteristics is presented to reduce redundant gradients under LDP. For reconstructed-sample denoising, we artificially introduce zero gradients to observe the noise distribution and scale a confidence interval to filter the noise out. Theoretical proofs guarantee the effectiveness of the proposed attack. Evaluations show that the proposed attack is the only one that reconstructs victims' training samples in LDP-based FL, while having little impact on the target model's accuracy. We conclude that LDP-based FL needs further improvement to defend effectively against sample reconstruction attacks.
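The two mechanisms the abstract refers to can be illustrated in miniature. The sketch below shows (a) the standard LDP treatment of a gradient vector (L2-norm clipping followed by additive Gaussian noise) and (b) the denoising idea of using positions known by construction to carry a zero true gradient to estimate the noise scale, then filtering entries inside a scaled confidence interval. This is a minimal illustration, not the paper's implementation: the function names, the Gaussian noise model, and the threshold factor `k` are assumptions for demonstration purposes.

```python
import numpy as np

def ldp_perturb(grad, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip a gradient to a bounded L2 norm, then add Gaussian noise
    (an assumed, typical LDP-style perturbation)."""
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

def denoise_with_zero_positions(noisy, zero_mask, k=2.0):
    """Positions whose true gradient is zero by construction expose pure
    noise; estimate its standard deviation from them and zero out every
    entry that falls inside the k-sigma confidence interval."""
    sigma_hat = noisy[zero_mask].std()
    out = noisy.copy()
    out[np.abs(out) <= k * sigma_hat] = 0.0
    return out
```

With `sigma=0.0`, `ldp_perturb` reduces to pure clipping, which is a convenient way to check that a gradient of norm 5 is rescaled to the clip bound of 1. The denoiser keeps large informative entries while suppressing entries indistinguishable from the observed noise.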

Item Type: Article
DOI/Identification number: 10.1109/TIFS.2024.3515793
Uncontrolled keywords: Federated learning (FL), differential privacy, data privacy, sample reconstruction attack
Subjects: Q Science > QA Mathematics (inc Computing science) > QA 75 Electronic computers. Computer science
Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming > QA76.76.E95 Expert Systems (Intelligent Knowledge Based Systems)
Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming > QA76.87 Neural computers, neural networks
T Technology > TK Electrical engineering. Electronics. Nuclear engineering > TK5101 Telecommunications > TK5102.9 Signal processing
T Technology > TK Electrical engineering. Electronics. Nuclear engineering > TK7800 Electronics > TK7880 Applications of electronics > TK7882.P3 Pattern recognition systems
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
University-wide institutes > Institute of Cyber Security for Society
Funders: National Natural Science Foundation of China (https://ror.org/01h0zpd94)
Depositing User: Shujun Li
Date Deposited: 07 Feb 2025 10:50 UTC
Last Modified: 12 Feb 2025 03:48 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/108691 (The current URI for this page, for reference purposes)
