Mitigating Model Inversion Attacks in Federated Learning Over Selectively Encrypted Gradients 


Vol. 14, No. 4, pp. 257-264, Apr. 2025
https://doi.org/10.3745/TKIPS.2025.14.4.257


  Abstract

We introduce a novel method to mitigate model inversion attacks in Federated Learning (FL). FL enables the training of a global model by sharing local gradients without exposing clients’ private data. However, model inversion attacks can reconstruct that private data from the shared gradients. To address this, traditional defense mechanisms such as Homomorphic Encryption (HE) and Differential Privacy (DP) have been applied to deep learning model training to obscure the private data, but both mechanisms have limitations in balancing privacy, accuracy, and efficiency. Our approach selectively encrypts the gradients that carry the most information about the private data, balancing accuracy against computational efficiency. Additionally, optional DP noise is applied to the unencrypted gradients to further enhance data privacy. Comprehensive evaluations demonstrate that our method effectively balances the trade-off between privacy, accuracy, and efficiency, outperforming existing defenses.
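
To make the mechanism described above concrete, the sketch below is our own minimal illustration, not the authors' implementation. It assumes that a gradient's "information content" can be approximated by its absolute magnitude, uses a placeholder he_encrypt function where a real HE scheme (e.g., Paillier or CKKS) would sit, and applies Gaussian-mechanism DP noise to the unencrypted remainder; the encryption ratio, clipping bound, and noise scale are all illustrative choices.

    import numpy as np

    def he_encrypt(values: np.ndarray) -> list:
        """Placeholder for homomorphic encryption of the selected gradients.
        A real system would call an HE library (e.g., Paillier or CKKS) here."""
        return [("ciphertext", v) for v in values]  # stand-in ciphertexts

    def protect_gradients(grad: np.ndarray, encrypt_ratio: float = 0.1,
                          sigma: float = 0.01, clip: float = 1.0):
        """Selectively encrypt the most informative gradients; optionally add
        Gaussian DP noise to the rest before sharing with the server."""
        flat = grad.ravel().copy()
        k = max(1, int(encrypt_ratio * flat.size))
        # Select the top-k coordinates by magnitude as the "sensitive" subset
        # (a stand-in for the paper's informativeness criterion).
        top_idx = np.argpartition(np.abs(flat), -k)[-k:]
        ciphertexts = he_encrypt(flat[top_idx])
        # Clip and perturb the remaining plaintext gradients.
        rest_mask = np.ones(flat.size, dtype=bool)
        rest_mask[top_idx] = False
        rest = np.clip(flat[rest_mask], -clip, clip)
        rest += np.random.normal(0.0, sigma * clip, size=rest.size)
        return top_idx, ciphertexts, rest

    # Example: a client protects one layer's gradient before upload.
    rng = np.random.default_rng(0)
    g = rng.normal(size=(64, 32))
    idx, enc, noisy = protect_gradients(g, encrypt_ratio=0.1, sigma=0.01)
    print(f"encrypted {idx.size} of {g.size} gradients; "
          f"{noisy.size} sent in plaintext with DP noise")

Under these assumptions, only a small fraction of coordinates incurs HE's computational cost, while the noisy plaintext remainder retains enough fidelity for aggregation, which is the accuracy/efficiency trade-off the abstract describes.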

  Cite this article

[IEEE Style]

S. Ryu, H. Bae, Y. Lee, "Mitigating Model Inversion Attacks in Federated Learning Over Selectively Encrypted Gradients," The Transactions of the Korea Information Processing Society, vol. 14, no. 4, pp. 257-264, 2025. DOI: https://doi.org/10.3745/TKIPS.2025.14.4.257.

[ACM Style]

Seungyeon Ryu, Ho Bae, and Younghan Lee. 2025. Mitigating Model Inversion Attacks in Federated Learning Over Selectively Encrypted Gradients. The Transactions of the Korea Information Processing Society, 14, 4, (2025), 257-264. DOI: https://doi.org/10.3745/TKIPS.2025.14.4.257.