MemberShield: A framework for federated learning with membership privacy

  • Identifying data

    Identifier: imarina:9387481
    Authors:
    Ahmed, Faisal; Sanchez, David; Haddi, Zouhair; Domingo-Ferrer, Josep
    Abstract:
    Federated Learning (FL) allows multiple data owners to build high-quality deep learning models collaboratively, by sharing only model updates and keeping data on their premises. Even though FL offers privacy-by-design, it is vulnerable to membership inference attacks (MIA), where an adversary tries to determine whether a sample was included in the training data. Existing defenses against MIA cannot offer meaningful privacy protection without significantly hampering the model's utility and causing a non-negligible training overhead. In this paper we analyze the underlying causes of the differences in the model behavior for member and non-member samples, which arise from model overfitting and facilitate MIAs. Accordingly, we propose MemberShield, a generalization-based defense method for MIAs that consists of: (i) one-time preprocessing of each client's training data labels that transforms one-hot encoded labels to soft labels and eventually exploits them in local training, and (ii) early stopping the training when the local model's validation accuracy does not improve on that of the global model for a number of epochs. Extensive empirical evaluations on three widely used datasets and four model architectures demonstrate that MemberShield outperforms state-of-the-art defense methods by delivering substantially better practical privacy protection against all forms of MIAs, while better preserving the target model utility. On top of that, our proposal significantly reduces training time and is straightforward to implement, by just tuning a single hyperparameter.
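
    A minimal sketch of the two components described in the abstract (not the authors' code; the smoothing hyperparameter name alpha, the patience value and both function names are hypothetical choices for illustration), in plain NumPy:

        import numpy as np

        def soften_labels(one_hot, alpha=0.3):
            # One-time label preprocessing sketch: blend one-hot labels with a
            # uniform distribution over the classes (label smoothing). `alpha`
            # stands in for the single hyperparameter mentioned in the abstract.
            num_classes = one_hot.shape[1]
            return (1.0 - alpha) * one_hot + alpha / num_classes

        def should_stop(local_val_acc_history, global_val_acc, patience=3):
            # Early-stopping sketch: stop local training once the local model's
            # validation accuracy has failed to improve on the global model's
            # validation accuracy for `patience` consecutive epochs.
            recent = local_val_acc_history[-patience:]
            return len(recent) == patience and all(a <= global_val_acc for a in recent)

        # Usage with synthetic values:
        y = np.eye(10)[np.array([3, 7, 0])]   # one-hot labels: 3 samples, 10 classes
        soft_y = soften_labels(y, alpha=0.3)  # each row still sums to 1
        history = [0.71, 0.72, 0.72]          # local validation accuracy per epoch
        stop = should_stop(history, global_val_acc=0.74, patience=3)  # True -> stop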
  • Other:

    Authors as listed in the article: Ahmed, Faisal; Sanchez, David; Haddi, Zouhair; Domingo-Ferrer, Josep
    Department: Enginyeria Informàtica i Matemàtiques
    URV author(s): Ahmed, Faisal / Domingo Ferrer, Josep / Sánchez Ruenes, David / Sánchez Torres, David
    Keywords: Data protection; Deep learning models; Federated learning; Membership inference attack; Neural-networks; Privacy; Ris
    Subject areas: Artificial intelligence; Astronomy / physics; Biotechnology; Computer science; Agricultural sciences I; Social sciences; Cognitive neuroscience; Computer science, artificial intelligence; Economics; Engineering III; Engineering IV; Philosophy / theology: philosophy subcommittee; General medicine; Interdisciplinary; Mathematics / probability and statistics; Neurosciences; Psychology; Chemistry
    Usage license: https://creativecommons.org/licenses/by/3.0/es/
    Author's email address: josep.domingo@urv.cat david.sanchez@urv.cat
    Author identifier: 0000-0001-7213-4962 0000-0001-7275-7887
    Record creation date: 2024-10-26
    Deposited article version: info:eu-repo/semantics/publishedVersion
    Article reference per the original source: Neural Networks. 181 106768
    Item reference in APA style: Ahmed, Faisal; Sanchez, David; Haddi, Zouhair; Domingo-Ferrer, Josep (2025). MemberShield: A framework for federated learning with membership privacy. Neural Networks, 181, 106768. DOI: 10.1016/j.neunet.2024.106768
    License document URL: https://repositori.urv.cat/ca/proteccio-de-dades/
    Institution: Universitat Rovira i Virgili
    Journal publication year: 2025
    Publication type: Journal Publications
  • Keywords:

    Artificial Intelligence; Cognitive Neuroscience; Computer Science, Artificial Intelligence; Neurosciences
    Data protection
    Deep learning models
    Federated learning
    Membership inference attack
    Neural-networks
    Privacy
    Ris
    Artificial intelligence
    Astronomy / physics
    Biotechnology
    Computer science
    Agricultural sciences I
    Social sciences
    Cognitive neuroscience
    Computer science, artificial intelligence
    Economics
    Engineering III
    Engineering IV
    Philosophy / theology: philosophy subcommittee
    General medicine
    Interdisciplinary
    Mathematics / probability and statistics
    Neurosciences
    Psychology
    Chemistry
  • Documents:
