
MemberShield: A framework for federated learning with membership privacy

  • Identifying data

    Identifier: imarina:9387481
    Authors:
    Ahmed, Faisal; Sanchez, David; Haddi, Zouhair; Domingo-Ferrer, Josep
    Abstract:
    Federated Learning (FL) allows multiple data owners to build high-quality deep learning models collaboratively, by sharing only model updates and keeping data on their premises. Even though FL offers privacy-by-design, it is vulnerable to membership inference attacks (MIA), where an adversary tries to determine whether a sample was included in the training data. Existing defenses against MIA cannot offer meaningful privacy protection without significantly hampering the model's utility and causing a non-negligible training overhead. In this paper we analyze the underlying causes of the differences in the model behavior for member and non-member samples, which arise from model overfitting and facilitate MIAs. Accordingly, we propose MemberShield, a generalization-based defense method for MIAs that consists of: (i) one-time preprocessing of each client's training data labels that transforms one-hot encoded labels to soft labels and eventually exploits them in local training, and (ii) early stopping the training when the local model's validation accuracy does not improve on that of the global model for a number of epochs. Extensive empirical evaluations on three widely used datasets and four model architectures demonstrate that MemberShield outperforms state-of-the-art defense methods by delivering substantially better practical privacy protection against all forms of MIAs, while better preserving the target model utility. On top of that, our proposal significantly reduces training time and is straightforward to implement, by just tuning a single hyperparameter.
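    The abstract describes MemberShield's two components concretely enough to sketch them. Below is a minimal illustrative sketch in Python, not the authors' implementation: it assumes the soft labels come from uniform label smoothing controlled by a single hyperparameter (here called smoothing), and it models the early-stopping rule with a patience counter; train_one_epoch and validate are hypothetical callables standing in for a client's local training and evaluation routines.

```python
# Illustrative sketch of the two components described in the abstract.
# Assumptions (not taken from the paper's code): soft labels are produced by
# uniform label smoothing with a single hyperparameter `smoothing`, and the
# early-stopping rule uses a `patience` counter that compares local validation
# accuracy against the global model's.
import numpy as np

def to_soft_labels(one_hot: np.ndarray, smoothing: float = 0.3) -> np.ndarray:
    """One-time label preprocessing: one-hot labels -> soft labels.

    The true class keeps (1 - smoothing) of the probability mass; the remainder
    is spread uniformly over all classes.
    """
    num_classes = one_hot.shape[1]
    return one_hot * (1.0 - smoothing) + smoothing / num_classes

def local_training(train_one_epoch, validate, global_val_acc,
                   max_epochs=50, patience=3):
    """Client-side training loop with early stopping relative to the global model.

    train_one_epoch(): runs one local pass over the soft-labeled training data.
    validate(): returns the local model's current validation accuracy.
    Training stops once local validation accuracy has failed to improve on the
    global model's validation accuracy for `patience` consecutive epochs.
    """
    epochs_without_improvement = 0
    for _ in range(max_epochs):
        train_one_epoch()
        if validate() > global_val_acc:
            epochs_without_improvement = 0   # still improving on the global model
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                        # stop early to limit overfitting

# Example: with 3 classes and smoothing = 0.3, the true class gets 0.8 and the
# other classes 0.1 each.
print(to_soft_labels(np.eye(3)[[0, 2]], smoothing=0.3))
```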
  • Other:

    Authors as listed in the article: Ahmed, Faisal; Sanchez, David; Haddi, Zouhair; Domingo-Ferrer, Josep
    Department: Enginyeria Informàtica i Matemàtiques
    URV author(s): Ahmed, Faisal / Domingo Ferrer, Josep / Sánchez Ruenes, David / Sánchez Torres, David
    Keywords: Data protection; Deep learning models; Federated learning; Membership inference attack; Neural networks; Privacy; Ris
    Subject areas: Artificial intelligence; Astronomy / physics; Biotechnology; Computer science; Agricultural sciences I; Social sciences; Cognitive neuroscience; Computer science, artificial intelligence; Economics; Engineering III; Engineering IV; Philosophy / theology: philosophy subcommittee; General medicine; Interdisciplinary; Mathematics / probability and statistics; Neurosciences; Psychology; Chemistry
    License of use: https://creativecommons.org/licenses/by/3.0/es/
    Author e-mail address: josep.domingo@urv.cat david.sanchez@urv.cat
    Author identifier: 0000-0001-7213-4962 0000-0001-7275-7887
    Record registration date: 2024-10-26
    Deposited article version: info:eu-repo/semantics/publishedVersion
    Reference to the article according to the original source: Neural Networks, 181, 106768
    APA-style item reference: Ahmed, Faisal; Sanchez, David; Haddi, Zouhair; Domingo-Ferrer, Josep (2025). MemberShield: A framework for federated learning with membership privacy. Neural Networks, 181, 106768. DOI: 10.1016/j.neunet.2024.106768
    License document URL: https://repositori.urv.cat/ca/proteccio-de-dades/
    Institution: Universitat Rovira i Virgili
    Journal publication year: 2025
    Publication type: Journal Publications
  • Keywords:

    Artificial Intelligence; Cognitive Neuroscience; Computer Science, Artificial Intelligence; Neurosciences
  • Documents:
