Scientific production articles - Enginyeria Informàtica i Matemàtiques

LFighter: Defending against the label-flipping attack in federated learning

  • Identification data

    Identifier: imarina:9332577
    Authors: Jebreel, NM; Domingo-Ferrer, J; Sánchez, D; Blanco-Justicia, A
    Abstract:
    Federated learning (FL) provides autonomy and privacy by design to participating peers, who cooperatively build a machine learning (ML) model while keeping their private data in their devices. However, that same autonomy opens the door for malicious peers to poison the model by conducting either untargeted or targeted poisoning attacks. The label-flipping (LF) attack is a targeted poisoning attack where the attackers poison their training data by flipping the labels of some examples from one class (i.e., the source class) to another (i.e., the target class). Unfortunately, this attack is easy to perform and hard to detect, and it negatively impacts the performance of the global model. Existing defenses against LF are limited by assumptions on the distribution of the peers’ data and/or do not perform well with high-dimensional models. In this paper, we deeply investigate the LF attack behavior. We find that the contradicting objectives of attackers and honest peers on the source class examples are reflected on the parameter gradients corresponding to the neurons of the source and target classes in the output layer. This makes those gradients good discriminative features for the attack detection. Accordingly, we propose LFighter, a novel defense against the LF attack that first dynamically extracts those gradients from the peers’ local updates and then clusters the extracted gradients, analyzes the resulting clusters, and filters out potential bad updates before model aggregation. Extensive empirical analysis on three data sets shows the effectiveness of the proposed defense regardless of the data distribution or model dimensionality. Also, LFighter outperforms several state-of-the-art defenses by offering lower test error, higher overall accuracy, higher source class accuracy, lower attack success rate, and higher stability of the source class accuracy. Our code and data are available for reproducibility purposes at https://github.com/NajeebJebreel/LFighter.
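The abstract describes the label-flipping (LF) attack: a malicious peer relabels its source-class training examples as the target class before running its local training. A minimal, illustrative sketch of that poisoning step follows (the helper name label_flip and the example class indices are our assumptions, not taken from the paper's released code):

```python
import numpy as np

def label_flip(labels, source_class, target_class):
    """Label-flipping attack: relabel every source-class example as the target class."""
    poisoned = labels.copy()
    poisoned[poisoned == source_class] = target_class
    return poisoned

# Example: a malicious peer flips class 3 (source) to class 8 (target)
# before its local training round; the resulting model update is then
# submitted to the server like any honest update.
local_labels = np.array([3, 1, 3, 8, 0, 3])
print(label_flip(local_labels, source_class=3, target_class=8))  # -> [8 1 8 8 0 8]
```

Because the poisoning happens only in the peer's private data, the server never sees the flipped labels, which is why detection has to work on the submitted updates themselves.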
  • Other:

    Author(s) according to the article: Jebreel, NM; Domingo-Ferrer, J; Sánchez, D; Blanco-Justicia, A
    Department: Enginyeria Informàtica i Matemàtiques
    URV author(s): Blanco Justicia, Alberto / Domingo Ferrer, Josep
    Keywords: Security; Poisoning attacks; Label-flipping attacks; Federated learning; Deep learning models; Defense
    Abstract: Federated learning (FL) provides autonomy and privacy by design to participating peers, who cooperatively build a machine learning (ML) model while keeping their private data in their devices. However, that same autonomy opens the door for malicious peers to poison the model by conducting either untargeted or targeted poisoning attacks. The label-flipping (LF) attack is a targeted poisoning attack where the attackers poison their training data by flipping the labels of some examples from one class (i.e., the source class) to another (i.e., the target class). Unfortunately, this attack is easy to perform and hard to detect, and it negatively impacts the performance of the global model. Existing defenses against LF are limited by assumptions on the distribution of the peers’ data and/or do not perform well with high-dimensional models. In this paper, we deeply investigate the LF attack behavior. We find that the contradicting objectives of attackers and honest peers on the source class examples are reflected on the parameter gradients corresponding to the neurons of the source and target classes in the output layer. This makes those gradients good discriminative features for the attack detection. Accordingly, we propose LFighter, a novel defense against the LF attack that first dynamically extracts those gradients from the peers’ local updates and then clusters the extracted gradients, analyzes the resulting clusters, and filters out potential bad updates before model aggregation. Extensive empirical analysis on three data sets shows the effectiveness of the proposed defense regardless of the data distribution or model dimensionality. Also, LFighter outperforms several state-of-the-art defenses by offering lower test error, higher overall accuracy, higher source class accuracy, lower attack success rate, and higher stability of the source class accuracy. Our code and data are available for reproducibility purposes at https://github.com/NajeebJebreel/LFighter.
    Subject areas: Chemistry; Psychology; Neurosciences; Mathematics / probability and statistics; Interdisciplinary; General medicine; Philosophy/theology: philosophy subcommission; Engineering IV; Engineering III; Economics; Computer science, artificial intelligence; Cognitive neuroscience; Social sciences; Agricultural sciences I; Computer science; Biotechnology; Astronomy / physics; Artificial intelligence
    Access to the usage license: https://creativecommons.org/licenses/by/3.0/es/
    Author's e-mail address: alberto.blanco@urv.cat; josep.domingo@urv.cat
    Record registration date: 2024-01-13
    Version of the deposited article: info:eu-repo/semantics/acceptedVersion
    Link to the original source: https://www.sciencedirect.com/science/article/abs/pii/S0893608023006421?via%3Dihub
    Article reference in the original source: Neural Networks, 170, 111-126
    Item reference following APA style: Jebreel, N. M., Domingo-Ferrer, J., Sánchez, D., & Blanco-Justicia, A. (2024). LFighter: Defending against the label-flipping attack in federated learning. Neural Networks, 170, 111-126. DOI: 10.1016/j.neunet.2023.11.019
    License document URL: https://repositori.urv.cat/ca/proteccio-de-dades/
    Article DOI: 10.1016/j.neunet.2023.11.019
    Institution: Universitat Rovira i Virgili
    Journal publication year: 2024
    Publication type: Journal Publications
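The abstract above outlines LFighter's pipeline: for each peer update, extract the output-layer gradients of the (dynamically identified) source and target classes, cluster them, analyze the clusters, and filter out suspected bad updates before aggregation. The sketch below is a simplified stand-in, not the authors' implementation: it assumes the two relevant class indices are already known, uses plain k-means with k=2 in place of the paper's clustering and cluster analysis, and keeps the larger cluster on the assumption (ours) that honest peers are the majority.

```python
import numpy as np
from sklearn.cluster import KMeans

def lfighter_like_filter(updates, src_idx, tgt_idx):
    """Hedged sketch of an LFighter-style filter (not the paper's exact algorithm).

    updates: list of per-peer dicts whose 'out_grad' entry is the output-layer
             weight-gradient matrix (shape: n_classes x n_features).
    src_idx, tgt_idx: indices of the suspected source and target classes
             (the real defense derives these dynamically from the updates).
    Returns the indices of the updates kept for aggregation.
    """
    # Per-peer feature: the concatenated gradient rows of the source and
    # target output neurons, which the paper finds to be discriminative.
    feats = np.stack([
        np.concatenate([u["out_grad"][src_idx], u["out_grad"][tgt_idx]])
        for u in updates
    ])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)

    # Assumption (ours): honest peers form the majority, so keep the larger
    # cluster and drop the other before averaging.
    keep = np.argmax(np.bincount(labels))
    return [i for i, lab in enumerate(labels) if lab == keep]
```

The server would then aggregate (e.g., with FedAvg) only the kept updates, which is the filtering-before-aggregation step the abstract refers to.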
  • Keywords:

    Artificial Intelligence; Cognitive Neuroscience; Computer Science, Artificial Intelligence; Neurosciences
    Security
    Poisoning attacks
    Label-flipping attacks
    Federated learning
    Deep learning models
    Defense
  • Documents:
