
Generating Deep Learning Model-Specific Explanations at the End User's Side

  • Identifying data

    Identifier: imarina:9291527
    Authors:
    Haffar, R; Jebreel, N; Sanchez, D; Domingo-Ferrer, J
  • Other:

    Authors as listed in the article: Haffar, R; Jebreel, N; Sanchez, D; Domingo-Ferrer, J
    Department: Enginyeria Informàtica i Matemàtiques
    URV author(s): Domingo Ferrer, Josep / Haffar, Rami / Jebreel, Najeeb Moharram Salim / Sánchez Ruenes, David
    Keywords: Recognition; End-user explanations; Deep learning classification models; Counterfactual explanations; Adversarial examples
    Abstract: End users who cannot afford to collect and label big data to train accurate deep learning (DL) models resort to Machine Learning as a Service (MLaaS) providers, who provide paid access to accurate DL models. However, the lack of transparency in how the providers' models make predictions causes a problem of trust. A way to increase trust (and also to align with ethical regulations) is for predictions to be accompanied by explanations locally and independently generated by the end users (rather than by explanations offered by the model providers). Explanation methods using internal components of DL models (a.k.a. model-specific explanations) are more accurate and effective than those relying solely on the inputs and outputs (a.k.a. model-agnostic explanations). However, end users lack white-box access to the internal components of the providers' models. To tackle this issue, we propose a novel approach allowing an end user to locally generate model-specific explanations for a DL classification model accessed via a provider's API. First, we approximate the provider's model with a local surrogate model. We then use the surrogate model's components to locally generate model-specific explanations that approximate the explanations obtainable with white-box access to the provider's DL model. Specifically, we leverage the surrogate model's gradients to generate adversarial examples that counterfactually explain why an input example is classified into a specific class. Our approach only requires the end user to have unlabeled data of size 0.5% of the provider's training data and with a similar distribution; given the small size and unlabeled nature of these data, they can be assumed to be already available to the end user or even to be supplied by the provider to build trust in his model. We demonstrate the accuracy and effectiveness of our approach through extensive experiments on two ML tasks: image classification and tabular data classification. The locally generated explanations are consistent with those obtainable with white-box access to the provider's model, thus giving end users an independent and reliable way to determine if the provider's model is trustworthy.
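    The two-step pipeline described in the abstract (distill a local surrogate from API-labeled data, then follow the surrogate's input gradients until the predicted class flips) can be sketched with a toy numpy example. This is only an illustrative sketch: `provider_api`, the logistic-regression surrogate, and all hyperparameters below are assumptions for demonstration, not the paper's actual DL models or settings; for a linear surrogate the gradient of the class logit with respect to the input is simply the weight vector.

    ```python
    import numpy as np

    # Hypothetical stand-in for the provider's black-box API: the end user
    # only observes predicted labels, never the model internals.
    def provider_api(X):
        # Secret decision rule: class 1 iff x0 + x1 > 1.
        return (X[:, 0] + X[:, 1] > 1.0).astype(int)

    # Step 1: distill a local surrogate from a small unlabeled sample,
    # using the API's predictions as training labels.
    rng = np.random.default_rng(0)
    X_unlabeled = rng.uniform(0.0, 1.0, size=(500, 2))
    y_api = provider_api(X_unlabeled)

    # Logistic-regression surrogate fitted by full-batch gradient descent.
    w, b = np.zeros(2), 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X_unlabeled @ w + b)))
        w -= 1.0 * (X_unlabeled.T @ (p - y_api)) / len(y_api)
        b -= 1.0 * np.mean(p - y_api)

    # Step 2: use the surrogate's gradient to build an adversarial
    # (counterfactual) example: nudge the input along the sign of the
    # gradient until the surrogate's predicted class flips.
    def counterfactual(x, eps=0.05, max_steps=200):
        x_cf = x.copy()
        original = int((x @ w + b) > 0)
        for _ in range(max_steps):
            if int((x_cf @ w + b) > 0) != original:
                break  # class flipped: x_cf counterfactually explains x
            x_cf += eps * np.sign(w) * (1.0 if original == 0 else -1.0)
        return x_cf

    x = np.array([0.2, 0.3])   # classified as class 0 by the surrogate
    x_cf = counterfactual(x)   # minimally perturbed input of the other class
    ```

    The difference `x_cf - x` shows which features had to change, and by how much, for the classification to flip, which is the counterfactual explanation the end user can compute locally without white-box access to the provider's model.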
    Subject areas: Software; Mathematics / probability and statistics; Interdisciplinary; Information systems; Engineering IV; Economics; Control and systems engineering; Computer science, artificial intelligence; Computer science; Artificial intelligence
    License: https://creativecommons.org/licenses/by/3.0/es/
    Author email addresses: rami.haffar@urv.cat najeeb.jebreel@urv.cat david.sanchez@urv.cat josep.domingo@urv.cat
    Author identifier: 0000-0001-7275-7887 0000-0001-7213-4962
    Record registration date: 2024-10-12
    Version of the deposited article: info:eu-repo/semantics/acceptedVersion
    Original source link: https://www.worldscientific.com/doi/abs/10.1142/S0218488522400219
    License document URL: https://repositori.urv.cat/ca/proteccio-de-dades/
    Article reference per original source: International Journal Of Uncertainty Fuzziness And Knowledge-Based Systems. 30 (SUPP02): 255-278
    Item reference (APA style): Haffar, R; Jebreel, N; Sanchez, D; Domingo-Ferrer, J (2022). Generating Deep Learning Model-Specific Explanations at the End User's Side. International Journal Of Uncertainty Fuzziness And Knowledge-Based Systems, 30(SUPP02), 255-278. DOI: 10.1142/S0218488522400219
    Article DOI: 10.1142/S0218488522400219
    Institution: Universitat Rovira i Virgili
    Journal publication year: 2022
    Publication type: Journal Publications
  • Keywords:

    Artificial Intelligence; Computer Science, Artificial Intelligence; Control and Systems Engineering; Information Systems; Software
    Recognition
    End-user explanations
    Deep learning classification models
    Counterfactual explanations
    Adversarial examples
  • Documents:
