Evaluating the disclosure risk of anonymized documents via a machine learning-based re-identification attack

  • Identification data

    Identifier: imarina:9385341
    Authors:
    Manzanares-Salor, Benet; Sanchez, David; Lison, Pierre
    Abstract:
    The availability of textual data depicting human-centered features and behaviors is crucial for many data mining and machine learning tasks. However, data containing personal information should be anonymized prior to making them available for secondary use. A variety of text anonymization methods have been proposed in recent years, which are standardly evaluated by comparing their outputs with human-based anonymizations. The residual disclosure risk is estimated with the recall metric, which quantifies the proportion of manually annotated re-identifying terms successfully detected by the anonymization algorithm. Nevertheless, recall is not a risk metric, which leads to several drawbacks. First, it requires a unique ground truth, and this does not hold for text anonymization, where several masking choices could be equally valid to prevent re-identification. Second, it relies on human judgements, which are inherently subjective and prone to error. Finally, the recall metric weights all terms uniformly, thereby ignoring the fact that some missed terms may have a much larger influence on the disclosure risk than others. To overcome these drawbacks, in this paper we propose a novel method to evaluate the disclosure risk of anonymized texts by means of an automated re-identification attack. We formalize the attack as a multi-class classification task and leverage state-of-the-art neural language models to aggregate the data sources that attackers may use to build the classifier. We illustrate the effectiveness of our method by assessing the disclosure risk of several methods for text anonymization under different attack configurations. Empirical results show substantial privacy risks for most existing anonymization methods.
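    To make the attack idea in the abstract concrete, the following is a minimal, hypothetical sketch of a re-identification attack framed as multi-class classification (one class per individual). It substitutes a simple TF-IDF plus logistic-regression classifier for the fine-tuned neural language model the paper describes, and all documents, names, and labels below are invented placeholders; it illustrates the attack structure, not the authors' implementation.

    ```python
    # Hedged sketch of a re-identification attack as multi-class classification.
    # Assumption: a TF-IDF + logistic-regression classifier stands in for the
    # neural language model used in the paper; all data are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Background knowledge assumed available to the attacker:
    # public documents labeled with the individual they refer to.
    background_texts = [
        "Alice Smith, 34, cardiologist at Riverside Hospital ...",
        "Bob Jones, lawyer based in Dublin, marathon runner ...",
        "Carol White, 58, teaches physics at Northfield High ...",
    ]
    background_labels = ["alice", "bob", "carol"]  # one class per individual

    # Anonymized documents released by the data publisher (masked terms replaced by ***).
    anonymized_texts = [
        "***, 34, cardiologist at *** Hospital ...",
        "***, lawyer based in ***, marathon runner ...",
    ]
    true_identities = ["alice", "bob"]  # known only for evaluation purposes

    # Train the attack classifier on the attacker's background knowledge.
    attack_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    attack_clf.fit(background_texts, background_labels)

    # Empirical disclosure risk: fraction of anonymized documents re-identified correctly.
    predictions = attack_clf.predict(anonymized_texts)
    re_id_accuracy = sum(p == t for p, t in zip(predictions, true_identities)) / len(true_identities)
    print(f"Re-identification accuracy (empirical disclosure risk): {re_id_accuracy:.2f}")
    ```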
  • Others:

    Authors, as they appear in the article: Manzanares-Salor, Benet; Sanchez, David; Lison, Pierre
    Department: Enginyeria Informàtica i Matemàtiques
    URV's Author/s: Manzanares Salor, Benet / Sánchez Ruenes, David
    Keywords: Text anonymization; Record linkage; Re-identification risk; Privacy-preserving data publishing; Privacy; Language models; De-identification
    Thematic Areas: Information systems; Engenharias IV; Engenharias III; Computer science, information systems; Computer science, artificial intelligence; Computer science applications; Computer networks and communications; Ciências biológicas I; Ciência da computação
    Licence for use: https://creativecommons.org/licenses/by/3.0/es/
    Authors' email: benet.manzanares@urv.cat; david.sanchez@urv.cat
    Author identifier: 0000-0001-7275-7887
    Record's date: 2025-03-15
    Paper version: info:eu-repo/semantics/publishedVersion
    Licence document URL: https://repositori.urv.cat/ca/proteccio-de-dades/
    Paper original source: Data Mining And Knowledge Discovery. 38 (6): 4040-4075
    APA: Manzanares-Salor, B., Sanchez, D., & Lison, P. (2024). Evaluating the disclosure risk of anonymized documents via a machine learning-based re-identification attack. Data Mining and Knowledge Discovery, 38(6), 4040-4075. https://doi.org/10.1007/s10618-024-01066-3
    Entity: Universitat Rovira i Virgili
    Journal publication year: 2024
    Publication Type: Journal Publications
  • Keywords:

    Computer Networks and Communications; Computer Science Applications; Computer Science, Artificial Intelligence; Computer Science, Information Systems; Information Systems
    Text anonymization
    Record linkage
    Re-identification risk
    Privacy-preserving data publishing
    Privacy
    Language models
    De-identification
  • Documents:
