\(k\)-Anonymity in Practice: How Generalisation and Suppression Affect Machine Learning Classifiers
The protection of private information is a crucial issue in data-driven research and business contexts. Typically, techniques like anonymisation or (selective) deletion are introduced in order to allow data sharing, e.g. in the case of collaborative research endeavours. For use with anonymisation techniques, the \(k\)-anonymity criterion is one of the most popular, with numerous scientific publications on different algorithms and metrics. Anonymisation techniques often require changing the data and thus necessarily affect the results of machine learning models trained on the underlying data. In this work, we conduct a systematic comparison and detailed investigation into the effects of different \(k\)-anonymisation algorithms on the results of machine learning models. We investigate a set of popular \(k\)-anonymisation algorithms with different classifiers and evaluate them on different real-world datasets. Our systematic evaluation shows that with an increasingly strong \(k\)-anonymity constraint, the classification performance generally degrades, but to varying degrees and strongly depending on the dataset and anonymisation method. Furthermore, Mondrian can be considered the method with the most appealing properties for subsequent classification.
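As a concrete illustration of the criterion the paper studies: a table is \(k\)-anonymous with respect to a set of quasi-identifiers if every combination of quasi-identifier values is shared by at least \(k\) records. The following minimal Python sketch (not taken from the paper; the dataset, attribute names and the interval-based age generalisation are invented for illustration) checks this property and shows how generalisation and suppression can be combined to reach it.

```python
# Hypothetical illustration (not the paper's code): checking k-anonymity over
# chosen quasi-identifiers and reaching it via a simple generalisation plus
# suppression step. Attribute names and data are invented for the example.
from collections import Counter

def equivalence_classes(rows, quasi_identifiers):
    """Count how often each combination of quasi-identifier values occurs."""
    return Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)

def is_k_anonymous(rows, quasi_identifiers, k):
    """True iff every quasi-identifier combination occurs at least k times."""
    return all(c >= k for c in equivalence_classes(rows, quasi_identifiers).values())

def generalise_age(row, width=10):
    """Coarsen the exact age into an interval of the given width."""
    lo = (row["age"] // width) * width
    return {**row, "age": f"{lo}-{lo + width - 1}"}

records = [
    {"age": 34, "zip": "3100", "income": "high"},
    {"age": 36, "zip": "3100", "income": "low"},
    {"age": 35, "zip": "3100", "income": "low"},
    {"age": 62, "zip": "1010", "income": "high"},
]
qi = ["age", "zip"]

print(is_k_anonymous(records, qi, k=2))       # False: exact ages are unique
generalised = [generalise_age(r) for r in records]
print(is_k_anonymous(generalised, qi, k=2))   # False: the 62-year-old is still alone
# Suppression: drop records whose equivalence class stays smaller than k.
counts = equivalence_classes(generalised, qi)
released = [r for r in generalised
            if counts[tuple(r[q] for q in qi)] >= 2]
print(is_k_anonymous(released, qi, k=2))      # True for the released table
```

Note how the example also previews the paper's central trade-off: both the coarsened ages and the suppressed record are information lost to any classifier trained on the released table.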
Published in: | arXiv.org 2022-06 |
---|---|
Main authors: | Slijepčević, Djordje; Henzl, Maximilian; Klausner, Lukas Daniel; Dam, Tobias; Kieseberg, Peter; Zeppelzauer, Matthias |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Classification; Classifiers; Data retrieval; Datasets; Machine learning; Performance degradation; Scientific papers |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Slijepčević, Djordje; Henzl, Maximilian; Klausner, Lukas Daniel; Dam, Tobias; Kieseberg, Peter; Zeppelzauer, Matthias |
description | The protection of private information is a crucial issue in data-driven research and business contexts. Typically, techniques like anonymisation or (selective) deletion are introduced in order to allow data sharing, e.g. in the case of collaborative research endeavours. For use with anonymisation techniques, the \(k\)-anonymity criterion is one of the most popular, with numerous scientific publications on different algorithms and metrics. Anonymisation techniques often require changing the data and thus necessarily affect the results of machine learning models trained on the underlying data. In this work, we conduct a systematic comparison and detailed investigation into the effects of different \(k\)-anonymisation algorithms on the results of machine learning models. We investigate a set of popular \(k\)-anonymisation algorithms with different classifiers and evaluate them on different real-world datasets. Our systematic evaluation shows that with an increasingly strong \(k\)-anonymity constraint, the classification performance generally degrades, but to varying degrees and strongly depending on the dataset and anonymisation method. Furthermore, Mondrian can be considered the method with the most appealing properties for subsequent classification. |
doi_str_mv | 10.48550/arxiv.2102.04763 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2488085979 |
source | Free eJournals |
subjects | Algorithms; Classification; Classifiers; Data retrieval; Datasets; Machine learning; Performance degradation; Scientific papers |
title | \(k\)-Anonymity in Practice: How Generalisation and Suppression Affect Machine Learning Classifiers |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T12%3A49%3A12IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=(k%5C)-Anonymity%20in%20Practice:%20How%20Generalisation%20and%20Suppression%20Affect%20Machine%20Learning%20Classifiers&rft.jtitle=arXiv.org&rft.au=Slijep%C4%8Devi%C4%87,%20Djordje&rft.date=2022-06-22&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2102.04763&rft_dat=%3Cproquest%3E2488085979%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2488085979&rft_id=info:pmid/&rfr_iscdi=true |
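The description above singles out Mondrian as the method with the most appealing properties for subsequent classification. Mondrian (LeFevre et al.) is a greedy multidimensional partitioning scheme; the sketch below is a strongly simplified, hypothetical rendering of its core idea (recursive median splits on the quasi-identifier with the widest range, stopping when a split would violate \(k\)), not the implementation evaluated in the paper, and it handles numeric attributes only.

```python
# Hypothetical, simplified sketch of Mondrian-style partitioning for
# k-anonymisation: recursively split the data on the median of the
# quasi-identifier with the widest range, as long as both halves keep
# at least k records. The published algorithm also handles categorical
# attributes and median ties more carefully.
def mondrian_partition(rows, quasi_identifiers, k):
    def span(rs, q):
        vals = [r[q] for r in rs]
        return max(vals) - min(vals)

    dim = max(quasi_identifiers, key=lambda q: span(rows, q))  # widest attribute
    ordered = sorted(rows, key=lambda r: r[dim])
    mid = len(ordered) // 2
    left, right = ordered[:mid], ordered[mid:]
    if len(left) >= k and len(right) >= k and span(rows, dim) > 0:
        return (mondrian_partition(left, quasi_identifiers, k)
                + mondrian_partition(right, quasi_identifiers, k))
    return [rows]  # no allowable cut: this group becomes one equivalence class

# Toy example: each resulting group would then be generalised to its
# bounding ranges before release.
rows = [{"age": a, "zip": z} for a, z in
        [(34, 3100), (36, 3100), (35, 3100), (62, 1010), (64, 1010), (63, 1010)]]
groups = mondrian_partition(rows, ["age", "zip"], k=3)
print([sorted(r["age"] for r in g) for g in groups])  # [[62, 63, 64], [34, 35, 36]]
```

Because the splits adapt to the local data distribution, the resulting equivalence classes tend to be tighter than those of global recoding schemes, which is one plausible reason for the favourable classification results the paper reports for Mondrian.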