Secure Collaborative Training and Inference for XGBoost

In recent years, gradient boosted decision tree learning has proven to be an effective method of training robust models. Moreover, collaborative learning among multiple parties has the potential to greatly benefit all parties involved, but organizations have also encountered obstacles in sharing sensitive data due to business, regulatory, and liability concerns. We propose Secure XGBoost, a privacy-preserving system that enables multiparty training and inference of XGBoost models. Secure XGBoost protects the privacy of each party's data as well as the integrity of the computation with the help of hardware enclaves. Crucially, Secure XGBoost augments the security of the enclaves using novel data-oblivious algorithms that prevent access side-channel attacks on enclaves induced via access pattern leakage.
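The "data-oblivious algorithms" mentioned in the abstract are algorithms whose sequence of memory accesses does not depend on secret data, which closes the access-pattern side channel that hardware enclaves alone do not block. The short Python sketch below is a minimal, hypothetical illustration of that idea on a toy selection task; it is not taken from the paper, and the function names are invented for this example.

# Illustrative sketch only; not code from the paper.
# Contrast a lookup whose memory access depends on a secret with a
# data-oblivious lookup whose access pattern is always the same.

def leaky_select(values, secret_index):
    # The address touched depends on secret_index, so an attacker who can
    # observe access patterns (e.g., via a cache side channel on an enclave)
    # may learn the secret.
    return values[secret_index]

def oblivious_select(values, secret_index):
    # Scan every element and fold in the match arithmetically, so the sequence
    # of memory accesses is identical regardless of secret_index.
    result = 0
    for i, value in enumerate(values):
        is_match = int(i == secret_index)  # 0 or 1; real systems would use
                                           # constant-time primitives (e.g., CMOV)
        result = result * (1 - is_match) + value * is_match
    return result

data = [10, 20, 30, 40]
assert leaky_select(data, 2) == oblivious_select(data, 2) == 30

According to the abstract, Secure XGBoost applies this style of design to the XGBoost training and inference computations themselves, so that the enclave's observable access patterns reveal nothing about any party's data.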

Bibliographic Details
Published in: arXiv.org, 2020-10
Main Authors: Law, Andrew; Leung, Chester; Poddar, Rishabh; Popa, Raluca Ada; Shi, Chenyu; Sima, Octavian; Yu, Chaofan; Zhang, Xingmeng; Zheng, Wenting
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Subjects: Algorithms; Collaboration; Decision trees; Inference; Liability; Machine learning; Privacy; Training
Online Access: Full text