Runtime Optimizations for Tree-Based Machine Learning Models
Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression trees for learning to rank. Although exceedingly simple conceptually, most implementations of tree-based models do not efficiently utilize modern superscalar processors. By laying out data structures in memory in a more cache-conscious fashion, removing branches from the execution flow using a technique called predication, and micro-batching predictions using a technique called vectorization, we are able to better exploit modern processor architectures. Experiments on synthetic data and on three standard learning-to-rank datasets show that our approach is significantly faster than standard implementations.
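The three optimizations named in the abstract can be made concrete with a small sketch. The C code below is not the authors' implementation; it is a minimal illustration under stated assumptions: a fixed-depth regression tree stored as a flat breadth-first array (the cache-conscious layout), a branch-free split test (predication), and a small block of instances scored against the same tree before moving on (the micro-batching the paper calls vectorization). All names, the struct layout, and the DEPTH constant are hypothetical.

```c
#include <stddef.h>

/* Hypothetical layout: one regression tree of fixed depth DEPTH, stored as a
 * struct of flat arrays in breadth-first order, so the children of node i sit
 * at 2*i+1 and 2*i+2 and the fields touched during traversal are contiguous
 * in memory. */
#define DEPTH     4
#define NUM_NODES ((1 << (DEPTH + 1)) - 1)

typedef struct {
    int   fid[NUM_NODES];    /* feature id tested at each internal node  */
    float theta[NUM_NODES];  /* split threshold at each internal node    */
    float leaf[NUM_NODES];   /* prediction stored at the leaf positions  */
} Tree;

/* Predication: the comparison below evaluates to 0 or 1, which selects the
 * left or right child arithmetically, so the hot loop contains no
 * data-dependent branch for the processor to mispredict. */
static float eval_tree(const Tree *t, const float *x)
{
    int node = 0;
    for (int d = 0; d < DEPTH; d++)
        node = 2 * node + 1 + (x[t->fid[node]] > t->theta[node]);
    return t->leaf[node];
}

/* Micro-batching ("vectorization" in the paper's terminology): score a small
 * batch of instances against the same tree before moving to the next tree,
 * so the tree's arrays stay resident in cache across the batch. */
static void eval_batch(const Tree *t, const float *X, int num_features,
                       int batch_size, float *scores)
{
    for (int i = 0; i < batch_size; i++)
        scores[i] += eval_tree(t, X + (size_t)i * num_features);
}
```

In an ensemble, eval_batch would be called once per tree with the same scores buffer, accumulating the boosted prediction; evaluating all instances of a batch per tree, rather than all trees per instance, is what the micro-batching buys in cache locality.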
Published in: | IEEE transactions on knowledge and data engineering, 2014-09, Vol.26 (9), p.2281-2292 |
---|---|
Main authors: | Asadi, Nima; Lin, Jimmy; de Vries, Arjen P. |
Format: | Article |
Language: | English |
Subjects: | Arrays; Indexes; Information Storage and Retrieval; Information Technology and Systems; Learning to Rank; Optimization; Predictive models; Program processors; Regression tree analysis; Scalability and Efficiency; Web Search |
Online access: | Order full text |
container_end_page | 2292 |
---|---|
container_issue | 9 |
container_start_page | 2281 |
container_title | IEEE transactions on knowledge and data engineering |
container_volume | 26 |
creator | Asadi, Nima; Lin, Jimmy; de Vries, Arjen P. |
description | Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression trees for learning to rank. Although exceedingly simple conceptually, most implementations of tree-based models do not efficiently utilize modern superscalar processors. By laying out data structures in memory in a more cache-conscious fashion, removing branches from the execution flow using a technique called predication, and micro-batching predictions using a technique called vectorization, we are able to better exploit modern processor architectures. Experiments on synthetic data and on three standard learning-to-rank datasets show that our approach is significantly faster than standard implementations. |
doi_str_mv | 10.1109/TKDE.2013.73 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1041-4347 |
ispartof | IEEE transactions on knowledge and data engineering, 2014-09, Vol.26 (9), p.2281-2292 |
issn | 1041-4347; 1558-2191 |
language | eng |
recordid | cdi_crossref_primary_10_1109_TKDE_2013_73 |
source | IEEE Electronic Library (IEL) |
subjects | Arrays; Indexes; Information Storage and Retrieval; Information Technology and Systems; Learning to Rank; Optimization; Predictive models; Program processors; Regression tree analysis; Scalability and Efficiency; Web Search |
title | Runtime Optimizations for Tree-Based Machine Learning Models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T06%3A09%3A32IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Runtime%20Optimizations%20for%20Tree-Based%20Machine%20Learning%20Models&rft.jtitle=IEEE%20transactions%20on%20knowledge%20and%20data%20engineering&rft.au=Asadi,%20Nima&rft.date=2014-09&rft.volume=26&rft.issue=9&rft.spage=2281&rft.epage=2292&rft.pages=2281-2292&rft.issn=1041-4347&rft.eissn=1558-2191&rft.coden=ITKEEH&rft_id=info:doi/10.1109/TKDE.2013.73&rft_dat=%3Cproquest_RIE%3E3408770271%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1554958812&rft_id=info:pmid/&rft_ieee_id=6513227&rfr_iscdi=true |