Memory-Efficient Global Refinement of Decision-Tree Ensembles and its Application to Face Alignment

Ren et al. recently introduced a method for aggregating multiple decision trees into a strong predictor by interpreting a path taken by a sample down each tree as a binary vector and performing linear regression on top of these vectors stacked together. They provided experimental evidence that the method offers advantages over the usual approaches for combining decision trees (random forests and boosting). The method truly shines when the regression target is a large vector with correlated dimensions, such as a 2D face shape represented with the positions of several facial landmarks. However, we argue that their basic method is not applicable in many practical scenarios due to large memory requirements. This paper shows how this issue can be solved through the use of quantization and architectural changes of the predictor that maps decision tree-derived encodings to the desired output.
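To make the idea in the abstract concrete, the following is a minimal sketch of the general encode-then-regress scheme it describes: each tree routes a sample to exactly one leaf, the leaf index is expanded into a one-hot binary vector, the per-tree vectors are concatenated, and a linear regressor maps this sparse binary code to the multi-dimensional target. The sketch uses scikit-learn and toy data for illustration only; the names (forest, encoder, refiner) are assumptions, and it does not reproduce the authors' quantized, memory-efficient variant.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))   # input features (e.g. local image descriptors)
Y = rng.normal(size=(500, 10))   # vector-valued target (e.g. stacked 2D landmark coordinates)

# 1. Fit a small tree ensemble; only the tree structures are reused afterwards.
forest = RandomForestRegressor(n_estimators=8, max_depth=5, random_state=0)
forest.fit(X, Y)

# 2. Encode every sample by the leaf it reaches in each tree, then expand the
#    leaf indices into concatenated one-hot (binary) vectors.
leaves = forest.apply(X)                      # shape (n_samples, n_trees)
encoder = OneHotEncoder(handle_unknown="ignore")
Phi = encoder.fit_transform(leaves)           # sparse binary matrix

# 3. Global refinement: a regularized linear regression from the binary code
#    to the full target vector, fitted jointly over all output dimensions.
refiner = Ridge(alpha=1.0)
refiner.fit(Phi, Y)

# Prediction follows the same encode-then-regress path.
Y_hat = refiner.predict(encoder.transform(forest.apply(X[:5])))
print(Y_hat.shape)                            # (5, 10)
```

Note that the weight matrix of the final linear map has one entry per (leaf in the whole ensemble, output dimension) pair, which for large ensembles and high-dimensional shape targets is the memory cost the paper sets out to reduce.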

Detailed description

Saved in:
Bibliographic details
Main authors: Markuš, Nenad; Gogić, Ivan; Pandžić, Igor S; Ahlberg, Jörgen
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Markuš, Nenad
Gogić, Ivan
Pandžić, Igor S
Ahlberg, Jörgen
description Ren et al. recently introduced a method for aggregating multiple decision trees into a strong predictor by interpreting a path taken by a sample down each tree as a binary vector and performing linear regression on top of these vectors stacked together. They provided experimental evidence that the method offers advantages over the usual approaches for combining decision trees (random forests and boosting). The method truly shines when the regression target is a large vector with correlated dimensions, such as a 2D face shape represented with the positions of several facial landmarks. However, we argue that their basic method is not applicable in many practical scenarios due to large memory requirements. This paper shows how this issue can be solved through the use of quantization and architectural changes of the predictor that maps decision tree-derived encodings to the desired output.
doi_str_mv 10.48550/arxiv.1702.08481
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1702.08481
language eng
recordid cdi_arxiv_primary_1702_08481
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Neural and Evolutionary Computing
title Memory-Efficient Global Refinement of Decision-Tree Ensembles and its Application to Face Alignment
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-17T00%3A16%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Memory-Efficient%20Global%20Refinement%20of%20Decision-Tree%20Ensembles%20and%20its%20Application%20to%20Face%20Alignment&rft.au=Marku%C5%A1,%20Nenad&rft.date=2017-02-27&rft_id=info:doi/10.48550/arxiv.1702.08481&rft_dat=%3Carxiv_GOX%3E1702_08481%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true