When are Deep Networks really better than Decision Forests at small sample sizes, and how?

Deep networks and decision forests (such as random forests and gradient boosted trees) are the leading machine learning methods for structured and tabular data, respectively. Many papers have empirically compared large numbers of classifiers on one or two different domains (e.g., on 100 different tabular data settings). However, a careful conceptual and empirical comparison of these two strategies using the most contemporary best practices has yet to be performed. Conceptually, we illustrate that both can be profitably viewed as "partition and vote" schemes. Specifically, the representation space that they both learn is a partitioning of feature space into a union of convex polytopes. For inference, each decides on the basis of votes from the activated nodes. This formulation allows for a unified basic understanding of the relationship between these methods. Empirically, we compare these two strategies on hundreds of tabular data settings, as well as several vision and auditory settings. Our focus is on datasets with at most 10,000 samples, which represent a large fraction of scientific and biomedical datasets. In general, we found forests to excel at tabular and structured data (vision and audition) with small sample sizes, whereas deep nets performed better on structured data with larger sample sizes. This suggests that further gains in both scenarios may be realized via further combining aspects of forests and networks. We will continue revising this technical report in the coming months with updated results.
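
To make the "partition and vote" framing concrete, here is a minimal sketch (not code from the paper; the synthetic dataset and hyperparameters are illustrative assumptions) showing how a scikit-learn random forest exposes both ingredients: `apply` recovers the leaf cell each sample activates in each tree, and prediction averages the votes of those activated leaves.

```python
# Minimal sketch of the "partition and vote" view using scikit-learn.
# The data and hyperparameters are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# "Partition": apply() maps each sample to the leaf it lands in per tree.
# Each leaf corresponds to one convex cell of the learned partition.
leaf_ids = forest.apply(X[:5])        # shape: (5, n_estimators)

# "Vote": each activated leaf casts its class-probability estimate, and
# the forest averages these votes across trees.
votes = np.stack([tree.predict_proba(X[:5]) for tree in forest.estimators_])
assert np.allclose(votes.mean(axis=0), forest.predict_proba(X[:5]))
```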
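The kind of small-sample comparison the abstract describes can be sketched with off-the-shelf classifiers: train a forest and a small network on growing subsets of a tabular dataset, capped at the 10,000-sample regime the paper focuses on. The dataset, model settings, and sample-size grid below are assumptions for illustration, not the authors' protocol.

```python
# Hedged sketch of a forest-vs-network comparison across sample sizes.
# Dataset, models, and the sample-size grid are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=12_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=2_000, random_state=0)

for n in (100, 1_000, 10_000):  # up to the paper's 10,000-sample focus
    rf = RandomForestClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    nn = MLPClassifier(max_iter=500, random_state=0).fit(X_train[:n], y_train[:n])
    print(f"n={n:>6}  forest acc={rf.score(X_test, y_test):.3f}  "
          f"network acc={nn.score(X_test, y_test):.3f}")
```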

Bibliographic Details
Published in: arXiv.org, 2021-11
Main authors: Xu, Haoyin; Kinfu, Kaleab A; LeVine, Will; Panda, Sambit; Dey, Jayanta; Ainsworth, Michael; Yu-Chung, Peng; Madi Kusmanov; Engert, Florian; White, Christopher M; Vogelstein, Joshua T; Priebe, Carey E
Format: Article
Language: English
Subjects: Audio data; Datasets; Machine learning; Structured data; Tables (data)
Online access: Full text
EISSN: 2331-8422