Investigating the Histogram Loss in Regression

It is becoming increasingly common in regression to train neural networks that model the entire distribution even if only the mean is required for prediction. This additional modeling often comes with a performance gain, and the reasons behind the improvement are not fully known. This paper investigates a recent approach to regression, the Histogram Loss, which involves learning the conditional distribution of the target variable by minimizing the cross-entropy between a target distribution and a flexible histogram prediction. We design theoretical and empirical analyses to determine why and when this performance gain appears, and how different components of the loss contribute to it. Our results suggest that the benefits of learning distributions in this setup come from improvements in optimization rather than from modeling extra information. We then demonstrate the viability of the Histogram Loss in common deep learning applications without a need for costly hyperparameter tuning.
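
The loss described in the abstract can be made concrete with a short sketch: discretize the target range into bins, spread each scalar label into a smoothed target distribution over those bins, and minimize the cross-entropy against a predicted histogram. The PyTorch snippet below is a minimal illustration of that recipe under stated assumptions; the Gaussian target smoothing, the choice of sigma, and the function names (histogram_loss, predict_mean) are illustrative, not code from the paper.

    # Minimal sketch of the Histogram Loss idea, assuming a Gaussian
    # target distribution over a fixed set of bins. Bin layout and
    # sigma are illustrative assumptions, not values from the paper.
    import torch
    import torch.nn.functional as F

    def histogram_loss(logits, y, edges, sigma=0.1):
        """Cross-entropy between a Gaussian target distribution over the
        bins and the predicted histogram softmax(logits).

        logits: (batch, n_bins) unnormalized histogram prediction
        y:      (batch,) scalar regression targets
        edges:  (n_bins + 1,) bin edges covering the target range
        """
        # Probability mass that N(y, sigma^2) assigns to each bin,
        # computed as the difference of CDFs at consecutive bin edges.
        normal = torch.distributions.Normal(y.unsqueeze(1), sigma)
        cdf = normal.cdf(edges.unsqueeze(0))           # (batch, n_bins + 1)
        target = cdf[:, 1:] - cdf[:, :-1]              # (batch, n_bins)
        # Renormalize to account for mass truncated outside the bin range.
        target = target / target.sum(dim=1, keepdim=True)
        # Cross-entropy against the predicted histogram.
        log_pred = F.log_softmax(logits, dim=1)
        return -(target * log_pred).sum(dim=1).mean()

    def predict_mean(logits, edges):
        """Point prediction: expected value under the predicted histogram."""
        centers = 0.5 * (edges[:-1] + edges[1:])
        return (F.softmax(logits, dim=1) * centers).sum(dim=1)

In this formulation the only architectural change a standard regression network needs is widening its final layer from one output to n_bins logits; the point prediction used for evaluation is then the mean of the predicted histogram, as in predict_mean above.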

Bibliographic Details
Main Authors: Imani, Ehsan; Luedemann, Kai; Scholnick-Hughes, Sam; Elelimy, Esraa; White, Martha
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning
Online Access: https://arxiv.org/abs/2402.13425
DOI: 10.48550/arxiv.2402.13425
Published: 2024-02-20
Source: arXiv.org