Auxiliary Quantile Forecasting with Linear Networks

We propose a novel multi-task method for quantile forecasting with shared Linear layers. Our method is based on the implicit quantile learning approach, where samples from the uniform distribution $\mathcal{U}(0, 1)$ are reparameterized to quantile values of the target distribution. We combine the implicit quantile and input time series representations to directly forecast multiple quantile estimates for multiple horizons jointly. Prior works have adopted a Linear layer for the direct estimation of all forecasting horizons in a multi-task learning setup. Following the same intuition from multi-task learning of exploiting correlations among forecast horizons, we show that modeling multiple quantile estimates as auxiliary tasks for each forecast horizon improves forecast accuracy across the quantile estimates compared to modeling only a single quantile estimate. We show that learning auxiliary quantile tasks leads to state-of-the-art performance on deterministic forecasting benchmarks with respect to the main task of forecasting the 50$^{th}$ percentile estimate.
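
The record gives only this high-level description of the method, so the snippet below is a minimal sketch of how such an implicit-quantile linear forecaster could be wired up, not the authors' reference implementation: a shared Linear encoder for the input window, an embedding of a sampled quantile level $\tau \sim \mathcal{U}(0, 1)$, an element-wise combination of the two representations, and a Linear head that emits all horizons at once, trained with the pinball (quantile) loss. The cosine embedding of $\tau$, the element-wise product, and all names (e.g. `ImplicitQuantileLinearForecaster`, `pinball_loss`) are assumptions made for illustration.

```python
# Hypothetical sketch of an implicit-quantile linear forecaster, assuming a
# cosine embedding of tau and an element-wise combination with the series
# representation (details not confirmed by this record).
import math
import torch
import torch.nn as nn


class ImplicitQuantileLinearForecaster(nn.Module):
    def __init__(self, lookback: int, horizons: int, hidden: int = 64, n_cos: int = 32):
        super().__init__()
        self.n_cos = n_cos
        self.series_embed = nn.Linear(lookback, hidden)  # shared Linear encoder for the input window
        self.tau_embed = nn.Linear(n_cos, hidden)        # embeds cos(pi * i * tau), i = 1..n_cos
        self.head = nn.Linear(hidden, horizons)          # direct multi-horizon output

    def forward(self, x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback), tau: (batch, 1) sampled from U(0, 1)
        z = self.series_embed(x)                                       # (batch, hidden)
        i = torch.arange(1, self.n_cos + 1, device=x.device).float()   # (n_cos,)
        phi = self.tau_embed(torch.cos(math.pi * i * tau))             # (batch, hidden)
        return self.head(z * phi)                                      # (batch, horizons)


def pinball_loss(y_hat: torch.Tensor, y: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    # Quantile (pinball) loss averaged over horizons, samples, and sampled taus.
    err = y - y_hat
    return torch.mean(torch.maximum(tau * err, (tau - 1.0) * err))


# Toy training step under the same assumptions, with random data in place of a dataset.
model = ImplicitQuantileLinearForecaster(lookback=96, horizons=24)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 96)   # input windows
y = torch.randn(32, 24)   # future targets for all 24 horizons
tau = torch.rand(32, 1)   # one quantile level per sample; each acts as an auxiliary task

opt.zero_grad()
loss = pinball_loss(model(x, tau), y, tau)
loss.backward()
opt.step()
```

During training, each freshly sampled $\tau$ plays the role of an auxiliary quantile task; at inference, the main-task point forecast described in the abstract would be obtained by fixing $\tau = 0.5$.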


Bibliographic Details
Main Authors: Jawed, Shayan; Schmidt-Thieme, Lars
Format: Article
Language: English
Subjects: Computer Science - Learning; Statistics - Methodology
Online Access: https://arxiv.org/abs/2212.02578
DOI: 10.48550/arxiv.2212.02578
Published: 2022-12-05
Source: arXiv.org