Controllable Dynamic Multi-Task Architectures

Multi-task learning commonly encounters competition for resources among tasks, specifically when model capacity is limited. This challenge motivates models which allow control over the relative importance of tasks and total compute cost during inference time. In this work, we propose such a controllable multi-task network that dynamically adjusts its architecture and weights to match the desired task preference as well as the resource constraints.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Raychaudhuri, Dripta S; Suh, Yumin; Schulter, Samuel; Yu, Xiang; Faraki, Masoud; Roy-Chowdhury, Amit K; Chandraker, Manmohan
Format: Article
Language: English
Online Access: Order full text
Description: Multi-task learning commonly encounters competition for resources among tasks, specifically when model capacity is limited. This challenge motivates models which allow control over the relative importance of tasks and total compute cost during inference time. In this work, we propose such a controllable multi-task network that dynamically adjusts its architecture and weights to match the desired task preference as well as the resource constraints. In contrast to the existing dynamic multi-task approaches that adjust only the weights within a fixed architecture, our approach affords the flexibility to dynamically control the total computational cost and match the user-preferred task importance better. We propose a disentangled training of two hypernetworks, by exploiting task affinity and a novel branching regularized loss, to take input preferences and accordingly predict tree-structured models with adapted weights. Experiments on three multi-task benchmarks, namely PASCAL-Context, NYU-v2, and CIFAR-100, show the efficacy of our approach. Project page is available at https://www.nec-labs.com/~mas/DYMU.
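The abstract's central mechanism, a hypernetwork that takes a user's task-preference vector as input and predicts model weights, can be sketched in miniature. The following is an illustrative toy only, not the authors' implementation: the class name `PreferenceHypernetwork`, the two-task setup, and the single linear map are assumptions made for the example.

```python
import random

random.seed(0)


class PreferenceHypernetwork:
    """Toy hypernetwork: maps a task-preference vector (a point on the
    probability simplex) to the flattened weights of a small task head.

    Illustrative sketch; the paper's actual model predicts
    tree-structured architectures and weights, not a single head.
    """

    def __init__(self, num_tasks: int, head_params: int):
        # One weight row per task; the hypernetwork here is just a
        # linear map from preferences to head parameters.
        self.rows = [
            [random.gauss(0.0, 0.1) for _ in range(head_params)]
            for _ in range(num_tasks)
        ]

    def __call__(self, preference):
        assert abs(sum(preference) - 1.0) < 1e-9, "preference must sum to 1"
        # Preference-weighted combination of per-task rows
        # -> predicted head weights.
        return [
            sum(p * row[j] for p, row in zip(preference, self.rows))
            for j in range(len(self.rows[0]))
        ]


hyper = PreferenceHypernetwork(num_tasks=2, head_params=12)
w_a = hyper([0.9, 0.1])  # favor task 1 (e.g. segmentation)
w_b = hyper([0.1, 0.9])  # favor task 2 (e.g. depth)
# Different preferences yield different predicted weights.
```

In the paper this idea is paired with a second hypernetwork and a branching-regularized loss so that the predicted model is a tree whose structure also respects a compute budget; the sketch above covers only the preference-to-weights direction.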
DOI: 10.48550/arxiv.2203.14949
Published: 2022-03-28
Rights: http://creativecommons.org/licenses/by/4.0 (free to read)
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning