A Study of Autoregressive Decoders for Multi-Tasking in Computer Vision

There has been a recent explosion of computer vision models which perform many tasks and are composed of an image encoder (usually a ViT) and an autoregressive decoder (usually a Transformer). However, most of this work simply presents one system and its results, leaving many questions regarding design decisions and trade-offs of such systems unanswered. In this work, we aim to provide such answers. We take a close look at autoregressive decoders for multi-task learning in multimodal computer vision, including classification, captioning, visual question answering, and optical character recognition. Through extensive systematic experiments, we study the effects of task and data mixture, training and regularization hyperparameters, conditioning type and specificity, modality combination, and more. Importantly, we compare these to well-tuned single-task baselines to highlight the cost incurred by multi-tasking. A key finding is that a small decoder learned on top of a frozen pretrained encoder works surprisingly well. We call this setup locked-image tuning with decoder (LiT-decoder). It can be seen as teaching a decoder to interact with a pretrained vision model via natural language.

Detailed description

Saved in:
Bibliographic details
Main authors: Beyer, Lucas, Wan, Bo, Madan, Gagan, Pavetic, Filip, Steiner, Andreas, Kolesnikov, Alexander, Pinto, André Susano, Bugliarello, Emanuele, Wang, Xiao, Yu, Qihang, Chen, Liang-Chieh, Zhai, Xiaohua
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Beyer, Lucas
Wan, Bo
Madan, Gagan
Pavetic, Filip
Steiner, Andreas
Kolesnikov, Alexander
Pinto, André Susano
Bugliarello, Emanuele
Wang, Xiao
Yu, Qihang
Chen, Liang-Chieh
Zhai, Xiaohua
description There has been a recent explosion of computer vision models which perform many tasks and are composed of an image encoder (usually a ViT) and an autoregressive decoder (usually a Transformer). However, most of this work simply presents one system and its results, leaving many questions regarding design decisions and trade-offs of such systems unanswered. In this work, we aim to provide such answers. We take a close look at autoregressive decoders for multi-task learning in multimodal computer vision, including classification, captioning, visual question answering, and optical character recognition. Through extensive systematic experiments, we study the effects of task and data mixture, training and regularization hyperparameters, conditioning type and specificity, modality combination, and more. Importantly, we compare these to well-tuned single-task baselines to highlight the cost incurred by multi-tasking. A key finding is that a small decoder learned on top of a frozen pretrained encoder works surprisingly well. We call this setup locked-image tuning with decoder (LiT-decoder). It can be seen as teaching a decoder to interact with a pretrained vision model via natural language.
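The LiT-decoder setup described in the abstract — a frozen, pretrained image encoder with only a small decoder trained on top — can be illustrated with a toy sketch. This is not the authors' implementation: the "encoder" here is just a fixed linear feature extractor and the "decoder" a single trainable linear head, purely to show that gradient updates touch only the decoder parameters while the encoder stays locked.

```python
# Toy sketch of "locked-image tuning with decoder" (LiT-decoder):
# a frozen "pretrained" encoder supplies features, and only a small
# decoder on top receives gradient updates. All names and values
# here are illustrative, not from the paper.

# Frozen encoder weights: never updated during training.
encoder_w = [0.5, -0.3, 0.8]

def encode(pixels):
    """Frozen encoder: a fixed linear feature extractor."""
    return sum(w * x for w, x in zip(encoder_w, pixels))

# Small trainable decoder: one scalar weight and bias.
decoder = {"w": 0.1, "b": 0.0}

def decode(feature):
    return decoder["w"] * feature + decoder["b"]

def train_step(pixels, target, lr=0.01):
    """One SGD step on squared error; gradients flow only into the decoder."""
    f = encode(pixels)                  # no gradient bookkeeping for encoder_w
    err = decode(f) - target
    decoder["w"] -= lr * 2 * err * f    # d(err^2)/dw
    decoder["b"] -= lr * 2 * err        # d(err^2)/db
    return err * err

frozen_before = list(encoder_w)
for _ in range(200):
    loss = train_step([1.0, 2.0, 3.0], target=1.5)

assert encoder_w == frozen_before       # the encoder stayed locked
```

The design point the paper makes is that this division of labor — an expensive, general-purpose vision backbone that is reused as-is, plus a cheap task-specific decoder that adapts to it — works surprisingly well compared to training the whole stack per task.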
doi_str_mv 10.48550/arxiv.2303.17376
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2303.17376
language eng
recordid cdi_arxiv_primary_2303_17376
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
title A Study of Autoregressive Decoders for Multi-Tasking in Computer Vision