DepthART: Monocular Depth Estimation as Autoregressive Refinement Task

Despite the recent success of discriminative approaches to monocular depth estimation, their quality remains limited by training datasets. Generative approaches mitigate this issue by leveraging strong priors derived from training on internet-scale datasets. Recent studies have demonstrated that large text...

Detailed description

Saved in:
Bibliographic details
Main authors: Gabdullin, Bulat; Konovalova, Nina; Patakin, Nikolay; Senushkin, Dmitry; Konushin, Anton
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Gabdullin, Bulat
Konovalova, Nina
Patakin, Nikolay
Senushkin, Dmitry
Konushin, Anton
description Despite the recent success of discriminative approaches to monocular depth estimation, their quality remains limited by training datasets. Generative approaches mitigate this issue by leveraging strong priors derived from training on internet-scale datasets. Recent studies have demonstrated that large text-to-image diffusion models achieve state-of-the-art results in depth estimation when fine-tuned on small depth datasets. Concurrently, autoregressive generative approaches, such as Visual AutoRegressive modeling (VAR), have shown promising results in conditional image synthesis. Following the visual autoregressive modeling paradigm, we introduce the first autoregressive depth estimation model based on the visual autoregressive transformer. Our primary contribution is DepthART -- a novel training method formulated as a Depth Autoregressive Refinement Task. Unlike the original VAR training procedure, which employs static targets, our method uses a dynamic target formulation that enables model self-refinement and incorporates multi-modal guidance during training. Specifically, we use model predictions as inputs instead of ground-truth token maps during training, framing the objective as residual minimization. Our experiments demonstrate that the proposed training approach significantly outperforms visual autoregressive modeling via next-scale prediction in the depth estimation task. The Visual Autoregressive Transformer trained with our approach on Hypersim achieves superior results on a set of unseen benchmarks compared to other generative and discriminative baselines.
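
To make the training idea in the description concrete, below is a minimal, hypothetical PyTorch-style sketch of one DepthART-like training step. It is not the authors' code: var_transformer, vqvae (with its quantize_residual/decode_residual helpers), image_encoder, and the scale schedule are all assumed placeholders. The point it illustrates is the dynamic-target formulation described above: at every scale the transformer is conditioned on its own earlier predictions rather than on ground-truth token maps, and the supervision target is the residual still missing from the accumulated depth estimate.

# Minimal sketch (not the authors' implementation) of the described
# self-refinement training loop; all module names are hypothetical.
import torch
import torch.nn.functional as F

def depthart_training_step(var_transformer, vqvae, image_encoder,
                           image, depth, scales, optimizer):
    """One training step framed as residual minimization with dynamic targets."""
    optimizer.zero_grad()

    depth_latent = vqvae.encode(depth)       # continuous latent of the GT depth map
    image_context = image_encoder(image)     # conditioning on the input RGB image

    accumulated = torch.zeros_like(depth_latent)  # running depth reconstruction
    prev_tokens = []                              # model's own predictions so far
    loss = 0.0

    for scale in scales:                          # coarse-to-fine token maps
        # Dynamic target: the residual still missing from the current
        # reconstruction, quantized at this scale (assumed vqvae helper).
        target_tokens = vqvae.quantize_residual(depth_latent - accumulated, scale)

        # The transformer is conditioned on its OWN previous predictions,
        # not on ground-truth token maps as in teacher-forced VAR training.
        logits = var_transformer(prev_tokens, image_context, scale)
        loss = loss + F.cross_entropy(logits.flatten(0, -2),
                                      target_tokens.flatten())

        # Self-refinement: keep the predicted tokens and add their decoded
        # residual to the accumulated depth estimate for the next scale.
        pred_tokens = logits.argmax(dim=-1)
        prev_tokens.append(pred_tokens)
        accumulated = accumulated + vqvae.decode_residual(pred_tokens, scale)

    loss.backward()
    optimizer.step()
    return loss.item()

Under these assumptions, the contrast with standard next-scale prediction is that the inputs and targets at each scale depend on the model's own outputs, so the network learns to correct its accumulated estimate rather than to reproduce fixed ground-truth token maps.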
doi_str_mv 10.48550/arxiv.2409.15010
format Article
identifier DOI: 10.48550/arxiv.2409.15010
language eng
recordid cdi_arxiv_primary_2409_15010
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title DepthART: Monocular Depth Estimation as Autoregressive Refinement Task
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T17%3A27%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=DepthART:%20Monocular%20Depth%20Estimation%20as%20Autoregressive%20Refinement%20Task&rft.au=Gabdullin,%20Bulat&rft.date=2024-09-23&rft_id=info:doi/10.48550/arxiv.2409.15010&rft_dat=%3Carxiv_GOX%3E2409_15010%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true