Distraction-free Embeddings for Robust VQA

The generation of effective latent representations and their subsequent refinement to incorporate precise information is an essential prerequisite for Vision-Language Understanding (VLU) tasks such as Video Question Answering (VQA). However, most existing methods for VLU focus on sparsely sampling or fine-graining the input information (e.g., sampling a sparse set of frames or text tokens), or adding external knowledge. We present a novel "DRAX: Distraction Removal and Attended Cross-Alignment" method to rid our cross-modal representations of distractors in the latent space. We do not exclusively confine the perception of any input information from various modalities but instead use an attention-guided distraction removal method to increase focus on task-relevant information in latent embeddings. DRAX also ensures semantic alignment of embeddings during cross-modal fusions. We evaluate our approach on a challenging benchmark (SUTD-TrafficQA dataset), testing the framework's abilities for feature and event queries, temporal relation understanding, forecasting, hypothesis, and causal analysis through extensive experiments.
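This record contains only the abstract, so the sketch below is one illustrative reading of "attention-guided distraction removal", not the paper's actual DRAX implementation: question-token embeddings cross-attend to frame-level embeddings, and frames that receive little attention are softly down-weighted rather than discarded, matching the abstract's point that no modality's input is fully excluded. The function name, gating rule, and keep_ratio parameter are all assumptions.

import torch

def suppress_distractors(video_emb, query_emb, keep_ratio=0.5):
    """Down-weight video-frame embeddings that receive little attention
    from the question embeddings (a hypothetical distraction-removal gate).

    video_emb: (num_frames, dim) frame-level latent embeddings
    query_emb: (num_tokens, dim) question-token embeddings
    keep_ratio: assumed fraction of frames treated as task-relevant
    """
    dim = video_emb.size(-1)
    # Scaled dot-product cross-attention: how strongly each question
    # token attends to each frame embedding.
    scores = query_emb @ video_emb.T / dim ** 0.5   # (tokens, frames)
    attn = scores.softmax(dim=-1)                   # each row sums to 1
    relevance = attn.mean(dim=0)                    # (frames,) mean focus per frame

    # Keep the most-attended frames at full strength and softly suppress
    # the rest, so no input is excluded outright.
    k = max(1, int(keep_ratio * relevance.numel()))
    threshold = relevance.topk(k).values.min()
    gate = torch.where(relevance >= threshold,
                       torch.ones_like(relevance),
                       relevance / (threshold + 1e-8))
    return video_emb * gate.unsqueeze(-1)

frames = torch.randn(16, 256)   # 16 frame embeddings (toy sizes)
tokens = torch.randn(12, 256)   # 12 question-token embeddings
focused = suppress_distractors(frames, tokens)   # shape (16, 256)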

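Likewise, the abstract says only that DRAX "ensures semantic alignment of embeddings during cross-modal fusions". One common way to encourage such alignment is a symmetric contrastive (InfoNCE-style) loss over paired video/text embeddings, sketched below; the loss form and temperature value are illustrative assumptions, not details from the paper.

import torch
import torch.nn.functional as F

def alignment_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss pulling matched video/text pairs together.

    video_emb, text_emb: (batch, dim) pooled per-sample embeddings.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature        # (batch, batch) similarity matrix
    labels = torch.arange(v.size(0))      # matched pairs lie on the diagonal
    # Align in both directions: video -> text and text -> video.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2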
Bibliographic Details
Main Authors: Dogra, Atharvan; Varshney, Deeksha; Kalyan, Ashwin; Deshpande, Ameet; Kumar, Neeraj
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Full text at https://arxiv.org/abs/2309.00133
DOI: 10.48550/arxiv.2309.00133
Published: 2023-08-31
Source: arXiv.org
Rights: http://creativecommons.org/licenses/by-nc-nd/4.0