Hierarchical Conditional Relation Networks for Multimodal Video Question Answering


Bibliographic Details
Published in: International journal of computer vision, 2021-11, Vol.129 (11), p.3027-3050
Main Authors: Le, Thao Minh; Le, Vuong; Venkatesh, Svetha; Tran, Truyen
Format: Article
Language: English
Online Access: Full text
description Video Question Answering (Video QA) challenges modelers on multiple fronts. Modeling video necessitates building not only spatio-temporal models for the dynamic visual channel but also multimodal structures for associated information channels such as subtitles or audio. Video QA adds at least two more layers of complexity – selecting relevant content for each channel in the context of the linguistic query, and composing spatio-temporal concepts and relations hidden in the data in response to the query. To address these requirements, we start with two insights: (a) content selection and relation construction can be jointly encapsulated into a conditional computational structure, and (b) video-length structures can be composed hierarchically. For (a), this paper introduces a general-purpose, reusable neural unit dubbed the Conditional Relation Network (CRN), which takes as input a set of tensorial objects and translates them into a new set of objects that encode relations of the inputs. The generic design of the CRN helps ease the commonly complex model-building process of Video QA through simple block stacking and rearrangement, with the flexibility to accommodate diverse input modalities and conditioning features across both the visual and linguistic domains. As a result, we realize insight (b) by introducing Hierarchical Conditional Relation Networks (HCRN) for Video QA. The HCRN primarily aims at exploiting intrinsic properties of the visual content of a video, as well as its accompanying channels, in terms of compositionality, hierarchy, and near-term and far-term relations. The HCRN is then applied to Video QA in two forms: short-form, where answers are reasoned solely from the visual content of a video, and long-form, where an additional associated information channel, such as movie subtitles, is presented.
Our rigorous evaluations show consistent improvements over state-of-the-art methods on well-studied benchmarks, including large-scale real-world datasets such as TGIF-QA and TVQA, demonstrating the strong capabilities of our CRN unit and the HCRN for complex domains such as Video QA. To the best of our knowledge, the HCRN is the first method attempting to handle long- and short-form multimodal Video QA at the same time.
doi_str_mv 10.1007/s11263-021-01514-3
identifier ISSN: 0920-5691
issn 0920-5691
1573-1405
recordid cdi_proquest_journals_2582666289
source SpringerLink Journals
subjects Artificial Intelligence
Audio data
Channels
Complexity
Computer Imaging
Computer Science
Domains
Image Processing and Computer Vision
Linguistics
Pattern Recognition
Pattern Recognition and Graphics
Questions
Subtitles & subtitling
Vision
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-10T05%3A40%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Hierarchical%20Conditional%20Relation%20Networks%20for%20Multimodal%20Video%20Question%20Answering&rft.jtitle=International%20journal%20of%20computer%20vision&rft.au=Le,%20Thao%20Minh&rft.date=2021-11-01&rft.volume=129&rft.issue=11&rft.spage=3027&rft.epage=3050&rft.pages=3027-3050&rft.issn=0920-5691&rft.eissn=1573-1405&rft_id=info:doi/10.1007/s11263-021-01514-3&rft_dat=%3Cgale_proqu%3EA679328219%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2582666289&rft_id=info:pmid/&rft_galeid=A679328219&rfr_iscdi=true
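The CRN unit described in the abstract (a set of tensorial input objects mapped to a new set of objects encoding their relations, conditioned on a query feature) can be sketched roughly as follows. This is a minimal illustrative sketch only: the mean-pooling aggregation, elementwise fusion, and the `crn_unit` name are placeholder assumptions, not the paper's actual learned sub-networks, which the record does not specify.

```python
import itertools
import numpy as np

def crn_unit(inputs, condition, k=2):
    """Illustrative sketch of a Conditional Relation Network (CRN) unit.

    inputs:    list of n feature vectors (the "tensorial objects")
    condition: conditioning feature, e.g. an encoded linguistic query
    k:         size of the input subsets whose relations are modeled

    Every k-subset of the inputs is aggregated (here: mean-pooled,
    a placeholder for a learned aggregator) and fused with the
    condition (here: elementwise product, also a placeholder),
    yielding a new set of objects that encode k-ary relations.
    """
    outputs = []
    for subset in itertools.combinations(inputs, k):
        relation = np.mean(subset, axis=0)   # aggregate the k-subset
        fused = relation * condition         # condition on the query feature
        outputs.append(fused)
    return outputs

# Example: 4 input objects of dimension 3, pairwise relations (k = 2)
inputs = [np.ones(3) * i for i in range(1, 5)]
condition = np.array([1.0, 0.5, 2.0])
out = crn_unit(inputs, condition, k=2)
print(len(out))  # C(4, 2) = 6 relation objects
```

Because the unit maps a set of objects to a set of objects, such blocks can be stacked hierarchically (clip level, then video level), which is the compositional property the HCRN exploits.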