See, Hear, Read: Leveraging Multimodality with Guided Attention for Abstractive Text Summarization
In recent years, abstractive text summarization with multimodal inputs has started drawing attention due to its ability to accumulate information from different source modalities and generate a fluent textual summary. However, existing methods use short videos as the visual modality and short summary as the ground-truth, therefore, perform poorly on lengthy videos and long ground-truth summary. Additionally, there exists no benchmark dataset to generalize this task on videos of varying lengths. In this paper, we introduce AVIATE, the first large-scale dataset for abstractive text summarization with videos of diverse duration, compiled from presentations in well-known academic conferences like NDSS, ICML, NeurIPS, etc. We use the abstract of corresponding research papers as the reference summaries, which ensure adequate quality and uniformity of the ground-truth. We then propose FLORAL, a factorized multi-modal Transformer based decoder-only language model, which inherently captures the intra-modal and inter-modal dynamics within various input modalities for the text summarization task. FLORAL utilizes an increasing number of self-attentions to capture multimodality and performs significantly better than traditional encoder-decoder based networks. Extensive experiments illustrate that FLORAL achieves significant improvement over the baselines in both qualitative and quantitative evaluations on the existing How2 dataset for short videos and newly introduced AVIATE dataset for videos with diverse duration, beating the best baseline on the two datasets by \(1.39\) and \(2.74\) ROUGE-L points respectively.
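The reported gains are measured with ROUGE-L, which scores a generated summary against the reference (here, the paper abstract) by the longest common subsequence of tokens. The sketch below is not taken from the paper; it is a minimal illustrative Python implementation assuming whitespace tokenization and the balanced F1 variant, whereas the official ROUGE toolkit applies stemming and a recall-weighted F-score.

```python
# Minimal sketch of sentence-level ROUGE-L (the metric quoted in the abstract).
# Assumptions: whitespace tokenization, balanced F1; not the official toolkit.

def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    # Classic dynamic-programming table, O(len(a) * len(b)).
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate: str, reference: str) -> float:
    """ROUGE-L F1 between a generated summary and a reference summary."""
    cand, ref = candidate.split(), reference.split()
    if not cand or not ref:
        return 0.0
    lcs = lcs_length(cand, ref)
    precision, recall = lcs / len(cand), lcs / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Scores are usually averaged over a test set and scaled to 0-100, so "beating the
# best baseline by 2.74 ROUGE-L points" corresponds to a +0.0274 gain on this scale.
print(round(rouge_l("the model generates a fluent summary",
                    "the model produces a fluent textual summary") * 100, 2))
```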
Saved in:
Published in: | arXiv.org 2021-09 |
---|---|
Main authors: | Atri, Yash Kumar; Pramanick, Shraman; Goyal, Vikram; Chakraborty, Tanmoy |
Format: | Article |
Language: | eng |
Subjects: | Coders; Datasets; Encoders-Decoders; Scientific papers; Video |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Atri, Yash Kumar; Pramanick, Shraman; Goyal, Vikram; Chakraborty, Tanmoy |
description | In recent years, abstractive text summarization with multimodal inputs has started drawing attention due to its ability to accumulate information from different source modalities and generate a fluent textual summary. However, existing methods use short videos as the visual modality and short summary as the ground-truth, therefore, perform poorly on lengthy videos and long ground-truth summary. Additionally, there exists no benchmark dataset to generalize this task on videos of varying lengths. In this paper, we introduce AVIATE, the first large-scale dataset for abstractive text summarization with videos of diverse duration, compiled from presentations in well-known academic conferences like NDSS, ICML, NeurIPS, etc. We use the abstract of corresponding research papers as the reference summaries, which ensure adequate quality and uniformity of the ground-truth. We then propose FLORAL, a factorized multi-modal Transformer based decoder-only language model, which inherently captures the intra-modal and inter-modal dynamics within various input modalities for the text summarization task. FLORAL utilizes an increasing number of self-attentions to capture multimodality and performs significantly better than traditional encoder-decoder based networks. Extensive experiments illustrate that FLORAL achieves significant improvement over the baselines in both qualitative and quantitative evaluations on the existing How2 dataset for short videos and newly introduced AVIATE dataset for videos with diverse duration, beating the best baseline on the two datasets by \(1.39\) and \(2.74\) ROUGE-L points respectively. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-09 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2530234164 |
source | Freely Accessible Journals |
subjects | Coders; Datasets; Encoders-Decoders; Scientific papers; Video |
title | See, Hear, Read: Leveraging Multimodality with Guided Attention for Abstractive Text Summarization |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T17%3A40%3A57IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=See,%20Hear,%20Read:%20Leveraging%20Multimodality%20with%20Guided%20Attention%20for%20Abstractive%20Text%20Summarization&rft.jtitle=arXiv.org&rft.au=Atri,%20Yash%20Kumar&rft.date=2021-09-15&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2530234164%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2530234164&rft_id=info:pmid/&rfr_iscdi=true |