Video summarization with u-shaped transformer

Bibliographic details
Published in: Applied Intelligence (Dordrecht, Netherlands), 2022-12, Vol. 52 (15), p. 17864-17880
Authors: Chen, Yaosen; Guo, Bing; Shen, Yan; Zhou, Renshuang; Lu, Weichen; Wang, Wei; Wen, Xuming; Suo, Xinhua
Format: Article
Language: English
Subjects: Artificial Intelligence; Cognitive tasks; Computer Science; Datasets; Dictionaries; Embedding; Laboratories; Learning; Localization; Machines; Manufacturing; Mathematical models; Mechanical Engineering; Methods; Modelling; Neural networks; Parameters; Processes; Recurrent neural networks; Transformers; Video data; Video post-production
DOI: 10.1007/s10489-022-03451-1
ISSN: 0924-669X
EISSN: 1573-7497
Publisher: New York: Springer US
Online access: Full text
Description

In recent years, supervised video summarization has made tremendous progress by treating it as a sequence-to-sequence learning task. However, traditional recurrent neural networks (RNNs) have limitations in modeling long sequences, and using a transformer for sequence modeling requires a large number of parameters. To address this issue, we propose an efficient U-shaped transformer for video summarization, which we call "Uformer". Specifically, Uformer consists of three key components: an embedding, the Uformer block, and a prediction head. First, the image feature sequence is extracted by a pre-trained deep convolutional network and projected by a linear embedding; the frame-to-frame feature differences are projected by another linear embedding, and the two streams are concatenated to form a two-stream embedding feature. Second, we stack multiple transformer layers into a U-shaped block to integrate the representations learned by the previous layers. The multi-scale Uformer can not only capture longer sequence information but also reduce the number of parameters and computations. Finally, the prediction head regresses the localization of the keyframes and learns the corresponding classification scores; Uformer is combined with non-maximum suppression (NMS) as post-processing to obtain the final video summary. We improve the F-score from 50.2% to 53.9% (by 3.7%) on the SumMe dataset and from 62.1% to 63.0% (by 0.9%) on the TVSum dataset. Our proposed model has only 0.85M parameters, which is 32.32% of DR-DSN's.
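As a concrete illustration of the two ideas the abstract describes, the following is a minimal PyTorch sketch of the two-stream embedding and the U-shaped stacking of transformer layers. The class names, the 1024-d feature dimension, the layer counts, and the stride-2 downsample/upsample scheme with skip connections are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn


class TwoStreamEmbedding(nn.Module):
    """Embed frame features and their temporal differences, then concatenate."""

    def __init__(self, feat_dim=1024, embed_dim=128):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, embed_dim)  # linear embedding of features
        self.diff_proj = nn.Linear(feat_dim, embed_dim)  # linear embedding of differences

    def forward(self, x):  # x: (batch, frames, feat_dim)
        diff = x - torch.roll(x, shifts=1, dims=1)  # frame-to-frame differences
        diff[:, 0] = 0.0  # the first frame has no predecessor
        return torch.cat([self.feat_proj(x), self.diff_proj(diff)], dim=-1)


class UShapedEncoder(nn.Module):
    """Transformer layers stacked in a U shape: the sequence is shortened on
    the way down and restored on the way up, with skip connections fusing
    the representations learned at each temporal scale."""

    def __init__(self, dim=256, depth=2, heads=4):
        super().__init__()
        self.down = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
             for _ in range(depth)])
        self.up = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
             for _ in range(depth)])

    def forward(self, x):  # x: (batch, frames, dim)
        skips = []
        for layer in self.down:
            x = layer(x)
            skips.append(x)
            x = x[:, ::2]  # halve the temporal resolution
        for layer in self.up:
            skip = skips.pop()
            x = torch.repeat_interleave(x, 2, dim=1)[:, :skip.size(1)]  # restore length
            x = layer(x + skip)  # integrate the representation from the earlier layer
        return x


frames = torch.randn(1, 64, 1024)         # a sequence of pre-extracted CNN features
tokens = TwoStreamEmbedding()(frames)     # (1, 64, 256) two-stream embedding
summary_feats = UShapedEncoder()(tokens)  # (1, 64, 256) multi-scale representation

Note the trade-off this sketch mirrors: halving the sequence at each level shrinks the attention cost at deeper layers, which is how a U-shaped stack can handle longer sequences with fewer computations than a flat transformer of the same depth.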
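The abstract also names non-maximum suppression as the post-processing step that turns the regressed keyframe locations and classification scores into the final summary. Below is a minimal, self-contained sketch of 1-D temporal NMS; the function name, the (start, end) segment format, and the 0.5 IoU threshold are assumptions for illustration, not the paper's implementation.

def temporal_nms(segments, scores, iou_threshold=0.5):
    """Greedily keep the highest-scoring segments, suppressing any candidate
    whose temporal IoU with an already-kept segment exceeds the threshold.
    Segments are (start, end) pairs in frame (or second) units."""
    order = sorted(range(len(segments)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        s1, e1 = segments[i]
        suppressed = False
        for j in keep:
            s2, e2 = segments[j]
            inter = max(0.0, min(e1, e2) - max(s1, s2))  # overlap length
            union = (e1 - s1) + (e2 - s2) - inter
            if union > 0 and inter / union > iou_threshold:
                suppressed = True
                break
        if not suppressed:
            keep.append(i)
    return keep  # indices of the segments that survive


# Example: the middle proposal overlaps the first too heavily and is dropped.
proposals = [(0, 10), (2, 12), (20, 30)]
print(temporal_nms(proposals, scores=[0.9, 0.6, 0.8]))  # -> [0, 2]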