Deep sequential collaborative cognition of vision and language based model for video description
Video description aims to translate video into natural language with appropriate sentence patterns and well-chosen words. The task is challenging because of the large semantic gap between visual content and language. Many well-designed models have been developed, but language information is often insufficiently exploited and poorly integrated with the visual representation, making the correlations between vision and language hard to construct. Inspired by the way humans learn and jointly make sense of vision and language, this work proposes a deep collaborative cognition of vision and language based model (VL-DCC). Specifically, an extra language-encoding branch is designed and integrated with the visual-motion-encoding branch in a sequence-to-sequence pipeline during model learning, simulating how humans learn visual information and language together. Additionally, a double VL-DCC (DVL-DCC) framework is developed to further improve the quality of the generated sentences: element-wise addition and feature concatenation are applied in two separate VL-DCC modules to comprehensively capture visual and language semantics. Experiments on the MSVD and MSR-VTT2016 datasets show that the proposed model outperforms the baseline model and other popular works, with CIDEr scores reaching 81.3 and 46.7 on the two datasets, respectively.
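The abstract describes the architecture only at a high level: two sequence encoders (visual motion and language) whose outputs are fused once by element-wise addition and once by feature concatenation in two parallel VL-DCC modules. Below is a minimal PyTorch sketch of that fusion scheme, not the authors' implementation; the GRU encoders, module names, dimensions, and the way the two modules feed a shared decoder are all illustrative assumptions (per the abstract, the language branch would encode the reference sentence during training).

```python
# Minimal sketch of the dual-fusion idea described in the abstract.
# All layer choices, names, and sizes are assumptions for illustration.
import torch
import torch.nn as nn


class VLBranchPair(nn.Module):
    """One VL-DCC-style module: encodes a visual sequence and a language
    sequence, then fuses the two encodings with the given strategy."""

    def __init__(self, feat_dim: int, hidden: int, fusion: str = "add"):
        super().__init__()
        self.visual_enc = nn.GRU(feat_dim, hidden, batch_first=True)
        self.language_enc = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fusion = fusion
        # Concatenation doubles the feature size, so project back to `hidden`.
        self.proj = nn.Linear(2 * hidden, hidden) if fusion == "concat" else None

    def forward(self, visual_seq, language_seq):
        v, _ = self.visual_enc(visual_seq)      # (B, T, hidden)
        l, _ = self.language_enc(language_seq)  # (B, T, hidden)
        if self.fusion == "add":
            return v + l                         # element-wise addition
        return self.proj(torch.cat([v, l], dim=-1))  # feature concatenation


class DVLDCCSketch(nn.Module):
    """Double-module sketch: one additive and one concatenative VL-DCC
    module, whose outputs feed a shared sequence decoder."""

    def __init__(self, feat_dim=512, hidden=256, vocab=10000):
        super().__init__()
        self.sum_branch = VLBranchPair(feat_dim, hidden, fusion="add")
        self.cat_branch = VLBranchPair(feat_dim, hidden, fusion="concat")
        self.decoder = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)  # per-step word scores

    def forward(self, visual_seq, language_seq):
        fused = torch.cat(
            [self.sum_branch(visual_seq, language_seq),
             self.cat_branch(visual_seq, language_seq)], dim=-1)
        h, _ = self.decoder(fused)
        return self.out(h)


# Smoke test with random stand-ins for frame features and word embeddings.
model = DVLDCCSketch()
scores = model(torch.randn(2, 20, 512), torch.randn(2, 20, 512))
print(scores.shape)  # torch.Size([2, 20, 10000])
```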
Saved in:
Published in: Multimedia tools and applications, 2023-09, Vol. 82 (23), pp. 36207-36230
Main authors: Tang, Pengjie; Tan, Yunlan; Xia, Jiewu
Format: Article
Language: English
Subjects: Coding; Cognition; Cognition & reasoning; Collaboration; Computer Communication Networks; Computer Science; Data Structures and Information Theory; Datasets; Language; Learning; Multimedia Information Systems; Natural language processing; Semantics; Sentences; Special Purpose and Application-Based Systems; Vision
Online access: Full text
DOI: 10.1007/s11042-023-14887-z
Publisher: New York: Springer US
ISSN: 1380-7501
EISSN: 1573-7721
Record ID: cdi_proquest_journals_2866630894
Source: SpringerLink Journals - AutoHoldings