CTAL: Pre-training Cross-modal Transformer for Audio-and-Language Representations

Existing audio-language task-specific predictive approaches focus on building complicated late-fusion mechanisms. However, these models face challenges of overfitting with limited labels and low model generalization ability. In this paper, we present a Cross-modal Transformer for Audio-and-Language, i.e., CTAL, which aims to learn the intra-modality and inter-modality connections between audio and language through two proxy tasks on a large amount of audio-and-language pairs: masked language modeling and masked cross-modal acoustic modeling. After fine-tuning our pre-trained model on multiple downstream audio-and-language tasks, we observe significant improvements across various tasks, such as emotion classification, sentiment analysis, and speaker verification. On this basis, we further propose a specially designed fusion mechanism that can be used in the fine-tuning phase, which allows our pre-trained model to achieve better performance. Lastly, we present detailed ablation studies to show that both our novel cross-modality fusion component and our audio-language pre-training methods contribute significantly to the promising results.

Bibliographic Details
Main Authors: Li, Hang; Kang, Yu; Liu, Tianqiao; Ding, Wenbiao; Liu, Zitao
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Sound
Online Access: Order full text
creator Li, Hang; Kang, Yu; Liu, Tianqiao; Ding, Wenbiao; Liu, Zitao
description Existing audio-language task-specific predictive approaches focus on building complicated late-fusion mechanisms. However, these models face challenges of overfitting with limited labels and low model generalization ability. In this paper, we present a Cross-modal Transformer for Audio-and-Language, i.e., CTAL, which aims to learn the intra-modality and inter-modality connections between audio and language through two proxy tasks on a large amount of audio-and-language pairs: masked language modeling and masked cross-modal acoustic modeling. After fine-tuning our pre-trained model on multiple downstream audio-and-language tasks, we observe significant improvements across various tasks, such as emotion classification, sentiment analysis, and speaker verification. On this basis, we further propose a specially designed fusion mechanism that can be used in the fine-tuning phase, which allows our pre-trained model to achieve better performance. Lastly, we present detailed ablation studies to show that both our novel cross-modality fusion component and our audio-language pre-training methods contribute significantly to the promising results.
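The description above outlines CTAL's two proxy pre-training objectives: masked language modeling over the text stream and masked cross-modal acoustic modeling over the audio stream, both computed on a joint audio-and-language sequence. Below is a minimal sketch of how such a joint masked objective could be set up; the record does not include the authors' code, so every module name, dimension, and the choice of reconstruction loss here is an illustrative assumption rather than the paper's actual implementation.

```python
# Minimal sketch of the two proxy pre-training objectives named in the abstract:
# masked language modeling (MLM) over text tokens and masked cross-modal
# acoustic modeling over audio frames. All names and sizes are hypothetical.
import torch
import torch.nn as nn


class CrossModalPretrainer(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, n_audio_feats=80):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, hidden)
        self.audio_proj = nn.Linear(n_audio_feats, hidden)     # project acoustic frames
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.mlm_head = nn.Linear(hidden, vocab_size)           # predict masked tokens
        self.acoustic_head = nn.Linear(hidden, n_audio_feats)   # reconstruct masked frames

    def forward(self, token_ids, audio_feats):
        # Concatenate text and audio positions into one cross-modal sequence so
        # self-attention can model intra- and inter-modality connections.
        text_h = self.text_embed(token_ids)        # (B, Lt, H)
        audio_h = self.audio_proj(audio_feats)     # (B, La, H)
        fused = self.encoder(torch.cat([text_h, audio_h], dim=1))
        text_out = fused[:, :token_ids.size(1)]
        audio_out = fused[:, token_ids.size(1):]
        return self.mlm_head(text_out), self.acoustic_head(audio_out)


def pretrain_loss(model, token_ids, audio_feats, text_mask, audio_mask, target_ids):
    # Joint loss over masked positions only; text_mask / audio_mask are boolean
    # tensors marking which tokens and frames were masked out of the input.
    logits, recon = model(token_ids, audio_feats)
    mlm = nn.functional.cross_entropy(logits[text_mask], target_ids[text_mask])
    mam = nn.functional.l1_loss(recon[audio_mask], audio_feats[audio_mask])
    return mlm + mam
```

Concatenating the two modalities into a single sequence lets one encoder attend both within and across modalities, which matches the abstract's stated goal of learning intra- and inter-modality connections; the specific masking strategy and loss weighting would follow the paper itself.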
doi_str_mv 10.48550/arxiv.2109.00181
format Article
identifier DOI: 10.48550/arxiv.2109.00181
language eng
recordid cdi_arxiv_primary_2109_00181
source arXiv.org
subjects Computer Science - Artificial Intelligence; Computer Science - Sound
title CTAL: Pre-training Cross-modal Transformer for Audio-and-Language Representations