Unified Multimodal Punctuation Restoration Framework for Mixed-Modality Corpus
The punctuation restoration task aims to correctly punctuate the output transcriptions of automatic speech recognition systems. Previous punctuation models, which either use text only or require the corresponding audio, tend to be constrained in real-world scenarios, where unpunctuated sentences are a mixture of those with and without audio. This paper proposes a unified multimodal punctuation restoration framework, named UniPunc, that punctuates such mixed sentences with a single model. UniPunc jointly represents audio and non-audio samples in a shared latent space, on the basis of which the model learns a hybrid representation and punctuates both kinds of samples. We validate the effectiveness of UniPunc on real-world datasets, where it outperforms various strong baselines (e.g. BERT, MuSe) by at least 0.8 in overall F1 score, setting a new state of the art. Extensive experiments show that UniPunc's design is a broadly applicable solution: grafted onto previous models, UniPunc enables them to punctuate the mixed corpus. Our code is available at github.com/Yaoming95/UniPunc.
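The abstract describes the mechanism only at a high level, so a short sketch may help make it concrete. The PyTorch code below illustrates a UniPunc-style design under stated assumptions; it is not the authors' implementation (which lives at github.com/Yaoming95/UniPunc). The class name `UniPuncSketch`, the `virtual_audio` placeholder, the cross-attention fusion, the label set, and all dimensions are hypothetical stand-ins.

```python
# Minimal sketch of a UniPunc-style mixed-modality punctuation model.
# NOT the authors' implementation (see github.com/Yaoming95/UniPunc);
# module names, dimensions, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

PUNC_LABELS = 4  # e.g. O, COMMA, PERIOD, QUESTION (assumed label set)

class UniPuncSketch(nn.Module):
    def __init__(self, d_model=768, n_heads=8, audio_dim=512):
        super().__init__()
        # Lexical encoder: a stand-in for a pretrained model such as BERT.
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2,
        )
        # Project acoustic features (e.g. from a speech model) into the shared space.
        self.audio_proj = nn.Linear(audio_dim, d_model)
        # Learned placeholder that stands in for audio when a sample has none,
        # so audio and non-audio samples live in one shared latent space.
        self.virtual_audio = nn.Parameter(torch.randn(1, 1, d_model))
        # Cross-attention fuses lexical states with (real or virtual) acoustics
        # into the hybrid representation.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, PUNC_LABELS)

    def forward(self, text_emb, audio_feat=None):
        # text_emb: (B, T, d_model) token embeddings of the unpunctuated text
        # audio_feat: (B, S, audio_dim) acoustic frames, or None if no audio
        h_text = self.text_encoder(text_emb)
        if audio_feat is not None:
            h_audio = self.audio_proj(audio_feat)
        else:
            h_audio = self.virtual_audio.expand(text_emb.size(0), -1, -1)
        # Hybrid representation: each token attends over the acoustic sequence.
        h_hybrid, _ = self.cross_attn(h_text, h_audio, h_audio)
        return self.classifier(h_text + h_hybrid)  # per-token punctuation logits

# Both kinds of samples flow through the same model:
model = UniPuncSketch()
with_audio = model(torch.randn(2, 20, 768), torch.randn(2, 50, 512))
text_only = model(torch.randn(2, 20, 768))  # virtual placeholder replaces audio
print(with_audio.shape, text_only.shape)    # torch.Size([2, 20, 4]) twice
```

The structural point is that missing audio is replaced by a learned placeholder in the same latent space, so a single set of parameters handles the mixed corpus; this is also what makes the design graftable onto existing text-only punctuation models, as the abstract claims.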
Main Authors: | Zhu, Yaoming; Wu, Liwei; Cheng, Shanbo; Wang, Mingxuan |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language |
Online Access: | Order full text |
DOI: | 10.48550/arxiv.2202.00468 |
Date: | 2022-01-24 |
Source: | arXiv.org |