Selective Listening by Synchronizing Speech With Lips

A speaker extraction algorithm seeks to extract the speech of a target speaker from a multi-talker speech mixture when given a cue that represents the target speaker, such as a pre-enrolled speech utterance or an accompanying video track. Visual cues are particularly useful when a pre-enrolled speech is not available.
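The abstract describes forming an attractor from the target speaker's face track and using it to extract that speaker from the mixture. Below is a minimal PyTorch sketch of this general idea: a hypothetical lip-track encoder produces an attractor embedding that conditions a mask estimator over encoded mixture features. All module names, dimensions, and the masking design are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class LipAttractorEncoder(nn.Module):
    # Maps a lip-region feature sequence to a per-utterance attractor vector.
    def __init__(self, in_dim=512, attractor_dim=256):
        super().__init__()
        self.rnn = nn.GRU(in_dim, attractor_dim, batch_first=True)

    def forward(self, lip_feats):              # lip_feats: (B, T_video, in_dim)
        _, h = self.rnn(lip_feats)             # h: (1, B, attractor_dim)
        return h.squeeze(0)                    # (B, attractor_dim)

class MaskEstimator(nn.Module):
    # Estimates a soft mask for the target speaker, conditioned on the attractor.
    def __init__(self, feat_dim=256, attractor_dim=256):
        super().__init__()
        self.proj = nn.Linear(feat_dim + attractor_dim, feat_dim)
        self.mask = nn.Sequential(nn.ReLU(), nn.Linear(feat_dim, feat_dim), nn.Sigmoid())

    def forward(self, mix_feats, attractor):   # mix_feats: (B, T_audio, feat_dim)
        a = attractor.unsqueeze(1).expand(-1, mix_feats.size(1), -1)
        fused = self.proj(torch.cat([mix_feats, a], dim=-1))
        return self.mask(fused)                # mask values in [0, 1]

# Toy forward pass with random tensors standing in for real features.
mix_feats = torch.randn(2, 100, 256)           # encoded speech mixture
lip_feats = torch.randn(2, 25, 512)            # lip embeddings from the face track
attractor = LipAttractorEncoder()(lip_feats)
mask = MaskEstimator()(mix_feats, attractor)
target_feats = mix_feats * mask                # masked features of the target speaker
print(target_feats.shape)                      # torch.Size([2, 100, 256])

In the setting the abstract describes, the lip encoder would be initialized from a self-supervised speech-lip synchronization model rather than trained from scratch.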

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE/ACM transactions on audio, speech, and language processing, 2022, Vol.30, p.1650-1664
Main Authors: Pan, Zexu, Tao, Ruijie, Xu, Chenglin, Li, Haizhou
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_end_page 1664
container_issue
container_start_page 1650
container_title IEEE/ACM transactions on audio, speech, and language processing
container_volume 30
creator Pan, Zexu
Tao, Ruijie
Xu, Chenglin
Li, Haizhou
description A speaker extraction algorithm seeks to extract the speech of a target speaker from a multi-talker speech mixture when given a cue that represents the target speaker, such as a pre-enrolled speech utterance or an accompanying video track. Visual cues are particularly useful when a pre-enrolled speech is not available. In this work, we do not rely on the target speaker's pre-enrolled speech, but rather use the target speaker's face track as the speaker cue, referred to as the auxiliary reference, to form an attractor towards the target speaker. We advocate that the temporal synchronization between speech and its accompanying lip movements is a direct and dominant audio-visual cue. Therefore, we propose a self-supervised pre-training strategy to exploit the speech-lip synchronization cue for target speaker extraction, which allows us to leverage abundant unlabeled in-domain data. We transfer the knowledge from the pre-trained model to the attractor encoder of the speaker extraction network. We show that the proposed speaker extraction network outperforms various competitive baselines in terms of signal quality, perceptual quality, and intelligibility, achieving state-of-the-art performance.
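The description mentions a self-supervised pre-training strategy that exploits speech-lip synchronization, with the learned knowledge transferred to the attractor encoder. The sketch below illustrates one common form such a pretext task can take: audio and lip embeddings from the same time window are pulled together, while temporally shifted (out-of-sync) pairs are pushed apart. The loss form, margin, and embedding shapes are assumptions for illustration and may differ from the paper's exact recipe.

import torch
import torch.nn.functional as F

def sync_contrastive_loss(audio_emb, lip_emb, offsync_lip_emb, margin=0.5):
    # audio_emb, lip_emb, offsync_lip_emb: (B, D) window-level embeddings.
    pos = 1.0 - F.cosine_similarity(audio_emb, lip_emb)                    # small when in sync
    neg = F.relu(F.cosine_similarity(audio_emb, offsync_lip_emb) - margin) # penalize similar off-sync pairs
    return (pos + neg).mean()

# Off-sync pairs can be built by shifting the lip embeddings in time (or across the batch).
audio_emb = torch.randn(8, 256)
lip_emb = torch.randn(8, 256)
offsync_lip_emb = torch.roll(lip_emb, shifts=1, dims=0)
loss = sync_contrastive_loss(audio_emb, lip_emb, offsync_lip_emb)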
doi_str_mv 10.1109/TASLP.2022.3153258
format Article
fullrecord <record><control><sourceid>proquest_cross</sourceid><recordid>TN_cdi_proquest_journals_2663643620</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><ieee_id>9721129</ieee_id><sourcerecordid>2663643620</sourcerecordid><originalsourceid>FETCH-LOGICAL-c269t-32735bea97a10f6f560925196f8888d3afde9720242976017072e6bbd8b434e03</originalsourceid><addsrcrecordid>eNo9kEFLw0AQhRdRsNT-Ab0EPKfOzia72WMpaoWAQioelySdmC01ibupUH-9ia3OZYbhvZnHx9g1hznnoO_Wiyx9mSMgzgWPBcbJGZugQB1qAdH534waLtnM-y0AcFBaq2jC4ox2VPb2i4LU-p4a27wHxSHIDk1Zu7ax3-Mi64jKOnizfT3IOn_FLqp852l26lP2-nC_Xq7C9PnxablIwxKl7kOBSsQF5VrlHCpZxRI0xlzLKhlqI_JqQ1oNwSPUSgJXoJBkUWySIhIRgZiy2-PdzrWfe_K92bZ71wwvDUopZCQkjio8qkrXeu-oMp2zH7k7GA5mJGR-CZmRkDkRGkw3R5Mlon_DkIZz1OIHeWVfmQ</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2663643620</pqid></control><display><type>article</type><title>Selective Listening by Synchronizing Speech With Lips</title><source>Access via ACM Digital Library</source><source>IEEE Electronic Library (IEL)</source><creator>Pan, Zexu ; Tao, Ruijie ; Xu, Chenglin ; Li, Haizhou</creator><creatorcontrib>Pan, Zexu ; Tao, Ruijie ; Xu, Chenglin ; Li, Haizhou</creatorcontrib><description>A speaker extraction algorithm seeks to extract the speech of a target speaker from a multi-talker speech mixture when given a cue that represents the target speaker, such as a pre-enrolled speech utterance, or an accompanying video track. Visual cues are particularly useful when a pre-enrolled speech is not available. In this work, we don't rely on the target speaker's pre-enrolled speech, but rather use the target speaker's face track as the speaker cue, that is referred to as the auxiliary reference, to form an attractor towards the target speaker. We advocate that the temporal synchronization between the speech and its accompanying lip movements is a direct and dominant audio-visual cue. Therefore, we propose a self-supervised pre-training strategy, to exploit the speech-lip synchronization cue for target speaker extraction, which allows us to leverage abundant unlabeled in-domain data. We transfer the knowledge from the pre-trained model to the attractor encoder of the speaker extraction network. We show that the proposed speaker extraction network outperforms various competitive baselines in terms of signal quality, perceptual quality, and intelligibility, achieving state-of-the-art performance.</description><identifier>ISSN: 2329-9290</identifier><identifier>EISSN: 2329-9304</identifier><identifier>DOI: 10.1109/TASLP.2022.3153258</identifier><identifier>CODEN: ITASFA</identifier><language>eng</language><publisher>Piscataway: IEEE</publisher><subject>Algorithms ; Coders ; Feature extraction ; Intelligibility ; Knowledge management ; Lips ; Multi-modal ; self-enrollment ; Signal quality ; speaker embedding ; Speech ; Speech processing ; Speech recognition ; speech-lip synchronization ; Synchronism ; Synchronization ; target speaker extraction ; Task analysis ; time-domain ; Tracking ; Visualization ; Voice recognition</subject><ispartof>IEEE/ACM transactions on audio, speech, and language processing, 2022, Vol.30, p.1650-1664</ispartof><rights>Copyright The Institute of Electrical and Electronics Engineers, Inc. 
(IEEE) 2022</rights><lds50>peer_reviewed</lds50><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed><citedby>FETCH-LOGICAL-c269t-32735bea97a10f6f560925196f8888d3afde9720242976017072e6bbd8b434e03</citedby><cites>FETCH-LOGICAL-c269t-32735bea97a10f6f560925196f8888d3afde9720242976017072e6bbd8b434e03</cites><orcidid>0000-0002-1584-6282 ; 0000-0001-9158-9401 ; 0000-0002-8106-1176</orcidid></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://ieeexplore.ieee.org/document/9721129$$EHTML$$P50$$Gieee$$Hfree_for_read</linktohtml><link.rule.ids>315,781,785,797,4025,27928,27929,27930,54763</link.rule.ids></links><search><creatorcontrib>Pan, Zexu</creatorcontrib><creatorcontrib>Tao, Ruijie</creatorcontrib><creatorcontrib>Xu, Chenglin</creatorcontrib><creatorcontrib>Li, Haizhou</creatorcontrib><title>Selective Listening by Synchronizing Speech With Lips</title><title>IEEE/ACM transactions on audio, speech, and language processing</title><addtitle>TASLP</addtitle><description>A speaker extraction algorithm seeks to extract the speech of a target speaker from a multi-talker speech mixture when given a cue that represents the target speaker, such as a pre-enrolled speech utterance, or an accompanying video track. Visual cues are particularly useful when a pre-enrolled speech is not available. In this work, we don't rely on the target speaker's pre-enrolled speech, but rather use the target speaker's face track as the speaker cue, that is referred to as the auxiliary reference, to form an attractor towards the target speaker. We advocate that the temporal synchronization between the speech and its accompanying lip movements is a direct and dominant audio-visual cue. Therefore, we propose a self-supervised pre-training strategy, to exploit the speech-lip synchronization cue for target speaker extraction, which allows us to leverage abundant unlabeled in-domain data. We transfer the knowledge from the pre-trained model to the attractor encoder of the speaker extraction network. 
We show that the proposed speaker extraction network outperforms various competitive baselines in terms of signal quality, perceptual quality, and intelligibility, achieving state-of-the-art performance.</description><subject>Algorithms</subject><subject>Coders</subject><subject>Feature extraction</subject><subject>Intelligibility</subject><subject>Knowledge management</subject><subject>Lips</subject><subject>Multi-modal</subject><subject>self-enrollment</subject><subject>Signal quality</subject><subject>speaker embedding</subject><subject>Speech</subject><subject>Speech processing</subject><subject>Speech recognition</subject><subject>speech-lip synchronization</subject><subject>Synchronism</subject><subject>Synchronization</subject><subject>target speaker extraction</subject><subject>Task analysis</subject><subject>time-domain</subject><subject>Tracking</subject><subject>Visualization</subject><subject>Voice recognition</subject><issn>2329-9290</issn><issn>2329-9304</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2022</creationdate><recordtype>article</recordtype><sourceid>ESBDL</sourceid><sourceid>RIE</sourceid><recordid>eNo9kEFLw0AQhRdRsNT-Ab0EPKfOzia72WMpaoWAQioelySdmC01ibupUH-9ia3OZYbhvZnHx9g1hznnoO_Wiyx9mSMgzgWPBcbJGZugQB1qAdH534waLtnM-y0AcFBaq2jC4ox2VPb2i4LU-p4a27wHxSHIDk1Zu7ax3-Mi64jKOnizfT3IOn_FLqp852l26lP2-nC_Xq7C9PnxablIwxKl7kOBSsQF5VrlHCpZxRI0xlzLKhlqI_JqQ1oNwSPUSgJXoJBkUWySIhIRgZiy2-PdzrWfe_K92bZ71wwvDUopZCQkjio8qkrXeu-oMp2zH7k7GA5mJGR-CZmRkDkRGkw3R5Mlon_DkIZz1OIHeWVfmQ</recordid><startdate>2022</startdate><enddate>2022</enddate><creator>Pan, Zexu</creator><creator>Tao, Ruijie</creator><creator>Xu, Chenglin</creator><creator>Li, Haizhou</creator><general>IEEE</general><general>The Institute of Electrical and Electronics Engineers, Inc. 
(IEEE)</general><scope>97E</scope><scope>ESBDL</scope><scope>RIA</scope><scope>RIE</scope><scope>AAYXX</scope><scope>CITATION</scope><scope>7SC</scope><scope>8FD</scope><scope>JQ2</scope><scope>L7M</scope><scope>L~C</scope><scope>L~D</scope><orcidid>https://orcid.org/0000-0002-1584-6282</orcidid><orcidid>https://orcid.org/0000-0001-9158-9401</orcidid><orcidid>https://orcid.org/0000-0002-8106-1176</orcidid></search><sort><creationdate>2022</creationdate><title>Selective Listening by Synchronizing Speech With Lips</title><author>Pan, Zexu ; Tao, Ruijie ; Xu, Chenglin ; Li, Haizhou</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c269t-32735bea97a10f6f560925196f8888d3afde9720242976017072e6bbd8b434e03</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2022</creationdate><topic>Algorithms</topic><topic>Coders</topic><topic>Feature extraction</topic><topic>Intelligibility</topic><topic>Knowledge management</topic><topic>Lips</topic><topic>Multi-modal</topic><topic>self-enrollment</topic><topic>Signal quality</topic><topic>speaker embedding</topic><topic>Speech</topic><topic>Speech processing</topic><topic>Speech recognition</topic><topic>speech-lip synchronization</topic><topic>Synchronism</topic><topic>Synchronization</topic><topic>target speaker extraction</topic><topic>Task analysis</topic><topic>time-domain</topic><topic>Tracking</topic><topic>Visualization</topic><topic>Voice recognition</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Pan, Zexu</creatorcontrib><creatorcontrib>Tao, Ruijie</creatorcontrib><creatorcontrib>Xu, Chenglin</creatorcontrib><creatorcontrib>Li, Haizhou</creatorcontrib><collection>IEEE All-Society Periodicals Package (ASPP) 2005-present</collection><collection>IEEE Xplore Open Access Journals</collection><collection>IEEE All-Society Periodicals Package (ASPP) 1998-Present</collection><collection>IEEE Electronic Library (IEL)</collection><collection>CrossRef</collection><collection>Computer and Information Systems Abstracts</collection><collection>Technology Research Database</collection><collection>ProQuest Computer Science Collection</collection><collection>Advanced Technologies Database with Aerospace</collection><collection>Computer and Information Systems Abstracts – Academic</collection><collection>Computer and Information Systems Abstracts Professional</collection><jtitle>IEEE/ACM transactions on audio, speech, and language processing</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Pan, Zexu</au><au>Tao, Ruijie</au><au>Xu, Chenglin</au><au>Li, Haizhou</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Selective Listening by Synchronizing Speech With Lips</atitle><jtitle>IEEE/ACM transactions on audio, speech, and language processing</jtitle><stitle>TASLP</stitle><date>2022</date><risdate>2022</risdate><volume>30</volume><spage>1650</spage><epage>1664</epage><pages>1650-1664</pages><issn>2329-9290</issn><eissn>2329-9304</eissn><coden>ITASFA</coden><abstract>A speaker extraction algorithm seeks to extract the speech of a target speaker from a multi-talker speech mixture when given a cue that represents the target speaker, such as a pre-enrolled speech utterance, or an accompanying video track. Visual cues are particularly useful when a pre-enrolled speech is not available. 
In this work, we don't rely on the target speaker's pre-enrolled speech, but rather use the target speaker's face track as the speaker cue, that is referred to as the auxiliary reference, to form an attractor towards the target speaker. We advocate that the temporal synchronization between the speech and its accompanying lip movements is a direct and dominant audio-visual cue. Therefore, we propose a self-supervised pre-training strategy, to exploit the speech-lip synchronization cue for target speaker extraction, which allows us to leverage abundant unlabeled in-domain data. We transfer the knowledge from the pre-trained model to the attractor encoder of the speaker extraction network. We show that the proposed speaker extraction network outperforms various competitive baselines in terms of signal quality, perceptual quality, and intelligibility, achieving state-of-the-art performance.</abstract><cop>Piscataway</cop><pub>IEEE</pub><doi>10.1109/TASLP.2022.3153258</doi><tpages>15</tpages><orcidid>https://orcid.org/0000-0002-1584-6282</orcidid><orcidid>https://orcid.org/0000-0001-9158-9401</orcidid><orcidid>https://orcid.org/0000-0002-8106-1176</orcidid><oa>free_for_read</oa></addata></record>
fulltext fulltext
identifier ISSN: 2329-9290
ispartof IEEE/ACM transactions on audio, speech, and language processing, 2022, Vol.30, p.1650-1664
issn 2329-9290
2329-9304
language eng
recordid cdi_proquest_journals_2663643620
source Access via ACM Digital Library; IEEE Electronic Library (IEL)
subjects Algorithms
Coders
Feature extraction
Intelligibility
Knowledge management
Lips
Multi-modal
self-enrollment
Signal quality
speaker embedding
Speech
Speech processing
Speech recognition
speech-lip synchronization
Synchronism
Synchronization
target speaker extraction
Task analysis
time-domain
Tracking
Visualization
Voice recognition
title Selective Listening by Synchronizing Speech With Lips
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-16T11%3A54%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Selective%20Listening%20by%20Synchronizing%20Speech%20With%20Lips&rft.jtitle=IEEE/ACM%20transactions%20on%20audio,%20speech,%20and%20language%20processing&rft.au=Pan,%20Zexu&rft.date=2022&rft.volume=30&rft.spage=1650&rft.epage=1664&rft.pages=1650-1664&rft.issn=2329-9290&rft.eissn=2329-9304&rft.coden=ITASFA&rft_id=info:doi/10.1109/TASLP.2022.3153258&rft_dat=%3Cproquest_cross%3E2663643620%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2663643620&rft_id=info:pmid/&rft_ieee_id=9721129&rfr_iscdi=true