Structure of pauses in speech in the context of speaker verification and classification of speech type

Statistics of pauses in Polish speech are described as a potential source of biometric information for automatic speaker recognition. The use of three main types of acoustic pauses (silent, filled, and breath pauses) and of syntactic pauses (punctuation marks in speech transcripts) was investigated quantitatively in three types of spontaneous speech (presentations, simultaneous interpretation, and radio interviews) and in read speech (audiobooks). Selected pause parameters, extracted for each speaker separately or for speaker groups, were examined statistically to verify the usefulness of pause information for speaker recognition and speaker profile estimation. The quantity and duration of filled pauses and audible breaths, and the correlation between the temporal structure of speech and the syntactic structure of the spoken language, were the features that characterize speakers most. An experiment using pauses in a speaker biometry system (based on a Universal Background Model and i-vectors) resulted in a 30 % equal error rate, and adding pause-related features to a baseline Mel-frequency cepstral coefficient system did not significantly improve its performance. In an experiment on automatic recognition of the three types of spontaneous speech, we achieved 78 % accuracy with a GMM classifier, and silent-pause features allowed read speech to be distinguished from spontaneous speech by extreme gradient boosting with 75 % accuracy.

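The abstract mentions two concrete pipelines: distinguishing read from spontaneous speech with extreme gradient boosting over silent-pause statistics, and evaluating a UBM/i-vector speaker-verification system by its equal error rate. The sketch below is only an illustration of the first idea under stated assumptions: the five pause statistics, the 30 dB silence threshold, the 0.15 s minimum pause duration, and the use of librosa and XGBoost are choices made for this example, not the authors' actual feature set or toolchain.

```python
# Illustrative sketch (not the authors' pipeline): per-recording silent-pause
# statistics fed to an extreme-gradient-boosting classifier that separates
# read speech from spontaneous speech.
import numpy as np
import librosa
from xgboost import XGBClassifier

def silent_pause_features(path, top_db=30, min_pause=0.15):
    """Five simple statistics of silent pauses in one recording."""
    y, sr = librosa.load(path, sr=16000, mono=True)
    voiced = librosa.effects.split(y, top_db=top_db)       # non-silent intervals, in samples
    gaps = (voiced[1:, 0] - voiced[:-1, 1]) / sr            # gaps between voiced intervals (s)
    pauses = gaps[gaps >= min_pause]                        # keep gaps long enough to count as pauses
    total = len(y) / sr
    if pauses.size == 0:
        return np.zeros(5)
    return np.array([
        pauses.size / (total / 60.0),   # silent pauses per minute
        pauses.mean(),                  # mean pause duration (s)
        pauses.std(),                   # spread of pause durations
        pauses.max(),                   # longest pause (s)
        pauses.sum() / total,           # fraction of time spent in silent pauses
    ])

def train_read_vs_spontaneous(paths, labels):
    """Fit a gradient-boosted classifier; labels: 1 = read speech, 0 = spontaneous."""
    X = np.vstack([silent_pause_features(p) for p in paths])
    clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
    clf.fit(X, np.asarray(labels))
    return clf
```

For the verification experiment, the reported 30 % figure is an equal error rate. A minimal, generic way to compute EER from per-trial scores (again an illustrative sketch, not the paper's evaluation code) is:

```python
# Illustrative sketch of an equal-error-rate computation for a verification
# system: the operating point where false-acceptance and false-rejection rates meet.
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """labels: 1 = genuine (same-speaker) trial, 0 = impostor; scores: system outputs."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))   # point closest to FAR == FRR
    return (fpr[idx] + fnr[idx]) / 2.0
```
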
Bibliographic Details
Published in: EURASIP Journal on Audio, Speech, and Music Processing, 2016-11, Vol. 2016 (1), p. 1, Article 18
Main authors: Igras-Cybulska, Magdalena; Ziółko, Bartosz; Żelasko, Piotr; Witkowski, Marcin
Format: Article
Language: English
Subjects: Acoustics; Engineering; Engineering Acoustics; Mathematics in Music; Signal, Image and Speech Processing
Online access: Full text
DOI: 10.1186/s13636-016-0096-7
ISSN: 1687-4722; 1687-4714
Publisher: Springer International Publishing, Cham