Stethoscope-Sensed Speech and Breath-Sounds for Person Identification With Sparse Training Data
A novel person identification (PID) technique is developed in this study, which exploits a new biometric, bronchial breath sounds, together with speech signals acquired by a stethoscope. In addition to investigating the acoustic characteristics of breath sounds for PID, we evaluate three identification methods: support vector machines (SVM), artificial neural networks (ANN), and the i-vector approach. Recognizing the requirement that the amount of sound data collected from each person should be as small as possible, this work studies data augmentation (DA) techniques that keep the training process from overfitting when the training sound data are insufficient. In addition, we apply feature engineering techniques to find an informative subset of breath-sound features that benefits PID. Our experiments were conducted on a dataset of 16 subjects, with an equal number of male and female participants. In the test phase, both the SVM combined with feature selection and the ANN approach yielded promising accuracies of 98%.
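As a rough illustration of the feature-selection idea in the abstract, the sketch below ranks features by a simple Fisher-style score on synthetic data. The score, the function name, and the toy data are assumptions for illustration only; they are not the paper's actual feature-engineering pipeline.

```python
import numpy as np

def fisher_scores(X, y):
    """Rank features by a Fisher-style score: variance of the per-class means
    divided by the mean within-class variance. Higher = more discriminative."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    variances = np.array([X[y == c].var(axis=0) for c in classes])
    return means.var(axis=0) / (variances.mean(axis=0) + 1e-12)

# Toy stand-in for breath-sound features: 2 informative dims + 8 noise dims
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 10))
X[y == 1, :2] += 3.0                      # classes separate only in dims 0 and 1
top2 = np.argsort(fisher_scores(X, y))[-2:]
print(sorted(top2.tolist()))              # recovers the two informative dimensions
```

In a PID pipeline, the top-ranked features would then feed a classifier such as the SVM the abstract mentions.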
Saved in:
Published in: | IEEE Sensors Journal, 2020-01, Vol.20 (2), p.848-859 |
---|---|
Main authors: | Tran, Van-Thuan ; Tsai, Wei-Ho |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 859 |
---|---|
container_issue | 2 |
container_start_page | 848 |
container_title | IEEE sensors journal |
container_volume | 20 |
creator | Tran, Van-Thuan ; Tsai, Wei-Ho |
description | A novel person identification (PID) technique is developed in this study, which exploits a new biometric, bronchial breath sounds, together with speech signals acquired by a stethoscope. In addition to investigating the acoustic characteristics of breath sounds for PID, we evaluate three identification methods: support vector machines (SVM), artificial neural networks (ANN), and the i-vector approach. Recognizing the requirement that the amount of sound data collected from each person should be as small as possible, this work studies data augmentation (DA) techniques that keep the training process from overfitting when the training sound data are insufficient. In addition, we apply feature engineering techniques to find an informative subset of breath-sound features that benefits PID. Our experiments were conducted on a dataset of 16 subjects, with an equal number of male and female participants. In the test phase, both the SVM combined with feature selection and the ANN approach yielded promising accuracies of 98%. |
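The audio data augmentation described above can be illustrated with a minimal sketch: two generic augmentations (additive noise at a target SNR and a random circular time shift) applied to a one-dimensional signal. The function name, parameters, and the choice of augmentations are assumptions for illustration; the paper's exact DA recipe is not reproduced here.

```python
import numpy as np

def augment_breath_sound(signal, rng, noise_snr_db=20.0, max_shift=0.1):
    """Return two augmented variants of a 1-D audio signal:
    (1) additive Gaussian noise at the requested SNR in dB, and
    (2) a random circular time shift of up to max_shift of the length.
    Illustrative only; not the paper's actual augmentation pipeline."""
    # Additive noise scaled so that signal power / noise power hits the target SNR
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10 ** (noise_snr_db / 10))
    noisy = signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    # Random circular shift: same samples, different alignment in time
    shift = rng.integers(1, max(2, int(max_shift * len(signal))))
    shifted = np.roll(signal, shift)
    return noisy, shifted

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 8000))  # stand-in for a breath-sound frame
noisy, shifted = augment_breath_sound(x, rng)
print(noisy.shape, shifted.shape)  # both variants keep the input length
```

Each recorded sample can thus yield several training examples, which is the point of DA when only a small amount of data per person is collected.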
doi_str_mv | 10.1109/JSEN.2019.2945364 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1530-437X |
ispartof | IEEE sensors journal, 2020-01, Vol.20 (2), p.848-859 |
issn | 1530-437X ; 1558-1748 |
language | eng |
recordid | cdi_crossref_primary_10_1109_JSEN_2019_2945364 |
source | IEEE Electronic Library (IEL) |
subjects | Acoustics ; Artificial neural networks ; audio data augmentation ; Authentication ; bronchial breath sounds ; feature engineering ; i-vector ; Identification methods ; Neural networks ; person identification ; Position measurement ; Sensors ; Sound ; Speech recognition ; Stethoscope ; Support vector machines ; Training |
title | Stethoscope-Sensed Speech and Breath-Sounds for Person Identification With Sparse Training Data |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T05%3A03%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Stethoscope-Sensed%20Speech%20and%20Breath-Sounds%20for%20Person%20Identification%20With%20Sparse%20Training%20Data&rft.jtitle=IEEE%20sensors%20journal&rft.au=Tran,%20Van-Thuan&rft.date=2020-01-15&rft.volume=20&rft.issue=2&rft.spage=848&rft.epage=859&rft.pages=848-859&rft.issn=1530-437X&rft.eissn=1558-1748&rft.coden=ISJEAZ&rft_id=info:doi/10.1109/JSEN.2019.2945364&rft_dat=%3Cproquest_RIE%3E2333539942%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2333539942&rft_id=info:pmid/&rft_ieee_id=8856246&rfr_iscdi=true |