An efficient adaptive artificial neural network based text to speech synthesizer for Hindi language
Speech recognition is one of the major research areas in speech processing today. This paper develops a complete system that takes a text file from the user as input and produces speech as output. It proposes a text-to-speech synthesizer for the Hindi language...
Saved in:
Published in: | Multimedia tools and applications 2021-07, Vol.80 (16), p.24669-24695 |
---|---|
Main authors: | Kumari, Ruchika; Dev, Amita; Kumar, Ashwani |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 24695 |
---|---|
container_issue | 16 |
container_start_page | 24669 |
container_title | Multimedia tools and applications |
container_volume | 80 |
creator | Kumari, Ruchika; Dev, Amita; Kumar, Ashwani |
description | Speech recognition is one of the major research areas in speech processing today. This paper develops a complete system that takes a text file from the user as input and produces speech as output. It proposes a text-to-speech synthesizer for the Hindi language in which Mel-frequency cepstral coefficient (MFCC) features are extracted and combined with production and linguistic constraints to model parameters such as intonation, duration, and syllable intensity. The features derived from the MFCCs include phrasing, fundamental frequency, and duration. Neural network models are trained to capture these features from the MFCCs. The performance of the proposed ALO-ANN model is evaluated using objective measures such as prediction error (η), standard deviation (σ), and linear correlation coefficient (χ). The prediction accuracy of the proposed ALO-ANN model is higher than that of other models such as DNN and ANN. |
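The description names the concrete ingredients of the approach: MFCC feature extraction and evaluation of predicted prosodic parameters with prediction error (η), standard deviation (σ), and linear correlation coefficient (χ). The Python sketch below illustrates both pieces under stated assumptions; it is not the authors' implementation. The synthetic signal, the librosa-based MFCC call, the hypothetical syllable-duration arrays, and the relative formula used for η are all assumptions made for demonstration only.

```python
# Illustrative sketch only: MFCC extraction and the objective measures named in
# the abstract. The signal, durations, and parameter choices are hypothetical.
import numpy as np
import librosa  # assumed available; any MFCC implementation would do

# --- MFCC extraction from a synthetic 1-second signal at 16 kHz ---
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220 * t)        # stand-in for recorded Hindi speech
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print("MFCC matrix shape:", mfcc.shape)       # (13, n_frames)

# --- Objective measures for a prosodic parameter (e.g., syllable duration) ---
# Hypothetical ground-truth vs. predicted syllable durations in milliseconds.
actual = np.array([120.0, 95.0, 140.0, 110.0, 160.0])
predicted = np.array([118.0, 101.0, 133.0, 115.0, 150.0])

error = actual - predicted
eta = np.mean(np.abs(error) / actual)          # relative prediction error (η), one plausible definition
sigma = np.std(error)                          # standard deviation of the error (σ)
chi = np.corrcoef(actual, predicted)[0, 1]     # linear correlation coefficient (χ)

print(f"prediction error η ≈ {eta:.3f}")
print(f"standard deviation σ ≈ {sigma:.2f} ms")
print(f"linear correlation χ ≈ {chi:.3f}")
```

In practice the MFCCs would be computed from recorded Hindi speech and the duration and intonation targets would come from an annotated corpus; the record does not give the exact definition of η, so the relative-error line above should be read as one plausible choice rather than the paper's formula.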
doi_str_mv | 10.1007/s11042-021-10771-w |
format | Article |
publisher | New York: Springer US |
rights | The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021 |
identifier | ISSN: 1380-7501 |
ispartof | Multimedia tools and applications, 2021-07, Vol.80 (16), p.24669-24695 |
issn | 1380-7501 1573-7721 |
language | eng |
recordid | cdi_proquest_journals_2548389689 |
source | Springer Nature - Complete Springer Journals |
subjects | Artificial neural networks; Computer Communication Networks; Computer Science; Constraint modelling; Correlation coefficients; Data Structures and Information Theory; Error analysis; Feature extraction; Hindi language; Model accuracy; Multimedia Information Systems; Neural networks; Resonant frequencies; Special Purpose and Application-Based Systems; Speech processing; Speech recognition; Synthesis |
title | An efficient adaptive artificial neural network based text to speech synthesizer for Hindi language |