Classification of Arabic fricative consonants according to their places of articulation

Many technology systems have used voice recognition applications to transcribe a speaker's speech into text that can be used by these systems. One of the most complex tasks in speech identification is knowing which acoustic cues should be used to classify sounds. This study presents an approach for characterizing Arabic fricative consonants in two groups (sibilant and non-sibilant). From an acoustic point of view, our approach is based on the analysis of the energy distribution, in frequency bands, in a syllable of the consonant-vowel type. From a practical point of view, our technique has been implemented in MATLAB and tested on a corpus built in our laboratory. The results obtained show that the percentage energy distribution in a speech signal is a very powerful parameter in the classification of Arabic fricatives. We obtained an accuracy of 92% for the non-sibilant consonants /f, χ, ɣ, ʕ, ћ, and h/, 84% for the sibilants /s, sҁ, z, Ӡ and ∫/, and 89% for the overall classification rate. In comparison with other algorithms based on neural networks and support vector machines (SVM), our classification system provided a higher classification rate.

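The abstract describes the core measurement behind the classifier: the share of a consonant-vowel syllable's spectral energy that falls into different frequency bands, used to separate sibilant from non-sibilant fricatives. The original system was implemented in MATLAB; the sketch below is only a minimal Python illustration of that idea, and the band edges, the Welch spectral estimate, and the sibilant threshold are all placeholders chosen for the example rather than values taken from the paper.

```python
import numpy as np
from scipy.signal import welch

# Illustrative band edges in Hz; not the bands used in the paper.
BAND_EDGES = (0, 1000, 2500, 4000, 8000)

def band_energy_percentages(signal, fs, band_edges=BAND_EDGES):
    """Percentage of spectral energy in each frequency band of a CV syllable."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(1024, len(signal)))
    total = psd.sum()
    shares = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        in_band = (freqs >= lo) & (freqs < hi)
        shares.append(100.0 * psd[in_band].sum() / total)
    return shares

def looks_sibilant(signal, fs, high_band_share=50.0):
    """Toy decision rule: sibilants concentrate energy in the highest band.
    The 50% threshold is a placeholder, not a value reported in the paper."""
    return band_energy_percentages(signal, fs)[-1] > high_band_share

# Example on synthetic noise standing in for a recorded consonant-vowel syllable.
if __name__ == "__main__":
    fs = 16000
    rng = np.random.default_rng(0)
    fake_syllable = rng.standard_normal(fs // 4)  # 250 ms of white noise
    print(band_energy_percentages(fake_syllable, fs))
    print(looks_sibilant(fake_syllable, fs))
```

In the paper the band-energy profile is computed over a labelled corpus of consonant-vowel syllables; the threshold rule above merely shows how such a profile could be turned into a sibilant/non-sibilant decision.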

Bibliographic Details
Published in: International journal of electrical and computer engineering (Malacca, Malacca), 2022-02, Vol. 12 (1), p. 936
Main authors: Elfahm, Youssef; Abajaddi, Nesrine; Mounir, Badia; Elmaazouzi, Laila; Mounir, Ilham; Farchi, Abdelmajid
Format: Article
Language: English
Online access: Full text
DOI: 10.11591/ijece.v12i1.pp936-945
Publisher: IAES Institute of Advanced Engineering and Science, Yogyakarta
ISSN: 2088-8708
EISSN: 2722-2578
Subjects:
Acoustics
Algorithms
Articulation (speech)
Classification
Consonants (speech)
Energy distribution
Frequencies
Neural networks
Speech recognition
Support vector machines
Task complexity
Voice recognition