Tunisian dialect recognition based on hybrid techniques

In this research paper, an Arabic automatic speech recognition system is implemented to recognize the ten Arabic digits (zero to nine) spoken in the Tunisian dialect (Darija). The system is divided into two main modules: a feature extraction module, which combines several conventional feature extraction techniques, and a recognition module based on Feed-Forward Back-Propagation Neural Networks (FFBPNN). For this purpose, four dedicated oral corpora are prepared, each recorded by five speakers, with every speaker pronouncing the ten digits five times. The chosen speakers differ in gender, age and physiological condition. We focus our experiments on a speaker-dependent system and also examine the speaker-independent case. The obtained recognition performance is nearly ideal, reaching up to 98.5% when the feature extraction phase uses the Perceptual Linear Prediction technique (PLP), followed first by its first-order temporal derivative (ΔPLP) and second by Vector Quantization using the Linde-Buzo-Gray algorithm (VQLBG).
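The pipeline described in this abstract — PLP features, their first-order derivative, LBG vector quantization, and a feed-forward back-propagation classifier — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the PLP frames are a random placeholder (real PLP extraction would come from a dedicated speech toolkit), the delta window width, codebook size and network shape are assumed values, and scikit-learn's MLPClassifier stands in for the paper's FFBPNN.

```python
# Hedged sketch: PLP frames -> delta (ΔPLP) -> LBG vector quantization -> feed-forward classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier  # stands in for the paper's FFBPNN

def delta(features, width=2):
    """First-order temporal derivative (ΔPLP) using a simple regression window."""
    padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
    denom = 2 * sum(k * k for k in range(1, width + 1))
    return np.array([
        sum(k * (padded[t + width + k] - padded[t + width - k]) for k in range(1, width + 1)) / denom
        for t in range(len(features))
    ])

def lbg_codebook(vectors, size=16, eps=0.01, iters=20):
    """Linde-Buzo-Gray VQ: grow the codebook by splitting centroids, then refine by nearest-neighbour updates."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split every centroid
        for _ in range(iters):
            dists = np.linalg.norm(vectors[:, None] - codebook[None], axis=2)
            labels = dists.argmin(axis=1)
            for c in range(len(codebook)):
                if np.any(labels == c):
                    codebook[c] = vectors[labels == c].mean(axis=0)
    return codebook

# Toy example: random frames standing in for one utterance of a spoken digit.
rng = np.random.default_rng(0)
plp_frames = rng.normal(size=(80, 13))              # 80 frames x 13 PLP coefficients (placeholder)
feats = np.hstack([plp_frames, delta(plp_frames)])  # PLP + ΔPLP
codebook = lbg_codebook(feats, size=16)             # fixed-size VQLBG codebook for this utterance

# The flattened codebook gives one fixed-length vector per utterance for the classifier.
X = codebook.reshape(1, -1)
y = [3]  # hypothetical digit label
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
# clf.fit(X_train, y_train)  # with a real corpus of such utterance vectors
```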

Bibliographic Details
Published in: International Arab Journal of Information Technology, 2018, Vol. 15 (1), p. 58-65
Main authors: Husayni, Muhammad; Bu Said, Lutfi; Masud, Hassani
Format: Article
Language: English
ISSN: 1683-3198
Publisher: Zarqa University, Zarqa, Jordan
Online access: Full text