Model Adaptation for ASR in low-resource Indian Languages

Automatic speech recognition (ASR) performance has improved drastically in recent years, mainly enabled by self-supervised learning (SSL) based acoustic models such as wav2vec2 and by large-scale multilingual training as in Whisper. A huge challenge still exists for low-resource languages, where the availability of both audio and text is limited. This is further complicated by the presence of multiple dialects, as in Indian languages. However, many Indian languages can be grouped into the same families and share the same script and grammatical structure. This is where adaptation and fine-tuning techniques can be applied to overcome the low-resource nature of the data by utilising well-resourced similar languages. In such scenarios, it is important to understand the extent to which each modality, such as acoustics and text, matters in building a reliable ASR system. It could be the case that an abundance of acoustic data in a language reduces the need for large text-only corpora. Or, due to the availability of various pretrained acoustic models, the reverse could also be true. In this proposed special session, we encourage the community to explore these ideas with data in two low-resource Indian languages, Bengali and Bhojpuri. These approaches are not limited to Indian languages; the solutions are potentially applicable to various languages spoken around the world.
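The abstract does not prescribe an implementation, but the adaptation idea it describes (reusing a well-resourced multilingual acoustic model for a low-resource target language) is commonly realised by fine-tuning a pretrained SSL checkpoint with a fresh CTC head. The sketch below illustrates that recipe with the Hugging Face transformers library; the checkpoint name, the "vocab.json" character vocabulary, and the layer-freezing choice are assumptions for illustration, not details taken from the paper.

# Illustrative sketch only (not from the paper): adapt a multilingual
# self-supervised wav2vec2 checkpoint to a low-resource target language
# (e.g. Bengali) by attaching a new CTC head over its character vocabulary.
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)

# Character-level tokenizer built from a target-language vocabulary file
# (assumed to exist as "vocab.json").
tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0,
    do_normalize=True, return_attention_mask=True,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Load a multilingual SSL acoustic model and attach a new CTC output layer
# sized to the target-language vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",          # assumed checkpoint
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

# Freeze the convolutional feature encoder so the limited target-language audio
# only updates the transformer layers and the new CTC head during fine-tuning.
model.freeze_feature_encoder()

In practice one would then fine-tune this model on the paired audio-text data for the target language and could additionally combine it with a language model trained on text-only corpora, which is exactly the acoustics-versus-text trade-off the proposed session asks participants to study.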

Authors: Singh, Abhayjeet; Mehta, Arjun Singh; S, Ashish Khuraishi K; G, Deekshitha; Date, Gauri; Nanavati, Jai; Bandekar, Jesuraja; Basumatary, Karnalius; P, Karthika; Badiger, Sandhya; Udupa, Sathvik; Kumar, Saurabh; Savitha; Ghosh, Prasanta Kumar; V, Prashanthi; Pai, Priyanka; Nanavati, Raoul; Saxena, Rohan; Mora, Sai Praneeth Reddy; Raghavan, Srinivasa
Format: Article
Language: English
Online access: https://arxiv.org/abs/2307.07948
DOI: 10.48550/arXiv.2307.07948
Published: 2023-07-16
Rights: http://creativecommons.org/licenses/by/4.0 (free to read)
Source: arXiv.org
Subjects: Computer Science - Computation and Language