Contextual Biasing of Named-Entities with Large Language Models

This paper studies contextual biasing with Large Language Models (LLMs), where additional contextual information is provided to an LLM during second-pass rescoring to boost Automatic Speech Recognition (ASR) performance. We propose to leverage prompts for an LLM without fine-tuning during rescoring; these prompts incorporate a biasing list and few-shot examples that serve as additional information when calculating the score for a hypothesis. In addition to few-shot prompt learning, we propose multi-task training of the LLM to predict both the entity class and the next token. To improve the efficiency of contextual biasing and to avoid exceeding LLMs' maximum sequence lengths, we propose dynamic prompting, where we select the most likely class using the class tag prediction and use only entities in this class as context for next-token prediction. Word Error Rate (WER) evaluation is performed on i) an internal calling, messaging, and dictation dataset, and ii) the SLUE-Voxpopuli dataset. Results indicate that biasing lists and few-shot examples achieve 17.8% and 9.6% relative improvement over first-pass ASR, and that multi-task training and dynamic prompting achieve 20.0% and 11.3% relative WER improvement, respectively.
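The dynamic-prompting scheme described in the abstract can be sketched as follows. This is a hedged illustration only: the entity classes, biasing lists, few-shot strings, `toy_llm_score` stub, and the interpolation weight `lam` are all assumptions for the sake of a runnable example, not the authors' implementation. In particular, `toy_llm_score` stands in for a real LLM log-likelihood computed over the constructed prompt.

```python
# Illustrative sketch: dynamic prompting for second-pass rescoring.
# Only entities of the predicted class are placed in the prompt, keeping
# it short and within the LLM's maximum sequence length.

FEW_SHOT = [
    "Entities (contact): Alice Chen | Utterance: call Alice Chen",
    "Entities (app): Messenger | Utterance: open Messenger",
]

# Per-class biasing lists (hypothetical examples).
BIASING_LISTS = {
    "contact": ["Yingyi Ma", "Zeeshan Ahmed"],
    "app": ["Messenger", "WhatsApp"],
}

def build_dynamic_prompt(predicted_class: str, hypothesis: str) -> str:
    """Prompt = few-shot examples + entities of the predicted class + hypothesis."""
    entities = ", ".join(BIASING_LISTS.get(predicted_class, []))
    shots = "\n".join(FEW_SHOT)
    return f"{shots}\nEntities ({predicted_class}): {entities}\nHypothesis: {hypothesis}"

def toy_llm_score(predicted_class: str, hypothesis: str) -> float:
    # Placeholder for the LLM's log-score of the hypothesis given the prompt:
    # here it simply rewards hypotheses containing an in-context biasing entity.
    entities = BIASING_LISTS.get(predicted_class, [])
    return sum(1.0 for e in entities if e.lower() in hypothesis.lower())

def rescore(hypotheses, predicted_class, lam=0.5):
    """Pick the best hypothesis by first-pass ASR score + lam * LLM score."""
    best_text, best_score = None, float("-inf")
    for text, asr_score in hypotheses:
        _prompt = build_dynamic_prompt(predicted_class, text)  # would be fed to the LLM
        score = asr_score + lam * toy_llm_score(predicted_class, text)
        if score > best_score:
            best_text, best_score = text, score
    return best_text
```

Under these toy scores, `rescore([("call ying yi ma", -1.2), ("call Yingyi Ma", -1.5)], "contact")` prefers the hypothesis matching the biasing entity, even though it scored lower in the first pass.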

Detailed description

Saved in:
Bibliographic details
Main authors: Sun, Chuanneng, Ahmed, Zeeshan, Ma, Yingyi, Liu, Zhe, Kabela, Lucas, Pang, Yutong, Kalinli, Ozlem
Format: Article
Language: eng
Subjects:
Online access: Order full text
description This paper studies contextual biasing with Large Language Models (LLMs), where additional contextual information is provided to an LLM during second-pass rescoring to boost Automatic Speech Recognition (ASR) performance. We propose to leverage prompts for an LLM without fine-tuning during rescoring; these prompts incorporate a biasing list and few-shot examples that serve as additional information when calculating the score for a hypothesis. In addition to few-shot prompt learning, we propose multi-task training of the LLM to predict both the entity class and the next token. To improve the efficiency of contextual biasing and to avoid exceeding LLMs' maximum sequence lengths, we propose dynamic prompting, where we select the most likely class using the class tag prediction and use only entities in this class as context for next-token prediction. Word Error Rate (WER) evaluation is performed on i) an internal calling, messaging, and dictation dataset, and ii) the SLUE-Voxpopuli dataset. Results indicate that biasing lists and few-shot examples achieve 17.8% and 9.6% relative improvement over first-pass ASR, and that multi-task training and dynamic prompting achieve 20.0% and 11.3% relative WER improvement, respectively.
DOI: 10.48550/arxiv.2309.00723
recordid cdi_arxiv_primary_2309_00723
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
Computer Science - Sound