P-ICL: Point In-Context Learning for Named Entity Recognition with Large Language Models

In recent years, the rise of large language models (LLMs) has made it possible to directly achieve named entity recognition (NER) without any demonstration samples or only using a few samples through in-context learning (ICL). However, standard ICL only helps LLMs understand task instructions, format and input-label mapping, but neglects the particularity of the NER task itself. In this paper, we propose a new prompting framework P-ICL to better achieve NER with LLMs, in which some point entities are leveraged as the auxiliary information to recognize each entity type. With such significant information, the LLM can achieve entity classification more precisely. To obtain optimal point entities for prompting LLMs, we also proposed a point entity selection method based on K-Means clustering. Our extensive experiments on some representative NER benchmarks verify the effectiveness of our proposed strategies in P-ICL and point entity selection.
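The abstract describes selecting "point entities" per entity type via K-Means clustering over entity representations, then using the representatives in the prompt. A minimal sketch of that idea, assuming toy 2-D embeddings and illustrative entity names (the paper's actual embedding model, K, and initialization are not specified here; the function names and data are hypothetical):

```python
# Hypothetical sketch: cluster entity-mention embeddings with a simple
# K-Means, then pick for each cluster the entity nearest its centroid as a
# representative "point entity" for the NER prompt. All names, toy
# embeddings, and the choice of K are illustrative, not the paper's setup.
import math

def dist(a, b):
    # Euclidean distance between two equal-length vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, k, iters=50):
    # deterministic init: first k vectors serve as initial centroids
    centroids = [list(v) for v in vectors[:k]]
    for _ in range(iters):
        # assign each vector to its nearest centroid
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k), key=lambda c: dist(v, centroids[c]))
            clusters[j].append(v)
        # recompute each centroid as its cluster mean (skip empty clusters)
        for j, cluster in enumerate(clusters):
            if cluster:
                centroids[j] = [sum(dim) / len(cluster) for dim in zip(*cluster)]
    return centroids

def select_point_entities(entities, embeddings, k):
    # one representative entity per cluster: the one closest to the centroid
    centroids = kmeans(embeddings, k)
    picks = []
    for c in centroids:
        i = min(range(len(embeddings)), key=lambda i: dist(embeddings[i], c))
        picks.append(entities[i])
    return picks

# toy person-type entity mentions with two well-separated embedding groups
entities = ["Alice", "Bob", "Carol", "Dave"]
embeddings = [[0.0, 0.0], [0.3, 0.1], [5.0, 5.0], [5.3, 5.1]]
print(select_point_entities(entities, embeddings, k=2))
```

The selected entities would then be listed alongside each entity type in the prompt, giving the LLM concrete anchors for classification; a real implementation would use embeddings from a pretrained encoder rather than hand-made vectors.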

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Jiang, Guochao, Ding, Zepeng, Shi, Yuchen, Yang, Deqing
Format: Article
Language: eng
Subjects: Computer Science - Computation and Language
Online Access: Order full text
creator Jiang, Guochao; Ding, Zepeng; Shi, Yuchen; Yang, Deqing
description In recent years, the rise of large language models (LLMs) has made it possible to directly achieve named entity recognition (NER) without any demonstration samples or only using a few samples through in-context learning (ICL). However, standard ICL only helps LLMs understand task instructions, format and input-label mapping, but neglects the particularity of the NER task itself. In this paper, we propose a new prompting framework P-ICL to better achieve NER with LLMs, in which some point entities are leveraged as the auxiliary information to recognize each entity type. With such significant information, the LLM can achieve entity classification more precisely. To obtain optimal point entities for prompting LLMs, we also proposed a point entity selection method based on K-Means clustering. Our extensive experiments on some representative NER benchmarks verify the effectiveness of our proposed strategies in P-ICL and point entity selection.
doi_str_mv 10.48550/arxiv.2405.04960
format Article
creationdate 2024-05-08
rights http://creativecommons.org/licenses/by-sa/4.0
oa free_for_read
link https://arxiv.org/abs/2405.04960
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2405.04960
language eng
recordid cdi_arxiv_primary_2405_04960
source arXiv.org
subjects Computer Science - Computation and Language
title P-ICL: Point In-Context Learning for Named Entity Recognition with Large Language Models