"You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation
creator | Ettinger, Allyson; Hwang, Jena D; Pyatkin, Valentina; Bhagavatula, Chandra; Choi, Yejin |
description | Large language models (LLMs) show amazing proficiency and fluency in the use
of language. Does this mean that they have also acquired insightful linguistic
knowledge about the language, to an extent that they can serve as an "expert
linguistic annotator"? In this paper, we examine the successes and limitations
of the GPT-3, ChatGPT, and GPT-4 models in analysis of sentence meaning
structure, focusing on the Abstract Meaning Representation (AMR; Banarescu et
al. 2013) parsing formalism, which provides rich graphical representations of
sentence meaning structure while abstracting away from surface forms. We
compare models' analysis of this semantic structure across two settings: 1)
direct production of AMR parses based on zero- and few-shot prompts, and 2)
indirect partial reconstruction of AMR via metalinguistic natural language
queries (e.g., "Identify the primary event of this sentence, and the predicate
corresponding to that event."). Across these settings, we find that models can
reliably reproduce the basic format of AMR, and can often capture core event,
argument, and modifier structure -- however, model outputs are prone to
frequent and major errors, and holistic analysis of parse acceptability shows
that even with few-shot demonstrations, models have virtually 0% success in
producing fully accurate parses. Eliciting natural language responses produces
similar patterns of errors. Overall, our findings indicate that these models
out-of-the-box can capture aspects of semantic structure, but there remain key
limitations in their ability to support fully accurate semantic analyses or
parses. |
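For readers unfamiliar with the formalism named in the abstract: an AMR parse is a rooted, labeled graph, conventionally written in PENMAN notation. The sketch below is purely illustrative and is not taken from the paper. It uses the running example sentence from Banarescu et al. (2013), "The boy wants to go" (sense indices shown for illustration), together with the open-source `penman` Python package (an assumed tool, not one the paper names) to check that a candidate string is at least well-formed PENMAN, which is the kind of basic format check the abstract reports models passing even when the parse content is wrong.

```python
# Illustrative sketch only: the parse, prompt-free setup, and tooling are
# assumptions for exposition, not the paper's prompts or evaluation code.
# Requires the open-source `penman` package: pip install penman
import penman

# AMR for "The boy wants to go" in PENMAN notation: variables, concepts
# after "/", and numbered :ARG roles (sense labels shown for illustration).
candidate = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
"""

try:
    # penman.decode parses the string into a graph object and raises an
    # exception (DecodeError) if the string is not well-formed PENMAN.
    graph = penman.decode(candidate)
    print(f"well-formed AMR with {len(graph.triples)} triples")
except Exception as err:
    print(f"malformed AMR: {err}")
```

A format check like this only validates the notation; judging whether the graph actually captures the sentence's events, arguments, and modifiers still requires the kind of holistic analysis the paper performs.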
doi_str_mv | 10.48550/arxiv.2310.17793 |
format | Article |
creationdate | 2023-10-26 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2310.17793 |
language | eng |
recordid | cdi_arxiv_primary_2310_17793 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language |
title | "You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation |
url | https://arxiv.org/abs/2310.17793 |