Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text

Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent, since neural generation models have improved to the point where their outputs can often no longer be distinguished based on the surface-level features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for evaluation research and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze 66 generation papers from recent NLP conferences in how well they already follow these suggestions and identify which areas require more drastic changes to the status quo.

Bibliographic Details
Published in: The Journal of Artificial Intelligence Research, 2023-01, Vol. 77, pp. 103-166
Authors: Gehrmann, Sebastian; Clark, Elizabeth; Sellam, Thibault
Format: Article
Language: English
Online access: Full text
DOI: 10.1613/jair.1.13715
ISSN: 1076-9757