Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP

We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more/less reproducible. We present our results and findings, which include that just 13% of papers had (i) sufficiently low barriers to reproduction, and (ii) enough obtainable information, to be considered for reproduction, and that all but one of the experiments we selected for reproduction were found to have flaws that made the meaningfulness of conducting a reproduction questionable. As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach. Our overall (negative) finding, that the great majority of human evaluations in NLP are not repeatable and/or not reproducible and/or too flawed to justify reproduction, paints a dire picture, but presents an opportunity for a rethink of how to design and report human evaluations in NLP.

Bibliographic Details
Main Authors: Belz, Anya, Thomson, Craig, Reiter, Ehud, Abercrombie, Gavin, Alonso-Moral, Jose M, Arvan, Mohammad, Braggaar, Anouck, Cieliebak, Mark, Clark, Elizabeth, van Deemter, Kees, Dinkar, Tanvi, Dušek, Ondřej, Eger, Steffen, Fang, Qixiang, Gao, Mingqi, Gatt, Albert, Gkatzia, Dimitra, González-Corbelle, Javier, Hovy, Dirk, Hürlimann, Manuela, Ito, Takumi, Kelleher, John D, Klubicka, Filip, Krahmer, Emiel, Lai, Huiyuan, van der Lee, Chris, Li, Yiru, Mahamood, Saad, Mieskes, Margot, van Miltenburg, Emiel, Mosteiro, Pablo, Nissim, Malvina, Parde, Natalie, Plátek, Ondřej, Rieser, Verena, Ruan, Jie, Tetreault, Joel, Toral, Antonio, Wan, Xiaojun, Wanner, Leo, Watson, Lewis, Yang, Diyi
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online Access: Request full text
DOI: 10.48550/arxiv.2305.01633
Published: 2023-05-02
Source: arXiv.org