What does BERT Learn from Multiple-Choice Reading Comprehension Datasets?

Multiple-Choice Reading Comprehension (MCRC) requires the model to read the passage and question, and select the correct answer among the given options. Recent state-of-the-art models have achieved impressive performance on multiple MCRC datasets. However, such performance may not reflect the model's true ability of language understanding and reasoning.

Detailed description

Bibliographic Details
Main Authors: Si, Chenglei; Wang, Shuohang; Kan, Min-Yen; Jiang, Jing
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online Access: Order full text
creator Si, Chenglei; Wang, Shuohang; Kan, Min-Yen; Jiang, Jing
description Multiple-Choice Reading Comprehension (MCRC) requires the model to read the passage and question, and select the correct answer among the given options. Recent state-of-the-art models have achieved impressive performance on multiple MCRC datasets. However, such performance may not reflect the model's true ability of language understanding and reasoning. In this work, we adopt two approaches to investigate what BERT learns from MCRC datasets: 1) an un-readable data attack, in which we add keywords to confuse BERT, leading to a significant performance drop; and 2) un-answerable data training, in which we train BERT on partial or shuffled input. Under un-answerable data training, BERT achieves unexpectedly high performance. Based on our experiments on 5 key MCRC datasets (RACE, MCTest, MCScript, MCScript2.0, DREAM), we observe that 1) fine-tuned BERT mainly learns how keywords lead to correct prediction, instead of learning semantic understanding and reasoning; 2) BERT does not need correct syntactic information to solve the task; and 3) there exist artifacts in these datasets such that they can be solved even without the full context.
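To make the two probing setups in the description concrete, the short Python sketch below constructs perturbed MCRC inputs: a keyword attack that appends distractor keywords to the passage, and shuffled or passage-free inputs for un-answerable data training. This is a minimal sketch under assumptions of my own (the toy example, function names, and exact perturbation rules are illustrative, not taken from the paper); the perturbed examples would then be encoded and fed to a standard multiple-choice BERT fine-tuning pipeline such as BertForMultipleChoice from the transformers library.

import random

# Hypothetical RACE-style example: passage, question, answer options, gold label.
example = {
    "passage": "Tom went to the market and bought three apples for his sister.",
    "question": "What did Tom buy at the market?",
    "options": ["three apples", "a book", "two oranges", "a new hat"],
    "label": 0,
}

def keyword_attack(passage, options, distractor_idx):
    # Un-readable data attack (illustrative): append the keywords of a wrong
    # option to the passage, so surface keyword matching favours the distractor.
    return passage + " " + " ".join(options[distractor_idx].split())

def shuffle_words(text, seed=0):
    # Un-answerable input (illustrative): shuffle the word order, destroying
    # syntax while keeping the same bag of words.
    words = text.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def drop_passage(ex):
    # Un-answerable input (illustrative): keep only question + options,
    # removing the context the question is supposed to require.
    return {**ex, "passage": ""}

attacked_passage = keyword_attack(example["passage"], example["options"], distractor_idx=1)
shuffled_example = {**example, "passage": shuffle_words(example["passage"])}
question_only_example = drop_passage(example)

print(attacked_passage)
print(shuffled_example["passage"])

If a fine-tuned model still scores well above chance on the question-only or shuffled variants, that would point to exploitable artifacts in the dataset rather than genuine reading comprehension, which is the kind of evidence the paper reports.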
doi 10.48550/arxiv.1910.12391
format Article
identifier DOI: 10.48550/arxiv.1910.12391
language eng
source arXiv.org
subjects Computer Science - Computation and Language
title What does BERT Learn from Multiple-Choice Reading Comprehension Datasets?
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-03T06%3A02%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=What%20does%20BERT%20Learn%20from%20Multiple-Choice%20Reading%20Comprehension%20Datasets?&rft.au=Si,%20Chenglei&rft.date=2019-10-27&rft_id=info:doi/10.48550/arxiv.1910.12391&rft_dat=%3Carxiv_GOX%3E1910_12391%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true