Incremental Reading for Question Answering
Any system which performs goal-directed continual learning must not only learn incrementally but process and absorb information incrementally. Such a system also has to understand when its goals have been achieved. In this paper, we consider these issues in the context of question answering. Current state-of-the-art question answering models reason over an entire passage, not incrementally. As we will show, naive approaches to incremental reading, such as restriction to unidirectional language models in the model, perform poorly. We present extensions to the DocQA [2] model to allow incremental reading without loss of accuracy. The model also jointly learns to provide the best answer given the text that is seen so far and predict whether this best-so-far answer is sufficient.
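To make the described loop concrete, the following is a minimal, self-contained sketch (not the authors' implementation): the passage is consumed one sentence at a time, a best-so-far answer is maintained, and a separate sufficiency prediction decides when reading can stop. The functions `score_answer` and `sufficiency` are toy heuristics standing in for the learned DocQA-style components; all names and signatures here are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch of incremental reading with a best-so-far answer
# and a stopping ("sufficiency") prediction, as described in the abstract.
from typing import List, Tuple


def score_answer(question: str, text_so_far: str) -> Tuple[str, float]:
    """Toy answer scorer: return the sentence with the most word overlap."""
    q_words = set(question.lower().split())
    best, best_score = "", 0.0
    for sent in text_so_far.split("."):
        overlap = len(q_words & set(sent.lower().split()))
        if overlap > best_score:
            best, best_score = sent.strip(), float(overlap)
    return best, best_score


def sufficiency(question: str, answer: str, score: float) -> float:
    """Toy sufficiency head: map the answer score to a stop probability."""
    return min(1.0, score / max(1, len(question.split())))


def incremental_read(question: str, sentences: List[str],
                     threshold: float = 0.5) -> str:
    """Read sentence by sentence; stop once the best-so-far answer looks sufficient."""
    text_so_far = ""
    best_answer, best_score = "", 0.0
    for sent in sentences:
        text_so_far += sent + " "              # absorb one more sentence
        answer, score = score_answer(question, text_so_far)
        if score > best_score:                 # keep the best-so-far answer
            best_answer, best_score = answer, score
        if sufficiency(question, best_answer, best_score) >= threshold:
            break                              # predicted that enough has been read
    return best_answer


if __name__ == "__main__":
    passage = [
        "The Amazon is the largest rainforest on Earth.",
        "It spans nine countries in South America.",
        "Brazil contains about sixty percent of the forest.",
    ]
    print(incremental_read("Which country contains most of the Amazon rainforest?", passage))
```

In the paper, the answer scorer and the sufficiency predictor are trained jointly; the heuristics above only illustrate the control flow of reading incrementally and halting early.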
Saved in:
Main authors: | Abnar, Samira; Bedrax-weiss, Tania; Kwiatkowski, Tom; Cohen, William W |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language |
Online access: | Order full text |
creator | Abnar, Samira; Bedrax-weiss, Tania; Kwiatkowski, Tom; Cohen, William W |
doi_str_mv | 10.48550/arxiv.1901.04936 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1901.04936 |
language | eng |
recordid | cdi_arxiv_primary_1901_04936 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language |
title | Incremental Reading for Question Answering |
url | https://arxiv.org/abs/1901.04936 |