Discourse Analysis via Questions and Answers: Parsing Dependency Structures of Questions Under Discussion
Automatic discourse processing is bottlenecked by data: current discourse formalisms pose highly demanding annotation tasks involving large taxonomies of discourse relations, making them inaccessible to lay annotators. This work instead adopts the linguistic framework of Questions Under Discussion (QUD) for discourse analysis and seeks to derive QUD structures automatically. QUD views each sentence as an answer to a question triggered in prior context; thus, we characterize relationships between sentences as free-form questions, in contrast to exhaustive fine-grained taxonomies. We develop the first-of-its-kind QUD parser that derives a dependency structure of questions over full documents, trained using a large, crowdsourced question-answering dataset DCQA (Ko et al., 2022). Human evaluation results show that QUD dependency parsing is possible for language models trained with this crowdsourced, generalizable annotation scheme. We illustrate how our QUD structure is distinct from RST trees, and demonstrate the utility of QUD analysis in the context of document simplification. Our findings show that QUD parsing is an appealing alternative for automatic discourse processing.
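To make the structure described in the abstract concrete, the sketch below is a minimal illustration (not code or data from the paper; the sentences, questions, and names are invented) of the kind of dependency structure a QUD analysis produces: each sentence after the first answers one free-form question anchored in an earlier sentence, so the questions form a tree over the document.

```python
# Illustrative sketch only: a toy representation of a QUD dependency
# structure, where every non-initial sentence answers a free-form question
# anchored in an earlier sentence.
from dataclasses import dataclass


@dataclass
class QUDEdge:
    anchor: int     # index of the earlier sentence that triggers the question
    answer: int     # index of the sentence that answers it
    question: str   # free-form question linking the two


# Invented example document (three sentences).
sentences = [
    "The city council approved a new transit plan on Monday.",        # 0
    "The plan adds three bus lines to underserved neighborhoods.",    # 1
    "Funding will come from a regional sales tax passed last year.",  # 2
]

# Invented QUD edges: each later sentence answers a question anchored earlier.
edges = [
    QUDEdge(anchor=0, answer=1, question="What does the new transit plan involve?"),
    QUDEdge(anchor=0, answer=2, question="How will the plan be paid for?"),
]


def check_tree(edges, n_sentences):
    """Check the constraint the abstract describes: every non-root sentence
    answers exactly one question, and anchors precede their answers."""
    answered = {e.answer for e in edges}
    assert all(e.anchor < e.answer for e in edges), "anchors must precede answers"
    assert answered == set(range(1, n_sentences)), "each later sentence answers one question"


check_tree(edges, len(sentences))
for e in edges:
    print(f"S{e.anchor} -> S{e.answer}: {e.question}")
```

Printing the edges lists the anchor-to-answer links together with their questions, which is the same information a QUD dependency tree over a full document carries.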
Saved in:
Published in: | arXiv.org 2023-05 |
---|---|
Main authors: | Ko, Wei-Jen; Wu, Yating; Dalton, Cutter; Srinivas, Dananjay; Durrett, Greg; Li, Junyi Jessy |
Format: | Article |
Language: | eng |
Subjects: | Annotations; Context; Data collection; Discourse analysis; Documents; Free form; Human performance; Parsers; Questions; Sentences; Taxonomy |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Ko, Wei-Jen; Wu, Yating; Dalton, Cutter; Srinivas, Dananjay; Durrett, Greg; Li, Junyi Jessy |
description | Automatic discourse processing is bottlenecked by data: current discourse formalisms pose highly demanding annotation tasks involving large taxonomies of discourse relations, making them inaccessible to lay annotators. This work instead adopts the linguistic framework of Questions Under Discussion (QUD) for discourse analysis and seeks to derive QUD structures automatically. QUD views each sentence as an answer to a question triggered in prior context; thus, we characterize relationships between sentences as free-form questions, in contrast to exhaustive fine-grained taxonomies. We develop the first-of-its-kind QUD parser that derives a dependency structure of questions over full documents, trained using a large, crowdsourced question-answering dataset DCQA (Ko et al., 2022). Human evaluation results show that QUD dependency parsing is possible for language models trained with this crowdsourced, generalizable annotation scheme. We illustrate how our QUD structure is distinct from RST trees, and demonstrate the utility of QUD analysis in the context of document simplification. Our findings show that QUD parsing is an appealing alternative for automatic discourse processing. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2724403545 |
source | Free E-Journals |
subjects | Annotations; Context; Data collection; Discourse analysis; Documents; Free form; Human performance; Parsers; Questions; Sentences; Taxonomy |
title | Discourse Analysis via Questions and Answers: Parsing Dependency Structures of Questions Under Discussion |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T16%3A28%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Discourse%20Analysis%20via%20Questions%20and%20Answers:%20Parsing%20Dependency%20Structures%20of%20Questions%20Under%20Discussion&rft.jtitle=arXiv.org&rft.au=Wei-Jen,%20Ko&rft.date=2023-05-12&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2724403545%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2724403545&rft_id=info:pmid/&rfr_iscdi=true |