Controlling hallucinations at word level in data-to-text generation
Data-to-Text Generation (DTG) is a subfield of Natural Language Generation that aims at transcribing structured data into natural language descriptions. The field has recently been boosted by the use of neural-based generators, which on the one hand exhibit great syntactic skill without the need for hand-crafted pipelines; on the other hand, the quality of the generated text reflects the quality of the training data, which in realistic settings only offers imperfectly aligned structure-text pairs. Consequently, state-of-the-art neural models include misleading statements, usually called hallucinations, in their outputs. Controlling this phenomenon is today a major challenge for DTG, and it is the problem addressed in this paper.
Saved in:
Published in: | Data mining and knowledge discovery 2022-01, Vol.36 (1), p.318-354 |
---|---|
Main authors: | Rebuffel, Clement ; Roberti, Marco ; Soulier, Laure ; Scoutheeten, Geoffrey ; Cancelliere, Rossella ; Gallinari, Patrick |
Format: | Article |
Language: | eng |
Subjects: | Alignment ; Artificial Intelligence ; Computer Science ; Data Mining and Knowledge Discovery ; Hallucinations ; Natural language ; Structured data ; Training |
Online access: | Full text |
container_end_page | 354 |
---|---|
container_issue | 1 |
container_start_page | 318 |
container_title | Data mining and knowledge discovery |
container_volume | 36 |
creator | Rebuffel, Clement ; Roberti, Marco ; Soulier, Laure ; Scoutheeten, Geoffrey ; Cancelliere, Rossella ; Gallinari, Patrick |
description | Data-to-Text Generation (DTG) is a subfield of Natural Language Generation that aims at transcribing structured data into natural language descriptions. The field has recently been boosted by the use of neural-based generators, which on the one hand exhibit great syntactic skill without the need for hand-crafted pipelines; on the other hand, the quality of the generated text reflects the quality of the training data, which in realistic settings only offers imperfectly aligned structure-text pairs. Consequently, state-of-the-art neural models include misleading statements, usually called hallucinations, in their outputs. Controlling this phenomenon is today a major challenge for DTG, and it is the problem addressed in this paper. Previous work deals with this issue at the instance level, using an alignment score for each table-reference pair. In contrast, we propose a finer-grained approach, arguing that hallucinations should instead be treated at the word level. Specifically, we propose a Multi-Branch Decoder that is able to leverage word-level labels to learn the relevant parts of each training instance. These labels are obtained with a simple and efficient scoring procedure based on co-occurrence analysis and dependency parsing. Extensive evaluations, via automated metrics and human judgment on the standard WikiBio benchmark, show the accuracy of our alignment labels and the effectiveness of the proposed Multi-Branch Decoder. Our model is able to reduce and control hallucinations while keeping fluency and coherence in the generated texts. Further experiments on a degraded version of ToTTo show that our model could be used successfully in very noisy settings. |
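The word-level labeling idea described in the abstract (scoring each reference token by co-occurrence with the source table) can be sketched as a toy example. This is an illustrative simplification, not the authors' actual procedure: the table, tokenization, and exact-match criterion below are hypothetical, and the dependency-parsing component the paper combines with co-occurrence analysis is omitted entirely.

```python
def word_level_labels(table, reference_tokens):
    """Label each reference token 1 (supported by the table) or
    0 (a candidate hallucination), using naive co-occurrence only."""
    # Collect every word appearing in the table's keys and values.
    table_vocab = set()
    for key, value in table.items():
        table_vocab.update(key.lower().split())
        table_vocab.update(str(value).lower().split())
    # A token is "aligned" if it occurs somewhere in the table.
    return [1 if tok.lower().strip(".,") in table_vocab else 0
            for tok in reference_tokens]

# Hypothetical WikiBio-style record and reference sentence.
table = {"name": "John Smith", "birth_date": "1970"}
tokens = "John Smith , born 1970 , was a famous painter .".split()
labels = word_level_labels(table, tokens)
# "famous painter" is unsupported by the table, so those tokens get label 0.
```

A real scorer would have to handle paraphrases and date formats, which is presumably where the paper's dependency parsing comes in; this sketch only shows why per-token labels are finer-grained than a single per-instance alignment score.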
doi_str_mv | 10.1007/s10618-021-00801-4 |
format | Article |
publisher | Springer US, New York |
eissn | 1573-756X |
rights | The Author(s) 2021; published under a Creative Commons Attribution 4.0 International License |
fulltext | fulltext |
identifier | ISSN: 1384-5810 |
ispartof | Data mining and knowledge discovery, 2022-01, Vol.36 (1), p.318-354 |
issn | 1384-5810 ; 1573-756X |
language | eng |
recordid | cdi_hal_primary_oai_HAL_hal_03479792v1 |
source | SpringerNature Journals |
subjects | Alignment ; Artificial Intelligence ; Chemistry and Earth Sciences ; Computer Science ; Data Mining and Knowledge Discovery ; Document and Text Processing ; Hallucinations ; Information Storage and Retrieval ; Labels ; Natural language ; Physics ; Special Issue of the Journal Track of ECML PKDD 2022 ; Statistics for Engineering ; Structured data ; Training |
title | Controlling hallucinations at word level in data-to-text generation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-17T05%3A31%3A19IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_hal_p&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Controlling%20hallucinations%20at%20word%20level%20in%20data-to-text%20generation&rft.jtitle=Data%20mining%20and%20knowledge%20discovery&rft.au=Rebuffel,%20Clement&rft.date=2022-01-01&rft.volume=36&rft.issue=1&rft.spage=318&rft.epage=354&rft.pages=318-354&rft.issn=1384-5810&rft.eissn=1573-756X&rft_id=info:doi/10.1007/s10618-021-00801-4&rft_dat=%3Cproquest_hal_p%3E2621934553%3C/proquest_hal_p%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2621934553&rft_id=info:pmid/&rfr_iscdi=true |