Learning to activate logic rules for textual reasoning

Most current textual reasoning models cannot learn a human-like reasoning process, and thus lack interpretability and logical accuracy. To help address this issue, we propose a novel reasoning model which learns to activate logic rules explicitly via deep reinforcement learning. It takes the form of Memory Networks but features a special memory that stores relational tuples, mimicking the "Image Schema" in human cognitive activities. We redefine textual reasoning as a sequential decision-making process that modifies or retrieves from the memory, where logic rules serve as state-transition functions. Activating logic rules for reasoning involves two problems, variable binding and relation activating, and ours is a first step toward solving them jointly. Our model achieves an average error rate of 0.7% on bAbI-20, a widely used synthetic reasoning benchmark, using fewer than 1k training samples and no supporting facts.
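
The core idea in the abstract, a memory of relational tuples that is modified or queried step by step by activating logic rules under variable bindings, can be illustrated with a small sketch. The Python below is a hypothetical toy, not the authors' code: all names (TupleMemory, Rule, step) are invented, the two rules mirror a single bAbI-style location task, and the (rule, binding) choices are hard-coded here where the paper learns them with deep reinforcement learning.

```python
# Minimal sketch (assumptions noted above): a relational-tuple memory,
# logic rules as state-transition functions, and a reasoning step that
# activates one rule under one variable binding.

from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

Triple = Tuple[str, str, str]  # (entity, relation, value), e.g. ("mary", "at", "kitchen")

@dataclass
class TupleMemory:
    """Relational-tuple store, playing the role of an 'image schema'."""
    facts: List[Triple] = field(default_factory=list)

    def write(self, fact: Triple) -> None:
        # Overwrite any tuple with the same entity and relation, so the
        # memory tracks the *current* state (e.g. Mary's latest location).
        self.facts = [f for f in self.facts if (f[0], f[1]) != (fact[0], fact[1])]
        self.facts.append(fact)

    def read(self, entity: str, relation: str) -> Optional[str]:
        for e, r, v in self.facts:
            if e == entity and r == relation:
                return v
        return None

@dataclass
class Rule:
    """A logic rule as a state-transition function: given the memory and a
    variable binding, it modifies or retrieves from the memory."""
    name: str
    apply: Callable[[TupleMemory, Tuple[str, ...]], Optional[str]]

RULES = [
    # "X moved to Y" -> write (X, at, Y)
    Rule("move", lambda m, b: m.write((b[0], "at", b[1]))),
    # "Where is X?" -> read (X, at, ?)
    Rule("where", lambda m, b: m.read(b[0], "at")),
]

def step(memory: TupleMemory, rule_index: int, binding: Tuple[str, ...]) -> Optional[str]:
    """One decision step: activate a rule under a variable binding.
    In the paper this joint (rule, binding) choice is made by a learned
    policy; here it is supplied by hand for illustration."""
    return RULES[rule_index].apply(memory, binding)

if __name__ == "__main__":
    mem = TupleMemory()
    step(mem, 0, ("mary", "kitchen"))   # "Mary moved to the kitchen."
    step(mem, 0, ("mary", "garden"))    # "Mary went to the garden."
    print(step(mem, 1, ("mary",)))      # "Where is Mary?" -> garden
```

The point of the toy is the action space: each step jointly picks which rule to activate (relation activating) and which arguments to bind (variable binding), which is exactly the pair of problems the abstract says the model learns to solve together.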

Bibliographic details
Published in: Neural networks, 2018-10, Vol. 106, pp. 42-49
Authors: Yao, Yiqun; Xu, Jiaming; Shi, Jing; Xu, Bo
Format: Article
Language: English
ISSN: 0893-6080
EISSN: 1879-2782
DOI: 10.1016/j.neunet.2018.06.012
PMID: 30025271
Publisher: Elsevier Ltd
Subjects:
Artificial Intelligence - trends
Decision Making - physiology
Humans
Image schema
Learning - physiology
Logic
Logic rules
Memory - physiology
Memory networks
Natural language reasoning
Problem Solving - physiology
Reinforcement learning