Reduced implication-bias logic loss for neuro-symbolic learning
Published in: | Machine learning, 2024-06, Vol. 113 (6), p. 3357-3377 |
---|---|
Main authors: | He, Hao-Yuan; Dai, Wang-Zhou; Li, Ming |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
container_end_page | 3377 |
---|---|
container_issue | 6 |
container_start_page | 3357 |
container_title | Machine learning |
container_volume | 113 |
creator | He, Hao-Yuan; Dai, Wang-Zhou; Li, Ming |
description | Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in the field of Neuro-Symbolic Learning. However, some differentiable operators could introduce significant biases during backpropagation, which can degrade the performance of Neuro-Symbolic systems. In this paper, we demonstrate that the loss functions derived from fuzzy logic operators commonly exhibit a bias, referred to as Implication Bias. To mitigate this bias, we propose a simple yet efficient method to transform the biased loss functions into Reduced Implication-bias Logic Loss (RILL). Empirical studies demonstrate that RILL outperforms the biased logic loss functions, especially when the knowledge base is incomplete or the supervised training data is insufficient. |
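The abstract does not spell out which differentiable operators are used or how RILL transforms them, but a minimal sketch can illustrate the kind of bias it describes. The example below assumes the Reichenbach fuzzy implication I(p, q) = 1 - p + p·q as the operator encoding a rule p → q; this choice and the PyTorch code are illustrative assumptions, not the paper's formulation.

```python
# Illustrative sketch only: a fuzzy-logic loss for the rule p -> q, using the
# Reichenbach implication I(p, q) = 1 - p + p*q as the differentiable operator.
# Assumed for illustration; this is NOT the paper's RILL formulation.
import torch

p = torch.tensor(0.6, requires_grad=True)  # predicted truth value of the antecedent
q = torch.tensor(0.3, requires_grad=True)  # predicted truth value of the consequent

implication = 1 - p + p * q      # I(p, q): the rule is fully satisfied when this equals 1
loss = -torch.log(implication)   # logic loss that pushes I(p, q) toward 1
loss.backward()

print(p.grad.item(), q.grad.item())
# dLoss/dp = (1 - q) / I > 0, so gradient descent can lower the loss simply by
# driving the antecedent toward 0 (making the rule vacuously true) rather than
# by making the consequent true -- the implication bias the abstract refers to.
```

According to the abstract, RILL transforms such biased logic losses to reduce this effect; the exact transformation is given in the article itself.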
doi_str_mv | 10.1007/s10994-023-06436-4 |
format | Article |
publisher | New York: Springer US |
fulltext | fulltext |
identifier | ISSN: 0885-6125 |
ispartof | Machine learning, 2024-06, Vol.113 (6), p.3357-3377 |
issn | 0885-6125 (print); 1573-0565 (electronic) |
language | eng |
recordid | cdi_proquest_journals_3053350364 |
source | SpringerLink Journals - AutoHoldings |
subjects | Artificial Intelligence; Back propagation; Bias; Cognition & reasoning; Computer Science; Control; Fuzzy logic; Knowledge; Knowledge bases (artificial intelligence); Logic programming; Machine Learning; Mechatronics; Natural Language Processing (NLP); Neural networks; Operators; Performance degradation; Robotics; Simulation and Modeling; Special Issue of the ACML 2023 |
title | Reduced implication-bias logic loss for neuro-symbolic learning |
url | https://doi.org/10.1007/s10994-023-06436-4 |