Differentiable Logic Machines

The integration of reasoning, learning, and decision-making is key to building more general artificial intelligence systems. As a step in this direction, we propose a novel neural-logic architecture, called differentiable logic machine (DLM), that can solve both inductive logic programming (ILP) and reinforcement learning (RL) problems, where the solution can be interpreted as a first-order logic program. Our proposition includes several innovations. Firstly, our architecture defines a restricted but expressive continuous relaxation of the space of first-order logic programs by assigning weights to predicates instead of rules, in contrast to most previous neural-logic approaches. Secondly, with this differentiable architecture, we propose several (supervised and RL) training procedures, based on gradient descent, which can recover a fully interpretable solution (i.e., a logic formula). Thirdly, to accelerate RL training, we also design a novel critic architecture that enables actor-critic algorithms. Fourthly, to solve hard problems, we propose an incremental training procedure that can learn a logic program progressively. Compared to state-of-the-art (SOTA) differentiable ILP methods, DLM successfully solves all the considered ILP problems with a higher percentage of successful seeds (up to 3.5$\times$). On RL problems, without requiring an interpretable solution, DLM outperforms other non-interpretable neural-logic RL approaches in terms of rewards (up to 3.9%). When enforcing interpretability, DLM can solve harder RL problems (e.g., Sorting, Path). Moreover, we show that deep logic programs can be learned via incremental supervised training. In addition to this excellent performance, DLM scales well in terms of memory and computational time, especially during the testing phase, where it can handle many more constants ($>$2$\times$) than SOTA.
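
To make the key idea of the abstract concrete, below is a minimal, illustrative sketch of what "assigning weights to predicates instead of rules" can look like: softmax-weighted mixtures over candidate predicate valuations, combined with fuzzy logic operators, so that the whole program is differentiable and a discrete logic program can be read off by taking argmaxes. This is not the authors' implementation; all names (soft_select, soft_and_compose, the hand-set scores) are hypothetical, and real training would optimize the scores by gradient descent.

```python
# Minimal NumPy sketch of weighting predicates rather than rules.
# Illustrative only, not the DLM code; names and scores are assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Ground binary predicates over 4 constants, as fuzzy truth tables in [0, 1].
edge = np.array([[0, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0]], dtype=float)
eq   = np.eye(4)
zero = np.zeros((4, 4))
candidates = np.stack([edge, eq, zero])   # candidate predicates to choose from

# Trainable scores: one score per candidate predicate, for each slot of the
# conjunctive body "P(x, y) AND Q(y, z)". Hand-set here for readability.
scores_P = np.array([2.0, -1.0, -1.0])
scores_Q = np.array([2.0, -1.0, -1.0])

def soft_select(scores, candidates):
    """Differentiable 'choice' of a predicate: softmax-weighted mixture."""
    w = softmax(scores)
    return np.tensordot(w, candidates, axes=1)   # (4, 4) fuzzy truth table

def soft_and_compose(P, Q):
    """Fuzzy conjunction with an existential join over the shared variable y:
    out(x, z) = max_y P(x, y) * Q(y, z) (product t-norm, max as 'exists')."""
    return np.max(P[:, :, None] * Q[None, :, :], axis=1)

P = soft_select(scores_P, candidates)
Q = soft_select(scores_Q, candidates)
two_step = soft_and_compose(P, Q)   # soft valuation of a length-2 path

# Interpretable readout: replace each softmax by its argmax to extract a
# discrete logic program once the weights have converged.
chosen = [int(np.argmax(s)) for s in (scores_P, scores_Q)]
print(np.round(two_step, 2))
print("chosen candidate predicates:", chosen)   # e.g. [0, 0] -> edge, edge
```

Because the mixture weights sit on predicates, the search space stays small enough to relax continuously while still admitting an exact, human-readable program at the end, which is the trade-off the abstract highlights against rule-weighting approaches.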

Bibliographic Details
Main authors: Zimmer, Matthieu; Feng, Xuening; Glanois, Claire; Jiang, Zhaohui; Zhang, Jianyi; Weng, Paul; Li, Dong; Hao, Jianye; Liu, Wulong
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence
Online access: https://arxiv.org/abs/2102.11529
DOI: 10.48550/arxiv.2102.11529
Date: 2021-02-23
Source: arXiv.org