Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction
Format: Article
Language: English
Online access: Order full text
Abstract: The limited priors required by neural networks make them the dominant choice for encoding and learning policies with reinforcement learning (RL). However, they are also black boxes, making it hard to understand an agent's behaviour, especially when it operates at the image level. Therefore, neuro-symbolic RL aims to create policies that are interpretable in the first place. Unfortunately, interpretability is not explainability. To achieve both, we introduce Neurally gUided Differentiable loGic policiEs (NUDGE). NUDGE exploits trained neural network-based agents to guide the search for candidate-weighted logic rules, then uses differentiable logic to train the logic agents. Our experimental evaluation demonstrates that NUDGE agents can induce interpretable and explainable policies while outperforming purely neural ones and adapting well to environments with different initial states and problem sizes.
DOI: 10.48550/arxiv.2306.01439
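
To make the abstract's core idea concrete, the sketch below shows what a candidate-weighted, differentiable logic policy can look like: each action is supported by weighted rules whose bodies are evaluated with a soft (product) conjunction over perceived atoms, so the rule weights are continuous quantities that RL could train by gradient descent. This is a minimal illustration under stated assumptions, not NUDGE's actual interface; the function names, atom names, and rule format are all hypothetical.

```python
import numpy as np

def fuzzy_and(atom_probs):
    # Soft conjunction of a rule body: product of atom probabilities in [0, 1].
    return float(np.prod(atom_probs))

def logic_policy(state_atoms, rules, weights, temperature=1.0):
    """Return a softmax distribution over actions from weighted logic rules.

    state_atoms: dict mapping ground-atom names to probabilities in [0, 1]
                 (e.g. produced by a perception module)
    rules:       list of (action, body_atoms) pairs (hypothetical format)
    weights:     one trainable scalar per rule
    """
    scores = {}
    for (action, body), w in zip(rules, weights):
        # A rule contributes its weight scaled by how "true" its body is.
        truth = fuzzy_and([state_atoms.get(atom, 0.0) for atom in body])
        scores[action] = scores.get(action, 0.0) + w * truth
    actions = list(scores)
    logits = np.array([scores[a] for a in actions]) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return dict(zip(actions, probs))

# Toy usage with two illustrative rules over an object-centric game state.
rules = [
    ("jump", ["close_by(agent, enemy)"]),
    ("right", ["right_of(door, agent)"]),
]
weights = [1.5, 0.7]  # in NUDGE-style training these would be optimised by RL
state = {"close_by(agent, enemy)": 0.9, "right_of(door, agent)": 0.4}
print(logic_policy(state, rules, weights))
```

Because every step (conjunction, weighted sum, softmax) is differentiable, gradients flow from a policy-gradient loss back into the rule weights, while the fired rules themselves remain human-readable, which is the interpretability-plus-explainability combination the abstract emphasises.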