Interpreting Neural Policies with Disentangled Tree Representations
| | |
|---|---|
| Main authors: | , , , , |
| Format: | Article |
| Language: | eng |
| Keywords: | |
| Online access: | Order full text |
Summary: The advancement of robots, particularly those functioning in complex human-centric environments, relies on control solutions driven by machine learning. Understanding how learning-based controllers make decisions is crucial, since robots are often safety-critical systems. This calls for a formal and quantitative understanding of the explanatory factors behind the interpretability of robot learning. In this paper, we study the interpretability of compact neural policies through the lens of disentangled representation. We leverage decision trees to obtain factors of variation [1] for disentanglement in robot learning; these factors encapsulate skills, behaviors, or strategies toward solving tasks. To assess how well networks uncover the underlying task dynamics, we introduce interpretability metrics that measure the disentanglement of learned neural dynamics from the perspectives of decision concentration, mutual information, and modularity. We demonstrate the connection between interpretability and disentanglement consistently across extensive experimental analysis.
DOI: 10.48550/arxiv.2210.06650
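
The summary above describes interpretability metrics that measure the disentanglement of learned neural dynamics via decision concentration, mutual information, and modularity. As a rough illustration of the mutual-information angle only (this is not the paper's implementation), the minimal Python sketch below scores how strongly each neuron of a policy network encodes a decision-tree-derived factor label such as a leaf id; the function names, quantile binning scheme, and synthetic data are assumptions made for this example.

```python
# Illustrative sketch only (not the authors' implementation): a per-neuron,
# mutual-information-style score of how strongly each neuron of a policy
# network encodes a decision-tree-derived factor label (e.g. a leaf id).
# Function names, binning scheme, and toy data below are assumptions.
import numpy as np

def discretize(x, n_bins=10):
    """Bin continuous neuron activations into integer levels via quantiles."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def mutual_information(a, b):
    """Mutual information (in nats) between two discrete label sequences."""
    joint = np.zeros((int(a.max()) + 1, int(b.max()) + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1.0
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

def neuron_factor_mi(activations, factors, n_bins=10):
    """
    activations: (T, n_neurons) neuron outputs collected along trajectories.
    factors:     (T,) integer factor labels, e.g. decision-tree leaf ids.
    Returns one score per neuron: its mutual information with the factor,
    normalized by the factor entropy (0 = uninformative, 1 = fully predictive).
    """
    _, counts = np.unique(factors, return_counts=True)
    p = counts / counts.sum()
    factor_entropy = -(p * np.log(p)).sum()
    scores = []
    for k in range(activations.shape[1]):
        z = discretize(activations[:, k], n_bins)
        scores.append(mutual_information(z, factors) / factor_entropy)
    return np.array(scores)

# Toy usage with synthetic data (hypothetical shapes and labels):
rng = np.random.default_rng(0)
factors = rng.integers(0, 3, size=2000)      # stand-in for decision-tree leaves
acts = rng.normal(size=(2000, 4))
acts[:, 0] += 2.0 * factors                  # neuron 0 tracks the factor
print(neuron_factor_mi(acts, factors))       # neuron 0 should score highest
```

Under this toy construction, a score near 1 means a single neuron's activation largely determines which tree-derived factor is active; the decision-concentration and modularity perspectives mentioned in the summary are not covered by this sketch.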