DeepDFA: Automata Learning through Neural Probabilistic Relaxations
Format: Article
Language: English
Abstract: In this work, we introduce DeepDFA, a novel approach to identifying Deterministic Finite Automata (DFAs) from traces, harnessing a differentiable yet discrete model. Inspired by both the probabilistic relaxation of DFAs and Recurrent Neural Networks (RNNs), our model offers interpretability post-training, alongside reduced complexity and enhanced training efficiency compared to traditional RNNs. Moreover, by leveraging gradient-based optimization, our method surpasses combinatorial approaches in both scalability and noise resilience. Validation experiments conducted on target regular languages of varying size and complexity demonstrate that our approach is accurate, fast, and robust to noise in both the input symbols and the output labels of training data, integrating the strengths of both logical grammar induction and deep learning.
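To make the idea of a probabilistic DFA relaxation concrete, the following is a minimal NumPy sketch, not the authors' implementation: transition logits per input symbol are made row-stochastic via softmax, a trace is processed RNN-style by propagating a distribution over states, and a soft acceptance probability is read out. All names (`ProbabilisticDFA`, `trans_logits`, `accept_logits`) are illustrative assumptions.

```python
# Illustrative sketch of a probabilistic relaxation of a DFA
# (not the paper's code): soft transition matrices replace the
# discrete transition function, making acceptance differentiable.
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

class ProbabilisticDFA:
    def __init__(self, n_states, n_symbols, seed=0):
        rng = np.random.default_rng(seed)
        # Learnable parameters (would be updated by gradient descent).
        self.trans_logits = rng.normal(size=(n_symbols, n_states, n_states))
        self.accept_logits = rng.normal(size=n_states)

    def forward(self, trace):
        """Map a sequence of symbol indices to an acceptance probability."""
        T = softmax(self.trans_logits, axis=-1)        # row-stochastic per symbol
        accept = 1.0 / (1.0 + np.exp(-self.accept_logits))  # soft accepting states
        belief = np.zeros(T.shape[1])
        belief[0] = 1.0                                # start in state 0
        for s in trace:
            belief = belief @ T[s]                     # propagate state distribution
        return float(belief @ accept)
```

After training, taking the argmax of each transition row and thresholding the acceptance weights would recover an ordinary discrete DFA, which is the sense in which such a model stays interpretable post-training.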
DOI: 10.48550/arxiv.2408.08622