Learning DAGs from Data with Few Root Causes
Format: Article
Language: English
Abstract: NeurIPS 2023. We present a novel perspective and algorithm for learning directed acyclic graphs (DAGs) from data generated by a linear structural equation model (SEM). First, we show that a linear SEM can be viewed as a linear transform that, in prior work, computes the data from a dense input vector of random-valued root causes (as we will call them) associated with the nodes. Instead, we consider the case of (approximately) few root causes and also introduce noise in the measurement of the data. Intuitively, this means that the DAG data is produced by few data-generating events whose effect percolates through the DAG. We prove identifiability in this new setting and show that the true DAG is the global minimizer of the $L^0$-norm of the vector of root causes. For data with few root causes, with and without noise, we show superior performance compared to prior DAG learning methods.
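The data model described in the abstract can be sketched in a few lines of numpy. This is an illustrative assumption of the setup, not the paper's implementation: a strictly upper-triangular weighted adjacency matrix `A` guarantees acyclicity, each sample has only `k` nonzero root causes, and the SEM acts as the linear transform $X = C(I - A)^{-1}$, so each root cause percolates to its descendants in the DAG.

```python
import numpy as np

# Hypothetical sketch of the few-root-causes data model (names A, C, X
# are illustrative, not taken from the paper's code).
rng = np.random.default_rng(0)
d, n, k = 5, 100, 1  # nodes, samples, nonzero root causes per sample

# Strictly upper-triangular weights -> the graph is acyclic by construction.
A = np.triu(rng.uniform(0.5, 1.0, size=(d, d)), k=1)

# Sparse root causes: each sample activates only k of the d nodes.
C = np.zeros((n, d))
for i in range(n):
    idx = rng.choice(d, size=k, replace=False)
    C[i, idx] = rng.uniform(0.5, 1.0, size=k)

# Linear SEM viewed as a linear transform: X = C (I - A)^{-1}.
# (I - A) is invertible because A is strictly triangular.
X = C @ np.linalg.inv(np.eye(d) - A)
```

Measurement noise, as considered in the paper, would correspond to adding a noise term to `X` after the transform.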
DOI: 10.48550/arxiv.2305.15936