MissDiff: Training Diffusion Models on Tabular Data with Missing Values
Format: | Article |
Language: | English |
---|---|
Abstract: | The diffusion model has shown remarkable performance in modeling data
distributions and synthesizing data. However, the vanilla diffusion model
requires complete or fully observed data for training. Incomplete data is a
common issue in various real-world applications, including healthcare and
finance, particularly when dealing with tabular datasets. This work presents a
unified and principled diffusion-based framework for learning from data with
missing values under various missing mechanisms. We first observe that the
widely adopted "impute-then-generate" pipeline may lead to a biased learning
objective. Then we propose to mask the regression loss of Denoising Score
Matching in the training phase. We prove the proposed method is consistent in
learning the score of data distributions, and the proposed training objective
serves as an upper bound for the negative likelihood in certain cases. The
proposed framework is evaluated on multiple tabular datasets using realistic
and efficacious metrics and is demonstrated to outperform the state-of-the-art
diffusion model for tabular data with the "impute-then-generate" pipeline by a
large margin. |
---|---|
DOI: | 10.48550/arxiv.2307.00467 |
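The abstract's central idea, masking the regression loss of Denoising Score Matching so that missing entries do not contribute to the training objective, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the Gaussian-noise DSM target, the `score_fn` interface, and the `masked_dsm_loss` name are all assumptions for the sketch.

```python
import numpy as np

def masked_dsm_loss(x, mask, score_fn, sigma=1.0, seed=0):
    """Denoising Score Matching loss with the regression error
    restricted to observed entries (mask == 1).

    x:        (n, d) data; values at missing entries are arbitrary
    mask:     (n, d) binary observation mask (1 = observed)
    score_fn: maps a noisy batch to a score estimate of shape (n, d)
    sigma:    noise level of the Gaussian perturbation
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(x.shape)
    x_noisy = x + sigma * noise
    # DSM regression target for Gaussian perturbation: -noise / sigma
    target = -noise / sigma
    residual = score_fn(x_noisy) - target
    # Mask out missing entries so only observed coordinates are penalized
    masked_sq = mask * residual**2
    return masked_sq.sum() / mask.sum()
```

Because the squared residual is multiplied by the mask before averaging, changing the placeholder values stored at missing entries leaves the loss unchanged, which is what makes the objective insensitive to any particular imputation.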