LatentExplainer: Explaining Latent Representations in Deep Generative Models with Multi-modal Foundation Models
Format: Article
Language: English
Abstract: Deep generative models such as VAEs and diffusion models have advanced various generation tasks by leveraging latent variables to learn data distributions and generate high-quality samples. Despite the strides made by explainable AI in interpreting machine learning models, understanding the latent variables of generative models remains challenging. This paper introduces LatentExplainer, a framework for automatically generating semantically meaningful explanations of latent variables in deep generative models. LatentExplainer tackles three main challenges: inferring the meaning of latent variables, aligning explanations with inductive biases, and handling varying degrees of explainability. Our approach perturbs latent variables, interprets the resulting changes in the generated data, and uses multi-modal large language models (MLLMs) to produce human-understandable explanations. We evaluate the proposed method on several real-world and synthetic datasets; the results demonstrate superior performance in generating high-quality explanations for latent variables and highlight how incorporating inductive biases and uncertainty quantification significantly enhances model interpretability.
DOI: 10.48550/arxiv.2406.14862
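
The abstract only sketches the core procedure: traverse individual latent variables, generate samples from the perturbed codes, and ask a multi-modal LLM to describe what changes. Below is a minimal illustrative sketch of that idea in Python, not the authors' implementation; decode and query_mllm are hypothetical stand-ins for a trained generative model's decoder and an MLLM API, and the prompt wording is invented.

```python
import numpy as np

# --- Hypothetical stand-ins (not the paper's code) -------------------------
def decode(z: np.ndarray) -> np.ndarray:
    """Placeholder for a trained generative model's decoder (e.g., a VAE)."""
    return np.zeros((64, 64, 3))  # dummy RGB image

def query_mllm(prompt: str, images: list) -> str:
    """Placeholder for a call to a multi-modal large language model."""
    return "description of the semantic factor that varies"

# --- Core idea: perturb one latent dimension, ask an MLLM to explain it ----
def explain_latent_dim(z_ref: np.ndarray, dim: int,
                       values=(-3.0, -1.5, 0.0, 1.5, 3.0)) -> str:
    frames = []
    for v in values:
        z = z_ref.copy()
        z[dim] = v                    # traverse a single latent coordinate
        frames.append(decode(z))      # generate a sample for each value
    prompt = ("These images were generated by varying one latent variable "
              "from low to high while holding the others fixed. In one "
              "sentence, describe the semantic factor that changes.")
    return query_mllm(prompt, frames)

# Example: explain dimension 0 of a 10-dimensional latent code.
print(explain_latent_dim(np.zeros(10), dim=0))
```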