Explaining latent representations of generative models with large multimodal models
Format: Article
Language: English
Abstract: Learning interpretable representations of the generative latent factors of data is an important topic for the development of artificial intelligence. Large multimodal models, which can align images with text to generate answers, open a new route to such interpretation. In this work, we propose a framework that comprehensively explains each latent variable in generative models using a large multimodal model. We further measure the uncertainty of the generated explanations, quantitatively evaluate the performance of explanation generation across multiple large multimodal models, and qualitatively visualize the variations of each latent variable to study how the disentanglement of different generative models affects the explanations. Finally, we discuss the explanatory capabilities and limitations of state-of-the-art large multimodal models.
DOI: 10.48550/arxiv.2402.01858
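
The abstract describes a latent-traversal pipeline: vary one latent variable of a generative model while holding the others fixed, decode the resulting images, and ask a large multimodal model to explain what changes. The following is a minimal sketch of that idea, not the authors' implementation; the `Decoder` network and the commented `query_lmm` call are hypothetical stand-ins for a trained VAE-style decoder and a multimodal-model API.

```python
# Minimal sketch of latent traversal for LMM-based explanation.
# Assumptions: a trained generative decoder exists; the LMM query is
# represented by a hypothetical placeholder, since the abstract does
# not specify which model or API is used.

import torch


class Decoder(torch.nn.Module):
    """Hypothetical stand-in for a trained generative decoder (e.g. a VAE)."""

    def __init__(self, latent_dim: int = 10, image_dim: int = 64 * 64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, image_dim),
            torch.nn.Sigmoid(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def traverse_latent(decoder: Decoder, z: torch.Tensor, dim: int,
                    low: float = -3.0, high: float = 3.0,
                    steps: int = 7) -> torch.Tensor:
    """Decode a sweep over one latent coordinate, holding the others fixed."""
    zs = z.repeat(steps, 1)
    zs[:, dim] = torch.linspace(low, high, steps)
    with torch.no_grad():
        return decoder(zs)  # one decoded image per traversal step


decoder = Decoder()
z = torch.zeros(1, 10)  # base latent code
for dim in range(10):
    images = traverse_latent(decoder, z, dim)
    # In the framework the abstract describes, these images would be sent
    # to a large multimodal model with a prompt along the lines of:
    #   "These images vary along one latent factor. Which visual
    #    attribute changes?"
    # query_lmm(images, prompt)  # hypothetical LMM API call
    print(f"latent {dim}: decoded {images.shape[0]} traversal frames")
```

Repeating the traversal and collecting multiple LMM answers per latent variable would give the raw material for the uncertainty measurement the abstract mentions, e.g. by scoring agreement among the sampled explanations.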