Improving Generation and Evaluation of Visual Stories via Semantic Consistency
Saved in:
Main Authors: Adyasha Maharana, Darryl Hannan, Mohit Bansal
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Story visualization is an under-explored task that falls at the intersection
of many important research directions in both computer vision and natural
language processing. In this task, given a series of natural language captions
which compose a story, an agent must generate a sequence of images that
correspond to the captions. Prior work has introduced recurrent generative
models which outperform text-to-image synthesis models on this task. However,
there is room for improvement of generated images in terms of visual quality,
coherence and relevance. We present a number of improvements to prior modeling
approaches, including (1) the addition of a dual learning framework that
utilizes video captioning to reinforce the semantic alignment between the story
and generated images, (2) a copy-transform mechanism for
sequentially-consistent story visualization, and (3) MART-based transformers to
model complex interactions between frames. We present ablation studies to
demonstrate the effect of each of these techniques on the generative power of
the model, both for individual images and for the entire narrative.
Furthermore, due to the complexity and generative nature of the task, standard
evaluation metrics do not accurately reflect performance. Therefore, we also
provide an exploration of evaluation metrics for the model, focused on aspects
of the generated frames such as the presence/quality of generated characters,
the relevance to captions, and the diversity of the generated images. We also
present correlation experiments of our proposed automated metrics with human
evaluations. Code and data available at:
https://github.com/adymaharana/StoryViz
DOI: 10.48550/arxiv.2105.10026
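
The abstract mentions correlation experiments between the proposed automated metrics and human evaluations. As a rough, hypothetical sketch (not taken from the paper's released code; all scores below are made-up placeholders), such a check can be expressed as a rank correlation between per-story automated scores and human ratings:

```python
# Hypothetical illustration: rank correlation between an automated metric
# and human ratings over the same set of generated stories.
# The values are placeholder data, not results from the paper.
from scipy.stats import spearmanr

automated_scores = [0.61, 0.47, 0.82, 0.35, 0.74]  # e.g., per-story character-presence score
human_ratings = [3.8, 3.1, 4.5, 2.6, 4.0]          # e.g., mean human relevance judgments

rho, p_value = spearmanr(automated_scores, human_ratings)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```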