Generative Timelines for Instructed Visual Assembly
| Field | Value |
|---|---|
| Main Authors | |
| Format | Article |
| Language | eng |
| Subjects | |
| Online Access | Order full text |
| Summary | The objective of this work is to manipulate visual timelines (e.g. a video) through natural language instructions, making complex timeline editing tasks accessible to non-expert or potentially even disabled users. We call this task Instructed visual assembly. This task is challenging as it requires (i) identifying relevant visual content in the input timeline as well as retrieving relevant visual content from a given input (video) collection, (ii) understanding the input natural language instruction, and (iii) performing the desired edits of the input visual timeline to produce an output timeline. To address these challenges, we propose the Timeline Assembler, a generative model trained to perform instructed visual assembly tasks. The contributions of this work are three-fold. First, we develop a large multimodal language model designed to process visual content, compactly represent timelines, and accurately interpret timeline editing instructions. Second, we introduce a novel method for automatically generating datasets for visual assembly tasks, enabling efficient training of our model without the need for human-labeled data. Third, we validate our approach by creating two novel datasets for image and video assembly, demonstrating that the Timeline Assembler substantially outperforms established baseline models, including the recent GPT-4o, in accurately executing complex assembly instructions across various real-world-inspired scenarios. |
| DOI | 10.48550/arxiv.2411.12293 |
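
The summary above describes the task as taking an input timeline, a collection of candidate clips, and a natural-language instruction, and producing an edited output timeline, with training data generated automatically rather than labeled by hand. The sketch below is only an illustration of that setup: the clip IDs, edit operations, and instruction templates are assumptions made for this example and are not the paper's actual data format or generation procedure.

```python
# Illustrative sketch of automatically generating instructed-assembly training
# examples (timeline + instruction -> target timeline). All names and templates
# here are assumptions for illustration, not the paper's actual implementation.
import random
from dataclasses import dataclass


@dataclass
class AssemblyExample:
    collection: list   # clip IDs available for retrieval
    timeline: list     # input timeline as an ordered list of clip IDs
    instruction: str   # natural-language editing instruction
    target: list       # desired output timeline after the edit


def sample_example(collection, timeline, rng):
    """Sample one synthetic (timeline, instruction, target) triplet."""
    op = rng.choice(["insert", "remove", "swap"])
    target = list(timeline)
    if op == "insert":
        # Assumes the collection contains at least one clip not already used.
        clip = rng.choice([c for c in collection if c not in timeline])
        pos = rng.randrange(len(timeline) + 1)
        target.insert(pos, clip)
        instruction = f"Add {clip} at position {pos + 1}."
    elif op == "remove":
        clip = rng.choice(timeline)
        target.remove(clip)
        instruction = f"Remove {clip} from the timeline."
    else:  # swap two positions in the timeline
        i, j = rng.sample(range(len(timeline)), 2)
        target[i], target[j] = target[j], target[i]
        instruction = f"Swap the clips at positions {i + 1} and {j + 1}."
    return AssemblyExample(collection, list(timeline), instruction, target)


if __name__ == "__main__":
    rng = random.Random(0)
    collection = ["clip_a", "clip_b", "clip_c", "clip_d", "clip_e"]
    timeline = ["clip_b", "clip_d", "clip_a"]
    example = sample_example(collection, timeline, rng)
    print(example.instruction)
    print("input :", example.timeline)
    print("target:", example.target)
```

Triplets of this kind could then supervise a generative model that is conditioned on the collection, the input timeline, and the instruction and is trained to emit the target timeline, which is the role the abstract assigns to the Timeline Assembler; the exact conditioning and output representation used in the paper are not specified here.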