Reframe Anything: LLM Agent for Open World Video Reframing
Saved in:
Main authors: , , , , ,
Format: Article
Language: eng
Subjects:
Online access: order full text
Abstract: The proliferation of mobile devices and social media has revolutionized content dissemination, with short-form video becoming increasingly prevalent. This shift has introduced the challenge of video reframing to fit various screen aspect ratios, a process that highlights the most compelling parts of a video. Traditionally, video reframing is a manual, time-consuming task requiring professional expertise, which incurs high production costs. A potential solution is to adopt machine learning models, such as video salient object detection, to automate the process. However, these methods often lack generalizability due to their reliance on specific training data. The advent of powerful large language models (LLMs) opens new avenues for AI capabilities. Building on this, we introduce Reframe Any Video Agent (RAVA), an LLM-based agent that leverages visual foundation models and human instructions to restructure visual content for video reframing. RAVA operates in three stages: perception, where it interprets user instructions and video content; planning, where it determines aspect ratios and reframing strategies; and execution, where it invokes editing tools to produce the final video. Our experiments validate the effectiveness of RAVA in video salient object detection and real-world reframing tasks, demonstrating its potential as a tool for AI-powered video editing.
DOI: 10.48550/arxiv.2403.06070
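The three-stage loop described in the abstract (perception, planning, execution) can be sketched as a minimal agent pipeline. The sketch below is purely illustrative: the function names, the keyword-based perception stub, and the strategy labels are assumptions, not RAVA's actual implementation, which invokes an LLM and visual foundation models at each stage.

```python
from dataclasses import dataclass


@dataclass
class ReframePlan:
    aspect_ratio: tuple  # target (width, height) ratio
    strategy: str        # e.g. "crop-to-subject"


def perceive(instruction: str, video_meta: dict) -> dict:
    """Perception: interpret the user instruction and video content.
    Stubbed with a keyword check; RAVA would query an LLM and
    visual foundation models here."""
    target = "9:16" if "vertical" in instruction.lower() else "16:9"
    return {"target": target, "source": video_meta["aspect_ratio"]}


def plan(state: dict) -> ReframePlan:
    """Planning: determine the aspect ratio and reframing strategy."""
    w, h = map(int, state["target"].split(":"))
    strategy = "crop-to-subject" if (w, h) != state["source"] else "no-op"
    return ReframePlan(aspect_ratio=(w, h), strategy=strategy)


def execute(reframe_plan: ReframePlan) -> str:
    """Execution: invoke editing tools; stubbed as a description string."""
    w, h = reframe_plan.aspect_ratio
    return f"reframe to {w}:{h} via {reframe_plan.strategy}"


meta = {"aspect_ratio": (16, 9)}
result = execute(plan(perceive("Make a vertical short for mobile", meta)))
print(result)  # reframe to 9:16 via crop-to-subject
```

In a real system the execution stage would shell out to a video editor or a tool such as FFmpeg's crop filter; the pipeline shape (perceive → plan → execute) is the point of the sketch.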