Reframe Anything: LLM Agent for Open World Video Reframing

The proliferation of mobile devices and social media has revolutionized content dissemination, with short-form video becoming increasingly prevalent. This shift has introduced the challenge of video reframing to fit various screen aspect ratios, a process that highlights the most compelling parts of a video. Traditionally, video reframing is a manual, time-consuming task requiring professional expertise, which incurs high production costs. A potential solution is to adopt machine learning models, such as video salient object detection, to automate the process. However, these methods often lack generalizability due to their reliance on specific training data. The advent of powerful large language models (LLMs) opens new avenues for AI capabilities. Building on this, we introduce Reframe Any Video Agent (RAVA), an LLM-based agent that leverages visual foundation models and human instructions to restructure visual content for video reframing. RAVA operates in three stages: perception, where it interprets user instructions and video content; planning, where it determines aspect ratios and reframing strategies; and execution, where it invokes editing tools to produce the final video. Our experiments validate the effectiveness of RAVA in video salient object detection and real-world reframing tasks, demonstrating its potential as a tool for AI-powered video editing.
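
The abstract does not spell out an implementation, but the three-stage design it describes (perception, planning, execution) can be sketched as a simple pipeline. The Python sketch below is illustrative only: every name (ReframePlan, perceive, plan, execute, reframe) is a hypothetical placeholder rather than RAVA's actual API, and each stage is a stub where a real agent would call visual foundation models, an LLM planner, and video editing tools.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ReframePlan:
    """Hypothetical output of the planning stage (not the authors' data model)."""
    aspect_ratio: Tuple[int, int]                          # e.g. (9, 16) for vertical video
    keep_regions: List[Tuple[float, float, float, float]]  # normalized crop boxes (x, y, w, h)


def perceive(video_path: str, instruction: str) -> dict:
    """Perception stage: parse the user instruction and describe the video content.
    A real agent would call visual foundation models (detection, tracking,
    captioning) here; this stub only bundles the inputs."""
    return {"video": video_path, "instruction": instruction, "salient_objects": []}


def plan(context: dict) -> ReframePlan:
    """Planning stage: an LLM would pick the target aspect ratio and a reframing
    strategy from the perceived context; this stub returns a fixed 9:16 center crop."""
    return ReframePlan(aspect_ratio=(9, 16), keep_regions=[(0.25, 0.0, 0.5, 1.0)])


def execute(video_path: str, reframe_plan: ReframePlan, out_path: str) -> str:
    """Execution stage: invoke editing tools to render the reframed video.
    Placeholder only; a real agent would call a cropping/encoding backend here."""
    return out_path


def reframe(video_path: str, instruction: str, out_path: str = "out.mp4") -> str:
    # Chain the three stages end to end, as described in the abstract.
    context = perceive(video_path, instruction)
    reframe_plan = plan(context)
    return execute(video_path, reframe_plan, out_path)


if __name__ == "__main__":
    print(reframe("input.mp4", "Crop to 9:16 and keep the speaker in frame"))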

Bibliographic Details
Main authors: Cao, Jiawang; Wu, Yongliang; Chi, Weiheng; Zhu, Wenbo; Su, Ziyue; Wu, Jay
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Human-Computer Interaction
DOI: 10.48550/arxiv.2403.06070
Date: 2024-03-09
Source: arXiv.org
Online access: https://arxiv.org/abs/2403.06070