Open Vocabulary Multi-Label Video Classification

Pre-trained vision-language models (VLMs) have enabled significant progress in open vocabulary computer vision tasks such as image classification, object detection, and image segmentation. Some recent works have focused on extending VLMs to open vocabulary single-label action classification in videos. However, previous methods fall short in holistic video understanding, which requires the ability to simultaneously recognize multiple actions and entities (e.g., objects) in the video in an open vocabulary setting. We formulate this problem as open vocabulary multi-label video classification and propose a method to adapt a pre-trained VLM such as CLIP to solve this task. We leverage large language models (LLMs) to provide semantic guidance to the VLM about class labels to improve its open vocabulary performance, with two key contributions. First, we propose an end-to-end trainable architecture that learns to prompt an LLM to generate soft attributes for the CLIP text encoder, enabling it to recognize novel classes. Second, we integrate a temporal modeling module into CLIP's vision encoder to effectively model the spatio-temporal dynamics of video concepts, and we propose a novel regularized finetuning technique to ensure strong open vocabulary classification performance in the video domain. Our extensive experimentation showcases the efficacy of our approach on multiple benchmark datasets.
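To make the problem setup concrete, here is a minimal sketch of open vocabulary multi-label video scoring with a frozen CLIP backbone. This is not the authors' architecture: uniform mean pooling over frame embeddings stands in for the paper's learned temporal module, hand-written attribute phrases stand in for the LLM-generated soft attributes, and the checkpoint name and sigmoid temperature are assumptions for illustration.

```python
# Hypothetical sketch: open vocabulary multi-label video scoring with frozen CLIP.
# Placeholders (mean pooling, manual attribute strings) substitute for the paper's
# learned temporal module and LLM-generated soft attributes.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def video_label_scores(frames, labels, attributes_per_label):
    """frames: list of PIL images sampled from the video.
    labels: candidate class names (open vocabulary).
    attributes_per_label: dict mapping label -> descriptive phrases, standing
    in for the soft attributes an LLM would generate."""
    # Encode frames and average over time (placeholder for a temporal module).
    img_inputs = processor(images=frames, return_tensors="pt")
    with torch.no_grad():
        frame_emb = model.get_image_features(**img_inputs)
    video_emb = frame_emb.mean(dim=0)
    video_emb = video_emb / video_emb.norm()

    scores = {}
    for label in labels:
        prompts = [f"a video of {label}"] + [
            f"a video of {label}, {attr}"
            for attr in attributes_per_label.get(label, [])
        ]
        txt_inputs = processor(text=prompts, return_tensors="pt", padding=True)
        with torch.no_grad():
            text_emb = model.get_text_features(**txt_inputs)
        text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
        # Sigmoid per label so multiple labels can be active at once
        # (multi-label), rather than a softmax over a closed class set.
        sim = text_emb.mean(dim=0) @ video_emb
        scores[label] = torch.sigmoid(sim / 0.07).item()  # 0.07: assumed CLIP-style temperature
    return scores
```

Labels whose score exceeds a chosen threshold are predicted; this per-label thresholding is what distinguishes the multi-label setting from the usual softmax-over-classes zero-shot protocol.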

Bibliographic Details

Main Authors: Gupta, Rohit; Rizve, Mamshad Nayeem; Unnikrishnan, Jayakrishnan; Tawari, Ashish; Tran, Son; Shah, Mubarak; Yao, Benjamin; Chilimbi, Trishul
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Published: 2024-07-12
Source: arXiv.org
DOI: 10.48550/arxiv.2407.09073
Online Access: https://arxiv.org/abs/2407.09073