Temporal Segment Transformer for Action Segmentation

Recognizing human actions from untrimmed videos is an important task in activity understanding, and poses unique challenges in modeling long-range temporal relations. Recent works adopt a predict-and-refine strategy which converts an initial prediction to action segments for global context modeling. However, the generated segment representations are often noisy and exhibit inaccurate segment boundaries, over-segmentation, and other problems. To deal with these issues, we propose an attention-based approach, which we call the temporal segment transformer, for joint segment relation modeling and denoising. The main idea is to denoise segment representations using attention between segment and frame representations, and to use inter-segment attention to capture temporal correlations between segments. The refined segment representations are used to predict action labels and adjust segment boundaries, and a final action segmentation is produced by voting over segment masks. We show that this novel architecture achieves state-of-the-art accuracy on the popular 50Salads, GTEA, and Breakfast benchmarks. We also conduct extensive ablations to demonstrate the effectiveness of the different components of our design.
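The predict-and-refine pipeline described in the abstract can be sketched roughly as follows. This is an illustrative NumPy toy, not the authors' implementation: the single-head dot-product attention, the random projection for class logits, the soft frame-to-segment masks, and all array names and dimensions are assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: (nq,d) x (nk,d) -> (nq,d)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
T, S, d, n_classes = 100, 4, 32, 5   # frames, segments, feature dim, classes

frames = rng.normal(size=(T, d))     # per-frame features
segments = rng.normal(size=(S, d))   # initial (noisy) segment tokens

# 1) segment-frame cross-attention: denoise each segment token
#    by aggregating evidence from the frame representations
segments = attention(segments, frames, frames)

# 2) inter-segment self-attention: capture temporal correlations
#    between the refined segment tokens
segments = attention(segments, segments, segments)

# 3) per-segment class logits and soft per-frame segment masks
W_cls = rng.normal(size=(d, n_classes))
logits = segments @ W_cls                       # (S, n_classes)
masks = softmax(frames @ segments.T, axis=-1)   # (T, S), rows sum to 1

# 4) frame-wise voting: each frame's label comes from the segments'
#    class scores weighted by its mask memberships
votes = masks @ softmax(logits, axis=-1)        # (T, n_classes)
frame_labels = votes.argmax(axis=-1)            # (T,)
```

The two attention stages mirror the abstract's two roles for attention (denoising via segment-frame interaction, then relation modeling among segments); the boundary-adjustment step is omitted here for brevity.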

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Liu, Zhichao; Wang, Leshan; Zhou, Desen; Wang, Jian; Zhang, Songyang; Bai, Yang; Ding, Errui; Fan, Rui
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
description Recognizing human actions from untrimmed videos is an important task in activity understanding, and poses unique challenges in modeling long-range temporal relations. Recent works adopt a predict-and-refine strategy which converts an initial prediction to action segments for global context modeling. However, the generated segment representations are often noisy and exhibit inaccurate segment boundaries, over-segmentation, and other problems. To deal with these issues, we propose an attention-based approach, which we call the temporal segment transformer, for joint segment relation modeling and denoising. The main idea is to denoise segment representations using attention between segment and frame representations, and to use inter-segment attention to capture temporal correlations between segments. The refined segment representations are used to predict action labels and adjust segment boundaries, and a final action segmentation is produced by voting over segment masks. We show that this novel architecture achieves state-of-the-art accuracy on the popular 50Salads, GTEA, and Breakfast benchmarks. We also conduct extensive ablations to demonstrate the effectiveness of the different components of our design.
doi 10.48550/arxiv.2302.13074
creationdate 2023-02-25
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition