SFGANS Self-supervised Future Generator for human ActioN Segmentation
The ability to locate and classify action segments in long untrimmed video is of particular interest to many applications such as autonomous cars, robotics, and healthcare. Today, the most popular pipeline for action segmentation encodes the frames into feature vectors, which are then processed by a temporal model for segmentation. In this paper we present a self-supervised method that sits in the middle of the standard pipeline and generates refined representations of the original feature vectors. Experiments show that this method improves the performance of existing models on different sub-tasks of action segmentation, even without additional hyperparameter tuning.
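The three-stage pipeline the abstract describes (frame encoder → feature refinement → temporal segmentation model) can be sketched as follows. This is a minimal illustrative mock-up, not the paper's implementation: all function names, feature dimensions, and the smoothing stand-in for the self-supervised refinement step are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_frames(video_frames):
    """Stage 1: stand-in frame encoder (a pretrained backbone in practice)
    mapping each frame to a D-dimensional feature vector."""
    t = len(video_frames)
    return rng.standard_normal((t, 64))          # (T, D) feature matrix

def refine_features(features):
    """Stage 2 (the paper's contribution, mocked here): a module placed in
    the middle of the pipeline that outputs refined representations of the
    original features. Faked with a temporal smoothing pass."""
    kernel = np.ones(5) / 5.0
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, features)

def temporal_model(features, num_classes=4):
    """Stage 3: stand-in temporal segmentation head (e.g. a TCN in
    practice) producing one action label per frame."""
    w = rng.standard_normal((features.shape[1], num_classes))
    return (features @ w).argmax(axis=1)         # (T,) frame-wise labels

frames = [None] * 100                            # 100 dummy frames
feats = encode_frames(frames)
labels = temporal_model(refine_features(feats))  # refinement slots in between
print(labels.shape)                              # one label per frame
```

The key point the sketch illustrates is that the refinement step is drop-in: it consumes and produces feature matrices of the same shape, so existing encoders and temporal models on either side need no changes.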
Saved in:
Main authors: | Berman, Or; Goldbraikh, Adam; Laufer, Shlomi |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
creator | Berman, Or; Goldbraikh, Adam; Laufer, Shlomi |
description | The ability to locate and classify action segments in long
untrimmed video is of particular interest to many applications such as
autonomous cars, robotics, and healthcare. Today, the most popular pipeline
for action segmentation encodes the frames into feature vectors, which are
then processed by a temporal model for segmentation. In this paper we present
a self-supervised method that sits in the middle of the standard pipeline and
generates refined representations of the original feature vectors. Experiments
show that this method improves the performance of existing models on different
sub-tasks of action segmentation, even without additional hyperparameter
tuning. |
doi_str_mv | 10.48550/arxiv.2401.00438 |
format | Article |
identifier | DOI: 10.48550/arxiv.2401.00438 |
language | eng |
recordid | cdi_arxiv_primary_2401_00438 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | SFGANS Self-supervised Future Generator for human ActioN Segmentation |
url | https://arxiv.org/abs/2401.00438 |