Interactive time sequence labeling method based on image segmentation

The invention discloses an interactive time-sequence labeling method based on image segmentation, comprising the following steps: obtaining video data, manually drawing a frame to select an initial target, and obtaining the target-frame position of the initial target; generating a segmentation mask for the video data with the SAM segmentation algorithm, combining the target-frame position with manual prompts, and obtaining an initial-frame mask map of the video data; and inputting the frame images of the video data and the initial-frame mask map into a time-sequence memory segmentation model, which performs frame-by-frame segmentation under manual supervision and correction to obtain an annotated target video. The method combines semi-automatic interactive segmentation with time-sequence segmentation, so that automatic labeling is guided manually, dependence on fixed rules or templates is avoided, and high flexibility is achieved.
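The pipeline in the abstract has three stages: a box-prompted initial segmentation, frame-by-frame propagation by a temporal memory model, and manual correction of individual frames. A minimal sketch of that control flow is below; `box_prompt_mask` and `propagate` are hypothetical stand-ins (a real pipeline would call a SAM predictor with the box prompt and a learned memory-based video segmentation model), not the patent's actual implementation.

```python
import numpy as np

def box_prompt_mask(frame, box):
    """Stand-in for SAM: returns a mask covering the user-drawn box.
    A real pipeline would pass the box as a prompt to a SAM predictor."""
    mask = np.zeros(frame.shape[:2], dtype=bool)
    x0, y0, x1, y1 = box
    mask[y0:y1, x0:x1] = True
    return mask

def propagate(prev_mask, frame):
    """Stand-in for the temporal memory model: carries the previous
    mask forward unchanged. A real model would re-segment the frame
    using a feature memory built from earlier frames and masks."""
    return prev_mask.copy()

def annotate_video(frames, box, corrections=None):
    """Frame-by-frame annotation with optional manual supervision:
    `corrections` maps a frame index to a human-corrected mask that
    replaces the propagated one before it enters the memory."""
    corrections = corrections or {}
    masks = [box_prompt_mask(frames[0], box)]  # initial-frame mask map
    for t in range(1, len(frames)):
        mask = propagate(masks[-1], frames[t])
        if t in corrections:  # manual supervision and correction step
            mask = corrections[t]
        masks.append(mask)
    return masks
```

The key design point the abstract emphasizes is that corrected masks feed back into the propagation loop, so a single manual fix improves all subsequent frames rather than just one.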

Detailed Description

Saved in:
Bibliographic Details
Main Authors: WANG YINRUI, ZHANG HUAYU, DAI HAOWEI, LU HAIYANG, CHENG YUAN, LIANG ANYANG, YANG MINGLUN
Format: Patent
Language: chi ; eng
Subjects:
Online Access: Order full text
creator WANG YINRUI
ZHANG HUAYU
DAI HAOWEI
LU HAIYANG
CHENG YUAN
LIANG ANYANG
YANG MINGLUN
description The invention discloses an interactive time-sequence labeling method based on image segmentation, comprising the following steps: obtaining video data, manually drawing a frame to select an initial target, and obtaining the target-frame position of the initial target; generating a segmentation mask for the video data with the SAM segmentation algorithm, combining the target-frame position with manual prompts, and obtaining an initial-frame mask map of the video data; and inputting the frame images of the video data and the initial-frame mask map into a time-sequence memory segmentation model, which performs frame-by-frame segmentation under manual supervision and correction to obtain an annotated target video. The method combines semi-automatic interactive segmentation with time-sequence segmentation, so that automatic labeling is guided manually, dependence on fixed rules or templates is avoided, and high flexibility is achieved.
format Patent
fulltext fulltext_linktorsrc
language chi ; eng
recordid cdi_epo_espacenet_CN118279721A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
PHYSICS
title Interactive time sequence labeling method based on image segmentation
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-21T15%3A46%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=WANG%20YINRUI&rft.date=2024-07-02&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN118279721A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true