Training a neural network to predict superpixels using segmentation-aware affinity loss

Segmentation is the identification of separate objects within an image. An example is identification of a pedestrian passing in front of a car, where the pedestrian is a first object and the car is a second object. Superpixel segmentation is the identification of regions of pixels within an object that have similar properties. An example is identification of pixel regions having a similar color, such as different articles of clothing worn by the pedestrian and different components of the car. A pixel affinity neural network (PAN) model is trained to generate pixel affinity maps for superpixel segmentation. The pixel affinity map defines the similarity of two points in space. In an embodiment, the pixel affinity map indicates a horizontal affinity and vertical affinity for each pixel in the image. The pixel affinity map is processed to identify the superpixels.
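The abstract outlines the technique: a neural network predicts, for every pixel, a horizontal and a vertical affinity, and the resulting affinity map is then processed to group pixels into superpixels. The Python sketch below is only a hypothetical illustration of that idea under stated assumptions (a small PyTorch CNN named PixelAffinityNet and a union-find grouping routine named affinities_to_superpixels, both invented for this example); it is not the patented implementation and it omits the segmentation-aware affinity loss used for training.

# Hypothetical sketch: predict a 2-channel (horizontal, vertical) affinity map
# with a tiny CNN, then merge each pixel with its right/bottom neighbour when
# the predicted affinity exceeds a threshold. Illustrative only, not the
# patented method.
import torch
import torch.nn as nn

class PixelAffinityNet(nn.Module):
    """Tiny CNN: RGB image -> 2-channel pixel affinity map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 1),  # channel 0: horizontal, channel 1: vertical
            nn.Sigmoid(),         # affinities in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

def affinities_to_superpixels(aff, threshold=0.5):
    """Union-find grouping: merge a pixel with its right/bottom neighbour
    whenever the corresponding affinity exceeds the threshold."""
    _, h, w = aff.shape
    parent = list(range(h * w))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for y in range(h):
        for x in range(w):
            idx = y * w + x
            if x + 1 < w and aff[0, y, x] > threshold:  # horizontal affinity
                union(idx, idx + 1)
            if y + 1 < h and aff[1, y, x] > threshold:  # vertical affinity
                union(idx, idx + w)

    return torch.tensor([find(i) for i in range(h * w)]).reshape(h, w)

if __name__ == "__main__":
    model = PixelAffinityNet()
    image = torch.rand(1, 3, 64, 64)            # dummy RGB image
    affinity_map = model(image)[0].detach()     # shape (2, 64, 64)
    superpixels = affinities_to_superpixels(affinity_map)
    print("distinct superpixels:", superpixels.unique().numel())

Here the union-find pass stands in for the unspecified "processing" of the affinity map that identifies the superpixels; a real system would train the network with the segmentation-aware affinity loss named in the title rather than use random weights.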

Bibliographic Details
Main authors: Liu, Ming-Yu; Jampani, Varun; Yang, Ming-Hsuan; Sun, Deqing; Tu, Wei-Chih; Kautz, Jan
Format: Patent
Language: eng
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; HANDLING RECORD CARRIERS; IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; PHYSICS; PRESENTATION OF DATA; RECOGNITION OF DATA; RECORD CARRIERS
Online access: Order full text
creator Liu, Ming-Yu
Jampani, Varun
Yang, Ming-Hsuan
Sun, Deqing
Tu, Wei-Chih
Kautz, Jan
description Segmentation is the identification of separate objects within an image. An example is identification of a pedestrian passing in front of a car, where the pedestrian is a first object and the car is a second object. Superpixel segmentation is the identification of regions of pixels within an object that have similar properties. An example is identification of pixel regions having a similar color, such as different articles of clothing worn by the pedestrian and different components of the car. A pixel affinity neural network (PAN) model is trained to generate pixel affinity maps for superpixel segmentation. The pixel affinity map defines the similarity of two points in space. In an embodiment, the pixel affinity map indicates a horizontal affinity and vertical affinity for each pixel in the image. The pixel affinity map is processed to identify the superpixels.
format Patent
fulltext fulltext_linktorsrc
language eng
recordid cdi_epo_espacenet_US11256961B2
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
HANDLING RECORD CARRIERS
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
PHYSICS
PRESENTATION OF DATA
RECOGNITION OF DATA
RECORD CARRIERS
title Training a neural network to predict superpixels using segmentation-aware affinity loss
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-29T01%3A00%3A48IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Liu,%20Ming-Yu&rft.date=2022-02-22&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS11256961B2%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true