SeGAN: Segmenting and Generating the Invisible

Objects often occlude each other in scenes; inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo-realistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering.
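
The abstract describes a model that must decide both which pixels belong to the occluded object (segmentation) and what to paint there (generation), trained under one joint objective. The sketch below is a minimal, hypothetical PyTorch-style illustration of that idea only: a shared encoder feeding a segmentation head and a generation head, combined in a single loss. It omits the adversarial (GAN) term and the paper's actual architecture, and every name in it (AmodalCompleter, joint_loss, the loss weights) is an assumption, not taken from the authors' code.

    # Illustrative sketch (not the authors' implementation): given an RGB image and the
    # visible-region mask of an occluded object, jointly predict (a) the full amodal mask
    # (visible + invisible pixels) and (b) the RGB appearance of the completed object.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AmodalCompleter(nn.Module):
        def __init__(self, feat=32):
            super().__init__()
            # Shared encoder over the image concatenated with the visible mask (3 + 1 channels).
            self.encoder = nn.Sequential(
                nn.Conv2d(4, feat, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            )
            # Segmentation head: 1-channel logits for the full (amodal) mask.
            self.seg_head = nn.Conv2d(feat, 1, 1)
            # Generation head: 3-channel RGB completion of the object.
            self.gen_head = nn.Conv2d(feat, 3, 1)

        def forward(self, image, visible_mask):
            x = self.encoder(torch.cat([image, visible_mask], dim=1))
            return self.seg_head(x), torch.sigmoid(self.gen_head(x))

    def joint_loss(seg_logits, rgb_pred, full_mask_gt, rgb_gt, w_seg=1.0, w_gen=1.0):
        # Segmentation term: which pixels to paint (visible + invisible parts of the object).
        seg = F.binary_cross_entropy_with_logits(seg_logits, full_mask_gt)
        # Generation term: what color to paint them, restricted to the object region.
        gen = F.l1_loss(rgb_pred * full_mask_gt, rgb_gt * full_mask_gt)
        return w_seg * seg + w_gen * gen

    if __name__ == "__main__":
        model = AmodalCompleter()
        image = torch.rand(2, 3, 64, 64)                      # batch of RGB images
        visible = (torch.rand(2, 1, 64, 64) > 0.7).float()    # visible-part masks
        full_gt = (torch.rand(2, 1, 64, 64) > 0.5).float()    # amodal ground-truth masks
        rgb_gt = torch.rand(2, 3, 64, 64)                     # ground-truth object appearance
        seg_logits, rgb_pred = model(image, visible)
        loss = joint_loss(seg_logits, rgb_pred, full_gt, rgb_gt)
        loss.backward()

In the full method an adversarial discriminator would additionally score the generated appearance, which is what makes the generation term a GAN objective in SeGAN.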

Bibliographic Details
Main Authors: Ehsani, Kiana; Mottaghi, Roozbeh; Farhadi, Ali
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
container_end_page
container_issue
container_start_page
container_title
container_volume
creator Ehsani, Kiana ; Mottaghi, Roozbeh ; Farhadi, Ali
description Objects often occlude each other in scenes; inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo-realistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering.
doi_str_mv 10.48550/arxiv.1703.10239
format Article
creationdate 2017-03-29
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
linktorsrc https://arxiv.org/abs/1703.10239
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1703.10239
ispartof
issn
language eng
recordid cdi_arxiv_primary_1703_10239
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title SeGAN: Segmenting and Generating the Invisible
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T00%3A31%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=SeGAN:%20Segmenting%20and%20Generating%20the%20Invisible&rft.au=Ehsani,%20Kiana&rft.date=2017-03-29&rft_id=info:doi/10.48550/arxiv.1703.10239&rft_dat=%3Carxiv_GOX%3E1703_10239%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true