Learning Pixel-wise Labeling from the Internet without Human Interaction

Deep learning stands at the forefront of many computer vision tasks. However, deep neural networks are usually data-hungry and require a huge amount of well-annotated training samples. Collecting sufficient annotated data is very expensive in many applications, especially for pixel-level prediction tasks such as semantic segmentation. To solve this fundamental issue, we consider a new challenging vision task, Internetly supervised semantic segmentation, which only uses Internet data with noisy image-level supervision of corresponding query keywords for segmentation model training. We address this task by proposing the following solution. A class-specific attention model unifying multiscale forward and backward convolutional features is proposed to provide initial segmentation "ground truth". The model trained with such noisy annotations is then improved by an online fine-tuning procedure. It achieves state-of-the-art performance under the weakly-supervised setting on the PASCAL VOC 2012 dataset. The proposed framework also paves a new way towards learning from the Internet without human interaction and could serve as a strong baseline therein. Code and data will be released upon paper acceptance.
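The pseudo-labeling step the abstract describes — turning per-class attention scores into noisy per-pixel "ground truth" that low-confidence pixels can opt out of — can be sketched roughly as follows. This is not the authors' released code: the `pseudo_labels` function, the confidence threshold, and the `IGNORE` convention are illustrative assumptions.

```python
IGNORE = 255  # common convention for "void"/unlabeled pixels in PASCAL VOC masks


def pseudo_labels(score_maps, conf_threshold=0.6):
    """Turn per-class attention scores into noisy pixel-wise labels.

    score_maps: dict mapping class_id -> 2D list of scores in [0, 1].
    Each pixel is assigned the class with the highest score; pixels whose
    best score falls below conf_threshold are marked IGNORE, so a
    segmentation loss can skip them during training on noisy labels.
    """
    classes = sorted(score_maps)
    h = len(score_maps[classes[0]])
    w = len(score_maps[classes[0]][0])
    labels = [[IGNORE] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Pick the most confident class at this pixel.
            best_cls, best_score = max(
                ((c, score_maps[c][y][x]) for c in classes),
                key=lambda t: t[1],
            )
            if best_score >= conf_threshold:
                labels[y][x] = best_cls
    return labels
```

A model trained on such masks could then re-predict scores and regenerate labels, which is one plausible reading of the "online fine-tuning" loop the abstract mentions.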

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Liu, Yun; Shi, Yujun; Bian, JiaWang; Zhang, Le; Cheng, Ming-Ming; Feng, Jiashi
Format: Article
Language: English
Subjects:
Online Access: Order full text
DOI: 10.48550/arxiv.1805.07548
Date: 2018-05-19
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition