Learning Pixel-wise Labeling from the Internet without Human Interaction
Deep learning stands at the forefront of many computer vision tasks. However, deep neural networks are usually data-hungry and require a huge number of well-annotated training samples. Collecting sufficient annotated data is very expensive in many applications, especially for pixel-level prediction tasks such as semantic segmentation. To solve this fundamental issue, we consider a new and challenging vision task, Internetly supervised semantic segmentation, which uses only Internet data, with the noisy image-level supervision of the corresponding query keywords, to train a segmentation model. We address this task with the following solution: a class-specific attention model unifying multiscale forward and backward convolutional features is proposed to provide initial segmentation "ground truth", and the model trained with these noisy annotations is then improved by an online fine-tuning procedure. It achieves state-of-the-art performance under the weakly supervised setting on the PASCAL VOC2012 dataset. The proposed framework also paves a new way towards learning from the Internet without human interaction and could serve as a strong baseline for such research. Code and data will be released upon acceptance of the paper.
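The abstract only names the components of the first stage, and this record carries no implementation details. The sketch below is therefore a minimal, hypothetical PyTorch illustration of how a class-specific attention map might unify forward activations with backward (gradient) signals at two scales and be thresholded into a noisy pseudo "ground truth" mask; the tiny backbone, the Grad-CAM-style fusion rule, and the threshold are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBackbone(nn.Module):
    """Hypothetical two-scale CNN standing in for the pretrained backbone
    the paper presumably uses; all names here are illustrative."""
    def __init__(self, num_classes=20):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        f1 = self.stage1(x)                      # forward feature, 1/2 scale
        f2 = self.stage2(f1)                     # forward feature, 1/4 scale
        logits = self.head(f2.mean(dim=(2, 3)))  # image-level class scores
        return logits, [f1, f2]

def pseudo_mask(model, image, keyword_class, threshold=0.5):
    """Fuse forward activations with backward gradients (Grad-CAM style)
    at both scales, then threshold into a hard pseudo ground-truth mask."""
    logits, feats = model(image)
    grads = torch.autograd.grad(logits[:, keyword_class].sum(), feats)
    cams = []
    for f, g in zip(feats, grads):
        w = g.mean(dim=(2, 3), keepdim=True)            # channel importance
        cam = F.relu((w * f).sum(dim=1, keepdim=True))  # class attention map
        cams.append(F.interpolate(cam, size=image.shape[2:],
                                  mode='bilinear', align_corners=False))
    attn = torch.stack(cams).mean(dim=0).squeeze(1)     # unify the two scales
    attn = (attn - attn.amin()) / (attn.amax() - attn.amin() + 1e-6)
    return (attn > threshold).long()                    # noisy binary mask

# One web image retrieved by the query keyword for (hypothetical) class 7:
model = TinyBackbone()
img = torch.rand(1, 3, 64, 64)
print(pseudo_mask(model, img, keyword_class=7).shape)  # torch.Size([1, 64, 64])
```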
Saved in:

| Published in: | arXiv.org, 2018-05 |
|---|---|
| Main authors: | Liu, Yun; Shi, Yujun; Bian, JiaWang; Zhang, Le; Ming-Ming, Cheng; Feng, Jiashi |
| Format: | Article |
| Language: | eng |
| Subjects: | Computer vision; Ground truth; Image annotation; Image segmentation; Internet; Machine learning; Neural networks; Pixels; Semantic segmentation; Semantics; Training |
| Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Liu, Yun; Shi, Yujun; Bian, JiaWang; Zhang, Le; Ming-Ming, Cheng; Feng, Jiashi |
description | Deep learning stands at the forefront of many computer vision tasks. However, deep neural networks are usually data-hungry and require a huge number of well-annotated training samples. Collecting sufficient annotated data is very expensive in many applications, especially for pixel-level prediction tasks such as semantic segmentation. To solve this fundamental issue, we consider a new and challenging vision task, Internetly supervised semantic segmentation, which uses only Internet data, with the noisy image-level supervision of the corresponding query keywords, to train a segmentation model. We address this task with the following solution: a class-specific attention model unifying multiscale forward and backward convolutional features is proposed to provide initial segmentation "ground truth", and the model trained with these noisy annotations is then improved by an online fine-tuning procedure. It achieves state-of-the-art performance under the weakly supervised setting on the PASCAL VOC2012 dataset. The proposed framework also paves a new way towards learning from the Internet without human interaction and could serve as a strong baseline for such research. Code and data will be released upon acceptance of the paper. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2018-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2073520289 |
source | Free E-Journals |
subjects | Computer vision; Ground truth; Image annotation; Image segmentation; Internet; Machine learning; Neural networks; Pixels; Semantic segmentation; Semantics; Training |
title | Learning Pixel-wise Labeling from the Internet without Human Interaction |
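The description field also states that the model trained on these noisy annotations is improved by an online fine-tuning procedure, again without implementation details in this record. The loop below is only a guess at the general shape of such a procedure: train a small segmentation network on the noisy attention masks and periodically replace the masks with the network's own predictions. The network, the refresh interval, and the loss are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A deliberately tiny stand-in segmentation net; the paper's actual
# architecture and fine-tuning schedule are not reproduced here.
seg_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                   # 2 classes: background / keyword object
)
optimizer = torch.optim.SGD(seg_net.parameters(), lr=1e-3, momentum=0.9)

def refresh_labels(net, images):
    """Hypothetical 'online' step: re-predict masks with the current net and
    use the (hopefully cleaner) predictions as the next round's labels."""
    with torch.no_grad():
        return net(images).argmax(dim=1)

# images: web images retrieved for one query keyword;
# masks: initial noisy pseudo ground truth (e.g. from the attention stage)
images = torch.rand(4, 3, 64, 64)
masks = torch.randint(0, 2, (4, 64, 64))   # stand-in for attention masks

for step in range(20):
    logits = seg_net(images)
    loss = F.cross_entropy(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if (step + 1) % 5 == 0:                # assumed refresh interval
        masks = refresh_labels(seg_net, images)
print(f"final loss: {loss.item():.4f}")
```

Regenerating labels from the improving model is one common way to denoise pseudo ground truth in weakly supervised segmentation; the authors' actual online fine-tuning procedure may differ.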