Spatially Consistent Representation Learning


Bibliographic Details
Main Authors: Roh, Byungseok; Shin, Wuhyun; Kim, Ildoo; Kim, Sungwoong
Format: Article
Language: English
description Self-supervised learning has been widely used to obtain transferable representations from unlabeled images. In particular, recent contrastive learning methods have shown impressive performance on downstream image classification tasks. While these contrastive methods mainly focus on generating invariant global representations at the image level under semantic-preserving transformations, they are prone to overlooking the spatial consistency of local representations and are therefore limited as pretraining for localization tasks such as object detection and instance segmentation. Moreover, the aggressively cropped views used in existing contrastive methods can minimize representation distances between semantically different regions of a single image. In this paper, we propose a spatially consistent representation learning algorithm (SCRL) for multi-object and location-specific tasks. Specifically, we devise a novel self-supervised objective that tries to produce coherent spatial representations of a randomly cropped local region under geometric translation and zooming operations. On various downstream localization tasks with benchmark datasets, the proposed SCRL shows significant performance improvements over image-level supervised pretraining as well as state-of-the-art self-supervised learning methods. Code is available at https://github.com/kakaobrain/scrl
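The core idea in the description above — enforcing that the region shared by two random crops maps to the same representation in both views — can be illustrated with a minimal NumPy sketch. The function names and the toy mean-pooling "encoder" here are assumptions of this sketch, not the authors' implementation (the released code at the repository above builds on a learned convolutional backbone); the geometry of intersecting the crop boxes and re-mapping the shared region into each view's coordinates is the part being demonstrated.

```python
import numpy as np

def crop(img, box):
    """Extract a crop given a (x0, y0, x1, y1) box in image coordinates."""
    x0, y0, x1, y1 = box
    return img[y0:y1, x0:x1]

def pooled_feature(view, region):
    """Toy 'encoder': mean-pool raw pixels over a sub-region of a view.
    A real method would pool features of a ConvNet backbone instead."""
    x0, y0, x1, y1 = region
    return view[y0:y1, x0:x1].mean(axis=(0, 1))

def to_view_coords(box, view_box):
    """Map a box from image coordinates into a view's local coordinates."""
    x0, y0, x1, y1 = box
    vx0, vy0, _, _ = view_box
    return (x0 - vx0, y0 - vy0, x1 - vx0, y1 - vy0)

def spatial_consistency_loss(img, box_a, box_b):
    """Distance between the two views' features of their shared region.
    If the crops do not overlap there is no constraint (returns 0.0)."""
    # Intersection of the two crop boxes, in image coordinates.
    ix0 = max(box_a[0], box_b[0]); iy0 = max(box_a[1], box_b[1])
    ix1 = min(box_a[2], box_b[2]); iy1 = min(box_a[3], box_b[3])
    if ix0 >= ix1 or iy0 >= iy1:
        return 0.0
    inter = (ix0, iy0, ix1, iy1)
    # Pool the shared region from each view after translating it into
    # that view's own coordinate frame.
    fa = pooled_feature(crop(img, box_a), to_view_coords(inter, box_a))
    fb = pooled_feature(crop(img, box_b), to_view_coords(inter, box_b))
    return float(np.mean((fa - fb) ** 2))
```

With this identity-like encoder the loss is exactly zero for overlapping crops, since both views pool the same pixels; pooling each *whole* crop instead (as image-level methods do) compares semantically different regions and generally yields a nonzero distance, which is the failure mode the abstract points out for aggressive cropping.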
doi_str_mv 10.48550/arxiv.2103.06122
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning