Pre-training with Random Orthogonal Projection Image Modeling

Masked Image Modeling (MIM) is a powerful self-supervised strategy for visual pre-training without the use of labels. MIM applies random crops to input images, processes them with an encoder, and then recovers the masked inputs with a decoder, which encourages the network to capture and learn structural information about objects and scenes. The intermediate feature representations obtained from MIM are suitable for fine-tuning on downstream tasks. In this paper, we propose an Image Modeling framework based on random orthogonal projection instead of binary masking as in MIM. Our proposed Random Orthogonal Projection Image Modeling (ROPIM) reduces spatially-wise token information under a guaranteed bound on the noise variance and can be considered as masking the entire spatial image area under locally varying masking degrees. Since ROPIM uses a random subspace for the projection that realizes the masking step, the readily available complement of the subspace can be used during unmasking to promote recovery of removed information. In this paper, we show that using random orthogonal projection leads to superior performance compared to crop-based masking. We demonstrate state-of-the-art results on several popular benchmarks.

Full description

Saved in:
Bibliographic Details
Main Authors: Haghighat, Maryam; Moghadam, Peyman; Mohamed, Shaheer; Koniusz, Piotr
Format: Article
Language: English
Subjects:
Online Access: Order full text
creator Haghighat, Maryam ; Moghadam, Peyman ; Mohamed, Shaheer ; Koniusz, Piotr
description Masked Image Modeling (MIM) is a powerful self-supervised strategy for visual pre-training without the use of labels. MIM applies random crops to input images, processes them with an encoder, and then recovers the masked inputs with a decoder, which encourages the network to capture and learn structural information about objects and scenes. The intermediate feature representations obtained from MIM are suitable for fine-tuning on downstream tasks. In this paper, we propose an Image Modeling framework based on random orthogonal projection instead of binary masking as in MIM. Our proposed Random Orthogonal Projection Image Modeling (ROPIM) reduces spatially-wise token information under guaranteed bound on the noise variance and can be considered as masking entire spatial image area under locally varying masking degrees. Since ROPIM uses a random subspace for the projection that realizes the masking step, the readily available complement of the subspace can be used during unmasking to promote recovery of removed information. In this paper, we show that using random orthogonal projection leads to superior performance compared to crop-based masking. We demonstrate state-of-the-art results on several popular benchmarks.
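The core idea in the abstract — masking realized as a random orthogonal projection, with the complement of the projected-onto subspace available for unmasking — can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the token count, embedding size, subspace dimension, and the way the projector is applied to the token matrix are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tokens, dim = 196, 768                   # e.g. 14x14 ViT patch tokens (assumed sizes)
X = rng.standard_normal((n_tokens, dim))   # dummy token embeddings

# Draw a random k-dimensional subspace of token space via QR decomposition.
k = 98                                     # subspace dimension (illustrative)
A = rng.standard_normal((n_tokens, k))
Q, _ = np.linalg.qr(A)                     # Q: (n_tokens, k), orthonormal columns

P = Q @ Q.T                                # orthogonal projector onto the subspace
P_perp = np.eye(n_tokens) - P              # projector onto the complement subspace

X_masked = P @ X                           # "masking": project token information away
X_removed = P_perp @ X                     # the complement holds the removed information

# The two projectors decompose X exactly: X = P X + (I - P) X,
# so the complement is readily available to guide recovery during unmasking.
assert np.allclose(X_masked + X_removed, X)
# Orthogonal projectors are idempotent and mutually orthogonal:
assert np.allclose(P @ P, P)
assert np.allclose(P @ P_perp, np.zeros_like(P), atol=1e-10)
```

Unlike binary patch masking, every spatial location keeps a partial, locally varying amount of its information, which matches the abstract's description of "masking the entire spatial image area under locally varying masking degrees."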
doi_str_mv 10.48550/arxiv.2310.18737
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2310.18737
language eng
recordid cdi_arxiv_primary_2310_18737
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
title Pre-training with Random Orthogonal Projection Image Modeling
url https://arxiv.org/abs/2310.18737