Improving Contrastive Learning on Visually Homogeneous Mars Rover Images
Contrastive learning has recently demonstrated superior performance to supervised learning, despite requiring no training labels. We explore how contrastive learning can be applied to hundreds of thousands of unlabeled Mars terrain images collected from the Mars rovers Curiosity and Perseverance, and from the Mars Reconnaissance Orbiter. Such methods are appealing because the vast majority of Mars images are unlabeled, as manual annotation is labor intensive and requires extensive domain knowledge. Contrastive learning, however, assumes that any given pair of distinct images contains distinct semantic content. This is an issue for Mars image datasets, as any two Mars images are far more likely to be semantically similar due to the lack of visual diversity on the planet's surface. Assuming that pairs of images are in visual contrast, when they are in fact not, results in pairs that are falsely treated as negatives, harming training performance. In this study, we propose two approaches to resolve this: 1) an unsupervised deep clustering step on the Mars datasets, which identifies clusters of images containing similar semantic content and corrects false negative errors during training, and 2) a simple approach which mixes data from different domains to increase the visual diversity of the total training dataset. Both approaches reduce the rate of false negative pairs, thus minimizing the rate at which the model is incorrectly penalized during contrastive training. These modified approaches remain fully unsupervised end-to-end. To evaluate their performance, we add a single linear layer trained to generate class predictions from these contrastively learned features and demonstrate increased performance compared to supervised models, observing an improvement in classification accuracy of 3.06% when using only 10% of the labeled data.
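The false-negative correction described in the abstract lends itself to a compact sketch. Below is a minimal PyTorch illustration, assuming a SimCLR-style InfoNCE loss; the function name `cluster_masked_info_nce`, the `temperature` value, and the exact masking mechanics are our assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch of cluster-aware contrastive training.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def cluster_masked_info_nce(z1, z2, cluster_ids, temperature=0.1):
    """SimCLR-style InfoNCE loss in which candidate negatives that share a
    cluster pseudo-label with the anchor are excluded, so the model is not
    penalized for keeping semantically similar Mars images close together.

    z1, z2:      (N, D) embeddings of two augmented views of the same batch.
    cluster_ids: (N,) pseudo-labels from an unsupervised clustering step.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    logits = z @ z.t() / temperature                     # scaled cosine similarities

    # Index of each sample's true positive: its other augmented view.
    pos_index = (torch.arange(2 * n, device=z.device) + n) % (2 * n)

    # Exclude self-similarity and same-cluster pairs (the false negatives),
    # but always keep the positive pair itself in the softmax denominator.
    ids = torch.cat([cluster_ids, cluster_ids], dim=0)
    exclude = ids.unsqueeze(0) == ids.unsqueeze(1)       # same-cluster pairs
    exclude.fill_diagonal_(True)                         # self-pairs
    exclude[torch.arange(2 * n, device=z.device), pos_index] = False

    logits = logits.masked_fill(exclude, float('-inf'))
    return F.cross_entropy(logits, pos_index)
```

In this sketch the clustering step supplies `cluster_ids`; pairs drawn from the same cluster are dropped from the denominator, matching the paper's stated goal of reducing the false-negative rate. For evaluation, the paper trains a single linear layer on top of the frozen contrastive features, which in PyTorch amounts to fitting `nn.Linear(feature_dim, num_classes)` with cross-entropy on the labeled subset.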
Saved in:
Published in: | arXiv.org 2022-10 |
---|---|
Main authors: | Ward, Isaac Ronald; Moore, Charles; Pak, Kai; Chen, Jingdao; Goh, Edwin |
Format: | Article |
Language: | eng |
Subjects: | Annotations; Clustering; Curiosity (Mars rover); Datasets; Domains; Image contrast; Mars; Mars rovers; Mars surface; Performance evaluation; Semantics; Supervised learning; Training |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Ward, Isaac Ronald; Moore, Charles; Pak, Kai; Chen, Jingdao; Goh, Edwin |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2725729506 |
source | Free E-Journals |
subjects | Annotations; Clustering; Curiosity (Mars rover); Datasets; Domains; Image contrast; Mars; Mars rovers; Mars surface; Performance evaluation; Semantics; Supervised learning; Training |
title | Improving Contrastive Learning on Visually Homogeneous Mars Rover Images |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T04%3A44%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Improving%20Contrastive%20Learning%20on%20Visually%20Homogeneous%20Mars%20Rover%20Images&rft.jtitle=arXiv.org&rft.au=Ward,%20Isaac%20Ronald&rft.date=2022-10-17&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2725729506%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2725729506&rft_id=info:pmid/&rfr_iscdi=true |