Improving Contrastive Learning of Sentence Embeddings with Focal-InfoNCE

The recent success of SimCSE has greatly advanced state-of-the-art sentence representations. However, the original formulation of SimCSE does not fully exploit the potential of hard negative samples in contrastive learning. This study introduces an unsupervised contrastive learning framework that combines SimCSE with hard negative mining, aiming to enhance the quality of sentence embeddings. The proposed focal-InfoNCE function introduces self-paced modulation terms in the contrastive objective, downweighting the loss associated with easy negatives and encouraging the model to focus on hard negatives. Experimentation on various STS benchmarks shows that our method improves sentence embeddings in terms of Spearman's correlation and representation alignment and uniformity.
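
The abstract describes the focal-InfoNCE objective only at a high level: self-paced modulation terms that downweight easy negatives so that training concentrates on hard ones. As a minimal sketch of that idea, assuming a SimCSE-style in-batch setup, a focal-style reweighting of the standard InfoNCE loss could look like the Python snippet below. The function name focal_infonce_loss, the temperature tau, the exponent gamma, and the exact weighting are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def focal_infonce_loss(z1, z2, tau=0.05, gamma=2.0):
    """Focal-style reweighting of the in-batch InfoNCE loss (illustrative sketch).

    z1, z2: (batch, dim) embeddings of two views of the same sentences,
    e.g. two dropout passes of one encoder, as in unsupervised SimCSE.
    The factor (1 - p_i)**gamma shrinks the loss of sentences whose positive
    pair already wins the softmax easily (large p_i), shifting training
    pressure toward examples surrounded by hard in-batch negatives.
    gamma, tau, and this exact weighting are assumptions, not the paper's formula.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / tau                        # (batch, batch) scaled cosine similarities
    idx = torch.arange(sim.size(0), device=sim.device)
    log_p = F.log_softmax(sim, dim=-1)             # log-probability of each candidate pairing
    log_p_pos = log_p[idx, idx]                    # log-probability of the true (diagonal) positives
    modulation = (1.0 - log_p_pos.exp()) ** gamma  # focal / self-paced per-example weight
    return -(modulation * log_p_pos).mean()

Setting gamma = 0 in this sketch recovers the plain in-batch InfoNCE objective used by unsupervised SimCSE, which provides a direct baseline for isolating the effect of the modulation term.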

Bibliographic Details
Published in: arXiv.org, 2023-10
Main authors: Hou, Pengyue; Li, Xingyu
Format: Article
Language: English
Subjects: Learning; Representations; Sentences
EISSN: 2331-8422
Online access: Full text